
Clustered Data ONTAP Administration

NETAPP UNIVERSITY

Clustered Data
ONTAP Administration
Student Guide
Course ID: STRSW-ILT-D8CADM-REV04
Catalog Number: STRSW-ILT-D8CADM-REV04-SG
Content Version: 1.0

NetApp University - Do Not Distribute

ATTENTION
The information contained in this course is intended only for training. This course contains information and activities that,
while beneficial for the purposes of training in a closed, non-production environment, can result in downtime or other
severe consequences in a production environment. This course material is not a technical reference and should not,
under any circumstances, be used in production environments. To obtain reference materials, refer to the NetApp product
documentation that is located at http://now.netapp.com/.

COPYRIGHT
© 2015 NetApp, Inc. All rights reserved. Printed in the U.S.A. Specifications subject to change without notice.
No part of this document covered by copyright may be reproduced in any form or by any means (graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written permission of NetApp, Inc.

U.S. GOVERNMENT RIGHTS


Commercial Computer Software. Government users are subject to the NetApp, Inc. standard license agreement and
applicable provisions of the FAR and its supplements.

TRADEMARK INFORMATION
NetApp, the NetApp logo, Go Further, Faster, ASUP, AutoSupport, Campaign Express, Customer Fitness, CyberSnap,
Data ONTAP, DataFort, FilerView, Fitness, Flash Accel, Flash Cache, Flash Pool, FlashRay, FlexCache, FlexClone,
FlexPod, FlexScale, FlexShare, FlexVol, GetSuccessful, LockVault, Manage ONTAP, Mars, MetroCluster, MultiStore,
OnCommand, ONTAP, ONTAPI, RAID DP, SANtricity, SecureShare, Simplicity, Simulate ONTAP, Snap Creator,
SnapCopy, SnapDrive, SnapIntegrator, SnapLock, SnapManager, SnapMirror, SnapMover, SnapProtect, SnapRestore,
Snapshot, SnapValidator, SnapVault, StorageGRID, Tech OnTap, and WAFL are trademarks or registered trademarks of
NetApp, Inc. in the United States and/or other countries.
Other product and service names might be trademarks of NetApp or other companies. A current list of NetApp trademarks
is available on the Web at http://www.netapp.com/us/legal/netapptmlist.aspx.

Clustered Data ONTAP Administration: Welcome

2015 NetApp, Inc. This material is intended only for training. Reproduction is not authorized.

NetApp University - Do Not Distribute

TABLE OF CONTENTS
WELCOME .......................................................................................................................................................... 1
MODULE 1: EXPLORING DATA ONTAP STORAGE FUNDAMENTALS .................................................... 1-1
MODULE 2: HARDWARE AND INITIAL SETUP ........................................................................................... 2-1
MODULE 3: INITIAL STORAGE SYSTEM CONFIGURATION ..................................................................... 3-1
MODULE 4: STORAGE MANAGEMENT ....................................................................................................... 4-1
MODULE 5: NETWORK MANAGEMENT ...................................................................................................... 5-1
MODULE 6: IMPLEMENTING NAS PROTOCOLS ........................................................................................ 6-1
MODULE 7: IMPLEMENTING SAN PROTOCOLS ........................................................................................ 7-1
MODULE 8: SNAPSHOT COPIES .................................................................................................................. 8-1
MODULE 9: MANAGING STORAGE SPACE ................................................................................................ 9-1
MODULE 10: DATA PROTECTION .............................................................................................................. 10-1
MODULE 11: MONITORING YOUR STORAGE SYSTEM ........................................................................... 11-1
MODULE 12: UPGRADING AND TRANSITIONING TO CLUSTERED DATA ONTAP .............................. 12-1
BONUS MODULE A: INFINITE VOLUMES .................................................................................................... A-1
BONUS MODULE B: ENGAGING NETAPP SUPPORT ................................................................................ B-1
BONUS MODULE C: ONCOMMAND INSIGHT WALKTHROUGH ............................................................... C-1
BONUS MODULE D: DATA ONTAP PHYSICAL STORAGE MAINTENANCE ............................................ D-1
BONUS MODULE E: CLUSTERED DATA ONTAP ARCHITECTURE.......................................................... E-1


Clustered Data ONTAP Administration

Course ID: STRSW-ILT-D8CADM-REV04

© 2015 NetApp, Inc. All rights reserved. NetApp Proprietary Limited Use Only

CLUSTERED DATA ONTAP ADMINISTRATION


Course ID: STRSW-ILT-D8CADM-REV04
Content Version: 1.0


Logistics

The class: schedule, structure, activities, and participation rules
Resources: materials and support
WebEx (in some classes): collaboration tools, the Participants panel, breakout sessions, and participation rules

LOGISTICS

Learn Together

Learn, share, and participate by using the collaboration tools: participant video, whiteboard, polling, and chat.

LEARN TOGETHER

About This Course

This three-day foundational course is intended as an introduction to clustered Data ONTAP.
This course is targeted at students who are new to NetApp storage or who have no previous clustered Data ONTAP experience.
This course is part of a comprehensive Data ONTAP curriculum that is outlined below.
You examine the features of clustered Data ONTAP through lectures and hands-on exercises.
To help orient you, we compare Data ONTAP 8.3 features with those of Data ONTAP 7-Mode.
The curriculum outline ("you are here" marks this course):

Fundamental: Clustered Data ONTAP Administration (you are here) and Clustered Data ONTAP Installation Workshop
Intermediate: Data ONTAP NFS Administration, Data ONTAP SMB (CIFS) Administration, Data ONTAP SAN Administration, and Data ONTAP Protection Administration
Advanced: Performance, Troubleshooting, Tools, and Enterprise Applications

ABOUT THIS COURSE


Following the Clustered Data ONTAP Administration course (and the Installation Workshop, if required), intermediate courses can be taken to further understand and support protocols, data protection, performance, troubleshooting, management software, and enterprise applications.
Intermediate courses can be taken to match a particular role as needed.
Advanced categories should be taken only after all the intermediate courses have been taken.
NOTE: The advanced categories typically contain one or more courses. Refer to the NetApp University learning maps for more details.


Course Objectives

By the end of this course, you should be able to:
Identify the components of a Data ONTAP storage environment
Install and set up a Data ONTAP cluster
Perform basic administration on a storage system
Configure physical and logical storage
Configure client protocols for a simple environment
Describe Snapshot copies and space consumption in Data ONTAP
Discuss backup methods that are available in Data ONTAP
Describe the process of upgrading Data ONTAP

COURSE OBJECTIVES


Course Agenda

Day 1
Module 1: Exploring Data ONTAP Storage Fundamentals
Module 2: Hardware and Initial Setup
Module 3: Initial Storage System Configuration
Module 4: Storage Management

Day 2
Module 5: Network Management
Module 6: Implementing NAS Protocols
Module 7: Implementing SAN Protocols
Module 8: Snapshot Copies

Day 3
Module 9: Managing Storage Space
Module 10: Data Protection
Module 11: Monitoring Your Storage System
Module 12: Upgrading and Transitioning to Clustered Data ONTAP

COURSE AGENDA


Bonus Modules
Bonus Module A: Infinite Volumes
Bonus Module B: Engaging NetApp Support
Bonus Module C: OnCommand Insight Walkthrough
Bonus Module D: Data ONTAP Physical Storage Maintenance
Bonus Module E: Clustered Data ONTAP Architecture

BONUS MODULES

Ken, Your Guide to This Course

A professional who maintains and provisions storage:
Responds to requests and issues
Follows best practices
A thoughtful person who asks not only how but also why
A troubleshooter who solves storage problems

KEN, YOUR GUIDE TO THIS COURSE

Your Exercise Environment

From the classroom, a desktop, or your laptop, you use Remote Desktop to reach a Windows Server 2012 R2 host and a CentOS 6.5 host, which connect to clustered Data ONTAP 8.3: a one-node cluster and a two-node cluster.

Location                Username
Windows                 Administrator
CentOS                  root
Clustered Data ONTAP    admin (case-sensitive)

Password for all: Netapp123

YOUR EXERCISE ENVIRONMENT

Try This Task

Start your session:
1. Log in to your Windows machine.
2. From the desktop, open PuTTY.
3. Select the cluster1 or cluster2 cluster-mgmt PuTTY session.
4. Click OK.
5. Enter the following credentials:
   Username: admin
   Password: Netapp123

TRY THIS TASK


Throughout this course, pauses enable you to experience the feature that is being discussed in the lecture. These mini-labs are called "Try This." Feel free to participate.
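For readers who prefer a command line to PuTTY, the same session can be scripted. The sketch below only builds the equivalent ssh invocation; the host names (cluster1, cluster2) and the admin account come from the exercise-environment table, and the helper function is illustrative, not part of the lab instructions.

```python
# Build the ssh command equivalent to the preconfigured PuTTY sessions.
# cluster1/cluster2 are the lab cluster-management host names; admin is
# the cluster administrator account from the credentials table.

def cluster_login_command(cluster: str, user: str = "admin") -> str:
    """Return the ssh command for a cluster-management session."""
    if cluster not in ("cluster1", "cluster2"):
        raise ValueError("unknown lab cluster: " + cluster)
    return f"ssh {user}@{cluster}"

print(cluster_login_command("cluster1"))  # ssh admin@cluster1
```

When prompted, enter the same password as in step 5 (Netapp123).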


NetApp University Information Sources

NetApp Support Site: http://support.netapp.com/
NetApp University: http://www.netapp.com/us/servicessupport/university/index.aspx
NetApp University Support: http://netappusupport.custhelp.com

NETAPP UNIVERSITY INFORMATION SOURCES


Module 1
Exploring Data ONTAP Storage Fundamentals

MODULE 1: EXPLORING DATA ONTAP STORAGE FUNDAMENTALS


In this module, you are introduced to Data ONTAP and its place in the storage industry. This module defines
storage virtual machines (SVMs) and describes their role in secure multi-tenancy. You also learn how you can
scale Data ONTAP as your storage needs increase, and you learn about the available administrative interfaces.

Clustered Data ONTAP Administration: Exploring Data ONTAP Storage Fundamentals

Ken Asks a Question

What is clustered Data ONTAP? What exactly can it do for my company?

KEN ASKS A QUESTION


Traditional Storage Design


Inflexible, costly capacity silos
Complexity at large scale
Performance bottlenecks
Insufficient availability

The slide shows four inflexible capacity silos: Windows, a Linux cluster, UNIX, and future needs.

TRADITIONAL STORAGE DESIGN


Traditional, inflexible storage designs that separate data into silos cannot keep pace with the storage needs of
businesses today. Traditional storage does not scale, does not provide nondisruptive operations (NDO), and is
not truly unified.


New Demands on IT: Agility at Scale

                Builders and Operators        Brokers of Services
SLAs            Negotiated                    Service-Driven
Provisioning    Weeks                         Minutes
Availability    Maintenance Windows           No Outage Windows
Economics       Bear the Cost of Ownership    Exploit Data
Infrastructure  Isolated in Silos, Dedicated  Shared, Consolidated

NEW DEMANDS ON IT: AGILITY AT SCALE


IT is evolving to provide a service-oriented approach. Increasingly, the focus is on innovating, improving SLAs, eliminating maintenance windows, and leveraging the advantages of cloud storage. Agility at scale is your storage infrastructure's ability to be flexible and adaptable as it grows. This table compares the agility of builders and operators to that of brokers of services. Clustered Data ONTAP can help you to achieve that higher level of agility.


Data ONTAP Operating System

Number-one storage OS in the world*
Operating modes:
Earlier than 8.3: available as either Data ONTAP operating in 7-Mode or clustered Data ONTAP
8.3 and later: available only as clustered Data ONTAP
Runs on industry-leading FAS high-availability (HA) configurations

* Source: IDC Worldwide Quarterly Disk Storage Systems Tracker

DATA ONTAP OPERATING SYSTEM


The features of the Data ONTAP operating system are myriad. The basic goals of Data ONTAP are IT
efficiency, business agility, and simplified management for enterprise data centers, disaster-recovery centers,
and storage-as-a-service (cloud) implementations.
Data ONTAP is well known for providing flexible, resizable volumes. The features of Data ONTAP also
include simultaneous NAS and SAN client access, HA configurations, and unique storage efficiency
technologies.


Clustered Data ONTAP: Best-in-Class Functionality

Nondisruptive operation (NDO)
Proven efficiency
Seamless scalability

CLUSTERED DATA ONTAP: BEST-IN-CLASS FUNCTIONALITY


NetApp Data ONTAP 8.3 is strictly a clustered Data ONTAP operating system. Data ONTAP 8.3 delivers the
three pillars of best-in-class functionality: NDO, proven efficiency, and seamless scalability.
The Data ONTAP 8.3 operating system provides continuous data access, even when you upgrade storage
controllers or move data within or between clusters. And Data ONTAP 8.3 uses fewer resources than
competing data-copy approaches.
Data ONTAP 8.3 offers a comprehensive group of storage-efficiency capabilities across the entire product
line of storage systems and for a range of protocols. Data ONTAP 8.3 makes it possible to scale capacity,
performance, and operations without compromise, regardless of protocol, system size, or workload.


Clustered Data ONTAP: Data ONTAP 8.3

NAS and SAN:
Familiar Ethernet and FC infrastructures
No special client or host code
SATA, SAS, FC, and solid-state drive (SSD) storage
Third-party arrays with FlexArray Storage Virtualization Software
Flash Cache feature
Flash Pool feature

Multiprotocol access (NFS, CIFS, FC, iSCSI, and FCoE) to virtualized storage and network resources, on NetApp storage with FAS and on NetApp and third-party arrays with FlexArray virtualization.

CLUSTERED DATA ONTAP: DATA ONTAP 8.3


A Data ONTAP cluster uses a standard Ethernet infrastructure for its cluster interconnect. Ethernet is also used for connecting to NAS clients and to iSCSI and FCoE hosts; FC is used to connect to FC hosts. The cluster interconnect provides high scalability with a networking infrastructure that is inexpensive and simple to manage.
NAS clients require no additional code to take advantage of any of the clustered Data ONTAP functionality. SAN clients require an asymmetric logical unit access (ALUA)-enabled multipath I/O driver to take advantage of LUN mobility.
A mix of FC, SAS, SATA, and SSD drives is supported, as per the standard rules of connectivity between shelf and FAS controller.
You can optimize workloads by adding Flash Cache cards for read-intensive I/O. The Flash Pool feature can be used to optimize random read and write performance.
Third-party storage can be used by Data ONTAP clusters that have FlexArray licenses.


Data ONTAP 8.3: Enhancements That Are Discussed in This Course

MetroCluster for clusters
Advanced Disk Partitioning
Support for new NetApp storage systems: FAS8000 series and FAS2500 series
IPspaces
System Setup 3.0
OnCommand System Manager 8.3
Automated nondisruptive upgrade (automated NDU)
Improvements to SnapMirror functionality
Larger Flash Pool cache sizes
Support for using Microsoft Management Console (MMC) to manage files and file shares
Improvements for transitioning data onto clustered Data ONTAP (using NetApp 7-Mode Transition Tool 2.0)
IPv6 enhancements
Selective LUN mapping

DATA ONTAP 8.3: ENHANCEMENTS THAT ARE DISCUSSED IN THIS COURSE


Clustered Data ONTAP 8.3 brings improvements and enhancements to a wide variety of technologies and
features. This course covers the enhancements in this list. Other enhancements are covered in additional
courses in the Data ONTAP 8.3 curriculum.
For an overview of all key features in Data ONTAP 8.3, enroll in the web-based course Clustered Data
ONTAP 8.3: New Features.
Enhancements
MetroCluster for clusters: MetroCluster extends nondisruptive operations across clusters and across data centers. If a failure occurs, the surviving nodes continue to serve data, even if the nodes and their clusters are located in different data centers.
Advanced Disk Partitioning: The Advanced Disk Partitioning feature has a root-data hard-disk drive (HDD)
slicing functionality. The HDD slicing functionality significantly lowers root aggregate disk consumption on
entry-level storage systems. Solid-state drive (SSD) slicing for Flash Pool cache enables customers to group
SSDs into a shared resource that is allocated to multiple Flash Pool aggregates. This functionality spreads the
cost of the parity SSDs over more aggregates, increases SSD allocation flexibility, and maximizes SSD
performance.
Support for new NetApp storage systems: the FAS8000 series and the FAS2500 series.
IPspaces: Data ONTAP 8.3 introduces a new IPspaces feature that enables customers to configure a Data
ONTAP cluster so that clients can access the cluster from separate network domains, even if those domains
use identical private IP address subnet ranges. This new ONTAP 8.3 capability enables separation of client
traffic for privacy and security.

System Setup 3.0: The System Setup tool is designed to improve the initial overall customer experience.
System Setup 3.0 supports Data ONTAP 8.3 and can be used to set up new FAS2200, FAS3200, and
FAS8000 storage systems.
OnCommand System Manager 8.3: System Manager 8.3 provides manageability support for new NetApp
storage platforms, innovative clustered Data ONTAP features, and commonly used customer workflows.
System Manager 8.3 is hosted on a Data ONTAP cluster, and it enhances the simplicity and management of
Data ONTAP 8.3 environments.
Automated NDU: Earlier releases of the Data ONTAP operating system enabled nondisruptive upgrades
(NDUs). But the Data ONTAP 8.3 operating system greatly automates and simplifies the upgrade process.
Whether processed through rolling upgrades or batch upgrades, upgrades to later versions of Data ONTAP
will be simple, nondisruptive, and automated.
Improvements to SnapMirror functionality: Data ONTAP 8.3 operating system provides key data-protection
benefits through expanded SnapMirror fan-in and fan-out ratios and improvements to SnapMirror
compression performance.
Larger Flash Pool cache sizes: The maximum supported Flash Pool cache sizes increase considerably. This increase can help to improve I/O performance where it is most crucial.
Support for using Microsoft Management Console (MMC) to manage files and file shares: With support for
MMC functionality, admins can manage elements of their NetApp storage directly from the Microsoft
Management Console, so they can spend less time managing their data and more time on strategic company
tasks.
Improvements for transitioning data onto clustered Data ONTAP (using NetApp 7-Mode Transition Tool
2.0): The 7-Mode Transition Tool is easy to use and greatly simplifies migration from Data ONTAP operating
in 7-Mode to Data ONTAP 8.3. Enhancements include the migration of MetroCluster configurations,
migration of volumes that contain LUNs, removal of the /vol path from all junctions, and the ability to keep
7-Mode volumes online during and after storage cutover.
IPv6 enhancements: Data ONTAP enables the creation of logical interfaces (LIFs) with IPv6 addresses. Newly supported features for IPv6 include IPspaces, intercluster peering (including SnapMirror over IPv6), MetroCluster, and DNS load balancing.


NetApp FAS Product Line

Support for unified storage:
Industry-leading data management
NAS and SAN client access
Flash-accelerated hybrid storage with a 2x price/performance ratio
Cornerstone for cloud services
Storage-efficiency technologies
IT operations unified with FlexArray virtualization

FAS portfolio:
Model       Maximum Capacity   Maximum Drives   VST Flash
FAS2520     336 TB             84               4 TB
FAS2552     518 TB             144              4 TB
FAS2554     576 TB             144              4 TB
FAS8020     1,920 TB           480              24 TB
FAS8040     2,880 TB           720              48 TB
FAS8060     4,800 TB           1,200            72 TB
FAS8080EX   5,760 TB           1,440            72 TB

NETAPP FAS PRODUCT LINE


NetApp FAS8000 series storage systems are designed to adapt faster to changing business needs while delivering on core IT requirements for uptime, scalability, and cost efficiency. Our most powerful hybrid storage array, the FAS8080EX, is built specifically for business-critical workloads that require massive performance: up to 4 million input/output operations per second (IOPS), multi-petabyte scale, and leading flash integration, including all-flash configurations.
The NetApp FAS2500 series simplifies storage operations and connection to the cloud with a unified architecture that supports a broader set of SAN and NAS workloads, faster than any other entry-class hybrid array.
Data ONTAP 8.3 continues to support the FAS6200 series controllers.


NetApp Unified Storage Architecture

The slide shows a corporate LAN connecting clients to a NetApp FAS system, which provides both NAS protocols (NFS and SMB, for file-level access) and SAN protocols (iSCSI, FC, and FCoE, for block-level access).

NETAPP UNIFIED STORAGE ARCHITECTURE


NetApp Unified Storage Architecture refers to the ability to serve both file and block protocols from the same
storage system. SAN and NAS are the two basic topologies in the storage industry today.
SAN connections provide access to blocks of disk space, which are interpreted as directories and files by the
client OS rather than by the Data ONTAP operating system on the storage. Data ONTAP views the data that
is provided through SAN protocols only as blocks; the OS does not read it as data.
NAS connections to Data ONTAP storage provide file-based storage. Data ONTAP makes data available over
the network by using the NFS and CIFS protocols.
The term unified storage refers to the fact that all NetApp FAS systems operate with the Data ONTAP
operating system. All NetApp FAS storage systems can serve in NAS and SAN topologies simultaneously.
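The file-versus-block distinction above can be made concrete with a toy model. This sketch is purely illustrative (the class names and methods are invented for this example, not a NetApp API): the SAN side stores only numbered, opaque blocks that the client OS interprets, while the NAS side tracks named files on the storage itself.

```python
# Toy contrast between block-level (SAN) and file-level (NAS) access.
# A SAN LUN, as seen by the storage, is just an array of opaque blocks;
# a NAS share serves named files directly. Illustrative names only.

BLOCK_SIZE = 4

class BlockLun:
    """SAN-style container: numbered blocks, no notion of files."""
    def __init__(self, nblocks: int):
        self.blocks = [bytes(BLOCK_SIZE)] * nblocks

    def write(self, lba: int, data: bytes) -> None:
        self.blocks[lba] = data          # the array sees a block, not a file

    def read(self, lba: int) -> bytes:
        return self.blocks[lba]

class NasShare:
    """NAS-style container: the storage itself tracks files by name."""
    def __init__(self):
        self.files = {}

    def write(self, name: str, data: bytes) -> None:
        self.files[name] = data          # the storage knows the file name

lun = BlockLun(8)
lun.write(0, b"abcd")
assert lun.read(0) == b"abcd"            # block 0 is just bytes to the LUN

share = NasShare()
share.write("notes.txt", b"hi")
assert "notes.txt" in share.files        # the share knows file names
```

The point of the sketch: on the SAN side, only the client's file system knows that block 0 belongs to a file; on the NAS side, that knowledge lives in the storage.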


Lesson 1
Data ONTAP Scaling Methods

LESSON 1: DATA ONTAP SCALING METHODS


Ken Asks a Question

How does Data ONTAP scale to meet the needs of my end users?

KEN ASKS A QUESTION


High-Availability Configurations

7-Mode and clustered high availability:
Fault tolerance
Provides takeover within client timeout values
Allows seamless giveback without client disruption

Clustered high availability also provides the ability to perform nondisruptive operations:
Hardware and software upgrades
Hardware maintenance

Storage controllers in a cluster are called nodes.

HIGH-AVAILABILITY CONFIGURATIONS
High-availability (HA) pairs provide hardware redundancy that, along with redundant disk cabling, provides the basis for nondisruptive operations and fault tolerance. Each node can take over its partner's storage and network traffic in case of an outage, and return it when the problem is resolved.
The controllers are connected to each other through an HA interconnect. Each node continually monitors its partner, mirroring the data for each other's nonvolatile memory (NVRAM or NVMEM). If both controllers are in the same chassis, the interconnect is internal and requires no external cabling. Otherwise, external cabling is required to connect the two controllers.
Clusters are built for continuous operation; no single failure of a port, disk, card, or motherboard causes data to become inaccessible in a system. Clustered scaling and load balancing are both transparent.
Clusters provide a robust feature set, including data-protection features such as Snapshot copies, intracluster asynchronous mirroring, SnapVault backups, and NDMP backups.
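The takeover and giveback behavior described above can be sketched as a tiny state machine. This is a conceptual toy, not the Data ONTAP implementation; the class and method names are invented for the example.

```python
# Minimal sketch of HA-pair behavior: each node can take over its failed
# partner's storage and give it back once the partner is healthy again.

class Node:
    def __init__(self, name: str):
        self.name = name
        self.partner = None
        self.healthy = True
        self.served = {name}          # storage this node currently serves

    def takeover(self):
        """Serve the failed partner's storage in addition to our own."""
        assert not self.partner.healthy, "takeover only on partner failure"
        self.served |= self.partner.served
        self.partner.served = set()

    def giveback(self):
        """Return the partner's storage once it is healthy again."""
        assert self.partner.healthy, "giveback only to a healthy partner"
        self.served -= {self.partner.name}
        self.partner.served = {self.partner.name}

a, b = Node("node1"), Node("node2")
a.partner, b.partner = b, a

b.healthy = False                     # node2 fails...
a.takeover()                          # ...node1 serves both nodes' storage
assert a.served == {"node1", "node2"}

b.healthy = True                      # node2 is repaired
a.giveback()                          # seamless giveback
assert a.served == {"node1"} and b.served == {"node2"}
```

In the real system the transition is what clients never notice: takeover completes within client timeout values, so sessions survive the failure.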


Capacity Scaling

Rapid and seamless deployment of new storage, new applications, or both
No required downtime
Movement that is transparent to clients

CAPACITY SCALING
In this example, more capacity is needed for Project B. You can increase the capacity by adding disks to an
HA pair, and then you can transparently move some of the data to the new storage. You can then expand the
amount of storage that is dedicated to Project B.
This expansion and movement are transparent to client machines.


Linear Scaling

Throughput that scales linearly to multiple gigabytes per second
Seamless scalability with up to 69 petabytes (PB) of storage:
24 controller nodes for NAS protocols
8 nodes for SAN and NAS protocols
Transparent load balancing and linear, nondisruptive scaling

NOTE: Not all platform combinations can scale up to 24 nodes.

LINEAR SCALING
Clustered Data ONTAP solutions can scale from 1 to 24 nodes and are primarily managed as one large
system. More importantly, to client systems, a cluster looks like a single file system. The performance of the
cluster scales linearly to multiple gigabytes per second of throughput, and the capacity scales to petabytes.
Clusters are a fully integrated solution. This example shows a 16-node cluster that includes 10 FAS systems
with 6 disk shelves each, and 10 FAS systems with 5 disk shelves each.


Secure Multi-Tenancy
Tenants

Shared infrastructure: hardware and software
Consolidation of diverse requirements
Administrative isolation and control

[Figure: a FAS storage platform with Data ONTAP partitioned into tenants, or virtual storage systems, serving departments, customers, and applications]


SECURE MULTI-TENANCY: TENANTS


Multi-tenancy refers to the ability of a computing, network, or storage infrastructure to support multiple
simultaneous users. These users are referred to as tenants. They can be customers, business units, departments
within an organization, or individual applications.
Many people are familiar with server virtualization, which enables multi-tenancy at the compute and
application layer of the infrastructure. NetApp provides multi-tenancy at the storage layer of the
infrastructure, supporting a variety of server and application environments without the need to install
dedicated storage hardware for each user. This multi-tenancy is implemented by partitioning Data ONTAP
storage platforms into multiple virtual storage systems.
It is important to point out that individual tenants might have a diverse set of needs and might even be
competitors; therefore, secure isolation and administrative control are critically important.


Secure Multi-Tenancy
Storage Containers

Clustered: node SVMs, admin SVMs, and data SVMs

How might you use vFiler units or SVMs in a production environment?

[Figure: a 7-Mode HA pair running Data ONTAP with vFiler0 and three vFiler units, beside a clustered HA pair with an administrative SVM and data storage virtual machines (SVMs) on the cluster interconnect]


SECURE MULTI-TENANCY: STORAGE CONTAINERS


Both 7-Mode and clustered Data ONTAP provide a framework to isolate and secure data for the purpose of
multi-tenancy. 7-Mode implements vFiler units by using MultiStore software. Clusters use storage virtual
machines (SVMs). Both features serve similar functions but are implemented in very different ways.
Both vFiler units and SVMs offer:


Consolidation and ease of management: Application service providers can consolidate your storage needs. You can maintain domain infrastructure while providing multidomain storage consolidation. You can reduce management costs while offering independent, domain-specific storage management.
Security: Security is one of the key concerns when storage is consolidated, either within an organization or by an application service provider. Different vFiler units or SVMs can have different security systems within the same storage system or cluster.
Delegation of management: Role-based access control (RBAC) provides administrator access that is specific to a vFiler unit or an SVM.


Answer This Question

Which is a supported cluster configuration?


1. A 5-node SAN-only cluster
2. A 24-node NAS-only cluster
3. A 6-node cluster that includes a 7-Mode HA pair
4. A 10-node cluster serving both SAN and NAS


ANSWER THIS QUESTION


Lesson 2

Data ONTAP Administrative User Interfaces


LESSON 2: DATA ONTAP ADMINISTRATIVE USER INTERFACES


Ken Asks a Question

Keep talking. How do I work with Data ONTAP?


KEN ASKS A QUESTION


Administrative Interfaces
Methods

7-Mode:
CLI: system> aggr create
GUI
Configuration files

Clustered:
CLI: c1::> aggr create
GUI
Replicated databases (RDBs)


ADMINISTRATIVE INTERFACES: METHODS


Data ONTAP provides several tools and methods for setup, configuration, and administration. Both
operating modes provide a CLI and various UIs. Notice the difference in the command prompts of the two
operating modes, and learn to recognize them. 7-Mode stores configuration data in sequential files. Because
of the distributed nature of clustered Data ONTAP and the need to synchronize state information across all
nodes in the cluster, clustered Data ONTAP stores configuration data in databases. A copy of each database
is stored on every node in the cluster.
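The replicated-database design can be observed from the clustershell. At the advanced privilege level, the cluster ring show command lists the RDB units and the replica that each node holds (an illustrative session; the cluster name c1 is hypothetical):

```
c1::> set -privilege advanced
c1::*> cluster ring show
c1::*> set -privilege admin
```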


Administrative Interfaces
Shells

7-Mode: admin shell, system shell

Clustered: clustershell, node shell, system shell

NOTE: Operations should be performed at the highest available level.



ADMINISTRATIVE INTERFACES: SHELLS


7-Mode:
The primary 7-Mode CLI is called the admin shell. The admin shell is accessed through the console or
through Ethernet. No more than two sessions are allowed, one from the console and one from Ethernet
Secure Shell (SSH) or Telnet. The operating system determines the number of sessions that are allowed by
default. Data ONTAP 7-Mode storage systems default to one session and one user at a time. Data ONTAP 8
storage systems default to two sessions and up to two users at a time. Creating additional sessions generates
an error.
Clustered:
The best way to use the CLI to manage a cluster is to use SSH to log in to the clustershell with the cluster
management LIF IP address. The default shell is the clustershell, which is the shell that you should always use
unless you are instructed otherwise by NetApp technical support. The clustershell can also be accessed by the
system console on any node in the cluster.
You access the node shell from the clustershell. The scope of access is limited to one node at a time, but you
can access any node in the cluster. You can enter the node shell for an interactive session or execute
individual commands directly from the clustershell. The clustershell command for accessing the node shell is
system node run.
Use the system shell to access the BSD environment that the Data ONTAP operating system runs in. You
should access the system shell only under the supervision of NetApp technical support. You can access the
system shell only as the diag user and only from within the clustershell. Root access to the system shell is
not available from Data ONTAP clusters. Use the security login unlock -username diag
clustershell command to unlock the diag user. Then use the security login password -username diag command to set the diag user password.
There are few reasons to use node shell and system shell in clustered Data ONTAP. They should be used only
with help from NetApp technical support.
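The three access paths described above can be sketched as a short session (illustrative only; the cluster name c1 and node name c1-01 are hypothetical):

```
ssh admin@<cluster-mgmt-LIF>                   (log in to the clustershell)
c1::> system node run -node c1-01              (enter an interactive node shell session)
c1-01> exit                                    (return to the clustershell)
c1::> system node run -node c1-01 -command sysconfig   (run a single node shell command)
c1::> set -privilege diag
c1::*> systemshell -node c1-01                 (system shell; only with NetApp technical support)
```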

Answer This Question


Identify the clustershell prompt.

1. cluster>
2. x::storage aggregate*>
3. cluster#
4. ::cluster999>


ANSWER THIS QUESTION


Administrative Interfaces
Privilege Levels

Data ONTAP 7-Mode


system> priv set advanced
Warning: These advanced commands are potentially dangerous;
use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
system*> priv set admin
system>

Clustered Data ONTAP


c1::> set -priv advanced
Warning: These advanced commands are potentially dangerous;
use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
c1::*> set admin
c1::>


ADMINISTRATIVE INTERFACES: PRIVILEGE LEVELS


Both modes feature privilege levels to force administrators to be mindful of the commands that can be
dangerous to the health of the storage system. The admin privilege level is used for most tasks. Advanced and
diagnostic levels are reserved for more risky functions.
Data ONTAP provides multiple sets of commands that are based on privilege level. 7-Mode offers
administrative and advanced levels. Clustered Data ONTAP adds a third level, which is called diag. 7-Mode uses the priv set command, and clustered Data ONTAP uses the set command, to change the privilege level.
The administrative level provides access to commands that are sufficient for managing your storage system.
The advanced and diag levels provide access to these same administrative commands, plus additional
troubleshooting and diagnostic commands.
Advanced and diag-level commands should be used only with the guidance of NetApp technical support.
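The diag level is entered the same way as the advanced level shown on the slide (an illustrative session; the cluster name c1 is hypothetical):

```
c1::> set -privilege diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
c1::*> set -privilege admin
c1::>
```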


Clustershell
Command Scope
c1::> storage aggregate
c1::storage aggregate> modify


CLUSTERSHELL: COMMAND SCOPE


Typing the first two levels of the command directory puts you in that command directory. From there, you
can type a command from that level or type a fully-qualified command from a different command directory.
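A navigation sequence like the one described above might look like this (illustrative; the cluster name c1 is hypothetical):

```
c1::> storage aggregate                        (enter the command directory)
c1::storage aggregate> show                    (run a command at this level)
c1::storage aggregate> network interface show  (fully qualified command from another directory)
c1::storage aggregate> top                     (return to the root of the hierarchy)
c1::>
```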


Clustershell
Question Mark
c1::> storage aggregate
c1::storage aggregate> modify ?
[-aggregate] <aggregate name>

Aggregate

[ -disktype|-T {ATA | BSAS | FCAL | FSAS | LUN | MSATA | SAS

| SATA | SSD | VMDISK} ]

Disk Type
[ -free-space-realloc {on|off|no_redirect} ]

Free Space Reallocation

[ -ha-policy {sfo|cfo} ]

HA Policy

[ -percent-snapshot-space <percent> ]

Space Reserved for Snapshot Copies

[ -space-nearly-full-threshold-percent <percent> ]
Aggregate Nearly Full Threshold Percent
[ -space-full-threshold-percent <percent> ]

Aggregate Full Threshold Percent

[ -hybrid-enabled {true|false} ]

Hybrid Enabled

[ -force-hybrid-enabled|-f [true] ]

Force Marking of Aggregate as Hybrid Enabled

[ -maxraidsize|-s <integer> ]

Max RAID Size

...

c1::storage aggregate> modify



CLUSTERSHELL: QUESTION MARK


At the command line, press the question mark (?) key to show the command directories and commands that
are available at that command level.


Try This Task


In your lab kit, log in to cluster 2.
Enter: ?
Is a show command available?

Enter: cluster show


How many nodes does this cluster have?
What is the status of these nodes?

Enter: cluster
What command scope are you in now?
Is there a show subcommand?

Enter: ?
Is a show command available?

Enter: show
How do you exit to the root command scope?

TRY THIS TASK


Clustershell
Tab Completion
c1::storage aggregate> modify [Tab]
aggr0_c_01 aggr0_c_02 aggr_NFS1 aggr_CIFS1

c1::storage aggregate> modify -aggregate aggr_NFS1 -state online
c1::storage aggregate>


CLUSTERSHELL: TAB COMPLETION


Press the Tab key to show directories, commands, and parameters that are available or to automatically
complete a command (or a portion of a command). You can also use the Tab key to complete unambiguous
substrings of commands, parameters, and values.


Clustershell
Scope Return
c1::storage aggregate> ..
c1::storage> top
c1::>


CLUSTERSHELL: SCOPE RETURN


Use the .. command to move up one level in the command hierarchy. Use the top command to move to
the top level of the command hierarchy.


Clustershell
Additional Features

The search path enables you to run commands out of context:
c1::> disk show = storage disk show

Abbreviation is allowed (shortest unambiguous sequences of characters):
c1::> storage aggregate create = sto a c or aggr create
c1::> network interface show = n i show

You can run queries with patterns and wildcards:
c1::> storage disk show -physical-size >500gb

Use the up arrow key to review command history.


CLUSTERSHELL: ADDITIONAL FEATURES


You can abbreviate commands and parameters in the clustershell if the abbreviation is unambiguous in the
current context. You can also run commands out of context if the command is not available in any other
context.


Answer This Question


Identify the CLI prompt for Data ONTAP 7-Mode.

1. cluster>
2. x::storage aggregate*>
3. cluster#
4. ::cluster999>


ANSWER THIS QUESTION


OnCommand System Manager


Easily manage storage, configuration, diagnostics, system, and performance.

[Screenshots: System Manager for 7-Mode and for clustered Data ONTAP]


ONCOMMAND SYSTEM MANAGER


OnCommand System Manager is the tool for everyday storage management tasks. Use it to manage your
storage systems and HA configurations; to provision storage for NAS and SAN environments; to configure
protocols such as CIFS, NFS, iSCSI, FC, and FCoE; to create and manage SnapMirror and SnapVault
relationships; and to create and manage SVMs.
At a glance, you can view properties of the cluster, system alerts, alarms, and storage capacity information.
The System Manager interface simplifies complex cluster configurations, such as: creating SVMs; creating
management LIFs and data LIFs; and using cluster peering (an operation that enables SVMs from different
clusters to become SnapMirror or SnapVault partners).


System Manager 8.3


Enhancements to Streamline Administration

Support for clustered Data ONTAP 8.3 networking objects
Storage pools
Protection workflow support for version-flexible SnapMirror technology
Service Processor management support
SVM workflows
Summarizing disks
IPv6 support
On-box operation

OnCommand System Manager 3.1.x is still a supported off-box option.

SYSTEM MANAGER 8.3: ENHANCEMENTS TO STREAMLINE ADMINISTRATION


System Manager 8.3 provides common customer workflows, manageability support for new NetApp storage
platforms, and the innovative clustered Data ONTAP features that are shown on this slide. System Manager
8.3 enhances the simplicity and management of Data ONTAP 8.3 environments.
The OnCommand System Manager 3.1.x series continues as an off-box option for managing Data ONTAP
operating in 7-Mode and mixed-mode environments. This series will continue to be supported for as long as
Data ONTAP operating in 7-Mode is supported.


OnCommand Unified Manager


ONCOMMAND UNIFIED MANAGER


OnCommand Unified Manager provides automated monitoring, provisioning, and protection for NetApp
storage systems. You can script commands by using the CLI of the Unified Manager server software that runs
on a server that is separate from the cluster.
You can use Unified Manager to monitor tasks on nodes in a Data ONTAP storage system or cluster. Unified
Manager cluster monitoring, alerts, and reporting help you adopt clustered Data ONTAP scaling technology.
Unified Manager helps you review objects such as clusters, SVMs, and large aggregates. In addition, the
Unified Manager Performance Advisor capability has been enhanced for clustered Data ONTAP.
Performance Advisor can track IOPS, latency, and throughput for physical and logical cluster objects.


Topic for Discussion

Many professionals have a strong preference for either the CLI or GUI.
Which do you prefer?
Why?


TOPIC FOR DISCUSSION


What Ken Has Learned in This Module

I've learned about the benefits of clustered Data ONTAP, about unified storage and seamless scaling, and about some of the GUIs that are available to me when I work on my cluster. I'm excited to learn more.


WHAT KEN HAS LEARNED IN THIS MODULE


References
http://www.NetApp.com
https://Twitter.com/NetApp
https://www.facebook.com/NetApp
http://www.youtube.com/user/NetAppTV

Clustered Data ONTAP Storage Platform Mixing Rules


NetApp Hardware Universe:
http://hwu.netapp.com/home.aspx


REFERENCES


Exercise

Module 1: Exploring Data ONTAP Storage Fundamentals

Time Estimate: 30 minutes


EXERCISE
Please refer to your exercise guide.


Module 2

Hardware and Initial Setup


MODULE 2: HARDWARE AND INITIAL SETUP


In this module you discuss supported hardware configurations, shelf and network cabling, and booting a FAS
controller that is running Data ONTAP. You also learn how the System Setup 3.0 utility performs initial
cluster configuration.


Ken Asks a Question

I've learned that clustered Data ONTAP can scale to multiple nodes. What system configuration options are available to me?


KEN ASKS A QUESTION


New NetApp FAS Platforms


NetApp FAS platforms for midsize to large businesses:

The FAS2200 and FAS2500 series: midsize businesses


The FAS6200 series: enterprise storage
The FAS8000 series: performance storage

[Photos: FAS2500 and FAS8000 systems]


NETAPP FAS PLATFORMS


NetApp storage systems offer business agility, application uptime, and simplicity of management.
The FAS2200 and FAS2500 series: The NetApp FAS2200 series and the new FAS2500 series for midsize
businesses and enterprises enable you to start right, keep it simple, and grow smart. You can manage your
growing, complex data in dispersed departments or remote locations and add functionality easily with a cost-effective package.
The FAS6200 series: These are the key points to remember about the FAS6200 series:

This series is built for large data centers.


The FAS6200 is scalable to a capacity of over 69 petabytes (PB).
For enterprise-level SAN and NAS performance from a unified platform, a FAS6200 cluster can process
over 1.5 million input/output operations per second (IOPS).
The FAS6200 series offers flash integration and can automatically adapt to changing workloads.

The FAS8000 series: These new midrange storage platforms enable flash and clustering to improve IT
performance and agility. The FAS8000 series offers up to 80% more performance and 100% more capacity
than the FAS3200 series. The FAS8000 series is flash-ready, with up to 18 terabytes (TB) of flash to increase
performance. The FAS8000 series uses scalable, cluster-ready architecture to meet new business demands.


FAS Configurations
Single-chassis standalone configuration

High-availability (HA) configuration

Single-controller configuration with an I/O expansion module (IOXM)

[Figure: rear views of FAS6200 series controllers for each configuration, showing the onboard Ethernet (e0a-e0f) and SAS (0a-0d) ports]

FAS CONFIGURATIONS
This slide shows an example of the FAS6200 series controller configuration. Although configurations can
vary with models, they have a few things in common:

Single-chassis configuration:

Standalone configuration with controller and blank panel: good for single-node configurations where high
availability is not a requirement, or where the controllers of an HA pair are spread out in the data center.
Standalone configuration with controller and I/O expansion module (IOXM): good for configurations that
require additional PCIe cards.

NOTE: This configuration is available only with the FAS3200 and FAS6200 series.

High-availability (HA) configuration with two controllers: Sometimes called HA-in-a-box; the HA
interconnect is handled within the chassis connection (rather than over external cables).

Dual-chassis HA configuration, which requires MetroCluster software


NOTE: This configuration is available only with the FAS8000 series.


NetApp FlexArray Platforms


In the FAS6000 and FAS8000 platforms, FlexArray Storage Virtualization Software uses Data ONTAP to manage third-party RAID arrays.

[Figure: enterprise and departmental SAN and NAS clients accessing volumes and aggregates on a NetApp controller that virtualizes a third-party (Hitachi Data Systems) RAID array]


NETAPP FLEXARRAY PLATFORMS


Hardware platforms for NetApp FlexArray Storage Virtualization software enable you to connect third-party
RAID arrays to NetApp storage controllers and use Data ONTAP to manage the third-party RAID arrays.
You can:


Integrate data protection between different types of storage arrays from different vendors
Use storage virtualization to achieve higher use and scalability
Create NAS and SAN gateways for your current storage arrays for NFS, CIFS, iSCSI, FC, and FCoE
network protocols


FAS Disk Shelves


Size:
DS424x (DS4243, DS4246): 4U, 24 disks
DS4486: 4U, 48 disks (tandem drive carriers)
DS2246: 2U, 24 disks

Disk speeds:
DS4243, DS4246, DS4486: 15K SAS, 7.2K SATA, SSD
DS2246: 10K SAS, SSD


FAS DISK SHELVES


NetApp storage supports a variety of disk shelves and storage media.

DS2246 is a 2U, 24-disk shelf with dual 6-Gbps I/O modules.


Supports performance (10K RPM) hard-disk drives (HDDs), self-encrypting HDDs, and ultraperformance
solid-state drives (SSDs)
Supports full-shelf SSDs, full-shelf HDD, or mixed HDDs and SSDs

DS4246 is a 4U, 24-disk shelf with dual 6-Gbps I/O modules.


Supports high-capacity (7.2K RPM) HDDs, self-encrypting HDDs, and ultraperformance SSDs
Supports full-shelf SSDs, full-shelf HDD, or mixed HDDs and SSDs

DS4486 is a 4U, 48-disk shelf (using tandem drive carriers) with dual 6-Gbps I/O modules.
Supports high-capacity (7.2K RPM) HDDs

DS4243 is a 4U, 24-disk shelf with dual 3-Gbps I/O modules.


Supports performance (15K RPM) HDDs

NOTE: Although Data ONTAP supports FC shelves, they are no longer sold with new systems.


FAS Configurations
Single-Node Cluster
Advantages:
Low cost
Controller and disk shelves have redundant fans and power supplies to keep hardware running

Disadvantages:
Storage system is the single point of failure
Loss of a controller or cable could lead to loss of data or loss of data access
Does not enable nondisruptive operations (NDO)

[Figure: a single controller (Node 1) attached to its disk shelves over SAS or FC (simplified)]
* Multipath HA (dual-path cabling) is recommended.


FAS CONFIGURATIONS: SINGLE-NODE CLUSTER


A single point of failure:

Represents the failure of a single hardware component that can lead to loss of data access or loss of data
Does not include multiple or rolling hardware errors, such as triple disk failure or dual disk-shelf module
failure

All hardware components that are included with the storage system have demonstrated very good reliability
with low failure rates. If a hardware component such as a controller or adapter fails, you can replace the failed
component, but client applications and users will not be able to access their data until the system is repaired.


FAS Configurations
Switchless Two-Node Cluster
Advantages:
Fault tolerance: When a node fails, a takeover occurs and the partner node continues to serve the failed node's data
NDO: During maintenance and upgrades, takeover occurs while the partner is being upgraded

Disadvantage:
No shelf-loss protection (data duplication) is included

[Figure: an HA pair of two controllers, each attached to disk shelves over SAS or FC (simplified), connected by the HA interconnect and by the cluster interconnect (clustered Data ONTAP only)]


FAS CONFIGURATIONS: SWITCHLESS TWO-NODE CLUSTER


An HA pair is two storage systems (nodes) whose controllers are connected to each other directly. In this
configuration, one node can take over its partner's storage to provide continued data service if the partner goes
down.
You can configure the HA pair so that each node in the pair shares access to a common set of storage,
subnets, and tape drives, or each node can own a distinct set of storage.
The controllers are connected to each other through an HA interconnect. This connection, and the redundant
connections to disks, enable one node to serve data that resides on the disks of its failed partner node. Each
node continually monitors its partner, mirroring the data for each other's nonvolatile memory (NVRAM or
NVMEM). The interconnect is internal and requires no external cabling if both controllers are in the same
chassis.
Takeover is the process in which a node takes over the storage of its partner. Giveback is the process in which
that storage is returned to the partner. Both processes can be initiated manually or configured for automatic
initiation.
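Takeover and giveback can also be driven manually from the clustered Data ONTAP CLI. The following is a sketch only; the cluster and node names are placeholders:

```
cluster1::> storage failover show
cluster1::> storage failover takeover -ofnode cluster1-02
cluster1::> storage failover giveback -ofnode cluster1-02
```

Running storage failover show before and after the operation confirms whether each node is ready for, in, or recovered from takeover.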


Clustered Data ONTAP Configurations


Switched Clusters
Node 1

Node 2

Node 3

Node 4

Switched clusters enable you to scale beyond two nodes.
Disk Shelves

Disk Shelves

SAS or FC (simplified)
HA interconnect
Cluster interconnect

Disk Shelves

Disk Shelves

Cluster Interconnect
Switches


CLUSTERED DATA ONTAP CONFIGURATIONS: SWITCHED CLUSTERS


Clusters of more than two nodes use a supported switch to attach each node to the cluster interconnect. Up to
24 nodes are supported in a cluster, depending on the hardware platform, the protocols that are in use, and the
type of switch that is used for the cluster interconnect.
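Cluster membership and health can be verified from any node with the cluster show command; in a four-node switched cluster, the output looks approximately like this (node names are illustrative):

```
cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
cluster1-01           true    true
cluster1-02           true    true
cluster1-03           true    true
cluster1-04           true    true
```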


Cluster Interconnect Switches


Supported cluster interconnect switches:

NetApp CN1610:
Sixteen 10-GbE enhanced small form-factor pluggable (SFP+) Ethernet ports
Supports up to 12-node configurations
Dual fans and power supplies
Four Inter-Switch Links (ISLs) between switches

Cisco Nexus 5596:
48 (base) 10-GbE SFP+ Ethernet ports
Optional expansion module up to 96 ports
Dual fans; one power supply
Eight ISLs between switches


CLUSTER INTERCONNECT SWITCHES


To provide a complete storage solution, the Data ONTAP 8 operating system introduced a cluster
interconnect and management switch-configuration standard to which all clustered Data ONTAP deployments
are required to adhere. Clustered Data ONTAP 8.3 continues this policy to provide the best solution in terms
of reliability and serviceability.
NetApp CN1610:

Up to eight nodes are supported in 8.1.1 or up to 12 nodes in 8.2 or later.


Two switches per cluster are required for redundancy and bandwidth.
Four ISLs are required between the switches.

Cisco Nexus 5596:

This switch is recommended for clusters larger than 12 nodes or platforms that support four cluster
interconnects per node.
Two switches per cluster are required for redundancy and bandwidth.
Eight ISLs are required between the switches.

Although they are supported, the Cisco Nexus 5010 and Nexus 5020 switches are no longer being sold. The
NetApp CN1610 and Cisco Nexus 5596 replace the 5010 and 5020 respectively. NetApp clusters do not
support the Cisco Nexus 5548.
For switch setup and configuration information:
Clustered Data ONTAP Switch Setup Guide for Cisco Switches
CN1601 and CN1610 Switch Setup and Configuration Guide


7-Mode Configurations
Mirrored HA Pairs and MetroCluster Software

Advantages:
Mirrored HA pairs maintain two complete copies of all mirrored data
MetroCluster software provides failover to another site that contains a nearly real-time copy of the data at the failed site

Disadvantages:
Cost is higher
Each node is managed separately and has its own disks, resources, and mirrors of the other node

[Diagram: Controller 1 and Controller 2 in an HA pair (SAS or FC, simplified; HA interconnect), each with disk shelves that hold the node's own data and a mirror of the partner node's data]


7-MODE CONFIGURATIONS: MIRRORED HA PAIRS AND METROCLUSTER


SOFTWARE
Mirrored HA pairs provide high availability through failover, just as standard HA pairs do. Additionally,
mirrored HA pairs maintain two complete copies of all mirrored data. These copies are called plexes and are
continually and synchronously updated every time that Data ONTAP writes to a mirrored aggregate. The
plexes can be physically separated to protect against the loss of one set of disks or array LUNs.
Mirrored HA pairs use SyncMirror technology. For more information about SyncMirror, see the Data ONTAP
Data Protection Online Backup and Recovery Guide for 7-Mode.
Mirrored HA pairs do not provide the capability to fail over to the partner node if one node is completely
lost. For example, if an entire node loses power, including its storage, you cannot fail over to the partner
node. For this capability, use a MetroCluster configuration.
In situations such as prolonged power outages or natural disasters, you can use the optional MetroCluster
feature of Data ONTAP to provide a quick failover to another site that contains a nearly real-time copy of the
data at the disaster site.


Clustered Data ONTAP Configurations


MetroCluster for Clusters

Four nodes, in a fabric-attached configuration, only in Data ONTAP 8.3

Cluster A in
Data Center A

Cluster B in
Data Center B

MetroCluster technology enables business continuity and continuous availability beyond the data center.

MetroCluster


CLUSTERED DATA ONTAP CONFIGURATIONS: METROCLUSTER FOR CLUSTERS


With the release of Data ONTAP 8.3, the MetroCluster feature becomes available to the clustered Data
ONTAP operating system. This feature extends NDO across clusters and across data centers. With the
MetroCluster feature of clustered Data ONTAP 8.3, if a failure occurs, the surviving nodes continue to serve
data, even if the nodes and their clusters are located in different data centers.
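MetroCluster operations in clustered Data ONTAP 8.3 live under the metrocluster command directory. As a hedged sketch (cluster names are placeholders, and healing steps are required between switchover and switchback):

```
clusterA::> metrocluster show
clusterA::> metrocluster switchover
clusterA::> metrocluster switchback
```

The switchover command moves service to the surviving site; switchback returns service after the failed site has been repaired and healed.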


FAS Platforms and FlexPod Solutions


FlexPod solutions are datacenter-tested and ready.
FlexPod solutions combine:
NetApp storage systems
Application servers
Cisco fabric network


FAS PLATFORMS AND FLEXPOD SOLUTIONS


The FAS6200 series is certified by Cisco to be integrated into the FlexPod solution. The FlexPod solution
portfolio combines NetApp storage systems, Cisco Unified Computing System (UCS) servers, and Cisco
Nexus fabric into a single, flexible architecture. FlexPod solutions are validated and tested to reduce risk and
increase IT efficiency.


Hardware Universe
Hardware Universe is:
A web-based tool for
employees, partners,
and customers
A consolidated hardware
specifications tool for:

14

Controllers
Adapters
Shelves
Disks
Cabinets
Cables


HARDWARE UNIVERSE
Hardware Universe (HWU) is a web-based tool that replaces the System Configuration Guide. HWU provides
you with a visual presentation of the complete NetApp line of hardware products. HWU provides a powerful
configuration resource for NetApp employees, partners, and customers by consolidating hardware
specifications for the following products and components:

Controllers: FAS and V-Series


Adapters
Shelves
Drives
Cabinets
Cables

You can make a side-by-side comparison of the various controllers in terms of system capacity, memory size,
maximum spindle count, and other features so that you can decide which controllers will meet your
requirements.
In addition, you can save your personal queries for re-use, or draw from your last 20 queries. This
functionality is a convenient way to revisit your favorite configurations over time.
Hardware Universe is also available for iOS and Android mobile phones and tablets. To download the mobile
HWU apps, visit http://app.netapp.com/public/hardware.html.


Topic for Discussion


How much resiliency do you or your application require?


TOPIC FOR DISCUSSION


Ken Asks a Question

I have my hardware racked and stacked. I know that Data ONTAP comes installed on new hardware. But how do I get a Data ONTAP cluster up and serving data?


KEN ASKS A QUESTION


Basic Steps for Setting Up a System


1. Connect controllers, disks, and cables.

This Module

2. Set up and configure nodes.


3. Install software onto nodes. (Software is preinstalled on most systems.)
4. Initialize disks (not necessary on new clusters).
5. System setup: use System Setup 3.0.
6. Complete the initial configuration.
7. Create data aggregates.
8. Create a data storage virtual machine (a data SVM).
9. Create data volumes and protocol configuration.


BASIC STEPS FOR SETTING UP A SYSTEM


The process of starting the system setup for 7-Mode is very similar to the process for clustered Data ONTAP
because the two modes use the same hardware. You must connect the controller, disks, and cables first.
Powering on should start with the networking, then disk shelves, and finally the controllers.

If the system is new and does not require a software upgrade (or downgrade), simply start the setup
process.
If the system requires an upgrade or downgrade for any reason, install the software first. After the
software installation is complete, initialize the disks. (This initializing will take a while.)

When the system boots completely, you will run a setup procedure to set up and configure the system or
cluster. After the configuration is complete, you can create storage resources.
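The storage-resource steps (7 through 9 on the slide) map to cluster shell commands such as the following. The names, disk count, and size are examples, and the exact required parameters of vserver create vary by Data ONTAP release:

```
cluster1::> storage aggregate create -aggregate aggr1 -node cluster1-01 -diskcount 24
cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1
            -rootvolume-security-style unix
cluster1::> volume create -vserver svm1 -volume vol1 -aggregate aggr1 -size 100GB
            -junction-path /vol1
```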


Hardware Setup
Connect:

HA interconnect
Controllers to disk shelves

Controllers to networks
Any tape devices
Controllers and disk shelves to power


HARDWARE SETUP


Connect controllers to disk shelves. Verify that shelf IDs are set properly.
If required for your controller type, connect NVRAM HA cable between partners. The connections can be
through the chassis, 10-gigabit Ethernet (10-GbE), or InfiniBand, depending on your storage controllers.
Connect controllers to networks.
If present, connect any tape devices. (This task can be performed later.)
Connect controllers and disk shelves to power.


HA Interconnect Links
Can be either:
External cables (dedicated 10-GbE or InfiniBand)
Internal interconnect (over the backplane in the chassis)

Primarily used to mirror NVRAM


Provide a channel for certain types of communication traffic
between the nodes in a pair:

19

Failover
Disk firmware
Heartbeats
Version information
Virtual Target Interconnect (VTIC) for FC SAN


HA INTERCONNECT LINKS
HA interconnects connect the two nodes of each HA pair for all controllers. These connections are internally
provided over the backplane in the chassis of a dual-controller configuration. For chassis with single
controllers, a dedicated InfiniBand or 10-GbE link is required, depending on the model and enclosure. Visit
the NetApp Support site to see the appropriate hardware configuration guide for your model storage
controller.
The following types of traffic flow over the HA interconnect links:

Failover: The directives are related to performing storage failover (SFO) between the two nodes,
regardless of whether the failover is:

Negotiated (planned and in response to an administrator request)


Not negotiated (unplanned and in response to an improper system shutdown or booting)

Disk firmware: Nodes in an HA pair coordinate the update of disk firmware. While one node is updating
the firmware, the other node must not perform any I/O to that disk.
Heartbeats: Regular messages demonstrate availability.
Version information: The two nodes in an HA pair must be kept at the same major and minor revision
levels for all software components.

For 7-Mode only, the HA interconnect also provides Virtual Target Interconnect (VTIC), which connects the
two nodes in an HA pair. In FC SAN environments, VTIC enables LUNs to be served through target ports on
both nodes. For example, the output of igroup show -v displays the FC initiator that is logged in on
physical ports and a port that is called vtic.


Disk-Shelf Cabling

[Diagram: Controller 1 and Controller 2 of an HA pair, cabled with SAS, ACP, and HA connections to two stacks of disk shelves; Stack 1 starts with shelf ID 10, and Stack 2 starts with shelf ID 20]

Disk shelf SAS cables can be either:
Copper (Cu) in lengths of 1m, 2m, 3m, and 5m
Optical in lengths of 15m, 30m, and 50m (more options available)

Alternate Control Path (ACP) is a protocol to manage and control SAS-connected shelves


DISK-SHELF CABLING
This diagram combines SAS and ACP shelf cabling and the HA cabling for controller 1 and controller 2 of an
HA pair. For a complete course in Data ONTAP Cluster-Mode cabling, see the Data ONTAP 8 Cabling
course (STRHW-WBT-DOTCABL).
ACP is a protocol that enables Data ONTAP to manage and control a SAS-connected storage shelf
subsystem. It uses a separate network (an alternate path) from the data path, so management communication
is not dependent on the data path being intact and available.
You do not need to actively manage the SAS-connected storage shelf subsystem. Data ONTAP automatically
monitors and manages the subsystem without operator intervention. However, you must provide the required
physical connectivity and configuration parameters to enable the ACP functionality.
NOTE: You can install SAS-connected storage shelves without configuring ACP. However, for maximum
storage availability and stability, you should always have ACP configured and enabled.
After you enable ACP, you can use the storage show acp and acpadmin list_all commands
(available through the node shell in clustered Data ONTAP) to display information about your ACP
subsystem.
Because ACP communication is on a separate network, it does not affect data access.
NOTE: Although FC shelves are supported in Data ONTAP, they are no longer sold with new systems.
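In clustered Data ONTAP, the node-shell ACP commands mentioned above can be reached from the cluster shell with system node run; for example (the node name is a placeholder):

```
cluster1::> system node run -node cluster1-01 -command "storage show acp"
cluster1::> system node run -node cluster1-01 -command "acpadmin list_all"
```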


Disk-Shelf Best Practices


Single-Controller Configuration

Best practices:

Single-controller configuration must use a dual path
FAS22xx systems with external storage must use a dual path
Dual path is recommended for greater resiliency


DISK-SHELF BEST PRACTICES: SINGLE-CONTROLLER CONFIGURATION


Disk-Shelf Best Practices


Multipath HA Configuration

Best Practices:
HA pair configuration must use multipath high availability (MPHA)
FAS22xx systems with external drives must use MPHA

DISK-SHELF BEST PRACTICES: MULTIPATH HA CONFIGURATION


Disk-Shelf Best Practices


ACP Configuration
Best Practices:

ACP protocol is recommended
ACP enables Data ONTAP to manage and control the disk-shelf management system
Must use CAT-6 Ethernet cables and RJ-45 connectors
One Ethernet port per controller is required
Configuration is cabled by using the daisy-chain method


DISK-SHELF BEST PRACTICES: ACP CONFIGURATION


Topics for Discussion

What are the advantages of using multipath HA (MPHA) cabling instead of single-path cabling?

Why do I need ACP? The FC-connected shelves never needed an alternate path.


TOPICS FOR DISCUSSION


Multipath HA cabling provides a second physical path to all shelves from the controller. This second path is
useful in case of a cable, connector, or port failure.
ACP enables enhanced disk-shelf management and monitoring by the Data ONTAP operating system. This
enhanced managing and monitoring were not available with the old FC-style shelves.


Networks
Cluster Interconnect*

[Diagram: a four-node cluster (node1 through node4) attached to three distinct networks: a cluster interconnect built on two Cisco Nexus 5010 switches with ISLs between them, a data network (Ethernet, FC, or Converged), and a management network]

*Required for clustered Data ONTAP



NETWORKS
Data ONTAP operating in 7-Mode and clustered Data ONTAP begin to differ the most when it comes to
networking. Because clustered Data ONTAP is essentially a cluster of HA pairs, a cluster network or cluster
interconnect is needed for all the nodes to communicate with each other.
On this slide you see a four-node cluster and three distinct networks. 7-Mode and clustered Data ONTAP both
require data and management connectivity, which could coexist on the same network.
In multinode configurations, clustered Data ONTAP also requires a cluster interconnect for cluster traffic. In a
two-node configuration, the cluster interconnect can be as simple as two cables between the nodes, or a
cluster network if expansion is desired. In clusters of more than two nodes, a cluster network is required.
Single-node clusters do not require a cluster interconnect unless the cluster is expanded later.
Two cluster connections to each node are typically required for redundancy and improved cluster traffic flow.
For larger clusters that use higher-end platforms (FAS8040 or FAS8060) that are running clustered Data
ONTAP 8.2.1 or later, four cluster interconnects are recommended.
For proper configuration of the NetApp CN1601 and CN1610 switches, refer to the CN1601 and CN1610
Switch Setup and Configuration Guide.
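Cluster interconnect cabling and health can be checked from the cluster shell; for example (the node name is a placeholder):

```
cluster1::> network port show -role cluster
cluster1::> network interface show -role cluster
cluster1::> cluster ping-cluster -node cluster1-01
```

cluster ping-cluster exercises the cluster-interconnect paths from the specified node and reports any that fail.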


Communication Connections
Console connection (using ANSI-9600-8N1)
Remote management device connection (dependent on
model):
Service Processor (SP)
Remote LAN Module (RLM)

Management network connections

Cluster interconnect connections


Data network connections
ACP

Console

Management


COMMUNICATION CONNECTIONS
Each controller should have a console connection, which is required to get to the firmware and the boot menu
(for the setup, installation, and initialization options, for example). A remote management device connection,
although not required, is helpful in the event that you cannot get to the UI or console. Remote management
enables remote booting, the forcing of core dumps, and other actions.
Use a serial console port to set up and monitor the storage system. When you set up your system, use a serial
cable to connect to your PC. An RJ45 port that is marked IOIOI is located on the rear panel. Connect the DB9
end to a serial port on a host computer.
Properties:

Speed: 9,600 bits per second (bps)


Data bits: 8
Stop bits: 1
Parity: none
Flow control: hardware or none
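With those settings, any terminal emulator on the admin host will do; for example, with GNU screen (the device path is an example and depends on the host and serial adapter):

```
$ screen /dev/ttyUSB0 9600
```

screen defaults to 8 data bits, no parity, and 1 stop bit, matching the ANSI-9600-8N1 settings listed above.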

Each node must have two connections to the dedicated cluster network. Each node should have at least one
data connection, although these data connections are necessary only for client access. Because the nodes are
clustered together, it is possible to have a node that participates in the cluster with its storage and other
resources but doesn't field client requests. Typically, however, each node has data connections.
The cluster connections must be on a network that is dedicated to cluster traffic. The data and management
connections must be on a network that is distinct from the cluster network.


Management Interfaces
e0M interface:
Dedicated for management traffic
Used for Data ONTAP administrative tasks

RLM or SP interface:
Is used to manage and provide remote management capabilities for the
storage system
Provides remote access to the console and to monitoring, troubleshooting, logging, and alerting features
Remains operational regardless of the state of the storage system
Command to set up SP:
system node service-processor


MANAGEMENT INTERFACES
Some storage system models include an e0M interface. This interface is dedicated to Data ONTAP
management activities. An e0M interface enables you to separate management traffic from data traffic on
your storage system for better security and throughput.
To set up a storage system that has the e0M interface, remember this information:

The Ethernet port that is indicated by a wrench icon on the rear of the chassis connects to an internal
Ethernet switch.
Follow the Data ONTAP setup script.
To manage LAN in environments where dedicated LANs isolate management traffic from data traffic,
e0M is the preferred interface.
Configure e0M separately from the RLM or SP configuration.
Both configurations require unique IP and MAC addresses to enable the Ethernet switch to direct traffic
to either the management interfaces or the RLM or SP.

For more information on configuring remote support, refer to the Data ONTAP System Administration Guide
and Data ONTAP Remote Support Agent Configuration Guide.
Enhancements to SP in Data ONTAP 8.2.1:

Available through Ethernet or serial console:


Shares management wrench port with e0M for Ethernet


Toggle from serial console into SP with Ctrl+G
Toggle back to serial console with Ctrl+D


Actively manages some hardware:


Fans
Field replaceable unit (FRU) tracking
Advanced sensor management

SP enhancements are available for the FAS8000, FAS6200, FAS3200, and FAS2200 platforms.
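The SP is configured and inspected from the cluster shell under the system node service-processor directory shown on the slide; a sketch follows (the node name and addresses are placeholders):

```
cluster1::> system node service-processor show
cluster1::> system node service-processor network modify -node cluster1-01
            -address-type IPv4 -enable true -ip-address 192.168.1.10
            -netmask 255.255.255.0 -gateway 192.168.1.1
```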


Administrative Interfaces
Use these methods to administer the storage system:
CLI connections:

Secure Shell (SSH)


Telnet (off by default)
Remote Shell (RSH) (off by default)
Serial terminal connected to the console port
Ethernet connection to a remote management device

GUI connections:

OnCommand System Manager


NetApp OnCommand management software
Third-party management software using the NetApp
Manageability Software Development Kit (SDK)


ADMINISTRATIVE INTERFACES
Types of administrative interfaces:

A serial terminal connected to the console port of the storage system


An Ethernet connection to the remote management device in the storage system
A Telnet session to the storage system
A remote shell program, such as the UNIX RSH utility (provides access for a limited set of commands)
A secure shell application program, such as SSH or OpenSSH for UNIX

Most of the time, you use the NetApp OnCommand System Manager program as your UI connection.
OnCommand System Manager is a web-based graphical management interface that enables you to manage
storage systems and storage objects, such as disks, volumes, and aggregates.
You can also administer NetApp storage systems through management software such as:

SnapManager
SnapProtect
Snap Creator Framework
Third-party management software using the NetApp Manageability Software Development Kit (SDK)

The CLI enables you to execute all Data ONTAP administrative commands, except some Windows server
administrative commands.
The management device enables you to remotely execute all Data ONTAP administrative commands.
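A typical CLI session therefore begins with SSH to the cluster management address (the address and the admin account are examples):

```
$ ssh admin@192.168.0.101
Password:
cluster1::>
```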


Topics for Discussion

What is the difference between console access and SP or RLM access?

If the SP is being accessed through e0M, how can I access it through the console?


TOPICS FOR DISCUSSION


The console communicates with and monitors the Data ONTAP operating system. The console is available on
a running controller. The SP and RLM communicate with the FAS controller at the hardware level,
monitoring resources (such as system sensors, including voltages, temperatures, and battery condition) and
they notify Data ONTAP of any issues. They run if there is power to the controller.


Powering On a System
1. Power on network switches.

2. Power on disk shelves.


3. Power on tape devices (if present).
4. Power on storage controllers.


POWERING ON A SYSTEM
This order is the recommended order for powering on the hardware devices in a cluster.


Firmware
Use LOADER firmware.

Press any key to enter the firmware.


Two boot device images exist: flash0a and flash0b.
Use printenv to show the firmware environment variables.
Use setenv to set the firmware environment variables; for
example, setenv AUTOBOOT true.


FIRMWARE
1. Use LOADER firmware.
2. From the console, early in the booting process, press any key to enter the firmware.
3. Use version to show the firmware version.
4. Two boot device images exist (depending on the platform): flash0a and flash0b.
   CompactFlash
   USB flash
5. Use printenv to show the firmware environment variables.
6. Use setenv to set the firmware environment variables; for example, setenv AUTOBOOT true.
To copy flash0a to flash0b, run flash flash0a flash0b.
To put (flash) a new image onto the primary flash, you must first configure the management interface. The
auto option of ifconfig can be used if the management network has a Dynamic Host Configuration
Protocol (DHCP) or BOOTP server. If it doesn't, you must run ifconfig <interface> addr=<ip>
mask=<netmask> gw=<gateway>.
After the network is configured, ensure that you can ping the IP address of the TFTP server that contains the
new flash image. Then, to flash the new image, run
flash tftp://<tftp_server>/<path_to_image> flash0a.
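Put together at the firmware prompt, the sequence described above might look like this; the interface name, addresses, and image path are examples only:

```
LOADER> ifconfig e0M addr=192.168.0.50 mask=255.255.255.0 gw=192.168.0.1
LOADER> ping 192.168.0.10
LOADER> flash tftp://192.168.0.10/path/to/image flash0a
```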


The Boot Menu


Choose one of the following:
Normal boot

Boot without /etc/rc (no effect on clusters)


Change password
Clean configuration and initialize all disks
Maintenance mode boot
Update flash from backup config

Install new software first


Reboot node

THE BOOT MENU


Installing and Initializing a Node


You need:
Access to an FTP, Trivial File Transfer Protocol (TFTP), or HTTP server
The software image file on that server

From the boot menu, complete the following:


1. Select Option 7.
2. When prompted, enter a URL to a Data ONTAP .tgz image.
3. When these steps are complete, allow the system to boot.

From the boot menu, select Option 4:

Deletes all data on the disks that are owned by the controller
Creates a new root aggregate and root volume for configuration


INSTALLING AND INITIALIZING A NODE


After you boot the system, if the node stops at the firmware prompt by itself (which happens if the firmware
environment variable AUTOBOOT is set to false), type boot_primary to enable the node to continue to the
boot menu. If AUTOBOOT is set to true, the node goes straight to the boot menu.
If you use TFTP, beware of older TFTP servers that have limited capabilities and might cause installation
failures.
Because all disks are initialized in parallel, the time that is required to initialize the disks is based
on the size of the largest disk that is attached to the node, not on the sum capacity of the disks. After the disks
are initialized, the node's first aggregate and its vol0 volume are automatically created.
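As an illustration of the boot-menu installation described above, the interaction might resemble the following sketch (the exact prompt wording varies by release, and the HTTP server and image path are hypothetical):

   Selection (1-8)? 7
   URL of the Data ONTAP image: http://192.168.0.10/ontap/image.tgz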


Normal Boot Sequence

Data ONTAP 7-Mode:
1. Loads the kernel into memory from the boot device
2. Mounts the root volume on the physical disk
3. Runs startup scripts:
   /etc/rc file (boot initialization)
   /etc/registry file (option configurations)
   /etc/hosts file (local name resolution)
4. Is ready for use

Clustered Data ONTAP:
1. Loads the kernel into memory from the boot device
2. Mounts the / root image from rootfs.img on the boot device
3. Loads Init and runs startup scripts
4. Loads NVRAM kernel modules
5. Creates /var partition on NVRAM (restored from boot device if a backup copy exists)
6. Starts management processes
7. Loads the data and network modules
8. Mounts vol0 root volume
9. Is ready for use


NORMAL BOOT SEQUENCE


System Setup 3.0


Overview
Set up your FAS2200, FAS3200, and FAS8000 systems three times faster
You don't need to be a storage expert
(Default settings take the guesswork out of the setup process)
Get NetApp best practices for optimal performance
(including deduplication, FlexVol volumes, automatic resizing, and provisioning
recommendations)


SYSTEM SETUP 3.0: OVERVIEW


The System Setup tool is designed to improve the initial overall customer experience.
System Setup 3.0 supports Data ONTAP 8.3 and can be used to set up new FAS2200, FAS3200, and
FAS8000 storage systems. It runs on Windows XP, Windows 7, and Windows Server 2008 R2 x64 and SE
x64. It requires .NET Framework 3.5 SP1.
How does it work? System Setup takes the user through a series of six simple steps to set up and provision a
new FAS system. System Setup automatically discovers networked storage, and you only need to enter some
basic information. Defaults are provided for address, nodes, and so on. System Setup also automatically
arranges disks into the largest possible recommended configuration for optimal performance and efficiency.
As part of recommended best practices, NetApp recommends that AutoSupport be turned on; this option is
provided as part of the setup process. After the clustered storage is set up, customers are ready to start using
OnCommand System Manager.


System Setup 3.0


Six Steps
1. Perform console-free discovery.
2. Set up cluster management.
3. Configure AutoSupport.
4. Set up disks and storage.
5. Set up SVMs and the network.
6. Enable protocols.

Congratulations!


SYSTEM SETUP 3.0: SIX STEPS


A wizard guides customers through the setup screens to perform discovery without a console, set up cluster
management, configure AutoSupport, set up disks and storage, set up an SVM and the network, and enable
the protocols. System Setup verifies successful storage system setup and readiness for using OnCommand
System Manager to manage the new storage.


Answer This Question

You can use System Setup 3.0 to configure Data ONTAP on which hardware platforms?
1. FAS2200 series platforms only
2. Enterprise-level platforms only
3. Single-node clusters
4. All supported NetApp FAS platforms


ANSWER THIS QUESTION


Review
Basic Steps for Setting Up a System

This Module

1. Connect controllers, disks, and cables.
2. Set up and configure nodes.
3. Install software onto nodes. (Software is preinstalled on most systems.)
4. Initialize disks.
5. System setup: Create a cluster on the first node, then join additional nodes to the cluster.
6. Complete the initial configuration.
7. Create data aggregates.
8. Create an SVM.
9. Create data volumes and protocol configuration.


REVIEW: BASIC STEPS FOR SETTING UP A SYSTEM


Additional Training
Clustered Data ONTAP Installation Workshop
System installation and configuration for clustered Data ONTAP

Data ONTAP 8 Cabling


ADDITIONAL TRAINING


What Ken Has Learned in This Module

I've learned a lot in this module. I can


explain the configurations of Data
ONTAP operating in 7-Mode and
clustered Data ONTAP, explain the
supported and required hardware,
and describe System Setup 3.0.


WHAT KEN HAS LEARNED IN THIS MODULE


References
Clustered Data ONTAP System Administration Guide for
Cluster Administrators
Clustered Data ONTAP High-Availability Configuration Guide
Clustered Data ONTAP Remote Support Agent Configuration
Guide
Clustered Data ONTAP Switch Setup Guide for Cisco
Switches

CN1601 and CN1610 Switch Setup and Configuration Guide


REFERENCES


Exercise

Module 2:
Hardware and Initial Setup

Time Estimate: 15 minutes


EXERCISE
Please refer to your exercise guide.


Module 3

Initial Storage System


Configuration


MODULE 3: INITIAL STORAGE SYSTEM CONFIGURATION


In this module, you learn how to configure key features of clustered Data ONTAP, such as role-based access
control (RBAC), feature licensing, Network Time Protocol (NTP), and the AutoSupport tool. You also learn
about policies and job schedules, which are used throughout this course.

3-1

Clustered Data ONTAP Administration: Initial Storage System Configuration

2015 NetApp, Inc. This material is intended only for training. Reproduction is not authorized.

NetApp University - Do Not Distribute

Ken Asks a Question

The installation is complete. How do I


prepare Data ONTAP for my enterprise?


KEN ASKS A QUESTION


Basic Steps for Setting Up a System


1. Connect controllers, disks, and cables.
2. Set up and configure nodes.
3. Install software on nodes. (Software is preinstalled on most systems.)

This Module

4. Initialize disks.
5. System setup: Create a cluster on the first node, then join additional nodes to the
cluster.

6. Complete the initial configuration.


7. Create data aggregates.
8. Create a storage virtual machine (SVM).

9. Create data volumes and protocol configuration.


BASIC STEPS FOR SETTING UP A SYSTEM


Lesson 1

Role-Based Access Control


LESSON 1: ROLE-BASED ACCESS CONTROL


Administrative Access
Default system administrator account:
Data ONTAP operating in 7-Mode: root
Clustered Data ONTAP: admin

Additional administrator accounts provide differing degrees of administrative access.


Use the CLI or OnCommand System Manager.

Clustered Data ONTAP administrators are associated with cluster or data SVMs.
Administrator accounts are created with role-based access control (RBAC):
Data ONTAP 7-Mode

system> useradmin
Clustered Data ONTAP
c1::> security login role create

ADMINISTRATIVE ACCESS
You can use the default system administration account for managing a storage system, or you can create
additional administrator user accounts to manage administrative access to the storage system.
You might want to create an administrator account for these reasons:


You can specify administrators and groups of administrators with differing degrees of administrative
access to your storage systems.
You can limit an administrator's access to specific storage systems by providing an administrative
account on only those systems.
Creating different administrative users enables you to display information about who is performing which
commands on the storage system.


Role-Based Access Control


Data ONTAP 7-Mode

RBAC:
Manages a set of capabilities for users and administrators
Enables you to monitor user and administrator actions

To implement RBAC:
Create a role with specific capabilities
Create a group with one or more assigned roles
Create one or more users, and assign them to a group or groups

Groups

Roles

Capabilities

Users

ROLE-BASED ACCESS CONTROL: DATA ONTAP 7-MODE


RBAC is a mechanism that specifies how users and administrators can use a specific computing environment.
In most organizations, some system administrators require more privileges than others. You provide specific
levels of system access by selectively granting and revoking privileges:

User: Locally created or from a domain; must be assigned to a group when it is created
Role: Set of capabilities assigned to a group
Capability: Privilege granted to a role to execute commands: login, CLI, API, and security rights
Groups: Collection of users or domains that are granted one or more roles

You use RBAC to define sets of capabilities. You then assign a set of capabilities to one or more users or
groups of users.


RBAC
Predefined Roles in Clustered Data ONTAP

Cluster-scoped roles
admin

backup

readonly

autosupport

none
c1::> security login role show -vserver c1

Data SVMscoped roles


vsadmin

vsadmin-backup

vsadmin-volume

vsadmin-readonly

vsadmin-protocol
c1::> security login role show -vserver svm1

RBAC: PREDEFINED ROLES IN CLUSTERED DATA ONTAP


Clustered Data ONTAP includes administrative access-control roles that can be used to subdivide
administration duties for SVM administration tasks.
The vsadmin role is the superuser role for an SVM. The admin role is the superuser for a cluster.
Clustered Data ONTAP 8.1 and later versions support the vsadmin role. The vsadmin role grants the data
SVM administrator full administrative privileges for the SVM. Additional roles include the vsadmin-protocol
role, the vsadmin-readonly role, and the vsadmin-volume role. Each of these roles provides a
unique SVM administration privilege.
A cluster administrator with the readonly role can grant read-only capabilities. A cluster administrator with
the none role cannot grant capabilities.
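As a sketch of how a predefined role is assigned, an SVM administrator account with the vsadmin role could be created as follows (the user name is hypothetical; the -username syntax used elsewhere in this module is assumed):

c1::> security login create -vserver svm1 -username vsadmin1
-application ssh -authmethod password -role vsadmin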


RBAC
Custom Roles

Role name

Command directory
Query
c1::> security login role create

c1::> security login modify -vserver svm1 -user ken -role svm1vols


RBAC: CUSTOM ROLES


Cluster administrators can create access-control roles to be applied to cluster or SVM administrators. These
roles can grant or limit authority to perform certain administrative tasks. An access-control role consists of a
role name and a command or a command directory to which the role has access. It can include an access level
(none, readonly, or all) and a query that applies to the specified command or command directory. In this
example, we create a role that is named svm1vols and that grants access to the volume commands, but it
limits access to aggregates that start with the string aggr7. We assign the role to a user who is named Ken.
After the role is created, it can be applied to individual administrators.
c1::> security login role create -vserver svm1 -role svm1vols -cmddirname
volume -query "-aggr aggr7*" -access all
c1::> security login modify -vserver svm1 -user ken -role svm1vols
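To verify the result, the role definition and the account assignments can be displayed (a sketch; output varies by cluster):

c1::> security login role show -vserver svm1 -role svm1vols
c1::> security login show -vserver svm1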


Active Directory Authentication


for Administrators
Clustered Data ONTAP 8.2
Active Directory authentication with SVM as a dedicated
communication tunnel
Requires CIFS license

Clustered Data ONTAP 8.2.1 and later


Active Directory authentication functionality fully supported
Does not require CIFS license
c1::> security login create -vserver c1
-username learn\Administrator -application ssh
-authmethod domain

ACTIVE DIRECTORY AUTHENTICATION FOR ADMINISTRATORS


Clustered Data ONTAP 8.2 introduced support for Active Directory authentication for cluster and SVM
administrators. A dedicated, CIFS-licensed SVM serves as a communication tunnel to the administration
server. With clustered Data ONTAP 8.2.1 and later, the functionality is fully supported, but a CIFS protocol
license is no longer required. This enhancement satisfies customers who want to use Active Directory to
authenticate their storage and SVM administrators but do not need CIFS data access.
To enable Active Directory domain users to access the cluster in releases earlier than Data ONTAP 8.2.1, you
must set up an authentication tunnel through a CIFS-enabled virtual storage server. You must also create
cluster user accounts for the domain users. This functionality requires that CIFS is licensed on the cluster. The
following commands create a CIFS server for the vs0 SVM, specify vs0 as the tunnel for Active Directory
domain authentication, and create a cluster user account to enable the administrator user of the LEARN
domain to access the cluster through Secure Shell (SSH).
For clustered Data ONTAP 8.2 only:
c1::> vserver cifs create -vserver vs0 -cifs-server vs0cifs -domain
learn.netapp.local
c1::> security login domain-tunnel create -vserver vs0
c1::> security login create -vserver c1 -username learn\Administrator
-application ssh -authmethod domain


Administrative Security
Use the security login command to configure role-based
administrative access to the cluster
Configure by application: console, HTTP, SNMP, SSH, and the ONTAPI
interface library
To enable and disable security audit logging, use:
c1::> security audit modify -cliset on -httpset on -cliget on -httpget on

Audited commands go to mgwd.log files, with the console or the ssh prefix:
/mroot/etc/mlog/mgwd.log*
Nodes keep track of local SSH and console commands:
/mroot/etc/mlog/command-history.log

ADMINISTRATIVE SECURITY
cliset: Allow user to create or modify settings by using the clustershell
cliget: Allow user to view settings by using the clustershell
httpset: Allow user to create or modify settings with management tools that use the http protocol
httpget: Allow user to view settings with management tools that use the http protocol
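For example, before changing these settings, the current audit configuration can be displayed with the following command (a minimal sketch):

c1::> security audit show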


Topic for Discussion


How does multi-tenancy affect the way that
you might use RBAC?


TOPIC FOR DISCUSSION


Lesson 2
Licensing


LESSON 2: LICENSING


NetApp Licensing Model


Proof-of-sale is recorded as a license
entitlement record
Standard licenses are linked to the controller
serial number in Data ONTAP 8.2 and later
Features are licensed on every node
License keys are 28 characters long
Nondisruptive upgrades from
Data ONTAP 8.1 to 8.2 and later do not
immediately require new keys

Licensing model is unchanged for Data


ONTAP 8.3

NETAPP LICENSING MODEL


A license is a record of one or more software entitlements. License keys, also known as license codes, enable
you to use certain features or services on your cluster. Each cluster requires a cluster base license key, which
you can install either during or after the cluster setup. Some features require additional licenses. Data ONTAP
feature licenses are issued as packages, each of which contains multiple features or one feature. A package
requires a license key, and installing the key enables you to access all features in the package. Data ONTAP
prevents you from installing a feature license before a cluster base license key is installed.
Starting with Data ONTAP 8.2, all license keys are 28 characters long. Licenses that were installed earlier
than Data ONTAP 8.2 continue to work in Data ONTAP 8.2 and later releases. However, Data ONTAP
requires that you enter the license key in the 28-character format if you need to reinstall a license; for
example:


If you delete a previously installed license and want to reinstall it in Data ONTAP 8.2 or later
If you perform a controller replacement procedure for a node in a cluster that is running Data ONTAP 8.2
or later


License Types
Standard license
Locked to a node
Feature functions with one licensed node if
a licensed node is running

Site license
Single license that enables the feature on
the entire cluster
Is not carried with nodes that are removed
from the cluster

Evaluation license
Also known as a demo license
Temporary license with an expiration date
Cluster-side and not locked to a node

LICENSE TYPES


Standard license: A standard license is a node-locked license. It is issued for a node with a specific
system serial number and is valid only for the node that has the matching serial number. Installing a
standard, node-locked license entitles a node to the licensed functionality. It does not entitle the entire
cluster to use the feature. For the cluster to be enabled, though not entitled, to use the licensed
functionality, at least one node must be licensed for the functionality. However, if only one node in a
cluster is licensed for a feature, and that node fails, then the feature will no longer function on the rest of
the cluster until the licensed node is restarted.
Site license: A site license is not tied to a specific system serial number. When you install a site license,
all nodes in the cluster are entitled to the licensed functionality. The system license show
command displays site licenses under the cluster serial number. If your cluster has a site license and you
remove a node from the cluster, the node does not carry the site license with it, and that node is no longer
entitled to the licensed functionality. If you add a node to a cluster that has a site license, the node is
automatically entitled to the functionality that is granted by the site license.
Evaluation license: An evaluation license is a temporary license that expires after a certain period of
time. It enables you to try certain software functionality without purchasing an entitlement. It is a cluster-wide license, and it is not tied to a specific serial number of a node. If your cluster has an evaluation
time. It enables you to try certain software functionality without purchasing an entitlement. It is a clusterwide license, and it is not tied to a specific serial number of a node. If your cluster has an evaluation
license for a package and you remove a node from the cluster, the node does not carry the evaluation
license with it.


License Commands
c1::> license ?
(system license)
add        Add one or more licenses
clean-up   Remove unnecessary licenses
delete     Delete a license
show       Display licenses
status>    Display license status


LICENSE COMMANDS
Data ONTAP enables you to manage feature licenses in the following ways:

Add one or more license keys


Display information about installed licenses
Display the packages that require licenses and their current license status on the cluster
Delete a license from a cluster or from a node whose serial number you specify
NOTE: The cluster base license is required for the cluster to operate. Data ONTAP does not enable you
to delete it.


Display or remove expired or unused licenses
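A typical license-management sequence might look like the following sketch (the license key is a placeholder):

c1::> system license add -license-code <28-character-license-key>
c1::> system license show
c1::> system license status show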


System Manager License Page

Features that can be licensed
Licensed for this cluster or not? Node-locked? Demo? Unlicensed?
Licenses on the cluster for the selected feature


SYSTEM MANAGER LICENSE PAGE


The License page in System Manager is located on the Cluster tab. The upper pane shows licensed features.
The lower pane shows licensed nodes and expiration dates. It also shows whether a license is a valid legacy
license from an earlier version of clustered Data ONTAP.


Answer This Question

Which of the following statements about standard license keys for Data ONTAP 8.2 and later are true? (Select all that are true.)

1. They are node-locked.
2. They are 28 characters long.
3. They enable nondisruptive upgrade from Data ONTAP 8.1.x.
4. They are optional after upgrade from Data ONTAP 8.1.x.

ANSWER THIS QUESTION


Lesson 3

Policies and Schedules


LESSON 3: POLICIES AND SCHEDULES


Ken Asks

I've learned so much already. Is there


anything else that I need to understand
as we continue this course? Are there
basic concepts that are used throughout
clustered Data ONTAP?


KEN ASKS


Policy-Based Storage Services


Policy:
A collection of rules that are created and managed by the cluster or
SVM administrator
Predefined or created to manage data access

Policy examples:
Firewall and security
Export, quota, file, and data
Snapshot copy and SnapMirror
Quality of service (QoS)

POLICY-BASED STORAGE SERVICES


Examples of services that are policy-based:


Firewall
System health
SnapMirror
Volume efficiency
Volume FlexCache
Volume quota
Volume Snapshot
SVM CIFS group
SVM data
SVM export
SVM fpolicy
SVM security file-directory
QoS policy-group
Failover


Policy Example
policyA
  Rule1: criteria1 -> property
  Rule2: criteria2 -> property
  Rule3: criteria3 -> property

policyB
  Rule1: criteria1 -> property
  Rule2: criteria2 -> property
  Rule3: criteria3 -> property

fwall_policy1
  192.168.1.0/24 ssh -> allow
  192.168.1.0/24 http -> allow

fwall_policy2
  192.168.21.0/24 ssh -> allow
  192.168.22.0/24 ssh -> allow
  192.169.23.0/24 ssh -> allow

You assign a policy to a service or resource


A rule's criteria in the policy match the service or resource
The matching rule's properties apply to the service or resource

The example is a firewall policy that allows or denies access to a protocol for specific IP address ranges.

POLICY EXAMPLE
A policy (a concept that is specific to clustered Data ONTAP) is a collection of rules that are created and
managed by the cluster or SVM administrator. Policies are predefined as defaults or they are created to
manage the properties of various types of objects. Examples of policy use include firewall, security, export,
quota, storage quality of service, and replication rules.
Some rules are indexed, meaning that you can specify the order in which each rule is considered for use in a
particular situation. Rule indexes can be modified, and additional rules can be inserted in the list by
specifying the rule's new position in the list.
In Data ONTAP operating in 7-Mode, you have to create rules at the object level. As a result, there is a
separate rule for every instance of the object, even if the rule is identical to others. With clustered Data
ONTAP policies, you create the set of rules one time, and then you associate it with all objects that adhere to
the same set of rules.
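As a sketch of this reuse, an export policy and an indexed rule can be created once and then associated with several volumes (the SVM, policy, volume, and network values are hypothetical):

c1::> vserver export-policy create -vserver svm1 -policyname expol1
c1::> vserver export-policy rule create -vserver svm1 -policyname expol1
-ruleindex 1 -clientmatch 192.168.1.0/24 -rorule sys -rwrule sys
c1::> volume modify -vserver svm1 -volume vol1 -policy expol1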


Job Schedules
Job schedules can be used:
Globally (by all virtual storage systems and SVMs)
For functions that can be automated
For SnapShot, SnapMirror, and SnapVault events, for example

Note the following job schedule syntax:


@:00,:05,:10...:55 means every five minutes on the five-minute
marks

@2 means daily at 2:00 a.m.


@0:10 means daily at 12:10 a.m.
@:05 means hourly at five minutes after the hour

JOB SCHEDULES
Schedules apply to Data ONTAP 7-Mode and clustered Data ONTAP. Schedules are used to control events
that are automated and time-based. The most common examples are data replication and AutoSupport
messages.
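For example, a cron-style schedule that runs daily at 2:00 a.m. could be created and verified as follows (the schedule name is hypothetical):

c1::> job schedule cron create -name daily2am -hour 2 -minute 0
c1::> job schedule show -name daily2am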


Lesson 4

Network Time Protocol


LESSON 4: NETWORK TIME PROTOCOL


Network Time Protocol


Manually set the date, time, and time zone with system date
modify.
Kerberos is time-sensitive and typically requires the Network Time
Protocol (NTP).
NTP enablement and disablement are cluster-wide.
NTP configuration is per node in releases that are earlier than Data ONTAP 8.3.
Data ONTAP 7-Mode

system> options timed.proto ntp


system> options timed.servers <ntp-server>

Clustered Data ONTAP (release 8.2 and earlier)

c1::> system services ntp server create node *


-server <ntp-server>

NETWORK TIME PROTOCOL


Use the timezone command to set the time zone; use the date command to set the clock. By default, the
clock is reset on all the nodes in the cluster.


Network Time Protocol


Enhancement for Data ONTAP 8.3

The cluster-wide NTP configuration command in Data ONTAP 8.3 replaces the previous, per-node command.

c1::> cluster time-service ntp server show
Server               Version
-------------------- -------
ntp1.example.com     auto
ntp2.example.com     auto

The cluster-wide ntp command does not work until the entire
cluster is running Data ONTAP 8.3 or later.


NETWORK TIME PROTOCOL: ENHANCEMENT FOR DATA ONTAP 8.3


Data ONTAP 8.3 adds a new cluster-wide version of the NTP configuration command. This command
replaces the per-node NTP configuration command that is available in earlier releases of Data ONTAP. The
command options are create, modify, delete, and show.
The cluster-wide ntp command does not work until the entire cluster is running Data ONTAP 8.3 or later.
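
As a hedged sketch of the create, show, and delete options mentioned above (the server name is hypothetical):

```
c1::> cluster time-service ntp server create -server ntp1.example.com
c1::> cluster time-service ntp server show
c1::> cluster time-service ntp server delete -server ntp1.example.com
```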


Answer This Question


Which function or functions rely on NTP?
1. Log file review
2. Troubleshooting
3. Setting up CIFS
4. All of the above


ANSWER THIS QUESTION


Lesson 5

The AutoSupport Tool


LESSON 5: THE AUTOSUPPORT TOOL


Ken Asks

I depend on my NetApp storage to keep the


business up and running. Even with the
nondisruptive operations (NDO), nondisruptive
upgrade (NDU), and scaling features of NetApp, I
would like more peace of mind.
I would like to be sure that there are no
undetected problems on the horizon. How can
NetApp help me to proactively prevent problems
before they happen?


KEN ASKS


AutoSupport Monitoring and Reporting


The AutoSupport tool:
 Monitors a cluster's or storage system's operations
 Sends automatic messages to technical support (HTTP, HTTPS, or SMTP through an email server)
 Should be configured on all nodes

AutoSupport messages are generated:
 When triggering events occur
 When you initiate a test message
 When the system reboots
 Daily (logs only)
 Weekly (logs, configuration, and health data)


AUTOSUPPORT MONITORING AND REPORTING


The AutoSupport tool is a call-home feature that is included in the Data ONTAP operating system for all
NetApp systems. This integrated and efficient monitoring and reporting tool constantly monitors the health of
your system.
Whenever system events occur, the AutoSupport tool enables storage systems to send messages directly to
system administrators and to NetApp technical support, which has a dedicated team that continually monitors
AutoSupport analysis tools. AutoSupport messages contain useful information that enables technical support
to identify and solve problems quickly and proactively.
You can also subscribe to the abbreviated version of urgent AutoSupport messages through alphanumeric
pages, or you can customize the type of message alerts that you want to receive.
The AutoSupport Message Matrices list all the current AutoSupport messages in order of software version.
To continuously monitor your system's status and health, the AutoSupport tool:

Is automatically triggered by the kernel once a week to send information to the email addresses that are specified in the autosupport.to option. In addition, you can use the options command to manually invoke the AutoSupport mechanism to send this information.
Sends a message in response to events that require corrective action from the system administrator or NetApp technical support.
Sends a message when the system reboots.


Examples of AutoSupport Events


Event                                      Email Subject Line
Low NVRAM battery                          BATTERY_LOW
Disk failure                               DISK_FAIL!!!
Disk scrub detected checksum errors        DISK_SCRUB CHECKSUM ERROR
Shutdown occurred because of overheating   OVER_TEMPERATURE_SHUTDOWN!!!
Partial RPS failure occurred               REBOOT
Disk shelf error occurred                  SHELF_FAULT
Spare disk failure occurred                SPARE DISK FAILED
Weekly backup of /etc/messages occurred    WEEKLY_LOG
Successful HA takeover of partner          CLUSTER TAKEOVER COMPLETE
Unsuccessful HA takeover                   CLUSTER TAKEOVER FAILED
HA takeover of a node                      REBOOT (CLUSTER TAKEOVER)
HA giveback occurred                       CLUSTER GIVEBACK COMPLETE


EXAMPLES OF AUTOSUPPORT EVENTS


AutoSupport messages are event-driven. Many events that can occur on a NetApp storage system can trigger
an AutoSupport transmission. This table shows a list of common AutoSupport trigger events.
To read descriptions of some of the AutoSupport messages that you might receive, go to the NetApp Support
site and search for AutoSupport message matrices. You can view either the online version or the version in
the Data ONTAP operating system guide.


Configuring AutoSupport
Data ONTAP 7-Mode

1. system> options autosupport.support.enable on
2. system> options autosupport.support.transport [smtp|http|https]
3. system> options autosupport.mailhost xx.xx.xx.xx
4. system> options autosupport.from bob@learn.local
5. system> options autosupport.to support@netapp.com
6. system> options autosupport.noteto tom@learn.local
7. system> options autosupport.enable on
8. system> options autosupport.doit "testing asup"

Clustered Data ONTAP

1. c1::> system node autosupport modify -node * -support enable -transport smtp
   -mailhost xx.xx.xx.xx -from bob@learn.local -to support@netapp.com
   -noteto tom@learn.local -state enable
2. c1::> system node autosupport invoke -node cluster_02 -type test

CONFIGURING AUTOSUPPORT
AutoSupport configuration involves identifying the transport of choice (SMTP, HTTP, or HTTPS) and the
details that are necessary to transport the message to NetApp. The steps to configure AutoSupport on Data
ONTAP operating in 7-Mode are quite different from the steps on clustered Data ONTAP, but they both
involve basically the same information, including mail host, the from email address, and any recipients,
including NetApp Support. You can also use the noteto option to send notifications to internal and
external recipients without sending the entire AutoSupport payload.
After configuring AutoSupport, always send a test message to verify that you get the desired result. For
testing your AutoSupport configuration on 7-Mode, NetApp recommends that you use the message TEST or
TESTING. In clustered Data ONTAP, sending a message of the type test is sufficient.
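
To confirm the configuration and the delivery of the test message, something like the following can be used. This is a hedged sketch; the node name is hypothetical, and exact output fields vary by release:

```
c1::> system node autosupport show -node *
c1::> system node autosupport invoke -node cluster1-01 -type test
c1::> system node autosupport history show -node cluster1-01
```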


Topics for Discussion

What if one node in my HA pair encounters a recurring panic overnight that keeps it from restarting? Suppose that a failover is initiated, and clients are served nondisruptively from the partner node.
 Is it possible that nobody would notice?
 What if NetApp technical support notified you by phone or email that a node was down?


TOPICS FOR DISCUSSION


My AutoSupport
Key Features
 Identifies risks and provides best practice tips
 Compares your hardware and software versions and alerts you to potential obsolescence
 Provides performance and storage utilization reports to proactively plan capacity needs
 Provides new system visualization tools, Transition Advisor, and Upgrade Advisor for clustered Data ONTAP systems

(Diagram: NetApp systems send AutoSupport messages to the AutoSupport data warehouse in the NetApp Support cloud; My AutoSupport and the My AutoSupport mobile app present the data to NetApp, SSC partners, and customers.)


MY AUTOSUPPORT
My AutoSupport is a suite of web-based applications hosted on the NetApp Support site and accessible via
your web browser. Using the data from AutoSupport, My AutoSupport proactively identifies storage
infrastructure issues through a continuous health-check feature and automatically provides guidance on
remedial actions that help increase uptime and avoid disruptions to your business.
My AutoSupport provides four primary functions.
First, it identifies risks and provides best practice tips. For example, My AutoSupport might find a
configuration issue, a bad disk drive, or version incompatibility on your system.
Second, My AutoSupport can compare your hardware and software versions and alert you to potential
obsolescence. For example, My AutoSupport alerts you about end-of-life (EOL) issues or an upcoming
support contract expiration date.
Third, My AutoSupport provides performance and storage utilization reports to help you proactively plan
capacity needs.
Finally, My AutoSupport provides new system visualization tools and transition advisor tools for clustered
Data ONTAP systems.
If you plan any changes to your controllers, NetApp recommends manually triggering an AutoSupport message before you make the changes. This manually triggered AutoSupport message provides a "before" snapshot for comparison, in case a problem arises later.


References
Clustered Data ONTAP System Administration Guide for
Cluster Administrators
Clustered Data ONTAP Software Setup Guide


REFERENCES


What Ken Has Learned in This Module

Now that I've created a cluster, I'm able to log in to OnCommand System Manager, install my feature licenses, and set up the system time. I can also explain policies and schedules.


WHAT KEN HAS LEARNED IN THIS MODULE


Exercise

Module 3: Initial Storage System Configuration

Time Estimate: 20 minutes


EXERCISE
Please refer to your exercise guide.


Module 4

Storage Management


MODULE 4: STORAGE MANAGEMENT


This module teaches you how to manage storage in a clustered environment. It also describes physical,
logical, and virtual storage architecture. Finally, you also learn how to configure storage resources in
clustered Data ONTAP.


This Module

Basic Steps for Setting Up a System

1. Connect controllers, disks, and cables.
2. Set up and configure nodes.
3. Install software onto nodes. (Software is preinstalled on most systems.)
4. Initialize disks.
5. System setup: Create a cluster on the first node, then join additional nodes to the cluster.
6. Complete the initial configuration.
7. Create data aggregates.
8. Create a storage virtual machine (SVM).
9. Create data volumes and protocol configuration.


BASIC STEPS FOR SETTING UP A SYSTEM


Ken Asks a Question

I am ready for data! How do I start organizing my storage?


KEN ASKS A QUESTION


Lesson 1

Data ONTAP Storage


Architecture


LESSON 1: DATA ONTAP STORAGE ARCHITECTURE


Data ONTAP Storage Architecture


Logical layer: files and LUNs in FlexVol volumes
Physical layer: aggregates composed of RAID groups of disks

Data ONTAP architecture leverages a dynamic virtualization engine.
Volumes are dynamically mapped to physical space.


DATA ONTAP STORAGE ARCHITECTURE


The architecture of the Data ONTAP operating system leverages a dynamic virtualization engine. Logical
data volumes are dynamically mapped to physical space.
Think of the aggregate as a pool of physical 4-KB blocks. Storing a 20-MB file, for example, would require multiple 4-KB blocks. To protect these blocks, NetApp storage uses one or more RAID groups. As files come into the system, they are stored across these protected blocks. To make it easier to manage the files and the aggregate's pool of blocks, containers that are called volumes are created. The volumes expose data to the client.
A volume can be any of these types:

A traditional volume is contained by a single, dedicated aggregate. Because a traditional volume is tightly
coupled with its containing aggregate, no other volumes can get their storage from an aggregate that
contains a traditional volume. Only Data ONTAP operating in 7-Mode uses traditional volumes, and they
are not recommended.
A FlexVol volume allocates only a portion of the available space within an aggregate. One or more
volumes can be on an aggregate. This type is the default volume type.
An infinite volume is a single, scalable volume that can store up to 2 billion files and tens of petabytes of
data. An infinite volume uses storage from multiple aggregates on multiple nodes. (Only clustered Data
ONTAP uses infinite volumes.)

Data ONTAP is optimized for writes. It can write any file system block (except the one that contains the root
inode) to any location on disk, it can write blocks to disk in any order, and it improves RAID performance by
writing to multiple blocks in the same stripe by creating a full-stripe write.
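
The mapping of logical volumes onto physical aggregates can be inspected from the clustershell. This is an illustrative sketch only; the SVM name is hypothetical:

```
c1::> storage aggregate show
c1::> volume show -vserver svm1 -fields volume,aggregate,size
```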


Physical Layer
Disks

When a disk is inserted into a system:
 Disk is initially unowned
 Ownership can be assigned automatically or manually

When owned, disks can:
 Function as hot spares
 Be assigned to aggregates


PHYSICAL LAYER: DISKS


A disk (or drive) is the basic unit of storage for storage systems that use the Data ONTAP operating system to
access native disk shelves. An array LUN is the basic unit of storage that a storage array provides to a storage
system that runs Data ONTAP. A virtual disk is the basic unit of storage for a storage system that runs Data
ONTAP-v (the technology behind Data ONTAP Edge).
At any given time, each disk, and therefore each aggregate and volume that are built on the disk, can be
owned by one of the two nodes in a high-availability (HA) pair. This form of software ownership is made
persistent by writing the information on the disk itself. The ability to write disk ownership information is protected by the use of persistent reservations. Persistent reservations can be removed from disks by power-cycling the shelves or by selecting maintenance mode while in boot mode and then issuing manual commands. If the node that owns the disks is running in normal mode, the node reasserts its persistent reservations every 30 seconds. Changes in disk ownership are managed automatically by normal operations, although there are commands to manipulate disk ownership manually, if necessary.
Each node in an HA pair can perform reads from any disk to which the node is connected, even if the node isn't that disk's owner; however, only the node that is marked as a disk's current owner is allowed to write to that disk.
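
Disk ownership can be inspected and, if necessary, assigned manually from the clustershell. A hedged sketch (the disk and node names are hypothetical):

```
c1::> storage disk show -container-type unassigned
c1::> storage disk assign -disk 1.0.11 -owner cluster1-01
c1::> storage disk show -fields owner
```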


Physical Layer
Disk Types
Data ONTAP   Disk              Industry-Standard
Disk Type    Class             Disk Type           Description
BSAS         Capacity          SATA                Bridged SAS-SATA disks
FSAS         Capacity          NL-SAS              Near-line SAS
mSATA        Capacity          SATA                SATA disk in multidisk carrier storage shelf
SAS          Performance       SAS                 Serial-attached SCSI
SSD          Ultraperformance  SSD                 Solid-state drive
ATA          Capacity          SATA                Serial ATA
FC-AL        Performance       FC                  Fibre Channel
LUN          N/A               LUN                 Array LUN
SAS          N/A               VMDK                Virtual disks


PHYSICAL LAYER: DISK TYPES


The Data ONTAP operating system associates a type with every disk. Data ONTAP reports some disk types
differently than the industry standards. To avoid confusion, you should understand how Data ONTAP disk
types map to industry standards.
Data ONTAP supports two storage connection types for physical disks: SAS and FC. The FC connection type supports three topologies: arbitrated loop, switched, and point-to-point.

SAS, BSAS, FSAS, solid-state drive (SSD), and mSATA disks use the SAS connection type. SAS-connected storage shelves are connected to the controller on a daisy chain that is called a stack.
FC and ATA disks use the FC connection type with an arbitrated-loop topology (FC-AL). FC-connected storage shelves are connected to the controller on a loop.

Data ONTAP also supports storage arrays and virtual storage (Data ONTAP-v):

Array LUNs use the FC connection type, with either point-to-point or switched topology.

An array LUN is a logical storage device backed by storage arrays and used by Data ONTAP as a disk. These
LUNs are referred to as array LUNs to distinguish them from the LUNs that Data ONTAP serves to clients.
The disk show command displays these as a LUN disk type.

NetApp Cloud ONTAP runs as a virtual machine and uses Virtual Machine Disk (VMDK).

Virtual disks are formatted and managed by VMware ESX.


The disk show command displays VMDK disks as a SAS disk type.

You cannot combine different connection types in the same loop or stack. However, for MetroCluster
configurations, the FC and SAS connection types can be combined in a bridged connection, with FC on the
controller side and SAS on the shelf side. The bridged connection can be used in either a direct-attached
topology or a switched topology.


Physical Layer
Array LUNs

Array LUNs from FlexArray Storage Virtualization Software are treated like disks, except:
 An array LUN is created on the storage array and presented to Data ONTAP
 Ownership is assigned manually

When owned, array LUNs can:
 Function as hot spares
 Be assigned to aggregates


PHYSICAL LAYER: ARRAY LUNS


FlexArray Storage Virtualization Software licenses (formerly available only with the V-Series platform) enable an enterprise storage array to present an array LUN to Data ONTAP. An array LUN uses the FC connection type, with either point-to-point topology or switched topology. Therefore, the way that Data ONTAP treats an array LUN is similar to the way that it treats a typical disk. These are the exceptions:

A LUN must be created on the enterprise storage array by using the vendors best practices.
A logical relationship must be created manually between the array LUN and Data ONTAP, where Data
ONTAP is the owner.
An array LUN can only be part of a RAID 0 aggregate. RAID protection for the array LUN is on the
enterprise storage array, not Data ONTAP.

NOTE: Array LUN reconfiguration, such as resizing the array LUN, must be done from the storage array.
Before such activities can occur, you must release Data ONTAP ownership of the array LUN.


Physical Layer
RAID Groups
Disks are added to RAID groups:
 Disks (or array LUNs) must be the same type: SAS, SATA, or SSD
 Disks should be the same speed and size:
   SAS: 15K or 10K RPM
   SATA: 7.2K RPM
 Provide sufficient hot spares


PHYSICAL LAYER: RAID GROUPS


A RAID group consists of one or more data disks or array LUNs, across which client data is striped and
stored. It includes up to two parity disks, depending on the RAID level of the aggregate that contains the
RAID group.
You change the size of RAID groups on a per-aggregate basis. You cannot change the size of an individual
RAID group.
Follow these guidelines when sizing your RAID groups of hard disk drives (HDDs) or SSDs:

RAID groups are composed of the same disk type.


All RAID groups in an aggregate should have the same number of disks.

If this guideline is impossible to follow, any RAID group with fewer disks should have only one disk less
than the largest RAID group.
NOTE: The SSD RAID group size can be different from the RAID group size for the HDD RAID groups in a
flash pool aggregate. Usually, you should ensure that you have only one SSD RAID group for a flash pool
aggregate, to minimize the number of SSDs that are required for parity.

The recommended range of RAID group size is:

Between 12 and 20 for SATA HDDs


Between 20 and 28 for SAS HDDs and SSDs

The reliability and smaller size (faster rebuild times) of performance HDDs can support a RAID
group size of up to 28, if needed.


NetApp recommends that you do not mix 10K RPM and 15K RPM disks in the same aggregate.


Mixing 10K RPM disks with 15K RPM disks in the same aggregate effectively throttles all disks down to
10K RPM. This throttling results in longer times for corrective actions such as RAID reconstructions.
Recommendations about spares vary by configuration and situation. For information about best practices for
working with spares, see Technical Report 3437: Storage Subsystem Resiliency Guide.
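
The RAID group size is set per aggregate, as noted above. A hedged sketch of checking and changing it (the aggregate name is hypothetical):

```
c1::> storage aggregate show -aggregate aggr1 -fields maxraidsize
c1::> storage aggregate modify -aggregate aggr1 -maxraidsize 20
```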


Physical Layer
RAID Types
RAID groups can be:
 RAID 4: single parity for single-disk failure; minimum two disks
 RAID-DP (default): double parity for double-disk failure; minimum three disks
 RAID 0 (striping), for one or many array LUNs: the storage array provides RAID protection; Data ONTAP does not


PHYSICAL LAYER: RAID TYPES


Understanding how RAID protects your data and data availability can help you administer your storage
systems more effectively.
For native storage, Data ONTAP uses RAID-DP (double-parity) or RAID 4 protection to ensure data integrity
within a RAID group even if one or two of those disks fail. Parity disks provide redundancy for the data that
is stored on the data disks. If a disk fails (or, for RAID-DP, up to two disks), the RAID subsystem can use the
parity disks to reconstruct the data in the disk that failed.

RAID-DP uses two parity disks to ensure data recoverability even if two disks within the RAID group
fail.
RAID 4 uses one parity disk to ensure data recoverability if one disk within the RAID group fails.

For array LUNs, Data ONTAP stripes data across the array LUNs using RAID 0. The storage arrays, not Data
ONTAP, provide the RAID protection for the array LUNs that they make available to Data ONTAP.
RAID 0 does not use any parity disks; it does not provide data recoverability if any disks in the RAID group
fail.
NOTE: NetApp imposes a five-disk minimum for RAID-DP and a four-disk minimum for RAID 4. This minimum is enforced at the aggregate level, not at the RAID group level.
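
Creating a data aggregate with the default RAID-DP type and the five-disk minimum can be sketched as follows (the aggregate and node names are hypothetical):

```
c1::> storage aggregate create -aggregate aggr1_n1 -node cluster1-01 -diskcount 5 -raidtype raid_dp
c1::> storage aggregate show -aggregate aggr1_n1 -fields raidtype,raidstatus
```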


Physical Layer
Aggregates
Composed of disks or array LUNs in RAID groups (rg)

Types:
 Earlier than Data ONTAP 8.3: 32-bit or 64-bit
 Data ONTAP 8.3 and later: 64-bit only

Have a single copy of data that is called a plex:
 Plex contains all RAID groups that belong to the aggregate
 Mirrored aggregates have two plexes

A pool of hot spare disks is assigned to each plex.

(Diagram: aggregate aggrA contains plex0, which holds RAID groups rg0 and rg1; hot spares are assigned to pool0.)


PHYSICAL LAYER: AGGREGATES


To support the differing security, backup, performance, and data sharing needs of your users, you can group
the physical data storage resources on your storage system into one or more aggregates. You can then design
and configure these aggregates to provide the appropriate level of performance and redundancy.
Each aggregate has its own RAID configuration, plex structure, and set of assigned disks or array LUNs. The
aggregate provides storage, based on its configuration, to its associated FlexVol volumes.
Aggregates have the following characteristics:

They can be composed of disks or array LUNs.


They can be mirrored (7-Mode or clustered Data ONTAP MetroCluster) or unmirrored.
They can be in 64-bit (default) or 32-bit format.

When you create a new aggregate, the default is a 64-bit format aggregate.
64-bit aggregates have much larger size limits than 32-bit aggregates (16 TB).
64-bit and 32-bit aggregates can coexist on the same storage system.

NOTE: NetApp recommends using only 64-bit aggregates in clustered Data ONTAP 8.2 and later.

If they are composed of disks, they can be either:

Single-tier aggregates (composed of only HDDs or only SSDs)


Flash pool aggregates, which include HDD RAID groups and an SSD cache.

For information about best practices for working with aggregates, see Technical Report 3437: Storage
Subsystem Resiliency Guide.


Physical Layer
Aggregate Types

Aggregate types:
 Root (aggr0):
   Automatically created during system initialization
   Should contain only the node root volume with log files and configuration information
   Should not contain user data
 Data aggregate:
   Single-tiered (HDD, SSD, or array LUNs)
   Flash pool (multi-tier of HDDs and SSDs)
   NetApp enforces a 5-disk minimum

PHYSICAL LAYER: AGGREGATE TYPES


Each node of an HA pair requires three disks (RAID-DP) to be used in the root aggregate. The root aggregate is created when the system is first initialized. This aggregate contains vol0, which contains the configuration information and log files. The root aggregate should not be used for user data.
After the initial system setup, you can create data aggregates. Data aggregates can be in one of the following configurations:

Composed of array LUNs in RAID 0
A single tier, composed of only HDDs or only SSDs in a RAID-DP or RAID 4 configuration
Flash Pool aggregates, which include HDD RAID groups and an SSD cache


Try This Task

Using cluster2 in your lab kit, try these tasks:
 Use the aggr show command.
   Can you tell who owns the aggregate?
   What is the RAID status?
   How would you see how many disks are in each aggregate?
 Different commands show similar things in different ways:
   Enter aggr show -aggregate aggr0.
   Enter storage disk show -aggregate aggr0.
   How do the outputs differ?
 How would you find a broken disk?
   Enter storage disk show -broken.


TRY THIS TASK


Ken Asks a Question

My disks are provisioned into aggregates. Does


my user data go directly into the aggregate?


KEN ASKS A QUESTION


Logical Layer
FlexVol Volumes

FlexVol volumes:
[Diagram: aggregate containing two FlexVol volumes, one holding files and one holding a LUN]
Depend on their associated


aggregate for their physical
storage
Can increase or decrease in size
as needed

Aggregates can contain


multiple FlexVol volumes
Data containers
NAS: Contain file systems for
user data
SAN: Contain LUNs


LOGICAL LAYER: FLEXVOL VOLUMES


A FlexVol volume is a volume that is loosely coupled to its containing aggregate. A FlexVol volume can
share its containing aggregate with other FlexVol volumes. Thus, a single aggregate can be the shared source
of all the storage that is used by all the FlexVol volumes that are contained by that aggregate.
Because a FlexVol volume is managed separately from the aggregate, you can create small FlexVol volumes
(20 MB or larger), and you can increase or decrease the size of FlexVol volumes in increments as small as 4
KB.
FlexVol volumes have one of two formats: 64-bit or 32-bit. A 64-bit volume has a larger maximum size than
a 32-bit volume. A newly created FlexVol volume has the same format as its associated aggregate. However,
a volume can have a different format from its associated aggregate in certain cases. The maximum size of a
64-bit volume is determined by the size of its associated aggregate, which depends on the storage system
model. A 32-bit volume has a maximum size of 16 TB.
Data ONTAP operating in 7-Mode also supports traditional volumes. A traditional volume is a volume that is
contained by a single, dedicated aggregate. It is tightly coupled with its containing aggregate. No other
volumes can get their storage from this containing aggregate. Traditional volumes and their containing
aggregates are always the 32-bit type. You cannot grow a traditional volume larger than 16 TB.
NOTE: The use of traditional volumes is not recommended.


How FlexVol Volumes Work


FlexVol Volumes use dynamic
allocation

Blocks are allocated from the


aggregate only as needed:

Space can be reserved in the aggregate (full or thick provisioning), but blocks are not allocated until needed
If space is not reserved (thin provisioning), the aggregate can be over-provisioned

FlexVol volumes can be increased,


decreased, or added to an
aggregate as needed

[Diagram: aggregate aggr1 with RAID groups RG1 and RG2 providing storage for FlexVol volumes vol1, vol2, and vol3]

HOW FLEXVOL VOLUMES WORK


A FlexVol volume is assigned to an aggregate and can write to any of the blocks of an aggregate.

Because the volume and the aggregate are managed separately, you can create small FlexVol volumes (20
MB or larger) and then increase or decrease the size of the volumes in increments as small as 4 KB.
You can create FlexVol volumes almost instantaneously.
You can guarantee space reservations (full or thick provisioning), so any client user or machine is
guaranteed the ability to write to the full size of the volume.
Blocks are not allocated until they are needed (in other words, you are guaranteeing space in the
aggregates, but not the actual blocks).
If you do not guarantee space reservations (by using thin-provisioning), space is not guaranteed for the
client user or machine.

A thin-provisioned volume can exceed the size of the assigned aggregate.


For example, you can create a 100-TB thin-provisioned volume in a 50-TB aggregate.

You can increase or decrease the size of a FlexVol volume without disruption in a few seconds, using only
one command.
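The over-provisioning example above is just a commitment ratio: the total promised volume size divided by the aggregate size. A quick sketch of that arithmetic (a hypothetical helper, not ONTAP output):

```python
# Hypothetical helper: how much of an aggregate's capacity has been promised
# to its volumes. Over 100% means the aggregate is over-provisioned.
def committed_pct(aggr_size_tb, volume_sizes_tb):
    """Percentage of aggregate capacity committed to volumes."""
    return 100.0 * sum(volume_sizes_tb) / aggr_size_tb

# A 100-TB thin-provisioned volume in a 50-TB aggregate: 200% committed.
print(committed_pct(50, [100]))  # prints 200.0
```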


Logical Layer
Files and LUNs

File: refers to any data that is


exported or shared to NAS
clients (including text file,
spreadsheet, database)
LUN: represents a logical disk
that is addressed by a SCSI
protocol (FC or iSCSI):
Block level
Data can be accessed only by a
properly mapped SCSI host


LOGICAL LAYER: FILES AND LUNS


When data is stored in a volume for a NAS environment, it is stored as files. Files can be documents, database
files and logs, audio and video, or application data. Data ONTAP handles the file system operations, and
clients access the data.
When data is stored in a SAN environment, it is stored differently than in files. SAN data is stored in a logical
container representing a SCSI disk. This container is called a LUN. The LUN is presented to a host, which
treats it like a standard SCSI disk and writes data to it in 512-byte logical blocks. Therefore, SAN is often
called block-level storage, because data is stored in 512-byte SCSI blocks. Data ONTAP is unaware of the
stored files; it is aware only of the 512-byte blocks that the host is reading or writing to.
NOTE: Because SAN data (block data) and NAS data (file data) are treated differently, files and LUNs
should not be placed in the same FlexVol volume.
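Because a LUN is addressed in 512-byte logical blocks, its size and its block count are related by simple division. A quick sanity-check sketch (a hypothetical helper, not a NetApp tool):

```python
BLOCK = 512  # bytes per SCSI logical block

def lun_block_count(size_bytes: int) -> int:
    """Number of 512-byte logical blocks a LUN of this size presents."""
    if size_bytes % BLOCK:
        raise ValueError("LUN size must be a multiple of the 512-byte block size")
    return size_bytes // BLOCK

# A 1-MiB LUN holds 2048 logical blocks.
print(lun_block_count(1024 * 1024))  # prints 2048
```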


Logical Layer
Qtrees and Directories

Qtrees: volume partitions


created by the storage
administrator for:

Managing quotas, security


styles, and CIFS oplocks
Backups (7-Mode only)

Directories: volume partitions


created by the OS administrator
Most modern OS implementations
have features similar to qtrees
[Diagram: FlexVol volume partitioned with a qtree (/tree1) and a directory (/dir1)]

LOGICAL LAYER: QTREES AND DIRECTORIES


Qtrees enable you to partition your volumes into smaller segments that you can manage individually. You can
set a qtree's size or security style. In 7-Mode, you can also back up and restore qtrees. Qtrees are created by
the storage administrator. The term qtree stands for quota trees.
Examples of qtree usage strategies include:

Quotas: You can limit the size of the data used by a particular project by placing all of that project's files
into a qtree and applying a tree quota to the qtree.
Security style: If you have a project that needs to use NTFS-style security because the members of the
project use Windows files and applications, you can group the data for that project in a qtree and set its
security style to NTFS, without requiring that other projects also use the same security style.
CIFS oplocks settings: If a project uses a database that requires CIFS oplocks to be off, you can set CIFS
oplocks to off for that project's qtree while allowing other projects to retain CIFS oplocks.
Backups (7-Mode only): You can use qtrees to keep your backups more modular, to add flexibility to
backup schedules, or to limit the size of each backup to one tape.

Qtrees are similar to directories in that they partition volumes and can have quotas set. In most cases,
directories can be created on a FlexVol volume that is being shared to clients by the users or administrator.
Use of qtrees, directories, or neither depends on the use case and administrative requirements.
NOTE: NetApp encourages the use of volumes rather than qtrees in clustered Data ONTAP.


Topic for Discussion

How do I decide when I should just use volumes, or


create qtrees or directories?


TOPIC FOR DISCUSSION


Lesson 2

Data ONTAP File System


LESSON 2: DATA ONTAP FILE SYSTEM


Ken Asks a Question

What makes NetApp storage unique? What


happens behind the scenes to make all this
work?


KEN ASKS A QUESTION


Write Anywhere File Layout


The WAFL (Write Anywhere File Layout) file system optimizes writes to disk.
The operation by which the calculated RAID stripes are written to disk is
called a consistency point (CP).
[Diagram: FlexVol volume with an inode pointing to data blocks]

WRITE ANYWHERE FILE LAYOUT


A file system organizes blocks of data on the storage into files. For Data ONTAP, the file system is the
volume.
The WAFL file system stores the metadata (the data about data) in inodes. Inodes are pointers to the blocks
on disk that hold the actual data. Every file has an inode, and each volume has a hidden inode file (collections
of inodes). The term inode refers to index nodes.
The WAFL file system optimizes writing to disk in several ways:

When a write comes into system memory, the write is not put straight down to the disk.
The write is written into the NVRAM, which is battery-backed.
Then the write is sent to the NVRAM in the HA partner.
These writes are collected in system memory and NVRAM from different LUNs or different files.
When enough writes are collected, or every 10 seconds (whichever comes first), the WAFL file system
looks at the disk subsystem and chooses a place with enough free space.
WAFL chooses a segment across all the disks in the RAID group.
WAFL puts a group of writes that are from the same LUN or file next to each other.
The writes need to be located together later, when they are read back.

RAID is calculated in memory, which helps to maintain fast write performance. The operation by which the
calculated RAID stripes are written to disk is called a consistency point (CP). At the moment when the CP
occurs, system memory and disks are consistent.
A CP occurs:

At least once every 10 seconds
When the NVRAM buffer is full:
In a standalone configuration, when NVRAM is one half full
In an HA configuration, when NVRAM is one quarter full
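The CP triggers above can be sketched as a toy journal that flushes either when a fill threshold is crossed or when the 10-second timer expires. This is an illustrative model only, not Data ONTAP's implementation:

```python
# Toy model of consistency-point triggers; Data ONTAP internals are far richer.
class NVLog:
    def __init__(self, capacity_bytes: int, ha: bool = True, timer_s: float = 10.0):
        # In an HA pair, a CP starts when NVRAM is one quarter full;
        # standalone, when it is one half full.
        self.threshold = capacity_bytes * (0.25 if ha else 0.5)
        self.timer_s = timer_s
        self.logged = 0
        self.last_cp = 0.0
        self.cp_count = 0

    def log_write(self, nbytes: int, now: float) -> None:
        self.logged += nbytes
        if self.logged >= self.threshold or now - self.last_cp >= self.timer_s:
            self.consistency_point(now)

    def consistency_point(self, now: float) -> None:
        # Writes move from system memory to disk; the journal is cleared.
        self.cp_count += 1
        self.logged = 0
        self.last_cp = now

log = NVLog(capacity_bytes=1000, ha=True)
log.log_write(100, now=1.0)   # 100 < 250: no CP yet
log.log_write(200, now=2.0)   # 300 >= 250 (one quarter full): CP triggered
log.log_write(50, now=13.0)   # 10-second timer expired: CP triggered
print(log.cp_count)           # prints 2
```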

Data ONTAP Data Access


[Diagram: direct access through a LIF on the node that owns the volume; indirect access through a LIF on another node]

Direct data access: The target volume is owned by the controller that is hosting the logical interface (LIF).

Indirect data access: The target volume is not owned by the controller that is hosting the LIF. Data traverses the cluster interconnect to the node that hosts the target volume.

DATA ONTAP DATA ACCESS


Clustered Data ONTAP and Data ONTAP operating in 7-Mode both support direct data access; however,
only clustered Data ONTAP supports indirect data access.


Direct Access Write Operations


[Diagram: four-node cluster; the host writes through a LIF on the node that owns the volume]

1. Write is sent from the host to the storage system via a network interface card (NIC) or host bus adapter (HBA).
2. Write is simultaneously processed into system memory (2a) and logged in NVRAM (2b) and in the NVRAM mirror of the partner node of the HA pair (2c).
3. Write is acknowledged to the host.
4. Write is sent to storage in a consistency point (CP).

DIRECT ACCESS WRITE OPERATIONS


Write operations for direct access take the following path through the storage system:
1. The write request is sent from the host to the storage system through a NIC or an HBA.
2. The write is simultaneously processed into system memory and logged in NVRAM and in the NVRAM
mirror of the partner node of the HA pair.
3. The write is acknowledged to the host.
4. The write is sent to storage in a consistency point (CP).


Indirect Access Write Operations


[Diagram: four-node cluster; the host writes through a LIF on a node that does not own the volume]

1. Write is sent from the host to the storage system via a NIC or an HBA.
2. Write is processed and redirected (via the cluster interconnect) to the node that owns the volume.
3. Write is simultaneously processed into system memory (3a) and logged in NVRAM (3b) and in the NVRAM mirror of the partner node of the HA pair (3c).
4. Write is acknowledged to the host.
5. Write is sent to storage in a CP.

INDIRECT ACCESS WRITE OPERATIONS


Write operations for indirect data access take the following path through the storage system:
1. The write request is sent from the host to the storage system through a NIC or an HBA.
2. The write is processed and redirected (through the cluster interconnect) to the storage controller that owns
the volume.
3. The write is simultaneously processed into system memory and logged in NVRAM and in the NVRAM
mirror of the partner node of the HA pair.
4. The write is acknowledged to the host.
5. The write is sent to storage in a CP.


Consistency Points
Certain circumstances trigger a CP:

An NVRAM buffer fills up, and it is time to flush the writes to


disk
A ten-second timer runs out

A Snapshot copy is created


[Diagram: inode pointing to Block A, preserved by a new Snapshot copy]

CONSISTENCY POINTS
Certain circumstances trigger a CP:

An NVRAM buffer fills up, and it is time to flush the writes to disk.
A ten-second timer runs out.
A resource is exhausted or hits a predefined scenario that indicates that it is time to flush the writes to
disk.

All other CP types fall under the latter case. For example, a CP is triggered when a Snapshot copy is created or when the system halts.


NVRAM and Write Operations


Write operations are sent to disk:
From system memory in a CP
Not from NVRAM

NVRAM (and NVMEM) characteristics:


Battery-backed memory
Double-buffered journal of write operations
Mirrored between storage controllers in
an HA pair

Writes in memory that have been logged in NVRAM (and


mirrored) are persistent


NVRAM AND WRITE OPERATIONS


NVRAM is battery-backed memory. During a power outage, NVRAM uses its own battery to keep the
memory contents alive.
In a clean shutdown:

When you terminate services gracefully, the storage system commits all write requests to disk and clears
NVRAM.
When you boot the storage system, the boot process checks whether the shutdown was clean.

In a dirty shutdown:

When power is suddenly removed from the storage system, the NVRAM battery preserves the contents of
the memory.
When you boot the storage system, the NVRAM is signaled to replay its content into system memory.

Comparison of NVRAM and system memory:

Main memory, and not NVRAM, is the write cache.


NVRAM is used only for writes and never for reads.
NVRAM is a redo log or short-term transaction log.
NVRAM typically contains less than 10 seconds of data.
NVRAM is an insurance or safety net in case the system crashes after acknowledging a client write
request and before committing data to disk.


Direct Access Read Operations


[Diagram: four-node cluster; the host reads through a LIF on the node that owns the volume]

1. Read request is sent from the host to the storage system via a NIC or an HBA.
2. If the read is in system memory, it is sent to the host; otherwise, the system keeps looking for the data.
3. The block is read from storage (3a), brought into memory (3b), and then sent to the host (3c).

DIRECT ACCESS READ OPERATIONS


Read operations for direct access take the following path through the storage system:
1. The read request is sent from the host to the storage system through a NIC or an HBA.
2. If the read is in system memory, it is sent to the host; otherwise, the system keeps looking for the data.
3. The block is read from storage, brought into memory, and then sent to the host.


Indirect Access Read Operations


[Diagram: four-node cluster; the host reads through a LIF on a node that does not own the volume]

1. Read request is sent from the host to the storage system via a NIC or an HBA.
2. Read request is sent to the node that owns the volume.
3. If the read is in system memory, it is sent to the host; otherwise, the system keeps looking for the data.
4. The block is read from storage (4a), brought into memory (4b), and then sent to the host (4c).

INDIRECT ACCESS READ OPERATIONS


Read operations for indirect data access take the following path through the storage system:
1. The read request is sent from the host to the storage system through a NIC or an HBA.
2. The read request is sent to the storage controller that owns the volume.
3. If the read is in system memory, it is sent to the host; otherwise, the system keeps looking for the data.
4. Flash Cache (if it is present) is checked and, if the blocks are present, they are brought into memory and then sent to the host; otherwise, the system keeps looking for the data.
5. The block is read from storage, brought into memory, and then sent to the host.
Due to the asymmetric logical unit access (ALUA) multipath I/O configuration on the host, SAN access is
always direct if the system is configured properly.
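The lookup order in these steps (system memory, then Flash Cache, then disk) can be modeled as a chain of cache checks. This is an illustrative sketch with hypothetical names, not ONTAP's actual read path:

```python
def serve_read(block_id, memory, flash_cache, disk):
    """Toy model of the read lookup order: memory, then Flash Cache, then disk."""
    if block_id in memory:
        return memory[block_id], "memory"
    if flash_cache is not None and block_id in flash_cache:
        data = flash_cache[block_id]
        memory[block_id] = data        # blocks are brought into memory, then served
        return data, "flash cache"
    data = disk[block_id]              # finally, read from storage
    memory[block_id] = data
    return data, "disk"

memory, cache, disk = {}, {7: "warm"}, {7: "warm", 9: "cold"}
print(serve_read(9, memory, cache, disk))  # served from disk
print(serve_read(9, memory, cache, disk))  # now served from memory
```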


Lesson 3

Virtual Storage Tier


LESSON 3: VIRTUAL STORAGE TIER


Ken Asks a Question

How can I take advantage of NetApp flash


technologies without breaking our IT budget?


KEN ASKS A QUESTION


NetApp Virtual Storage Tier


Flash Cache intelligent caching
Highest performance for file services
Improves latency for random reads
Delivers predictable, high-speed data
access

Flash Pool intelligent caching


Highest performance for OLTP
Best for SATA enablement across
multiple workloads
Caching for random reads and writes
Automates the use of SSD technology

[Diagram: server layer above the Virtual Storage Tier; Flash Cache in the storage controller, Flash Pool in the storage shelves]

NETAPP VIRTUAL STORAGE TIER


At the storage level, there are two ways to implement Virtual Storage Tier (VST):

The Flash Cache feature is controller-based, provides acceleration of random-read operations, and
generally provides the highest performance solution for file services workloads.
The Flash Pool feature is implemented at the disk-shelf level, allowing SSDs and traditional HDDs to be
combined in a single Data ONTAP aggregate. In addition to read caching, Flash Pool technology also
provides write caching and is particularly well-suited for OLTP workloads, which typically have a higher
percentage of write operations.

Both VST technologies improve overall storage performance and efficiency and are simple to deploy and
operate.


Flash Cache 2 Feature


512-GB, 1-TB, or 2-TB Peripheral Component
Interconnect Express (PCIe) module
Plug-and-play device (no required configuration)
7-Mode or clustered Data ONTAP

See TR-3832 for more information.

Supports all protocols


Acts as an extension to the WAFL buffer cache, saving evicted buffers; deduplicated and compressed blocks are maintained in the cache
Shared by all volumes on a node


FLASH CACHE 2 FEATURE


Flash Cache intelligent caching is a solution that combines software and hardware within NetApp storage
controllers to increase system performance without increasing the disk count. The Flash Cache plug-and-play
PCIe module requires no configuration to use the default settings, which are recommended for most
workloads. The original Flash Cache module is available in 256-GB, 512-GB, or 1-TB capacities and
accelerates performance on all supported Data ONTAP client protocols. The Flash Cache controller-based
solution is available to all volumes that are hosted on the controller.
Flash Cache 2 is the second generation of Flash Cache performance accelerators. The new architecture of
Flash Cache 2 accelerators enables them to provide even higher throughput. Flash Cache 2 accelerators
provide 512-GB, 1-TB, and 2-TB densities.
For more information, see TR-3832: Flash Cache Best Practice Guide.


Flash Pool Feature


Provides an aggregate-level read and
write cache
Aggregate contains one or more HDD RAID
groups and an SSD RAID group

[Diagram: HDD capacity tier plus SSD performance tier combined into a Flash Pool aggregate]
Like the Flash Cache feature, uses


4-KB block granularity and real-time
caching
Flash Pool feature is not a replacement for
the Flash Cache feature

Cache remains populated and


available during storage failover (SFO)
events
SSDs cache random data
Previously written data (overwrites)
Read data expired from main memory


FLASH POOL FEATURE


WAFL aggregates are normally built with disks of a single type: SATA, FC, or SAS hard disks, or solid-state disks (SSDs). Flash pools allow the mixing of SSDs and hard disks within the same aggregate. The SSD tier of the aggregate is used as a cache and doesn't contribute to usable space. When an aggregate is converted to hybrid, the usable space in the aggregate does not change. The disks that a hybrid aggregate consists of are treated like any disks in a NetApp storage array, and any class of disk can be added on demand, subject to best practices, such as parity ratios and RAID types.
Flash pools provide:

Improved cost performance with fewer spindles, less rack space, and lower power and cooling
requirements
Highly available storage with a simple administrative model
Improved cost-to-performance and cost-to-capacity ratios from an SSD and SATA combination, compared with pure FC or SAS
Predictable and better degraded mode operation across controller failures and with takeover and giveback
Automatic, dynamic, policy-based placement of data on appropriate tiers of storage (hard disks or SSDs)
at WAFL-block granularity for either data or system metadata


Blocks in the SSD Tier


Flash pool metadata

Read-cached blocks:
Are a cached copy of the
blocks from the hard disk tier
Still exist on the hard disk tier

Write-cached blocks:

Are written directly to the SSD


tier
Are not yet written to the hard
disk tier


BLOCKS IN THE SSD TIER


The following blocks are stored in the SSD tier of the flash pool:
Flash pool metadata: All metadata that is associated with the flash pool is stored in the SSD tier of the
aggregate.
Read-cached blocks: These blocks are cached copies of a block in the hard disk tier of the aggregate and are
stored in the SSD tier. Almost all data from the active file system in a read/write volume is eligible to be readcached into the SSD tier.
Write-cached blocks: These blocks are associated with a FlexVol volume that is written directly to the SSD
tier of the aggregate. Only one copy of the block exists. The copy is in the SSD tier. A hard-disk block is
reserved for write-cached blocks for an eventual move into the hard-disk tier after access to the block ceases.


SSD Provisioning for Flash Pool Cache


Increased storage utilization for SSDs in Flash Pool
aggregates
Ability to share spares between HA partners
Better utilization of SSD performance


SSD PROVISIONING FOR FLASH POOL CACHE


Another new Data ONTAP 8.3 feature is advanced SSD partitioning for Flash Pool cache. SSD partitioning
for Flash Pool cache enables customers to group SSDs into a shared resource, which is allocated to multiple
Flash Pool aggregates. This feature spreads the cost of the parity SSDs over more aggregates, increases SSD
allocation flexibility, and maximizes SSD performance.


SSD Partitioning for Flash Pool Cache


Creation

[Diagram: storage pool STORAGEPOOL1 created from SSD Disk1 through Disk6; each SSD is divided into four slices, forming four allocation units]

Allocation units become a RAID group when they are assigned to a Flash Pool aggregate.

c1::> storage pool create -storage-pool STORAGEPOOL1 -disk-list disk1,disk2,disk3,disk4,disk5,disk6

SSD PARTITIONING FOR FLASH POOL CACHE: CREATION


SSD storage pools provide SSD cache to two or more Flash Pool aggregates. Creating an SSD storage pool
requires between 2 and 28 spare SSD disks.
In this example, SSD Disk1 through Disk6 are available as spares. The storage pool create
command is used to create the storage pool. The unit of allocation for an SSD storage pool is equal to a single
slice from each SSD disk in the storage pool. During the storage pool create command, each SSD
disk is sliced into four equal pieces, making an allocation unit that is equal to one fourth of all of the SSD
disks in the storage pool.
An allocation unit becomes a RAID group when it is assigned to a Flash Pool aggregate.
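Because each SSD is sliced into four equal partitions and an allocation unit takes one slice from every disk in the pool, an allocation unit's raw size is one fourth of the pool's total. A quick sketch of that arithmetic (hypothetical sizes and helper, not an ONTAP tool):

```python
def allocation_unit_gb(ssd_count: int, ssd_size_gb: float, slices: int = 4) -> float:
    """Raw size of one allocation unit: one slice from each SSD in the pool."""
    return ssd_count * ssd_size_gb / slices

# Six 800-GB SSDs: each allocation unit spans six 200-GB slices = 1200 GB raw.
print(allocation_unit_gb(6, 800))  # prints 1200.0
```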


SSD Partitioning for Flash Pool Cache


Ownership

[Diagram: STORAGEPOOL1 allocation units divided between Node1 and Node2; one unit is reassigned]
c1::> storage pool reassign -storage-pool STORAGEPOOL1 -from-node Node1 -to-node Node2 -allocation-units 1

SSD PARTITIONING FOR FLASH POOL CACHE: OWNERSHIP


By default, two allocation units are assigned to each node in the HA pair. To change the ownership of one or
more allocation units of a storage pool from one HA partner to the other, use the storage pool
reassign command. In this example, one allocation unit is reassigned from Node1 to Node2.


SSD Partitioning for Flash Pool Cache


Provisioning

[Diagram: STORAGEPOOL1 provides SSD allocation units to two aggregates; Aggr1 gains SSD RAID group rg2 alongside HDD RAID groups rg0 and rg1, and Aggr2 gains SSD RAID groups rg3 and rg4 alongside HDD RAID groups rg0, rg1, and rg2]
c1::> storage aggregate add-disks -aggregate Aggr1 -allocation-units 1 -raidtype raid_dp -storage-pool STORAGEPOOL1

c1::> storage aggregate add-disks -aggregate Aggr2 -allocation-units 2 -raidtype raid_dp -storage-pool STORAGEPOOL1

SSD PARTITIONING FOR FLASH POOL CACHE: PROVISIONING


Use the storage aggregate add-disks command to add storage pool allocation units to an
aggregate. In this example, the customer adds one allocation unit of RAID-DP from a storage pool
(STORAGEPOOL1) to an aggregate (Aggr1). Note the creation of the SSD RAID group in Aggr1.
Next, the customer adds two allocation units of RAID-DP from the storage pool (STORAGEPOOL1) to an
aggregate (Aggr2). Note the creation of the two SSD RAID groups in Aggr2.


SSD Partitioning for Flash Pool Cache


Mixed RAID Types

[Diagram: three aggregates (Aggr5, Aggr6, Aggr7) drawing allocation units of mixed RAID-DP and RAID 4 types from one storage pool; after the pool is expanded to 18 SSDs, partitions beyond the RAID 4 group-size limit are orphaned]
Maximum RAID group size is 28 for RAID-DP and 14 for RAID 4


Storage pool is expanded to 18 drives (12 new SSDs are added)
To avoid orphaned SSD partitions, use one RAID type across all allocation units in an SSD storage pool

SSD PARTITIONING FOR FLASH POOL CACHE: MIXED RAID TYPES


Customers have several considerations when creating SSD storage pools with mixed RAID groups. In this
example, the customer has one storage pool with four allocation units spread across three aggregates. Three
allocation units are allocated as RAID 4, and one allocation unit is allocated as RAID-DP. The customer
wants to add SSDs to the storage pool.
The maximum RAID group size for RAID-DP is 28 disks, and the maximum RAID group size for RAID 4 is
14 disks. The customer adds 12 new SSDs and expands the storage pool to 18 disks.
Note the orphaned partitions. Remind customers to use one RAID type across all allocation units in an SSD
storage pool to avoid orphaned SSD partitions.


SSD Partitioning for Flash Pool Cache


Rules to Consider

SSD storage pools can contain only SSDs:
HDDs cannot be added to an SSD storage pool.

SSD storage pools can contain between 2 and 28 SSDs.

All SSDs in an SSD storage pool must be owned by the same HA pair.


SSD PARTITIONING FOR FLASH POOL CACHE: RULES TO CONSIDER


When using SSD partitioning for Flash Pool cache with the Data ONTAP 8.3 operating system, the storage
pools must conform to several rules:


SSD storage pools can contain only SSDs. HDDs cannot be added to an SSD storage pool.
SSD storage pools can contain between 2 and 28 SSDs. If an SSD storage pool contains more SSDs than
the maximum RAID 4 group size for SSDs, then that pool cannot be used for a Flash Pool aggregate with
a RAID 4 cache.
All SSDs in an SSD storage pool must be owned by the same HA pair.


SSD Partitioning for Flash Pool Cache


Costs and Limitations

Weigh advantages against costs:
Reduced fault isolation
Reduced performance isolation
Increased management complexity

Technologies that do not support SSD storage pools:
MetroCluster technology
SyncMirror technology
Physical SSDs in the same aggregate


SSD PARTITIONING FOR FLASH POOL CACHE: COSTS AND LIMITATIONS


Customers should weigh the SSD storage pool advantages against the costs:

The loss of one SSD affects all RAID groups that include a partition of that SSD. In this situation, every
Flash Pool aggregate that has cache allocated from the SSD storage pool that contains the affected SSD
has one or more RAID groups in reconstruction.
If the Flash Pool cache is not properly sized, then contention for the cache can exist between the Flash
Pool aggregates that share that cache. This risk can be mitigated through proper cache sizing and quality-of-service (QoS) controls.
Storage pools are another storage object to manage. In addition, when multiple aggregates share a storage
resource, customers must take that into account whenever they operate on the shared resource. For
example, suppose that the customer wants to destroy an aggregate to free up its storage and move that
storage to a different node. The customer cannot move the SSDs in the storage pool until the customer
destroys every aggregate to which storage was allocated from that storage pool, as well as destroying the
storage pool itself.

Data ONTAP 8.3 SSD partitioning for Flash Pool cache support has a few limitations in a clustered
environment:


MetroCluster technology does not support storage pools.


SyncMirror mirrored aggregates can coexist with Flash Pool aggregates that use storage pools, but Flash
Pool aggregates cannot be mirrored.
Flash Pool aggregates cannot use SSD storage pools and physical SSDs in the same aggregate.


NetApp Virtual Storage Tier


Feature Comparison
FLASH CACHE
What is it?
  A controller-based PCIe card
  A plug-and-play device
What does it do?
  Provides per-controller cache
  Caches random reads
Where does it fit?
  With random-read workloads; for example, file services
  With volumes that span multiple aggregates

FLASH POOL
What is it?
  Storage-level, RAID-protected cache (specific to aggregates)
What does it do?
  Caches random reads and overwrites
  Provides cached-data persistence through failovers
Where does it fit?
  With random-overwrite-heavy workloads; for example, OLTP
  With consistent performance (required)


NETAPP VIRTUAL STORAGE TIER: FEATURE COMPARISON


The Flash Cache and Flash Pool features bring flash technology to Data ONTAP. This table compares the
primary uses and benefits of both features.


Maximum Cache per HA Pair


Data ONTAP 8.3 cache sizes increase to four times the Data ONTAP 8.2.x sizes
for most FAS platforms.

FAS Models                   Data ONTAP 8.2.x      Data ONTAP 8.3
6290                         24 tebibytes (TiB)    96 TiB
8080EX, 8060, 6280, 6250     18 TiB                72 TiB
8040, 6240, 6220             12 TiB                48 TiB
8020                         6 TiB                 24 TiB
6210, 3250                   4 TiB                 16 TiB
3270                         2 TiB                 8 TiB
3220                         1.6 TiB               6.4 TiB
3240                         1.2 TiB               3.2 TiB
2240, 2220                   800 gibibytes (GiB)   800 GiB


MAXIMUM CACHE PER HA PAIR


Maximum Flash Pool cache sizes per HA pair have increased with the Data ONTAP 8.3 operating system.
With some exceptions in entry-level to midrange models, cache sizes have increased by a factor of four. With these
increases, customers should be able to get enough cache capacity to keep their working set in flash memory.


Topic for Discussion

Will VST help an older system run faster?


TOPIC FOR DISCUSSION


Lesson 4

Data ONTAP Physical Storage Configuration


LESSON 4: DATA ONTAP PHYSICAL STORAGE CONFIGURATION


Ken Asks a Question

I understand the concepts and benefits of NetApp storage. Now how do I configure it on my cluster?


KEN ASKS A QUESTION


Steps to Add Physical Storage

[Slide graphic: unowned disks enter the system (1. Add Disks), become spare disks (2. Assign Ownership), and are then collected into an aggregate (3. Create Aggregate).]


STEPS TO ADD PHYSICAL STORAGE


To add physical storage resources to create volumes, you must first add disks to the system. Disks come into
the system unowned, but they should automatically be assigned ownership. You can also manually assign
ownership and change ownership as needed.
When the disks have been assigned ownership, they are available as spares. Spares play an important part in
the storage system's availability. You should always have an optimal number of spare disks that can take over
for a failing disk.
To store data on the system, you must create an aggregate. An aggregate is a collection of RAID groups.
When a volume is created on an aggregate, the data is striped across the entire aggregate. For optimal system
performance, all RAID groups within an aggregate should have an equal number of disks of the same type.
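These three steps map onto a short command sequence. The following clustered Data ONTAP sketch shows one possible workflow; the disk, node, and aggregate names are illustrative:

c1::> storage disk show -container-type unassigned
c1::> storage disk assign -disk 1.0.16 -owner c1-01
c1::> storage aggregate create -aggregate n1_aggr1 -node c1-01 -diskcount 20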


Identifying Disks
The shelf ID and bay designate the specific shelf and bay
number where the disk is located.
[Slide graphic: two DS4486 disk shelves; each 3.0-TB disk is identified by its shelf ID and bay number (bay 0 through bay 23).]

Before 8.3:
SAS Disk Name = <node>:<slot><port>.<shelfID>.<bay>
Example: node1:0a.0.22

8.3 and later:
SAS Disk Name = <stack_id>.<shelfID>.<bay>.<position>
Example: 1.0.22
<position> is used for bays with two disks.

2015 NetApp, Inc. All rights reserved. NetApp Proprietary Limited Use Only

IDENTIFYING DISKS
Disks are numbered in all storage systems. Disk numbering enables you to interpret messages that are
displayed on your screen, such as command output or error messages, and to quickly locate the disk that is
associated with a displayed message.
With Data ONTAP 8.2.x and earlier, disks are numbered based on a combination of their node name, slot
number, and port number, and either the loop ID for FC-AL-attached shelves or the shelf ID and bay number
for SAS-attached shelves.
With Data ONTAP 8.3 and later, when a node is part of a functioning cluster, the disk name is independent of
the nodes to which the disk is physically connected and from which the client accesses the disk.
Data ONTAP assigns the stack ID. Stack IDs are unique across the cluster and they start with 1.
The shelf ID is set on the storage shelf when the shelf is added to the stack or loop. If a shelf ID conflict exists
for SAS shelves, then the shelf ID is replaced with the shelf serial number in the drive name.
The bay is the position of the disk within its shelf. Clients can find the bay map in the administration guide for
the storage shelf. The position is used only for multidisk carrier storage shelves. For carriers that house two
disks, the position can be 1 or 2.
During system boot, before the node has joined the cluster or if certain cluster components become
unavailable, drive names revert to the classic format, based on physical connectivity.
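To see how a drive name maps to a physical location, you can query the disk by name. A sketch, with an illustrative disk name:

c1::> storage disk show -disk 1.0.22

The output includes the shelf and bay of the disk, along with its container type and owner.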


Disk Ownership
A disk is not usable until it is assigned to a controller
Disk ownership determines which controller owns a disk:
Ownership is automatically assigned (default)
Ownership can be manually assigned or changed
Software disk ownership is made persistent by writing the ownership information onto the
disk

It is possible for disks to be unowned


Data ONTAP 7-Mode
system> disk show -n

Clustered Data ONTAP
c1::> storage disk show -container-type unassigned
                     Usable           Container
Disk                   Size Shelf Bay Type       Position   Aggregate Owner
-------------------- ------ ----- --- ---------- ---------- --------- -----
9.11.18                   -    11  18 unassigned present    -         -


DISK OWNERSHIP
Disks are not usable in Data ONTAP until ownership is assigned to a controller. Fortunately, Data ONTAP
automatically assigns disks to a controller during the initial setup and periodically checks whether new disks have been
added. When a disk is assigned, the disk ownership information is written to the disk so that the assignment
remains persistent.
Ownership can be modified or removed. A disk's data contents are not destroyed when the disk is marked as
unowned; only the disk's ownership information is erased. Unowned disks that reside on an FC-AL loop where
owned disks exist have ownership information applied automatically to guarantee that all disks on the
same loop have the same owner.
Automatic ownership assignment is enabled by default and is invoked at the following times:

Every five minutes during normal system operation


Ten minutes after the initial system initialization. (This delay gives the person configuring the system
enough time to finish the initial disk assignments so that the results of the automatic ownership
assignment are correct.)
Whenever you enable automatic ownership assignment

The automatic ownership assignment can also be manually initiated by using the disk assign command
with the auto parameter.
If your system is not configured to assign ownership automatically, or if your system contains array LUNs,
you must assign ownership manually.
NOTE: It is a NetApp best practice to unassign only spare disks.
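As a sketch, reassigning a spare disk from one node to its partner combines two commands; the disk and node names are illustrative:

c1::> storage disk removeowner -disk 1.0.22
c1::> storage disk assign -disk 1.0.22 -owner c1-02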


Spare Disks
Spare disks are used to:
Increase aggregate capacity
Replace failed disks

Disks must be zeroed before use:


Disks are automatically zeroed when they are added to an
aggregate
NetApp recommends manually zeroing disks before use
[Slide graphic: a RAID group with a double-parity disk, a parity disk, and data disks, plus spare disks outside the RAID group.]


SPARE DISKS
You can add spare disks to an aggregate to increase its capacity or to replace a failed disk. If the spare is
larger than the other data disks, it becomes the parity disk. However, it does not use the excess capacity unless
another disk of similar size is added. The second largest additional disk has full use of additional capacity.
Zeroing used disks:
After you assign ownership to a disk, you can add that disk to an aggregate on the storage system that owns it,
or leave it as a spare disk on that storage system. If the disk has been used previously in another aggregate,
you should use the disk zero spares command to zero the disk to reduce delays when the disk is used.
Zeroing disks in Data ONTAP 7-Mode:
Use the disk zero spares command.
Zeroing disks in clustered Data ONTAP:
Use the storage disk zerospares command.
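In clustered Data ONTAP, a minimal sequence for reviewing and then zeroing spares might look like the following; treat it as a sketch:

c1::> storage aggregate show-spare-disks
c1::> storage disk zerospares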


Selecting Hot Spares

[Slide graphic: spare selection outcomes, from best to worst: Exact Match; Larger Size (unused capacity); Different Speed (performance impact); Degraded Mode (no replacement).]


SELECTING HOT SPARES


Data ONTAP always tries to choose a hot spare that exactly matches the disk that is failing or has failed. If an
exact match is not available, Data ONTAP uses the best available spare or goes into a degraded mode.
Understanding how Data ONTAP chooses an appropriate spare when there is no matching spare enables you
to optimize the spare allocation for your environment.
Data ONTAP picks a nonmatching hot spare based on the following criteria:

If the available hot spares are not the correct size, Data ONTAP uses one that is the next size up, if there
is one.
The replacement disk is downsized to match the size of the disk it is replacing; the extra capacity is not
available.
If the available hot spares are not the correct speed, Data ONTAP uses one that is a different speed.
Using disks with different speeds within the same aggregate is not optimal. Replacing a disk with a
slower disk can cause performance degradation, and replacing a disk with a faster disk is not cost-effective.

If no spare exists with an equivalent disk type or checksum type, the RAID group that contains the failed disk
goes into degraded mode; Data ONTAP does not combine effective disk types or checksum types within a
RAID group.
NOTE: Degraded mode is intended to be a temporary condition until an appropriate spare disk can be added.
Do not run in degraded mode for more than 24 hours.


Aggregates
Aggregate: Pool of storage
Plex: Used for mirrored aggregates
RAID group: Disk-level protection
Disk: HDD or SSD
-t = RAID type
-T = Disk type
Data ONTAP chooses disks, or you can list disks manually by name.

[Slide graphic: an aggregate contains plex0, which contains RAID groups rg0 and rg1.]

Data ONTAP 7-Mode
system> aggr create aggr1 -T SAS 20

Clustered Data ONTAP
c1::> aggr create -aggregate n1_aggr1 -node c1-01 -diskcount 20 -disktype SAS


AGGREGATES
Aggregates provide storage to volumes. They are composed of RAID groups of disks or array LUNs, but not
both. The Data ONTAP operating system organizes the disks or array LUNs in an aggregate into one or more
RAID groups. RAID groups are then collected into one or two plexes, depending on whether RAID-level
mirroring (SyncMirror technology) is in use.
The Data ONTAP storage architecture contains:

Aggregates: Each aggregate contains a plex or plexes, a RAID configuration, and a set of assigned
physical disks to provide storage to the volumes that the aggregate contains.
Plexes: Each plex is associated with an aggregate and contains RAID groups. Typically, an aggregate has
only one plex. Aggregates that use SyncMirror technology have two plexes (plex0 and plex1); plex1
contains a mirror of the plex0 data.
RAID groups: Each RAID group contains physical disks and is associated with a plex. A RAID group
has either a RAID 4 or RAID-DP configuration.
Disks: Disks play different roles at different times, depending on the state of the disk.

Disk states:


Data
Parity
Double-parity
Spare
Broken
Unowned
Uninitialized (not zeroed)
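After an aggregate is created, you can verify how Data ONTAP organized the disks into RAID groups. A sketch, with an illustrative aggregate name:

c1::> storage aggregate show-status -aggregate n1_aggr1

The output lists each RAID group in the aggregate with the position (data, parity, or dparity) of every disk.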


RAID Group Sizing


HDD and SSD RAID Groups

[Slide graphic: an HDD or SSD RAID group contains data disks, a parity disk, and a dParity disk; a Flash Pool aggregate contains both an HDD RAID group and an SSD RAID group.]

RAID group sizing:
All RAID groups in an aggregate should be the same size.
HDD: 12 to 20 disks per RAID group
SSD: 20 to 28 disks per RAID group

Flash Pools:
SSD RAID group size and type can be different from HDD RAID group size.


RAID GROUP SIZING


Configuring an optimum RAID group size for an aggregate that is made up of disks requires a trade-off of
factors. You must decide which factor (speed of recovery, assurance against data loss, or maximizing data
storage space) is most important for the aggregate that you are configuring.
You change the size of RAID groups on a per-aggregate basis. You cannot change the size of an individual
RAID group.
Follow these guidelines when sizing your RAID groups that are composed of HDDs or SSDs:

All RAID groups in an aggregate should have the same number of disks. If this guideline is impossible to
follow, any RAID group with fewer disks should have only one disk less than the largest RAID group.
The recommended range of HDD RAID group size is between 12 and 20 disks.
The reliability of performance disks can support a RAID group size of up to 28 disks, if needed.
The recommended range of SSD RAID group size is between 20 and 28. The reason for a higher SSD
recommendation is to minimize the number of SSDs required for parity.
If you can satisfy the first guideline with multiple RAID group sizes, you should choose the larger size.

Guidelines for SSD RAID groups in Flash Pool aggregates: The SSD RAID group size can be different
from the RAID group size for the HDD RAID groups in a Flash Pool aggregate. Usually, you should ensure
that you have only one SSD RAID group for a Flash Pool aggregate, to minimize the number of SSDs
required for parity.
For information about best practices for working with aggregates, see Technical Report 3437: Storage
Subsystem Resiliency Guide.
To see the physical and usable capacity for a specific disk, see the Hardware Universe at hwu.netapp.com.
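Because RAID group size is a per-aggregate setting, it is adjusted with the aggregate modify command. A sketch, with illustrative names and values:

c1::> storage aggregate modify -aggregate n1_aggr1 -maxraidsize 16

Disks added afterward fill RAID groups up to 16 disks each.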


Ken Asks a Question

The node root aggregates are important to the health of my cluster, and I like that they are protected by RAID-DP. But what about small entry-level platforms with limited disks? Do I really need to use three disks for aggr0?


KEN ASKS A QUESTION

4-57

Clustered Data ONTAP Administration: Storage Management

2015 NetApp, Inc. This material is intended only for training. Reproduction is not authorized.

NetApp University - Do Not Distribute

Advanced Disk Partitioning


Root-Data HDD Partitioning

New feature for Data ONTAP 8.3:
Significantly lower root aggregate (aggr0) disk consumption
HA configurations for 12-disk entry-level platforms and all-flash FAS
Lower spare-capacity overhead for active-passive configurations
RAID-DP technology recommended practice for root aggregates on entry-level platforms

[Slide graphic: FAS2520, FAS2552, and FAS2554 entry-level platforms.]


ADVANCED DISK PARTITIONING: ROOT-DATA HDD PARTITIONING


Entry-level Data ONTAP storage systems can benefit significantly from root-data HDD partitioning, a
capability of the new Advanced Disk Partitioning feature that is introduced in the Data ONTAP 8.3
operating system.
Benefits include significantly lower root-aggregate disk consumption. HA configurations can be enabled for
12-disk entry-level platforms. Lower spare-capacity overhead for active-passive configurations is another
benefit. And the RAID-DP recommended practice can be enabled for root aggregates on entry-level
platforms.
Supported configurations include the FAS2200 and FAS2500 series of storage controllers, as well as all-flash
FAS clusters.


FAS2520 HDDs Without Root-Data HDD Partitioning

[Slide graphic: the 12 internal FAS2520 disks and their roles: one NODE1 ROOT, two NODE1 ROOT PARITY, one NODE2 ROOT, two NODE2 ROOT PARITY, two USER AGGR, two USER AGGR PARITY, and two SPARE disks.]

Data Disks / Total Disks = Efficiency
2 Data HDDs and 12 HDDs Total = 17%


FAS2520 HDDS WITHOUT ROOT-DATA HDD PARTITIONING


To illustrate the limitation that the new root-data HDD partitioning feature overcomes, consider an entry-level
FAS2520 storage system in an active-passive configuration.
The FAS2520 storage system has 12 disks that are internal to the controller chassis. To calculate the
efficiency of the disks, divide the number of data disks by the total number of disks (12, in this case).
Of these 12 disks, one is for the node 1 root aggregate and two are for the node 1 root aggregate parity. One
disk is for the node 2 root aggregate and two are for the node 2 root aggregate parity. Two disks are for spares
and two are for data parity. Two disks remain for data.
Based on the efficiency calculation, this FAS2520 configuration has an efficiency of approximately 17%.


FAS2520 HDDs With Root-Data HDD Partitioning

[Slide graphic: each of the 12 FAS2520 disks is divided into a root partition slice and a data partition slice. Root partition slices: three NODE1 ROOT, two N1 ROOT PARITY, three NODE2 ROOT, two N2 ROOT PARITY, and two SPARE. Data partition slices: nine DATA, two PARITY, and one SPARE.]


FAS2520 HDDS WITH ROOT-DATA PARTITIONING


The root-data HDD partitioning functionality logically divides each disk into two slices. One slice forms the
root partition, and the other slice forms the data partition.
Each slice is treated like a physical disk, enabling RAID groups to be formed from disk slices. Each root
aggregate requires five slices from the root partition: three slices for the root aggregate data portion and two
for RAID-DP parity. Two slices from the root partition remain for use as spares.
The data aggregate can now be created by using 11 data partition slices: 9 slices for the data aggregate data
portion and 2 for RAID-DP parity. One slice from the data partition remains for use as a spare.
Keep in mind that this example uses an active-passive configuration. An active-active configuration would
require two data aggregate portions from the 12 data partition slices.


FAS2520 Disk-Sharing Efficiency


FAS2520

9 Data HDDs and 12 HDDs Total = 72%

[Slide graphic: the data partitions across the 12 disks provide nine DATA slices, two PARITY slices, and one SPARE slice.]


FAS2520 DISK SHARING EFFICIENCY


To calculate the efficiency of the disks by using the new disk-sharing method, divide the number of disks that
are used for data (9) by the total number of disks (12). The new efficiency percentage is approximately
72%, a 55% increase over the nonshared disk configuration.


Root-Data HDD Partitioning


Limitations

Storage devices that cannot use root-data HDD partitioning:
Array LUNs
Virtual disks that are created for use with Data ONTAP-v
Disk types that are unavailable as internal disks: ATA, FC-AL, and mSATA

Technologies that do not support root-data HDD partitioning:
MetroCluster technology
RAID 4
Midrange and enterprise-level FAS platforms


ROOT-DATA HDD PARTITIONING: LIMITATIONS


Support for root-data HDD partitioning in Data ONTAP 8.3 has a few limitations in a cluster environment.
Array LUNs, virtual disks that are created for use with Data ONTAP-v, and disk types that are unavailable as
internal disks (ATA, FC-AL, and mSATA) cannot use root-data HDD partitioning. Also, some
technologies, including MetroCluster technology and RAID 4, do not support root-data HDD partitioning.


Aggregate Formats
64-bit aggregate:
Maximum size: dependent on controller model and Data ONTAP version
Is the default

32-bit aggregate:
Maximum size: 16 TB
Not supported in Data ONTAP 8.3 and later

IMPORTANT: You must expand all aggregates to 64-bit before upgrading to Data ONTAP 8.3.


AGGREGATE FORMATS
Aggregates have two formats, with different maximum sizes:

32-bit: Maximum size is 16 TB.


64-bit: Maximum size depends on the FAS model and the Data ONTAP version.

For more information about 64-bit aggregates, see Technical Report 3786, A Thorough Introduction to 64-Bit
Aggregates.


In-Place 32-Bit-to-64-Bit Aggregate Expansion


You can expand 32-bit aggregates to 64-bit aggregates while an aggregate is online and serving data.
Aspects of this expansion:
The process works in the background but affects performance.
Automatically: if you add a disk that would exceed the 32-bit aggregate's 16-TB maximum size
Manually: if you use the expansion commands (Data ONTAP 8.2.1 and later)

IMPORTANT: You must expand all aggregates to 64-bit before upgrading to Data ONTAP 8.3.

Data ONTAP 7-Mode
system> aggr 64bit-upgrade start aggr32

Clustered Data ONTAP
c1::> storage aggregate 64bit-upgrade start -aggregate n1_aggr32


IN-PLACE 32-BIT-TO-64-BIT AGGREGATE EXPANSION
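A typical expansion workflow pairs the start command with a status check. The following sketch assumes that check mode (which estimates the free space the expansion needs without performing it) and the status subcommand are available on your release; the aggregate name is illustrative:

c1::> storage aggregate 64bit-upgrade start -aggregate n1_aggr32 -mode check
c1::> storage aggregate 64bit-upgrade start -aggregate n1_aggr32
c1::> storage aggregate 64bit-upgrade status -aggregate n1_aggr32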


Topic for Discussion

Why do disks need to be zeroed?


TOPIC FOR DISCUSSION


Try This Task


Using cluster2 on your lab kit, enter the storage disk show
command.

Are any disks unassigned?


On your lab kit, could you unassign a disk or reassign it to
another node?
If so, what commands would you use?


TRY THIS TASK


Lesson 5

Data ONTAP FlexVol Configuration


LESSON 5: DATA ONTAP FLEXVOL CONFIGURATION


Data SVM
With FlexVol Volumes

[Slide graphic: an SVM with FlexVol volumes. Clients access data through data LIFs by using NFS, CIFS, iSCSI, and FC; the SVM administrator manages the SVM through a management LIF.]

SVMs serve data to clients.
SVMs securely isolate the shared data storage and network.
Separate administration for volumes and logical network interfaces
Can be used for NAS, SAN, or both


DATA SVM: WITH FLEXVOL VOLUMES


SVMs, formerly known as Vservers, contain data volumes and one or more LIFs through which they serve
data to the clients. Starting with clustered Data ONTAP 8.1.1, SVMs can either contain one or more FlexVol
volumes, or a single Infinite Volume.
SVMs securely isolate the shared virtualized data storage and network, and each SVM appears as a single
dedicated server to the clients. Each SVM has a separate administrator authentication domain and can be
managed independently by its SVM administrator.
In a cluster, SVMs facilitate data access. A cluster must have at least one SVM to serve data. SVMs use the
storage and network resources of the cluster. However, the volumes and LIFs are exclusive to the SVM.
Multiple SVMs can coexist in a single cluster without being bound to any node in a cluster. However, they
are bound to the physical cluster on which they exist.
A cluster can have one or more SVMs with FlexVol volumes and SVMs with Infinite Volume.
Each SVM with FlexVol volumes in a NAS environment presents a single directory hierarchical view and has
a unique namespace. The namespace enables NAS clients to access data without specifying the physical
location of the data. The namespace also enables the cluster and SVM administrators to manage distributed
data storage as a single directory with multiple levels of hierarchy.
The volumes within each NAS SVM are related to each other through junctions and are mounted on junction
paths. These junctions present the file system in each volume. The root volume of the SVM is a FlexVol
volume that resides at the top level of the namespace hierarchy; additional volumes are mounted to the SVM
root volume to extend the namespace. As volumes are created for the SVM, the root volume of the SVM
contains junction paths.
SVMs with FlexVol volumes can contain files and LUNs. They provide file-level data access by using NFS
and CIFS protocols for the NAS clients, and block-level data access by using iSCSI and FC (FCoE included)
for SAN hosts.
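As a quick illustration of how junction paths build the namespace, you can list the volumes of an SVM and their junction paths from the clustershell (svm1 is a hypothetical SVM name):

c1::> volume show -vserver svm1 -fields junction-path

A data volume typically appears with a junction path such as /vol1, mounted under the SVM root volume at the top of the namespace.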

FlexVol Volumes
Overview

Volumes are the unit of data management in SVMs:
They can be moved, copied, mirrored, backed up, or copied by using Snapshot copies within the SVM
They can be mirrored (not moved) to other SVMs
A volume can exist only within a single SVM

Volumes are joined together through junctions to create the namespace of an SVM

FLEXVOL VOLUMES: OVERVIEW


Clustered Data ONTAP FlexVol volumes are functionally equivalent to FlexVol volumes in the Data ONTAP
7-Mode and the Data ONTAP 7G operating systems. However, clustered Data ONTAP systems use FlexVol
volumes differently than Data ONTAP 7-Mode and Data ONTAP 7G systems do. Because Data ONTAP
clusters are inherently flexible (particularly because of the volume move capability), volumes are deployed as
freely as UNIX directories and Windows folders are deployed to separate logical groups of data.
Volumes can be created and deleted, mounted and unmounted, moved around, and backed up as needed. To
take advantage of this flexibility, cluster deployments typically use many more volumes than traditional Data
ONTAP 7G deployments use. In an HA pair, aggregate and volume limits apply to each node individually, so
the overall limit for the pair is effectively doubled.
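Because the volume move capability is central to this flexibility, a volume can be relocated to another aggregate nondisruptively. A minimal sketch, using hypothetical SVM, volume, and aggregate names:

c1::> volume move start -vserver svm1 -volume vol1 -destination-aggregate aggr2
c1::> volume move show

The second command tracks the progress of the move.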


FlexVol Volumes
Types

[Figure: An aggregate containing FlexVol volumes that hold files and LUNs]

FlexVol volume types:
System (or node root)
Typically named vol0
Should only contain configuration and logs
Should not contain user data
SVM root volume
Top level of the namespace
Should not contain user data
Data
NAS: Contain file systems for user data
SAN: Contain LUNs

FLEXVOL VOLUMES: TYPES


FlexVol volumes are used:

As node root volumes to hold state data for the node and for the cluster
As the root of an SVM namespace
To store user data within an SVM

Management of FlexVol Volumes

Data ONTAP 7-Mode

Create
system> vol create vol1 aggr1 <size>[k|m|g|t]

Resize
system> vol size vol1 [[+|-]<size>[k|m|g|t]]

Offline and online
system> vol offline vol1
system> vol online vol1

Destroy (volume must be offline)
system> vol destroy vol1

Clustered Data ONTAP

Create
c1::> volume create -vserver svm1 -volume vol1 -aggregate aggr1 -size <size>[kb|mb|gb|tb]

Resize
c1::> volume modify -vserver svm1 -volume vol1 -size [+|-]10gb

Offline and online
c1::> volume offline -vserver svm1 -volume vol1
c1::> volume online -vserver svm1 -volume vol1

Destroy (volume must be offline)
c1::> volume delete -vserver svm1 -volume vol1

MANAGEMENT OF FLEXVOL VOLUMES


Try This Task


Using cluster2 on your lab kit:

Enter the vol show command.
Now enter the vol show -instance command.
Lastly, enter the vol show -fields comment command.
What was different about the output?
Can you think of other reasons to use -fields?
How would you get a list of all the fields that are available for a command?
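One way to explore the last question: appending -fields ? to a show command prompts the CLI to list the valid field names for that command, for example:

c1::> volume show -fields ?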


TRY THIS TASK


What Ken Has Learned in This Module

Now I understand the benefits of RAID and how to use aggregates and volumes to provision storage. I also know that I must make sure that all my aggregates are expanded to 64-bit format before I upgrade my cluster to Data ONTAP 8.3. I look forward to learning how to implement NAS and SAN so that my clients can access their data.


WHAT KEN HAS LEARNED IN THIS MODULE


References

Data ONTAP 8.2 System Administration Guides
Clustered Data ONTAP Physical Storage Management Guide
Clustered Data ONTAP Logical Storage Management Guide
TR-3832: Flash Cache Best Practices Guide
TR-4070: NetApp Flash Pool Design and Implementation Guide


REFERENCES


Exercise

Module 4: Storage
Management

Time Estimate: 30 minutes


EXERCISE
Refer to your exercise guide.


Module 5

Network Management


MODULE 5: NETWORK MANAGEMENT


In this module you learn how clustered Data ONTAP uses physical, logical, and virtual network resources to
enable nondisruptive data availability. You learn about IPspaces, broadcast domains, and subnets. You also
learn to effectively configure and manage network resources.


Ken Asks a Question

Now that I have my cluster ready to store data, how do I enable my storage customers to access it?


KEN ASKS A QUESTION


Networks

[Figure: A four-node cluster (node1 through node4) with three distinct networks: a cluster interconnect* with one or two cluster ports per node to each cluster switch and four or eight Inter-Switch Links (ISLs) between the cluster switches, a management network with two management ports per node and two ISLs, and a redundant data network (Ethernet, FC, or converged)]

NetApp recommends redundant data and management networks.

* Required for clustered Data ONTAP only

NETWORKS
Networking is where Data ONTAP operating in 7-Mode and clustered Data ONTAP differ most. Because a
clustered Data ONTAP system is essentially a cluster of high-availability (HA) pairs, you need a cluster
network or cluster interconnect for all the nodes to communicate with each other. You should always keep
this principle in mind: If a node cannot see the cluster interconnect, it is not part of the cluster. Therefore, the
cluster interconnect requires adequate bandwidth and resiliency.
This graphic shows a four-node cluster and three distinct networks. 7-Mode and clustered Data ONTAP
require both data and management connectivity, which can coexist on the same data network.
In multi-node configurations, clustered Data ONTAP also requires a cluster interconnect for cluster traffic. In
a two-node configuration, the cluster interconnect can be as simple as cabling the two nodes or using switches
if expansion is desired. In clusters of more than two nodes, switches are required. Single-node clusters do not
require a cluster interconnect if the environment does not require high availability and nondisruptive
operations (NDO).
Two cluster connections to each node are typically required for redundancy and improved cluster traffic flow.
For the larger clusters that use higher-end platforms (FAS8040, FAS8060, and FAS8080) that are running
clustered Data ONTAP 8.2.1, four cluster interconnects are the default. Optionally, a FAS8080 can be
configured to use 6 cluster interconnect ports with expansion 10-gigabit Ethernet network interface cards (10GbE NICs).
For proper configuration of the NetApp CN1601 and CN1610 switches, refer to the CN1601 and CN1610
Switch Setup and Configuration Guide.
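On a clustered Data ONTAP 8.3 system, one way to verify which physical ports carry cluster interconnect traffic is to filter the port list by IPspace (a sketch; Cluster is the system default IPspace name for the interconnect):

c1::> network port show -ipspace Cluster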


Lesson 1
Network Ports


LESSON 1: NETWORK PORTS


Data ONTAP Networking

[Figure: The three network layers and examples of each:
Logical*: LIFs, such as svm1-mgmt and svm1-data1
Virtual: VLANs, such as a0a-50 and a0a-80, and ifgrps, such as a0a
Physical: ports, such as e2a and e3a]

* Clustered Data ONTAP only

DATA ONTAP NETWORKING


Data ONTAP systems can be analyzed as having three network layers:

Physical: network ports
Virtual: interface groups (ifgrps) and virtual LANs (VLANs)
Logical interfaces (LIFs): for clustered Data ONTAP only

Port Types
Physical port
Ethernet
FC
Unified Target Adapter (UTA)
UTA is a 10-GbE port
UTA2 is configured as either:
10-GbE
or 16-Gbps FC

Virtual port
Interface group (ifgrp)
Virtual LAN (VLAN)


PORT TYPES
Port types can be either physical or virtual.
Physical:

Ethernet port: 1-Gb or 10-Gb Ethernet (10-GbE) ports that can be used in NFS, CIFS, and iSCSI
environments

FC port: 4-Gbps, 8-Gbps, or 16-Gbps port that can be used as a target in FC SAN environment. It can be
configured as an initiator for disk shelves or tape drives.
Unified Target Adapter (UTA) port: 10-GbE port that can be used in NFS, CIFS, iSCSI, and FCoE environments
Unified Target Adapter 2 (UTA2) port: Configured as either a 10-GbE Ethernet or 16-Gbps FC port

10-Gb ports can be used in NFS, CIFS, iSCSI, and FCoE environments
16-Gbps FC ports can be used as targets in FC SAN environments

NOTE: UTA2 FC ports are not supported with DS14 disk shelves or FC tape drives.
Virtual:


Interface group: An interface group implements link aggregation by providing a mechanism to group
together multiple network interfaces (links) into one logical interface (aggregate). After an interface group
is created, it is indistinguishable from a physical network interface.
VLAN: Traffic from multiple VLANs can traverse a link that interconnects two switches by using VLAN
tagging. A VLAN tag is a unique identifier that indicates the VLAN to which a frame belongs. A VLAN
tag is included in the header of every frame that is sent by an end-station on a VLAN. On receiving a
tagged frame, a switch identifies the VLAN by inspecting the tag, then forwards the frame to the
destination in the identified VLAN.


Physical Ports Example
FAS8040 and FAS8060

Management port: e0M or Service Processor (SP)
10-GbE cluster* or data ports: e0a*, e0b, e0c*, and e0d
GbE data ports: e0i, e0j, e0k, and e0l
UTA2 data ports: e0e/0e, e0f/0f, e0g/0g, and e0h/0h
Also on the controller: SAS ports, the ACP port, and the console port (also SP)

10-GbE and UTA2 ports use either twinax copper (Cu) or optical (LC) cables, depending on the personality and type of small form-factor pluggable (SFP) module.

* Required for clustered Data ONTAP only

PHYSICAL PORTS EXAMPLE: FAS8040 AND FAS8060


These are the network ports on the FAS8040 and FAS8060 systems.
Clustered Data ONTAP configuration:

4 x 10-GbE ports for cluster interconnects or data:

Supported: two cluster interconnects (e0a and e0c) and two data ports (e0b and e0d)
Recommended: four cluster interconnects (switched clusters only)

4 x Unified Target Adapter 2 (UTA2) ports can be configured as either 10-GbE or 16-Gbps FC for data:

Can only be used for data (not cluster interconnects)
e0e/0e and e0f/0f, and e0g/0g and e0h/0h, are port pairs; port pairs must be set the same
Choose from FC SFP+, 10-GbE SFP+, or twinax Ethernet
The command that sets the port mode is ucadmin (7-Mode and clustered Data ONTAP)

4 x GbE ports for data

1 x management port (default for the node-management network):

e0M runs at GbE
The SP runs at 10/100
To toggle from the serial console into the SP, use Ctrl-G; to toggle back, use Ctrl-D

1 x private management port that is used as an alternate control path (ACP) for SAS shelves
1 x console port (can be configured for SP)

7-Mode configuration:
Same as Data ONTAP, except that 4 x 10-GbE ports are used for data.
Open slots can be used for Flash Cache, FC-VI, UTA2, 10-GbE, 4-Gbps, 8-Gbps, 16-Gbps FC or GbE cards.


Physical Port Identification

Ethernet ports are named e<location><letter>
e0a is the first port on the controller's motherboard
e3a is a port on a card in slot 3

FC ports are named <location><letter>
0a is the first port on the controller's motherboard
3a is a port on a card in slot 3

UTA ports have both an Ethernet name and an FC name
e<location><letter>/<location><letter>
e0e/0e is the first port on the controller's motherboard
e3a/3a is a port on a card in slot 3
Show commands list UTA ports by the FC label name only (even in Ethernet mode)


PHYSICAL PORT IDENTIFICATION


Port names consist of two or three characters that describe the port's type and location. You must be aware of
certain conventions of naming the ports on the network interfaces.
Ethernet ports: The first character describes the port's type and is always e to represent Ethernet. The second
character is a numeral that identifies the slot in which the port adapter is located; the numeral 0 (zero)
indicates that the port is on the node's motherboard. The third character indicates the port's position on a
multiport adapter. For example, the port name e0b indicates the second Ethernet port on the motherboard, and
the port name e3a indicates the first Ethernet port on an adapter in slot 3.
FC ports: FC port names consist of two characters (dropping the e) but otherwise follow the same naming convention as Ethernet ports. For example, the port name 0b indicates the second FC port on the motherboard, and the port name 3a indicates the first FC port on an adapter in slot 3.
UTA ports: UTA ports are unique in that they are physically a single port but can pass either Ethernet traffic or FC traffic. Therefore, they are labeled with both the Ethernet name and the FC name. For example, the port name e0b/0b indicates the second UTA port on the motherboard, and the port name e3a/3a indicates the first UTA port on an adapter in slot 3.
NOTE: UTA adapter ports are listed by the FC label name only when you are using the ucadmin command,
even when the personality is configured as 10-GbE.


Try This Task


Using cluster1 on your lab kit, use the net port show
command.
What ports do you see?
In what slots are the adapters attached?


TRY THIS TASK


Modifying Network Port Attributes
Set UTA2 Port Personality

Data ONTAP 7-Mode

NOTE: Offline the port first.
system> ucadmin modify -m fc|cna 0e
system> reboot

-m sets the personality.

Clustered Data ONTAP

NOTE: Remove any LIFs and offline the port first.
c1::> system node hardware unified-connect modify -node c1-01 -adapter 0e -mode fc|cna
c1::> system node reboot -node c1-01

Insert the proper optical module before changing modes.

MODIFYING NETWORK PORT ATTRIBUTES: SET UTA2 PORT PERSONALITY


The ucadmin and system node hardware unified-connect commands change the adapter
configuration.
7-Mode:
The mode value refers to the personality mode of the adapter, which can be FC or converged network adapter
(CNA). The type value refers to the FC-4 type of the adapter; it can be initiator or target. (CNA mode
supports only FC target for the FCoE function.) Any changes to the adapter mode or type require a reboot for
the changes to take effect.
The adapter must also be offline before any changes can be made.

When the adapter type is initiator, use the storage disable adapter command to bring the
adapter offline.
When the adapter type is target, use the fcp config command to bring the adapter offline.

Clustered Data ONTAP:


In clustered Data ONTAP, UTA ports are handled in a similar way and require a reboot to take effect. The
adapter must also be offline before any changes can be made.

When the adapter type is initiator, use the run local storage disable adapter command to
bring the adapter offline.
When the adapter type is target, use the network fcp adapter modify command to bring the
adapter offline.

For more information about configuring FC ports, refer to the Data ONTAP SAN Administration Guide for
your release, or attend the NetApp University SAN Implementation course.
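Before and after changing the personality, the current and pending modes of a UTA2 adapter can be verified, for example (the node name here is hypothetical):

c1::> system node hardware unified-connect show -node c1-01

In 7-Mode, the equivalent verification command is ucadmin show.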


Interface Groups

Interface groups enable link aggregation of one or more Ethernet interfaces:
Single-mode (active-standby)
Multimode
Static (active-active)
Dynamic (LACP*)

[Figure: 1-GbE and 10-GbE interfaces aggregated into interface groups]

*Link Aggregation Control Protocol

INTERFACE GROUPS
The following network terms are described as they are implemented within Data ONTAP:

Interface groups aggregate network interfaces into a trunk.


You can implement link aggregation on your storage system to group multiple network interfaces (links)
into one logical interface (aggregate).
After an interface group is created, the interface group is indistinguishable from a physical network
interface.

Be aware that different vendors refer to interface groups by the following terms:

Virtual aggregations
Link aggregations
Trunks
EtherChannel

Interface groups can be implemented in two modes: single-mode and multimode.

In single-mode link aggregation, one interface is active, and the other interface is inactive (on standby).
In multimode, all links in the link aggregation are active.

A dynamic multimode interface group can detect loss of link status and data flow.
Multimode requires a compatible switch to implement configuration.

Data ONTAP link aggregation complies with the IEEE 802.3ad standard for static aggregation and supports dynamic multimode aggregation by using the Link Aggregation Control Protocol (LACP).


Creating Interface Groups

Data ONTAP 7-Mode
system> ifgrp create single ifgrp1 e0a e0b
system> ifgrp create {multi|lacp} ifgrp2 -b {rr|mac|ip|port} a0c a0d

-b sets the load-balancing method.
ifgrp create is not persistent; use System Manager or edit the /etc/rc file.

Clustered Data ONTAP
c1::> network port ifgrp create -node c1-01 -ifgrp a0a -distr-func {mac|ip|sequential|port} -mode {multimode|multimode_lacp|singlemode}

The interface group name must be in a<number><letter> format.

CREATING INTERFACE GROUPS


You can create interface groups for higher throughput, fault tolerance, and elimination of single points of
failure.
7-Mode:
You can use the ifgrp create command to create interface groups. Interfaces must be down before
you add them to an interface group.
Single mode: You name the group and specify the ports to be included. The active path is randomly selected.
You can override the random selection by using the ifgrp favor and ifgrp nofavor commands.
Multimode: You name the group and specify the ports, and you specify the load-balancing method by using
the -b option:

rr specifies round-robin load balancing


mac specifies MAC address load balancing
ip indicates IP address load balancing (default)
port indicates port-based load balancing

Interface groups that are created by using the ifgrp create and ifgrp favor commands are not
persistent across reboots unless they are added to the /etc/rc file or unless you use System Manager.


Clustered Data ONTAP:


In clustered Data ONTAP, interface groups are handled in a similar way, with the exception of the following:

Interface groups must be named by using the syntax a<number><letter>.


A port that is already a member of an interface group cannot be added to another interface group.
Multimode load balancing methods:

mac: Network traffic is distributed on the basis of MAC addresses.


ip: Network traffic is distributed on the basis of IP addresses.
sequential: Network traffic is distributed as it is received.
port: Network traffic is distributed on the basis of the transport layer (TCP/UDP) ports.

For more information about load balancing, please refer to TR-4182: Ethernet Storage Best Practices for
Clustered Data ONTAP Configurations.
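Putting the pieces together, a dynamic multimode interface group might be created, populated, and verified as follows (the node, group, and port names are hypothetical):

c1::> network port ifgrp create -node c1-01 -ifgrp a0a -distr-func port -mode multimode_lacp
c1::> network port ifgrp add-port -node c1-01 -ifgrp a0a -port e2a
c1::> network port ifgrp add-port -node c1-01 -ifgrp a0a -port e3a
c1::> network port ifgrp show -node c1-01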


Interface Group Considerations

Because of the limited capabilities of single mode, NetApp recommends that you do not use this type of interface group in clustered Data ONTAP.
To take advantage of all the performance and resiliency functionality of interface groups, dynamic multimode (LACP) is recommended:

Requires an LACP-enabled switch
All the interfaces in the group are active, share the same MAC address, and load-balance outbound traffic (not inbound)
A single host does not achieve more bandwidth than any one constituent connection (two 10-GbE links do not become one 20-GbE link)
May not have any advantages for iSCSI hosts

INTERFACE GROUP CONSIDERATIONS


Interface groups can be configured to add an additional layer of redundancy and functionality to a clustered
Data ONTAP environment. Interface groups can also be used in conjunction with a failover group, which
would help protect against layer 2 and layer 3 Ethernet failures.
A single-mode interface group is an active-passive configuration (one port sits idle, waiting for the active
port to fail), and it cannot aggregate bandwidth. Because of its limited capabilities, NetApp recommends,
as a best practice, that you not use this type. To achieve the same level of redundancy, you can use
failover groups or one of the two multimode methods.
A static multimode interface group might be used if you want to use all the ports in the group to
simultaneously service connections. It does differ from the type of aggregation that happens in a dynamic
multimode interface group, in that no negotiation or auto detection happens within the group in regard to the
ports. A port sends data when the node detects a link, regardless of the state of the connecting port on the
switch side.
A dynamic multimode interface group might be used to aggregate bandwidth of more than one port. LACP
monitors the ports on an ongoing basis to determine the aggregation capability of the various ports and
continuously provides the maximum level of aggregation capability achievable between a given pair of
devices. However, all the interfaces in the group are active, share the same MAC address, and handle load
balancing outbound traffic. But this fact does not mean that a single host will achieve larger bandwidth,
exceeding the capabilities of any of the constituent connections. For example, adding four 10-GbE ports to a
dynamic multimode interface group will not result in one 40-GbE link for one host. This situation is due to
the way that the aggregation of the ports in the interface group is handled by both the switch and the node. A
recommended best practice is to use this type of interface group so that you can take advantage of all the
performance and resiliency functionality that the interface group algorithm has to offer.


There are two methods to achieve path redundancy when using iSCSI in clustered Data ONTAP: using
interface groups, or configuring hosts to use multipath I/O over multiple distinct physical links. Because
multipath I/O is required in any case, interface groups might add very little value.
For more information, refer to TR-4182: Ethernet Storage Best Practices for Clustered Data ONTAP
Configurations.


VLANs

[Figure: Two data switches connected to a router and a management switch, carrying VLAN70 (clients), VLAN170 (management), VLAN171 (tenant A), and VLAN172 (tenant B)]

VLANS
A port or interface group can be subdivided into multiple VLANs. Each VLAN has a unique tag that is
communicated in the header of every packet. The switch must be configured to support VLANs and the tags
that are in use. In Data ONTAP, a VLAN's ID is configured into the name. For example, VLAN "e0a-70" is a
VLAN with tag 70 configured on physical port e0a. VLANs that share a base port can belong to the same or
different IPspaces, and it follows that the base port can be in a different IPspace than its VLANs.
Different configurations of LIFs, failover groups, VLANs, and interface groups are possible in a clustered
Data ONTAP environment. The best practice recommendation is to use a configuration that takes advantage
of the cluster-wide failover capabilities of failover groups, the port aggregation functionality of interface
groups, and the security aspects of VLANs.
For more examples, refer to the Clustered Data ONTAP 8.2 Network Management Guide and TR-4182:
Ethernet Storage Best Practices for Clustered Data ONTAP Configurations.
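For example, a VLAN with tag 70 might be layered on an interface group and then verified (the node and interface group names are hypothetical):

c1::> network port vlan create -node c1-01 -vlan-name a0a-70
c1::> network port vlan show -node c1-01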


Creating VLANs
Data ONTAP 7-Mode
system> vlan create e4 10 20 30

Creates e4-10, e4-20, and e4-30

system> ifconfig -a
e0a: flags=0x80e08866<BROADCAST,RUNNING,MULTICAST,VLAN> mtu 1500
ether 00:0c:29:56:54:7e (auto-1000t-fd-up) flowcontrol full

Clustered Data ONTAP


c1::> network port vlan create -node c1-01
-vlan-name e1c-80


CREATING VLANS
You can create a VLAN for ease of administration, confinement of broadcast domains, reduced network
traffic, and enforcement of security policies.
7-Mode:
In 7-Mode, you can use the vlan create command to include an interface in one or more VLANs, as
specified by the VLAN identifiers, enable VLAN tagging, and optionally enable GVRP (with the -g
option). VLANs that you create by using the vlan create command are not persistent across reboots
unless you add them to the /etc/rc file or you use System Manager.
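For example, to make a VLAN and its IP configuration persistent in 7-Mode, lines such as the following might be added to /etc/rc (the interface name and address are hypothetical):

vlan create e4 70
ifconfig e4-70 192.168.70.10 netmask 255.255.255.0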
Clustered Data ONTAP:
In clustered Data ONTAP, VLANs are handled in a similar way.


Ports, Interface Groups, and VLANs

[Figure: Example hierarchies: VLANs on top of an interface group that is built from physical ports, VLANs directly on a physical port, and an interface group built from physical ports]

PORTS, INTERFACE GROUPS, AND VLANS


Ports are either the physical network adapters that are cabled to a switch, or virtualizations of those
adapters. A virtualization subdivides (VLANs) or groups (interface groups) the physical ports. Ports can
be arranged in a hierarchy, with VLANs subdividing the interface groups.
In 7-Mode, interface groups can be created from ports or other interface groups; the latter are called second-level interface groups. You can create a second-level interface group by using two multimode interface groups. Second-level interface groups enable you to provide a standby multimode interface group in case the
groups. Second-level interface groups enable you to provide a standby multimode interface group in case the
primary multimode interface group fails.
VLANs cannot be created from other VLANs, and interface groups cannot be created from VLANs.
In clustered Data ONTAP, interface groups cannot be created from other interface groups, VLANs cannot be
created from other VLANs, and interface groups cannot be created from VLANs.
NOTE: You cannot create VLANs or interface groups on cluster ports on clustered Data ONTAP.
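The hierarchy rules above can be summarized in a small sketch. This is a hedged illustration only; the `can_build` helper and its names are hypothetical, not an ONTAP API:

```python
# Sketch of the clustered Data ONTAP port-hierarchy rules described above.
# Illustrative model, not actual ONTAP code.

def can_build(child_kind: str, parent_kind: str) -> bool:
    """Return True if a virtual port of child_kind may be built on parent_kind."""
    allowed = {
        "ifgrp": {"port"},           # interface groups only from physical ports
        "vlan": {"port", "ifgrp"},   # VLANs from ports or interface groups
    }
    return parent_kind in allowed.get(child_kind, set())
```

For example, a VLAN on an interface group is permitted, but an interface group on a VLAN is not.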


Lesson 2
IPspaces


LESSON 2: IPSPACES
The MultiStore feature for Data ONTAP software was created to enable service providers to partition the
resources of a single storage system so that it appears as multiple virtual storage systems on a network.
The IPspace feature was created for MultiStore to enable a single storage system to be accessed by clients
from more than one disconnected network, even if those clients are using the same IP address.
Clustered Data ONTAP has had a feature similar to MultiStore virtual storage systems, and IPspaces were
introduced to clustered Data ONTAP in version 8.3.
Conceptually, IPspaces in 7-Mode and clustered Data ONTAP are similar, but the configuration is very
different. In this lesson, only clustered Data ONTAP 8.3 examples are discussed. For information on how to
configure IPspaces for MultiStore environments in 7-Mode, refer to the MultiStore Management Guide for
7-Mode for the version of Data ONTAP that you are configuring.

5-20

Clustered Data ONTAP Administration: Network Management

2015 NetApp, Inc. This material is intended only for training. Reproduction is not authorized.

NetApp University - Do Not Distribute

Ken Asks a Question

My company serves data to multiple networks that are isolated from each
other. Some even use overlapping IP address ranges. How can clustered
Data ONTAP handle that?

KEN ASKS A QUESTION


IPspaces
Overview

[Diagram: a storage service provider (SSP) point of presence hosting three IPspaces: Default (192.168.0.0, SVM 1 and SVM 2, default routing table), Company A (10.0.0.0, SVM_A-1 and SVM_A-2, Company A routing table), and Company B (10.0.0.0, SVM_B-1 and SVM_B-2, Company B routing table). Company A and Company B both use the address 10.1.2.5; the Default IPspace uses 192.168.0.5.]

IPSPACES: OVERVIEW
The IPspace feature enables a storage system or cluster to be accessed by clients from more than one
disconnected network, even if those clients are using the same IP address.
An IPspace defines a distinct IP address space in which virtual storage systems can participate. IP addresses
that are defined for an IPspace are applicable only within that IPspace. A distinct routing table is maintained
for each IPspace. No cross-IPspace traffic routing happens. Each IPspace has a unique loopback interface that
is assigned to it. The loopback traffic on each IPspace is completely isolated from the loopback traffic on
other IPspaces.
Example:
A storage service provider (SSP) needs to connect customers of companies A and B to a storage system on
the SSP's premises. The SSP creates storage virtual machines (SVMs) on the cluster, one per customer, and
provides a dedicated network path from one SVM to A's network and one from the other SVM to B's
network.
This deployment should work if both companies use nonprivate IP address ranges. However, because the
companies use the same private addresses, the SVMs on the cluster at the SSP location have conflicting IP
addresses.
To overcome this problem, two IPspaces are defined on the cluster, one per company. Because a distinct
routing table is maintained for each IPspace, and no cross-IPspace traffic is routed, the data for each company
is securely routed to its respective network, even if the two SVMs are configured in the 10.0.0.0 address
space.
Additionally, the IP addresses that are referred to by the various configuration files (such as the /etc/hosts file,
the /etc/hosts.equiv file, and the /etc/rc file) are relative to that IPspace. Therefore, IPspaces enable the SSP
to use the same IP address for the configuration and authentication data for both SVMs, without conflict.
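The isolation described above can be modeled in a few lines. This is a hedged sketch only; the dictionaries and the `resolve` helper are hypothetical and not an ONTAP interface. Each IPspace keeps its own lookup table, so the same private address can exist in both IPspaces without conflict:

```python
# Minimal model of per-IPspace isolation: no cross-IPspace lookups,
# so overlapping client IP addresses do not collide.

ipspaces = {
    "IPspace_A": {"10.1.2.5": "SVM_A-1"},
    "IPspace_B": {"10.1.2.5": "SVM_B-1"},
}

def resolve(ipspace: str, ip: str) -> str:
    """Look up an IP address only within its own IPspace's table."""
    return ipspaces[ipspace][ip]
```

The same address, 10.1.2.5, resolves to a different SVM depending on which IPspace receives the traffic.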


IPspaces
Defaults

Two IPspaces and two system SVMs are created automatically when the
cluster is initialized:
The Default IPspace:
Is a container for ports, subnets, and SVMs that serve data for configurations that
do not need separate IPspaces for clients
Also contains the cluster management and node management ports
For the Default IPspace, a system SVM named after the cluster is created
The Cluster IPspace:
Contains all cluster ports from all nodes in the cluster
Provides connectivity to the internal private cluster network
Is not created for a single-node cluster
For the Cluster IPspace, a system SVM named Cluster is created

IPSPACES: DEFAULTS
Two special IPspaces are created by default when the cluster is first created, and a special SVM is created for
each IPspace.

Default IPspace
This IPspace is a container for ports, subnets, and SVMs that serve data. If your configuration does not
need separate IPspaces for clients, all SVMs can be created in this IPspace. This IPspace also contains the
cluster management and node management ports.

Cluster IPspace
This IPspace contains all cluster ports from all nodes in the cluster. It is created automatically when the
cluster is created. It provides connectivity to the internal private cluster network. As additional nodes join
the cluster, cluster ports from those nodes are added to the Cluster IPspace.

A system SVM exists for each IPspace. When you create an IPspace, a default system SVM of the same name
is created:

The system SVM for the Cluster IPspace carries cluster traffic between nodes of a cluster on the
internal private cluster network. It is managed by the cluster administrator, and it has the name Cluster.
The system SVM for the Default IPspace carries management traffic for the cluster and nodes,
including the intercluster traffic between clusters. It is managed by the cluster administrator, and it uses
the same name as the cluster.
The system SVM for a custom IPspace that you create carries management traffic for that SVM. It is
managed by the cluster administrator, and it uses the same name as the IPspace.

One or more SVMs for clients can exist in an IPspace. Each client SVM has its own data volumes and
configurations, and it is administered independently of other SVMs.


IPspaces
Managing IPspaces

You can create IPspaces when you need your SVMs to have their own
secure storage, administration, and routing:
c1::> network ipspace create -ipspace IPspace_A
c1::> network ipspace create -ipspace IPspace_B

NOTE: A system SVM with the same name as the IPspace name is automatically
created.

IPspaces that have been created can be renamed and deleted:

c1::> network ipspace rename -ipspace old_ipspace_name -new-name new_ipspace_name
c1::> network ipspace delete -ipspace ipspace_name

NOTE: The system-defined Default and Cluster IPspaces cannot be renamed
or deleted.

IPSPACES: MANAGING IPSPACES


IPspaces are distinct IP address spaces in which SVMs reside. You create IPspaces when you need your
SVMs to have their own secure storage, administration, and routing. All IPspace names must be unique
within a cluster.

If required, you can change the name of an existing IPspace (except for the two system-created IPspaces)
by using the network ipspace rename command.
If you no longer need an IPspace, you can delete it by using the network ipspace delete
command.

NOTE: There must be no broadcast domains, network interfaces, or SVMs associated with the IPspace you
want to delete. The system-defined Default and Cluster IPspaces cannot be deleted.
You can display the list of IPspaces that exist in a cluster, and you can view the SVMs, broadcast domains,
and ports that are assigned to each IPspace.
After you create an IPspace and before you create its SVMs, you must create a broadcast domain that defines
the ports that will be part of the IPspace.
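The deletion rules above can be captured in a short sketch. This is an illustrative helper under stated assumptions (the function name and argument shapes are hypothetical, not ONTAP code): an IPspace may be deleted only if it is not system-defined and nothing is left inside it:

```python
# Sketch of the IPspace deletion preconditions described in the note above.

SYSTEM_IPSPACES = {"Default", "Cluster"}

def can_delete_ipspace(name, broadcast_domains, interfaces, svms):
    """True only for a custom IPspace with no remaining associations."""
    if name in SYSTEM_IPSPACES:
        return False  # system-defined IPspaces can never be deleted
    return not (broadcast_domains or interfaces or svms)
```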


IPspaces
Verifying IPspaces

To view IPspaces:

c1::> network ipspace show
IPspace    Vserver List                 Broadcast Domains
---------- ---------------------------- -----------------
Cluster    Cluster                      Cluster
Default    svm1, svm2, c1               Default
IPspace_A  SVM_A-1, SVM_A-2, IPspace_A  bcast_A
IPspace_B  SVM_B-1, SVM_B-2, IPspace_B  bcast_B

The IPspace_A and IPspace_B SVMs are system SVMs. Data SVMs can be
assigned after you create broadcast domains.

To view the nodes, ports, and broadcast domain assignments within an IPspace:

c1::> network ipspace show -ipspace IPspace_A
IPspace name: IPspace_A
Ports: c1-01:e0f, c1-02:e0f
Broadcast Domains: bcast_A
Vservers: SVM_A-1, SVM_A-2, IPspace_A

IPSPACES: VERIFYING IPSPACES


Notice the system SVMs that were created in the example: IPspace_A and IPspace_B.
NOTE: This output shows data SVMs and a broadcast domain that will be created separately.


Ken Asks a Question

Okay, I understand what I can do with IPspaces. Now, do I add network ports
directly to an IPspace?

KEN ASKS A QUESTION


Broadcast Domains
Overview
Broadcast domains enable you to group network ports that belong to the same
layer 2 network. The ports in the group can then be used by an SVM for data or
management traffic.

[Diagram: the Default, Company A, and Company B broadcast domains, each grouping ports across the four-node cluster]

BROADCAST DOMAINS: OVERVIEW


Broadcast domains are commonly used when a system administrator wants to reserve specific ports for use by
a certain client or group of clients. A broadcast domain should include ports from many nodes in the cluster to
provide high availability for the connections to SVMs.
The graphic shows the ports assigned to three broadcast domains in a four-node cluster:

The Default broadcast domain, which was created automatically during cluster initialization, has been
configured to contain a port from each node in the cluster.
The Company A broadcast domain has been created manually, and it contains one port from each of the
nodes in the first HA pair.
The Company B broadcast domain has been created manually, and it contains one port from each of the
nodes in the second HA pair.
The Cluster broadcast domain is also created automatically during cluster initialization, but it is not
shown on this slide.

The two broadcast domains were created by the system administrator specifically to support the customer
IPspaces.


Broadcast Domains
Defaults

A broadcast domain resides in an IPspace.
During cluster initialization, the system creates two default broadcast domains:
The Default broadcast domain contains ports that are in the Default IPspace:
Its ports are used primarily to serve data
Cluster management and node management ports are also in this broadcast domain
The Cluster broadcast domain contains ports that are in the Cluster IPspace:
Its ports are used for cluster communication and include all cluster ports
from all nodes in the cluster

BROADCAST DOMAINS: DEFAULTS


Broadcast domains enable you to group network ports that belong to the same layer 2 network. The ports in
the group can then be used by an SVM for data or management traffic.
A broadcast domain resides in an IPspace. During cluster initialization, the system creates two default
broadcast domains:

The Default broadcast domain contains ports that are in the Default IPspace. These ports are used
primarily to serve data. Cluster management and node management ports are also in this broadcast
domain.
The Cluster broadcast domain contains ports that are in the Cluster IPspace. These ports are used for
cluster communication and include all cluster ports from all nodes in the cluster.

If you have created unique IPspaces to separate client traffic, you need to create a broadcast domain in each of
those IPspaces. If your cluster does not require separate IPspaces, then all broadcast domains, and all ports,
reside in the system-created Default IPspace.


Broadcast Domains
Managing Broadcast Domains

You create broadcast domains to group ports for an IPspace:

c1::> network port broadcast-domain create -broadcast-domain bcast_A -mtu 1500
-ipspace IPspace_A -ports c1-01:e0f,c1-02:e0f

Ports (physical, interface groups, or VLANs) that do not belong to another
broadcast domain can be added and removed:
Ports that you plan to add to a broadcast domain must not belong to another
broadcast domain.
Ports that already belong to an interface group cannot be added individually to a
broadcast domain.

c1::> network port broadcast-domain add-ports -broadcast-domain bcast_A
-ports ports_list
c1::> network port broadcast-domain remove-ports -broadcast-domain bcast_A
-ports ports_list

BROADCAST DOMAINS: MANAGING BROADCAST DOMAINS


You create a broadcast domain to group network ports in a cluster that belong to the same layer 2 network.
The ports can then be used by SVMs.
NOTE: The ports that you plan to add to the broadcast domain must not belong to another broadcast domain.

All broadcast domain names must be unique within an IPspace.


The ports added to a broadcast domain can be network ports, VLANs, or interface groups.
Add ports by using the network port broadcast-domain add-ports command.
If the ports that you want to use belong to another broadcast domain, but are unused, use the network
port broadcast-domain remove-ports command to remove the ports from the existing
broadcast domain.
The maximum transmission units (MTUs) of the ports that are added to a broadcast domain are updated to
the MTU value that is set in the broadcast domain.
The MTU value must match all the devices that are connected to that layer 2 network.
If you do not specify an IPspace name, the broadcast domain is created in the Default IPspace.

Broadcast domains that are created can be renamed or deleted; however, the system-created Cluster and
Default broadcast domains cannot be renamed or deleted.
To make system configuration easier, a failover group of the same name is created automatically, and it
contains the same ports. All failover groups related to the broadcast domain are removed when you delete the
broadcast domain.
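The add-ports behavior described above (a port may join only if it is not already owned, and its MTU is updated to the domain's MTU) can be sketched as follows. This is a hypothetical model for illustration, not an ONTAP API:

```python
# Sketch of broadcast-domain membership rules: single ownership per port,
# and MTU inherited from the broadcast domain on join.

class BroadcastDomain:
    def __init__(self, name, mtu):
        self.name = name
        self.mtu = mtu
        self.ports = []

def add_port(domain, port, owner_of):
    """owner_of maps port name -> owning broadcast-domain name."""
    if owner_of.get(port["name"]):
        raise ValueError(port["name"] + " already belongs to a broadcast domain")
    port["mtu"] = domain.mtu          # MTU updated to the domain's value
    domain.ports.append(port["name"])
    owner_of[port["name"]] = domain.name
```

A port with MTU 9000 added to a 1500-MTU domain ends up with MTU 1500, matching the domain.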


Broadcast Domains
Verifying Broadcast Domains

To view broadcast domains:

c1::> network port broadcast-domain show
IPspace   Broadcast
Name      Domain Name  MTU   Port List
--------- ------------ ----- ---------
Cluster   Cluster      9000  c1-01:e0a
                             c1-01:e0c
                             c1-02:e0a
                             c1-02:e0c
                             c1-03:e0a
                             c1-03:e0c
                             c1-04:e0a
                             c1-04:e0c
Default   Default      1500  c1-01:e0e
                             c1-02:e0e
                             c1-03:e0e
                             c1-04:e0e
IPspace_A bcast_A      1500  c1-01:e0f
                             c1-02:e0f
IPspace_B bcast_B      1500  c1-03:e0g
                             c1-04:e0g

BROADCAST DOMAINS: VERIFYING BROADCAST DOMAINS


Answer This Question

An IPspace is associated with exactly one _________?

1. broadcast domain
2. SVM
3. port
4. root volume


ANSWER THIS QUESTION


Subnets
Overview

Subnets allow the allocation of specific blocks, or pools, of IP
addresses for easier LIF creation.
A subnet is created within a broadcast domain and contains a pool
of IP addresses that belong to the same layer 3 subnet.

[Diagram: the Default, Company A, and Company B broadcast domains, each containing a subnet]

SUBNETS: OVERVIEW
Subnets enable you to allocate specific blocks, or pools, of IP addresses for your Data ONTAP network
configuration. This allocation enables you to create LIFs more easily when you use the network
interface create command, by specifying a subnet name instead of having to specify IP address and
network mask values.
A subnet is created within a broadcast domain, and it contains a pool of IP addresses that belong to the same
layer 3 subnet. IP addresses in a subnet are allocated to ports in the broadcast domain when LIFs are created.
When LIFs are removed, the IP addresses are returned to the subnet pool and are available for future LIFs.
It is recommended that you use subnets because they make the management of IP addresses much easier, and
they make the creation of LIFs a simpler process. Additionally, if you specify a gateway when defining a
subnet, a default route to that gateway is added automatically to the SVM when a LIF is created using that
subnet.
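The pool behavior described above can be sketched in a few lines: addresses are taken from the range when a LIF is created and returned when the LIF is removed. This is an illustrative model only (the `SubnetPool` class is hypothetical); real allocation is done by ONTAP:

```python
# Minimal model of a subnet's IP address pool.
import ipaddress

class SubnetPool:
    def __init__(self, first, last):
        lo = int(ipaddress.ip_address(first))
        hi = int(ipaddress.ip_address(last))
        # All addresses in the range start out available.
        self.free = [str(ipaddress.ip_address(i)) for i in range(lo, hi + 1)]

    def allocate(self):
        return self.free.pop(0)   # taken when a LIF is created

    def release(self, ip):
        self.free.append(ip)      # returned when a LIF is removed
```

With the range 10.1.2.91 through 10.1.2.94 from the examples in this lesson, the pool starts with four available addresses.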


Subnets
Managing Subnets

To create a subnet:
c1::> network subnet create -subnet-name subnet_A -broadcast-domain bcast_A
-ipspace IPspace_A -subnet 10.1.2.0/24 -gateway 10.1.2.1
-ip-ranges 10.1.2.91-10.1.2.94 -force-update-lif-associations true

NOTE: The broadcast domain and IPspace where you plan to add the subnet
must already exist, and subnet names must be unique within an IPspace.

The subnet create command fails if any Service Processor or network
interfaces are currently using the IP addresses in the specified range.
Setting -force-update-lif-associations to true associates any
manually addressed interfaces with the current subnet and enables the
command to succeed.

SUBNETS: MANAGING SUBNETS


You create a subnet to allocate, or reserve, specific blocks of IPv4 or IPv6 addresses for Data ONTAP
network configuration. This allocation enables you to create interfaces more easily by specifying a subnet
name instead of having to specify the IP address and network mask values for each new interface.
NOTE: The broadcast domain and IPspace where you plan to add the subnet must already exist.
A few things to note when creating subnets:

All subnet names must be unique within an IPspace.


When adding IP address ranges to a subnet, you must ensure that there are no overlapping IP addresses in
the network so that different subnets, or hosts, do not attempt to use the same IP address.
If you specify a gateway when defining a subnet, a default route to that gateway is added automatically to
the SVM when a LIF is created using that subnet. If you do not use subnets, or if you do not specify a
gateway when defining a subnet, you must use the route create command to add a route to the SVM
manually.
The value true can be set for the -force-update-lif-associations option.
This command fails if any Service Processor or network interfaces are currently using the IP addresses in
the specified range. Setting this value to true associates any manually addressed interfaces with the
current subnet and enables the command to succeed.


Subnets
Subnets and Gateways

When creating subnets:


If a gateway is specified, a default route is added automatically to the
SVM when a LIF is created using that subnet
If you do not use subnets, or if you do not specify a gateway when
defining a subnet, you must use the route create command to add a
route to the SVM manually

If you add or change the gateway IP address:


The modified gateway is applied to new SVMs when a LIF is created in
them that uses the subnet
A default route to the gateway is created for the SVM if the route does
not already exist
NOTE: You might need to manually add a new route to the SVM when you
change the gateway IP address.

SUBNETS: SUBNETS AND GATEWAYS


Subnets
Verifying Subnets

To view subnets:

c1::> network subnet show
Subnet                      Broadcast               Avail/
Name        Subnet          Domain    Gateway       Total  Ranges
----------- --------------- --------- ------------- ------ ---------------------------
subnet_def  192.168.0.0/24  Default   192.168.0.1   10/50  192.168.0.101-192.168.0.150
subnet_A    10.1.2.0/24     bcast_A   10.1.2.1      4/4    10.1.2.91-10.1.2.94
subnet_B    10.1.2.0/24     bcast_B   10.1.2.1      4/8    10.1.2.91-10.1.2.98

Subnets A and B have the same subnet and gateway but different broadcast
domains. Notice how subnets A and B use overlapping IP ranges
(.91 through .94).

SUBNETS: VERIFYING SUBNETS


Topics for Discussion


When do I need to create IPspaces, broadcast domains, or subnets?
Does having system SVMs mean that there is a fourth SVM type (in addition
to admin, node, and data)?

TOPICS FOR DISCUSSION


Lesson 3

Network Interfaces


LESSON 3: NETWORK INTERFACES


Ken Asks a Question

How does a client continue to access data from a network perspective when
there is a problem with a physical port, or even a problem with an entire
node?

KEN ASKS A QUESTION


Network Interfaces
Overview

Data ONTAP 7-Mode:
Network interfaces can be configured by using physical ports, interface
groups, or VLANs
Network interfaces are owned by a virtual storage system
Use System Manager for configuration persistence.

Clustered Data ONTAP:
LIFs can be configured on physical ports, interface groups, or VLANs
LIFs are owned by data SVMs
Ports, interface groups, and VLANs can be used across multiple LIFs
and SVMs

NETWORK INTERFACES: OVERVIEW


A LIF is associated with a physical port, an interface group, or a VLAN. LIFs are owned by virtual storage
systems (SVMs). Multiple LIFs belonging to multiple SVMs can reside on a single port.


Review of Data ONTAP Networking

[Diagram: three network layers: logical (LIFs svm1-mgmt and svm1-data1), virtual (VLANs a0a-50 and a0a-80 on interface group a0a), and physical (ports e2a and e3a, which form interface group a0a)]

REVIEW OF DATA ONTAP NETWORKING


Data ONTAP systems can be analyzed as having three network layers:
Physical: network ports
Virtual: interface groups (ifgrps) and VLANs
Logical: LIFs (clustered Data ONTAP only)

Logical Interfaces
Overview

An IP address or World Wide Port Name (WWPN) is associated with a LIF:
If subnets are configured (recommended), IP addresses are automatically
assigned when a LIF is created
If subnets are not configured, IP addresses must be manually assigned when
a LIF is created
WWPNs are automatically assigned when an FC LIF is created

One node-management LIF exists per node.
One cluster-management LIF exists per cluster.
Two* cluster LIFs exist per node.
Multiple data LIFs are allowed per port (client-facing: NFS, CIFS, iSCSI,
and FC access).
For intercluster peering, intercluster LIFs must be created on each node.

* The default is platform dependent.

LOGICAL INTERFACES: OVERVIEW


Before clustered Data ONTAP 8.3, ports were assigned roles. Each LIF had an associated role and had to
be assigned to a network port with a compatible role. In clustered Data ONTAP 8.3, broadcast domains
eliminate the need for port roles.
Data LIFs can have a many-to-one relationship with network ports: Many data IP addresses can be assigned
to a single network port. If the port becomes overburdened, NAS data LIFs can be transparently migrated to
different ports or different nodes. Clients know the data LIF IP address but they do not know which node or
port is hosting the LIF. If a NAS data LIF is migrated, the client might unknowingly be contacting a different
node. The NFS mountpoint or CIFS share is unchanged.
A node can have a maximum of 128 NAS data LIFs (with failover enabled). For SAN LIF
maximums, refer to the SAN Configuration Guide for your Data ONTAP release.
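The many-to-one relationship and transparent migration described above can be sketched briefly. This is a hypothetical model for illustration (the dictionary layout and `migrate` helper are not ONTAP code): several LIFs share one port, and a LIF keeps its IP address when it moves to another port, so clients are unaffected:

```python
# Sketch of NAS data LIF migration: the IP stays with the LIF,
# only the hosting port changes.

lifs = {
    "svm1_lif1": {"ip": "192.168.0.101", "port": "c1-01:e0e"},
    "svm2_lif1": {"ip": "192.168.0.102", "port": "c1-01:e0e"},  # same port
}

def migrate(lif_name, new_port):
    """Move a LIF to a different port; its IP address is unchanged."""
    lifs[lif_name]["port"] = new_port
```

After migrating svm1_lif1 to a port on another node, clients still reach it at 192.168.0.101.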


Logical Interfaces
Managing LIFs

To create a LIF in Data ONTAP 8.3:

c1::> network interface create -vserver SVM_A-1 -lif SVM_A-1_lif2
-role data -data-protocol nfs -home-node c1-02 -home-port e0f
-subnet-name subnet_A

Specifying the subnet name automatically assigns an IP address
to the LIF within the range specified (recommended).
You must specify the IP address when:
Subnets are not configured
Using a version of clustered Data ONTAP earlier than 8.3

When creating LIFs:
All IP-based LIFs (except cluster LIFs) are compatible with physical
ports, interface groups, and VLANs
Cluster LIFs can only be on physical ports

LOGICAL INTERFACES: MANAGING LIFS


A LIF is an IP address or WWPN associated with a physical port. If there is any component failure, most LIF
types (excluding SAN) can fail over to or be migrated to a different physical port, thereby continuing to
communicate with the cluster.

The underlying physical network port must be configured to the administrative up status.
If you are planning to use a subnet name to allocate the IP address and network mask value for a LIF, the
subnet must already exist.
You can create IPv4 and IPv6 LIFs on the same network port.
You cannot assign both NAS and SAN protocols to the same LIF.

The supported protocols are CIFS, NFS, FlexCache, iSCSI, and FC.

The data-protocol parameter must be specified when the LIF is created, and it cannot be modified
later.

If you specify none as the value for the data-protocol parameter, the LIF does not support any data
protocol.

The home-node parameter is the node to which the LIF returns when the network interface
revert command is run on the LIF.
The home-port parameter is the port or interface group to which the LIF returns when the network
interface revert command is run on the LIF.
All the name mapping and host-name resolution services, such as DNS, Network Information Service
(NIS), Lightweight Directory Access Protocol (LDAP), and Active Directory, must be reachable from
the data, cluster-management, and node-management LIFs of the cluster.
A cluster LIF should not be on the same subnet as a management LIF or a data LIF.
When using a subnet to supply the IP address and network mask, if the subnet was defined with a
gateway, a default route to that gateway is added automatically to the SVM when a LIF is created using
that subnet.


Answer This Question

Which statement about LIFs is true?


1. There is one cluster management LIF per node.
2. One port can host multiple data LIFs.
3. Cluster LIFs and data LIFs can share a port.
4. A data LIF can be associated with multiple SVMs.


ANSWER THIS QUESTION


Logical Interfaces
Nodes, Ports, and LIFs Example

[Figure: four-node cluster showing the many-to-one relationship between LIFs and network ports]
- c1-01:e0e (Default broadcast domain): 192.168.0.101 (SVM 1, LIF 1) and 192.168.0.102 (SVM 2, LIF 1)
- c1-02:e0f (Broadcast domain A): 10.1.2.92 (SVM_A-1, LIF 2) and 10.1.2.94 (SVM_A-2, LIF 2), examined on the next slide
- c1-04:e0f (Broadcast domain B): 10.1.2.92 (SVM_B-1, LIF 2) and 10.1.2.94 (SVM_B-2, LIF 2)

LOGICAL INTERFACES: NODES, PORTS, AND LIFS EXAMPLE


In the environment that is shown here, each of the nodes contains four network ports. This graphic shows the
many-to-one relationship between LIFs and network ports.

Node 1 has two data LIFs that are assigned to the first port, one for each SVM. The IP address is also
listed.
Not shown: There are LIFs on the first port of the other three nodes, one for each SVM.
Node 2 has two data LIFs that are assigned to the second port, one for each SVM that is assigned to
Company A.
Not shown: There are two LIFs on the first node also. It is recommended to put LIFs on both nodes of an
HA pair.
Node 4 has two data LIFs that are assigned to the second port, one for each SVM that is assigned to
Company B.
Not shown: There are two LIFs on the third node also. It is recommended to put LIFs on both nodes of an
HA pair.

In a NAS environment, the name is not the actual host name that is associated with the IP address. The name
is an internal name that can be used as the host name for the IP address in the DNS. In a NAS environment,
all these IP addresses can share one host name, such that a DNS round robin picks an IP address every time
that the host name is used; for example, for an NFS mount command.
This graphic shows how an environment can randomly distribute client connections across a cluster while the
cluster looks to every user and every client as if there is only one storage host.


Logical Interfaces
LIF Attributes

To view a LIF (output edited for readability):
c1::> network interface show -vserver SVM_A-1 -lif SVM_A-1_lif2
Vserver Name: SVM_A-1
Logical Interface Name: SVM_A-1_lif2
Role: data
Data Protocol: nfs
Home Node: c1-02
Home Port: e0f
Current Node: c1-02
Current Port: e0f
...
Is Home: true
Network Address: 10.1.2.92
Netmask: 255.255.255.0
...
Subnet Name: subnet_A
...
IPspace of LIF: IPspace_A

LOGICAL INTERFACES: LIF ATTRIBUTES


The output that is shown here was edited to make it more compact, but it retains the information that has been
discussed so far.


Network Components


NETWORK COMPONENTS
This graphic shows clustered Data ONTAP 8.3 from a data network component perspective. Clustered Data
ONTAP requires data and management connectivity, which could coexist on the same network. In multinode
configurations, clustered Data ONTAP also requires a cluster interconnect for cluster traffic.
Two cluster connections to each node are typically required for redundancy and improved cluster traffic flow.


Lesson 4

Nondisruptive LIF
Configuration


LESSON 4: NONDISRUPTIVE LIF CONFIGURATION


Clustered Data ONTAP 8 has always supported nondisruptive LIF configuration. The process of performing a
failover has changed throughout the versions leading up to the 8.3 release. Here is a brief history:

Data ONTAP 8.0: Failover rules (network interface failover) were the primary way to
control failover based on port role and priority.
Data ONTAP 8.1: Failover groups (network interface failover-groups) became the
primary method to control failover. Failover rules were deprecated.
Data ONTAP 8.3: Failover groups and failover policies were changed to work with broadcast domains.
There are fewer failover groups and more failover policies.

Conceptually, LIF failover is similar in the different versions of clustered Data ONTAP, but the configuration
is very different. This lesson discusses only examples of clustered Data ONTAP 8.3. For more information
about how to configure LIF failover in older versions of clustered Data ONTAP, refer to the Network
Management Guide for the version of clustered Data ONTAP that you are configuring.


Nondisruptive LIF Features


- LIF failover means automatic migration that occurs because of a link failure or reboot.
- LIF migrate means manual migration.
- LIF revert means manually or automatically sending a LIF back home (node and port).
- A LIF failover group is a list of ports (physical or virtual):
  - The list of ports defines the failover targets for the LIF.
  - When a broadcast domain is created, a failover group is automatically created.
  - Failover groups do not apply to iSCSI or FC SAN LIFs.
- A LIF failover policy is used to restrict the list of ports within a failover group that are
  available as failover targets for a LIF.

NONDISRUPTIVE LIF FEATURES


Why migrate a LIF? It might be necessary for troubleshooting a faulty port or to offload a node whose data
network ports are being saturated with other traffic. The LIF fails over if its current node is rebooted.
Unlike storage failover (SFO), LIF failover or migration does not cause a reboot of the node from which the
LIF is migrating. After a LIF is migrated, the LIF can remain on the new node for as long as the administrator
wants.
Failover groups for LIFs can be broadcast domain-based or user-defined. You create a failover group of
network ports so that a LIF can automatically migrate to a different port if a link failure occurs on the LIF's
current port. This failover group enables the system to reroute network traffic to other available ports in the
cluster.

The ports that are added to a failover group can be network ports, VLANs, or interface groups.
All the ports that are added to the failover group must belong to the same broadcast domain.
A single port can reside in multiple failover groups.
If you have LIFs in different VLANs or broadcast domains, you must configure failover groups for each
VLAN or broadcast domain.
Failover groups do not apply in SAN iSCSI or FC environments.

You can configure a LIF to fail over to a specific group of network ports by applying a failover policy and a
failover group to the LIF. You can also disable a LIF from failing over to another port. Failover policies can
be:

- broadcast-domain-wide: All ports on all nodes in the failover group
- system-defined: Only those ports on the LIF's home node and a non-SFO partner
- local-only: Only those ports on the LIF's home node
- sfo-partner-only: Only those ports on the LIF's home node and its SFO partner
- disabled: Not configured for failover

NOTE: LIFs for SAN protocols do not support failover; therefore, these LIFs are always set to disabled.
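As a sketch of how a failover policy narrows a failover group down to valid targets, the following Python model filters a group's ports by policy. This is an illustration of the policy semantics described above, not ONTAP code; the node and port names are hypothetical.

```python
# Illustrative model only, not ONTAP internals: given a failover group's
# ports as (node, port) pairs, return the subset that a LIF with the
# given failover policy may fail over to. Node names are hypothetical.

def failover_targets(policy, ports, home_node, sfo_partner):
    """Filter failover-group ports down to valid targets for a LIF."""
    if policy == "broadcast-domain-wide":
        return ports                                    # all ports, all nodes
    if policy == "system-defined":
        # home node and non-SFO-partner nodes
        return [p for p in ports if p[0] != sfo_partner]
    if policy == "local-only":
        return [p for p in ports if p[0] == home_node]
    if policy == "sfo-partner-only":
        return [p for p in ports if p[0] in (home_node, sfo_partner)]
    if policy == "disabled":
        return []                                       # LIF never fails over
    raise ValueError("unknown failover policy: " + policy)

# Four-node cluster, one data port per node (hypothetical names).
group = [("c1-01", "e0e"), ("c1-02", "e0e"), ("c1-03", "e0e"), ("c1-04", "e0e")]
```

For example, with home node c1-01 and SFO partner c1-02, `local-only` yields only c1-01's port, while `sfo-partner-only` yields the ports of c1-01 and c1-02.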

Failover Groups
Broadcast Domain-Based

These failover groups are created automatically based on the network ports that are present in the
particular broadcast domain:
- A Cluster failover group contains the ports in the Cluster broadcast domain. These ports are used
  for cluster communication and include all cluster ports from all nodes in the cluster.
- A Default failover group contains the ports in the Default broadcast domain. These ports are used
  primarily to serve data, but they are also used for cluster management and node management.
- Additional failover groups are created for each broadcast domain that you create. Each such
  failover group has the same name as the broadcast domain, and it contains the same ports as those
  in the broadcast domain.

FAILOVER GROUPS: BROADCAST DOMAIN-BASED


There are two types of failover groups: those created automatically by the system when a broadcast domain is
created, and those that a system administrator defines.


Failover Groups
User-Defined

Custom failover groups can be created for specific LIF failover functionality when:
- The automatic failover groups do not meet your requirements.
- Only a subset of the ports that are available in the broadcast domain is required.
- Consistent performance is required. For example, create a failover group consisting of only
  10-GbE ports, which enables LIFs to fail over only to high-bandwidth ports.


FAILOVER GROUPS: USER-DEFINED


User-defined failover groups can be created for special failover situations when the default broadcast
domain-based groups do not meet your needs.


Failover Policies


Failover Policy: broadcast-domain-wide
  Available Target Ports: The LIF fails over to a port in the same broadcast domain as the home
  port; includes all ports from all nodes in the failover group.
  Details: Default for the cluster management LIF

Failover Policy: system-defined
  Available Target Ports: The LIF fails over to a port on the home node or a non-SFO partner only.
  Details: Default for data LIFs

Failover Policy: local-only
  Available Target Ports: The LIF fails over to a port on the home node of the LIF only.
  Details: Default for cluster LIFs, node management LIFs, and intercluster LIFs

Failover Policy: sfo-partner-only
  Available Target Ports: The LIF fails over to a port on the home node or its SFO partner only.
  Details: Recommended for nondisruptive software updates

Failover Policy: disabled
  Available Target Ports: The LIF is not configured for failover.
  Details: Failover is disabled for the LIF


FAILOVER POLICIES
These default policies should be used in most cases.


Failover
Managing Failover Groups and LIFs

To create a failover group:

c1::> network interface failover-groups create -vserver SVM_A-1
-failover-group SVM_A-1_fg -targets c1-01:e0f,c1-02:e0f

To configure failover for an existing LIF:

c1::> network interface modify -vserver SVM_A-1 -lif SVM_A-1_lif1
-failover-policy broadcast-domain-wide -failover-group SVM_A-1_fg

You can also add and remove targets from a failover group:

network interface failover-groups add-targets
network interface failover-groups remove-targets

To display failover groups:

c1::> network interface failover-groups show


FAILOVER: MANAGING FAILOVER GROUPS AND LIFS


Topics for Discussion


How do I decide:
- What are the benefits of each type of failover group and failover policy?
- When should I create a user-defined failover group?
- When should I use interface groups or failover groups, or do I need both?


TOPICS FOR DISCUSSION


Lesson 5

Network Management

51

2015 NetApp, Inc. All rights reserved. NetApp Proprietary Limited Use Only

LESSON 5: NETWORK MANAGEMENT


With IPspaces, there are changes in the way that traffic is routed. As with other Data ONTAP 8.3 feature
changes, routing is conceptually similar, but the configuration is a little different.

Clustered Data ONTAP 8.3: You control how LIFs in an SVM use your network for outbound traffic by
configuring routing tables and static routes. If you have defined a default gateway when creating a subnet,
a default route to that gateway is added automatically to the SVM that uses a LIF from that subnet.
Earlier than clustered Data ONTAP 8.3: You can control how LIFs in an SVM use your network for
outbound traffic by configuring routing groups and static routes. A set of common routes are grouped in a
routing group that makes the administration of routes easier.

This lesson discusses only clustered Data ONTAP 8.3 examples. For more information about how to configure
routing in earlier versions of clustered Data ONTAP, refer to the Network Management Guide for the
version of clustered Data ONTAP that you are configuring.


Ken Asks a Question

"If I wanted to dedicate specific data LIFs to a single client or group of clients by IP address,
how could I do that?"


KEN ASKS A QUESTION


Routing Management
Overview
Outbound traffic of LIFs in an SVM can be controlled by using route tables and static routes.

Route tables:
- Route tables hold routes that are automatically created in an SVM when a service or application
  is configured for the SVM.
- Routes are configured for each SVM, identifying the SVM, subnet, and destination.
- Because route tables are per-SVM, routing changes to one SVM do not pose a risk of corrupting
  another SVM's route table.
- The system SVM of each IPspace has its own route table.

Static routes:
- A static route is a defined route between a LIF and a specific destination IP address.
- The route can use a gateway IP address.
- NOTE: If a default gateway is defined when creating a subnet, a default route to that gateway is
  added automatically to the SVM that uses a LIF from that subnet.

ROUTING MANAGEMENT: OVERVIEW


You control how LIFs in an SVM use your network for outbound traffic by configuring route tables and static
routes.

Route tables:

- Routes are configured for each SVM, identifying the SVM, subnet, and destination.
- Because route tables are per-SVM, routing changes to one SVM do not pose a risk of corrupting
  another SVM's route table.
- Routes are automatically created in an SVM when a service or application is configured for the
  SVM. Like data SVMs, the system SVM of each IPspace has its own route table, because LIFs can be
  owned by system SVMs and the system SVMs might need route configurations that are different from
  those on data SVMs.

Static routes:

- A static route is a defined route between a LIF and a specific destination IP address; the route
  can use a gateway IP address.
- If you have defined a default gateway when creating a subnet, a default route to that gateway is
  added automatically to the SVM that uses a LIF from that subnet.
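To make the lookup behavior concrete, here is a small Python sketch of longest-prefix-match route selection against a per-SVM route table. This is an illustration only (not ONTAP internals), built on the standard `ipaddress` module; the addresses are hypothetical, with the `0.0.0.0/0` entry modeling the default route that is auto-created from a subnet's gateway.

```python
import ipaddress

# Hypothetical per-SVM route table: the 0.0.0.0/0 entry models the default
# route auto-created from the subnet's gateway; the /24 entry is a static route.
route_table = [
    ("0.0.0.0/0", "192.168.0.1"),
    ("10.1.2.0/24", "10.1.2.1"),
]

def next_hop(dest_ip, routes):
    """Return the gateway of the most specific (longest-prefix) matching route."""
    dest = ipaddress.ip_address(dest_ip)
    best_net, best_gw = None, None
    for prefix, gateway in routes:
        net = ipaddress.ip_network(prefix)
        if dest in net and (best_net is None or net.prefixlen > best_net.prefixlen):
            best_net, best_gw = net, gateway
    return best_gw
```

A destination inside 10.1.2.0/24 uses the static route's gateway; anything else falls through to the default route.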


Routing Management
Managing Routes

To create a static route:

c1::> network route create -vserver SVM_A-1 -destination 0.0.0.0/0 -gateway 192.168.0.1

To delete a static route:

c1::> network route delete -vserver SVM_A-1 -destination 0.0.0.0/0 -gateway 192.168.1.1

To display static routes:

c1::> network route show
Vserver    Destination    Gateway        Metric
---------  -------------  -------------  ------
SVM_A-1    0.0.0.0/0      192.168.0.1    20
...


ROUTING MANAGEMENT: MANAGING ROUTES


You can use the optional metric parameter with this command to specify a hop count for the route. The
default settings for this parameter are 10 for management interfaces, 20 for data interfaces, and 30 for cluster
interfaces. This parameter is used for source-IP address selection of user-space applications such as Network
Time Protocol (NTP).


Host-Name Resolution
Overview

- Host-name resolution is supported by two methods: DNS and hosts tables.
- Cluster administrators can configure DNS and the hosts table for host-name lookup in the admin SVM:
  - Best practice is to configure DNS when setting up the cluster.
  - When host-name resolution is configured when the cluster is created, it is propagated to each
    node as it joins the cluster.
  - By default, the order of lookup is the hosts table and then DNS.
- Cluster administrators and SVM administrators can configure DNS for host-name lookup in a data SVM:
  - Each SVM has its own DNS configuration.

HOST-NAME RESOLUTION: OVERVIEW


Host-name resolution for the admin SVM
Only the cluster administrators can configure DNS and the hosts table for host-name lookup in the admin
SVM. All applications except CIFS discovery use the host-name configuration of the admin SVM. You
cannot use NIS configuration for the admin SVM.
Host-name resolution for the admin SVM is configured when the cluster is created. This configuration is
propagated to each node as it joins the cluster.

Hosts table configuration for the admin SVM:

You can use the vserver services dns hosts command for configuring the hosts table that resides in
the root volume of the admin SVM.
By default, the order of lookup for the admin SVM is hosts table first and then DNS.

DNS configuration for the admin SVM:

It is best to configure DNS on the admin SVM at the time of cluster creation.
If you want to configure DNS later, use the vserver services dns create command.

Host-name resolution for a data SVM


A cluster administrator or an SVM administrator can configure DNS for host-name lookup in a data SVM.
Each SVM has its own DNS configuration. DNS configuration is mandatory when CIFS is used for data
access.
DNS services can also be configured on an SVM for FlexVol volumes by using the Vserver Setup wizard. If
you want to configure DNS later, you must use the vserver services dns create command.
Managing the hosts table (cluster administrators only)
A cluster administrator can add, modify, delete, and view the host name entries in the hosts table of the admin
SVM. An SVM administrator can configure the host name entries only for the assigned SVM.
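The default lookup order for the admin SVM (hosts table first, then DNS) can be modeled in a few lines of Python. This is a toy illustration of the ordering only; all names and addresses below are invented.

```python
# Toy model of the admin SVM's default lookup order: the hosts table is
# consulted before DNS. All entries are hypothetical.
hosts_table = {"test.example.com": "192.168.0.11"}
dns_records = {"svm1.example.com": "192.168.0.50",
               "test.example.com": "192.168.0.99"}  # shadowed by the hosts table

def resolve(name):
    """Return the IP for name, preferring the hosts table over DNS."""
    if name in hosts_table:
        return hosts_table[name]
    return dns_records.get(name)  # None if neither source knows the name
```

Note that `test.example.com` resolves to the hosts table entry even though DNS also has a record for it, matching the hosts-table-first order.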

Host-Name Resolution
Table Entries

To create a new hosts table entry:

c1::> vserver services name-service dns hosts create
-vserver svm1 -address 192.168.0.11 -hostname
test.example.com -alias test

To create a new DNS table entry:

c1::> vserver services name-service dns create -vserver
svm1 -domains example.com -name-servers 192.168.0.11


HOST-NAME RESOLUTION: TABLE ENTRIES


Lesson 6

Network Load Balancing


LESSON 6: NETWORK LOAD BALANCING


Ken Asks a Question

"It seems like it will be a problem if too many clients access their data on the same LIF or port.
How can I avoid that situation?"


KEN ASKS A QUESTION


Network Load Balancing


- Clients can mount to an SVM by either:
  - Specifying a LIF's IP address, or
  - Specifying a host name (for multiple managed IP addresses)
- Load balancing dynamically evaluates the load on LIFs and either:
  - Selects an appropriately loaded LIF, or
  - Moves a LIF to a less loaded port
- Load balancing types:
  - DNS load balancing (NFS or CIFS):
    - On-box (zoning based)
    - Off-box (round robin)
  - Automatic LIF rebalancing (NFSv3 only)

NETWORK LOAD BALANCING


DNS load balancing and automatic LIF rebalancing methods aid in selecting an appropriately loaded data LIF
and balancing user network traffic across all available ports (physical or interface groups).
DNS Load Balancing
With DNS load balancing, you can create a DNS load balancing zone on the SVM that returns the
least-loaded LIF based on the network traffic and the availability of the port resources (such as
CPU usage, throughput, and open connections). By configuring a DNS load balancing zone, you can
balance new client connections better across available resources. This balance leads to improved
performance for the entire cluster. Also, no manual intervention is required for deciding which
LIFs to use when mounting a particular SVM. You can use the DNS load balancing method to balance
loads for only new share connections and new mount requests. DNS load balancing cannot be used
with existing connections. DNS load balancing works with NFSv3, NFSv4, NFSv4.1, CIFS, SMB 2.0,
SMB 2.1, and SMB 3.0.
Automatic LIF Rebalancing
With automatic load balancing, LIFs are dynamically migrated to ports with low utilization, based on the
failover rules. Automatic LIF rebalancing works only with NFSv3 connections. Automatic LIF rebalancing
provides the following benefits:

- Different client connections use different bandwidth; therefore, LIFs can be migrated based on
  the load capacity.
- When new nodes are added to the cluster, LIFs can be migrated to the new ports.


Load Balancing Weights


Data ONTAP automatically assigns weights to data LIFs by
collecting periodic statistics on the current node and port
resources
To override the automatic assignment, manually assign load
balancing weights to LIFs
lbweight specifies the weight of the LIF:
Integer between 0 and 100
Higher weight is made available for client requests more
frequently
Disabled LIFs are automatically assigned a weight of 0
c1::*> net int modify -vserver smv1 -lif lif3 -lbweight 3

LOAD BALANCING WEIGHTS


Data ONTAP automatically assigns weights to data LIFs by collecting periodic statistics on the current node
and port resources (such as CPU usage, throughput, and open connections).
To override the automatic assignment, you must consider the following guidelines for manually assigning
load balancing weights to LIFs:

- The load balancing weight is inversely related to the load on a LIF. A data LIF with a high load
  balancing weight is made available for client requests more frequently than one that has a low
  load balancing weight. For example, lif1 has a weight of 10, and lif2 has a weight of 1. For any
  mount request, lif1 is returned 10 times more often than lif2.
- If all LIFs in a load balancing zone have the same weight, LIFs are selected with equal
  probability.
- When manually assigning load balancing weights to LIFs, you must consider conditions such as
  load, port capacity, client requirements, CPU usage, throughput, and open connections. For
  example, in a cluster that has 10-GbE and 1-GbE data ports, the 10-GbE ports can be assigned a
  higher weight so that those ports are returned more frequently when any request is received.
- When a LIF is disabled, it is automatically assigned a load balancing weight of 0.
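The weighted-selection idea can be sketched as a weighted random choice. This is an illustration of the concept only, not ONTAP's actual selection algorithm; the LIF names and weights are hypothetical.

```python
import random
from collections import Counter

# lif1 (weight 10) should be returned about ten times as often as
# lif2 (weight 1); lif3 is disabled (weight 0) and is never returned.
weights = {"lif1": 10, "lif2": 1, "lif3": 0}

def pick_lif(lb_weights, rng=random):
    """Pick one LIF at random, proportionally to its load balancing weight."""
    eligible = [(lif, w) for lif, w in lb_weights.items() if w > 0]
    names = [lif for lif, _ in eligible]
    wts = [w for _, w in eligible]
    return rng.choices(names, weights=wts, k=1)[0]

random.seed(7)
counts = Counter(pick_lif(weights) for _ in range(11000))
```

Over many simulated mount requests, lif1 dominates lif2 roughly 10:1, and the disabled lif3 never appears.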


DNS Load Balancing


On-Box

[Figure: an NFS client mounts svm1.netapp.com; the site-wide DNS server forwards queries for the
delegated zone svm1.netapp.com to the SVM, which answers with an appropriately loaded LIF
(LIF1-LIF4, each on port e0e)]

1. Configure a DNS forwarder on the site-wide DNS server.
2. Create a DNS load balancing zone on the SVM (all 4 LIFs in the DNS zone).
3. Mount the client by using the host name:
   nfsclient% mount svm1.netapp.com:/ /mnt/svm1
4. An appropriately loaded LIF is chosen.

Example for LIF1 (DNS zone = SVM1.NETAPP.COM):
c1::> net int create -vserver svm1 -lif lif1 -role data -home-node c1-01 -home-port e0e
-address 192.168.0.131 -netmask 255.255.255.0 -dns-zone svm1.netapp.com

DNS LOAD BALANCING: ON-BOX


With on-box DNS, conditional forwarders or delegations can be leveraged.
A forwarder is a DNS server on a network that is used to forward DNS queries for external DNS names to
DNS servers outside that network. You can also forward queries according to specific domain names by using
conditional forwarders.
A conditional forwarder is a DNS server on a network that is used to forward DNS queries according to the
DNS domain name in the query. For example, a DNS server can be configured to forward all the queries that
it receives for names ending with widgets.example.com to the IP address of a specific DNS server, or to the
IP addresses of multiple DNS servers.
DNS delegation must be used if you are configuring an SVM to use a DNS domain that is in the same tree as
an existing zone. For example, if you want to use svm1.netapp.com in the domain netapp.com, you use a DNS
delegation.
In environments with many SVMs, you must account for each of the data LIFs and zones of each of the
SVMs that are being added to the site-wide DNS server.


DNS Load Balancing


Off-Box

[Figure: an NFS client mounts svm1.netapp.com; the site-wide DNS server holds one A record per LIF
(LIF1-LIF4, each on port e0e) and answers queries in round-robin fashion]

1. Create an A record for each LIF on the site-wide DNS server:
   svm1.netapp.com  IN  A  <LIF1 IP Address>
   svm1.netapp.com  IN  A  <LIF2 IP Address>
   svm1.netapp.com  IN  A  <LIF3 IP Address>
   svm1.netapp.com  IN  A  <LIF4 IP Address>
2. Mount the client by using the host name:
   nfsclient% mount svm1.netapp.com:/ /mnt/svm1
3. Configure the DNS server for round-robin load balancing.


DNS LOAD BALANCING: OFF-BOX


With off-box DNS, each Data LIF in each SVM that resides in the cluster has a DNS A record that was
created with the same name.
1. NFS client makes a request for name resolution to the site-wide DNS server.
2. Site-wide DNS server resolves the request to an IP address by using a round-robin algorithm.
3. Site-wide DNS server responds to the client with the chosen IP address.
Keep in mind, similarly to the on-box method, if an environment will have many SVMs, you must account for
each of the data LIFs of each of the SVMs that are being added to the site-wide DNS server.
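A minimal model of the round-robin step follows; real DNS servers implement this internally, so this Python sketch is purely illustrative and the addresses are invented.

```python
# Each call returns the next A record for the name, wrapping around,
# which is how a round-robin DNS server spreads new mounts across LIFs.
a_records = {"svm1.netapp.com": ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]}
_next = {}

def resolve_round_robin(name):
    """Return the next IP in rotation for the given host name."""
    ips = a_records[name]
    i = _next.get(name, 0)
    _next[name] = (i + 1) % len(ips)
    return ips[i]
```

Successive lookups cycle through the four LIF addresses, so each new mount lands on a different LIF.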


Automatic LIF Rebalancing


1. Mount the NFSv3 client by using a host name or IP address:
   nfsclient% mount <lif1 IP>:/ /mnt/svm1
2. Create a failover group with each port (example for node 1 port e0e):
   c1::> net int failover-groups create -failover-group alr_fg_svm1 -node c1-01 -port e0e
3. Modify the LIF to subscribe to the failover group and enable automatic LIF rebalancing:
   c1::> net int modify -vserver svm1 -lif lif1 -failover-group alr_fg_svm1 -allow-lb-migrate true

AUTOMATIC LIF REBALANCING


In automatic LIF rebalancing, LIFs are automatically and periodically migrated to a less-utilized
port based on the configured failover rules. Automatic LIF rebalancing allows even distribution of
the current load. LIFs are migrated based on the weights assigned to the LIFs. When new NICs are
added to the cluster, add the ports to the failover group to which the automatically rebalancing
LIFs belong. The network ports are then automatically included the next time that the load is
calculated dynamically, and each time thereafter.
Automatic LIF rebalancing is available only under the advanced privilege level of operation.
Automatic LIF rebalancing is supported only with the NFSv3 protocol.
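The rebalancing decision itself can be pictured as choosing the least-utilized port in the LIF's failover group. The sketch below is a conceptual model only (port names and load figures are invented), not ONTAP's actual placement logic.

```python
# Toy model of the rebalancing decision: move a LIF to the port in its
# failover group with the lowest current utilization. Values are invented.
port_load = {"c1-01:e0e": 0.82, "c1-02:e0e": 0.15, "c1-03:e0e": 0.40}

def rebalance_target(failover_group, loads):
    """Return the least-utilized port among the failover group's ports."""
    return min(failover_group, key=lambda port: loads[port])
```

With the loads above, a LIF on the saturated c1-01:e0e would be migrated to c1-02:e0e.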


Load Balancing Verification


c1::*> net int show -vserver svm1 -lif svm1_lif1
Vserver Name: svm1
Logical Interface Name: svm1_lif1
Role: data
Data Protocol: nfs                                  <- load balancing is protocol dependent
Home Node: c1-01
Home Port: e0e
Current Node: c1-01
Current Port: e0e
...
Fully Qualified DNS Zone Name: svm1.netapp.com      <- DNS zone
DNS Query Listen Enable: false
Load Balancing Migrate Allowed: true                <- true for automatic LIF rebalancing
Load Balanced Weight: load
Failover Group Name: alr_fg_svm1


LOAD BALANCING VERIFICATION


Topics for Discussion


- What are the benefits of each type of load balancing method?
- Why would you choose one method over another?


TOPICS FOR DISCUSSION


References
- Clustered Data ONTAP Network Management Guide
- Clustered Data ONTAP SAN Configuration and Administration Guides
- Clustered Data ONTAP File Access Management Guide
- TR-4182: Ethernet Storage Best Practices for Clustered Data ONTAP Configurations
- TR-4080: Clustered Data ONTAP SAN Best Practices


REFERENCES


What Ken Has Learned in This Module

I'm confident in my new understanding of the three networks that are required for clustered Data ONTAP to function. I also learned how physical ports and LIFs interact for nondisruptive network access.


WHAT KEN HAS LEARNED IN THIS MODULE


Exercise

Module 5: Network Management

Time Estimate: 30 minutes


EXERCISE
Please refer to your exercise guide.


Module 6

Implementing NAS Protocols


MODULE 6: IMPLEMENTING NAS PROTOCOLS


In this module you configure a simple storage virtual machine (SVM) to serve data with the NFS and Server
Message Block (SMB) protocols. You also learn how FlexVol volumes and junctions are used to build
namespaces for NAS clients.


Unified Storage
Review

[Diagram: a NetApp FAS system serves the same data over the corporate LAN. NAS protocols (NFS, CIFS) provide file-level access, with the file system on the storage controller; SAN protocols (iSCSI, FCoE, FC) provide block-level access, with the file system on the host.]


UNIFIED STORAGE: REVIEW


NAS is a file-based storage system that uses NFS and SMB protocols to make data available over the
network. Data ONTAP clusters support NAS file systems on clusters of up to 24 nodes. NAS protocols place
the file system on the storage controller, rather than on the client computer.


Ken Asks a Question


Lots of end users use Windows and
Linux to access their files on NetApp
storage. How do I configure NAS in
Data ONTAP?


KEN ASKS A QUESTION


Basic Steps for Setting Up a System


1. Connect controllers, disks, and cables.
2. Set up and configure nodes.
3. Install software onto nodes. (Software is preinstalled on most systems.)
4. Initialize disks.
5. System setup: Create a cluster on the first node, then join additional nodes to the cluster.
6. Complete the initial configuration.
7. Create data aggregates.
This module:
8. Create a storage virtual machine (SVM).
9. Create data volumes and protocol configuration.

BASIC STEPS FOR SETTING UP A SYSTEM


Lesson 1

File System Structure


LESSON 1: FILE SYSTEM STRUCTURE


The NAS File System


[Diagram: client UNIX1 mounts the server's NFS volume at /mnt/NFSvol; client WIN1 maps the server's SMB volume as \\system\SMBvol alongside its local disks, Disk 1 (C:) and Disk 2 (E:).]

THE NAS FILE SYSTEM


NAS is a distributed file system that enables users to access resources, such as volumes, that are located on a
remote storage system and to access those resources as if they were located on their local computer system.
NAS provides its services through a client-server relationship. Storage systems that allow their file systems
and other resources to be available for remote access are called servers. The server is set up with its own
network address and provides file-based data storage to other computers that use the server's resources. These
computers are called clients.
The Data ONTAP operating system supports the NFS and SMB protocols. (SMB is also commonly known as
CIFS.)


Storage System Resources


Volume
Qtree
Directory

File


STORAGE SYSTEM RESOURCES


With the NAS protocols, you need to create file systems and other resources that will be made available to
clients through either NFS or SMB.
Volumes are the highest-level logical storage object. FlexVol volumes are data containers that enable you to
partition and manage your data. In a NAS environment, volumes contain file systems. In Data ONTAP
operating in 7-Mode or in clustered Data ONTAP, the first resource to create is the volume.
In clustered Data ONTAP, the volume is associated with an SVM. The SVM is a virtual management entity.
Within that entity, you create a namespace. Volumes are joined to the namespace through junctions. In
clustered Data ONTAP, it is these junctions that are exported.
Qtrees enable you to partition your FlexVol volumes into smaller segments that you can manage individually.
Data ONTAP creates a default qtree, called qtree0, for each volume. If you do not create and put data in
another qtree, all the data resides in qtree0. Qtrees enable you to partition your data without incurring the
overhead that is associated with creating another FlexVol volume. You might create qtrees to organize your
data, or to manage one or more of the following factors: quotas, security style, and oplocks settings.
You can also create a directory or a file on the client in a FlexVol volume to use as a resource to export or
share. A qtree is a partition that is created on the storage system and a directory is a partition that is created on
the client within a FlexVol volume.


NFSv3 Implementation
Targets and Access in Clustered Data ONTAP
Create a projects volume under the SVM's root:
c1::> volume create -vserver vsNFS
-aggregate aggr1_system_01 -volume Projects
-size 20MB -state online -type RW
-policy default -security-style unix
-junction-path /Projects -junction-active true

Create a second named project volume:
c1::> volume create -vserver vsNFS
-aggregate aggr1_system_01 -volume Theseus
-size 20GB -state online -type RW
-policy default -security-style unix

Mount the second named project volume under the Projects volume:
c1::> volume mount -vserver vsNFS
-volume Theseus -junction-path /Projects/Theseus
-active true -policy-override false

NFSV3 IMPLEMENTATION: TARGETS AND ACCESS IN CLUSTERED DATA ONTAP


The following is an abbreviated list of parameters that are used to mount a volume:


Junction path of the mounting volume: -junction-path <junction path>


This path specifies the junction path of the mounted volume. The junction path name is case insensitive
and must be unique within an SVM's namespace.
Active junction path: [-active {true|false}]
This optional parameter specifies whether the mounted volume is accessible. The default setting is
true. If the mounted path is not accessible, it does not appear in the SVM's namespace.
Override the export policy: [-policy-override {true|false}]
This optional parameter specifies whether the parent volume's export policy overrides the mounted
volume's export policy. The default setting is false.


Junctions
From the storage system:
c1::> volume show -vserver vs1 -volume * -fields junction-path
vserver   volume    junction-path
vs1       acct      /acct
vs1       pro_1     /project1
vs1       pro_2     /project2
vs1       pro_3     /project3
vs1       vs1_root  /

From an NFS client:
[root@unix1 vs1_root]# ls -al
drwxr-xr-x. 2 root root 4096 Mar 15 2014 ..
drwxr-xr-x. 2 root root 4096 Mar 15 2014 acct
drwxr-xr-x. 2 root root 4096 Mar 15 2014 project1
drwxr-xr-x. 2 root root 4096 Mar 15 2014 project2
drwxr-xr-x. 2 root root 4096 Mar 15 2014 project3

JUNCTIONS
Volume junctions are a way to join individual volumes together into a single logical namespace. Volume
junctions are transparent to CIFS and NFS clients. When NAS clients access data by traversing a junction, the
junction appears to be an ordinary directory.
A junction is formed when a volume is mounted to a mount point below the root and is used to create a file-system tree. The top of a file-system tree is always the root volume, which is represented by a slash mark (/).
A junction points from a directory in one volume to the root directory of another volume.
A volume must be mounted at a junction point in the namespace to allow NAS client access to contained data.
Although specifying a junction point is optional when a volume is created, data in the volume cannot be
exported and a share cannot be created until the volume is mounted to a junction point in the namespace. A
volume that was not mounted during volume creation can be mounted post-creation. New volumes can be
added to the namespace at any time by mounting them to a junction point.
NOTE: Mounting volumes to junction paths is accomplished on the storage system.
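The way junctions stitch volumes into one namespace can be pictured with a small sketch (plain Python, not ONTAP code; the volume and junction names are the examples from this lesson): a client path resolves to whichever volume owns the longest matching junction path.

```python
# Toy model of an SVM namespace. Volumes are stitched together by
# junction paths; a client path resolves to the volume whose junction
# is the longest matching prefix. Illustrative only -- this is not how
# clustered Data ONTAP is implemented internally.
junctions = {
    "/": "vs1_root",
    "/acct": "acct",
    "/project1": "pro_1",
    "/project2": "pro_2",
    "/project3": "pro_3",
}

def resolve(path):
    """Return (volume, path-within-volume) for an absolute client path."""
    best = "/"
    for jp in junctions:
        if path == jp or path.startswith(jp.rstrip("/") + "/"):
            if len(jp) > len(best):
                best = jp
    inside = "/" + path[len(best):].lstrip("/")
    return junctions[best], inside

print(resolve("/acct/reports/q1.txt"))  # ('acct', '/reports/q1.txt')
print(resolve("/etc"))                  # ('vs1_root', '/etc')
```

To a NAS client, traversing /acct looks like entering an ordinary directory, even though in the sketch it crosses into a different volume.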


Mounting Junction Paths


Mounting:
c1::> volume mount -vserver vs1 -volume pro1 -junction-path /project1

Unmounting:
c1::> volume unmount -vserver vs1 -volume pro1

Mounting to a new path:
c1::> volume mount -vserver vs1 -volume pro1 -junction-path /project/pro1

MOUNTING JUNCTION PATHS


When volumes are created by using the volume create command, a junction path is usually specified.
The junction path is optional; a volume can be created and not mounted into the namespace. To put a volume
without a junction path into use, you must use the volume mount command to assign a junction path to
the volume.
A volume can only be mounted to the namespace of its SVM in one place.
When you unmount a volume, you take the volume out of the namespace. An unmounted volume is
inaccessible to NFS and CIFS clients but is still online and can be mirrored, backed up, moved, and so on.
You can then mount the volume again to the same location or a different location in the namespace and in
relation to other volumes. For example, you can unmount a volume from one parent volume and then mount
the volume to another parent volume.
Be careful when unmounting and remounting a volume to a new path. Because rejunctioning changes the
location of a flexible volume inside the namespace, it is not transparent to client access. The client now has to
access the data at the new directory location.


Ken Asks a Question

Why do I need to associate junction paths with my volumes?

KEN ASKS A QUESTION


NAS Namespace Architecture


Multiple Standalone Volumes

[Diagram: the acct, pro1, pro2, and pro3 volumes are each junctioned directly below the root of the vs1 namespace.]

SVM  volume    junction path
vs1  acct      /acct
vs1  pro1      /project1
vs1  pro2      /project2
vs1  pro3      /project3
vs1  vs1_root  /

NAS NAMESPACE ARCHITECTURE: MULTIPLE STANDALONE VOLUMES


In an architecture with standalone volumes, every volume has an insertion point to the root of the SVM
namespace. No volume is junctioned below another volume. Each volume has a unique path and is junctioned
directly below the root.


NAS Namespace Architecture


Namespace with Branched Trees

[Diagram: acct and project are junctioned below the root; pro1, pro2, and pro3 are junctioned below the project volume.]

SVM  volume    junction path
vs1  acct      /acct
vs1  project   /project
vs1  pro1      /project/pro1
vs1  pro2      /project/pro2
vs1  pro3      /project/pro3
vs1  vs1_root  /

NAS NAMESPACE ARCHITECTURE: NAMESPACE WITH BRANCHED TREES


An architecture with multiple branched trees has multiple insertion points to the root of the SVM namespace.
The insertion points can be junctioned volumes, directories, or qtrees beneath the root. All other volumes are
mounted at junction points beneath the insertion points (which can be volumes, directories or qtrees).
The graphic shows a typical volume junction configuration, with two insertion points to the root volume of
the SVM. One insertion point is a junctioned volume that is named acct and one insertion point is a
junctioned volume that is named project. The other volumes are junctioned under the project volume.


NAS Namespace Architecture


Namespace with Directory

[Diagram: acct is junctioned below the root; project is a directory in the root volume; pro1, pro2, and pro3 are junctioned below the project directory.]

SVM  volume    junction path
vs1  acct      /acct
vs1  pro1      /project/pro1
vs1  pro2      /project/pro2
vs1  pro3      /project/pro3
vs1  vs1_root  /

NAS NAMESPACE ARCHITECTURE: NAMESPACE WITH DIRECTORY


This example has two insertion points. One insertion point is from the root to the acct volume. The second
insertion point is a directory that was created from either an export of the root volume to a UNIX host or from
within a share of the root volume to a Windows host.
This second insertion point could also be a qtree in place of the directory.


Topic for Discussion

How do NFS and SMB clients see junctions in a namespace?

TOPIC FOR DISCUSSION


Lesson 2
Deploying NFS


LESSON 2: DEPLOYING NFS


NFS
[Diagram: client UNIX1 mounts the server's NFSvol volume at /mnt/NFS.]

NFSvol is exported to UNIX1 with read/write access.
UNIX1 mounts NFSvol to /mnt/NFS with read/write access.

NFS
NFS is a distributed file system that enables users to access resources, such as volumes that are located on
remote storage systems, as if the resources were located on their local computer system.
NFS provides its services through a client-server relationship.

Storage systems that allow their file systems and other resources to be available for remote access are
called servers.
The computers that use a server's resources are called clients.
The procedure of making file systems available is called exporting.
The act of a client accessing an exported file system is called mounting.

When a client mounts a file system that a server exports, users on the client machine can view and interact
with the mounted file systems on the server within the permissions granted.


NFSv3 Implementation Steps


1. Verify or add the NFS protocol license.

2. Enable the NFS functionality on the vFiler unit or SVM.


3. Export the available resources.
4. Configure NFS authentication.
5. Authorize the user.
6. Mount the exported resources.


NFSV3 IMPLEMENTATION STEPS


This slide shows the basic process for implementing the NFS protocol between a UNIX host and a Data
ONTAP storage system. This process consists of several steps.
The first step includes enabling the NFS functionality, licensing NFS, and then enabling the feature on the
storage system.
Second, you need resources to export, so you create volumes and qtrees.
Third, you determine which clients will have what type of access to these resources. You need a way to
authenticate the client access and authorize users with appropriate permissions. These permissions can include
read-only or read/write.
Finally, when the client has been granted access to the exported resource, the client mounts the resource and
grants access to the users.


NFSv3 Implementation
Enable NFS
Data ONTAP 7-Mode
system> options nfs.v3.enable on

Clustered Data ONTAP


c1::> vserver nfs create

Best Practice:
Configure NAS protocols
with OnCommand System
Manager.


NFSV3 IMPLEMENTATION: ENABLE NFS


After you license NFS, you enable the protocol. You can enable NFS through the CLI or OnCommand
System Manager. Although you can configure NFS with the CLI, NetApp recommends using the tools and
wizards that are available with OnCommand System Manager.


Client Specification
Host: Use the host name or IP address.
/vol/acct rw=unix1
/vol/acct rw=192.168.0.10
Netgroup: Use the group name.
/vol/acct rw=@mygroup
Subnet: Specify the subnet address.
/vol/acct rw=192.168.0.0/24
/vol/acct rw=192.168.0.0 255.255.255.0
DNS Subdomain:
/vol/acct rw=.learn.netapp.local

CLIENT SPECIFICATION
Data ONTAP controls access to its exported resources according to the authentication-based and file-based
restrictions that are specified. With authentication-based restrictions, you can specify which client machines
can connect to the storage system.
When the storage system receives a request to mount an exported resource, it looks up the name of the client
that is making the request. The storage system takes the client IP address and looks up the corresponding host
name that matches that address. Data ONTAP relies on correct resolution of client names and IP addresses to
provide basic connectivity for storage systems on the network. If you are unable to access the storage system
data or establish sessions, there might be problems with host-name resolution on your storage system or on a
name server.

Host: Typically, the UNIX host system that is connected to the storage system
Netgroup: A network-wide group of machines that are granted identical access to certain network
resources for security and organizational reasons
Subnet: A physical grouping of connected network devices. Nodes on a subnet tend to be located in close
physical proximity to each other on a LAN.
DNS subdomain: A domain that is part of a larger domain. A DNS hierarchy consists of the root-level
domain at the top, underneath which are the top-level domains, followed by second-level domains, and
finally the subdomains.

With a netgroup, each element is listed in a triple format: host name, user name, domain name. The host name
entry must be fully qualified if the specified host is not in the local domain. The user name is ignored because
it is used only for mounts. The domain name is either empty or the local domain name. The @ symbol is used
in 7-Mode to indicate that the name following the @ symbol is a netgroup, not a host name.
The following netgroup file contains three netgroups:


trustedhosts (host1,,)(host2,,)
untrustedhosts (host3,,)(host4,,)(host5,,)
allhosts trustedhosts untrustedhosts
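As a sketch of how the triple format above expands, the following plain-Python parser (not NetApp code) flattens the three example netgroups, resolving the nested allhosts group recursively:

```python
import re

# Toy parser for the example netgroup file above. Each member is either
# a "(host,user,domain)" triple or the name of another netgroup; nested
# groups expand recursively. Illustrative only.
netgroups = {
    "trustedhosts":   ["(host1,,)", "(host2,,)"],
    "untrustedhosts": ["(host3,,)", "(host4,,)", "(host5,,)"],
    "allhosts":       ["trustedhosts", "untrustedhosts"],
}

def hosts_in(group):
    """Return the flat list of host names in a netgroup."""
    hosts = []
    for member in netgroups[group]:
        m = re.match(r"\(([^,]*),[^,]*,[^)]*\)", member)
        if m:
            hosts.append(m.group(1))   # the host field of the triple
        else:
            hosts.extend(hosts_in(member))  # nested netgroup
    return hosts

print(hosts_in("allhosts"))
# ['host1', 'host2', 'host3', 'host4', 'host5']
```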

Answer This Question

A volume that is called vs1_vol2 is created on aggregate aggr2. It is mounted in the vs1 namespace at /vs1/vol2. An administrator moves this volume to the aggr1 aggregate. What is the path to this volume after the move?
a. /aggr1/vs1/vs1_vol2
b. /vs1/vol2
c. /vol/vs1_vol1
d. /aggr1/vs1_vol2

ANSWER THIS QUESTION


7-Mode
Exporting

Modify the /etc/exports file:
/vol/acct rw=unix1:unix2

Run the exportfs command on the CLI:
Use this syntax:
exportfs options path
Example:
system> exportfs -o rw=unix1:unix2 /vol/acct
NOTE: The parameter -o specifies one or more export options for a file system path as a comma-delimited list.

NOTE: To be accessible, the export must be in memory.
To check exports in memory, run exportfs with no options.
To add an export to memory, run exportfs -a.

7-MODE: EXPORTING
You can export or unexport a file system path, making it available or unavailable to NFS clients, by editing the /etc/exports file or running the exportfs command. To specify which file system paths Data ONTAP exports automatically when NFS starts, edit the /etc/exports file.


7-Mode
Rules for Exporting Resources

Specify the complete path name, including the /vol prefix:
/vol/acct
You cannot export /vol; it is not a path name to a file, directory, or volume.
You must export each resource separately.
When you export a resource to multiple targets with the same access, separate the target names with a colon (:) character:
/vol/acct rw=unix1:unix2

7-MODE: RULES FOR EXPORTING RESOURCES


In the export entry syntax, path is a file system path (for example, a path to a volume, directory, or file) and
option is an export option that specifies the following information:


Which NFS clients have which access privileges


The access privileges are read-only, read/write, or root.
The UID or name of all anonymous or root NFS client users that access the file system path
Whether NFS client users can create setuid and setgid executables and use the mknod command when
accessing the file system path
The security types that an NFS client must support to access the file system path
The actual file system path that corresponds to the exported file system path
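A rough sketch of how such an export entry decomposes (plain Python, not Data ONTAP code; it only implements the comma-and-colon convention described above):

```python
# Toy parser for a 7-Mode export entry such as "/vol/acct rw=unix1:unix2".
# Options are comma-delimited; target lists within an option are
# colon-delimited. Illustrative only.
def parse_export(line):
    path, opts = line.split(None, 1)
    options = {}
    for opt in opts.lstrip("-").split(","):
        key, _, val = opt.partition("=")
        # An option without "=" (for example, nosuid) is a bare flag.
        options[key] = val.split(":") if val else True
    return path, options

print(parse_export("/vol/acct rw=unix1:unix2,ro=unix3"))
# ('/vol/acct', {'rw': ['unix1', 'unix2'], 'ro': ['unix3']})
```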


Clustered Data ONTAP


Exporting

Create an export policy, or use an existing policy; the first policy is named default.
Add rules to the policies:
- A rule is automatically created in the default export policy if you use OnCommand System Manager.
- Specify access permissions to volumes for one or more clients that are specified by host name, IP, network mask, and netgroup.
- Rules are processed in the order in which they appear in the export policy (the rule index number).
Export policies and rules replace /etc/exports:
- You do not need to create a separate export entry for each export.
- Apply a single policy to many exports.

CLUSTERED DATA ONTAP: EXPORTING


Clustered Data ONTAP uses export policies and rules to control host access. You can specify access
permissions to volumes for one or more clients that are specified by host name, IP, network mask, and
netgroup. Export rules are processed in the order in which they appear in the export policy (the rule index
number). Policies and rules eliminate the need to create individual exports for every volume.
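The first-match semantics of the rule index can be sketched as follows (plain Python, not ONTAP code; the rule set and field names merely mirror the CLI parameters for illustration):

```python
import ipaddress

# Toy evaluation of export-policy rules in ruleindex order: the first
# rule whose clientmatch contains the client decides access.
rules = [
    {"ruleindex": 1, "clientmatch": "192.168.0.10/32", "rwrule": "sys"},
    {"ruleindex": 2, "clientmatch": "192.168.0.0/24",  "rwrule": "none"},
]

def access_for(client_ip):
    ip = ipaddress.ip_address(client_ip)
    for rule in sorted(rules, key=lambda r: r["ruleindex"]):
        if ip in ipaddress.ip_network(rule["clientmatch"]):
            return "rw" if rule["rwrule"] != "none" else "ro"
    return "denied"  # no rule matched: the client gets no access

print(access_for("192.168.0.10"))  # rw  (rule 1 matches first)
print(access_for("192.168.0.50"))  # ro  (falls through to rule 2)
print(access_for("10.1.1.1"))      # denied
```

Because evaluation stops at the first match, swapping the two rule indexes would silently demote the 192.168.0.10 client to read-only, which is why rule order deserves attention.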


Clustered Data ONTAP


Policies and Rules for Exporting Resources

Export project/pro1:
- Client specification: unix1 and unix2 with access of rw; everyone on the subnet 192.168.0.0/24 with access of ro
- Security: AUTH_SYS

Create an export policy:
c1::> vserver export-policy create -vserver vs1
-policyname vs1_pro1

Create an export policy rule:
c1::> vserver export-policy rule create -vserver vs1
-policyname vs1_pro1 -ruleindex 1 -protocol NFS
-clientmatch 192.168.0.0/24 -rorule sys -rwrule none

CLUSTERED DATA ONTAP: POLICIES AND RULES FOR EXPORTING RESOURCES


Here are example commands that are used to create an export policy and rules.


Assign an Export Policy to a Volume or Qtree

Assign the newly created export policy to an existing volume or qtree:
c1::> volume qtree modify -vserver vs1 -qtree-path
/vol/project/pro1 -export-policy vs1_pro1

Best Practice: NetApp recommends the use of OnCommand System Manager to manage export policies and rules.

ASSIGN AN EXPORT POLICY TO A VOLUME OR QTREE


Each FlexVol volume that is contained in an SVM must be associated with an export policy that contains
export rules for clients to access data in the volume. You can associate one export policy with the volume.
Starting with Data ONTAP 8.3, qtree exports are supported not only for NFSv3 but also for NFSv4.0,
NFSv4.1, and parallel NFS (pNFS).
When you create an SVM, Data ONTAP creates a default export policy that is called default for the SVM.
Data ONTAP assigns the default export policy to the SVM volumes. You can create another export policy
that you associate with a volume instead of the default policy. Before you associate a custom export policy
with a volume, you must create one or more export rules that allow the desired access to data in the volume
and assign those export rules to the custom export policy. You can associate an export policy with a volume
when you create the volume or at any time after you create the volume.


Ken Asks a Question

Now that my data is in a namespace and has been exported, how do my users connect to the exports?

KEN ASKS A QUESTION


Mounts
Use the mount command on the client to mount an exported
NFS resource from the storage system.
unix1# mkdir /mnt/project1

unix1# mount <systemIP>:/project/pro1 /mnt/project1

NOTE: /project/pro1 is the junction path.


MOUNTS
To enable an NFS client, you mount a remote file system after NFS is started. Usually, only a privileged user
can mount file systems with NFS. However, you can enable users to mount and unmount selected file systems
by using the mount and umount commands if the user option is set in /etc/fstab. This setting can reduce
traffic by having file systems mounted only when they are needed. To enable user mounting, create an entry
in /etc/fstab for each file system to be mounted.
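As a concrete illustration of the user option mentioned above, an /etc/fstab entry might look like the following (the server name, export path, and mount point are hypothetical examples, not values from this course):

```
# /etc/fstab -- the "user" option lets a non-root user run mount/umount
# on this entry; "noauto" keeps it from being mounted at boot.
svm1:/project/pro1  /mnt/project1  nfs  rw,user,noauto  0 0
```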


Verifying Mounts
To verify exports on a target:
mount
When used without options, this command displays all mounted file systems.

New for Data ONTAP 8.3:
showmount -a storagesystemX
Displays a list of clients that are mounting from a storage system.
showmount -e storagesystemX
Prints a list of available NFS exports.

VERIFYING MOUNTS
To verify exported resources, use the mount command in UNIX systems:
On the client, use showmount -e, nfsstat -m, or an equivalent command to verify exported resources and mount options.
With the showmount command, you can display:
- What is being exported by the storage system
- The clients that mount the storage system
In versions earlier than Data ONTAP 8.3, clients cannot use the showmount -e command to view the NFS exports list. Instead, only the root volume (/) is displayed.


UNIX File Permissions

# ls -l
The mode field of a long listing shows the type character, then read, write, and execute bits for the owner,
the group, and the world; when the sticky bit is set, it appears in the world execute position.

Users can use chmod to change permissions.

#chmod 1754 project1
drwxr-xr-T 1 krnl nobody 5274 Oct 3 21:51 project1

(1 = sticky bit; 7 = rwx for the owner; 5 = r-x for the group; 4 = r-- for the world)

UNIX FILE PERMISSIONS


After the NFS file system is mounted read/write by a remote host, the only protection that each shared file has
is its permissions.
The nosuid option disables the set-user-identifier and set-group-identifier bits. This option prevents remote users
from gaining higher privileges by running a setuid program.
You can use the nosuid option to forbid suid programs from working from the NFS file system. Some UNIX
programs, such as passwd, are called "suid" programs: they set the ID of the person who is running them to
whoever owns the file. If a file is owned by root and is suid, then the program executes as root, so
that it can perform operations (such as writing to the password file) that only root is allowed to do. Using
the nosuid option is a good idea, and you should consider using it with all NFS-mounted disks. It means
that the server's root user cannot make an suid-root program on the file system, log in to the client as a normal
user, and then use the suid-root program to become root on the client too.
The most common use of the sticky bit is on directories that reside in file systems for UNIX-like operating
systems. When a directory's sticky bit is set, the file system treats the files in such directories in a special way
so that only the file's owner, the directory's owner, or root can rename or delete the file. Without the sticky bit
set, any user with write and execute permissions for the directory can rename or delete contained files,
regardless of the file's owner.
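As a minimal, runnable sketch of the chmod 1754 example above (the scratch directory path is an assumption; run on any Linux host):

```shell
# Create a scratch directory and apply mode 1754:
# sticky bit (1), rwx for the owner (7), r-x for the group (5), r-- for the world (4).
mkdir -p /tmp/project1_demo
chmod 1754 /tmp/project1_demo

# GNU stat shows the octal mode and the symbolic form; the capital T
# marks a sticky bit on a directory with no execute bit for the world.
stat -c '%a %A' /tmp/project1_demo   # → 1754 drwxr-xr-T
```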


Ken Asks a Question

How can I learn more about NFS? How can I benefit from information about NFSv4, delegations, and pNFS?

Data ONTAP NFS Administration course
STRSW-ILT-NFSAD-REV05
One-day instructor-led training


KEN ASKS A QUESTION


Lesson 3

Windows File Services


LESSON 3: WINDOWS FILE SERVICES


Server Message Block Protocol

[Diagram: the client WIN1 has a local Disk 1 (C:) and maps Disk 2 (E:) to \\system\SMBvol, a volume that is shared by the server.]

SERVER MESSAGE BLOCK PROTOCOL


Server Message Block (SMB) is an application-layer network file-sharing protocol that is used by the
Microsoft Windows operating system. SMB enables users or applications to access, read, and write to files on
remote computers just like on a local computer. For the purposes of this course, the SMB and CIFS terms are
used interchangeably (although the definitions of these two terms are not strictly the same).
A user or an application can send network requests to read and write to files on remote computers. Messages
travel from the network interface card (NIC) of the user's computer, through the Ethernet switch, to the NIC
of the remote computer.
SMB provides access to the files and directories that are stored on the remote computer through sharing
resources. This network read and write process, which is also called network I/O, is controlled by the rules of
network protocols such as IPv4 and IPv6.


Ken Asks a Question

Now that my UNIX users can access their data, how should I proceed with Windows clients?


KEN ASKS A QUESTION


SMB Implementation Steps


1. Verify or add the CIFS protocol license.
2. Enable the SMB functionality on the vFiler unit or SVM.
3. Share the available resources.
4. Configure SMB authentication.
5. Authorize the user.
6. Map the shared resources.


SMB IMPLEMENTATION STEPS


To implement SMB, first enable the SMB functionality on the Data ONTAP storage system. Then share the
available resources. Finally, map the shared resources on the Windows client.
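As a hedged sketch of that sequence on clustered Data ONTAP (the SVM name vs1, the CIFS server name, the domain, and the share path are assumptions; exact options vary by release):

```
c1::> vserver cifs create -vserver vs1 -cifs-server VS1 -domain learn.netapp.local
c1::> vserver cifs share create -vserver vs1 -share-name docs -path /docs
```

A Windows client can then map the share, for example with the net use command.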


SMB Implementation
Enable SMB
Data ONTAP 7-Mode
system> cifs setup

Clustered Data ONTAP


c1::> vserver cifs create

Best Practice:
Configure NAS protocols
with OnCommand System
Manager.


SMB IMPLEMENTATION: ENABLE SMB


After you license CIFS, you enable the protocol. You can enable SMB through the CLI or OnCommand
System Manager. Although you can configure SMB with the CLI, NetApp recommends using the tools and
wizards that are available with OnCommand System Manager.
CIFS setup enables you to perform several tasks: create and name a CIFS server that your CIFS clients
can access, join the CIFS server to a domain, and create a default set of local CIFS users and groups.


CIFS Default Shares


Data ONTAP 7-Mode
Setup creates three default shares:
C$ maps to /
ETC$ maps to /etc
HOME is /vol/vol0/home

Clustered Data ONTAP


C$ maps to /
ipc$ maps to /
admin$ maps to /


CIFS DEFAULT SHARES


As an administrator, you can share directories with users on the storage system (create "shares"). When you
set up CIFS on the storage system, several default shares are created. A hidden share is a share that is not
visible when you are browsing. An admin share is available only to those users who are members of an
administrator group.
In clustered Data ONTAP, the only administrative shares are those that are used by Windows services.
Administrators must create any desired user-accessible shares.


Topic for Discussion

We assign exports to volumes and qtrees. What resources can we share through SMB?


TOPIC FOR DISCUSSION


SMB shares are associated with paths within the namespace. Because the namespace is constructed by
junctions, qtrees, and directories, shares can be associated with any of these resources.


Share Permissions
Share permissions can be managed by:
The CLI
OnCommand System Manager
Microsoft Management Console (MMC), such as Computer
Management (clusters, starting with 8.3 and 8.x 7-Mode)

Windows share permissions are:
Full Control
Change
Read


SHARE PERMISSIONS
Share permissions apply only to users who access the resource over the network. They apply to all files and
folders in the shared resource.

Full Control: Full Control is the default permission that is assigned to the Administrators group on the
local computer. Full Control allows all Read and Change permissions, plus changing permissions (NTFS
files and folders only).

Read: Read is the default permission that is assigned to the Everyone group. Read allows:
Viewing file names and subfolder names
Viewing data in files
Running program files

Change: Change is not a default permission for any group. The Change permission allows all Read
permissions, plus:
Adding files and subfolders
Changing data in files
Deleting subfolders and files

Share Access Administration

7-Mode
system> cifs access <share> [-g] [user|group] <rights>
system> cifs access pro1 friends Full Control
1 share(s) have been successfully modified
system> cifs access -delete pro1 everyone
1 share(s) have been successfully modified

system> cifs shares datatree1
Name        Mount Point         Description
----        -----------         -----------
datatree1   /vol/project/pro1   Windows Qtree
            system\friends / Full Control


SHARE ACCESS ADMINISTRATION: 7-MODE


In this 7-Mode example, on the pro1 share, you set the share access for the friends group to Full Control
and delete the everyone access control list entry.


Share Access Administration

Clustered Data ONTAP

c1::> vserver cifs share access-control create
        -vserver vs1 -share-name pro1
        -user-or-group friends -permission Full_Control

c1::> vserver cifs share access-control delete
        -vserver vs1 -share-name pro1
        -user-or-group Everyone -permission Full_Control


SHARE ACCESS ADMINISTRATION: CLUSTERED DATA ONTAP


In this clustered Data ONTAP example, on the pro1 share, you set the share access for the friends group to
Full Control and delete the Everyone access control list entry.


Creating Shares

Data ONTAP 7-Mode
system> cifs shares -add <share_name> <path>
system> cifs shares -change <share_name>
        [-comment description]
        [-forcegroup name]
        [-maxusers n]

Clustered Data ONTAP

c1::> vserver cifs share create
        -vserver vsSMB
        -share-name DOCS
        -path /docs
        -share-properties browsable,changenotify,oplocks


CREATING SHARES
A CIFS share is a named access point in a volume that enables CIFS clients to view, browse, and manipulate
files on a file server. There are certain guidelines that you should take into consideration when creating CIFS
shares.
When you create a share, you must provide all of the following information:
The complete path in a volume to the CIFS share
The name of the share that is entered by users when they connect to the share

When you create a share, you can optionally specify a description for the share. The share description appears
in the Comment field when you browse the shares on the network.


Ken Asks a Question

I have NFS exports and I know how to connect my UNIX users. I have CIFS shares, so how can my Windows users find shares on the network?


KEN ASKS A QUESTION


Mapping a Share to a Client


CLI
C:\> net view \\system
C:\> net use e: \\system\SHARE1 /user:marketing\jdoe

UI
Use Run dialog box
Map a drive


MAPPING A SHARE TO A CLIENT


The net view command displays a list of computers with shared resources that are available on the
specified computer.
To use the net view command, follow these steps:
1. Click the Start button, point to Programs, and then click the MS-DOS prompt.
2. At the command prompt, type net view \\<computername> where <computername> is the
name of a specific computer whose resources you want to view.
The net use command connects a computer to, or disconnects it from, a shared resource, and it displays
information about computer connections. The command also controls persistent net connections. Used
without parameters, the net use command retrieves a list of network connections.
You can also use Windows to map a share to a client.


SMB Sessions

A client establishes a session with a storage system upon the first share access.
Access is based on user authentication and share access rules.

[Diagram: the user Bob on client WIN1 accesses \\system\pro1, which maps to qtree_pro1 on the storage system.]

SMB SESSIONS
An SMB session is established between an authenticated user on an SMB client and an SMB server.


Windows File Permissions


WINDOWS FILE PERMISSIONS


Permissions are rules that are associated with objects on a computer or network, such as files and folders.
Permissions determine whether a user can access an object and what the user can do with it. For example, you
might have access to a document on a shared folder on a network. And even though you can read the
document, you might not have permissions to make changes to it.

The basic NTFS file permissions are:
Full Control: Users can see the contents of a file or folder, change existing files and folders, create new
files and folders, and run programs in a folder.
Modify: Users can change existing files and folders but cannot create new ones.
Read and Execute: Users can see the contents of existing files and folders and can run programs in a
folder.
Read: Users can see the contents of a folder and open files and folders.
Write: Users can create new files and folders and make changes to existing files and folders.


Ken Asks a Question

You mentioned MMC. I've heard of it, but what can I do with it?


KEN ASKS A QUESTION


MMC Support: Features

Create an SMB share
Stop an SMB share
Set or modify SMB share permissions
View details of enumerated open sessions
View details of enumerated open files
Close a session
Close a file

Read-only support on clusters before Data ONTAP 8.3

MMC SUPPORT: FEATURES


MMC support includes the ability to create and stop an SMB share, set or modify SMB share permissions,
view details of enumerated open sessions and files, and close a session or file.


MMC Support: Limitations

MMC console does not update instantaneously.
SMB sessions and file enumeration are managed through a node-scoped view.
Some MMC features are not supported:
Local users and groups management
Windows Performance Monitor (PerfMon)
Live View audit


MMC SUPPORT: LIMITATIONS
The MMC support in the Data ONTAP 8.3 operating system has a few limitations:
MMC does not update instantaneously, so customers might experience a lag between the moment that
they create a share or set security permissions and the moment that MMC displays that share.
SMB sessions and file enumeration are managed through a node-scoped view. For example, an SMB
administrator who connects to a LIF that is hosted on node 3 and who tries to view open files in MMC
will not see a file that was opened by an SMB user who is connected to a LIF that is hosted on node 8.
Some MMC features are not supported. These features include management of local users and groups,
Windows Performance Monitor (PerfMon), and Live View audit.


Ken Asks a Question

I know there's more to learn about CIFS. What should I know about topics like name mapping, BranchCache, Dynamic Access Control, persistent handles, and copy offload?

Data ONTAP CIFS Administration course
STRSW-ILT-CIFSAD-REV06
Two-day instructor-led training

KEN ASKS A QUESTION


Namespace References
Clustered Data ONTAP File Access Management Guide for NFS
TR-4129: Namespaces in Clustered Data ONTAP


NAMESPACE REFERENCES


NFS References
Clustered Data ONTAP File Access Management Guide for NFS
Clustered Data ONTAP NFS Configuration Express Guide
TR-4067: Clustered Data ONTAP NFS Implementation Guide

Additional training:
Data ONTAP NFS Administration instructor-led training


NFS REFERENCES


SMB References
Clustered Data ONTAP File Access Management Guide for CIFS
Clustered Data ONTAP CIFS/SMB Configuration Express Guide
TR-4191: Best Practices Guide for Clustered Data ONTAP Windows
File Services
Additional training:
Data ONTAP SMB (CIFS) Administration instructor-led training


SMB REFERENCES


What Ken Has Learned in This Module

I've learned how my clients can access their data with the supported NAS protocols. I can configure NFSv3 and CIFS, create exports and shares, and set up client computers to access their NAS data.


WHAT KEN HAS LEARNED IN THIS MODULE


Exercise

Module 6: Implementing NAS Protocols

Estimated Time: 45 minutes


EXERCISE
Refer to your exercise guide.


Module 7

Implementing SAN Protocols


MODULE 7: IMPLEMENTING SAN PROTOCOLS


In this module, you learn fundamental SAN concepts and configure a simple storage virtual machine (SVM)
to serve data with the iSCSI protocol.


Unified Storage: Review

[Diagram: a NetApp FAS system on the corporate LAN provides unified storage. NFS and CIFS provide NAS (file-level) access, with the file system on the storage system; iSCSI, FCoE, and FC provide SAN (block-level) access, with the file system on the host.]

UNIFIED STORAGE: REVIEW


A SAN is a block-based storage system that uses FC, FCoE, and iSCSI protocols to make data available over
the network. Starting with the Data ONTAP 8.1 operating system, clustered Data ONTAP systems support
SANs on clusters of up to four nodes. In Data ONTAP 8.2 and later, SAN is supported in clusters of up to
eight nodes.


Ken Asks a Question

I have an SVM and data. How do I enable my storage customers to access data on LUNs by using both FC and iSCSI?


KEN ASKS A QUESTION


Basic Steps for Setting Up a System


1. Connect controllers, disks, and cables.
2. Set up and configure nodes.
3. Install software onto nodes. (Software is preinstalled on most systems.)
4. Initialize disks.
5. System setup: Create a cluster on the first node, then join additional nodes to the cluster.

This module covers the remaining steps:

6. Complete the initial configuration.
7. Create data aggregates.
8. Create a storage virtual machine (SVM).
9. Create data volumes and protocol configuration.


BASIC STEPS FOR SETTING UP A SYSTEM


Lesson 1

Basic SAN Implementation


LESSON 1: BASIC SAN IMPLEMENTATION


Scalable SAN Support

Data ONTAP 8.2 and 8.3

[Diagram: Windows, Red Hat, VMware ESX, HP-UX, Solaris, and AIX hosts access LUNs over FC, iSCSI, or FCoE.]


SCALABLE SAN SUPPORT


Clustered Data ONTAP continues to support Windows, Red Hat Linux, VMware ESX, HP-UX, and Solaris
hosts, and now it also supports AIX as a SAN host. To function with scalable SAN, all SAN client stacks
must support asymmetric logical unit access (ALUA).
With Data ONTAP 8.3, the maximum number of LUNs per cluster has been increased to 98,304 LUNs. This
increase allows 12,288 LUNs per node, times the maximum SAN cluster size of eight nodes.
Consult the NetApp Interoperability Matrix for details about supported versions of SAN hosts.
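The cluster-wide limit quoted above is simply the per-node limit scaled to the eight-node maximum; as a quick shell check:

```shell
# 12,288 LUNs per node x 8 SAN nodes = 98,304 LUNs per cluster
echo $((12288 * 8))   # → 98304
```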


SAN Protocols

Which protocols are used in a Data ONTAP SAN?
FC
iSCSI
FCoE

FCoE uses Data Center Bridging Ethernet (DCB Ethernet) capabilities to encapsulate the FC frame.

[Diagram: protocol stacks beneath the operating system or application and the SCSI layer. FC carries FCP directly; iSCSI runs over TCP/IP over Ethernet; FCoE carries FCP over DCB Ethernet.]


SAN PROTOCOLS
LUNs on a NetApp storage system can be accessed through either of the following:
An FC SAN fabric that uses the FC protocol
An Ethernet network that uses either the FCoE protocol or the iSCSI protocol

In all cases, the transport protocols (FC, FCoE, or iSCSI) carry encapsulated SCSI commands as the data
transport mechanism.


Topic for Discussion

What do you think is the most popular SAN protocol in the field?


TOPIC FOR DISCUSSION


Answer This Question


In clustered Data ONTAP 8.2 and later, what is the maximum number of nodes that can support a SAN protocol?

a. 4
b. 8
c. 16
d. 24


ANSWER THIS QUESTION


What Is a LUN?

A logical representation of a SCSI disk
Logical Blocks: 512 bytes

[Diagram: a LUN presented by the storage system appears to the host as a SCSI disk composed of 512-byte logical blocks.]


WHAT IS A LUN?
A LUN in Data ONTAP is a logical representation of an attached SCSI disk. As we learned earlier, SAN is
often called block-based storage. The block refers to the logical blocks that the host writes to, just as it
would write to an attached SCSI disk. Traditionally, these logical blocks are 512 bytes per sector.
Hard disk manufacturers have started using 4096-byte (4-KB) sectors, called Advanced Format, in new hard
disk platforms. At this time, Data ONTAP LUNs use the traditional SCSI standard of 512 bytes per sector.
NOTE: This slide is meant to simplify the understanding of a LUN.
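To make the 512-byte logical block size concrete, here is a small shell calculation; the 10-GiB LUN size is an assumption for illustration:

```shell
# Number of 512-byte logical blocks in a hypothetical 10 GiB LUN
lun_bytes=$((10 * 1024 * 1024 * 1024))
echo $((lun_bytes / 512))   # → 20971520
```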


Connecting Initiator to Target

How can you connect an initiator to a target?

[Diagram: the initiator host runs an application, a file system, and a SCSI driver and has Ethernet and FC ports; Disk 1 (C:) is local and Disk 2 (E:) is a LUN. The initiator is connected through a switch to target ports (e0a, 0a) on the controller or SVM, where SAN services, HA, and WAFL present the LUN from a FlexVol volume.]

CONNECTING INITIATOR TO TARGET


SAN Data Logical Interfaces: Clustered Data ONTAP

SAN data logical interfaces (data LIFs):
Are assigned a home node and port
Do not fail over
Are single-protocol (FC or iSCSI)
Can be moved to different ports or nodes within an SVM (LIF must be offline)
Can be grouped into port sets

Recommendation: Use at least one LIF per node, per SVM, per network.

[Diagram: two HA pairs joined by the cluster interconnect; a data SVM hosts LIF1 and LIF2 on ports e1a and e1b.]

SAN DATA LOGICAL INTERFACES: CLUSTERED DATA ONTAP

SAN data LIFs are not applicable to Data ONTAP operating in 7-Mode.
SAN data LIFs do not migrate or fail over the way that NAS LIFs do, but the LIFs can be moved to another node
or port in the SVM.
To move a data LIF with SAN protocols, use the network interface modify command:
1. To view the current status of a LIF, use the network interface show command.
2. Change the admin status of the LIF to down (offline).
3. Change the location of the LIF to a new node or port (use the network interface modify
command).
4. Change the admin status of the LIF to up (online).
5. Verify the changes by using the network interface show command.
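As a hedged sketch, the five steps above might look like the following session (the SVM name svm1, the LIF name lif2, and the node and port names are assumptions; exact syntax can vary by release):

```
c1::> network interface show -vserver svm1 -lif lif2
c1::> network interface modify -vserver svm1 -lif lif2 -status-admin down
c1::> network interface modify -vserver svm1 -lif lif2 -home-node cluster1-02 -home-port 0b
c1::> network interface modify -vserver svm1 -lif lif2 -status-admin up
c1::> network interface show -vserver svm1 -lif lif2
```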


iSCSI Architecture

Multipathing software is required.

[Diagram: a Windows initiator connects over Ethernet to target portal groups (LIFs) on the data SVM vs_iscsi, which presents a LUN from a FlexVol volume. The igroup My_IP_igroup contains the initiator IQN iqn.1999-04.com.a:system, with protocol iSCSI, OS type Windows, ALUA true, and port set myportset.]

ISCSI ARCHITECTURE
Data is communicated over ports. In an Ethernet SAN, the data is communicated by means of Ethernet ports.
In an FC SAN, the data is communicated over FC ports. For FCoE, the initiator has a converged network
adapter (CNA) and the target has a unified target adapter (UTA).


Targets and Naming

Data ONTAP 7-Mode:
Without vFiler units, the storage system is the target (single node name).
Each vFiler unit:
Supports only iSCSI
Is a separate target with its own node name (iSCSI Qualified Name [IQN])

Clustered Data ONTAP:
Each SVM is:
A separate target
Assigned a unique node name (IQN or worldwide node name [WWNN])

[Diagram: a 7-Mode HA system with the default vFiler0 unit and three additional vFiler units; a clustered system in which a data SVM presents a LUN from a FlexVol volume through LIF1 and LIF2.]

TARGETS AND NAMING


iSCSI Nodes

Each node has a unique name that is called an iSCSI Qualified Name (IQN).
All data SVMs with iSCSI enabled have unique IQNs.

[Diagram: the initiator iqn.1995-02.com.microsoft:base.learn.netapp.local connects through two LIFs to the target data SVM vs_iscsi (iqn.1992-08.com.netapp:sn.000:vs), which presents a LUN from a FlexVol volume.]


ISCSI NODES
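The IQN layout shown on the slide (iqn.yyyy-mm.reversed-domain, optionally followed by a colon and an identifier) can be checked with a short sketch. The regular expression and helper name below are illustrative simplifications, not part of any iSCSI toolkit or of Data ONTAP:

```python
import re

# Rough IQN shape: "iqn." + yyyy-mm + "." + reversed domain + optional ":identifier".
# This pattern is a simplification for illustration, not a full RFC 3720 validator.
IQN_PATTERN = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?$")

def is_valid_iqn(name: str) -> bool:
    return bool(IQN_PATTERN.match(name))

print(is_valid_iqn("iqn.1995-02.com.microsoft:base.learn.netapp.local"))  # True
print(is_valid_iqn("iqn.1992-08.com.netapp:sn.000:vs"))                   # True
print(is_valid_iqn("50:0a:09:80:8d:7d:c0:1c"))                            # False: a WWN, not an IQN
```

The two True examples are the initiator and target names from the slide; the last value is a worldwide name, which follows a different format entirely.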


Worldwide Name Format


N_Port ID Virtualization

0a

LIF1

0a

LIF2

N_Port ID Virtualization (NPIV)


has these features:
Enables multiple FC Node Port
(N_Port) IDs to share a single
physical N_Port

Data SVM

Allows more than one LIF to


log in to the fabric from a single
N_Port

Each SVM in the cluster


is assigned a unique
NPIV-based WWNN.

Requires FC switches that


support NPIV
Each LIF is assigned a unique NPIV-based
WWPN that is based on the SVM WWNN.


WORLDWIDE NAME FORMAT: N_PORT ID VIRTUALIZATION


Clustered Data ONTAP uses the MAC address of e0M on the first node of the cluster to create a worldwide
name (WWN) for the SVMs. Because MAC addresses are globally unique, and the NetApp Organizationally
Unique Identifier (OUI) is specific to NetApp, the WWN for each SVM on the cluster is also globally unique.


WWN Format
Clustered Data ONTAP Identification
1. Verify the port details:

c1::*> network fcp adapter show -node cluster1-02 -adapter 0a

              Node: cluster1-02
           Adapter: 0a
     Original WWNN: 50:0a:09:80:00:05:a2:40
      Adapter WWNN: 50:0a:09:80:8d:7d:c0:1c
      Adapter WWPN: 50:0a:09:85:8d:7d:c0:1c
       Queue Depth: 2048

2. List the SVM's WWNN:

c1::> vserver fcp show
                                  Status
Vserver  Target Name              Admin
-------- ------------------------ ------
svm1     20:37:00:a0:98:13:d5:d4  up

3. List the LIFs' WWPNs:

c1::> network interface show -vserver svm1
         Logical     Status      Network                   Current      Current
Vserver  Interface   Admin/Oper  Address/Mask              Node         Port
-------- ----------- ----------- ------------------------- ------------ -------
svm1     n1_fc_lif1  up/up       20:38:00:a0:98:13:d5:d4   cluster1-01  0a
         n1_fc_lif2  up/up       20:39:00:a0:98:13:d5:d4   cluster1-02  0a

(Diagram: a data SVM with LIF1 and LIF2, each hosted on physical port 0a of a different node.)


WWN FORMAT: CLUSTERED DATA ONTAP IDENTIFICATION


This slide illustrates how the entities are identified in clustered Data ONTAP.
Addressing for FC in clustered Data ONTAP is assigned as follows:
1. The MAC address of e0M on the first node of the cluster is used to generate a WWN for the SVM when
FC is enabled.
svm1 WWNN example: 20:00:00:a0:98:13:d5:d4
2. When a LIF is created, a WWPN is generated based on the WWNN.
LIF WWPN examples: 20:01:00:a0:98:13:d5:d4, 20:02:00:a0:98:13:d5:d4, etc.
3. If a second SVM is created, a WWN will be generated for that SVM. Only the numbers in bold in the
examples will change, but each SVM will have a unique address range.
svm2 WWNN example: 20:05:00:a0:98:13:d5:d4
NOTE: The first output on the slide uses the diagnostic privilege level to display the adapter's WWNN and
WWPN.
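The numbering pattern described in these notes can be mimicked in a few lines. This helper is purely illustrative (the actual WWPN assignment is done internally by Data ONTAP, and only the second octet varies in these examples):

```python
# Illustrative only: LIF WWPNs share the SVM WWNN's octets and differ in the
# second octet, as in the examples above (20:00:... -> 20:01:..., 20:02:...).
def derive_wwpns(wwnn: str, count: int) -> list:
    octets = wwnn.split(":")
    base = int(octets[1], 16)
    return [":".join([octets[0], format(base + i, "02x")] + octets[2:])
            for i in range(1, count + 1)]

print(derive_wwpns("20:00:00:a0:98:13:d5:d4", 2))
# ['20:01:00:a0:98:13:d5:d4', '20:02:00:a0:98:13:d5:d4']
```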


Lesson 2

SAN Configuration and


Multipathing


LESSON 2: SAN CONFIGURATION AND MULTIPATHING


Ken Asks a Question


I've heard that SAN data LIFs don't fail over
during an outage. How do my clients access
their SAN data when a node is down?


KEN ASKS A QUESTION


Windows Multipath Driver


Overview

Ethernet

Multiple paths require


multipath software

Microsoft Multipath
I/O (MPIO)
Ethernet

HA

Device-Specific
Module (DSM)

LUNa


WINDOWS MULTIPATH DRIVER: OVERVIEW


Microsoft Multipath I/O (MPIO) software is required any time that a Windows host has more than one path to
the storage system. The MPIO software presents a single disk to the operating system for all paths, and a
device-specific module (DSM) manages path failover. Without MPIO software, the operating system could
see each path as a separate disk, which can lead to data corruption.
On a Windows system, there are two main components to any MPIO configuration: the Windows MPIO
components and a DSM. MPIO is supported for Windows Server 2003, Windows Server 2008, and Windows
Server 2012 systems.
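The collapsing behavior described above can be sketched with a toy model. The path names and serial numbers below are made up; real MPIO keys on SCSI device identifiers reported by the target:

```python
# Toy model of MPIO aggregation: every path to the same LUN reports the
# same serial number, and the multipath layer groups paths by serial so
# the OS sees one disk instead of one disk per path.
def collapse_by_serial(path_serials):
    disks = {}
    for path, serial in path_serials.items():
        disks.setdefault(serial, []).append(path)
    return disks

paths = {
    "session-to-LIF1": "NETAPP-LUN-0001",
    "session-to-LIF2": "NETAPP-LUN-0001",
}
print(collapse_by_serial(paths))  # one logical disk with two paths
```

Without this grouping step, each entry in `paths` would surface as its own disk, which is the data-corruption risk the notes warn about.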


Multipath I/O Example


LUN

Multipath Driver
fc0

0d

fc1

0e

0d

0e

HA

LUNa

MULTIPATH I/O EXAMPLE


As paths are added between the storage controllers and the host, the LUN is seen once through each path.
When a multipath driver is added to the host, the multipath driver can present the LUN as a single instance.
Four paths are illustrated on this slide. Two of the paths are active and optimized. Two of the paths are active
and non-optimized.
NOTE: The paths in this illustration are simplified for conceptual purposes. Depending on the platform and
version, paths might appear differently, physically or logically, but the concept of ALUA states
(active/optimized, active/non-optimized, or unavailable) will be the same. For example:


Data ONTAP 7-Mode FC active/non-optimized paths are over the high-availability (HA) interconnect.
Clustered Data ONTAP active/non-optimized paths are over the cluster interconnect.
E-Series controller active/non-optimized paths are over the dual-active storage system back plane.


ALUA Multipathing
Asymmetric logical unit access (ALUA) identifies a group of
target ports that provide a common failover behavior for a LUN.
Access states:
Active/optimized

Active/non-optimized
Standby (not used by Data ONTAP)
Unavailable


ALUA MULTIPATHING
ALUA, also called Target Port Group Support (TPGS), identifies a set of one or more SCSI target ports that are
unified by a purpose.

Active/optimized: While in the active/optimized state, all the target ports in the Target Port Group can
immediately access the LUN.
Active/non-optimized: While in the active/non-optimized state, the device server supports all commands
that the LUN supports. The execution of specific commands, especially those that involve data transfer or
caching, might operate with lower performance than if the Target Port Group were in the
active/optimized state.
Unavailable: The target port returns a CHECK CONDITION status with the sense key set to NOT
READY and an additional sense code of LOGICAL UNIT NOT ACCESSIBLE, TARGET PORT IN
UNAVAILABLE STATE.

NOTE: Do not confuse a Target Port Group (a group of target ports) with a portal group (sometimes called a
Target Portal Group on the storage), which is a list of IP addresses and ports that listen for iSCSI
connections.


Active and Optimized (Direct)


Multipath Driver
with ALUA Support

Direct

LUN


ACTIVE AND OPTIMIZED (DIRECT)


The multipath driver keeps track of which paths are available and which paths are not working well. The
multipath driver proactively routes traffic over the best possible path.
A path is considered optimized, or direct, when it has a direct connection to the storage controller that owns
the LUN. ALUA adds SCSI mechanisms to indicate which target ports are fully accessible to a specific LUN
and which have only partial access or higher latency to it.
ALUA identifies the path type (active/optimized or active/non-optimized). The multipath driver will only use
the active/optimized (direct) paths that ALUA identifies and the multipath driver applies a path selection
policy to the direct path(s). The path selection policy is chosen by the administrator.


Active and Non-optimized (Indirect)


Multipath Driver
with ALUA Support

Indirect

Indirect

Indirect

LUN


ACTIVE AND NON-OPTIMIZED (INDIRECT)


A path is considered non-optimized, or indirect, when it does not have a direct connection, in other words,
when it is connected to a storage controller that does not own the LUN.
If no direct paths are available, the multipath driver will then take the active/non-optimized (indirect) paths. It
will apply the path selection policy to the indirect path(s) in the same way it did when the direct paths were
available. You would never see a mix of direct and indirect paths.
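The selection rule described in these two sections (use only direct paths while any exist, fall back to indirect paths otherwise, and never mix the two) can be sketched as a small function. The state strings follow the ALUA names used earlier; this is a conceptual sketch, not an actual multipath driver:

```python
# Conceptual ALUA-aware path selection: active/optimized paths are
# preferred; active/non-optimized paths are used only when no optimized
# path remains; unavailable paths are never returned.
def usable_paths(path_states):
    optimized = [p for p, s in path_states.items() if s == "active/optimized"]
    if optimized:
        return optimized
    return [p for p, s in path_states.items() if s == "active/non-optimized"]

states = {
    "0d-direct": "active/optimized",
    "0e-direct": "active/optimized",
    "0d-partner": "active/non-optimized",
    "0e-partner": "active/non-optimized",
}
print(usable_paths(states))   # only the two direct paths
states["0d-direct"] = "unavailable"
states["0e-direct"] = "unavailable"
print(usable_paths(states))   # now only the indirect paths
```

The administrator-chosen path selection policy (for example, round robin) would then be applied across whichever list this function returns.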


Answer This Question


Clustered Data ONTAP supports ALUA with which
protocols?
a. FC only
b. FC and FCoE only
c. FC, FCoE, and iSCSI only

d. FC, FCoE, iSCSI, and NFS only


ANSWER THIS QUESTION


Ken Asks a Question


Okay, I understand how it works. Now what
steps do I take to configure my cluster to
serve iSCSI?


KEN ASKS A QUESTION


iSCSI Implementation Steps


1. Add the iSCSI licenses for the cluster.

2. Enable iSCSI traffic for an SVM or vFiler unit.


3. Create iSCSI LIFs.
4. Create an initiator group (igroup).
5. Create a LUN.
6. Map the LUN to the appropriate igroup.
7. Prepare the LUN on the host computer.


ISCSI IMPLEMENTATION STEPS


These are the basic steps to implement iSCSI for an SVM on clustered Data ONTAP. You can enable iSCSI
by using either the CLI or System Manager UI.


Lesson 3
LUN Access


LESSON 3: LUN ACCESS


Windows iSCSI Implementation


Identify the iSCSI Node Name

iSCSI initiator name

This prompt may appear the


first time that you start the
iSCSI initiator.


WINDOWS ISCSI IMPLEMENTATION: IDENTIFY THE ISCSI NODE NAME


The iSCSI software initiator creates the iSCSI connection on the Windows host. For Windows Server 2003,
you must download and install the software initiator. The iSCSI software initiator is built in to Windows
Server 2008 and Windows Server 2012.
If this is the first time that this machine has been used as an iSCSI initiator, a dialog box appears, which requests
that you turn on the service. Click Yes. The iSCSI Initiator Properties dialog box appears.


Windows iSCSI Implementation


Target Discovery Methods

Initiator

Ethernet

Administrator must tell the host where to


discover the target by using either the
target portal IP address or an Internet
Storage Name Service (iSNS) server.

IP
SAN

Target
(controller
or SVM)


Ethernet

HA


WINDOWS ISCSI IMPLEMENTATION: TARGET DISCOVERY METHODS


The administrator must direct the software initiator on the host to discover the target. Discovery can be
performed in one of two methods: the send targets method (entering the target portal IP address) or by using
an iSNS server. These slides illustrate the send targets method, because it is the most common method and
doesn't require another server to implement.


Windows iSCSI Implementation


Target Discovery

1. Click Discover Portal.


2. Enter the IP address for


one of your iSCSI LIFs.


WINDOWS ISCSI IMPLEMENTATION: TARGET DISCOVERY


To discover an iSCSI target:
1. In Server Manager, from the Tools menu, select iSCSI Initiator.
2. If this is the first time that this machine has used an iSCSI initiator, a dialog box appears, which requests
that you turn on the service. Click Yes.
The iSCSI Initiator Properties dialog box appears.
3. Click the Discovery tab.
4. Click Discover Portal, and then enter the following:

For 7-Mode: Enter the IP address for one of your interfaces on the storage system or target vFiler unit and click
OK.
For clustered Data ONTAP: Enter the IP address for one of your LIFs on the target SVM and click OK.

5. Click the Targets tab, then select the discovered targets IQN and click Connect.
6. In the Connect To Target dialog box, select Enable multi-path and click OK.
7. Verify that the target now has a status of Connected (this step is shown on the next slide).


Windows iSCSI Implementation


Target Discovered

Verify that the target has been


discovered but is inactive.


WINDOWS ISCSI IMPLEMENTATION: TARGET DISCOVERED


When a target is discovered, it appears in the iSCSI Initiator properties as Inactive.


Windows iSCSI Implementation


Connect to a Target

1. Select the
IQN.

2. Click Connect.

3. In the Connect To Target


dialog box, select Enable
multi-path and click OK.


WINDOWS ISCSI IMPLEMENTATION: CONNECT TO A TARGET


Select the inactive target and click the Connect button. The Connect To Target dialog box opens. In that
dialog box, you can enable persistent bindings (Favorite Targets), enable multipath, and modify advanced
options.


Windows iSCSI Implementation


Verify the Session
2. Verify the session.

1. Click Properties.

Use iscsi session show to verify on storage system.



WINDOWS ISCSI IMPLEMENTATION: VERIFY THE SESSION


When a successful connection is made with the storage system, a session will be created. If multiple paths are
available, a session will need to be created for each path (or LIF).
You can also display information about sessions or connections on the storage as well.
7-Mode
The iscsi session show command displays session information, and the iscsi
connection show command displays connection information.
Clustered Data ONTAP
The vserver iscsi session show command displays session information, and the vserver iscsi
connection show command displays connection information.
The session information is also available through OnCommand System Manager.


Data ONTAP Portal Groups


Data ONTAP 7-Mode
system1> iscsi tpgroup show
TPGTag  Name         Member Interfaces
1000    e0a_default  e0a
1001    e0b_default  e0b
Each target interface has a portal group, by
default, with a unique tag (a Target Portal
Group tag, or TPGT).

Clustered Data ONTAP


c1::> vserver iscsi interface show -vserver svm1
         Logical                      IP            Curr         Curr
Vserver  Interface      TPGT          Address       Node         Port
-------- -------------- ----  ------------- ------------ ----
svm1     lif_iscsi_n1   1026  192.168.0.1   cluster1-01  e0c


DATA ONTAP PORTAL GROUPS


When a session is created between an initiator and a target, a session ID is generated. Session IDs are
typically long strings of digits, which can be difficult to use to identify a particular session. An
easier way to identify a session by port (in Data ONTAP 7-Mode) or by LIF (in clustered Data ONTAP) is to
use the Target Portal Group tag.
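As a small illustration of why the tag is convenient, a lookup keyed by TPGT is far easier to work with than a raw session ID. The table below simply mirrors the example output above; it is not a Data ONTAP data structure:

```python
# Hypothetical lookup pairing each iSCSI LIF with its Target Portal Group
# tag, mirroring the 'vserver iscsi interface show' example output.
TPGT_BY_LIF = {"lif_iscsi_n1": 1026}

def lif_for_tpgt(tpgt):
    for lif, tag in TPGT_BY_LIF.items():
        if tag == tpgt:
            return lif
    return None

print(lif_for_tpgt(1026))  # lif_iscsi_n1
```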


Verifying Session
Verify sessions:
Data ONTAP 7-Mode
system> iscsi session show

Clustered Data ONTAP


c1::> vserver iscsi session show -vserver vsISCSI2
          Tpgroup         Initiator                   Initiator
Vserver   Name      TSIH  Name                        ISID               Alias
--------- --------  ----  --------------------------  -----------------  ------
vsISCSI2  i2LIF1       5  iqn.1991-05.com.microsoft:win-frtp2qb78mr
                                                      40:00:01:37:00:00


VERIFYING SESSION


Ken Asks a Question


The initiator is all set up. Where's the data?


KEN ASKS A QUESTION


Creating an igroup
Create an igroup:
Data ONTAP 7-Mode
system> igroup create -i -t windows ig_myWin2 iqn.1991-05.com.microsoft:win-frtp2qb78mr

Clustered Data ONTAP


c1::> lun igroup create -vserver vsISCSI2 -igroup ig_myWin2 -protocol iscsi
-ostype windows -initiator iqn.1991-05.com.microsoft:win-frtp2qb78mr

Verify an igroup:
Data ONTAP 7-Mode
system> igroup show

Clustered Data ONTAP


c1::> igroup show
Vserver   Igroup     Protocol  OS Type  Initiators
--------- ---------- --------- -------- ------------------------------------------
vsISCSI2  ig_myWin2  iscsi     windows  iqn.1991-05.com.microsoft:win-frtp2qb78mr


CREATING AN IGROUP


Verifying igroups
Verify igroups:
Data ONTAP 7-Mode
system> igroup show -v

Clustered Data ONTAP


c1::> igroup show -instance ig_myWin2
      Vserver Name: vsISCSI2
       Igroup Name: ig_myWin2
          Protocol: iscsi
              ALUA: true
        Initiators: iqn.1991-05.com.microsoft:win-frtp2qb78mr (logged in)

VERIFYING IGROUPS


Creating a LUN
Create a fully provisioned LUN:

Data ONTAP 7-Mode


system> lun create -s 50MB -t windows_2008 /vol/vol1/lun1

Clustered Data ONTAP


c1::> lun create -vserver vsISCSI2 -volume vol1
-lun lun_vsISCSI2_1 -size 50MB
-ostype windows_2008


CREATING A LUN


Mapping a LUN
Map a LUN to an igroup:
Data ONTAP 7-Mode
system> lun map /vol/vol1/lun1 ig_myWin2

Clustered Data ONTAP


c1::> lun map -vserver vsISCSI2 -volume vol1
-lun lun_vsISCSI2_1 -igroup ig_myWin2


MAPPING A LUN


Selective LUN Mapping in 8.3

LIF1 LIF2

D D

LIF1 LIF2

LIF1 LIF2

LIF1 LIF2

Local nodes: The node that owns the LUN and its partner report
the LUN to the host (also called reporting nodes).
Remote nodes: All other nodes in the cluster do not report.
For information about Selective LUN Mapping and LUN Mobility,
see the SAN Implementation course.

SELECTIVE LUN MAPPING IN 8.3


Selective LUN Mapping has been added in clustered Data ONTAP 8.3 to enable Data ONTAP to
transparently manage paths to a LUN. In previous versions of Data ONTAP, port sets were used to manually
manage paths. When a LUN is mapped in 8.3, Data ONTAP identifies the node that owns the aggregate with
the LUN and its HA partner as reporting nodes (also called local nodes). The reporting nodes report the LUN
to the host.
NOTE: LUNs that were created in versions of Data ONTAP before 8.3 and then upgraded will report on all
nodes. LUN mapping must be changed explicitly to use Selective LUN Mapping.
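The reporting-node rule described here can be sketched as follows. The HA-pair table is a made-up example, since Data ONTAP tracks pair membership internally:

```python
# Sketch of Selective LUN Mapping: only the node that owns the LUN's
# aggregate and its HA partner report the LUN to the host. The pair
# layout below is hypothetical.
HA_PAIRS = [("cluster1-01", "cluster1-02"), ("cluster1-03", "cluster1-04")]

def reporting_nodes(owning_node):
    for pair in HA_PAIRS:
        if owning_node in pair:
            return set(pair)
    raise ValueError("unknown node: " + owning_node)

print(sorted(reporting_nodes("cluster1-03")))  # ['cluster1-03', 'cluster1-04']
```

All other nodes in the cluster (the remote nodes) would simply not appear in the returned set, so they never report the LUN.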


Answer This Question


What is required for a LUN to serve data in a clustered
Data ONTAP environment?
a. A LUN requires a junction path to the global namespace.

b. A LUN requires igroup-to-LUN mapping.


c. A LUN requires a mount point in the volume.
d. A LUN requires LUN reservation to be enabled.

e. All of the above



ANSWER THIS QUESTION


Windows LUN Implementation


Discover LUN

To configure an NTFS volume LUN, first, discover the LUN by


selecting Disk Management > Rescan Disks.

In Windows, a LUN
appears as a disk.


WINDOWS LUN IMPLEMENTATION: DISCOVER LUN


There are many ways to discover and prepare the LUN in Windows. Each version of Windows may have
slightly different tools that can be used. This module illustrates the most common method, which is used in
many versions. Be aware that in Windows, a LUN appears as a disk and is labeled as a disk.
Open Computer Management and select Disk Management. If the LUN that you created is not displayed,
rescan disks by right-clicking Disk Management or, from the Action menu, select Rescan Disks.


Windows LUN Implementation


Initialize Disk

1. Right-click the new disk and select Online.

2. Right-click the disk and select Initialize Disk.


WINDOWS LUN IMPLEMENTATION: INITIALIZE DISK


Depending on how many LUNs you mapped, one or more disks may appear. Identify the disk that you want
to prepare.


Bring the disk online by right-clicking the box to the left of the disk and selecting Online.
To initialize the disk, right-click again and select Initialize Disk.


Windows LUN Implementation


Provisioning a Volume (1 of 3)

Right-click the new disk


and select New Simple
Volume.


The New Simple


Volume Wizard
launches.


WINDOWS LUN IMPLEMENTATION: PROVISIONING A VOLUME (1 OF 3)


Now that the disk (the LUN presented to Windows) is online and initialized, it is necessary to provision
a volume to put data on. There are many ways to provision a volume in Windows. This module illustrates the
most common method, the New Simple Volume Wizard from the Disk Management utility.
In the Disk Management utility, launch the New Simple Volume Wizard by right-clicking the disk that you
want to provision and selecting New Simple Volume.


Windows LUN Implementation


Provisioning a Volume (2 of 3)

1. Specify the volume


size.

2. Specify the drive letter


or mount point.


WINDOWS LUN IMPLEMENTATION: PROVISIONING A VOLUME (2 OF 3)


You need to specify the size of the volume; typically this size is equal to the LUN size. In other words, the
volume would span the whole LUN.
You also need to select a way to access the volume, by assigning a drive letter or a mount point. You can also
choose not to do anything at this time if you do not want the volume to be accessible.


Windows LUN Implementation


Provisioning a Volume (3 of 3)
3. Specify the file system
format, allocation unit size,
and a volume label.

4. Verify the
configuration and click
Finish.


WINDOWS LUN IMPLEMENTATION: PROVISIONING A VOLUME (3 OF 3)


You format the volume with a file system, typically NTFS. This is also a good time to label
the volume so that it is easier to identify.
Lastly, verify the settings and click Finish to complete the process.


Ken Asks a Question


Where can I learn more about topics like FC and FCoE
SAN, configuring Linux hosts, Foreign LUN import, LUN
mobility enhancements, and SnapDrive software?
The SAN Implementation instructor-led course:
Implementation details about using Windows and Linux as
initiators
Information about SnapDrive for Windows and SnapDrive
for UNIX

The Architecting SAN on NetApp Storage instructorled course:


Details about FC and FCoE implementation
Steps for troubleshooting

Take both courses and prepare for the NCIE-SAN


certification exams.


KEN ASKS A QUESTION


SAN References
Clustered Data ONTAP SAN Administration Guide

Clustered Data ONTAP SAN Configuration Guide


Clustered Data ONTAP SAN Express Setup Guide
Clustered Data ONTAP iSCSI Configuration for Windows
Express Guide
Clustered Data ONTAP FC Configuration for Windows
Express Guide
TR-4080: Best Practices for Scalable SAN in Clustered Data
ONTAP

SAN REFERENCES


What Ken Has Learned in This Module


After configuring my clustered Data ONTAP
system and client server, I detected my LUN and
started to write data to it. It was easy to do by
using OnCommand System Manager and the
CLI.


WHAT KEN HAS LEARNED IN THIS MODULE


Exercise

Module 7: Implementing SAN


Protocols

Estimated Time: 45 minutes


EXERCISE
Please refer to your exercise guide.


Module 8

Snapshot Copies


MODULE 8: SNAPSHOT COPIES


Snapshot technology enables simple point-in-time local backups of FlexVol volumes. It is also the basis for
many other Data ONTAP features. In this module you learn basic Snapshot concepts, how Snapshot copies
use storage space, how to create Snapshot copies, and how to restore data from a Snapshot copy.

Clustered Data ONTAP Administration: Snapshot Copies

Ken Asks
My company backs up all data to tape, but
tape is expensive and time consuming.
We need a quick, inexpensive, space-efficient way to instantly back up the data that
we use every day. We would also like our
users to retrieve backed-up data without
needing a storage administrator to intervene.
How can we accomplish all of that?


KEN ASKS


Lesson 1

Defining Snapshot Technology


LESSON 1: DEFINING SNAPSHOT TECHNOLOGY


Snapshot Technology
A Snapshot copy is a local read-only image of the active file system at a
point in time.
The benefits of Snapshot technology are:
Nearly instantaneous application data backups
Fast recovery of data that is lost due to:
Accidental data deletion
Accidental data corruption

Snapshot technology is the foundation for these NetApp products and features:

SnapRestore
SnapDrive
FlexClone
SnapProtect

SnapManager
SnapMirror
SnapVault
Deduplication


SNAPSHOT TECHNOLOGY
Snapshot technology is a key element in the implementation of the WAFL (Write Anywhere File Layout) file
system:

A Snapshot copy is a local, read-only, space-efficient, point-in-time image of data in a volume or aggregate.
A Snapshot copy is only a picture of the file system, and it does not contain any data file content.
Snapshot copies are used for backup and error recovery.
Snapshot copies are only for local file recovery. If a volume is destroyed, then the Snapshot copies are
destroyed.

The Data ONTAP operating system automatically creates and deletes Snapshot copies of data in volumes to
support commands that are related to Snapshot technology.


Volume Snapshot Functionality

Create Snapshot Copies
 Manually
 Automatically, based on a schedule
 Through management and backup tools

Restore Through NAS Client
 UNIX: .snapshot directory
 Windows: ~snapshot directory
 NOTE: These are hidden directories.

Restore Through SnapRestore
 Entire volume or individual file
 License required
 Can be restored manually or by using management tools


VOLUME SNAPSHOT FUNCTIONALITY


Snapshot functionality is controlled by management, which provides the UI for manual Snapshot copies and
the job manager policies and schedules for automated Snapshot operations. Each volume can have a Snapshot
policy associated with it. A policy can include multiple schedules, so that Snapshot copies are created hourly,
daily, weekly, and so on. A policy also indicates how many of each type of Snapshot copy to retain before
deleting older copies. For example, you can keep four hourly Snapshot copies, and when the fifth one is
created, the oldest one is removed, such that a continuously updated group of the previous four hourly
Snapshot copies is retained.
Clients can see and use the .snapshot directories, so users can restore their own data without administrator
intervention. When an entire volume needs to be restored from a Snapshot copy, an administrator uses the
volume snapshot promote command, which is basically the same as restoring the volume by using
SnapRestore technology. The entire Snapshot copy is promoted, replacing the entire volume. Individual files
can be restored only through a client.
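The retention behavior described above (for example, keep four hourly copies, then delete the oldest when a fifth is created) can be sketched in plain Python. This is a simplified illustrative model, not Data ONTAP code:

```python
from collections import deque

def take_snapshot(snapshots, name, keep):
    """Record a new Snapshot copy; once the retention count is
    exceeded, the oldest copy is deleted first."""
    snapshots.append(name)
    while len(snapshots) > keep:
        snapshots.popleft()

hourly = deque()
for hour in range(1, 6):  # five hourly copies with a retention count of 4
    take_snapshot(hourly, "hourly.%02d" % hour, keep=4)

# A continuously updated group of the previous four copies is retained:
print(list(hourly))  # ['hourly.02', 'hourly.03', 'hourly.04', 'hourly.05']
```

The same rotation logic applies independently to each schedule (hourly, daily, weekly) in a policy.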


Data ONTAP Snapshot Copy

[Slide figure: the production (active) file system and two Snapshot copies (SNAP #1 and SNAP #2) pointing into the same data blocks; after a write, the active file system points to a new block F while the Snapshot copies still reference the original block F.]

DATA ONTAP SNAPSHOT COPY


Before a Snapshot copy is created, there must be a file system tree that points to data blocks that contain
content. When the Snapshot copy is created, the file structure metadata is saved. The Snapshot copy points to
the same data blocks as the file structure metadata that existed when the Snapshot copy was created.
Creating a Snapshot copy has no significant impact on disk space. Because the file structure takes up little
space, and no data blocks must be copied to disk, a new Snapshot copy consumes almost no additional disk
space. In this case, the phrase "consumes no space" really means no appreciable space. The so-called top-level root inode, which contains metadata that is necessary to define the Snapshot copy, is 4 KB.
Snapshot copies begin to use space when data is deleted or modified. The WAFL file system writes the new
data to a new block (F) on the disk and changes the root structure for the active file system to point to the
new block.
Meanwhile, the Snapshot copy still references the original block F. Any time that a Snapshot copy references
a data block, that block remains unavailable for other uses, which means that Snapshot copies start to
consume disk space only when the file system data changes after a Snapshot copy is created.
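The block-sharing behavior described above can be modeled with a short Python sketch. This is a toy model for illustration only; the class and method names are not Data ONTAP APIs:

```python
class Volume:
    """Toy copy-on-write model: the active file system and each
    Snapshot copy are just maps from file name to block id, so data
    blocks are shared until they diverge."""

    def __init__(self):
        self.blocks = {}      # block id -> contents
        self.active = {}      # active file system: file -> block id
        self.snapshots = {}   # snapshot name -> {file -> block id}
        self._next_block = 0

    def write(self, file, data):
        # WAFL-style behavior: writes always go to a new block
        self.blocks[self._next_block] = data
        self.active[file] = self._next_block
        self._next_block += 1

    def snapshot(self, name):
        # A Snapshot copy saves only file-structure metadata
        self.snapshots[name] = dict(self.active)

    def blocks_pinned_by_snapshots(self):
        # Blocks no longer in the active file system but still
        # referenced by a snapshot: the space the snapshots consume
        live = set(self.active.values())
        held = {b for s in self.snapshots.values() for b in s.values()}
        return held - live

vol = Volume()
vol.write("F", "original contents")
vol.snapshot("snap1")
print(vol.blocks_pinned_by_snapshots())  # set(): no extra space at creation
vol.write("F", "new contents")           # active FS points at a new block
print(vol.blocks_pinned_by_snapshots())  # {0}: snap1 now pins the old block F
```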


Answer This Question

When does a Snapshot copy start taking up space?


a. When a Snapshot copy is first created
b. When SnapRestore technology is first initiated
c. When the Snapshot copy is reverted

d. Only when the file system data changes after the Snapshot
copy is created


ANSWER THIS QUESTION


Answer This Question

Snapshot technology is foundational for which of these NetApp products?
a. SnapRestore
b. SnapDrive
c. FlexClone
d. All of the above


ANSWER THIS QUESTION


Snapshot Copies and Inodes


Each volume can contain up to 255 Snapshot copies

The inodes of Snapshot copies are read-only


When the Snapshot inode is created:

The Snapshot copy points to the same disk blocks as the root
inode
New Snapshot copies consume only the space that is required for
the inode itself


SNAPSHOT COPIES AND INODES


A Snapshot copy is a frozen, read-only image of a traditional volume, a FlexVol volume, or an aggregate. A
Snapshot copy reflects the state of the file system at the time that the Snapshot copy was created. Snapshot
copies are your first line of defense for backing up and restoring data. You can configure the Snapshot copy
schedule.


Answer This Question

Which hidden directory contains Snapshot copies when viewed from a CIFS client?
a. .snapshot
b. ~snapshot
c. %snapshot
d. ~backup


ANSWER THIS QUESTION


Lesson 2

Managing Snapshot Space Usage


LESSON 2: MANAGING SNAPSHOT SPACE USAGE


Ken Asks
How can I make sure that my volumes
don't fill up with Snapshot copies?


KEN ASKS


Snapshot Reserve

Snapshot reserve:
 Determines the storage space that is set aside for Snapshot copies
 Is not used for file system writes
 Can be increased or decreased

Default Snapshot reserve:
 Reserve for aggregates = 0%
 Reserve for volumes = 5%

To change the Snapshot reserve:

Data ONTAP 7-Mode
system> snap reserve -V vol01 10

Clustered Data ONTAP
c1::> vol modify -vserver svm1 -volume vol01 -percent-snapshot-space 10

[Slide figure: pie chart of aggregate space showing the active file system (95%) and the default volume Snapshot reserve (5%); the default aggregate Snapshot copy reserve is 0%.]

SNAPSHOT RESERVE
The snap reserve command determines the percentage of the storage space that is set aside for Snapshot
copies.
You can change the percentage of storage space that is set aside for the Snapshot copies of a volume. By
default, volume Snapshot copies are stored in the Snapshot reserve storage space. The Snapshot reserve space
is not counted as part of the volume's disk space that is allocated for the active file system. When a Snapshot
copy is first created, none of the Snapshot reserve is consumed. The Snapshot copy is protecting the active
file system at the point in time when the Snapshot copy was created. As the Snapshot copy ages, and the
active file system changes, the Snapshot copy begins to own the data blocks that were deleted or changed by
the current active file system. The Snapshot copy begins to consume the Snapshot reserve space. The amount
of disk space that is consumed by Snapshot copies can grow, depending on the length of time that a Snapshot
copy is retained and the rate of change of the volume.
In some cases, if the Snapshot copy is retained for a long period of time, and the active file system has a high
rate of change, the Snapshot copy can consume 100% of the Snapshot reserve, which is the full 5% of the disk
space that is set aside for Snapshot copies. If the Snapshot copy is not deleted, the Snapshot copy can
consume a portion of the disk space that is intended for the active file system. You monitor and manage
Snapshot copies so that disk space is properly managed.
NOTE: Even if the Snapshot reserve is set to 0%, you can still create Snapshot copies. If there is no Snapshot
reserve, Snapshot copies, over time, consume blocks from the active file system.
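The fill-then-spill behavior described in the note above can be expressed as simple arithmetic. The following Python sketch (an illustration with whole-GB numbers, not Data ONTAP code) shows snapshot blocks consuming the reserve first and spilling into active file system space only after the reserve is full:

```python
def snapshot_space_usage(volume_size_gb, reserve_pct, snap_used_gb):
    """Return (reserve_used, active_fs_space_used_by_snapshots).
    Snapshot blocks fill the Snapshot reserve first; once the reserve
    is full, further snapshot growth spills into the space intended
    for the active file system."""
    reserve = volume_size_gb * reserve_pct // 100
    reserve_used = min(snap_used_gb, reserve)
    spill = max(0, snap_used_gb - reserve)
    return reserve_used, spill

# 100 GB volume with the default 5% volume Snapshot reserve
print(snapshot_space_usage(100, 5, 3))  # (3, 0): fits within the reserve
print(snapshot_space_usage(100, 5, 8))  # (5, 3): 3 GB taken from active space
```

With the reserve set to 0, every snapshot block comes out of active file system space, which matches the NOTE above.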


Snapshot Disk Consumption

Deleting a Snapshot copy:

Data ONTAP 7-Mode
system> snap delete
system> snap autodelete

Clustered Data ONTAP
c1::> vol snapshot delete
c1::> vol snapshot autodelete

Estimating space savings:
 Calculates the volume space that can be reclaimed
 Is available in advanced mode

Data ONTAP 7-Mode
system> snap reclaimable

Clustered Data ONTAP
c1::*> volume snapshot compute-reclaimable

[Slide figure: pie chart of aggregate space showing the active file system (95%), the Snapshot reserve (5%), and the WAFL reserve (10%).]

SNAPSHOT DISK CONSUMPTION


Especially in environments with high rates of data change, Snapshot copies can consume significant space in
a volume. Older Snapshot copies can be deleted manually or can be automatically deleted based on predefined
thresholds. You can estimate the potential space that can be reclaimed by deleting Snapshot copies by using
the compute-reclaimable options.
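Conceptually, the reclaimable estimate only counts blocks that no surviving snapshot (and not the active file system) still references. A small Python sketch of that set arithmetic (illustrative only, not how Data ONTAP implements it):

```python
def reclaimable_blocks(block_owners, snapshots_to_delete):
    """Blocks freed by deleting the named snapshots: a block is
    reclaimed only if no survivor (another snapshot or the active
    file system) still references it.
    block_owners maps an owner name to its set of block ids."""
    survivors = [blocks for owner, blocks in block_owners.items()
                 if owner not in snapshots_to_delete]
    still_used = set().union(*survivors) if survivors else set()
    candidates = set().union(*(block_owners[s] for s in snapshots_to_delete))
    return candidates - still_used

usage = {
    "active":   {10, 11, 12},   # current active file system
    "hourly.0": {1, 2, 10},
    "hourly.1": {2, 3, 10},
}
# Block 2 is shared with hourly.1 and block 10 with the active file
# system, so deleting hourly.0 frees only block 1.
print(sorted(reclaimable_blocks(usage, {"hourly.0"})))  # [1]
```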


Answer This Question

When is the active file system used to store Snapshot copies?


a. When the Snapshot reserve is completely full
b. When the Snapshot reserve is set to 20%
c. The active file system is always used for Snapshot copies.

d. After turning on fractional reserve


ANSWER THIS QUESTION


Lesson 3

Creating Snapshot Copies


LESSON 3: CREATING SNAPSHOT COPIES


Snapshot Copy Management Interfaces


Use these interfaces to manage Snapshot copies:
The storage system CLI
NetApp OnCommand System Manager
NetApp data management and data protection software, such
as:

SnapDrive for Windows and SnapDrive for UNIX


SnapManager software
Snap Creator Framework
OnCommand Unified Manager
SnapProtect management software

Third-party data management software



SNAPSHOT COPY MANAGEMENT INTERFACES


To manage Snapshot and SnapRestore operations, you can always use the storage CLI or NetApp data
protection GUI interfaces. NetApp management software includes NetApp System Manager, OnCommand
Unified Manager, SnapManager, SnapProtect, SnapDrive for Windows, SnapDrive for UNIX, and Snap
Creator Framework. You can also use third-party data management programs, such as those from Symantec.


Ken Asks
When are Snapshot copies triggered? Can I
create one myself on demand? Do I have
control over a schedule? Can I back up
different volumes on different schedules?


KEN ASKS


Snapshot Commands

These basic 7-Mode and clustered Data ONTAP commands enable you to create and manage Snapshot copies.

system> snap create
c1::> vol snapshot create
  Creates Snapshot copies for a specified volume

system> snap delete
c1::> vol snapshot delete
  Deletes the named Snapshot copy from the specified volume

system> snap reserve
c1::> vol modify -percent-snapshot-space
  Changes the Snapshot reserve

system> snap sched
c1::> vol snapshot policy add-schedule
  Displays the schedule for all volumes


SNAPSHOT COMMANDS
Here are the basic Snapshot commands that you use on the storage system CLI:


To create and delete Snapshot copies, use the snap create command for 7-Mode or the volume
snapshot create command for clustered Data ONTAP.
To modify the Snapshot reserve on 7-Mode, use the snap reserve command. On a cluster, you
modify the Snapshot reserve at the volume level.
Use snap sched on 7-Mode to manipulate Snapshot schedules. Clustered Data ONTAP uses snapshot
policies to apply schedules to volumes.


NetApp System Manager


Create a Snapshot Copy

Select the volume, and then select Snapshot Copies > Create to create a new Snapshot copy.


NETAPP SYSTEM MANAGER: CREATE A SNAPSHOT COPY


In 7-Mode and in clustered Data ONTAP, you can use OnCommand System Manager to create, schedule, and
maintain Snapshot copies for volumes and aggregates.


Topics for Discussion

Should all hourly Snapshot copies run on the hour?

Why or why not?


TOPICS FOR DISCUSSION


The snap sched Command

Snapshot copies are created and deleted according to the Snapshot schedule, which is set by the snap sched command.

Default Snapshot schedule:
snap sched vol2 0 2 6@8,12,16,20
                (weekly) (daily) (hourly)

snap sched volume_name
  Displays the Snapshot schedule for the named volume

snap sched volume_name weeks days hours@list_of_hours
  Changes the Snapshot schedule for the named volume


THE SNAP SCHED COMMAND


By default, when you create a volume, the automatic Snapshot copy setting is enabled, and Snapshot copies
are created and deleted on the default Snapshot schedule.
All the volumes that you create share the same default Snapshot schedule. If you change the Snapshot
schedule on the root volume, all new volumes created will have the same schedule as the root volume.
This example shows a default schedule that specifies that Snapshot copies will be created at 8:00, 12:00,
16:00, and 20:00 (24-hour time), and that the two most recent daily Snapshot copies and the six most recent
hourly Snapshot copies will be kept.
Snapshot copies are a point-in-time image of a volume. The only difference between a weekly Snapshot copy
and a nightly or hourly copy is the time at which the Snapshot copy was created and the data that has changed
between the Snapshot copies.
Depending on the NetApp storage system platform and the version of Data ONTAP, you can create up to 500
volumes. For performance considerations, it is recommended that you vary the Snapshot schedules for
volumes so that all volumes are not stressing physical resources by creating hundreds of Snapshot copies at
the same moment.
On the CLI, you use the snap sched command to change the Snapshot schedule. The snap sched
command enables you to specify the number of weekly, daily, and hourly Snapshot copies to create and
retain. When the number of Snapshot copies of each type is reached, the oldest Snapshot copy for each type is
deleted and replaced by the new Snapshot copy.
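The retention spec format described above can be made concrete with a small parser. This is an illustrative Python sketch of the `weekly daily hourly@hours` format, not a NetApp tool:

```python
def parse_snap_sched(spec):
    """Parse a 7-Mode retention spec of the form
    'weekly daily hourly[@hours]', e.g. the default '0 2 6@8,12,16,20'.
    Returns the retention counts and the hours (24-hour clock) at
    which hourly copies are created."""
    weekly, daily, hourly_field = spec.split()
    if "@" in hourly_field:
        hourly, hours = hourly_field.split("@")
        at_hours = [int(h) for h in hours.split(",")]
    else:
        hourly, at_hours = hourly_field, []
    return {"weekly": int(weekly), "daily": int(daily),
            "hourly": int(hourly), "at": at_hours}

print(parse_snap_sched("0 2 6@8,12,16,20"))
# {'weekly': 0, 'daily': 2, 'hourly': 6, 'at': [8, 12, 16, 20]}
```

So the default schedule keeps no weekly copies, the two most recent daily copies, and the six most recent hourly copies taken at 8:00, 12:00, 16:00, and 20:00.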


Snapshot Policies
Create a job schedule
c1::> job schedule cron create -name 4hrs -dayofweek all
-hour 4 -minute 0

Create a Snapshot policy


c1::> volume snapshot policy create -vserver svm1 -policy
snappolicy_4hrs -schedule1 4hrs -count1 5 -prefix1
every_4_hour

Apply the policy to the volume


c1::> volume modify -vserver svm1 -volume vol01 -snapshot-policy snappolicy_4hrs


SNAPSHOT POLICIES
Two Snapshot policies are automatically created: default and none. If a volume uses none as its Snapshot
policy, no Snapshot copies of it are created. Create Snapshot policies by using the volume snapshot
policy create command and cluster-level schedules.
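The cron-style schedule on the slide (`-dayofweek all -hour 4 -minute 0`) can be modeled as a simple matching function. A Python sketch for illustration only; this is not how the Data ONTAP job manager is implemented:

```python
from datetime import datetime

def schedule_fires(schedule, when):
    """True if 'when' matches a cron-style schedule dict; the string
    'all' (as in -dayofweek all) matches any value for that field."""
    def ok(spec, value):
        return spec == "all" or value in spec
    return (ok(schedule["dayofweek"], when.weekday())
            and ok(schedule["hour"], when.hour)
            and ok(schedule["minute"], when.minute))

# Models: job schedule cron create -name 4hrs -dayofweek all -hour 4 -minute 0
sched = {"dayofweek": "all", "hour": [4], "minute": [0]}
print(schedule_fires(sched, datetime(2015, 9, 19, 4, 0)))   # True
print(schedule_fires(sched, datetime(2015, 9, 19, 12, 0)))  # False
```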


Lesson 4

Restoring Data from a Snapshot Copy


LESSON 4: RESTORING DATA FROM A SNAPSHOT COPY


Ken Asks
Snapshot copies are a great feature. How
can clients find where the copies are stored
and how can they restore lost files?


KEN ASKS


Recovering Data

Recover Snapshot Data
 Copy data from a Snapshot copy
 Use SnapRestore data recovery software
 Use the Windows Previous Versions feature

Use SnapRestore Technology
 Requires a SnapRestore license
 Restores entire volumes
 Quickly restores large files

Copy from a Snapshot Copy
 Locate the Snapshot copy
 Copy the file to the original location
 Copy the file to a new location


RECOVERING DATA
You can use Snapshot copies to recover data in two ways:


Copy a file from a Snapshot directory: To copy a lost or corrupted file from a Snapshot copy, navigate to
the Snapshot directory on the client host and locate the Snapshot copy that contains the correct version of
the file. You can copy the file to the original location and overwrite existing data or copy the file to a new
location.
Use the SnapRestore feature to recover data: To revert a volume or a file from a Snapshot copy, you need
the SnapRestore license. You can revert a volume or file from the storage CLI or from the OnCommand
System Manager interface. You can also revert a volume or file by using NetApp data protection software
solutions such as SnapManager, Snap Creator Framework, SnapProtect, or SnapDrive.


Snapshot Visibility to Clients


Allow client access to a Snapshot directory
Data ONTAP 7-Mode
system> vol options vol01 nosnapdir off

Clustered Data ONTAP


c1::> vol modify -vserver svm1 -volume vol01 -snapdir-access true
c1::> vserver cifs share modify -vserver svm1 -share-name svm1v01
-share-properties showsnapshot

~snapshot visible to CIFS


Data ONTAP 7-Mode
system> options cifs.show_snapshot on

.snapshot visible to NFS


Data ONTAP 7-Mode
system> options nfs.hide_snapshot off


SNAPSHOT VISIBILITY TO CLIENTS


CLI commands are available for controlling the visibility of Snapshot directories on a volume from NAS
clients.
7-Mode
You can make the Snapshot directory invisible to clients and turn off access to the .snapshot directory. Setting
the nosnapdir option to on disables access to the Snapshot directory that is present at client mountpoints
and at the root of CIFS directories, and makes the Snapshot directories invisible. (NFS uses .snapshot for
directories, and CIFS uses ~snapshot.) By default, the nosnapdir option is off (directories are visible).
To make the ~snapshot directory visible to CIFS clients, turn the cifs.show_snapshot option on. Turn
the nosnapdir option off for each volume for which you want directories to be visible.
NOTE: You must also ensure that Show Hidden Files and Folders is enabled on your Windows system.
To make the .snapshot directory invisible to NFS clients, turn the nfs.hide_snapshot option on. Turn
the nosnapdir option off for each volume for which you want directories to be visible.
Clustered
Access to .snapshot and ~snapshot is controlled at the volume level by setting the snapdir-access
switch. Additionally, you can control access to ~snapshot from CIFS clients at the share level with the
showsnapshot share property.


Topic for Discussion

What are the advantages and disadvantages of allowing clients to restore their own data?


TOPIC FOR DISCUSSION


Snapshot View from a UNIX Client


# ls -l /system/vol01/.snapshot

weekly.2014-09-15_0015

daily.2014-09-18_0010

daily.2014-09-19_0010

hourly.2014-09-19_0605

hourly.2014-09-19_0705

hourly.2014-09-19_0805

hourly.2014-09-19_0905

hourly.2014-09-19_1005

hourly.2014-09-19_1105

hourly.2014-09-19_1205

snapmirror.3_2147484677.2014-09-19_114126


SNAPSHOT VIEW FROM A UNIX CLIENT


Every volume in your file system contains a special Snapshot subdirectory that enables users to access earlier
versions of the file system to recover their lost or damaged files.
The Snapshot directory appears to NFS clients as .snapshot. The .snapshot directory is usually hidden and is
not displayed in directory listings, unless you use the ls command with the -a option.
When client Snapshot directories are listed, the timestamp is usually the same for all directories. To find the
actual date and time of each Snapshot copy, use the snap list command on the storage system.


Recovering Files from the .snapshot Directory of a UNIX Host

Snapshot directories exist at every level but are visible at only the top level of the mount.

[Slide figure: a UNIX host with vol0 of the storage system mounted at /mnt/system; the .snapshot directory at the root of the mount contains the daily.2014-09-18_0010 and daily.2014-09-17_0010 subdirectories, holding the files on vol0 and home as of the previous midnight and the night before last.]

RECOVERING FILES FROM THE .SNAPSHOT DIRECTORY OF A UNIX HOST


The .snapshot directory is at the root of a storage system volume.
In this example, the directory structure is shown for an NFS client that has mounted vol0 of a storage system
to the mountpoint /mnt/system on the UNIX host.
The home directory and the .snapshot directory are visible at the root of the vol0 mount.
You can open the .snapshot directory and access the files in the two Snapshot copies that are subdirectories of
the .snapshot directory.
To restore a file from the .snapshot directory, rename or move the original file, then copy the file from the
.snapshot directory to the original directory.
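The rename-then-copy procedure above can be scripted on the client side. The following Python sketch simulates it with a temporary directory standing in for an NFS-mounted volume (the paths and snapshot name are hypothetical examples, not values from a real system):

```python
import shutil
from pathlib import Path
from tempfile import TemporaryDirectory

def restore_from_snapshot(mount, snapshot_name, relpath):
    """Client-side restore: set the damaged copy aside, then copy the
    version preserved in the (normally hidden) .snapshot directory
    back to its original location."""
    original = mount / relpath
    preserved = mount / ".snapshot" / snapshot_name / relpath
    if original.exists():
        original.rename(original.parent / (original.name + ".damaged"))
    shutil.copy2(preserved, original)

# A temporary directory stands in for an NFS-mounted volume.
with TemporaryDirectory() as tmp:
    mount = Path(tmp)
    snap = mount / ".snapshot" / "daily.2014-09-18_0010"
    snap.mkdir(parents=True)
    (snap / "notes.txt").write_text("good data")
    (mount / "notes.txt").write_text("corrupted")
    restore_from_snapshot(mount, "daily.2014-09-18_0010", "notes.txt")
    print((mount / "notes.txt").read_text())  # good data
```

On a real mount, the same steps are just `mv` and `cp` against the live .snapshot path; the Snapshot side is read-only, so only the active copy is ever modified.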


Recovering Files from the ~snapshot Directory of a Windows Client

Snapshot copies are visible to Windows clients that have File Manager configured to display hidden files.


RECOVERING FILES FROM THE ~SNAPSHOT DIRECTORY OF A WINDOWS CLIENT


Snapshot directories are hidden on Windows clients. To view them, you must first configure File Manager to
display hidden files, then navigate to the root of the CIFS share and find the directory folder.
The subdirectory for Snapshot copies appears to CIFS clients as ~snapshot. Both automatic and manually
created Snapshot copies are listed.
To restore a file from the ~snapshot directory, rename or move the original file, then copy the file from the
~snapshot directory to the original directory.


Data ONTAP Snapshot Review

[Slide figure: repeats the earlier Snapshot copy diagram — the production (active) file system and Snapshot copies SNAP #1 and SNAP #2 sharing data blocks, with the active file system pointing at a new block F after a write.]

DATA ONTAP SNAPSHOT REVIEW


This Snapshot diagram from earlier in the module shows how Snapshot copies work in the context of
restoring data from a Snapshot copy. Compare this version of the graphic to the next version.


Restore from a Snapshot Copy

[Slide figure: after the restore, the active file system again points at the blocks preserved by the Snapshot copy, replacing the blocks that were written after the copy was created.]

RESTORE FROM A SNAPSHOT COPY


When you restore from a previous Snapshot copy, the pinned blocks that are associated with one or more
Snapshot copies replace the blocks that succeeded them in the active file system.


Reverting and Restoring a File


1. Verify that the volume is online and writable.
2. List the Snapshot copies in the volume:
Data ONTAP 7-Mode
system> snap list vol01

Clustered Data ONTAP


c1::> snapshot show -vserver svm1 -volume vol01

3. Notify network users about the reversion.

4. Initiate the reversion:


Data ONTAP 7-Mode
system> snap restore -t file -s vol01_snap /vol/vol3/myfile.txt

Clustered Data ONTAP


c1::> volume snapshot restore-file -vserver svm1 -volume vol01 -snapshot
vol01_snap -path /vol/vol3/myfile.txt


REVERTING AND RESTORING A FILE


Follow these steps to restore a single file:
1. Verify that the volume is online and writable.
2. List the Snapshot copies in the volume.
3. Notify network users that you are going to revert a file.
4. If you know the name of the Snapshot copy and the file to be restored, initiate the reversion.

The Data ONTAP operating system displays a warning message and prompts you to confirm your decision to
revert the file. Press Y to confirm that you want to revert the file. If you do not want to proceed, press Ctrl+C
or press N for no.
If you confirm that you want to revert the file that already exists in the active file system, it is overwritten by
the version in the Snapshot copy.


SnapRestore Technology Compared to Copying
If a file is large (such as a database), you should revert it with SnapRestore technology instead of copying the file:
Copying requires double the storage and time.
Reverting saves time and reinstates the data.
For reliability, NetApp recommends SnapRestore technology over alternative technologies.

For more information about using SnapRestore technology to revert volumes and aggregates, see the Data ONTAP Data Protection Administration course.
For more information about using Snapshot technology in SAN environments, see the Data ONTAP SAN Implementation course.
35


SNAPRESTORE TECHNOLOGY COMPARED TO COPYING

Whether you restore by copying files from a Snapshot directory or from tape, copying large quantities of data
can be time-consuming. Instead, use the SnapRestore function to restore by reverting the volume or file,
which saves both time and storage.
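The difference can be made concrete with a toy block-map model (hypothetical names, much simplified from WAFL): a Snapshot copy pins the old block pointers, so a revert only reinstates pointers, while a copy-based restore rewrites every block.

```python
class Volume:
    """Toy model: the active file system and each Snapshot copy are just
    maps from file name to a tuple of block numbers; blocks are never
    rewritten in place, only re-pointed."""
    def __init__(self):
        self.active = {}      # file -> block pointers
        self.snapshots = {}   # snapshot name -> frozen pointer map

    def snapshot(self, name):
        self.snapshots[name] = dict(self.active)  # pins current pointers

    def revert_file(self, snap, path):
        # SnapRestore-style revert: no data moves; the pinned pointers
        # simply replace the newer ones in the active map.
        self.active[path] = self.snapshots[snap][path]
        return 0  # blocks written

    def copy_restore(self, snap, path):
        # Copy-based restore: every block is written again.
        blocks = self.snapshots[snap][path]
        self.active[path] = tuple(blocks)
        return len(blocks)  # blocks written
```

For a large database file, the revert costs zero new block writes in this model, while the copy costs one write per block.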

8-35


References
Clustered Data ONTAP Data Protection Guide

TR-4210-0713 Operational How To Guide - Snapshots Management

36


REFERENCES

8-36


What Ken Has Learned in This Module

I feel totally confident about NetApp Snapshot
technology and all its features. This module
exposed me to the ways that I can use the CLI
and GUI to accomplish all my company's backup
and restore requirements.

37


WHAT KEN HAS LEARNED IN THIS MODULE

8-37


Exercise

Module 8: Snapshot Copies

Estimated Time: 30 minutes

38


EXERCISE
Please refer to your exercise guide.

8-38


Module 9

Managing Storage Space


MODULE 9: MANAGING STORAGE SPACE

In this module, you learn how to use NetApp data efficiency features to maximize your storage. These features
include thin provisioning, compression and deduplication, and FlexClone volumes. You also learn storage
management techniques such as quota management, moving volumes, and growing aggregates and volumes.

9-1


Ken Asks a Question

Im serving data. I have Snapshot copies.
How do I get the most out of my available
storage space?


KEN ASKS A QUESTION

9-2


Lesson 1

Thin Provisioning


LESSON 1: THIN PROVISIONING

9-3


Ken Asks a Question

SLAs in my company dictate that all teams
have a minimum capacity of space available to
them. Some teams run the volumes at nearly
full capacity. Most of the others use very little of
what is provisioned. This wastes space. Can
NetApp help me to unlock this unused space
while maintaining the available-space
agreements?
[Slide graphic: aggregate aggr21 containing volumes vol27 through vol30]


KEN ASKS A QUESTION

9-4



Thick and Thin Provisioning of Volumes

Thick provisioning of volumes (guarantee = volume):
Guarantee = volume requires that space within the aggregate be reserved for the volume when the volume is created.
You cannot overcommit an aggregate.
Simplifies storage management.

Thin provisioning of volumes (guarantee = none):
Does not require that space within the aggregate be reserved for the volume when the volume is created.
Enables more aggressive allocation.
Can overcommit an aggregate.
Requires more complex storage management.


THICK AND THIN PROVISIONING OF VOLUMES

Administrators can manage their storage systems by allocating volumes in one of two ways.

Thick provisioning of volumes uses a space guarantee for a volume or file. A guarantee of volume
requires that space in the aggregate be reserved for the volume when the volume is created. A guarantee
of file guarantees space for LUNs in the volume. Thick provisioning is a conservative approach that
prevents administrators from overcommitting space to an aggregate. It simplifies storage management at
the risk of wasting unused space.
Thin provisioning of volumes uses a space guarantee of none. It does not require that space within the
aggregate be reserved for the volume when the volume is created. It is a more aggressive approach that
makes it possible to overcommit an aggregate. This approach requires more complex storage
management.

NOTE: The file guarantee is no longer supported as of Data ONTAP 8.3.
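The two guarantees can be contrasted with a toy capacity model (illustrative names and sizes, not a Data ONTAP implementation): a guarantee of volume reserves space at creation time and therefore cannot overcommit the aggregate, while a guarantee of none reserves nothing, so the committed total may exceed the aggregate.

```python
class Aggregate:
    """Toy capacity model of thick vs. thin volume provisioning."""
    def __init__(self, size_gb):
        self.size_gb = size_gb
        self.reserved_gb = 0
        self.volumes = {}

    def create_volume(self, name, size_gb, guarantee="volume"):
        if guarantee == "volume":            # thick: reserve up front
            if self.reserved_gb + size_gb > self.size_gb:
                raise ValueError("aggregate cannot honor the guarantee")
            self.reserved_gb += size_gb
        elif guarantee != "none":            # thin ("none") reserves nothing
            raise ValueError("unsupported guarantee")
        self.volumes[name] = (size_gb, guarantee)

    def committed_gb(self):
        # Sum of volume sizes; with thin volumes this can exceed size_gb.
        return sum(size for size, _ in self.volumes.values())
```

With a 100-GB aggregate, a second thick 60-GB volume is rejected, but thin volumes of the same sizes succeed and overcommit the aggregate, which is exactly why thin provisioning demands closer monitoring.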

9-5


Thin Provisioning

[Slide graphic: with a standard volume manager, App 1, App 2, and App 3 each get dedicated spindles (6, 6, and 12) and waste unused space at a typical 40% use; with NetApp thin provisioning, the same applications draw from shared capacity on 8 spindles at more than 70% use. Result: buy 50% less storage and save 50% in power, cooling, and space.]


THIN PROVISIONING
If you compare the NetApp storage use approach to the competition's approach, you find one feature that
stands out. Flexible dynamic provisioning with FlexVol technology provides high storage use rates and
enables customers to increase capacity without the need to physically reposition or repurpose storage devices.
NetApp thin provisioning enables users to oversubscribe data volumes, which results in high use models. You
can think of this approach as just-in-time storage.
To manage thin provisioning on a cluster, use the volume command.

9-6


Lesson 2

Deduplication and
Compression


LESSON 2: DEDUPLICATION AND COMPRESSION

9-7


Ken Asks a Question

My company uses one volume to host home
directories for the entire IT department. These
people often share large data files for
development and testing, so the volume often
has many identical copies of the same file. How
do I avoid wasting space on duplicate data?
[Slide graphic: volume vol01 hosting home directories ~/alan, ~/doug, ~/phil, and ~/stu]


KEN ASKS A QUESTION

9-8


Deduplication

NetApp deduplication:
Is integrated with the Data ONTAP operating system:
Provides general-purpose volume deduplication
Identifies and removes redundant data blocks
Is application agnostic:
Primary storage
Backup data
Archival data
Is a service that runs as a background process and is transparent to any client

[Slide graphic: the same dataset before and after NetApp deduplication]
9


DEDUPLICATION
Deduplication can be thought of as the process of unduplicating data. The term deduplication was first
coined by database administrators many years ago, as a way of describing the process of removing duplicate
records after two databases had been merged.
In the context of disk storage, deduplication refers to any algorithm that searches for duplicate data objects
(for example, blocks, chunks, and files) and discards those duplicates. When duplicate data is detected, it is
not retained, but instead a data pointer is modified so that the storage system references an exact copy of the
data object that is already stored on disk. This deduplication feature works well with datasets that have a lot of
duplicated data (for example, full backups).
When NetApp deduplication is configured, it runs as a background process that is transparent to any client
that accesses data from a storage system. This feature allows a reduction of storage costs by reducing the
actual amount of data that is stored over time. For example, if a 100-GB full backup is made on the first night,
and then a 5-GB change in the data occurs during the next day, the second nightly backup only needs to store
the 5 GB of changed data. This amounts to a 95% spatial reduction on the second backup. A full backup can
yield more than a 90% spatial reduction, with incremental backups averaging about 30%. With
nonbackup scenarios, such as with virtual machine images, gains of up to 40% space savings can be realized.
To estimate your own savings, visit the NetApp deduplication calculator at http://www.secalc.com.
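The core mechanism can be sketched in a few lines (a toy model, not the WAFL implementation): each block gets a digital signature, duplicates are discarded, and the logical view keeps one pointer per block. Real deduplication also byte-compares candidate blocks, because a signature match alone is not proof of identical data.

```python
import hashlib

def dedupe(blocks):
    """Store each distinct block once, keyed by its signature;
    duplicate blocks become pointers to the stored copy."""
    store = {}     # signature -> block data (physical blocks)
    pointers = []  # one pointer per logical block
    for block in blocks:
        sig = hashlib.sha256(block).hexdigest()
        if sig not in store:
            store[sig] = block
        pointers.append(sig)
    return store, pointers
```

Four logical blocks, three of them identical, consume only two physical blocks; the file system still reads all four through the pointer list.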

9-9


Deduplication in Action
Example: Three files in three different home directories on a single volume

presentation.ppt: original file, 20 blocks
presentation.ppt: identical file, 20 blocks
presentation.ppt: edited file, 10 blocks added

Without NetApp deduplication: 70 total blocks
With NetApp deduplication: 30 total blocks (identical blocks are shared)
10


DEDUPLICATION IN ACTION
In this example, one user creates a Microsoft PowerPoint presentation (presentation.ppt) that includes 20
blocks of data. Then a second user copies the presentation to another location. Finally, a third user copies the
presentation to a third location and edits the file, adding 10 blocks.
When the files are stored on a storage system for which deduplication is configured, the original file is saved,
but the second copy (because it is identical to the original file) merely references the original files location on
the storage system. The edits to the file in the third location (the additional 10 blocks) are saved to the storage
system, but all unedited blocks are referenced back to the original file.
With NetApp deduplication, 30 blocks are used to store 70 blocks of data, and the space that is required for
storage is reduced by about 57%.

9-10


Configuring Deduplication

Enable deduplication on a volume named vol01:

Data ONTAP 7-Mode
system> sis on /vol/vol01
system> sis start -s /vol/vol01

Clustered Data ONTAP
c1::> volume efficiency on -vserver svm1 -volume vol01
c1::> volume efficiency start -vserver svm1 -volume vol01 -scan-old-data true
11


CONFIGURING DEDUPLICATION
Deduplication improves physical storage-space efficiency by eliminating redundant data blocks within a
FlexVol volume. Deduplication works at the block level on an active file system and uses the WAFL (Write
Anywhere File Layout) block-sharing mechanism. Each block of data has a digital signature that is compared
with all the other blocks in the data volume. If an exact match is identified, the duplicate block is discarded,
and a data pointer is modified so that the storage system references the copy of the data object that is stored
on disk. The deduplication feature works well with datasets that have large quantities of duplicated data or
white space. You can configure deduplication operations to run automatically or according to a schedule. You
can run deduplication on new or existing data on any FlexVol volume.

9-11


Characteristics of Data Compression

Inline compression:
Parallelism is increased
Path length is decreased
Latency is increased

Postprocess compression:
Uncompressed data is compressed during idle time
Only previously uncompressed blocks are compressed
Compression is done before deduplication
Data ONTAP 8.2 and later releases can detect incompressible data before wasting cycles

See TR-3966 for more information.

12


CHARACTERISTICS OF DATA COMPRESSION

There are two types of data compression: inline compression and postprocess compression.
With inline compression, all writes to a volume are compressed immediately before being written to the
volume. Inline compression increases parallelism because all compression and decompression algorithms are
multiprocessor-capable and because writes are compressed outside the consistency point. Because operations
do not have to be suspended and resumed, inline compression also reduces path length. However, because
processing is required for compression and decompression, there is a latency impact on performance.
Postprocess compression runs as a background task. Uncompressed data that is written after deduplication is
compressed and rewritten to the volume when the controller is not busy. If inline and postprocess
compression are enabled for the same volume, postprocess compression compresses only the blocks on the
volume that were not compressed previously. If compression and deduplication are enabled, compression is
always executed before deduplication.
For more information, see TR-3966: NetApp Data Compression and Deduplication Deployment and
Implementation Guide.
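The incompressible-data check can be illustrated with a small sketch (an assumption-laden stand-in, not the Data ONTAP algorithm; the 25% threshold is purely illustrative): compress a block, and store it uncompressed when the savings are too small to be worth the decompression cost.

```python
import zlib

def compress_block(data, min_savings=0.25):
    """Compress one block, but fall back to storing it raw when the
    compressed form does not save at least min_savings of the space."""
    packed = zlib.compress(data)
    if len(packed) <= len(data) * (1 - min_savings):
        return ("compressed", packed)
    return ("raw", data)   # incompressible: skip, save the CPU on reads
```

Repetitive text compresses well and is kept compressed; random data (already high-entropy) is detected as incompressible and stored as-is.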

9-12


Configuring Data Compression

Enable both postprocess and inline compression on a volume that is named vol01:

Data ONTAP 7-Mode
system> sis config -C true -I true /vol/vol01

Clustered Data ONTAP
c1::> volume efficiency modify -vserver svm1 -volume vol01 -compression true -inline-compression true
13


CONFIGURING DATA COMPRESSION

Data compression enables you to reduce the physical capacity that is required to store data on a cluster by
compressing data blocks within a FlexVol volume. Data compression is available only on FlexVol volumes
that are created on 64-bit aggregates. Data compression optimizes the storage space and bandwidth that are
required to replicate data during volume operations, such as moving volumes and performing SnapMirror
transfers. You can compress standard data files, virtual disks, and LUNs, but not file system internal files,
alternate data streams, or metadata.
To manage compression on a cluster, use the volume efficiency command.

9-13


Topic for Discussion

Discuss which application and use case would most benefit from deduplication.
What are the potential storage challenges from repeated sharing of large blocks?

14


TOPIC FOR DISCUSSION

9-14


Lesson 3

FlexClone Volumes

15


LESSON 3: FLEXCLONE VOLUMES

9-15


Ken Asks a Question


Now I have a question that's similar to my previous
one. This time it's about a complex Oracle database
with a FlexVol volume dedicated to the whole
database. Users often request that the storage
administrator create copies of the volume for
others to use. Again, how can we efficiently give
users their own copy of these volumes?


16


KEN ASKS A QUESTION

9-16


FlexClone Volume Clones

FlexClone technology:
Enables the creation of multiple, instant dataset clones with no storage overhead
Provides dramatic improvement for application test and development environments

17


FLEXCLONE VOLUME CLONES

FlexClone volume clones provide an efficient way to copy data for:

Manipulation
Projection operations
Upgrade testing

The Data ONTAP operating system enables you to create a volume duplicate in which the original volume
and clone volume share the same disk space for storing unchanged data.

9-17


How Volume Cloning Works

Volume cloning:
Starts with a volume
Creates a Snapshot copy of the volume
Creates a clone (a new volume based on the Snapshot copy)

Modifications of the original volume are separate from modifications of the cloned volume.

Result: Independent volume copies are efficiently stored.

[Slide graphic: aggregate aggr01 containing parent volume vol01, a Snapshot copy of the parent, and the clone]


HOW VOLUME CLONING WORKS

FlexClone volumes are managed similarly to regular FlexVol volumes, with a few key differences. Consider
these important facts about FlexClone volumes:

A FlexClone volume is a point-in-time, writable copy of the parent volume. Changes that are made to the
parent volume after the FlexClone volume is created are not reflected in the FlexClone volume.
You can clone only FlexVol volumes. To create a copy of a traditional volume, you must use the vol
copy command, which creates a distinct copy with its own storage.
FlexClone volumes are fully functional volumes that are managed, as is the parent volume, by using the
vol command. Likewise, FlexClone volumes can be cloned.
FlexClone volumes always exist in the same aggregate as parent volumes.
FlexClone volumes and parent volumes share the same disk space for common data. Therefore, creating a
FlexClone volume is instantaneous and requires no additional disk space (until changes are made to the
clone or parent).
A FlexClone volume is created with the same space guarantee as the parent.
You can sever the connection between the parent and the clone. This severing is called splitting the
FlexClone volume. Splitting removes all restrictions on the parent volume and causes the FlexClone to
use its own storage.
IMPORTANT: Splitting a FlexClone volume from its parent volume deletes all existing Snapshot copies
of the FlexClone volume and disables the creation of new Snapshot copies while the splitting operation is
in progress.

9-18

Quotas that are applied to a parent volume are not automatically applied to the clone.
When a FlexClone volume is created, existing LUNs in the parent volume are also present in the
FlexClone volume, but these LUNs are unmapped and offline.
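The "instantaneous, no additional disk space" behavior is copy-on-write, which a toy model makes concrete (hypothetical class, not the Data ONTAP implementation): the clone starts out sharing every block with its parent and allocates its own blocks only for writes.

```python
class CloneVolume:
    """Toy copy-on-write clone: shares all parent blocks at creation;
    only writes allocate new, clone-private blocks."""
    def __init__(self, parent_blocks):
        self.parent = parent_blocks          # shared, treated as read-only
        self.overrides = {}                  # clone's own changed blocks

    def read(self, block_no):
        return self.overrides.get(block_no, self.parent[block_no])

    def write(self, block_no, data):
        self.overrides[block_no] = data      # parent stays untouched

    def unique_blocks(self):
        return len(self.overrides)           # extra space the clone consumes
```

At creation the clone consumes nothing; after a write, the clone diverges while the parent volume is unchanged.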


Cloning

[Slide graphic: within an aggregate, vol1 and its clone share common data blocks; each volume also has its own unique data blocks.]
19


CLONING
A FlexClone volume is a point-in-time, space-efficient, writable copy of the parent volume. The FlexClone
volume is a fully functional standalone volume. Changes that are made to the parent volume after the
FlexClone volume is created are not reflected in the FlexClone volume, and changes to the FlexClone volume
are not reflected in the parent volume.
FlexClone volumes are created in the same virtual server and aggregate as the parent volume, and FlexClone
volumes share common blocks with the parent volume. While a FlexClone copy of a volume exists, the parent
volume cannot be deleted or moved to another aggregate. You can sever the connection between the parent
and the FlexClone volume by executing a split operation.
A FlexClone split causes the FlexClone volume to use its own disk space, but the FlexClone split enables you
to delete the parent volume and to move the parent or the FlexClone volume to another aggregate.
To manage cloning on a cluster, use the volume clone command.

9-19


Splitting Volumes
With a volume and a Snapshot copy of that volume, create a clone of the volume.
Split volumes when most of the data on a volume is not shared.
Shared blocks are replicated in the background.

Result: A new, permanent volume is created for forking (branching) project data.

[Slide graphic: Volume 1, a Snapshot copy of Volume 1, and the cloned volume]
20


SPLITTING VOLUMES
Splitting a FlexClone volume from its parent removes any space optimizations that are currently employed by
the FlexClone volume. After the split, both the FlexClone volume and the parent volume require the full
space allocation that is specified by their space guarantees. After the split, the FlexClone volume becomes a
normal FlexVol volume.
When splitting clones, consider these important facts:

9-20

When you split a FlexClone volume from its parent, all existing Snapshot copies of the FlexClone volume
are deleted.
During the split operation, no new Snapshot copies of the FlexClone volume can be created.
Because the clone-splitting operation is a copy operation that could take some time to complete, the Data
ONTAP operating system provides the vol clone split stop and vol clone split status
commands to stop clone-splitting or to check the status of a clone-splitting operation.
The clone-splitting operation is executed in the background and does not interfere with data access to
either the parent or the clone volume.
If you take the FlexClone volume offline while clone-splitting is in progress, the splitting operation is
suspended. When you bring the FlexClone volume back online, the splitting operation resumes.
After a FlexClone volume and its parent volume have been split, they cannot be rejoined.
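Continuing the toy model (again illustrative, not the real implementation), a split simply materializes every still-shared parent block into the clone's own storage, after which the connection to the parent is severed:

```python
class SplittableClone:
    """Toy clone split: before the split the clone holds only its own
    changed blocks; splitting copies each still-shared parent block in
    the background so the two volumes no longer share storage."""
    def __init__(self, parent_blocks):
        self.parent = parent_blocks
        self.own = {}
        self.split_done = False

    def write(self, n, data):
        self.own[n] = data                   # copy-on-write, as before

    def split(self):
        for n, data in self.parent.items():
            if n not in self.own:            # replicate only shared blocks
                self.own[n] = data
        self.parent = None                   # connection severed for good
        self.split_done = True
```

This also shows why a split removes the space optimization: after it, the clone stores a full copy of every block it had been sharing.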


Flexible Volume Clone Syntax

Creating a FlexClone volume:

Data ONTAP 7-Mode
system> vol clone create vol01_c1 -b vol01

Clustered Data ONTAP
c1::> volume clone create -vserver svm1 -flexclone vol01_c1 -parent-volume vol01

Splitting a cloned volume:

Data ONTAP 7-Mode
system> vol clone split start vol01_c1

Clustered Data ONTAP
c1::> volume clone split start -vserver svm1 -flexclone vol01_c1

21


FLEXIBLE VOLUME CLONE SYNTAX

In 7-Mode, use the vol clone create command to create a FlexClone volume. The -b flag indicates
the backing or parent volume. Use vol clone split start to initiate a split of the clone from the
parent.
In clustered Data ONTAP, use the volume clone create command to create a FlexClone volume. Use
volume clone split start to initiate a split of the clone from the parent.

9-21


Answer This Question

Which types of data compression are available in Data ONTAP?

a. Inline and external
b. Inline and preprocess
c. Inline and postprocess
d. Inline and reclaimable
22


ANSWER THIS QUESTION

9-22


Lesson 4
Quotas

23


LESSON 4: QUOTAS

9-23


Ken Asks a Question

We dedicate one volume for a project that
should provide plenty of work space for all the
team members. One user ties up unnecessary
space, squeezing out the other users. Can we
limit the space for a user without giving
everyone their own volume?

[Slide graphic: one project volume holding the home directories ~alan, ~stu, ~phil, and ~doug]
24


KEN ASKS A QUESTION

9-24


Quotas
Limit resource use
Track resource use
Notify users of high resource use
Quota policies

[Slide graphic: the /etc/quotas file applying quotas to qtree1, qtree2, and qtree3 in vol1]

25


QUOTAS
Quotas provide a way for you to restrict and track the disk space and number of files that are used by users,
groups, and qtrees. You apply quotas to specific volumes and qtrees. Clustered Data ONTAP enables you to
apply user and group quota rules to qtrees.
You can use quotas to:

9-25

Limit the amount of disk space or the number of files that can be used by a user or group
Limit the amount of disk space or the number of files that can be contained by a qtree
Track the amount of disk space or the number of files that are used by a user, group, or qtree without
imposing a hard limit
Warn users when their disk use or file use reaches a predefined threshold
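The three quota behaviors on this list — a hard limit, a soft warning threshold, and pure tracking — can be sketched as one decision function (illustrative names, not the Data ONTAP quota engine):

```python
def check_quota(used_mb, request_mb, soft_mb=None, hard_mb=None):
    """Evaluate one write against a quota rule: a hard limit denies the
    write, a soft threshold only produces a warning, and with neither
    limit set the quota merely tracks use."""
    new_use = used_mb + request_mb
    if hard_mb is not None and new_use > hard_mb:
        return ("denied", used_mb)           # write rejected, use unchanged
    warn = soft_mb is not None and new_use > soft_mb
    return ("warned" if warn else "ok", new_use)
```

A user at 90 MB writing 5 MB against an 80-MB soft / 100-MB hard rule gets a warning; pushing past 100 MB is denied; with no limits, use is simply tracked.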


Clustered Data ONTAP: Quota Policies

[Slide graphic: SVM1 with one assigned quota policy, whose quota rules for vol1, vol2, and vol3 are enforced, and several unassigned quota policies whose rules are inactive]

A quota policy is a set of quota rules for all the volumes of a storage virtual machine (SVM).
No sharing among SVMs.
Up to five quota policies per SVM:
The SVM enables backup copies of quota policies.
One quota policy can be active on one SVM at a time.
26


CLUSTERED DATA ONTAP: QUOTA POLICIES

Quotas are defined by quota rules. Quota rules are collected in the quota policy of an SVM and are specific to
a volume. A quota rule has no effect on the volume until the quota rule is activated.
A quota policy is a collection of quota rules for all the volumes of an SVM. Quota policies are not shared
among SVMs. An SVM can have up to five quota policies; therefore, you can have backup copies of quota
policies. One quota policy is assigned to an SVM at any given time.
A quota is the actual restriction that the Data ONTAP operating system enforces, the actual tracking that the
system performs, or the actual threshold that triggers the system to send a warning message. A quota rule
always results in at least one quota and might result in many additional derived quotas.
Activation is the process of triggering the Data ONTAP operating system to create enforced quotas from the
current set of quota rules in the assigned quota policy. Activation occurs on a volume-by-volume basis. The
first time that quotas are activated on a volume is called initialization. Subsequent activation of quotas on the
same volume is called either re-initialization or resizing, depending on the scope of the changes.
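The policy rules stated above — at most five policies per SVM, exactly one assigned at a time — translate into a small model (hypothetical class names; a sketch of the rules, not of Data ONTAP internals):

```python
class SVM:
    """Toy model of quota-policy bookkeeping on an SVM: up to five
    policies may exist (allowing backup copies), and assigning a new
    policy replaces the previously active one."""
    MAX_POLICIES = 5

    def __init__(self):
        self.policies = {}
        self.assigned = None

    def create_policy(self, name, rules=()):
        if len(self.policies) >= self.MAX_POLICIES:
            raise ValueError("an SVM can hold at most 5 quota policies")
        self.policies[name] = list(rules)

    def assign(self, name):
        if name not in self.policies:
            raise KeyError(name)
        self.assigned = name     # only this policy's rules are enforced
```

Creating a sixth policy fails, and assigning a second policy simply replaces the first as the active one.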

9-26


Answer This Question

A FlexClone volume, by definition, shares no data blocks with its parent volume.

a. True
b. False

27


ANSWER THIS QUESTION

9-27


Lesson 5

Volume Moves in Clustered Data ONTAP

28


LESSON 5: VOLUME MOVES IN CLUSTERED DATA ONTAP

9-28


Ken Asks a Question

If my aggregates are getting full, and I create more aggregates to provide more space, how can I use the new aggregates to relieve the full ones?
If I anticipate that a volume will be heavily accessed by users, can I isolate that data to a specific node to avoid performance bottlenecks?

29


KEN ASKS A QUESTION

9-29


The volume move Command


[Diagram: a four-node cluster (node1 through node4) with disk shelves; volume c5 is moved to aggr42.]

c1::> vol move start -vserver svm3 -vol c5 -destination-aggr aggr42

c1::> vol move trigger-cutover -vserver svm3 -vol c5

THE VOLUME MOVE COMMAND


Clustered Data ONTAP enables you to move a volume from one aggregate or node to another within the same
SVM to utilize capacity, improve performance, and satisfy SLAs. The volume move is a nondisruptive
operation. During the volume movement process, the original volume is intact and available for clients to
access. You can move a FlexVol volume to a different aggregate, node, or both within the same SVM. The
data is transferred to the destination node through the cluster interconnect.
Use the volume move start command to initiate the volume transfer. When the transfer is complete and
the volume is ready to be served by the destination aggregate, use the volume move trigger-cutover
command to complete the move. To bypass any confirmation before cutover, use -force true on the
volume move start command. Bypassing confirmation can cause client I/O disruptions.


How a Volume Move Works


1. A volume is created on the destination aggregate.
2. A Snapshot copy of the source volume is created.
3. The Snapshot copy is replicated to the destination volume.
4. When replication is complete, client access is temporarily blocked.
5. A final replication is performed to reach consistency.
6. Cutover is initiated, with -cutover-action set to one of:
   defer_on_failure, abort_on_failure, force, wait
7. The client accesses the destination volume, and the source volume is cleaned up.

HOW A VOLUME MOVE WORKS


When a volume move is initiated, a Snapshot copy of the source volume is created and is used as the basis to
populate the destination volume. Client machines continue to access the volume from the source location
until all data is moved. At the end of the move process, client access is temporarily blocked. During this time,
the system performs a final replication from the source volume to the destination volume, swaps the identities
of the source and destination volumes, and changes the destination volume to the source volume. When the
move is complete, the system routes client traffic to the new source volume and resumes client access.
Occasionally, especially when there is heavy client traffic on the source volume, Data ONTAP is unable to
complete a replication in a timeframe that is transparent to clients. You can specify the -cutover-action
option on a volume move start command to indicate what should happen in this case.

If the default action, defer_on_failure, is specified, the job tries to cut over until the cutover
attempts are exhausted. If it fails to cut over, it moves into the cutover deferred state. The volume move
job waits for the user to issue a volume move trigger-cutover command to restart the cutover
process.
If the abort_on_failure action is specified, the job tries to cut over until cutover attempts are
exhausted. If the system fails to cut over, it performs a cleanup and ends the operation.
If the force action is specified, the job tries to cut over until the cutover attempts are exhausted, and
then forces the cutover to occur at the expense of disrupting the clients.
If the wait action is specified, when the job reaches the decision point, it does not cut over
automatically. Instead, the job waits for the user to issue a volume move trigger-cutover
command as the signal to try the cutover.
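The four behaviors above can be summarized in a minimal Python sketch. The state names, the retry loop, and the attempt count are illustrative assumptions, not ONTAP internals.

```python
# Sketch of the four -cutover-action behaviors described above.
# State names and the attempt count are illustrative assumptions.

def cutover(action, attempt_succeeds, max_attempts=3):
    """Return the job state after the cutover phase of a volume move."""
    if action == "wait":
        return "waiting-for-trigger"   # no automatic cutover attempt
    for _ in range(max_attempts):
        if attempt_succeeds():
            return "complete"
    # All attempts exhausted; the action decides what happens next.
    if action == "defer_on_failure":   # the default
        return "cutover-deferred"      # wait for volume move trigger-cutover
    if action == "abort_on_failure":
        return "aborted"               # clean up and end the operation
    if action == "force":
        return "forced"                # cut over anyway, disrupting clients
    raise ValueError(f"unknown action: {action}")
```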


Volume Move Considerations


- One aggregate to another
- Transparent to clients
- Same SVM only
- Both SAN and NAS volumes
- Same node, or a different node
- Volume characteristics retained at destination
- No intercluster volume movement

[Diagram: a four-node cluster (node1 through node4) with aggregates aggr27 and aggr42; volume c5 resides on aggr42.]


VOLUME MOVE CONSIDERATIONS


The volume move command is supported only on clustered Data ONTAP. Volume moves are performed
within the cluster. Volumes can be moved from one aggregate to another, within the same SVM.
Data is transferred over the cluster network. Clients continue to have access to the volume while the volume is
moved. Clients are unaware that the volume is moving. Volume characteristics such as deduplication,
compression, and SnapMirror and SnapVault relationships are retained in the new locations.


Topic for Discussion

Why would Data ONTAP limit the relocation of a volume to


destinations within the same SVM?


TOPIC FOR DISCUSSION


Lesson 6

Growing Aggregates


LESSON 6: GROWING AGGREGATES


Growing Aggregates
[Diagram: aggr1 (rg0, rg1) grows by adding 3 disks; aggr2 (rg0, rg1) grows by adding 6 disks, which creates rg2.]

Data ONTAP 7-Mode
system> disk show -n
system> aggr add aggr1 10

Clustered Data ONTAP
c1::> storage disk show -spare -owner node3
c1::> storage aggregate add-disks -aggr aggr1 -disks 3

GROWING AGGREGATES
You can add disks to an aggregate so that it can provide more storage to its associated volumes. You do this
by adding available spare disks to an existing aggregate. When adding disks, you should consider the size of
your RAID groups and plan to fill complete RAID groups, to maximize the amount of usable space that is
gained in comparison to the number of disks that are used for parity. In the aggr2 example, six disks are added
to the aggregate, but only one more data disk adds capacity to the aggregate compared to adding three disks.
Other points to consider when adding disks:
- Adding disks that are owned by the same system
- Benefits of keeping your RAID groups homogeneous for disk size and speed
- Which types of disks can be used together
- Checksum rules when disks of more than one checksum type are in use
- Ensuring that the correct disks are added to the aggregate (the disk addition operation cannot be undone)
- How to add disks to aggregates from heterogeneous storage
- Minimum number of disks to add for best performance
- Number of hot spares that you need to provide for protection against disk failures
- Requirements for adding disks from multidisk carrier disk shelves
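The parity arithmetic in the aggr2 example can be checked with a short sketch. The RAID-DP group size of 16 and the fill levels used below are assumptions chosen to reproduce the slide's three-disk versus six-disk comparison.

```python
# Sketch of RAID-DP capacity math: each new RAID group consumes two
# parity disks before any added disk contributes usable capacity.
# Group sizes and fill levels here are assumptions for illustration.

def data_disks_gained(group_fill, raid_size, disks_to_add):
    """Count added disks that become data disks (RAID-DP: 2 parity/group)."""
    parity_per_group = 2
    groups = list(group_fill)          # current disk count per RAID group
    gained = 0
    while disks_to_add > 0:
        open_groups = [i for i, n in enumerate(groups) if n < raid_size]
        if not open_groups:
            if disks_to_add <= parity_per_group:
                break                        # too few disks left to seed a group
            groups.append(parity_per_group)  # new group: parity comes first
            disks_to_add -= parity_per_group
            continue
        groups[open_groups[0]] += 1     # existing group already has parity
        gained += 1
        disks_to_add -= 1
    return gained

# Adding 3 disks to a group with room gains 3 data disks; adding 6 disks
# when all groups are full forces a new group, gaining only 4.
print(data_disks_gained([16, 13], 16, 3))  # 3
print(data_disks_gained([16, 16], 16, 6))  # 4
```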


Topic for Discussion

What is an alternative to adding a small number of disks to an


aggregate when all current RAID groups are full?


TOPIC FOR DISCUSSION


Lesson 7

Automatic Space Management


LESSON 7: AUTOMATIC SPACE MANAGEMENT


Volume Autosize
Overview

Volume autosize allows a FlexVol volume to automatically grow or shrink.

You can specify a mode:
- Off: volume will not grow or shrink
- Grow: volume will automatically grow when space in the volume reaches a threshold
- Grow_shrink: volume will automatically grow or shrink in response to the amount of used space

Additionally, you can specify the following:
- Maximum to grow (default: 120% of volume size)
- Minimum to shrink (default is volume size)

[Diagram: vol01 growing and shrinking between its grow and shrink thresholds.]


VOLUME AUTOSIZE: OVERVIEW


Volume autosize, also called autogrow, can be enabled or disabled. If enabled, autogrow enables Data
ONTAP to automatically increase the size of the volume up to a maximum size that you predetermine. There
must be space available in the containing aggregate to support the automatic growth of the volume. Therefore,
if you enable autogrow, you must monitor the free space in the containing aggregate and add more when
needed.
Autogrow cannot be triggered to support Snapshot creation. If you attempt to create a Snapshot copy and
there is insufficient space on the volume, the Snapshot creation fails, even with autogrow enabled.
If autogrow is disabled, the size of your volume will remain the same.
For more information about using volume autosize, see the SAN Administration Guide.
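The mode behavior described above can be condensed into a small decision sketch. The threshold defaults below are placeholders standing in for the -grow-threshold-percent and -shrink-threshold-percent settings, not documented ONTAP values.

```python
# Sketch of the autosize modes described above. Threshold values are
# placeholders for -grow-threshold-percent / -shrink-threshold-percent.

def autosize_action(mode, size, used, max_size, min_size,
                    grow_threshold=85, shrink_threshold=50):
    """Return 'grow', 'shrink', or None for one evaluation of a volume."""
    if mode == "off":
        return None                     # volume never resizes
    pct_used = 100.0 * used / size
    if pct_used >= grow_threshold and size < max_size:
        return "grow"                   # both modes can grow
    if (mode == "grow_shrink" and pct_used <= shrink_threshold
            and size > min_size):
        return "shrink"                 # only grow_shrink can shrink
    return None
```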


Volume Autosize
Configuration

Data ONTAP 7-Mode


system1> vol autosize vol1 [-m size [k|m|g|t]]
                           [-i size [k|m|g|t]]
                           [-minimum-size size [k|m|g|t]]
                           [grow|grow_shrink|off|reset]
                           [on|off|reset]

(-m is maximum size; -i is increment)

Clustered Data ONTAP

c1::> volume autosize vol1 -is-enabled on
      -mode {off|grow|grow_shrink}
      -maximum-size <integer> [KB|MB|GB|TB|PB]
      -grow-threshold-percent <percent>
      -minimum-size <integer> [KB|MB|GB|TB|PB]
      -shrink-threshold-percent <percent>


VOLUME AUTOSIZE: CONFIGURATION


The increment size and percentage parameters are deprecated as of Data ONTAP 8.3. In a future release,
their values will be computed from the other autosize settings.


Snapshot Automatic Delete


Enable automatic deletion on a volume:

Data ONTAP 7-Mode


system> snap autodelete vol01 on

Clustered Data ONTAP


c1::> volume snapshot autodelete modify -vserver svm1
-volume vol01 -enabled true


SNAPSHOT AUTOMATIC DELETE


Snapshot automatic delete determines when or if Snapshot copies are automatically deleted. The option is set
at the volume level.


Snapshot Automatic Delete Options


Set autodelete options:

Data ONTAP 7-Mode


system> snap autodelete vol01 trigger snap_reserve

Clustered Data ONTAP


c1::> volume snapshot autodelete modify -vserver svm1
-volume vol01 -trigger snap_reserve

Options             Values
commitment          try, disrupt, destroy
defer_delete        scheduled, user_created, prefix, none
delete_order        oldest_first, newest_first
target_free_space   1-100
trigger             volume, snap_reserve, space_reserve
prefix              <string>
destroy-list        <text>


SNAPSHOT AUTOMATIC DELETE OPTIONS


When autodelete is enabled, options are available:
system> snap autodelete vol option value

[-commitment {try|disrupt|destroy}]
Specifies which Snapshot copies and LUN clones can be automatically deleted to reclaim space

[-defer-delete {scheduled|user_created|prefix|none}]
Determines the order in which Snapshot copies can be deleted

[-delete-order {newest_first|oldest_first}]
Specifies whether the oldest Snapshot copy and the oldest LUN clone, or the newest Snapshot copy and
the newest LUN clone, are deleted first

[-defer-delete-prefix <text>]
Specifies the prefix string for the -defer-delete prefix parameter. The option is not applicable
for LUN clones.

[-target-free-space <percent>]
Specifies the free space percentage at which the automatic deletion of Snapshot copies and LUN clones
must stop. Depending on the -trigger, Snapshot copies and LUN clones are deleted until you reach
the targeted free space percentage.

[-trigger {volume|snap_reserve|space_reserve}]
Specifies the condition that starts the automatic deletion of Snapshot copies and LUN clones


[-destroy-list <text>]
Clustered Data ONTAP only: Specifies a comma-separated list of data backing functions that are affected
if the automatic deletion of the Snapshot copy that is backing that service is triggered
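Putting the main options together, the autodelete behavior can be sketched as a loop that deletes copies in order until the target free space is reached. The fixed amount of space reclaimed per deleted copy is an assumption purely for the example.

```python
# Sketch combining -delete-order and -target-free-space: Snapshot
# copies are deleted in order until free space reaches the target.
# The fixed space reclaimed per copy is an assumption for illustration.

def autodelete(snapshots, free_pct, target_free_pct,
               delete_order="oldest_first", reclaim_pct_per_copy=5):
    """Return (deleted names, resulting free %) for one autodelete run.

    snapshots: list of (name, created) pairs; smaller created = older.
    """
    ordered = sorted(snapshots, key=lambda s: s[1],
                     reverse=(delete_order == "newest_first"))
    deleted = []
    for name, _created in ordered:
        if free_pct >= target_free_pct:
            break                       # target reached; stop deleting
        deleted.append(name)
        free_pct += reclaim_pct_per_copy
    return deleted, free_pct
```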


References
Clustered Data ONTAP Logical Storage Management Guide
TR-4148-0313: Operational Best Practice - Thin Provisioning
TR-3966: Compression and Deduplication for Clustered Data ONTAP,
Deployment and Implementation Guide


REFERENCES


What Ken Has Learned in This Module


I learned a lot about getting the most out of my
NetApp storage, and I learned that I can resolve
several space inefficiencies in my data center.


WHAT KEN HAS LEARNED IN THIS MODULE


Exercise

Module 9: Managing Storage Space

Estimated Time: 30 minutes


EXERCISE
Refer to your exercise guide.


Module 10
Data Protection


MODULE 10: DATA PROTECTION


This module is an introduction to Data Protection features available with clustered Data ONTAP. You will
learn the basic concepts of HA Pairs, SnapVault, SnapMirror, and MetroCluster.


Ken Asks a Question


My company suffered a hardware failure on
a legacy storage system that resulted in the
loss of a critical sales and inventory
database. I was confident that a restore
from the daily tape backup would recover
the database and normal operations would
resume. While all sales and inventory
applications were offline, the database was
successfully restored. However, the
restored data was 18 hours old. Nearly a full
day of work was lost and unrecoverable. Is
there a better solution?

KEN ASKS A QUESTION


Data Protection Challenges


Cost
- Multiple specialized devices to buy
- As data grows, multiple products must be scaled
- Multiple vendors and support contracts

Complexity
- Time-consuming deployments
- Multiple silos of management

Limited flexibility
- Restrictive: systems cannot be used for multiple purposes
- Difficult to use with virtual servers
- Not well suited to the on-demand, multi-tenancy needs of IT as a Service (ITaaS) and the cloud

[Diagram: separate silos of backup servers, backup storage, application-consistent backup, disaster recovery storage, archive servers, and compliance storage, each adding personnel and expenses.]



DATA PROTECTION CHALLENGES


Because traditional data protection solutions require multiple hardware and software products, they increase
cost and complexity. Typically, traditional solutions are implemented on a system-by-system basis, with
products added to the existing infrastructure as needed. A layer of servers, storage, and software is dedicated
to high-availability (HA) disaster recovery; and backup, archival, and compliance processes are created. In
many cases, because each component is dedicated to one task (such as to backup or archive), silos are created.
Because resources are not shared, costs increase, management overhead becomes more complex and time
consuming, and the risk of data loss increases.
Traditional data protection does not extend well to virtualization or the cloud. A traditional infrastructure
might work with physical servers, but it creates challenges within shared architectures. For example, when
traditional backup is deployed on virtual servers, production performance is impacted, and restore processes
become more complex. Within IT-as-a-Service (ITaaS) or cloud environments, data protection must be
integrated into the infrastructure. If data protection products must be added to individual user environments,
cost and provisioning time increase, and the advantages that virtualized desktop environments provide are
lost.


NetApp Data Protection Solutions


Feature
- HA Pairs: Provide continuous data access by transferring data service of an unavailable controller to the surviving partner
- SnapMirror: A disaster recovery solution that mirrors data to a different storage controller or cluster
- SnapVault: A data protection solution that provides extended and centralized disk-to-disk backup for storage systems
- MetroCluster: A self-contained HA disaster recovery solution that achieves continuous data availability for mission-critical applications

Key Benefits
- HA Pairs: Transparent to clients; multiple paths to all storage shelves; nondisruptive software upgrade
- SnapMirror: Reduced bandwidth utilization; thin replication; replication management across storage tiers with a single tool
- SnapVault: Drastically reduced backup times; smaller backup footprint; fast application and virtual machine recovery
- MetroCluster: Automated, transparent site-to-site failover; continuous availability and zero data loss; easy deployment


NETAPP DATA PROTECTION SOLUTIONS


NetApp provides many data protection solutions that fit every imaginable storage environment.
HA controller configuration provides continuous data access by transferring the data service of an unavailable
controller to the surviving partner. Transfer of service is transparent to clients, and redundant physical paths
to all storage shelves ensure nondisruptive operations (NDO) and nondisruptive software upgrades.
SnapMirror technology provides a disaster recovery solution that mirrors data to another storage controller,
cluster, or even to a remote location. SnapMirror software decreases bandwidth utilization by using native
network compression and thin replication to minimize the amount of data that is being transferred to
secondary and tertiary storage. It also provides a single tool to manage replication across storage tiers.
SnapVault software provides extended and centralized disk-to-disk backup for storage systems by backing up
a Snapshot copy to another controller, cluster, or to a remote location. With SnapVault software, you can
reduce backup times from days or hours to minutes or seconds. By transferring only new or changed blocks to
reduce bandwidth and disk capacity, SnapVault software provides a significantly smaller backup footprint
than traditional backup methods. SnapVault software also provides for fast recovery of application and virtual
machine data.


Topics for Discussion


What types of data protection challenges might you
encounter in a typical customer environment?
How might NetApp data protection technologies fit
into this environment?


TOPICS FOR DISCUSSION


Lesson 1

Storage Failover Management


LESSON 1: STORAGE FAILOVER MANAGEMENT


HA Pairs
A high-availability (HA) pair contains two nodes whose controllers are directly connected through an HA interconnect.
A node can take over its partner's storage to provide continued data service if the partner goes down.
HA pairs are components of the cluster, but only the nodes in the HA pair can take over each other's storage.


HA PAIRS
HA pair controllers are connected to each other through an HA interconnect. This connection allows one node
to serve data that resides on the disks of its failed partner node. Each node continually monitors its partner,
mirroring the data for each other's NVRAM or NVMEM. The interconnect is internal and requires no
external cabling if both controllers are in the same chassis.
Takeover is the process in which a node takes over the storage of its partner.
Giveback is the process in which that storage is returned to the partner.
HA pairs are components of the cluster in clustered Data ONTAP. Although both nodes in the HA pair are
connected to other nodes in the cluster through a cluster interconnect, only the nodes in the HA pair can take
over each other's storage.
Although single-node clusters are supported, clusters that contain two or more nodes must be arranged in HA
pairs. If you join two single nodes into a Data ONTAP cluster, you must configure the two nodes as an HA
pair.


Data ONTAP HA Differences


7-Mode nodes in the HA pair work independently:

A unique namespace
Resource pool (disks, aggregates, volumes)
Network access (interfaces)
MultiStore units, which are confined to a node

All clustered Data ONTAP nodes share resources:


Each storage virtual machine (SVM) has a unique namespace across
the cluster (SVMs are not owned by nodes.)
Aggregates are shared across the cluster for all SVMs
Volumes and logical interfaces (LIFs) are owned by an SVM but can
move to any node in the cluster


DATA ONTAP HA DIFFERENCES


Clustered Data ONTAP is built on the technology innovation that is Data ONTAP operating in 7-Mode.
However, clustering adds scale-out features that are not available in 7-Mode.
You can think of a 7-Mode storage system as a single stack, in that the node owns all of its resources. A
client can connect to shares, and hosts can map to LUNs. When you add a second controller to form an HA
pair, this second controller must be configured with its own resources. If the clients and hosts require
communication to the second node, they must be configured independently. An HA pair can maintain service
if a node fails. When a node fails in 7-Mode, the partner emulates the complete identity of the failed controller
and takes over all resources. It is important to create alternative partner network interfaces so that clients and
hosts have access to the failed-over resources.
With clustered Data ONTAP, HA pairs are a component of the cluster. They are used to keep the data
available to the cluster if a node fails. Nodes do not have identities as they do in 7-Mode, because all nodes
work together. When a node fails, the partner only needs to take care of storage resources (aggregates,
volumes, and LUNs) and not the identity of the failed node.
Because the network access (logical interfaces) can be on any node in the cluster, the handling of network
access is completely separate from the storage resources during the failover. NAS and management LIFs
might or might not fail over to the HA partner, if they fail over at all. (SAN LIFs do not fail over.)
When a node is first initialized, a root aggregate is created. In clustered Data ONTAP, the root aggregate
(aggr0) is assigned the controller failover (CFO) HA policy. Because this aggregate is required for a node to
operate, it fails over last (after all operations are complete and the node shuts down) and is the first to be
given back (so that the node can boot).
NOTE: Do not store data volumes on the root aggregate (aggr0). Volumes on CFO aggregates are not
available to clients or hosts during failover.


Data aggregates are treated a little differently. Data can still be served from the node that has taken over.
Additionally, the client might not even be mounted to the node in the HA pair that is failing over. When the
system creates an aggregate, it assumes that the aggregate is for data and assigns the storage failover (SFO)
HA policy to the aggregate. With the SFO policy, the data aggregates will fail over first and fail back last in a
serial manner.
Hardware-assisted takeover speeds up the takeover process by using a node's remote management device
(Service Processor [SP] or Remote LAN Module [RLM]) to detect failures and quickly initiate the takeover,
rather than waiting for Data ONTAP to recognize that the partner's heartbeat has stopped. Without hardware-assisted
takeover, if a failure occurs, the partner waits until it notices that the node is no longer giving a
heartbeat, confirms the loss of heartbeat, and then initiates the takeover.


Clustered Data ONTAP Takeover


Unplanned Event

- Nodes 1 and 2 own their root and data aggregates
- Node 1 fails
- Node 2 takes over the root and data aggregates

[Diagram: Node 1 (n1_aggr0, aggr1, aggr2) and Node 2 (n2_aggr0, aggr3), connected by the HA interconnect and the cluster interconnect.]

NOTE: Before 8.2, planned and unplanned events were identical.

CLUSTERED DATA ONTAP TAKEOVER: UNPLANNED EVENT


Clustered Data ONTAP 8.2 and later systems perform takeovers differently than past versions did. Earlier
than Data ONTAP 8.2, an unplanned event (for example, a node failure) and a planned event (for example, a
manual takeover initiated by an administrator) followed the same process. With clustered Data ONTAP 8.2,
planned events use a different process.
When a node fails, an unplanned event or automatic takeover is initiated. Ownership of data aggregates is
changed to the HA partner. After the ownership is changed, the partner can read and write to the volumes on
the partners data aggregates. Ownership of aggr0 disks remains with the failed node, but the partner takes
over control of the aggregate, which can be mounted from the partner for diagnostic purposes.


Clustered Data ONTAP Takeover


Planned Event (Aggregate Relocation)

- Nodes 1 and 2 own their root and data aggregates
- Manual takeover is initiated by using the storage failover takeover command
- Data aggregates change ownership to node 2 one at a time
- The root aggregate is failed over after all aggregates that contain user data are failed over to the partner node

[Diagram: Node 1 (n1_aggr0, aggr1, aggr2) and Node 2 (n2_aggr0, aggr3), connected by the HA interconnect and the cluster interconnect.]

NOTE: Aggregate relocation cannot move root aggregates.


CLUSTERED DATA ONTAP TAKEOVER: PLANNED EVENT (AGGREGATE RELOCATION)
Aggregate relocation operations take advantage of the HA configuration to move the ownership of storage
aggregates within the HA pair. Aggregate relocation occurs automatically during manually initiated takeover
and giveback operations to reduce downtime during maintenance. Aggregate relocation can be initiated
manually for load balancing. Aggregate relocation cannot move ownership of the root aggregate.
During a manually initiated takeover, before the target controller is taken over, ownership of each aggregate
that belongs to the target controller is moved to the partner controller, one aggregate at a time. When giveback
is initiated, the ownership is automatically moved back to the original node. To suppress aggregate relocation
during the takeover, use the -bypass-optimization parameter with the storage failover
takeover command.
The aggregate relocation requires additional steps if the aggregate is currently used by an infinite volume with
SnapDiff enabled. Aggregate relocation is also disabled on internal drives.


Clustered Data ONTAP Giveback


- Automatic or manual giveback is initiated with the storage failover giveback command
- n1_aggr0 is given back to node 1 to boot the node
- Data aggregate giveback occurs one aggregate at a time

[Diagram: Node 1 (n1_aggr0, aggr1, aggr2) and Node 2 (n2_aggr0, aggr3), connected by the HA interconnect and the cluster interconnect.]


CLUSTERED DATA ONTAP GIVEBACK


Giveback is initiated by the storage failover giveback command or by automatic giveback if the
system is configured for it. The node must have access to its root volume on aggr0 to fully boot. The CFO HA
policy ensures that aggr0 is given back immediately to enable the node to boot.
After the node has fully booted, the partner node returns ownership of the data aggregates one at a time until
giveback is complete. You can monitor the progress of the giveback with the storage failover
show-giveback command. I/O resumes for each aggregate when giveback is complete for that aggregate,
thereby reducing the overall outage window.
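A minimal giveback sequence might look like the following sketch, assuming node1 was previously taken over; the node name is a placeholder.

```
cluster1::> storage failover giveback -ofnode node1

cluster1::> storage failover show-giveback

cluster1::> storage failover modify -node node1 -auto-giveback true
```

The show-giveback command reports per-aggregate giveback progress, and the modify command enables automatic giveback for future events.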


HA Failover Summary

Failover events in 7-Mode: Partner node takes over the functions and disks of the failed node by creating an emulated storage system.

Unplanned event in clustered Data ONTAP: All aggregates fail over to the partner node in parallel.

Planned event in clustered Data ONTAP 8.1: All aggregates fail over to the partner node in parallel.

Planned event in clustered Data ONTAP 8.2 and 8.3: Each aggregate is failed over serially. The root aggregate is failed over after all aggregates that contain user data are failed over to the partner node.

Giveback in clustered Data ONTAP: Root aggregate is given back first. After a node is assimilated back into the cluster, each aggregate that contains data is given back serially to the partner node.

HA FAILOVER SUMMARY


Answer This Question

Select the statement that is true about giveback with storage failover in
clustered Data ONTAP.
1. The surviving partner simultaneously returns ownership of all the
aggregates to its partner node.
2. The surviving partner returns ownership of all the aggregates and the data LIFs to its partner.
3. The surviving partner returns ownership of the root aggregate to its
partner node first, and then returns the other aggregates.
4. I/O resumes only when all aggregates are returned to the partner node.

ANSWER THIS QUESTION


Lesson 2

SnapMirror Software


LESSON 2: SNAPMIRROR SOFTWARE


SnapMirror Technology

[Diagram: SnapMirror replication from a source volume to a destination volume.]

SNAPMIRROR TECHNOLOGY
SnapMirror copies are disk-to-disk online backups. Data protection mirror copies are simpler, faster, more
reliable, and easier to restore than tape backups are, although data protection mirror copies are not portable for
storing offsite. A typical use of data protection mirror copies is to put them on aggregates of SATA disks that
use RAID-DP technology and then mirror data to them daily during the least active time in the cluster.
Data protection mirror copies are not meant for client access, although they can be mounted into the
namespace by an administrator. Junctions cannot be followed in a data protection mirror copy, so access is
given to only the data that is contained in that data protection mirror copy, not to any other volumes that are
mounted to the source read/write volume.


SnapMirror Features

- Qtree replication
- Volume replication
- SVM-to-SVM replication
- Data protection mirror copies
- Load-sharing mirror copies
- Asynchronous mirroring
- Cluster-to-cluster replication
- Synchronous mirroring

SNAPMIRROR FEATURES
SnapMirror software provides asynchronous data protection mirror copies on the volume level.
Data ONTAP operating in 7-Mode adds support for replication at the qtree level, and also semi-synchronous
and synchronous mirror replication in real time.
Clustered Data ONTAP adds support for replication among SVMs and among clusters. It also adds the ability
to balance loads among nodes in a cluster with load-sharing mirrors.


Intercluster Replication

- Replication between clusters for disaster recovery
- Data transfers occur on the intercluster network

[Diagram: A read/write (RW) source volume replicates over intercluster LIF connections across the WAN to a data protection (DP) destination volume in the peer cluster.]

INTERCLUSTER REPLICATION
Intercluster SnapMirror replication, as opposed to traditional intracluster mirroring, gives you the flexibility to create an asynchronous SnapMirror volume on a cluster other than the source volume's cluster, for data protection. The replication is carried out across the WAN by using intercluster LIFs. You can use intercluster SnapMirror replication to store online copies of your data offsite, for disaster recovery.
To use intercluster SnapMirror replication, you must license the feature on both participating clusters.
You need a full mesh intercluster network to support node failover and volume moves of the source or
destination volumes. For the network to be full mesh, every intercluster LIF on every node in the cluster must
be able to connect to every intercluster LIF on every node in the peer cluster.
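A simplified intercluster replication setup might look like the following sketch. The IP address, SVM, and volume names are hypothetical, and prerequisite steps such as licensing and intercluster LIF creation are omitted here.

```
cluster1::> cluster peer create -peer-addrs 192.168.2.10

cluster2::> vserver peer create -vserver svm2 -peer-vserver svm1 -peer-cluster cluster1 -applications snapmirror

cluster2::> snapmirror create -source-path svm1:vol_src -destination-path svm2:vol_dst -type DP -schedule daily

cluster2::> snapmirror initialize -destination-path svm2:vol_dst
```

The relationship is created and initialized from the destination cluster, after which the daily schedule keeps the DP copy updated.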


Load-Sharing Mirror Volumes

[Diagram: The origin volume and its load-sharing mirror volume are both mounted in the namespace; client requests to /vol_b are served by the load-sharing mirror, while the read/write origin remains reachable at /.admin/vol_b.]

LOAD-SHARING MIRROR VOLUMES


In addition to mirroring data in order to protect it, clustered Data ONTAP provides mirroring for load
balancing. Copies of read/write volumes, which are called load-sharing (LS) mirrors, can be used to offload
read requests from their read/write volumes. Also, when a number of LS mirrors are created for a single
read/write volume, the likelihood of a read request being served locally, rather than traversing the cluster
network, is greatly increased, resulting in better read performance.
An LS mirror is mounted to the SVM's NAS namespace at the same point as its read/write volume. So, if a
volume has any LS mirrors, all client requests are sent, transparently to the clients, to an LS mirror, rather
than to the read/write volume. If the LS mirrors become out-of-sync with their read/write volumes, a client
read request gets out-of-date information. LS mirrors are ideal for volumes that are read frequently and
written infrequently.
To allow an NFS request to go to the read/write volume after it has been replicated to an LS mirror, an
additional mount must be done to use the /.admin path (for example, mount svm1:/.admin/vol_b
/mnt/vol_b_rw). For CIFS clients, an additional step is needed within the cluster itself. You must create
an additional CIFS share that uses /.admin rather than / for its path. The clients that require read/write
access must use that share.
When multiple LS mirrors exist for a volume, the node that receives the request gives preference to a local LS
mirror. If there is no local LS mirror, Data ONTAP uses a round-robin algorithm to choose which "remote"
LS mirror receives the request. For volumes with high read traffic, a good practice is to have an LS mirror on
every node so that all read requests are served locally. Mirroring of the root volumes of virtual servers is
highly recommended and is considered a best practice.
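The LS-mirror workflow described above can be sketched as follows. The SVM, volume, and aggregate names are placeholders, not values from this course.

```
cluster1::> volume create -vserver svm1 -volume vol_b_ls1 -aggregate n1_aggr1 -type DP -size 1g

cluster1::> snapmirror create -source-path svm1:vol_b -destination-path svm1:vol_b_ls1 -type LS

cluster1::> snapmirror initialize-ls-set -source-path svm1:vol_b

cluster1::> snapmirror update-ls-set -source-path svm1:vol_b
```

After the LS set is initialized, periodic update-ls-set runs (manual or scheduled) keep the mirrors in sync so that client reads do not return stale data.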


Topics for Discussion

What are the performance advantages of using LS mirrors?

How are LS mirrors different from data protection mirrors?


TOPICS FOR DISCUSSION


Lesson 3

SnapVault Software


LESSON 3: SNAPVAULT SOFTWARE


SnapVault Backups

- Reduce backup times from hours or days to minutes
- Provide 100% success rates for backup reliability
- Reduce disk capacity requirements by 90% or more
- Simplify management across enterprise applications
- Minimize network traffic

SNAPVAULT BACKUPS
SnapVault software leverages block-level incremental replication for a reliable, low-overhead backup
solution. It provides efficient data protection by copying only the data blocks that have changed since the last
backup, instead of copying entire files. As a result, you can back up more often while reducing your storage
footprint because no redundant data is moved or stored.
With direct backups between NetApp systems, disk-to-disk vault backups minimize the need for external
infrastructure and appliances. By default, vault transfers retain storage efficiency on disk and over the
network, further reducing network traffic. You can also configure additional deduplication, compression, or
both on the destination volume. However, if additional compression is configured on the destination volume,
storage efficiencies from source to destination are not retained over the network.
The key advantages of vault backups for clusters include reduction of backup times from hours or days to
minutes, 100% success rates for backup reliability, reduction of disk capacity requirements by 90% or more,
simplified management across enterprise applications, and minimized network traffic.
For more information about backing up FlexVol volumes to a backup vault, see the Clustered Data ONTAP
Data Protection Guide.
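A basic vault relationship can be sketched as follows. The SVM, volume, schedule, and policy names are placeholders; XDP is the vault relationship type, and XDPDefault is a built-in vault policy.

```
cluster2::> snapmirror create -source-path svm1:vol1 -destination-path svm2:vol1_vault -type XDP -policy XDPDefault -schedule daily

cluster2::> snapmirror initialize -destination-path svm2:vol1_vault

cluster2::> snapmirror show -destination-path svm2:vol1_vault
```

As with intercluster SnapMirror relationships, the vault relationship is created, initialized, and monitored from the destination cluster.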


Topics for Discussion

- What are the differences between using SnapMirror software or SnapVault software for data protection?

TOPICS FOR DISCUSSION


Comparing SnapVault and SnapMirror Functions

SnapMirror functions:
- SnapMirror software provides data protection for disaster recovery.
- SnapMirror software can asynchronously copy the source FlexVol volume to a disaster recovery replica.
- If the source FlexVol volume is lost or destroyed, clients can connect to the mirror image of the source data.

SnapVault functions:
- SnapVault software accumulates copies of the source volume for archival purposes.
- If the source FlexVol volume is lost, clients can restore data from the secondary to primary storage.
- The read-only SnapVault copy can be rendered writable only by creating a FlexClone volume copy.

COMPARING SNAPVAULT AND SNAPMIRROR FUNCTIONS


SnapMirror software and SnapVault software create data protection replications. SnapMirror software is used
for disaster recovery and maintains only one read-only replica of the source volume. If a disaster occurs at
your data center, you break the SnapMirror relationship to render the copy writable and quickly connect your
client servers to the now writable SnapMirror replica.
SnapVault software is used for creating read-only archival copies of your source volume. If a disaster occurs
at your data center, you can select one of the read-only backup versions and restore the data from secondary
to primary storage. The read-only SnapVault copy can be rendered writable only by creating a FlexClone
volume copy of the SnapVault copy and splitting the cloned volume away from the parent Snapshot copy.
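Rendering a SnapVault copy writable, as described above, might be sketched like this. The SVM, volume, and Snapshot copy names are hypothetical.

```
cluster2::> volume clone create -vserver svm2 -flexclone vol1_rw -parent-volume vol1_vault -parent-snapshot daily.2015-01-15_0010

cluster2::> volume clone split start -vserver svm2 -flexclone vol1_rw
```

The clone is writable immediately; the optional split makes it an independent volume that no longer depends on the parent Snapshot copy.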


Lesson 4

MetroCluster Software


LESSON 4: METROCLUSTER SOFTWARE


MetroCluster Software for Clusters

- The clustered Data ONTAP operating system provides NDO within a data center.
- MetroCluster software provides business continuity and continuous availability beyond a data center.
- MetroCluster is available in Data ONTAP 8.3.

[Diagram: Cluster A in Data Center A connected to Cluster B in Data Center B in a MetroCluster configuration.]

METROCLUSTER SOFTWARE FOR CLUSTERS


With the release of Data ONTAP 8.3, the MetroCluster feature becomes available to the clustered Data ONTAP operating system. The feature extends NDO across clusters and across data centers. With the clustered Data ONTAP 8.3 MetroCluster feature, if a failure occurs, the surviving nodes continue to serve data, even if the nodes and their clusters are located in different data centers.


Benefits of MetroCluster for Target Customers

Existing 7-Mode MetroCluster customers:
- Local HA failover
- Ability to share Inter-Switch Links (ISLs) to consolidate two MetroCluster clusters to one instance (saves 50% on operational expenditures [opex])
- Better performance from the NetApp FAS8000 platforms
- Ability to use clustered Data ONTAP to achieve NDO and reduce opex and capital expenses (capex)

New MetroCluster customers:
- Near-zero recovery point objective (RPO) solution
- NDO within and across data centers

Existing clustered Data ONTAP customers:
- Ability to move applications to MetroCluster configurations for the highest level of availability

BENEFITS OF METROCLUSTER FOR TARGET CUSTOMERS


Customers who move from Data ONTAP operating in 7-Mode to Data ONTAP 8.3 with MetroCluster gain
HA failover, the ability to share Inter-Switch Links (ISLs), improved performance from the NetApp FAS8000
platforms, and the NDO that clustered Data ONTAP provides.
Customers who implement MetroCluster software as a new solution gain a near-zero recovery point objective
(RPO) solution and NDO within and across data centers.
Clustered Data ONTAP customers who add MetroCluster functionality gain the ability to move applications
to MetroCluster configurations for the highest level of availability.


Ken Asks a Question

- When should I consider MetroCluster for Data ONTAP 8.3 solutions?
- How far apart can supported data centers be?

KEN ASKS A QUESTION


A solution that uses MetroCluster should be considered when a customer requires a very low RTO or RPO. A common way to determine whether MetroCluster will be suitable is to ask, "What is the distance between the data centers?"
If the answer is less than 200 kilometers (or 125 miles), MetroCluster is supported, and customers should consider using the feature.


Answer This Question

Which MetroCluster configuration does Data ONTAP 8.3 support?

1. Stretch

2. Fabric
3. Hybrid
4. Cloud


ANSWER THIS QUESTION


Answer This Question

In a Data ONTAP 8.3 MetroCluster configuration, both two-node clusters can function as _____.

1. Independent clusters, each supporting its own workload
2. Either an active cluster or a failover target, but not both simultaneously
3. The failover cluster for a tertiary MetroCluster failover configuration
4. A failover target for a 7-Mode storage system

ANSWER THIS QUESTION


Lesson 5

NetApp Data Protection Interfaces

LESSON 5: NETAPP DATA PROTECTION INTERFACES


Additional NetApp Data Protection Interfaces

Data protection interfaces:
- OnCommand Unified Manager
- SnapProtect
- SnapManager
- SnapDrive

ADDITIONAL NETAPP DATA PROTECTION INTERFACES


This course describes data protection features that are configurable from the clustershell and OnCommand System Manager. Setting up and administering data protection with Data ONTAP built-in replication is described in the Data ONTAP Data Protection Administration two-day instructor-led (ILT) course. NetApp has several additional tools to implement your data protection strategies. Each of these tools is discussed in detail in other NetApp University courses.
SnapDrive for Windows and SnapDrive for UNIX are discussed in the Data ONTAP SAN Implementation and Data ONTAP Data Protection Administration courses. Both are ILT courses. With SnapDrive, you can automate storage-provisioning tasks and simplify the process of creating error-free, host-consistent Snapshot copies of data. For information about SnapDrive software, see the SAN Implementation and other SnapDrive courses that are available from NetApp University.
SnapManager products are available to deploy application-consistent backups: SnapManager for Microsoft Exchange Server, SnapManager for Oracle, SnapManager for Hyper-V, SnapManager for Microsoft SQL Server, and SnapManager for Microsoft Office SharePoint Server. These products are discussed in individual SnapManager courses as well as in Data ONTAP Data Protection Administration.
The SnapProtect feature provides integrated, application-consistent backup, including NDMP and tape management. You can learn more in several SnapProtect web-based and ILT courses that are available on the NetApp University learning management system (LMS), including the SnapProtect Solution Administration ILT course.


Answer This Question

Which two NetApp data protection products create application-consistent backups? (Choose two.)

1. SnapDrive
2. SnapManager
3. SnapMirror
4. SnapProtect
5. SnapVault

ANSWER THIS QUESTION


Ken Asks a Question

Where can I learn about advanced topics like configuring intercluster replication, fan-in and fan-out strategies, and NetApp data protection interfaces?

- Data ONTAP Data Protection Administration two-day instructor-led course
- SnapProtect Solution Administration ILT course

KEN ASKS A QUESTION


References

- Clustered Data ONTAP Data Protection Guide
- MetroCluster Installation and Configuration Guide
- MetroCluster Management and Disaster Recovery Guide
- Clustered Data ONTAP SVM Root Volume Protection Express Guide
- Clustered Data ONTAP Cluster Peering Express Guide
- TR-4183i: SnapVault Best Practices Guide for Clustered Data ONTAP
- TR-4015: SnapMirror Configuration and Best Practices Guide for Clustered Data ONTAP

REFERENCES


What Ken Has Learned in This Module

I'm now familiar with some challenges that storage administrators face in ensuring that data is readily available. I've learned how Data ONTAP implements storage failover, and I can configure a simple SnapMirror and SnapVault replication.

WHAT KEN HAS LEARNED IN THIS MODULE


Exercise

Module 10: Data Protection
Estimated Time: 30 minutes

EXERCISE
Please refer to your exercise guide.


Module 11

Monitoring Your Storage System

MODULE 11: MONITORING YOUR STORAGE SYSTEM


In this module you learn about some basic tools to monitor your cluster. You also learn how to manage log
and core files and how to upload core files to NetApp technical support.

11-1

Clustered Data ONTAP Administration: Monitoring Your Storage System

2015 NetApp, Inc. This material is intended only for training. Reproduction is not authorized.

NetApp University - Do Not Distribute

Ken Asks a Question


What are some tools that can help
keep my cluster running smoothly
and efficiently?


KEN ASKS A QUESTION


Lesson 1

Monitor Your Cluster


LESSON 1: MONITOR YOUR CLUSTER


OnCommand Data Management Software

- OnCommand System Manager: provides simple device management.
- OnCommand Unified Manager: provides storage management at scale.
- OnCommand Workflow Automation: automates and standardizes processes.
- OnCommand Performance Manager: provides performance monitoring and alerting.
- OnCommand Insight: provides multivendor storage configuration, performance, and capacity management, along with enterprise reporting for capacity planning, showback, and chargeback.

ONCOMMAND DATA MANAGEMENT SOFTWARE


NetApp OnCommand storage management software enables you to improve efficiency and productivity, gain
valuable operational savings, and take advantage of all the benefits of clustered Data ONTAP.


Ken Asks a Question


Is there a single interface that I can
use to monitor availability, capacity,
protection, and performance on my
cluster?


KEN ASKS A QUESTION


OnCommand Unified Manager


ONCOMMAND UNIFIED MANAGER


OnCommand Unified Manager provides comprehensive monitoring of clustered Data ONTAP systems to
help manage the availability, capacity, protection, and performance risks of the storage systems. Unified
Manager provides IT efficiency savings by unifying and automating storage operations, provisioning, and
protection for physical resources (such as resource pools) and virtual resources (such as storage virtual
machines [SVMs]). If issues occur in the cluster objects that are monitored, Unified Manager uses events to
notify you about such issues, so that you can take the required corrective action.
OnCommand Unified Manager 6.0 and later releases support clustered Data ONTAP.


Unified Manager: Discovery Process

[Diagram: The Unified Manager server pings the cluster management logical interface (LIF) at the cluster management IP address. If the management LIF responds, the cluster and its objects (nodes, aggregates, and volumes) are added to the Unified Manager database. If it does not respond, the cluster is not added; check LIF reachability, or check whether nodes are down.]

UNIFIED MANAGER: DISCOVERY PROCESS


Understanding the discovery process helps you to manage your organization's clusters and their objects. After you add a cluster to Unified Manager, the server discovers the cluster objects and adds them to its database.
The default monitoring interval is 15 minutes. When you add a cluster to the Unified Manager server, it takes up to 15 minutes for the cluster details to be displayed in the Unified Manager UI.


Unified Manager Dashboard


UNIFIED MANAGER DASHBOARD


The dashboard has four panes:

Availability
Capacity
Performance
Protection

The dashboard consists of two areas:

The Quick Takes area provides information about the health of your storage objects.
The Unresolved Incidents and Risks area displays events that are categorized as incidents and risks.

Incidents refer to issues that have already affected the storage objects.
Risks refer to issues that may impact the storage objects.

You can integrate OnCommand Workflow Automation with Unified Manager to execute workflows for your
storage classes. You can also monitor SVMs that have an infinite volume but do not have storage classes.
When Unified Manager is integrated with Workflow Automation, the reacquisition of Workflow Automation
cached data is triggered.
NOTE: A storage class is a definition of aggregate characteristics and volume settings. You can define
storage classes, and you can associate one or more storage classes with an infinite volume.


Manage Annotations


MANAGE ANNOTATIONS
Annotation types enable you to annotate storage objects based on the priority of the data that they contain.
You can annotate volumes, clusters, and SVMs. Data-priority is the default annotation type; it has the values
mission-critical, high, and low. You can create custom annotations. You can also view custom annotation
information in an alert email and in the Event details page and Object details page.


Managing Thresholds, Events, and Alerts


MANAGING THRESHOLDS, EVENTS, AND ALERTS


By using Unified Manager, you can configure global threshold values for all your aggregates and volumes to
track any threshold breaches.
Events are notifications that are generated automatically when a predefined condition occurs or when an
object crosses a threshold. These events enable you to take action to prevent issues that can lead to poor
performance and system unavailability. Events include an impact area, severity, and impact level. Events are
categorized by the type of impact area such as availability, capacity, configuration, or protection.
You can create alerts to notify you when a particular event is generated. You can create alerts for a single resource, for a group of resources, or for events of a particular severity type, and you can specify the frequency with which you want to be notified.


Integrated Performance Monitoring


INTEGRATED PERFORMANCE MONITORING


A connection between the Performance Manager server and the Unified Manager server enables you to use
Unified Manager web UI to monitor the performance issues that are detected by the Performance Manager
server. When you drill down from the performance issues on the Unified Manager dashboard, you are taken to
the OnCommand Performance Manager UI.

Pairing with Workflow Automation

PAIRING WITH WORKFLOW AUTOMATION


Integration of Unified Manager with OnCommand Workflow Automation is required to configure SnapMirror
and SnapVault relationships. Unified Manager uses the Workflow Automation API to offload the creation of
SnapMirror or SnapVault relationships.

Reporting

REPORTING
Unified Manager reports display the current status of the storage so that you can make important decisions,
such as storage procurement based on current usage. Reports provide a full view of storage objects, such
as a list of volumes, disk shelves, and aggregates. You can run reports, delete reports, create custom reports,
save a customized report, and import reports. Reports can be scheduled and shared with multiple recipients.

Lesson 2

Event Management

LESSON 2: EVENT MANAGEMENT


Notifications of Events
The system:
 Writes events to the event log
 Sends and routes notifications of events
 Collects events cluster-wide
 Can view events of all nodes from any node

Display the contents of your event log:
system> ems log dump
c1::> event log show

Each event contains a:
 Message name
 Severity level
 Description
 Corrective action, if applicable

NOTIFICATIONS OF EVENTS
The system collects and displays information about events that occur on your cluster. You can manage the
event destination, event route, mail history records, and SNMP trap history records. You can also configure
event notification and logging.


Event Severity Levels

The event severity levels, from most to least severe, are:
 EMERGENCY
 ALERT
 CRITICAL
 ERROR
 WARNING
 NOTICE
 INFORMATIONAL
 DEBUG

EVENT SEVERITY LEVELS


The event severity levels are defined as follows. The informational and debug levels apply to clusters only.

 EMERGENCY: The system is unusable.
 ALERT: An action must be performed immediately to prevent system failure.
 CRITICAL: A critical condition occurred.
 ERROR: An error condition occurred.
 WARNING: A warning condition occurred.
 NOTICE: A normal but significant condition occurred.
 INFORMATIONAL: An informational message was sent.
 DEBUG: A debugging message was sent.
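As an illustration of how this ordering can be applied, the following Python sketch filters a list of events at or above a chosen severity. The numeric ranking and the event records are hypothetical examples, not Data ONTAP internals.

```python
# Severity levels from most to least severe, as listed above.
SEVERITY_ORDER = ["EMERGENCY", "ALERT", "CRITICAL", "ERROR",
                  "WARNING", "NOTICE", "INFORMATIONAL", "DEBUG"]

def at_least(events, threshold):
    """Return the events whose severity is at or above the given threshold."""
    cutoff = SEVERITY_ORDER.index(threshold)
    return [e for e in events if SEVERITY_ORDER.index(e["severity"]) <= cutoff]

# Hypothetical event records for illustration only.
events = [
    {"name": "coredump.save.complete", "severity": "INFORMATIONAL"},
    {"name": "sk.panic", "severity": "EMERGENCY"},
    {"name": "raid.disk.missing", "severity": "ERROR"},
]
print([e["name"] for e in at_least(events, "ERROR")])
# ['sk.panic', 'raid.disk.missing']
```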


Event Notifications
A three-step process to set up:
1. Modify the event configuration to contain the mail host and other attributes.
2. Create at least one event destination.
3. Modify routes to use a destination.

Examples:
c1::> event config modify -mailfrom bob@learn.local
-mailserver xx.xx.xx.xx
c1::> event destination create -name crits -mail
tom@learn.local
c1::> event route modify -messagename coredump*
-destinations crits


EVENT NOTIFICATIONS
On clustered Data ONTAP, you can further configure the system to send notifications to certain destinations
when an event of interest occurs on the cluster. Unlike AutoSupport messages, the event message is only a
notification rather than complete system diagnostic information. The notification can be associated with any
event.
The event route associates a given event message with an event destination. You modify a message's
destination value to indicate the email address to which the notification should be sent. You can perform this
action on all notifications at the same time by using a regular expression when specifying the event name in
the event route modify command.
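The following sketch illustrates how a shell-style pattern such as coredump* selects a group of message names, using Python's fnmatch module as a stand-in for the pattern matching that event route modify performs. The message names are made up, and Data ONTAP's own matching rules may differ.

```python
# Illustrative wildcard matching over hypothetical EMS message names.
import fnmatch

messages = ["coredump.ammap.saved", "coredump.spm.abort", "raid.disk.missing"]

# Select every message name that begins with "coredump", as a pattern
# like "coredump*" would in an event route modify command.
matched = [m for m in messages if fnmatch.fnmatch(m, "coredump*")]
print(matched)
# ['coredump.ammap.saved', 'coredump.spm.abort']
```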


Event Destinations
An event destination is a named combination of any or all of
the following:
The email destination
The SNMP destination
The syslog destination

The named destination can be associated with specific event
messages through the use of an event route.

c1::> event route modify

EVENT DESTINATIONS
An event destination is a named combination of the email destination, the SNMP destination, or the syslog
destination. You can associate a named destination with a specific event message by using an event route.


Event Routes
Are associations between event messages and event
destinations
Allow for frequency thresholds and time thresholds:
Prevent floods of event notifications
Stop notifications for a specific number of iterations or for a
period of time (for example, if you know that a disk is bad and you
want to be reminded only once a day)


EVENT ROUTES
Event routes have nothing to do with network routes but are merely associations between event messages and
receivers of notifications that are associated with the messages.

Lesson 3

Determine System Health

LESSON 3: DETERMINE SYSTEM HEALTH


Try This Task


Try this task on cluster1:
Are all the nodes healthy?
c1::> cluster show
c1::> node show
Is the cluster network okay?
c1::> set advanced
c1::*> cluster ping-cluster
c1::*> set admin


TRY THIS TASK


Data Storage
Are the aggregates online?
 Data ONTAP 7-Mode: system> aggr status
 Clustered Data ONTAP: c1::> storage aggregate show -state !online

Are any disks broken or being reconstructed?
 Data ONTAP 7-Mode: system> disk show
 Clustered Data ONTAP: c1::> storage disk show -state broken

Are the volumes online?
 Data ONTAP 7-Mode: system> vol status
 Clustered Data ONTAP: c1::> volume show -state !online

Try the clustered Data ONTAP commands on your cluster.

DATA STORAGE
For the most part, these commands are self-explanatory. Most show commands provide a view of what's
happening in a particular area of the cluster. Also, most show commands have some powerful query
capabilities which, if you take the time to learn them, can help you to pinpoint potential problems.
In the clustered Data ONTAP command volume show -state !online, the exclamation point means
not (negation). Therefore, this command shows all volumes that do not have a state of online. Because
you'll want to know about other states that exist, it is important to use !online rather than offline.
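The semantics of the !online query can be sketched as a simple negation filter; the volume records below are invented for illustration.

```python
# Hypothetical volume records; only the filtering logic matters here.
volumes = [
    {"name": "vol0", "state": "online"},
    {"name": "projA", "state": "offline"},
    {"name": "projB", "state": "restricted"},
]

# !online means "anything other than online", so restricted volumes are
# reported too -- not just offline ones.
not_online = [v["name"] for v in volumes if v["state"] != "online"]
print(not_online)
# ['projA', 'projB']
```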


Storage Failover
Is storage failover happy?
Data ONTAP 7-Mode
system> cf status

Clustered Data ONTAP


c1::> storage failover show
c1::> storage failover show -instance


STORAGE FAILOVER
When the aggregates of one node fail over to the HA partner node, the aggregate that contains the vol0
volume of that node goes, too. Each node needs its vol0 to boot, so when the rebooted node begins to boot, it
signals the partner to do a giveback of that one aggregate and then waits for that to happen. If storage failover
(SFO) is working properly, giveback happens quickly, the node has its vol0 and can boot. When it gets far
enough in its boot process, the rest of the aggregates are given back. If problems exist, you probably see the
rebooted node go into a "waiting for giveback" state. If this happens, it is possible that its aggregates are stuck
in a transition state between the two nodes and might not be owned by either node. In this situation, contact
NetApp Global Support.


Networking
Data ONTAP 7-Mode
Check network interfaces:
 system> ifconfig -a
 system> ifstat -a

Clustered Data ONTAP
Are all the ports okay?
 c1::> network port show
Are all the LIFs okay and home?
 c1::> network interface show
 c1::> net int show -is-home false

Try these commands on your cluster.

NETWORKING
You can verify that all the network configuration, the ports, and the interfaces are functioning properly.
On clustered Data ONTAP, if the physical ports are fine, verify that the LIFs are working properly and note
which ones are home and which ones aren't. If the LIFs are not home, it doesn't mean that a problem
exists, but this condition might give you a sense of what is happening.


Answer This Question


From the output of the command network interface
show, the LIF that has a value of false under the Is Home
column signifies that it _______. (Choose all that apply.)
1. Is disconnected
2. Has failed over
3. Is administratively disabled
4. Has been migrated
5. Is unhappy

ANSWER THIS QUESTION


Lesson 4

Display Utilization and Performance Information

LESSON 4: DISPLAY UTILIZATION AND PERFORMANCE INFORMATION


Ken Asks a Question


I received a phone call recently, telling me
that some NFS clients were experiencing
slow response times from the storage
system. Everything appeared to be
running and stable but just not performing
at the level that the clients are
accustomed to. What commands are
available to help me to determine where a
bottleneck might be occurring?


KEN ASKS A QUESTION


Data ONTAP Counter Manager


Object-instance-counter hierarchy:
 7-Mode: stats command
 Clustered Data ONTAP: statistics command

[Diagram: the volume object has instances vol1, vol2, and vol3; each instance has an avg_latency counter (54.6us, 84.1us, and 53.8us, respectively).]

DATA ONTAP COUNTER MANAGER


Counter Manager is a thin layer that is built into the Data ONTAP architecture; it provides a single view of
Data ONTAP performance counters and a standard performance API set for all clients. Clients include
Manage ONTAP, the AutoSupport support tool, Windows Performance Monitor (perfmon), SNMP, and the
CLI. Counter Manager provides a complete set of performance metrics that supply statistics to analyze
configuration and performance issues.
To look at any object-instance and the corresponding counter, use the stats command in 7-Mode, and use
the statistics command in clustered Data ONTAP.


Command Syntax
Using Counter Manager commands:
Collect a sample; use wildcards to collect all objects or instances:
Clustered Data ONTAP
c1::> statistics start | stop
c1::> statistics samples show

7-Mode
system> stats start
system> stats stop

Evaluate the current sample:
 Clustered Data ONTAP
 c1::> stat show
 7-Mode
 system> stats show

Instantaneous statistics: display counter values multiple times at a fixed
interval.
 7-Mode
 system> stats show -i 1
 Clustered Data ONTAP
 c1::> stat show-periodic

COMMAND SYNTAX
To begin using Counter Manager, use the start and stop options to collect a measured sample of data. You can
use object and instance parameters to narrow down the data that is being collected, or you can use wildcards
with these parameters to collect all objects or instances. When you have a sample to work with, you can use
the show commands with various object, instance, and counter values to further filter results. In clustered
Data ONTAP, you can use wildcards to specify subsets or thresholds of values for objects, instances, and
counters.
You can also simply collect instantaneous Snapshot copies of current statistics by using the -i parameter in
7-Mode or the show-periodic option in clustered Data ONTAP. Use ? and tab completion, or view the
man pages for more details.


Displaying Counters in Data ONTAP


Organized into objects, instances, and counters:
An object has one or more instances
An instance has one or more counters

Example:
Object = volume
Instance = accountsPayVol
Counter = total_ops


DISPLAYING COUNTERS IN DATA ONTAP


Displaying counters in Data ONTAP consists of displaying objects, instances, and counters as a single entity.
An object has one or more instances, and an instance has one or more counters. For example, to display the
total operations per second of the accounts payable volume, you would specify an object of volume, an
instance of accountsPayVol, and a counter of total_ops.
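The object-instance-counter hierarchy can be pictured as a nested lookup, as in this sketch. The instance names and counter values are illustrative, not real Counter Manager data.

```python
# A nested dictionary modeling object -> instance -> counter.
counters = {
    "volume": {
        "accountsPayVol": {"total_ops": 1278194, "avg_latency": 54.6},
        "vol2": {"total_ops": 48211, "avg_latency": 84.1},
    }
}

def lookup(obj, instance, counter):
    """Resolve one counter value by walking the three-level hierarchy."""
    return counters[obj][instance][counter]

print(lookup("volume", "accountsPayVol", "total_ops"))
# 1278194
```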


Filtering Statistics
Show groups of counters or individual counters
Can show multiple iterations

Operate on most recent sample


Data ONTAP 7-Mode
system> stats show object[:instance[:counter]]
system> stats show volume:vol1:total_ops
volume:vol1:total_ops:1278194
Clustered Data ONTAP
c1::> statistics show -object [object] -instance [instance]
-counter [counter]
c1::> statistics show -object volume -instance fcp_vol
-counter total_ops
Node: node01

Object.Instance.Counter                          Value         Delta
------------------------------------------------ ------------- -------------
volume.fcp_vol.total_ops                         1278194       0/s:272s

FILTERING STATISTICS
Displaying statistics in Data ONTAP 7-Mode requires the use of the stats show command on the most
recently collected sample. The stats show command shows groups of counters or individual counters and
can show multiple iterations of counters. The stats show command can be very useful for protocol
latencies.


Identify Objects and Counters


Describe the meaning of a counter and its units
Can describe all counters in all objects
Data ONTAP 7-Mode
system> stats explain counters nfsv3 nfsv3_read_ops
Name: nfsv3_read_ops
Description: Total observed NFS V3 read operations per second
Properties: rate
Unit: per_sec

Clustered Data ONTAP


cluster::> stat catalog counter show -object nfsv3 -counter
nfsv3_read_ops
(statistics catalog counter show)
Object: nfsv3
Counter                     Description
--------------------------- ----------------------------------------------
nfsv3_read_ops              Total observed NFSv3 read operations per second.

IDENTIFY OBJECTS AND COUNTERS


The stats explain and statistics catalog counter show commands describe the meaning
of a counter and its units. You can describe all counters in all objects if you do not enter the object name or
counter name options.


Lesson 5

Manage Logs and Core Files


LESSON 5: MANAGE LOGS AND CORE FILES


System Logs
Log messages can be sent to:
The console
7-Mode: /etc/messages file
Clustered Data ONTAP: /mroot/etc/log/mlog/messages.log file

You can access the message files by using:
 7-Mode: an NFS or CIFS client
 Clustered Data ONTAP: the debug log command
 OnCommand System Manager

Use the debug log command to browse the messages.log file.

SYSTEM LOGS
The system log contains information and error messages that the storage system displays on the console and
logs in message files. In 7-Mode, use an NFS or CIFS client to access the /etc/messages file. In clustered Data
ONTAP, use the debug log command to access the /mroot/etc/log/mlog/messages.log file. You can use
OnCommand System Manager to access system logs in either 7-Mode or clustered Data ONTAP.


Clustered Data ONTAP Log Files


During a boot, all logging goes to /var/log/, which resides
on the CompactFlash boot device
After vol0 is available, logging goes to
/mroot/etc/log/mlog/
Many processes have their own log
Log files are rotated; some old ones are kept

Event logs reside at /mroot/etc/log/ems*


CLUSTERED DATA ONTAP LOG FILES


While a node is booting, and until the vol0 volume is available, all logging goes to /var/log/. After vol0 is
available, the logging goes to /mroot/etc/log/. Many processes have their own log; for example,
notifyd (the AutoSupport process). Other log files include mgwd.log, vldb.log, and vifmgr.log.
Log files are rotated every time that the particular process starts, and several previous log files are kept for
each process; for example, vldb.log.1 and vldb.log.2.
Event messages are available to be viewed through the UI. The data module, network module, and
management event log messages go to the event log. The event log is rotated once a week at the same time
that the AutoSupport messages are sent out.
The tail UNIX command displays the last few lines of a file on the console. The -f flag causes the output
to be refreshed continuously as new data is written to that file. Using tail -f on a log file is an effective
way to watch the logging as it happens. For example, if you run a command in the UI and get an error, you
can open another window to that node, run the tail -f command on the log file that you think might provide
information for this error, and then go back to the other window or browser and run the UI command again.
This approach helps to establish the cause-and-effect relationship between a UI command and a log message.
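What tail does can be sketched as reading the last few lines of a stream; tail -f additionally keeps the file open and prints new lines as they arrive. This Python stand-in uses an in-memory stream rather than a real log file.

```python
# Minimal illustration of tail -n semantics on a text stream.
import io

def tail(f, n=3):
    """Return the last n lines of an open text stream (what tail -n shows)."""
    return f.readlines()[-n:]

# A stand-in for a log file such as mgwd.log.
log = io.StringIO("line1\nline2\nline3\nline4\n")
print(tail(log, 2))
# ['line3\n', 'line4\n']
```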


Core Files
Data ONTAP 7-Mode
Located in /etc/crash
Named as core.<n>.nz
Clustered Data ONTAP
User-space core files are:

Located in /mroot/etc/crash/cores
Named as <procname>.core.<pid>

Kernel core files are:


Located in /mroot/etc/crash/kcores
Named as core.<sysid>.<panic_date>.<panic_time>.nz

CORE FILES
7-Mode core files are stored in /etc/crash. The n in the file name is a number that can be matched with a
date and time of panic based on the panic message in the /etc/messages log file.
User-space core dumps are named according to the process name (for example, mgwd) and use the process ID
(pid) of the instance of the process that generates the core file.
Kernel core dumps include the sysid, which is not the node name but a numerical representation of the node.
The date and time in the core dump name indicate when the panic occurred.
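A sketch of parsing the two naming schemes described above; the file names below are examples, and the parser is illustrative rather than a NetApp tool.

```python
# Distinguish kernel cores (core.<sysid>....nz) from user-space cores
# (<procname>.core.<pid>) by the position of the "core" token.
def parse_core(name):
    parts = name.split(".")
    if parts[0] == "core":  # kernel core dump
        return {"kind": "kernel", "sysid": parts[1]}
    if "core" in parts:     # user-space core dump
        i = parts.index("core")
        return {"kind": "user", "procname": ".".join(parts[:i]), "pid": parts[i + 1]}
    return None

print(parse_core("mgwd.core.4321"))
# {'kind': 'user', 'procname': 'mgwd', 'pid': '4321'}
```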
The Remote LAN Module (RLM) is an out-of-band connection to a node that allows for some management of
the node, even when the node is inaccessible from the console and UI. The RLM connection has a separate IP
address and its own shell. Examples of RLM commands are system power off, system power on,
system reset, and system console.


User-Generated Core Files


A system core dump can be generated manually.

If the cluster shell is responsive, enter:


Data ONTAP 7-Mode
system> reboot -d
Clustered Data ONTAP
c1::> system reboot -node <node> -dump true

If the node is in bad shape, from the Remote LAN Module session
or the Storage Partition Management session, enter:
RLM> system core


USER-GENERATED CORE FILES


When a node panics, a kernel core dump is generated. Sometimes, however, a node is running but has issues
that cannot be debugged live. NetApp technical support might request that a system core dump be generated
for one or multiple nodes to capture the complete picture of what is happening at that time. If a node is
healthy enough to issue UI commands, you can enter a system reboot command with the dump true
parameter. If a node is not healthy enough for that, from the Remote LAN Module session or the Storage
Partition Management session to that node, you can use the system core command to generate a core
dump.


Managing Core Files


All kernel cores in the cluster are listed in:
 Data ONTAP 7-Mode: system> savecore -l
 Clustered Data ONTAP: c1::> system coredump show

Core files must be saved:
 Data ONTAP 7-Mode: system> savecore
 Clustered Data ONTAP: c1::> system coredump save

Core files can be uploaded to NetApp:
 Data ONTAP 7-Mode: https://mysupport.netapp.com/upload
 Clustered Data ONTAP: c1::> system coredump upload

For information about how to use AutoSupport to upload a core file, see NetApp knowledge base article 1011839.

MANAGING CORE FILES


Before a core file is used or handled, it must be saved to /mroot/etc/crash or to /etc/crash.
To upload 7-Mode core files to NetApp technical support, you can remove the core file from /etc/crash
by using a NAS client, or by activating the on-board FTP service. Then upload the file to
https://support.netapp.com/upload. Contact NetApp technical support for details. To upload a clustered Data
ONTAP core file, use the system coredump upload command.
Core files are meant to be examined by NetApp technical support and should be reported and uploaded to
NetApp technical support. The default location to which core dumps should be uploaded is
ftp://ftp.netapp.com/to-ntap/.
For information about how to use AutoSupport to upload a core file, see NetApp knowledge base article:
https://kb.netapp.com/support/index?page=content&id=1011839&locale=en_US


Accessing Logs and Core Dumps from a Web Browser: Requirements
Cluster management LIF must be online
c1::> net int show

HTTP protocol engine must be configured


c1::> system services web show

If a firewall is configured, assign a policy to HTTP protocol


c1::> system services firewall policy show

Service policy infrastructure (SPI) must be enabled


c1::> vserver services web show

Cluster user account must be enabled for the HTTP protocol


c1::> security login show

The user's access control role must have access to the SPI


c1::> vserver services web access show


ACCESSING LOGS AND CORE DUMPS FROM A WEB BROWSER: REQUIREMENTS


For added convenience, you can access clustered Data ONTAP logs and core files from a web browser. To
configure a cluster for this feature, ensure that the cluster management LIF is online and the HTTP protocol is
enabled. Ensure that the cluster is configured to allow HTTP protocol traffic and that the administrator's account
that will be used to access the web page is HTTP-enabled. Finally, be sure that the service policy infrastructure
(SPI) service is active.


Accessing Logs and Core Dumps from a Web Browser: Links

To access logs:
http://cluster-mgmt-ip/spi/cluster1-01/etc/log/

To access core files:


http://cluster-mgmt-ip/spi/cluster1-01/etc/crash/


ACCESSING LOGS AND CORE DUMPS FROM A WEB BROWSER: LINKS


Access these URLs to find the logs and dumps. Substitute the correct node name in the URL. The page will
access the /mroot directory on the desired node. For more information, see the Clustered Data ONTAP
System Administration Guide for Cluster Administrators.
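Composing those URLs from a cluster-management address and a node name can be sketched as follows; the host and node values are placeholders, not a real cluster.

```python
# Build the SPI URLs shown on the slide from placeholder components.
def spi_url(cluster_mgmt_ip, node, path):
    """Return the SPI URL for a node's etc subdirectory (log or crash)."""
    return "http://{}/spi/{}/etc/{}/".format(cluster_mgmt_ip, node, path)

print(spi_url("cluster-mgmt-ip", "cluster1-01", "log"))
# http://cluster-mgmt-ip/spi/cluster1-01/etc/log/
```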


Ken Asks a Question


There's a lot to understand. Where can I learn more
when I have more time to digest all the information?

NetApp University course:


Performance Analysis on Clustered Data ONTAP


KEN ASKS A QUESTION


References
TR-4211: NetApp Storage Performance Primer for Clustered Data ONTAP
TR-4150-0313: Operational Best Practice AutoSupport
NetApp Knowledge Base: https://kb.netapp.com

NetApp Technical Triage Templates


REFERENCES
You can find the technical triage templates at
https://kb.netapp.com/support/index?page=content&cat=TRIAGE&channel=HOW_TO.


What Ken Has Learned in This Module


This module prepared me to collect data to
help me isolate problems that might arise
on my cluster, including NetApp counter
manager, system health commands, the
event notification system, system logs, and
core dumps.


WHAT KEN HAS LEARNED IN THIS MODULE


Exercise

Module 11: Monitoring Your Storage System

Estimated Time: 30 minutes


EXERCISE
Please refer to your exercise guide.


Module 12

Upgrading and Transitioning to Clustered Data ONTAP

MODULE 12: UPGRADING AND TRANSITIONING TO CLUSTERED DATA ONTAP


This module discusses nondisruptive upgrade (NDU) of clustered Data ONTAP. It also discusses upgrading
Data ONTAP 7-Mode systems to clustered Data ONTAP. Topics include nondisruptive operations (NDO),
rolling and batch upgrades, automated NDU, and transition fundamentals of Data ONTAP operating in 7-Mode.

12-1

Clustered Data ONTAP Administration: Upgrading and Transitioning to Clustered Data ONTAP

2015 NetApp, Inc. This material is intended only for training. Reproduction is not authorized.

NetApp University - Do Not Distribute

Lesson 1

Nondisruptive Upgrade


LESSON 1: NONDISRUPTIVE UPGRADE


Ken Asks a Question


NetApp releases new and improved technology quite often. I like to keep my storage software up to date with the latest and greatest. Can I do it without disrupting my clients' work?


KEN ASKS A QUESTION


Nondisruptive Operations Overview

(Figure: the measure of NDO, from 100% to 0%, across three areas. Lifecycle Operations (planned events): capacity and performance management. Maintenance Operations (planned events): software upgrade, hardware replacement and upgrade. Infrastructure Resiliency (unplanned events): resiliency during hardware and software failure.)


NONDISRUPTIVE OPERATIONS OVERVIEW


Today's business environments require 24/7 data availability. The storage industry delivers the base building
block for IT infrastructures, providing data storage for all business objectives. Therefore, constant data
availability begins with architecting storage systems that facilitate nondisruptive operations (NDO). NDO has
three main uses: hardware and software lifecycle operations; hardware and software maintenance operations;
and infrastructure resiliency.

Lifecycle operations: These are operations that a customer performs to optimize the storage environment
to meet business SLAs while maintaining the most cost-optimized solution. These operations include
moving datasets around the cluster to different tiers of storage and storage controllers to optimize the
performance level of the dataset and manage capacity allocations for future growth of the dataset.
Maintenance operations: At the next level of NDO, components of the storage subsystem are
maintained and upgraded without incurring any outage of data. Examples include replacing any hardware
component, from a disk or shelf fan to a complete controller head, shelf, or system. The idea is that data is
immortal and potentially lives forever, but hardware does not, so maintenance and replacement of
hardware will happen one or more times over the lifetime of a dataset.
Infrastructure resiliency: Infrastructure resiliency is the basic building block for the storage subsystem.
It prevents a customer from having an unplanned outage when a hardware or software failure occurs.
Infrastructure resiliency is based on redundant field replaceable units (FRUs), multipath high-availability
(HA) controller configurations, RAID, and WAFL (Write Anywhere File Layout) proprietary software
enhancements that help with failures from a software perspective. For node hardware failures or software
failures, HA failover enables the node in the HA pair to fail over.

For more granular information about storage subsystem resiliency against failures, refer to TR-3450 and the
Storage Subsystem Best Practices Guide.


Upgrading Nodes
Two boot images exist on each node
Updates are installed on the inactive image
When "Is Default" is true, that image is used after the next boot process
Updates can be loaded onto each node's secondary image at the same time, and the images are made active when a boot process is convenient:

c1::> system image update -node * -package http://<ftp_server>/<image>.tgz -setdefault true


UPGRADING NODES
A nondisruptive upgrade (NDU) is a mechanism that uses HA-pair controller technology to minimize client
disruption during an upgrade of Data ONTAP or controller firmware. This procedure allows each node of
HA-pair controllers to be upgraded individually to a newer version of Data ONTAP or firmware.
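The two-boot-image mechanism described above can be modeled with a short sketch. This is illustrative Python, not NetApp code; the class and field names are invented, loosely mirroring the idea that updates land on the inactive image and that the "Is Default" flag picks the image used at the next boot.

```python
# Toy model of a node's two boot images (not NetApp code).
class NodeImages:
    def __init__(self, current_version):
        self.images = {1: current_version, 2: None}  # two boot image slots
        self.booted = 1   # slot currently running
        self.default = 1  # slot used at the next boot ("Is Default")

    def install_update(self, version, set_default=False):
        inactive = 2 if self.booted == 1 else 1
        self.images[inactive] = version  # updates go to the inactive image
        if set_default:
            self.default = inactive      # like -setdefault true

    def reboot(self):
        self.booted = self.default       # the default image boots
        return self.images[self.booted]

node = NodeImages("8.2.1")
node.install_update("8.3", set_default=True)  # load the update now...
print(node.reboot())                          # ...activate when convenient
```

Because the update is staged on the inactive image, loading it causes no disruption; the new version takes effect only at a boot that the administrator schedules.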


Upgrade to Clustered Data ONTAP 8.3


Warnings and Considerations

First, upgrade to Data ONTAP 8.2.1 or later
Expand any remaining 32-bit aggregates to 64-bit
Age out or externally back up remaining 32-bit Snapshot copies
Ensure that all network ports in a failover group are reachable by their intended clients
Create an SVM management logical interface (LIF) for each data SVM
Run and review Upgrade Advisor and address all warnings
6


UPGRADE TO CLUSTERED DATA ONTAP 8.3: WARNINGS AND CONSIDERATIONS


Before you upgrade to Data ONTAP 8.3, there are many things that must be considered. Some of them are
listed on this slide. It is important to review Upgrade Advisor before proceeding.
Clustered Data ONTAP 8.2.1 and later releases are the jumping-off point to upgrade to 8.3. Data ONTAP 8.3
does not support 32-bit aggregates, and Data ONTAP 8.2 provides the ability to expand 32-bit aggregates to
64-bit without adding disks to the aggregate. Data ONTAP 8.3 also cannot access any 32-bit Snapshot copies,
so they must be aged out or backed up externally. To ensure that Data ONTAP 8.3 broadcast domains are
complete and accurate, you must make sure that all failover groups on your Data ONTAP 8.2.1 cluster are
accurate and all ports in the failover groups can be accessed by the intended client machines. It is a best
practice to have an SVM management interface available for each data SVM.
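The considerations above amount to a pre-upgrade checklist. As a purely illustrative sketch (the dictionary fields and function are invented; the authoritative checks come from Upgrade Advisor), they might be expressed like this:

```python
# Hypothetical pre-upgrade checks, loosely mirroring the considerations
# listed above. This is not NetApp tooling.
def preflight_warnings(cluster):
    warnings = []
    if tuple(cluster["version"]) < (8, 2, 1):
        warnings.append("Upgrade to Data ONTAP 8.2.1 or later first")
    if cluster["aggr_32bit"]:
        warnings.append("Expand remaining 32-bit aggregates to 64-bit")
    if cluster["snap_32bit"]:
        warnings.append("Age out or back up remaining 32-bit Snapshot copies")
    for svm in cluster["data_svms"]:
        if not svm["mgmt_lif"]:
            warnings.append(f"Create an SVM management LIF for {svm['name']}")
    return warnings

cluster = {"version": (8, 2, 0), "aggr_32bit": ["aggr1"], "snap_32bit": [],
           "data_svms": [{"name": "svm1", "mgmt_lif": False}]}
for w in preflight_warnings(cluster):
    print(w)
```

The point of the sketch is simply that every warning must be cleared before the upgrade proceeds.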


Try This Task


From the clustershell CLI, type:

system node image show -instance

How many nodes are in your cluster?
What version of clustered Data ONTAP is current on each node?
Can you tell which image is currently booted?
What do you think the "default" column is used for?
7


TRY THIS TASK


Nondisruptive Upgrade
Rolling Upgrade

(Figure: a 12-node cluster upgraded from Data ONTAP 8.1 to 8.2 one HA pair at a time. *Based on a 60-minute average upgrade time per HA pair.)


NONDISRUPTIVE UPGRADE: ROLLING UPGRADE


Before the Data ONTAP 8.2 operating system, NDU was available only as a rolling upgrade of HA
pairs.
In this example, a 12-node Data ONTAP 8.1 cluster is being upgraded to clustered Data ONTAP 8.2. With
this rolling upgrade, each controller in an HA pair is upgraded, followed in turn by the corresponding partner
in the HA pair. This process is repeated for the next node until all of the nodes are upgraded. With this
sequential upgrade process, you can see that upgrading all the nodes in the cluster can be quite time
consuming.
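The pair-by-pair sequencing can be sketched as follows. This is illustrative Python with invented node names, not NetApp tooling; the real ordering is driven by the operator:

```python
# Sketch of the rolling-upgrade order: within each HA pair, one partner
# is upgraded (its workload fails over to the other), then the second.
def rolling_order(ha_pairs):
    order = []
    for a, b in ha_pairs:
        order.append(a)  # b serves a's workload while a upgrades
        order.append(b)  # then a serves b's workload
    return order

pairs = [(f"node{i}", f"node{i + 1}") for i in range(1, 13, 2)]  # 12 nodes
print(rolling_order(pairs))
# At the 60-minute average per HA pair cited on the slide, six pairs
# upgraded sequentially take roughly six hours.
```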


Nondisruptive Upgrade
Batch Upgrade

(Figure: a 12-node cluster upgraded from Data ONTAP 8.1 to 8.2 in two batches of six nodes, Batch 1 and Batch 2. *Based on a 60-minute average upgrade time per HA pair.)


NONDISRUPTIVE UPGRADE: BATCH UPGRADE


With clustered Data ONTAP 8.2 and later, customers can perform batch upgrades on clusters of eight or more
nodes. With a batch upgrade, customers can upgrade multiple nodes simultaneously, which can reduce the
maintenance period. In this example, three nodes in Batch 1 are upgraded concurrently, followed by their
three corresponding HA-pair nodes. The process is then repeated for the six nodes in Batch 2.
Note that customers should never shut down more than half the nodes in a cluster at the same time. This
shutdown would cause the remaining nodes in the cluster to be out of quorum.
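The batch constraints just described (HA partners are never down together, and never more than half the cluster is down at once) can be sketched like this. The scheduling function and node names are invented purely for illustration:

```python
# Sketch of batch upgrade scheduling (not NetApp code). One node per HA
# pair is offline at a time, so at most `pairs_per_batch` nodes are down;
# keeping that at or below half the cluster preserves quorum.
def batch_steps(ha_pairs, pairs_per_batch):
    assert pairs_per_batch <= len(ha_pairs)  # never more than half down
    steps = []
    for i in range(0, len(ha_pairs), pairs_per_batch):
        batch = ha_pairs[i:i + pairs_per_batch]
        steps.append([a for a, _ in batch])  # first partner of each pair
        steps.append([b for _, b in batch])  # then their HA partners
    return steps

pairs = [(f"n{i}", f"n{i + 1}") for i in range(1, 13, 2)]  # 12-node cluster
for step in batch_steps(pairs, pairs_per_batch=3):
    print(step)  # four concurrent steps instead of twelve sequential ones
```

With four concurrent steps instead of twelve sequential node upgrades, the maintenance window shrinks accordingly (roughly two hours instead of six, assuming the 60-minute per-pair average from the slide).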


Answer This Question


Which of the following protocols can be used to
download the clustered Data ONTAP software
image? (Choose all that apply.)
1. NFS
2. FTP
3. TFTP

4. HTTPS
5. HTTP
6. CIFS
10


ANSWER THIS QUESTION


Automated NDU
Data ONTAP 8.3 and Later
(Figure: automated NDU from Data ONTAP 8.3 to 8.3.x: a 6-node cluster uses a rolling upgrade; a 12-node cluster upgrades in two batches, Batch 1 and Batch 2.)

11


AUTOMATED NDU: DATA ONTAP 8.3 AND LATER


Benefits of automated NDU:

One-touch upgrade
Simplifies and smooths the upgrade experience
Avoids human errors
No need to download the image to all nodes in the cluster, which saves /mroot space on n-2 nodes
If any errors occur, automated NDU guides the user through the further actions to be taken

The Data ONTAP 8.3 operating system automates the NDU process.
1. First, the Data ONTAP 8.3 operating system automatically installs the target Data ONTAP image on each
node in a cluster.
2. The Data ONTAP 8.3 operating system validates the cluster components to ensure that the cluster can be
upgraded nondisruptively.
3. Based on the number of nodes in the cluster, the operating system executes a rolling or batch upgrade in
the background. Clusters with two to six nodes use a rolling upgrade, whereas clusters with more than six
nodes use a batch upgrade.
The multistep manual process that administrators need to perform on each node has been automated into three
commands for the entire cluster.

cluster image package get -url http://<server>/image.tgz
cluster image update -version version_A
cluster image show-update-progress

Note that automated NDU requires that all nodes in the cluster start at the generally available distribution of
the Data ONTAP 8.3 operating system and upgrade to a later release.
For additional information about Data ONTAP upgrades, see the Clustered Data ONTAP 8.3 Upgrade and
Revert/Downgrade Guide.
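The size-based selection described above reduces to a one-line rule. This is illustrative only; Data ONTAP makes this decision internally:

```python
# Per the text above: clusters with 2-6 nodes use a rolling upgrade,
# clusters with more than 6 nodes use a batch upgrade.
def upgrade_style(node_count):
    if node_count < 2:
        raise ValueError("automated NDU operates on HA pairs")
    return "rolling" if node_count <= 6 else "batch"

print(upgrade_style(6), upgrade_style(12))
```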

Topic for Discussion


What would happen if you upgraded both controllers
of an HA pair at the same time?

12


TOPIC FOR DISCUSSION


Answer This Question


With clustered Data ONTAP 8.2, which nondisruptive
upgrade technique is supported on a cluster of six or
fewer nodes?
1. Batch
2. Systematic
3. Rolling
4. Consignment
5. Bunch
13


ANSWER THIS QUESTION


Lesson 2

Transition Fundamentals

14


LESSON 2: TRANSITION FUNDAMENTALS


The Journey to Clustered Data ONTAP


From 7-Mode to Clustered Data ONTAP
(Figure: the journey timeline.)
Operations (7-Mode): operating on 7-Mode.
Evaluation (7-Mode): clustered Data ONTAP evaluation and ROI assessment.
Adoption (7-Mode): decision to adopt clustered Data ONTAP and move the 7-Mode environment.
Transition (mixed 7-Mode and clustered Data ONTAP): transition of the 7-Mode environment to clustered Data ONTAP. The transition fundamentals apply here.
Operations (clustered Data ONTAP): operating on clustered Data ONTAP.

15


THE JOURNEY TO CLUSTERED DATA ONTAP: FROM 7-MODE TO CLUSTERED


DATA ONTAP
The transition fundamentals framework addresses the fundamental knowledge that is needed to holistically
complete a successful transition to clustered Data ONTAP from 7-Mode.
Transition begins when you decide to adopt clustered Data ONTAP and move your existing 7-Mode
environment. As the transition activity progresses, your NetApp environment is a mix of both 7-Mode and
clustered Data ONTAP. By the end of the transition activity, your NetApp environment is completely
operating on clustered Data ONTAP.


Transition Fundamentals
Overview

(Figure: the transition fundamentals wheel around TRANSITION: IDENTIFY (Define Scope), DESIGN (Environment Discovery, Transition Planning, Cluster Design), and IMPLEMENT (Deploy & Configure, Data Migration, Environment Updates).)
16


TRANSITION FUNDAMENTALS: OVERVIEW


The transition fundamentals have three logical categories: Identify, Design, and Implement.
Identify: The Identify category is related to addressing the initial business decisions that will drive the
identification of which storage containers will transition first in the NetApp environment. Identifying involves
understanding what applications and workloads are deployed in your 7-Mode environment. How can you
group systems, applications, or workloads together so that they can be moved together and their movement
can be prioritized according to business requirements?
Design: The design category involves addressing the significant planning activities that are related to the
transition. It also includes performing detailed cluster architecture design of the destination clustered Data
ONTAP system. Transition planning encompasses a detailed discovery of the 7-Mode environment and
migration planning. Cluster architecture design involves the planning of what cluster capabilities are required
to support the migrated workloads and it should also take into account the future growth needs of the storage
system.
Implement: The implement category addresses the activities that are involved with the deployment and
configuration of the destination cluster, the migration of workloads and data from 7-Mode to clustered Data
ONTAP, and the updates to the operating environment to make sure the NetApp environment and all
technologies that are dependent on it are working as expected.


Transition Fundamentals
A Conceptual Framework

Customers may have their own workflows and processes
NetApp and its partners offer a Unified Transition Methodology when delivering transition-related services to customers
Transition activities tend to be iterative

(Figure: the transition fundamentals wheel around TRANSITION: IDENTIFY (Define Scope), DESIGN (Environment Discovery, Transition Planning, Cluster Design), and IMPLEMENT (Deploy & Configure, Data Migration, Environment Updates).)

Web-based Training: Introduction to the Unified Transition Methodology Framework
17


TRANSITION FUNDAMENTALS: A CONCEPTUAL FRAMEWORK


The transition fundamentals framework is not a process. It is a logical organization of core topics that indicate
which activity needs to occur and how to accomplish tasks.
Customers already have their own workflows and processes that are used to execute projects similar to
transition. The intention of the transition fundamentals is that they will fit into the customer's existing
processes or workflows.
Although the transition fundamentals framework is not a process, there is a logical flow to achieving the tasks
and activities that are involved with transition. In general, when applying the fundamentals, start with
activities that are included in the Identify category, move to Design, and then conclude with Implement.
NetApp and its partners do have a standard process that is used when delivering transition-related services to
customers. This process is known as the Unified Transition Methodology (UTM). Simply put, transition
fundamentals is the knowledge and UTM is the process that can be used to put that knowledge into practice. It
is important to note that the UTM is not mandatory. It is available to augment existing processes; it can be
used in a different order, if desired. Please contact your NetApp representative for more information about the
transitioning services. Training is also available for Unified Transition Methodology.
Note that some transaction activities may be iterative in nature. For example, the initial transition activity of
the first systems and workloads that move to clustered Data ONTAP may see all the fundamentals applied
from Identify through Implement. Additional data migrations after the first migration may simply iterate
through the implementation activities until all planned systems and workloads have been transitioned to
clustered Data ONTAP.


NetApp Premier Transition Tools


7-Mode Transition Tool (7MTT):
Source discovery
Automates data migration
NFS or CIFS migration

7MTT version 2.0:
SAN migration from 7-Mode to clustered Data ONTAP 8.3
MetroCluster migration from 7-Mode to clustered Data ONTAP 8.3

RapidData Migration Solution:
Targets NFS v3 customers
Source discovery
Automates data migration
Cache maintains client performance
Minimally disruptive per-client cutover

18


NETAPP PREMIER TRANSITION TOOLS


NetApp provides two premier tools that can be used for Transition activities. The first is the 7-Mode
Transition Tool (7MTT), which provides capabilities for source discovery, automates data migration, and
supports both NFS and CIFS migration in a single toolset. The latest version of the 7MTT is version 2.0. This
version adds support for migrating SAN workloads from 7-Mode to clustered Data ONTAP 8.3, as well as
transitioning 7-Mode MetroCluster configurations to clustered Data ONTAP 8.3.
The second tool is the RapidData Migration Solution. This solution targets NFS version 3 customers. Like the
7-Mode Transition Tool, it also provides source discovery and automates the data migration. One of the key
enhancements that RapidData Migration solution provides is the caching of the source controller data. This
cache maintains client performance and creates headroom on the source system during migration. The second
key enhancement is that RapidData offers a minimally disruptive per-client cutover that allows read/write
access to both source and destination storage during transition. No final synchronization is necessary.


Data Migration Methods


Replication-based migration
Uses SnapMirror software
Available with 7MTT and with manual transition by creating a SnapMirror
relationship of the type TDP

Application-based migration
Oracle Automatic Storage Management (ASM)
Microsoft Exchange Database Availability Group (DAG)
Virtualization environment tools

Appliance-based migration
RapidData Migration tool
DTA2800

Host-based migration (including Rsync, NDMP Copy, and LVM)

19


DATA MIGRATION METHODS


Several types of migration methods are available, each with their own benefits and considerations. NetApp
has categorized these migration methods based on their benefits and capabilities.
Replication-based migration: This migration method uses NetApp SnapMirror technology. This method is
available with the 7MTT, or you can manually transition by creating a SnapMirror relationship of the type
TDP. A key benefit of replication-based migration is that Snapshot copies and storage efficiency savings are
retained throughout the migration activity.
Application-based migration: This migration method uses native application replication, mirroring, or
copying features to migrate data to clustered Data ONTAP. Two key examples of application-based migration
methods are Oracle Automatic Storage Management and Microsoft Exchange Database Availability Group.
Virtualized environment tools for the movement of data are also grouped into this category for the purpose of
transition.
Appliance-based migration: This migration method uses hardware to migrate data to clustered Data ONTAP.
The RapidData Migration solution and Data Transport Appliance 2800 (DTA2800) are appliance-based
migration methods.
Host-based migration: This migration method uses non-NetApp tools such as Rsync, NDMP Copy, and
logical volume managers (LVM) to migrate data to clustered Data ONTAP.


Ken Asks a Question


I'm interested in learning more about transitioning workloads from my 7-Mode systems to my new cluster. Where do I go next?

Web-based courses:
NetApp Transition Fundamentals
Introduction to the Unified Transition
Methodology Framework
20


KEN ASKS A QUESTION


References
Clustered Data ONTAP 8.2 Upgrade and Revert/Downgrade Guide

Nondisruptive Upgrade Technical FAQ


Clustered Data ONTAP Transition Collateral Binder
TR-4052: Successfully Transitioning to Clustered Data ONTAP
Clustered Data ONTAP Transition Technical FAQ
Clustered Data ONTAP Transition Technical Presentation
Clustered Data ONTAP for the 7-Mode Administrator Portal

21


REFERENCES


What Ken Has Learned in This Module


It's great to know how I can upgrade my cluster without disruption. I'll be transitioning some 7-Mode systems into a new cluster soon, and I'm glad that I learned strategies to make the transition run smoothly.

22


WHAT KEN HAS LEARNED IN THIS MODULE


Your Learning Journey Continues


Visit regularly to stay informed

NetApp University: the NetApp University Overview (find the training you need, explore certification, follow your learning map), the NetApp University Community (join the discussion, engage with experts), and NetApp University Support (contact the support team).

NetApp: NetApp Support (access downloads, tools, documentation), the New to NetApp Support Webcast (ensure a successful support experience), the Customer Success Community, and the Knowledgebase (access a wealth of knowledge).


YOUR LEARNING JOURNEY CONTINUES: VISIT REGULARLY TO STAY INFORMED


The NetApp University Overview page is your front door to learning. Find training that fits your learning map
and your learning style, learn how to become certified, link to blogs and discussions, and subscribe to the
NetApp newsletter Tech OnTap.
http://www.netapp.com/us/services-support/university/index.aspx
The NetApp University Community page is a public forum for NetApp employees, partners, and customers.
NetApp University welcomes your questions and comments!
https://communities.netapp.com/community/netapp_university
The NetApp University Support page is a self-help tool that enables you to search for answers and to contact
the NetApp University support team. http://netappusupport.custhelp.com
Are you new to NetApp? If so, register for the New to NetApp Support Webcast to acquaint yourself with the
facts and tips that will ensure that you have a successful support experience.
http://www.netapp.com/us/forms/supportwebcastseries.aspx?REF_SOURCE=new2ntapwl-netappu


The NetApp Support page is your introduction to all products and solutions support:
http://mysupport.netapp.com. Use the Getting Started link
(http://mysupport.netapp.com/info/web/ECMP1150550.html) to establish your support account and hear from
the NetApp CEO. Search for products, downloads, tools, and documentation, or link to the NetApp Support
Community (http://community.netapp.com/t5/Products-and-Solutions/ct-p/products-and-solutions).
Join the Customer Success Community to ask support-related questions, share tips, and engage with other
users and experts.
https://forums.netapp.com/
Search the NetApp Knowledgebase to leverage the accumulated knowledge of NetApp users and product
experts.
https://kb.netapp.com/support/index?page=home


Bonus Module A
Infinite Volumes


BONUS MODULE A: INFINITE VOLUMES

A-1

Clustered Data ONTAP Administration: Bonus Module A: Infinite Volumes

2015 NetApp, Inc. This material is intended only for training. Reproduction is not authorized.

NetApp University - Do Not Distribute

Ken Asks a Question

I need an SVM with one very large storage container, larger than what a single node can handle. Think of it as a data repository. Can clustered Data ONTAP accommodate something like that?


KEN ASKS A QUESTION


Storage Virtual Machine


With Infinite Volume

(Figure: an SVM with Infinite Volume. Data LIFs provide NFS and CIFS client access to the single infinite volume, and a management LIF serves the SVM admin.)

Storage virtual machines (SVMs) with Infinite Volume contain only one
infinite volume:
One junction path, which is /NS by default
Can be used for NFS and CIFS (SMB 1.0) only
3


STORAGE VIRTUAL MACHINE: WITH INFINITE VOLUME


SVMs with Infinite Volume can contain only one infinite volume to serve data. Each SVM with Infinite
Volume includes only one junction path, which has a default value of /NS. The junction provides a single
mount point for the large namespace provided by the SVM with Infinite Volume. You cannot add more
junctions to an SVM with Infinite Volume. However, you can increase the size of the infinite volume.
SVMs with Infinite Volume can contain only files. They provide file-level data access by using NFS and
CIFS (SMB 1.0) protocols. SVMs with Infinite Volume cannot contain LUNs and do not provide block-level
data access.


Steps to Create an Infinite Volume


Single Junction Path: /NS

1. Start with clustered Data ONTAP 8.2 configuration.
2. Create an SVM for Infinite Volume.
3. Assign aggregates to the SVM.
4. Create an infinite volume, which under the covers automatically creates:
   1 namespace constituent
   Namespace mirror(s)
   Minimum of 2 data constituents
   1 junction path
5. Create logical interfaces (LIFs).
6. Configure NFS and/or SMB.
7. Mount the infinite volume on clients.

(Figure: an SVM for Infinite Volume spanning data centers DC1 and DC2, with LIFs on each node, a namespace constituent (NS), a namespace mirror (NSm), and data constituents, all under the single junction path /NS.)


STEPS TO CREATE AN INFINITE VOLUME: SINGLE JUNCTION PATH /NS


An infinite volume is composed of one namespace constituent, one or more namespace mirrors, and several data
constituents. The namespace constituent contains the namespace of the infinite volume. The namespace
constituent maps path names to data that is stored on data constituents. A data constituent contains the data of
one or more regular files and is one of the containers that contribute to the storage capacity of an infinite
volume. Access from a client is made to a redirector file within the namespace constituent. The I/O requests
are routed to one of the many data constituents that are referenced by the redirector file. This redirection
happens transparently from the client's perspective. By introducing this redirection technology, an infinite
volume can present a single junction namespace that can grow to over 20 PB.
The junction path for the infinite volume follows the cluster and must be a single element path, such as /NS or
/InfiniteVol. More than one element (for example, "/NS/v1") is not allowed.
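The single-element rule can be expressed as a small check. This is an illustrative sketch only; the function name and logic are invented for teaching purposes and are not part of any NetApp API.

```python
# Hypothetical helper (not a NetApp API) illustrating the junction-path rule
# described above: an infinite volume junction path must be rooted at "/" and
# contain exactly one path element, such as /NS or /InfiniteVol.
def is_valid_iv_junction_path(path: str) -> bool:
    return path.startswith("/") and len(path) > 1 and "/" not in path[1:]

print(is_valid_iv_junction_path("/NS"))     # True
print(is_valid_iv_junction_path("/NS/v1"))  # False
```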


Creating SVMs with Infinite Volume


System Manager 3.1

Step 1: Create SVM (select type Infinite Volume)
Step 2: Create volume

CREATING SVMS WITH INFINITE VOLUME: SYSTEM MANAGER 3.1


Creating an infinite volume is similar to creating a FlexVol volume. First create an SVM and specify Infinite
Volume as the volume type. After the SVM is created, create a volume.
For more information about creating an SVM for Infinite Volume and creating an infinite volume, see this
video on the field portal at
https://fieldportal.netapp.com/DirectLink.aspx?documentID=111615&contentID=192061.


Infinite Volume
Namespace Constituent

- One per infinite volume
- 10 TB container containing:
  - Directory hierarchy
  - File names
  - File redirectors (maximum 2 billion)

[Figure: the namespace constituent (NS) in the SVM for Infinite Volume maps files to data constituents (F1->DC1, F2->DC2, F3->DC1, F4->DC2, F5->DC1, F6->DC2); a namespace mirror (NSm) protects the namespace constituent.]

INFINITE VOLUME: NAMESPACE CONSTITUENT


The namespace constituent contains the namespace of the infinite volume.
With the Infinite Volume feature's built-in resiliency, if the namespace constituent is taken offline, then:

- Namespace and directory operations (such as mkdir) don't work.
- File operations (such as read, write, and get/set attrs) on recently accessed files continue to work, because the file-handle mapping is cached in the memory of the local nodes.


Infinite Volume
Namespace Mirror

- 10 TB container
- Intracluster volume
- SnapMirror copy of the namespace constituent
- Provides:
  - Data protection for the namespace constituent (updated every 5 minutes)
  - SnapDiff support for incremental tape backup of the infinite volume (updated daily)

[Figure: the namespace mirror (NSm) sits alongside the namespace constituent (NS) and data constituents DC1 and DC2 in the SVM for Infinite Volume.]

INFINITE VOLUME: NAMESPACE MIRROR


The namespace mirror is a type of data protection mirror; it is not a load-sharing mirror or a FlexCache volume.
The namespace mirror is not an active namespace constituent. It cannot serve incoming requests unless the
namespace constituent becomes unavailable and the mirror is promoted to a namespace constituent.
You can specify which aggregate to use for the namespace mirror or allow the infinite volume to select one. By
default, it selects an aggregate on a node that is not the high-availability (HA) partner of the node that
contains the namespace constituent.
When you create a read/write infinite volume, one namespace mirror constituent is automatically created, and
a data protection mirror relationship is automatically created between the namespace constituent and the
namespace mirror constituent. The data protection mirror relationship is updated every five minutes. The data
protection mirror relationship is an automatic process for an infinite volume. You cannot use SnapMirror
commands to modify or manage the data protection mirror relationship between the namespace constituent
and the namespace mirror constituent.
When you enable SnapDiff on an infinite volume that spans three or more nodes in a cluster, additional
namespace mirror constituents are automatically created for SnapDiff to use for incremental tape backup of
infinite volumes. Namespace mirror constituents that are created to support SnapDiff are updated daily or as
configured for SnapDiff.
NOTE: When you create a destination infinite volume for a data protection mirror relationship, a namespace
mirror constituent is not created on the destination infinite volume. However, if you enable SnapDiff on a
destination infinite volume, namespace mirror constituents are automatically created for use by SnapDiff. You
must initialize the data protection mirror relationship between the source and destination infinite volumes
before you can enable SnapDiff.


Infinite Volume
Data Constituents

- Containers that store the content of data files

[Figure: files F1-F6 are distributed across data constituents DC1 and DC2 in the SVM for Infinite Volume; the namespace constituent (NS) and namespace mirror (NSm) track them.]

INFINITE VOLUME: DATA CONSTITUENTS


A data constituent contains the data of one or more regular files and is one of the containers that contribute to
the storage capacity of an infinite volume. Access from a client is made to a redirector file within the
namespace constituent. The I/O requests are routed to one of the many data constituents that are referenced by
the redirector file.
The Infinite Volume feature has built-in resiliency, but there might be situations in which a data constituent is
offline or unavailable. In that case:

- Directory operations (such as mkdir) are not affected.
- File operations (such as lookup, read, write, and get/set attrs) that involve files on other data constituents are not affected.
- Operations that are distributed over numerous constituents (such as create and readdirplus) might take longer and yield partial results if these operations include files from the offline data constituent.


Write Request to Infinite Volume


- Creates an empty file handle in the namespace constituent.
- The file is written to a data constituent.
- The namespace constituent is updated with the redirector pointer to that data file location.
- Files are distributed in a round-robin fashion to the data constituents.
- Files do not span data constituents.

[Figure: files F1-F4 are written to data constituents DC1 and DC2; the namespace constituent records F1->DC1, F2->DC2, F3->DC1, F4->DC2.]

WRITE REQUESTS TO INFINITE VOLUME

- Files are distributed in a round-robin fashion to the data constituents based on the capacity threshold (preference is given to data constituents with the most available space).
- Files do not span data constituents; each file is written completely to one data constituent only and is not striped.
- When a write request comes in, the namespace constituent is updated with an empty file handle, and the data file is written to a data constituent based on the capacity threshold. The data file location is then updated in the namespace constituent and acknowledged back to the client.
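The placement behavior described above can be sketched as a small simulation. This is illustrative teaching code only, not NetApp source; the class and function names are invented, and real placement also weighs the capacity threshold.

```python
# Illustrative model (not NetApp code) of the write-placement behavior
# described above: every file lands wholly on one data constituent (never
# striped), and the constituent with the most available space is preferred.
class DataConstituent:
    def __init__(self, name, free_space):
        self.name = name
        self.free = free_space
        self.files = {}

def place_file(constituents, name, size):
    # Prefer the constituent with the most available space.
    target = max(constituents, key=lambda dc: dc.free)
    if size > target.free:
        raise OSError("no data constituent can hold the file")
    target.files[name] = size          # the whole file goes to one constituent
    target.free -= size
    return target.name                 # recorded as the redirector entry

dc1, dc2 = DataConstituent("DC1", 100), DataConstituent("DC2", 100)
namespace = {}  # stands in for the namespace constituent's redirector table
for fname, size in [("F1", 30), ("F2", 20), ("F3", 25), ("F4", 10)]:
    namespace[fname] = place_file([dc1, dc2], fname, size)
print(namespace)  # {'F1': 'DC1', 'F2': 'DC2', 'F3': 'DC2', 'F4': 'DC1'}
```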


Read Request to Infinite Volume


- Client 8 requests file F2.
- The file location is unknown to the node, so the request is routed to the namespace constituent.
- The namespace constituent provides the physical location of the data file to the requesting node.
- The read request is redirected to the hosting data constituent.
- The file location remains cached in node main memory.

[Figure: the namespace constituent redirects the request for F2 to DC2; its redirector table records F1->DC1, F2->DC2, F3->DC1, F4->DC2.]

READ REQUESTS TO INFINITE VOLUME


Subsequent Read Request to Infinite Volume

- Client 7 requests file F2.
- The node checks its memory and finds that it already knows the location of the requested file.
- The request is forwarded directly to the physical location of the file.
- The read request is serviced from the hosting aggregate.

[Figure: the node's cached mapping (F2->DC2) sends the request straight to DC2, bypassing the namespace constituent.]

SUBSEQUENT READ REQUESTS TO INFINITE VOLUME


Subsequent reads to the same node (from any client that is connected to the LIF on the node) for the same file
do not have to go to the namespace constituent to find the file location if it is saved in main memory on the
requesting node.
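The caching behavior can be sketched as a small model. Again, this is illustrative only; the names are invented and the real node-memory cache is more sophisticated than a Python dictionary.

```python
# Illustrative model (not NetApp code) of the read path described above: the
# first lookup for a file is routed to the namespace constituent, and the
# resulting location is cached in the node's main memory so that subsequent
# reads from any client connected to that node skip the redirection step.
class ClusterNode:
    def __init__(self, namespace_table):
        self.namespace_table = namespace_table  # stands in for the NS constituent
        self.cache = {}                         # file name -> data constituent
        self.namespace_lookups = 0

    def locate(self, filename):
        if filename in self.cache:              # cache hit: answer from memory
            return self.cache[filename]
        self.namespace_lookups += 1             # cache miss: ask the namespace
        location = self.namespace_table[filename]
        self.cache[filename] = location
        return location

node = ClusterNode({"F1": "DC1", "F2": "DC2"})
node.locate("F2")                 # first read: routed via the namespace
node.locate("F2")                 # subsequent read: served from node memory
print(node.namespace_lookups)     # 1
```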


Infinite Volume Storage Classes


- Allows grouping of storage within an infinite volume
- Nondisruptively add capacity to each storage class
- Created with OnCommand Workflow Automation (WFA) 2.1
- Storage classes can be defined by:
  - Disk type (for example, SAS or SATA)
  - Volume settings:
    - Deduplication enabled/disabled
    - Compression enabled/disabled
    - Inline compression enabled/disabled
    - Storage efficiency policy
    - Space guarantee

[Figure: one infinite volume with three storage classes. Class 1: 200-TB SATA with deduplication and compression enabled; Class 2: 100-TB SAS with deduplication enabled; Class 3: 10-TB SSD.]

INFINITE VOLUME STORAGE CLASSES


Storage classes are created with OnCommand Workflow Automation 2.1.
Infinite volume compression and deduplication are managed and initiated at the infinite volume level, which
under the covers sends requests to all data constituents. Each node can run only eight compression or
deduplication processes at once; additional requests are placed in a queue.


Infinite Volume Data Policies


- Data policies can be defined by:
  - File type (for example, .doc or .xls)
  - File owner
  - Directory location
- Data policies contain ingest rules that direct files into different storage classes.
- Storage classes are transparent to the application or end user.
- Created with OnCommand Unified Manager

[Figure: the same infinite volume with three storage classes (200-TB SATA, 100-TB SAS, 10-TB SSD) receiving files F1-F4.]

INFINITE VOLUME DATA POLICIES


For an infinite volume with multiple storage classes, the data policy must be modified to create rules that
place files into the different storage classes at ingest.
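An ingest rule set like the sample policy later in this module (owner, directory, and file type rules) can be sketched as follows. This is an illustrative model only; the rule logic and names are invented, and real data policies are defined in OnCommand Unified Manager rather than in code.

```python
# Illustrative sketch (not NetApp code) of data-policy ingest rules: a file
# that matches any performance rule is placed in the performance storage
# class; everything else falls through to the default capacity class.
def choose_storage_class(path, owner):
    performance_rules = [
        lambda: owner == "bob",                       # file-owner rule
        lambda: path.startswith("/ns/SE/"),           # directory-location rule
        lambda: path.endswith((".doc", ".docx")),     # file-type rule
    ]
    if any(rule() for rule in performance_rules):
        return "performance"
    return "capacity"  # default storage class

print(choose_storage_class("/ns/file1", "bob"))                  # performance
print(choose_storage_class("/ns/marketing/file2.txt", "linda"))  # capacity
print(choose_storage_class("/ns/file3.doc", "tony"))             # performance
```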


Infinite Volume Storage Classes


Creating with OnCommand WFA

Bundled workflows:
- Create and configure an SVM with Infinite Volume
- Add performance class:
  - SAS drives
  - Deduplication enabled
  - Volume efficiency policy: weekends, 6 a.m. to midnight
  - QoS policy: background
- Add capacity class:
  - SATA drives
  - Deduplication enabled
  - Compression enabled (background and inline)
  - Volume efficiency policy: midnight to 6 a.m.

[Figure: NFS and CIFS LIFs present the /NS junction; the SVM for Infinite Volume spreads the namespace constituent (NS), namespace mirror (NSm), and data constituents DC1-DC16 across SAS and SATA aggregates.]

INFINITE VOLUME STORAGE CLASSES: CREATING WITH ONCOMMAND WFA


Infinite Volume Storage Classes


Write Request Example

Sample data policy:
- Performance storage class:
  - Files owned by bob
  - Files written to /ns/SE
  - File type *.doc, *.docx
- Capacity storage class:
  - Default

Example writes:
- File1 is owned by bob.
- File2 is owned by linda, is written to /ns/marketing, and is a .txt file.
- File3 is owned by tony, is written to /ns, and is a .doc file.
- File4 is owned by jane, is written to /ns/PM, and is an .xls file.

[Figure: files F1-F4 are ingested into the SAS (performance) or SATA (capacity) data constituents according to the policy; the namespace constituent (NS) and namespace mirror (NSm) track them.]

INFINITE VOLUME STORAGE CLASSES: WRITE REQUEST EXAMPLE


Infinite Volume Recommendation Checklist

Scalability
- Intended for: Single large NAS repository that needs to grow beyond 100 TB
- Not intended for: Maximum space that never needs to grow beyond 100 TB

Junction Path
- Intended for: Single junction path
- Not intended for: Numerous junction paths per infinite volume

SVM
- Intended for: One SVM dedicated to a single infinite volume; the cluster can contain several SVMs for FlexVol volumes and SVMs for Infinite Volume
- Not intended for: A single SVM that hosts both FlexVol volumes and an infinite volume, or numerous infinite volumes

Protocol Access
- Intended for: NAS: NFSv3, NFSv4.1, pNFS, SMB 1.0
- Not intended for: NAS: NFSv4.0, SMB 2.0 or higher; SAN: FC, FCoE, iSCSI

System Type
- Intended for: FAS3000, FAS8000, and FAS6000 series systems and equivalent V-Series
- Not intended for: FAS2000 series systems

Files
- Intended for: Up to 2 billion files; maximum file size 16 TB
- Not intended for: More than 2 billion files; files larger than 16 TB

INFINITE VOLUME RECOMMENDATION CHECKLIST


The objective of this slide is to provide quick qualification criteria to identify Infinite Volume sales
opportunities. Attributes listed as "Intended for" are the target use-case criteria for Infinite Volume in
clustered Data ONTAP 8.2 and later. If any of the "Not intended for" criteria match a requirement of your
customer, then Infinite Volume is not a recommended solution for that customer.
Infinite volumes are supported on FAS and V-Series 3000, 8000, and 6000 series systems that support Data
ONTAP 8.2. For a current list of platforms that support infinite volumes, see the Hardware Universe (formerly
the Systems Configuration Guide) at support.netapp.com/knowledge/docs/hardware/NetApp/syscfg/index.shtml.
Infinite Volume supports multiple mounts or shares to a single Infinite Volume file system, similar to FlexVol
volumes. There are no restrictions on the client side, similar to FlexVol volumes.
On the server side, during a volume mount you must specify a junction path, which must be rooted under "/".
So you can have /pepsi but not /pepsi/HR/docs.
Infinite volumes do not support subvolume exports like FlexVol volumes do, which means that you cannot control
which clients can access which parts of the file system. For an infinite volume file system with the directory
structure /pepsi/HR/docs and /pepsi/HR/payroll, you cannot control access to the payroll section for a client
that mounts /pepsi/HR.
ACLs and permissions are still in effect, and you can control access with them.


Infinite Volume Recommendation Checklist


(Cont.)

Workloads
- Intended for: Not latency sensitive; typical file sizes >100 KB; static, unstructured data that is seldom overwritten or deleted; active archive, enterprise content repository
- Not intended for: Transactional workloads or workloads requiring low latency; typical file sizes <50 KB; structured data or data that changes often; HPC, software development, home directories, virtualization

Backup and Replication
- Intended for: Replication no more frequently than hourly
- Not intended for: Replication more frequently than hourly

Data Protection
- Intended for: RAID-DP, Snapshot copies, and asynchronous SnapMirror are sufficient
- Not intended for: SnapVault, MetroCluster compliance, or NDMP to tape

Storage Efficiency Features
- Intended for: Deduplication, compression, thin provisioning, Flash Cache cards, Flash Pool
- Not intended for: FlexClone

Number of Data Copies Required
- Intended for: Single data copy per cluster
- Not intended for: Distributed data copies

INFINITE VOLUME RECOMMENDATION CHECKLIST (CONT.)


The objective of this slide is to provide quick qualification criteria to identify Infinite Volume sales
opportunities. Attributes listed as "Intended for" are the target use-case criteria for Infinite Volume in
clustered Data ONTAP 8.2 and later. If any of the "Not intended for" criteria match a requirement of your
customer, then Infinite Volume is not a recommended solution for that customer.


Topics for Discussion


If an application requires 100 TB of storage, should I create a
FlexVol volume or an infinite volume?

Can an infinite volume support LUNs?


TOPICS FOR DISCUSSION


What Has Ken Learned in this Module?

I learned how an infinite volume uses constituent volumes spread across a cluster to create a single monolithic
data container. I also learned how to create an SVM to host an infinite volume, and I learned about its
capabilities and limitations.

WHAT HAS KEN LEARNED IN THIS MODULE?


Bonus Module B
Engaging NetApp Support


BONUS MODULE B: ENGAGING NETAPP SUPPORT

B-1

Clustered Data ONTAP Administration: Bonus Module B: Engaging NetApp Support

2015 NetApp, Inc. This material is intended only for training. Reproduction is not authorized.

NetApp University - Do Not Distribute

Ken Asks a Question

I understand about NDO and the reliability of NetApp storage. But eventually, I'm going to need help from
NetApp. What are my options?

KEN ASKS A QUESTION


Lesson 1

Tools to Interface with NetApp


LESSON 1: TOOLS TO INTERFACE WITH NETAPP


Engaging NetApp Support

Experienced Support Professionals
Customer Support Centers
Global Logistics
Award-Winning Support

ENGAGING NETAPP SUPPORT


NetApp Support is staffed with highly experienced support professionals who are located around the globe to
help with all your support needs.
With eight Customer Support Centers worldwide, NetApp maintains a global presence. In addition, Global
Logistics operates from more than 275 parts depots on five continents, so that the parts you need arrive as
quickly as possible.
The NetApp Support organization has been recognized as an industry leader that is known for its award-winning
support website.


Contacting NetApp Support


Email: support@netapp.com
Website: http://mysupport.netapp.com
Call:
- 888.4.NETAPP (888.463.8277) (US and Canada)
- 00.800.44.638277 (EMEA/Europe)
- +800.800.80.800 (Asia/Pacific)

CONTACTING NETAPP SUPPORT


There are three ways to contact NetApp Support.
You can email NetApp Support at support@netapp.com.
You can also contact NetApp Support by visiting the Support site at http://mysupport.netapp.com.
Or call the numbers that are listed for your region.


NetApp Support Site


Key features (available with login):
- Download software releases and patches
- Download product documentation
- Use the comprehensive knowledge base
- Manage your service contracts
- Access technical resources
- Log and monitor problem reports
- Share information through interactive community sessions

[Screenshots: the Support site home page without login and with login.]

NETAPP SUPPORT SITE


The NetApp Support site contains many features that customers will find useful. To access them, you must
create an account on the NetApp Support site by following the prompts of the "Create your account" wizard. To
register, you need a controller serial number and your company email address. This information validates that
you are authorized to access the details for your company. It may take up to 24 hours to create your account.
Your account enables you to download software, patches, and product documentation, and to use the
extensive NetApp knowledge base. You can also manage your install base information, including site
operating hours, contacts, and support contracts. You can open support cases online or discuss a topic on the
NetApp community forums.
Access the NetApp Support site at this link:
http://mysupport.netapp.com


Submitting a New Case


Procedure:
1. Specify the problem controller.
2. Select the problem category.
3. Verify that the problem does not already have a case.
4. Review the list of knowledge base articles.
5. Verify that the case information is correct.

SUBMITTING A NEW CASE


If you encounter a problem with a NetApp product, you can open a new support case on the NetApp Support
site. From Cases and Parts, select Create Case to open a case. After you have opened a case, you can check
the status of the case, request a replacement part, or check the part request status from the NetApp Support
site.
The following information is required to open a case:
1. Specify the problem controller that has the issue by entering the appropriate system-specific information.
Or, you can browse your install base to select the correct controller.
2. Select the problem category and appropriate subcategories, and type your question or problem statement.
3. Verify that the problem does not already have a case that was opened for it.
4. Review the list of knowledge base articles to see if your problem is already known and has a documented
solution.
5. Verify that the case information is correct, and then click Go to create the case.
Be sure to explore the NetApp Support site to see the other helpful tools and methods for finding the answers
that you need when you need them. To get started, review the Support Owner's Manual under the "Getting
Started" heading when you first log in.


Knowledge Base and Community Forums


Knowledge Base
https://kb.netapp.com/support/index?page=home&access=s

Support Community
https://forums.netapp.com/community/support


KNOWLEDGE BASE AND COMMUNITY FORUMS


You can also search the self-help knowledge base for articles or ask the Support Community forums to
resolve your issue.
Knowledge Base
To make your search more accurate, ask a complete question instead of using keywords. This allows the
natural language search engine to process the query by using a specific algorithm and then provide you with
optimal results.

Example: How do I change the J7 jumper on the AT-FCX module to get it running in 10Gb mode?

Community Forums
You can ask questions, exchange ideas, or get feedback from other community members on the NetApp
Community public forum. You must be a member to participate and connect to the NetApp community.


AutoSupport
Key features:
- Sophisticated monitoring for faster incident management and resolution
- Known as "phone home"
- Sends weekly AutoSupport messages to NetApp

[Figure: NetApp systems send AutoSupport messages to NetApp.]

AUTOSUPPORT
NetApp AutoSupport is an integrated, efficient monitoring and reporting technology that checks the health of
your AutoSupport-enabled NetApp systems on a continual basis. This "call home" feature in the Data
ONTAP software for all NetApp systems collects detailed performance data and sends that diagnostic data
back to NetApp, where it is automatically analyzed for any issues that might affect system stability and
performance.


My AutoSupport
Key features:
- Identifies risks and provides best practice tips
- Compares your hardware and software versions and alerts you to potential obsolescence
- Provides performance and storage utilization reports to proactively plan capacity needs
- Provides new system visualization tools, Transition Advisor, and Upgrade Advisor for clustered Data ONTAP systems

[Figure: NetApp systems send AutoSupport messages to the AutoSupport data warehouse in the NetApp Support cloud, where the data is accessible to NetApp, SSC partners, and customers, including through the My AutoSupport mobile app.]

MY AUTOSUPPORT
My AutoSupport is a suite of web-based applications hosted on the NetApp Support site and accessible via
your web browser. Using the data from AutoSupport, My AutoSupport proactively identifies storage
infrastructure issues through a continuous health-check feature and automatically provides guidance on
remedial actions that help increase uptime and avoid disruptions to your business.
My AutoSupport provides four primary functions.
First, it identifies risks and provides best practice tips. For example, My AutoSupport might find a
configuration issue, a bad disk drive, or version incompatibility on your system.
Second, My AutoSupport can compare your hardware and software versions and alert you to potential
obsolescence. For example, My AutoSupport alerts you about end-of-life (EOL) issues or an upcoming
support contract expiration date.
Third, My AutoSupport provides performance and storage utilization reports to help you proactively plan
capacity needs.
Last, My AutoSupport provides new system visualization tools and transition advisor tools for clustered Data
ONTAP systems.
If you plan any changes to your controllers, NetApp recommends manually triggering an AutoSupport
message before you make the changes. This manually triggered AutoSupport message provides a "before"
snapshot for comparison, in case a problem arises later.


Clustered Data ONTAP Dashboard


CLUSTERED DATA ONTAP DASHBOARD


The Fitness Dashboard is the central portal in My AutoSupport to manage the fitness of your NetApp storage
systems. This dashboard gives you a 360-degree view of the fitness of your installed base. In addition to
showing basic configuration fitness elements such as AutoSupport adoption, it summarizes risks, best practice
gaps, remote support coverage, and end-of-support components. The dashboard also provides a snapshot of
storage efficiency, and the ability to quickly access comprehensive details on a specific dashboard object.


Upgrade Advisor


UPGRADE ADVISOR
With the clustered Data ONTAP Dashboard, you can easily generate a Data ONTAP upgrade plan for a
cluster. Using the Upgrade Advisor link from the left navigation pane of the dashboard, select the nodes for
which you need an upgrade plan and then specify the target Data ONTAP version to generate an upgrade plan
for one or more nodes in the cluster.


The Interoperability Matrix Tool


What and Why

Key features:
- Web-based utility
- Provides detailed configuration information for NetApp products that work with third-party components

THE INTEROPERABILITY MATRIX TOOL: WHAT AND WHY


It is important to know which NetApp products and third-party products are supported for use within your
configuration. Many products exist, and the dependencies between them are complex. The Interoperability
Matrix Tool (IMT) has an easy-to-use interface and expanded search capabilities, which ensure that your
configuration and its third-party components work together.
Resources
Visit http://support.netapp.com/matrix.
The IMT User Guide is available at http://support.netapp.com/matrix/helpcms/pdf/User%20Guide.pdf.


Uploading a Core File

Method 1
Enable remote read-only HTTPS access.
Copy the file from the root volume to a local workstation.
After the file is on a local workstation, you can upload it by using https://upload.netapp.com.

Method 2
Run the system coredump upload command.
Log in with user name anonymous and use any valid email address as the password.
The type and case number fields are specific for the type of core that is being uploaded (application or kernel) and the specific case number that is opened for this issue.

UPLOADING A CORE FILE


There are occasions when a core file must be transferred to NetApp. There are two methods for transferring a
core file from a Data ONTAP 8.x node to NetApp.
To determine the core name, use the system node coredump show command to find the appropriate file:
::system node coredump> show
Node   Core Name                              Saved  Panic Time
-----  -------------------------------------  -----  ------------------
node1  core.101178384.2007-08-28.07_18_45.nz  true   8/28/2007 03:18:45
node1  core.101178384.2007-09-13.21_16_49.nz  true   9/13/2007 17:16:49
node2  core.101178745.2007-10-04.13_15_14.nz  true   10/4/2007 09:15:14

Method 1: Enable remote read-only HTTPS access to the root volume of each node.
NOTE: This option is available with clustered Data ONTAP 8.1.1 and later.
For information on how to enable remote read-only HTTPS access, see article 1013814: How to enable
remote access to a node's root volume in a cluster.
1. Copy the file from the root volume to a local workstation:
https://<cluster-mgmt-ip>/spi/<node_name>/etc/crash/
2. After the file is on a local workstation, you can upload it by using https://upload.netapp.com.
3. For more information, see article 1010364: How to upload a core file for analysis.


Method 2:
1. Run the system coredump upload command.
You can upload a core to NetApp from the storage system, provided that it has access to the Internet. The
syntax for the command is:
::>system node coredump upload -node <node_name> -corename
core.<id>.<date_time>.nz -location ftp://ftp.netapp.com/to-ntap/ -type
kernel -casenum <case_number>
2. Log in with user name anonymous and any valid email address as the password.
The type and case number fields are specific for the type of core being uploaded (application or kernel) and
the specific case number opened for this issue.
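As a minimal illustration of Method 2, the sketch below assembles the upload command from placeholder values. The node name, core name, and case number are invented for the example, not taken from a real system, and the command is only printed, not executed:

```shell
# Assemble the "system node coredump upload" command string from
# placeholder values; nothing here contacts a real storage system.
node="node1"
corename="core.101178384.2007-08-28.07_18_45.nz"   # taken from the coredump show listing above
casenum="2001234567"                               # hypothetical support case number
cmd="system node coredump upload -node $node -corename $corename \
-location ftp://ftp.netapp.com/to-ntap/ -type kernel -casenum $casenum"
echo "$cmd"
```

Pasting the printed line into the clustershell would then perform the actual upload, provided the node has Internet access.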


Lesson 2

Top Questions to NetApp Support


LESSON 2: TOP QUESTIONS TO NETAPP SUPPORT


Top Questions to NetApp Support (1 of 12)


Question #1
How do I generate a new AutoSupport?

First, enable HTTPS access on the node. Then log in to My AutoSupport and click the AutoSupport Upload button.


TOP QUESTIONS TO NETAPP SUPPORT (1 OF 12)


On storage systems that are running clustered Data ONTAP 8.1 and later, see Knowledge Base ID 1013814
(How to Enable Remote Access to a Node's Root Volume in a Cluster). This article outlines the steps to
configure HTTPS access to the root volume on the nodes in question.
After you have HTTPS access to the required node(s), you will need the sequence numbers of all the
AutoSupport messages you want to retransmit or collect.
> system node autosupport history show -node <node>
Retransmit all the required AutoSupport messages to local files in the root volume to get them into the archive
file format (.7z):
NOTE: You must include 'mroot' in the URI path:
> system node autosupport history retransmit -node <node> -seq-num
<number> -uri file:///mroot/etc/log/retransmit/<seq-num>-<type>.7z
If desired, you can also invoke a new AutoSupport for the most current content and save it as a local file:
> system node autosupport invoke -node <node> -type all -message
"invoke.all" -uri file:///mroot/etc/log/retransmit/<seq-num>-<type>.7z
Use the HTTPS Web administrative interface that was mentioned earlier (Article 1013814) to download the
generated .7z files in /etc/log/retransmit from the root volume of the storage controller. You will need to log
in with administrative privileges to access this interface.
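Once you have the sequence numbers, the retransmit step can be scripted. This sketch only prints the clustershell commands you would run; the node name and the sequence numbers 21-23 are placeholders:

```shell
# Print one retransmit command per AutoSupport sequence number.
# "node1" and the sequence numbers are illustrative only.
node="node1"
for seq in 21 22 23; do
  echo "system node autosupport history retransmit -node $node" \
       "-seq-num $seq -uri file:///mroot/etc/log/retransmit/${seq}-all.7z"
done
```

Each printed command writes one .7z archive under /mroot/etc/log/retransmit, ready to be downloaded over HTTPS as described above.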


URL for Data ONTAP 8.1 only:
https://<node-mgmt-ip>/na_admin/logs/retransmit
URL for Data ONTAP 8.1.1 or later:
https://<cluster-mgmt-ip>/spi/<node_name>/etc/log/retransmit
Log in to My AutoSupport via the NetApp Support site.
Click the AutoSupport Upload button that is located in the top right of the browser window.
For more information on this functionality, please check out the AutoSupport Manual Upload Feature
reference card.
Click the Select Files button and choose the .7z file you collected in Step 4.
Click the Upload button to complete the file upload to NetApp.
After the upload completes, the AutoSupport message should be available in My AutoSupport within 30
minutes. It may take up to 120 minutes during weekends and peak processing hours.

Resource
https://kb.netapp.com/support/index?page=content&id=1013073&locale=en_US


Top Questions to NetApp Support (2 of 12)


Question #2
How do I set up AutoSupport?

Execute commands to:
Add the mail hosts that send the AutoSupport email
Set up the message
Specify recipients of the email


TOP QUESTIONS TO NETAPP SUPPORT (2 OF 12)


Procedure
To set up AutoSupport, execute commands to specify the mail hosts, the addresses for AutoSupport mail, and
the storage system administrator's address; to send an immediate message; and to specify the recipients of
short email messages sent by AutoSupport.
Specify mail hosts:
The command to specify hosts that send AutoSupport email messages to NetApp Technical Support is:
options autosupport.mailhost hostname,...
You can specify up to five mail host names. Separate the names with commas, and do not include spaces in
the list. (Default host name is the administration host.) For example: options
autosupport.mailhost host1,host2,host3
NOTE: The storage system can send AutoSupport messages by SMTP (port 25), HTTP (port 80), or HTTPS
(port 443). HTTPS is the default. If the network connection does not allow HTTPS or HTTP, configure
AutoSupport for SMTP. To use HTTP or HTTPS to send AutoSupport messages, configure an HTTP or
HTTPS proxy.
Specify addresses for AutoSupport mail:
The command for specifying the recipients of automatic email messages sent by the AutoSupport daemon is:
options autosupport.to addresses,...
Up to five email addresses can be specified. Separate the email addresses with commas, and do not include
spaces in the list. The default email address is autosupport@netapp.com. For example:
options autosupport.to autosupport@netapp.com,d@c.com


Be sure to enter the actual command on a single line.


Specify the filer administrator's address:
Sometimes NetApp Support may need to contact the administrator of the storage system after receiving
automatic AutoSupport email. The options command for specifying the storage system administrator is:
options autosupport.from address. For example: options autosupport.from
jdoe@abc.com
Set up the message:
The command to send an automatic email message immediately is: options autosupport.doit
string
The string is used in the subject line of the email message to explain why the email was sent. For example:
options autosupport.doit TESTING
Immediate messages contain the same storage system data as automatic AutoSupport messages. The
following is an example of an immediate message:
FILER> Wed Jul 26 09:48:08 PDT [asup_main]: System Notification mail sent
Send a short message:
The options autosupport.noteto command specifies the recipients of short email messages that are
sent by AutoSupport. The short email messages are for urgent events such as disk failures or storage system
reboots.
Short messages are useful if the person who should be notified of urgent events reads email on a small screen
such as that on an alphanumeric pager.
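Putting the steps above together, a complete 7-Mode AutoSupport setup looks like the following sketch. The host names and the example.com addresses are placeholders, and the commands are printed rather than executed:

```shell
# Print a full AutoSupport configuration sequence for a 7-Mode system.
# mailhost1/mailhost2 and the example.com addresses are placeholders.
for opt in \
  "options autosupport.mailhost mailhost1,mailhost2" \
  "options autosupport.to autosupport@netapp.com,admin@example.com" \
  "options autosupport.from jdoe@example.com" \
  "options autosupport.noteto pager-admin@example.com" \
  "options autosupport.doit TESTING"
do
  echo "$opt"
done
```

Running the printed commands on the storage system console applies the configuration, and the final doit command sends an immediate test message.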


Top Questions to NetApp Support (3 of 12)


Question #3
How do I make a volume bigger?

Use the vol size volname [[+|-]size] command.


TOP QUESTIONS TO NETAPP SUPPORT (3 OF 12)


Use the command vol size to make a flexible volume larger or smaller.
vol size volname [[+|-]size]
This command sets or displays the given flexible volume's size as specified, using space from the volume's
containing aggregate. It can make the flexible volume either larger or smaller. The size argument has the same
form and obeys the same rules as when it is used in the vol create command to create a flexible volume.
Be careful if the sum of the sizes of all flexible volumes in an aggregate exceeds the size of the aggregate.
If [+|-]size is used, then the flexible volume's size is changed (grown or shrunk) by that amount. Otherwise,
the volume size is set to size (rounded up to the nearest 4 KB).
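The rounding rule can be checked with plain integer arithmetic: any size that is not a multiple of 4 KB is rounded up to the next one. A quick sketch, with an invented size in KB:

```shell
# Round a requested volume size up to the nearest 4 KB boundary,
# matching the "rounded up to the nearest 4 KB" rule of vol size.
requested_kb=1000001
rounded_kb=$(( (requested_kb + 3) / 4 * 4 ))
echo "$rounded_kb"   # 1000004
```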


Top Questions to NetApp Support (4 of 12)


Question #4
How do I use the Interoperability Matrix Tool (IMT)?

1. Launch the IMT tool.
2. From search criteria, select Data ONTAP Features.
3. From the left panel, select the components to add.

TOP QUESTIONS TO NETAPP SUPPORT (4 OF 12)


Procedure
1. Launch the Interoperability Matrix Tool.
2. From the Search Criteria center panel, select Data ONTAP Features.
3. From the left component panel, select the components to add.
As you add components, other components may become unavailable, appearing dimmed. The dimmed items
are not supported in combination with the components that you added.
Resource
http://mysupport.netapp.com/matrix/help-cms/pdf/User%20Guide.pdf


Top Questions to NetApp Support (5 of 12)


Question #5
Which version of clustered Data ONTAP should I run to avoid this BURT?

Run the Release Bug Comparison tool to compare releases and list the bug fixes.


TOP QUESTIONS TO NETAPP SUPPORT (5 OF 12)


From the Release Bug Comparison tool, select the releases of clustered Data ONTAP to compare. Click Go.
A list of bug fixes is displayed for you.
Resource
http://mysupport.netapp.com/NOW/cgi-bin/relcmp.on?what=fix&notfirst=Go!&rels


Top Questions to NetApp Support (6 of 12)


Question #6
Can you go over Upgrade Advisor with us?

Here are a few URLs to get you started.


TOP QUESTIONS TO NETAPP SUPPORT (6 OF 12)


Resources to get started with an upgrade:
Data ONTAP 8.2 Upgrade and Revert/Downgrade Guide
https://library.netapp.com/ecm/ecm_get_file/ECMP1368868
Data ONTAP 8.1 7-Mode Upgrade and Revert/Downgrade Guide
https://library.netapp.com/ecm/ecm_download_file/ECMP1114711
Data ONTAP 8.0 7-Mode Upgrade Guide
https://library.netapp.com/ecm/ecm_download_file/ECMM1249833
Data ONTAP 7.3 Upgrade Guide
https://library.netapp.com/ecm/ecm_download_file/ECMM1278409


Top Questions to NetApp Support (7 of 12)


Question #7
How can I download a P-release or a D-release of clustered Data ONTAP?

For a D-release, contact your support account team.
For a P-release, go to:
https://mysupport.netapp.com/NOW/cgi-bin/software/


TOP QUESTIONS TO NETAPP SUPPORT (7 OF 12)


A D-release is a debug release. Usually built for a specific customer that cannot wait for a true maintenance
release, D-releases are not published on the NetApp Support site. For a D-release, contact your account team
or your Support Account Manager (SAM).
A P-release is a patch release. You learn of patch releases through tools such as My AutoSupport or from
your Support Account Manager (SAM). For P-releases, access this URL:
https://mysupport.netapp.com/NOW/cgi-bin/software/
Scroll to the bottom of the screen and specify the version in the To access a specific field.
Resource
https://mysupport.netapp.com/NOW/cgi-bin/software/


Top Questions to NetApp Support (8 of 12)


Question #8
What commands should I use?

Here are the command-line manual pages for your version of clustered Data ONTAP.


TOP QUESTIONS TO NETAPP SUPPORT (8 OF 12)


Documentation by product type can be accessed at this link after you log in to the NetApp Support site:
http://mysupport.netapp.com/documentation/productlibrary/index.html?productID=30092
Other Resources:
Data ONTAP 8.2 Upgrade and Revert/Downgrade Guide
https://library.netapp.com/ecm/ecm_get_file/ECMP1368868
Data ONTAP 8.1 7-Mode Upgrade and Revert/Downgrade Guide
https://library.netapp.com/ecm/ecm_download_file/ECMP1114711
Data ONTAP 8.0 7-Mode Upgrade Guide
https://library.netapp.com/ecm/ecm_download_file/ECMM1249833
Data ONTAP 7.3 Upgrade Guide
https://library.netapp.com/ecm/ecm_download_file/ECMM1278409


Top Questions to NetApp Support (9 of 12)


Question #9
I have a CIFS share that I can see but not access. Why is this happening?

Access this link for top CIFS issues:
https://kb.netapp.com/support/index?page=content&id=3013892&locale=en_US&access=s


TOP QUESTIONS TO NETAPP SUPPORT (9 OF 12)


This question is a broad one, and there could be many reasons for the issue. Access this link for the top CIFS
issues:
https://kb.netapp.com/support/index?page=content&id=3013892&locale=en_US&access=s


Top Questions to NetApp Support (10 of 12)


Question #10
How do I keep NetApp from contacting me during planned maintenance events?

You can trigger an AutoSupport message that states Maintenance beginning and another that states Maintenance complete.


TOP QUESTIONS TO NETAPP SUPPORT (10 OF 12)


When performing maintenance on a system, you should trigger an AutoSupport notification stating
Maintenance beginning and, once completed, a Maintenance complete notification. AutoSupport
notifications are controlled with the options command:
>options autosupport.enable on
The default is on, which causes AutoSupport notifications to be sent. To disable AutoSupport notifications
during the maintenance window, use the command:
>options autosupport.enable off
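On clustered Data ONTAP, the same bracketing can be done by invoking AutoSupport messages directly with system node autosupport invoke, as shown earlier in this module. A sketch, in which the node name and message text are placeholders and the commands are only printed:

```shell
# Print the AutoSupport messages that bracket a maintenance window.
# "node1" is a placeholder node name.
begin_cmd='system node autosupport invoke -node node1 -type all -message "Maintenance beginning"'
end_cmd='system node autosupport invoke -node node1 -type all -message "Maintenance complete"'
echo "$begin_cmd"
echo "$end_cmd"
```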


Top Questions to NetApp Support (11 of 12)


Question #11
Why did my system perform a giveback?

You have the auto giveback option enabled.


TOP QUESTIONS TO NETAPP SUPPORT (11 OF 12)


It could be that the auto giveback option is enabled.
The option is:
options cf.giveback.auto.enable
This option turns automatic giveback off or on. An automatic giveback is invoked when one node of a cluster
is in takeover mode and the "down" node is repaired and reboots. The repaired node will boot into Data
ONTAP and the node in takeover mode will detect this and initiate a giveback.
Resource
https://kb.netapp.com/support/index?page=content&id=3013919&locale=en_US
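To check whether this is the cause, inspect the option. The sketch below prints the 7-Mode command from the notes above and, as an assumption on my part rather than a quote from the course, the clustered Data ONTAP equivalent using storage failover with the -auto-giveback parameter:

```shell
# Print the commands used to disable automatic giveback.
# The 7-Mode option comes from the notes above; the clustered Data
# ONTAP form is an assumption, and "node1" is a placeholder.
seven_mode="options cf.giveback.auto.enable off"
cdot="storage failover modify -node node1 -auto-giveback false"
echo "$seven_mode"
echo "$cdot"
```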


Top Questions to NetApp Support (12 of 12)


Question #12
How can I verify that all firmware for my hardware is up to date?

Access the System Firmware and Diagnostics Download site.
1. Select a platform.
2. Select a type.


TOP QUESTIONS TO NETAPP SUPPORT (12 OF 12)


Access the System Firmware and Diagnostics Download site at this link:
http://mysupport.netapp.com/NOW/cgi-bin/fw?platform=FAS8020&type=sp_ontap&plattype=&fs=
Choose the platform (for example, FAS8020), then choose the type (for example, Service Processor image
for installation from the Data ONTAP prompt). The results are displayed in a table. Review the options and
select the release to download.


References
NetApp Support website: http://mysupport.netapp.com
NetApp Knowledge Base: https://kb.netapp.com/support/index?page=home
NetApp Support Community: https://forums.netapp.com/community/support
Interoperability Matrix Tool: http://support.netapp.com/matrix


REFERENCES
Here are some links to learn more about the NetApp support site.
The NetApp Support website: A website where you can access technical resources and log and monitor
problem reports. You are required to create an account to access the site.
The NetApp Knowledge Base: A self-help knowledge base of articles and tips on NetApp products.
The NetApp Support Community: A website where you can ask product usage questions and exchange tips
and suggestions. You are required to create an account to access the site.
The Interoperability Matrix Tool: A search tool that shows which NetApp and third-party products are
supported within a particular configuration. You are required to create an account to access this tool.


What Ken Has Learned in This Module

NetApp technical assistance is everywhere. There are many ways to get help. I love the capabilities of AutoSupport, Upgrade Advisor, the knowledge base, and the community forums. It's also comforting to know that NetApp technical professionals are a phone call or email away.

WHAT HAS KEN LEARNED IN THIS MODULE


Bonus Module C
OnCommand Insight Walkthrough


BONUS MODULE C: ONCOMMAND INSIGHT WALKTHROUGH


Ken Asks

I got an overview of the capabilities of OnCommand Insight in Module 1. But I'd like to get a closer look. How about a tour?


KEN ASKS


Lesson 1

OnCommand Insight Assets


LESSON 1: ONCOMMAND INSIGHT ASSETS


Thin Client Login


THIN CLIENT LOGIN


If OnCommand Insight resides on the local host, launch a browser and go to https://localhost.
The default username is admin, and the password is admin123.


Asset Dashboard: Upper Screen
Screenshot callouts: toolbar, Launch Java (thick) client, administration and settings, online help, and current user.

ASSET DASHBOARD: UPPER SCREEN


The Insight thin client takes advantage of the HTML5 attributes from the browser and displays the Asset
Dashboard. The toolbar at the top of all pages provides access to the Insight tools.
Insight Toolbar
The name of the monitored environment is displayed on the left side of the toolbar, as you see in the example.
All Insight pages display the same toolbar across the top to provide access to the search function and these
Insight tools:

The Help menu contains these options:

The Help topics option includes getting started, installation, and configuration information for Insight 7.0.
The Data source support matrix option opens a detailed matrix for this version of Insight.
The Check for updates option indicates whether a new Insight version is available.
The Support option opens the NetApp Support page.
The Java UI Help option describes the original Insight client features that you might need to use with the new
Insight 7.0 features.
The About option lists the Insight version, build numbers, and copyright information.
The Admin icon opens the web UI configuration and troubleshooting tools. If a circled number appears on this
icon, the number is the total of all items that require your attention. Check the buttons in the Admin group to see
how these items are divided among the options.
The Launch Java UI icon opens the original Insight client. You need to use the Java UI to define annotations,
business entities, policies, and thresholds.
The Current User Logged in as <user role> icon displays the role of the person who is logged in and
provides the logout option.


Asset Dashboard: Search, Navigation, and Number of Problems
Screenshot callouts: search for a specific resource, browser navigation, number of potential problems, and asset dashboard global status charts.


ASSET DASHBOARD: SEARCH, NAVIGATION, AND NUMBER OF PROBLEMS


Insight runs in a browser and employs many standard browser navigation features. For example, click text
that appears in blue font to display linked information. Use the browser Back arrow to return to the
previously displayed page, or right-click and hold the pointer on the Back arrow to display a list of all
previous pages and select a specific page.
The total number of possible problems is indicated by a red dot next to the Admin icon on the toolbar. Use the
Search Assets box to locate an object in your environment and display its asset page. This page displays the
current status and links to more information.


Errors, Problems, and Links

Red dots indicate errors.
Action menu
Blue font indicates navigation links.


ERRORS, PROBLEMS, AND LINKS


Insight displays a red dot beside items in tables that you should examine for possible problems. The total
number of possible problems in the Admin area is displayed on the Admin icon on the toolbar.
Insight lists any items that it identifies as possibly causing serious problems as the first items in the Data
Sources table. You can use the filter option to sort the table to show only the items you want to view. The
Actions menu in the upper right corner displays the operations that you can perform on a table. If you
position your pointer over an item in a table, the operations that you can perform on that item appear as
clickable icons to the right.


Asset Dashboard: Lower Screen
Screenshot callouts: top 10 storage pools, heat maps (storage and virtual machine IOPS), and current capacity information.


ASSET DASHBOARD: LOWER SCREEN


Current Capacity Information
The left panel provides a quick overview of the current usage from the top Arrays, FC switches, Virtual
Infrastructure, and Efficiency sections.
Top 10 Utilized Storage Pools
To examine a resource in the top 10 storage pools chart, click a tall bar in the chart. In the example, the mouse
pointer shows the name of the storage pool with the highest utilization in the chart.
Heat Maps
In the lower right corner you can view the heat maps for storage input/output per second (IOPS) and virtual
machine IOPS.


Asset Dashboard: Heat Maps
Elements are sized by the number of correlated events.
Select the array to investigate the storage landing page.


ASSET DASHBOARD: HEAT MAPS


Heat Maps
If you look more closely at the heat maps, you see that any resource name that appears in large print has a
high amount of IOPS driven into that storage or virtual machine. Click any resource name in the heat maps to
display an asset page that summarizes the current state of that resource.


Storage Asset Landing Page: Top
Screenshot callouts: selected device type and name, time segment, last refresh, device configuration summary, correlation information, and performance charts.

STORAGE ASSET LANDING PAGE: TOP


Asset Page Types
If you select an asset from the heat maps section or search by using the Search Assets box in the toolbar, you
land on the asset landing page.
The type of information that displays on the asset pages depends on the type of base resource that is
represented.
Insight provides asset pages for these resource types:

Virtual machine
Volume
Internal volume
Physical host
Storage pool
Storage
Datastore
Hypervisor
Application
Node

Asset Page Details


You can display more details by using your mouse pointer to select linked or active items on asset pages.
Click a linked resource name to display summaries of information for that resource. Use the browser back
arrow to return to previous pages.


Show Time Icons


Click any of the time icons to change the segment of data that is displayed:

3 hours
24 hours
3 days
7 days
All data

Performance Charts
Select the checkboxes above the charts to determine which types of data are displayed in the charts. The types
of data vary depending on the type of the base resource. Move your pointer over the graphs to display more
details for any point on the graph. Select different time icons to display different segments of the data.
Top Correlated Resources
The Top correlated resources list shows the resources that have a high correlation on one or more
performance metrics with the base resource. Use the checkboxes and links in this list to supply additional
information:

Select the checkbox in front of the resource name to add the data from that resource to the charts. Each
resource is displayed in a different color.
Click the linked letter "T" beside the checkbox, and select whether to include the Total, Read only, or
Write only data in the performance charts. Total data is the default.
Click a linked resource name to open a page of summary data for that resource.
Click the linked percentage beside a resource name to open a box that compares the type of correlation
that resource has with the base resource.
If the correlated resources list does not contain a resource that you need in the performance charts, use the
Search Assets box to locate the resource and add it to the performance data.
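The notion of a "high correlation on one or more performance metrics" can be sketched with a Pearson correlation coefficient over two hypothetical IOPS series. This is only an illustration of the statistic; the data and the `pearson` helper below are invented, and Insight's actual correlation method is not documented here.

```python
# Illustrative only: two resources whose IOPS rise and fall together
# have a Pearson correlation near 1.0 (hypothetical sample data).

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

base_iops = [100, 120, 150, 170, 160, 180]   # base resource metric samples
peer_iops = [55, 62, 76, 88, 80, 91]         # candidate correlated resource
print(round(pearson(base_iops, peer_iops), 2))
```

A value close to 1.0 indicates that the peer resource's workload tracks the base resource closely, which is the kind of relationship the Top correlated resources list surfaces.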


Storage Asset Landing Page


Details
Storage pool configuration, capacity, and performance information
SAN configuration and performance information

11

2015 NetApp, Inc. All rights reserved. NetApp Proprietary Limited Use Only

STORAGE ASSET LANDING PAGE: DETAILS


Associated Resources and Violations
Scroll below the performance charts to examine linked lists of all associated resources and violations of any
preset thresholds. For example, a Virtual Machine asset page lists all virtual machine disks (VMDKs) in the
virtual machine and lists any threshold violations.


VM Asset Landing Page


Top
Navigate to this virtual machine from the heat map.
The topology view is displayed here, if it is available.

VM ASSET LANDING PAGE: TOP


Topology
This area shows the device and its connections. Position your mouse pointer over items in the topology map
to display additional information. Not all landing pages display the topology view. The topology views are
only available from the host or VM perspective to storage and they follow host-to-storage paths not just
physical connectivity. For instance, a topology view does not appear on the Storage landing page, but it does
appear on other pages, such as the VM Asset landing page that is displayed on the slide.


VM Asset Landing Page


Details
VMDK configuration, capacity, and performance information
SAN configuration and performance information
Violations

VM ASSET LANDING PAGE: DETAILS


This page provides details of the virtual machines, VMDK, and datastores, plus performance and
configuration information that is related to the virtual machine and all correlated objects.
Associated Resources and Violations
Scroll below the performance charts to examine linked lists of all associated resources and violations of any
preset thresholds. For example, a Virtual Machine asset page lists all VMDKs in the virtual machine and lists
any threshold violations.


Lesson 2

Insight Capacity and Utilization Trends, Forecasts, and Reports

LESSON 2: INSIGHT CAPACITY AND UTILIZATION TRENDS, FORECASTS, AND REPORTS


Capacity Forecast Dashboard


CAPACITY FORECAST DASHBOARD


The Capacity Forecast dashboard provides an overview of storage capacity in the enterprise from several
perspectives.
Because dashboards contain large amounts of useful information, you might need to wait a few seconds for
the dashboard to completely load from the data warehouse database.


Capacity Consumption Forecast Matrix


CAPACITY CONSUMPTION FORECAST MATRIX


The Capacity Consumption Forecast matrix forecasts how many months remain before storage usage at each
data center and each tier reaches 80%. This matrix provides a valuable high-level overview of storage. Click
the link for a data center and tier for details about how storage is used over time.
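As a rough sketch of how a months-to-threshold figure can be produced, the snippet below fits a linear trend to monthly used-capacity percentages and extrapolates to the 80% line. The data and the simple linear model are assumptions for illustration; Insight's actual forecasting model may differ.

```python
# Hypothetical sketch: fit a least-squares linear trend to monthly
# used-capacity samples and extrapolate to the 80% threshold.

def months_until_threshold(used_pct_history, threshold=80.0):
    n = len(used_pct_history)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(used_pct_history) / n
    slope_num = sum((x - mx) * (y - my) for x, y in zip(xs, used_pct_history))
    slope_den = sum((x - mx) ** 2 for x in xs)
    slope = slope_num / slope_den          # growth in % used per month
    if slope <= 0:
        return None                        # usage flat or shrinking: no forecast
    current = used_pct_history[-1]
    return max(0.0, (threshold - current) / slope)

history = [52, 55, 57, 61, 64, 66]         # % used, one sample per month
print(months_until_threshold(history))     # months remaining before 80%
```

With roughly 3% growth per month and 66% currently used, the sketch forecasts about five months of headroom, which is the kind of number the matrix cells summarize.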


Storage Capacity Trend Chart


STORAGE CAPACITY TREND CHART


The Storage Capacity Trend chart displays how storage is consumed over time. The Storage Capacity Trend
chart also displays the 80% threshold. You can adjust the threshold. Many customers use the chart to trigger
actions that they should take, such as optimizing storage, recovering unused or orphaned storage, or ordering
more storage.
This chart view changes depending on what you select in the matrix box on the left side of the dashboard.
You can view the chart by enterprise, by data center, or by tier.


Storage Utilization Dials


STORAGE UTILIZATION DIALS


The Storage Utilization dials display, at a glance, the percentage of utilization at the enterprise level and at the
data center level.


Top Business Entity Capacity Consumers Chart


TOP BUSINESS ENTITY CAPACITY CONSUMERS CHART


The Top Business Entity Capacity Consumers chart displays the top business entities and indicates which
entities are using the most capacity by tier and business unit. You can click to navigate to detailed information
about the storage that is consumed at the level of each business entity, for instance, Tenant, Line of Business,
Business Unit, and Project.


Navigating to Built-In Reports


NAVIGATING TO BUILT-IN REPORTS


Each dashboard includes a list of built-in reports that relate to the dashboard. The example displays capacity
storage reports.


Storage Summary Report


STORAGE SUMMARY REPORT


The Capacity Storage reports include detailed information about the amount of storage capacity that is
allocated, used, free, and overhead. You can view other detailed capacity reports that display VM capacity,
VM cost accountability, and storage utilization reports and trending.
Select a built-in report to open it.


Storage Tier Dashboard


STORAGE TIER DASHBOARD


This dashboard displays how well your tier strategy is working and exactly which applications and business
entities use each tier of storage.


Tier Usage Trends Chart


TIER USAGE TRENDS CHART


The Tier Usage Trends chart enables you to monitor your tier strategy. Use this chart to track trends and to
forecast your future needs, based on past tier usage.


Tier Usage by BU Pie Charts


TIER USAGE BY BU PIE CHARTS


Tier usage can easily be organized by business unit across the data centers, to indicate how much capacity
each unit uses.


Tier Usage by Applications Pie Charts


TIER USAGE BY APPLICATIONS PIE CHARTS


You can review which applications use different tiers of storage. Based on this information about application
needs, you can optimize load balancing and tiering.


Storage Capacity Details by Tier


STORAGE CAPACITY DETAILS BY TIER


These charts provide trending data about how the storage tiers are used over time. This information can help
administrators to optimize the storage tier usage based on real trends.


OnCommand Insight

Cross-Domain, Multivendor, and E-Series Management

Single Pane of Glass: end-to-end visibility
Reporting: chargeback and showback costs
Value-Add Product: price that is based on capacity

ONCOMMAND INSIGHT
Insight is a single solution that enables cross-domain, multivendor, and E-Series resource management and
analysis across networks, storage, and servers in physical and virtual environments.
Insight improves operational efficiency by providing a "single pane of glass," enabling end-to-end visibility
into the storage environment, and generating meaningful reports on storage costs for chargeback and
showback.
Insight is a value-add product. Currently its pricing is based on capacity by terabyte for multivendor storage
environments.


What Has Ken Learned in This Module?

OnCommand Insight is a powerful tool for real-time monitoring of my storage resources. It can help me to anticipate problems and proactively scale my cluster when necessary. It was great to get a closer look.

WHAT HAS KEN LEARNED IN THIS MODULE?


Bonus Module D

Data ONTAP Physical Storage Maintenance

BONUS MODULE D: DATA ONTAP PHYSICAL STORAGE MAINTENANCE


Disk Protection and Validation


Data ONTAP uses RAID to protect data on disk.
Data ONTAP verifies data by using media-level and RAID-level scrubbing.

DISK PROTECTION AND VALIDATION


Understanding how RAID protects your data and data availability can help you administer your storage
systems more effectively. You should also understand how Data ONTAP uses various techniques to verify the
protected data.


Validation Methods
Data ONTAP uses various methods to validate data:

Disk-level checksums are used to detect errors on the disks:
Block checksum (BCS): for disks with 520-byte sectors
Advanced-zoned checksum (AZCS): for disks with 512-byte sectors (mSATA)
Media-level scrubbing is used to detect and correct media errors. Media-level scrubs run continuously in the background.
RAID-level scrubbing is used to detect media errors and check parity consistency. RAID-level scrubs can be scheduled or run manually.

VALIDATION METHODS
Disk-level checksums: Two checksum types are available for disks that are used by Data ONTAP: BCS
(block) and AZCS (zoned). Both checksum types provide the same resiliency capabilities. BCS optimizes for
data access speed and reserves the smallest amount of capacity for the checksum for disks with 520-byte
sectors. AZCS provides enhanced storage utilization and capacity for disks with 512-byte sectors. You cannot
change the checksum type of a disk. To determine the checksum type of a specific disk model, see the
Hardware Universe.
Media-level scrubbing: The purpose of the continuous media scrub is to detect and correct media errors to
minimize the chance of storage system disruption due to a media error while a storage system is in degraded
or reconstruction mode.
By default, Data ONTAP runs continuous background media scrubbing for media errors on all storage system
disks. If a media error is found, Data ONTAP uses RAID to reconstruct the data and repairs the error. Media
scrubbing is a continuous background process. Therefore, you might observe disk LEDs blinking on an
apparently idle storage system. You might also observe some CPU activity even when no user workload is
present.
Because continuous media scrubbing searches only for media errors, its impact on system performance is
negligible. In addition, the media scrub attempts to exploit idle disk bandwidth and free CPU cycles to make
faster progress. However, any client workload results in aggressive throttling of the media scrub resource.
RAID-level scrubbing: RAID-level scrubbing means checking the disk blocks of all disks in use in
aggregates (or in a particular aggregate, plex, or RAID group) for media errors and parity consistency. If Data
ONTAP finds media errors or inconsistencies, it uses RAID to reconstruct the data from other disks and
rewrites the data.
RAID-level scrubs help improve data availability by uncovering and fixing media and checksum errors while
the RAID group is in a normal state. (For RAID-DP, RAID-level scrubs can also be performed when the
RAID group has a single-disk failure.) RAID-level scrubs can be scheduled or run manually.
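The parity-consistency part of a RAID-level scrub can be illustrated with a toy single-parity stripe. This sketch uses plain XOR parity and invented block contents; real RAID 4 and RAID-DP use more elaborate row and diagonal parity, so this only shows why a scrub can tell that a stripe's parity no longer matches its data blocks.

```python
# Toy single-parity stripe: parity is the XOR of all data blocks.

def parity(blocks):
    p = bytes(len(blocks[0]))
    for b in blocks:
        p = bytes(x ^ y for x, y in zip(p, b))
    return p

data = [b"\x11\x22", b"\x0f\xf0", b"\xaa\x55"]
stored_parity = parity(data)                 # parity written with the stripe

print(parity(data) == stored_parity)        # True: scrub finds stripe consistent

data[1] = b"\x0f\xf1"                       # simulate a silent media error
print(parity(data) == stored_parity)        # False: scrub flags the inconsistency
```

Once an inconsistency is flagged, the damaged block can be recomputed from the surviving blocks and parity and rewritten, which is the repair step the notes describe.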

Rapid RAID Recovery


If a disk has exceeded its error threshold, Data ONTAP can
perform rapid RAID recovery:
1. Prefail: Place the suspect disk in prefail mode.
2. Hot spare: Select a suitable hot spare replacement.
3. Copy: Copy the suspect disk contents to the selected spare.
4. Fix or fail: After the copy is complete, put the disk into the maintenance center to fix or fail the disk.

RAPID RAID RECOVERY


Data ONTAP continually monitors disks to assess their performance and health. When Data ONTAP
encounters certain errors or behaviors from a disk, it takes the disk offline temporarily or takes the disk out of
service to run further tests. Continuously monitoring and validating data is often called predictive failure in
the storage industry.
While the disk is offline, Data ONTAP reads from other disks within the RAID group while writes are
logged. When the offline disk is ready to come back online, Data ONTAP resynchronizes the RAID group
and brings the disk online. This process generally takes a few minutes and incurs a negligible performance
impact.
NOTE: A disk can be taken offline only if its containing RAID group is in a normal state and the plex or
aggregate is not offline.
When Data ONTAP determines that a disk has exceeded its error thresholds, Data ONTAP can perform rapid
RAID recovery by removing the disk from its RAID group for testing and, if necessary, failing the disk.
Spotting disk errors quickly helps prevent multiple disk failures and allows problem disks to be replaced.
By performing the rapid RAID recovery process on a suspect disk, Data ONTAP avoids three problems that
occur during sudden disk failure and the subsequent RAID reconstruction process:


Rebuild time
Performance degradation
Potential data loss due to additional disk failure during reconstruction


Selecting Hot Spares

(Diagram: spare selection outcomes. An exact match is preferred; a larger spare leaves unused capacity; a different speed affects performance; with no replacement available, the RAID group runs in degraded mode.)

SELECTING HOT SPARES


Data ONTAP always tries to choose a hot spare that exactly matches the disk that is failing or has failed. If an
exact match is not available, Data ONTAP uses the best available spare or goes into a degraded mode.
Understanding how Data ONTAP chooses an appropriate spare when there is no matching spare enables you
to optimize the spare allocation for your environment.
Data ONTAP picks a nonmatching hot spare based on the following criteria:

If the available hot spares are not the correct size, Data ONTAP uses one that is the next size up, if there
is one.
The replacement disk is downsized to match the size of the disk it is replacing; the extra capacity is not
available.
If the available hot spares are not the correct speed, Data ONTAP uses one that is a different speed.
Using disks with different speeds within the same aggregate is not optimal. Replacing a disk with a
slower disk can cause performance degradation, and replacing a disk with a faster disk is not cost-effective.

If there is no spare with an equivalent disk type or checksum type, the RAID group that contains the failed
disk goes into degraded mode; Data ONTAP does not combine effective disk types or checksum types within
a RAID group.
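The selection order described above can be sketched as a small decision function. This is not NetApp code; the dictionary fields (`type`, `gb`, `rpm`) and the helper are assumptions made only to illustrate the precedence: exact match, then next size up at the same speed, then a different speed, and degraded mode when no compatible type exists.

```python
# Hedged sketch of the spare-selection precedence described in the text.

def pick_spare(failed, spares):
    same_type = [s for s in spares if s["type"] == failed["type"]]
    if not same_type:
        return None                                   # no compatible type: degraded mode
    exact = [s for s in same_type
             if s["gb"] == failed["gb"] and s["rpm"] == failed["rpm"]]
    if exact:
        return exact[0]                               # exact match preferred
    bigger = sorted((s for s in same_type
                     if s["gb"] >= failed["gb"] and s["rpm"] == failed["rpm"]),
                    key=lambda s: s["gb"])
    if bigger:
        return bigger[0]                              # next size up; downsized when used
    return same_type[0]                               # different speed: last resort

failed = {"type": "SAS", "gb": 900, "rpm": 10000}
spares = [{"type": "SAS", "gb": 1800, "rpm": 10000},
          {"type": "SAS", "gb": 900, "rpm": 15000}]
print(pick_spare(failed, spares)["gb"])               # the next size up is chosen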


Degraded Mode
Degraded mode occurs when a disk in a RAID group fails.

During degraded mode:


Data is still available.
Performance is less than optimal.
Data must be recalculated from parity until the failed disk is replaced.
CPU usage increases to calculate from parity.

The failed disk (or disks, for RAID-DP) is rebuilt on a spare disk (if available).
If no spares are available, or if the rebuild fails, the system shuts down after a period of time.
To change the time interval, use the raid.timeout option. The default value is 24 hours.

DEGRADED MODE
If one disk in a RAID group fails, the system operates in degraded mode. In degraded mode, the system
does not operate optimally, but no data is lost. Within a RAID 4 group, if a second disk fails, data is lost;
within a RAID-DP group, if a third disk fails, data is lost. The following AutoSupport message is broadcast:
[monitor.brokenDisk.notice:notice].
If the maximum number of disks has failed in a RAID group (two for RAID-DP, one for RAID 4) and no
suitable spare disks are available for reconstruction, the storage system automatically shuts down in the
period of time specified by the raid.timeout option. The default timeout value is 24 hours. See this FAQ
for more information: https://kb.netapp.com/support/index?page=content&id=2013508
Therefore, you should replace failed disks and used hot-spare disks as soon as possible. You can use the
options raid.timeout command to modify the timeout interval. However, keep in mind that, as the
timeout interval increases, the risk of subsequent disk failures also increases.
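The extra work of serving reads in degraded mode can be shown with the same toy single-parity model: a missing disk's block is recomputed by XOR of the surviving data blocks and parity. The block contents are invented, and RAID-DP adds a second, diagonal parity that this sketch omits; it only illustrates why CPU usage rises until the failed disk is rebuilt.

```python
# Toy degraded-mode read: recompute a lost block from survivors plus parity.

def xor_blocks(blocks):
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\xff\x00"
p = xor_blocks([d0, d1, d2])        # parity written with the stripe

# d1's disk fails; every read of d1 now costs an extra XOR computation:
rebuilt = xor_blocks([d0, d2, p])
print(rebuilt == d1)                # the lost block is recovered
```

The same computation, applied stripe by stripe onto a hot spare, is what the reconstruction process does in the background.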


Disk Replacement
To replace a data disk with a spare disk:
Data ONTAP 7-Mode
system> disk replace start [-m] old_disk_name spare_name
-m if no speed match

Clustered Data ONTAP
c1::> storage disk replace -disk old_disk_name -replacement spare_name -action start

(Diagram: a RAID-DP group with double-parity and parity disks, data disks, and spare disks.)

DISK REPLACEMENT
You can use the storage disk replace command to replace disks that are part of an aggregate without
disrupting data service. You do this to swap out mismatched disks from a RAID group. Keeping your RAID
groups homogeneous helps optimize storage system performance.


Disk Sanitization
A way to protect sensitive data by making recovery of the data impossible
The process of physically obliterating data by overwriting disks with three successive byte patterns or with random data
Administrators can specify the byte patterns or use the Data ONTAP default pattern

DISK SANITIZATION
Disk sanitization is the process of physically obliterating data by overwriting disks with specified byte
patterns or with random data so that recovery of the original data is impossible. Use the disk sanitize
command to ensure that no one can recover the data on the disks.
The disk sanitize command uses three successive default or user-specified byte overwrite patterns for
up to seven cycles per operation. Depending on the disk capacity, the patterns, and the number of cycles, the
process can require several hours. Sanitization runs in the background. You can start, stop, and display the
status of the sanitization process.
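The multi-pass overwrite idea can be sketched in a few lines. The byte patterns below are arbitrary placeholders, not the Data ONTAP defaults, and a real sanitize operates on physical disk sectors rather than an in-memory buffer; the sketch only shows how successive full-pattern passes leave nothing of the original data.

```python
# Illustrative sketch of multi-pass overwriting (patterns are arbitrary,
# not the Data ONTAP defaults): each cycle rewrites every byte three times.

PATTERNS = [0x55, 0xAA, 0x3C]          # three byte patterns per cycle

def sanitize(disk, cycles=1):
    for _ in range(cycles):
        for pattern in PATTERNS:
            for i in range(len(disk)):
                disk[i] = pattern      # successive full overwrites
    return disk

disk = bytearray(b"confidential data")
sanitize(disk, cycles=2)
print(bytes(disk))                     # only the last pattern remains
```

As the notes point out, the number of cycles and patterns multiplies the time required, because every pass must touch every byte of the disk.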


Nondisruptive Shelf Removal


1. Verify and prepare the system: Confirm that the nodes are available and no disk reconstruction is running. Document the storage configuration, maximum capacity, and maximum spindle and volume counts.
2. Identify volumes and aggregates: Record details of the volumes and aggregates residing on the shelves to be removed.
3. Add new storage shelves: As required (this might require a new HBA). Verify limits. Create new aggregates, the same size as or larger than the originals.
4. Evacuate data: Move all volumes previously identified. Delete all aggregates. Remove disk ownership on the evacuated shelves. Verify that all disks are unowned.
5. Verify and remove shelves: Identify the shelves to replace. If removing an entire stack, remove all cables. If removing some shelves from a stack, recable to bypass the removed shelves on path A, then on path B. Power off and remove the shelves.

Configuration best practices:
Data ONTAP 8.2.1 or later
HA pair must use multipath HA
Single controller must use dual path
ACP is recommended

NONDISRUPTIVE SHELF REMOVAL


Typically, shelf stacks, or loops, have a homogeneous shelf type (for example, DS14 or DS4243). The most
common reasons for replacing a disk shelf stack are:

The disk hardware has reached the end of a capital depreciation period.
The disk hardware has reached the end of hardware support.
New-generation storage technology is available.

All the shelves on a stack or loop are replaced in one operation. This process is also known as a shelf stack, or
loop, upgrade. A common scenario is upgrading from 4-Gbps FC-AL or 3-Gbps SAS disk shelves to 6-Gbps
SAS disk shelves.
The steps to perform a shelf stack upgrade use clustered Data ONTAP features that are standard in 8.1 and
later versions. The primary Data ONTAP technology components are:

HA manual takeover and giveback


DataMotion for Volumes using the volume move command
Disk copy for the root volume (if necessary) using storage disk replace

For information about nondisruptive shelf removal, see Technical Report 4277: Nondisruptively Replace a
Complete Disk Shelf Stack with Clustered Data ONTAP.


Try This Task


Using cluster2 on your lab kit:

Use the storage raid-options show raid.scrub.schedule command. What is the schedule?
How would you run a manual RAID scrub?

TRY THIS TASK


Bonus Module E
Clustered Data ONTAP Architecture

BONUS MODULE E: CLUSTERED DATA ONTAP ARCHITECTURE


Ken Asks a Question


Can you share some of the details about what happens under the covers of a Data ONTAP cluster? What are the keys to NDO and the single point of administration?

KEN ASKS A QUESTION


Components
Three major software components on every node:
The network module
The data module
The SCSI module

Other key software components on every node:


The cluster session manager (CSM)
The replicated database (RDB) units


COMPONENTS
The modules refer to separate software state machines that are accessed only by well-defined APIs. Every
node contains a network module, a SCSI module, and a data module. Any network or SCSI module in the
cluster can talk to any data module in the cluster.
The network module and the SCSI module translate client requests into Spin Network Protocol (SpinNP)
requests and vice versa. The data module, which contains the WAFL (Write Anywhere File Layout) file
system, manages SpinNP requests. The cluster session manager (CSM) is the SpinNP layer between the
network, SCSI, and data modules. The SpinNP protocol is another form of remote procedure call (RPC)
interface. It is used as the primary intracluster traffic mechanism for file operations among network, SCSI, and
data modules.
The members of each replicated database (RDB) unit on every node in the cluster are in constant
communication with each other to remain synchronized. The RDB communication is like the heartbeat of
each node. If the heartbeat cannot be detected by the other members of the unit, the unit corrects itself in a
manner that is discussed later in this course. The four RDB units on each node are the blocks configuration
and operations manager (BCOM), the volume location database (VLDB), VifMgr, and management.


Single Node Components (Illustrated)


(Diagram: a single node. The network and SCSI modules handle client access (data); the M-host handles management; the CSM carries cluster traffic to the data module. The node owns its root volume, vol0, and a data SVM with its root volume and data volumes (Vol1, Vol2). The RDB units on the node are mgwd, VLDB, VifMgr, and BCOM.)

SINGLE NODE COMPONENTS (ILLUSTRATED)


The Network Module


Manages networking, NFS, and CIFS

Protocols:
TCP/IP and UDP/IP
NFS and CIFS
SpinNP


THE NETWORK MODULE


The SCSI Module


Manages networking, FC, FCoE, and iSCSI

Protocols:

FC
SCSI
SpinNP
TCP/IP


THE SCSI MODULE


The Data Module


Manages the WAFL (Write Anywhere File Layout) file system,
RAID, and storage
Protocols:
SpinNP
FC and SAS to disk and tape devices


THE DATA MODULE


The CSM
Provides a communication mechanism between any network
or SCSI module and any data module
Provides a reliable transport for SpinNP traffic
Is used regardless of whether the network or SCSI module
and the data module are on the same node or on different
nodes


THE CSM


The Path of a Local Write Request

(Slide graphic: two nodes, Node1 and Node2, each with network and SCSI modules, a CSM, and a data module over its own vol0 and data volumes. A NAS or SAN client request enters Node1 and is served by Node1's own data module; the response returns along the same path.)

THE PATH OF A LOCAL WRITE REQUEST

A NAS or SAN client sends a write request to a data logical interface (LIF). The network module (NAS) or
SCSI module (SAN) that is currently associated with that LIF translates the NFS or CIFS (NAS), or FC,
FCoE, or iSCSI (SAN) request to a SpinNP request. The SpinNP request goes through the CSM to the local
data module. The data module sends the data to NVRAM and to the disks. The response works its way back
to the client.


The Path of a Remote Write Request

(Slide graphic: the same two-node layout. Here the client request enters Node1, but the target volume lives on Node2, so the SpinNP request crosses the cluster interconnect from Node1's CSM to Node2's CSM and data module.)


THE PATH OF A REMOTE WRITE REQUEST

A NAS or SAN client sends a write request to a data LIF. The network module or SCSI module that is
currently associated with that LIF translates the NFS (or CIFS), FC, FCoE, or iSCSI request to a SpinNP
request. The SpinNP request goes through the CSM to the remote data module by means of the remote CSM.
The data module sends the data to NVRAM and to the disks. The response works its way back to the client.


Clustered Data ONTAP Modules

(Slide graphic: four nodes joined by the cluster interconnect; each node runs a network module, a SCSI module, and a data module with WAFL, RAID, storage, and NVRAM.)

The network module provides NAS protocols
The SCSI module provides SAN protocols
The data module provides storage access to the shelves (the WAFL file system, the RAID subsystem, and the storage subsystem)

CLUSTERED DATA ONTAP MODULES


Data ONTAP Architecture

(Slide graphic: the Data ONTAP stack on one node. Clients reach the protocol layers of the network and SCSI modules; cluster traffic crosses the CSM to the data module's WAFL, RAID, and storage layers; NVRAM in physical memory is mirrored to the HA partner; and management runs alongside.)


DATA ONTAP ARCHITECTURE


The Vol0 Volume

Contains data for managing the node and cluster:
Is used for RDB databases and log files
Doesn't contain user or client data
Cannot be accessed by NAS or SAN clients
Exists on every node (one vol0 per node)
Must not be confused with the root volume of a data storage virtual machine (SVM)
Cannot be mirrored, moved, or backed up
Can be re-created after a disaster


THE VOL0 VOLUME

The vol0 volume of a node is analogous to the root volume of a Data ONTAP 7G operating system. The vol0
volume contains the data that is needed for the node to function.
The vol0 volume does not contain any user data, nor is it part of the namespace of an SVM. The vol0 volume
resides permanently on the initial aggregate that is created when each node is initialized.
The vol0 volume is not protected by mirror relationships or tape backups, and that is acceptable. Although vol0 is an important volume (a node cannot boot without its vol0 volume), the data that it contains is largely re-creatable. If the data is lost, the log files are indeed gone, but because the RDB data is replicated on every node in the cluster, that data can be automatically re-created on this node.


Data SVMs
Characteristics

Formerly known as "cluster Vservers"

Are virtual entities within a cluster

Can coexist with other data SVMs in the same cluster
Are independent of nodes
Are independent of aggregates
Contain all the volumes of their namespaces


DATA SVMS: CHARACTERISTICS

Think of a cluster as a group of hardware elements (nodes, disk shelves, and more). A data SVM is a logical
piece of that cluster, but an SVM is not a subset or partitioning of the nodes. An SVM is more flexible and
dynamic. Every SVM can use all the hardware in the cluster, and all at the same time.
Example: A storage provider has one cluster and two customers: ABC Company and XYZ Company. An
SVM can be created for each company. The attributes that are related to specific SVMs (volumes, LIFs,
mirror relationships, and others) can be managed separately, while the same hardware resources can be used
for both. One company can have its own NFS server, while the other can have its own NFS, CIFS, and iSCSI
servers.


Data SVMs
Relationships to Volumes and LIFs

Represent unique namespaces

Can and should have multiple data LIFs, each of which is associated with one SVM
Can and do have multiple volumes, each of which is associated with one SVM


DATA SVMS: RELATIONSHIPS TO VOLUMES AND LIFS

A one-to-many relationship exists between an SVM and its volumes. The same is true for an SVM and its
data LIFs. Data SVMs can have many volumes and many data LIFs, but those volumes and LIFs are
associated only with this one data SVM.


Building a Namespace with Volumes and Junctions

(Slide graphic: a Data ONTAP cluster hosting nine volumes, R, A, B, C, D, E, F, G, and H, joined into a single tree.)

R is the root of a data SVM
A, B, C, and F are mounted to R through junctions
D and E are mounted to C through junctions
G and H are mounted to F through junctions


BUILDING A NAMESPACE WITH VOLUMES AND JUNCTIONS

These nine volumes are mounted together through junctions. All volumes must have a junction path (mount point) to be accessible within the SVM's namespace.
Volume R is the root volume of an SVM. Volumes A, B, C, and F are mounted to R through junctions. Volumes D and E are mounted to C through junctions. Likewise, volumes G and H are mounted to F.
Every SVM has its own root volume, and all nonroot volumes are created within an SVM. All nonroot volumes are mounted into the namespace, relative to the SVM root.
In this example, if volume C goes offline, clients that are mounted to R or C cannot access D or E. Clients that are mounted directly to D or E have uninterrupted access to D or E.
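The junction behavior described above can be sketched as a toy path-resolution model in Python. The junction paths and volume names are hypothetical and purely illustrative; this is a sketch of the concept, not Data ONTAP's implementation:

```python
# Map each junction path in the namespace to the volume mounted there.
# "R" is the SVM root volume; the rest mirror the slide's example tree.
junctions = {
    "/": "R",
    "/a": "A", "/b": "B", "/c": "C", "/f": "F",
    "/c/d": "D", "/c/e": "E",
    "/f/g": "G", "/f/h": "H",
}

def resolve(path, online):
    """Return the volume serving `path`, or None if an offline volume
    blocks the traversal (as when C goes offline and clients mounted
    at R can no longer reach D or E)."""
    parts = [p for p in path.split("/") if p]
    current = "/"
    vol = junctions["/"]
    for part in parts:
        # Cross each junction along the path, checking volume health.
        current = current.rstrip("/") + "/" + part
        if current in junctions:
            vol = junctions[current]
            if vol not in online:
                return None
    return vol if vol in online else None

all_vols = set(junctions.values())
print(resolve("/c/d", all_vols))          # D: reachable via R -> C -> D
print(resolve("/c/d", all_vols - {"C"}))  # None: offline C blocks the walk
```

A client mounted directly at D would not traverse C at all, which is why its access is uninterrupted in the slide's example.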


SVMs, Namespaces, and Volumes

(Slide graphic: four SVMs, PopCo, PetCo, RonCo, and QuekCo, each with its own namespace and its own SVM root volume; some SVMs also have additional data volumes.)

SVMS, NAMESPACES, AND VOLUMES

NOTE: This slide is a representation of logical concepts and is not meant to show any physical relationships.
For example, all the objects that are shown as part of an SVM are not necessarily on the same physical node
of the cluster. In fact, that situation is unlikely.
This slide shows four distinct SVMs and namespaces. Although the hardware is not shown, these four SVMs
might reside in a single cluster. These namespaces are not separate entities of the SVMs but are shown merely
to indicate that each SVM has a namespace. The volumes, however, are separate entities. Each volume is
associated with one SVM. Each SVM has one root volume, and some SVMs have additional volumes.
Although an SVM might have only one volume (the SVM's root volume), in real life an SVM more likely consists of multiple volumes, possibly thousands. Typically, a new volume is created for every distinct area of storage. For example, every department and employee might have a separate volume in an SVM.


Namespaces
A namespace is the file system of a data SVM

A namespace consists of many volumes

A namespace is independent of the namespaces of other data SVMs
The root of the namespace is the data SVM root volume
A client mount or mapping can be to the data SVM root volume or to a point further into the tree


NAMESPACES
A namespace is a file system. A namespace is the external, client-facing representation of an SVM. A
namespace consists of volumes that are joined together through junctions. Each SVM has one namespace, and
the volumes in one SVM cannot be seen by clients that are accessing the namespace of another SVM.


The Data SVM Root Volume

Exists on each data SVM (one per data SVM)
Is the root of the data SVM namespace
Is a normal flexible volume
Contains junctions
Can be moved, copied, and backed up
Can have Snapshot copies
Is usually mirrored


THE DATA SVM ROOT VOLUME

Each SVM has one namespace and, therefore, one root volume. This volume is separate from the vol0 volume
of each node.


The RDB
The RDB is the key to maintaining high-performance
consistency in a distributed environment
The RDB maintains data that supports the cluster, not
the user data in the namespace

Operations are transactional (atomic): entire transactions are either committed or rolled back
Four RDB units exist: the volume location database (VLDB), management, VifMgr, and the blocks configuration and operations manager (BCOM)


THE RDB
The RDB units do not contain user data. The RDB units contain data that helps to manage the cluster. These
databases are replicated; that is, each node has its own copy of the database, and that database is always
synchronized with the databases on the other nodes in the cluster. RDB database reads are performed locally
on each node, but an RDB write is performed to one master RDB database, and then those changes are
replicated to the other databases throughout the cluster. When reads of an RDB database are performed, those
reads can be fulfilled locally without the need to send requests over the cluster interconnects.
The RDB is transactional in that the RDB guarantees that when data is written to a database, either it all gets
written successfully or it all gets rolled back. No partial or inconsistent database writes are committed.
Four RDB units (the VLDB, management, VifMgr, and BCOM) exist in every cluster, which means that four
RDB unit databases exist on every node in the cluster.


Management Gateway
Is also known as the M-host

Enables management of the cluster from any node

Provides the CLI
Runs as mgwd (the management gateway daemon) on every
node
Stores its data in the management RDB unit


MANAGEMENT GATEWAY
The management RDB unit contains information that is needed by the management gateway daemon (mgwd)
process on each node. The kind of management data that is stored in the RDB is written infrequently and read
frequently. The management process on a given node can query the other nodes at run time to retrieve a great
deal of information, but some information is stored locally on each node, in the management RDB database.


Volume Location Database

The VLDB is:
One of the RDB units
An index of which aggregate owns a volume
An index of which node hosts an aggregate
VLDB content is cached in memory on each node for instant access by each network and SCSI module, which speeds up the lookup process during data access by clients.


VOLUME LOCATION DATABASE

Although each RDB unit consists of a process and a database on each node in the cluster, an RDB unit is
considered a single entity. One of the RDB units is the VLDB.
The VLDB tracks where the volumes and aggregates are.
Because the VLDB is potentially referenced (read) frequently for client requests, the VLDB content is cached
in memory on each node so that the network and SCSI modules can avoid RDB lookups during client
requests.


VifMgr
Runs as vifmgr

Stores and monitors LIF configuration

Stores and administers LIF failover policies


VIFMGR
The VifMgr is responsible for creating and monitoring NFS, CIFS, and iSCSI LIFs. It also handles automatic
NAS LIF failover and manual migration of NAS LIFs to other network ports and nodes.


Blocks Configuration and Operations Management

Runs as bcomd
Stores LUN map definitions
Stores initiator groups (igroups)


BLOCKS CONFIGURATION AND OPERATIONS MANAGEMENT

The BCOM RDB unit hosts the SAN ring that contains the replicated configuration information data for block
data access, including LUN maps and initiator groups (igroups).


The RDB
Details

Each RDB unit has its own replication ring

For each of the units, one node is the master and the other
nodes are secondaries
The master node for each unit might be different than the
master nodes for the other units
Writes for an RDB unit go to its master and are then
propagated to the secondaries through the cluster
interconnect


THE RDB: DETAILS

Each RDB unit has its own ring. An RDB ring is the total of all RDB units of each type across the cluster. For
example, in an eight-node cluster, the eight vldb units make up the vldb ring. Each of the four RDB rings
elects a master. The master is considered the "official" copy of the database in case of discrepancies.
If n is the number of nodes in the cluster, each unit or ring consists of n databases and n processes. At any
given time, one of those databases is designated as the master, and the others are designated as secondary
databases. Each RDB unit's ring is independent of the other RDB units. For example, if node X has the
master database for the VLDB unit, node Y might have the master for the VifMgr unit, and node Z might
have the master for the management unit and the BCOM unit.
The master of a given unit can change. For example, when the node that is the master for the management
unit is booted, a new management master must be elected by the remaining members of the management unit.
Note that a secondary can become a master and a master can become a secondary. Nothing is special about
the database itself; the database that is designated as the master is the role of the process that manages the
database (master versus secondary).
When data must be written to a unit, the data is written to the database on the master, and then the master immediately replicates the changes to the secondary databases on the other nodes. If a change cannot be replicated to a specific secondary, the entire change is rolled back everywhere, which is what "no partial writes" means. Either all databases of an RDB unit get the change, or none does.
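The all-or-nothing commit described above can be sketched in a few lines of toy Python. This is an illustration of the semantics only, with made-up node names, not the RDB's actual implementation:

```python
class RdbUnit:
    """Toy model of one RDB unit: one database copy per node."""

    def __init__(self, nodes):
        self.copies = {node: {} for node in nodes}   # per-node databases
        self.reachable = set(nodes)                  # nodes the master can reach

    def write(self, key, value):
        """Commit (key, value) to every copy, or roll back and commit nowhere."""
        if self.reachable != set(self.copies):
            # A secondary cannot be reached: the entire change is rolled back.
            return False
        for db in self.copies.values():
            db[key] = value
        return True

ring = RdbUnit(["node1", "node2", "node3", "node4"])
print(ring.write("vol7", "aggr2"))   # True: replicated to all four copies

ring.reachable.discard("node3")      # node3 drops off the cluster interconnect
print(ring.write("vol8", "aggr1"))   # False: no copy ever records vol8
```

The point of the model is the invariant at the end: after a failed write, no database copy contains a partial change.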


The RDB
Terminology

An RDB unit is considered to be healthy only when it is in quorum (when a master can be elected)
"In quorum" means that a simple majority of nodes are communicating with each other
When quorum is lost or regained, the master might change
If a master has communication issues, a new master is elected by the members of the unit
One node has a tie-breaking ability (epsilon) for all RDB units


THE RDB: TERMINOLOGY

RDB Terminology and Definitions
A master can be elected only when a quorum of member nodes is available (and healthy) for a particular RDB
unit. Each member votes for the node that it thinks should be the master for this RDB unit. One node in the
cluster has a special tie-breaking ability that is called epsilon. Unlike the master, which might be different
for each RDB unit, epsilon is a single node that applies to all RDB units.
Quorum means that a simple majority of nodes are healthy enough to elect a master for the unit. The epsilon
power is used only in the case of a voting tie. If a simple majority does not exist, the epsilon node (process)
chooses the master for a given RDB unit.
When cluster communication is interrupted (for example, because of a boot or a cluster interconnect hiccup that lasts for a few seconds), a unit goes out of quorum. When cluster communication is restored, the unit comes back into quorum automatically.


RDB Databases
(Slide graphic: a four-node cluster; node1 through node4 each hold a local copy of the four RDB unit databases: mgwd, VLDB, VifMgr, and BCOM. Dark borders mark the master copy of each unit and the node that holds epsilon.)


RDB DATABASES
This slide shows a four-node cluster. The four databases that are shown for each node are the four RDB units
(management, VLDB, VifMgr, and BCOM). Each unit consists of four distributed databases. Each node has
one local database for each RDB unit.
The databases that are shown on this slide with dark borders are the masters. Note that the master of any
particular RDB unit is independent of the master of the other RDB units.
The node that is shown on this slide with a dark border has epsilon (the tie-breaking ability).
On each node, all the RDB databases are stored in the vol0 volume.


Quorum
Overview

A quorum is a simple majority of connected, healthy, and eligible nodes
Two RDB quorum concepts exist: a cluster-wide quorum and an individual RDB unit that is in or out of quorum
RDB units never go out of quorum as a whole; only local units (processes) do
When an RDB unit goes out of quorum, reads from the RDB unit can still occur, but changes to the RDB unit cannot
Example: If the VLDB goes out of quorum, during the brief time that the database is out, no volumes can be created, deleted, or moved; however, access to the volumes from clients is not affected.


QUORUM: OVERVIEW
A master can be elected only when a majority of local RDB units are connected and healthy for a particular
RDB unit on an eligible node. A master is elected when each local unit agrees on the first reachable healthy
node in the RDB site list. A healthy node is one that is connected, can communicate with the other nodes,
has CPU cycles, and has reasonable I/O.
The master of a given unit can change. For example, when the node that is the master for the management
unit is booted, a new management master must be elected by the remaining members of the management unit.
A local unit goes out of quorum when cluster communication is interrupted, for example by a boot or a cluster interconnect hiccup that lasts for a few seconds. Because the RDB units always work to monitor and maintain a good state, the local unit comes back into quorum automatically. When a local unit goes out of quorum and then comes back into quorum, the RDB unit is synchronized again. Note that the VLDB process on a node might go out of quorum even though the VifMgr process on that same node has no problem.
When a unit goes out of quorum, reads from that unit can be performed, but writes to that unit cannot. That
restriction is enforced so that no changes to that unit happen during the time that a master is not agreed upon.
In addition to the example above, if the VifMgr goes out of quorum, access to LIFs is not affected, but no LIF
failover can occur.


Quorum
Details

The members of each RDB unit vote to determine which node will be their master; each unit elects its own master
Each master might change when a local unit goes out of and back into quorum
Before you take a node down for an extended period of time, you should mark it as ineligible (so that the node doesn't factor into quorum):
cluster1::> system node modify -node <node> -eligibility false


QUORUM: DETAILS
Marking a node as ineligible (by using the cluster modify command) means that the node no longer
affects RDB quorum or voting. If you mark the epsilon node as ineligible, epsilon is automatically given to
another node.


The Epsilon Node

When half of the nodes in a cluster are isolated from the other half, no simple majority exists
NOTE: This situation is rare.
One node has a weighted vote (epsilon)
The epsilon node is epsilon for the entire cluster, not per RDB unit (unlike the masters, which can differ by unit)


THE EPSILON NODE

One node in the cluster has a special voting weight called epsilon. Unlike the masters of each RDB unit,
which might be different for each unit, the epsilon node is the same for all RDB units. This epsilon vote is
used only in the case of an even partitioning of a cluster, where, for example, four nodes of an eight-node
cluster cannot talk to the other four nodes. This situation is rare, but in this situation, a simple majority does
not exist, and the epsilon node sways the vote for the masters of the RDB units.
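A toy calculation makes the tie-breaking role concrete. The node names are hypothetical, and the function is only a sketch of the voting rule described above, not NetApp code:

```python
def in_quorum(partition, total_eligible, epsilon_node):
    """True if this partition of the cluster can elect RDB masters:
    either it holds a simple majority of eligible nodes, or the cluster
    is split exactly in half and this partition holds epsilon."""
    majority = total_eligible // 2 + 1
    if len(partition) >= majority:
        return True
    # Exact even split: the half holding epsilon wins the tie.
    if 2 * len(partition) == total_eligible and epsilon_node in partition:
        return True
    return False

# Eight-node cluster partitioned 4/4: only the epsilon half stays in quorum.
left = {"n1", "n2", "n3", "n4"}
right = {"n5", "n6", "n7", "n8"}
print(in_quorum(left, 8, "n1"))   # True: four of eight, plus epsilon
print(in_quorum(right, 8, "n1"))  # False: four of eight, without epsilon
```

Note that epsilon matters only on the exact even split; any partition with a true simple majority is in quorum regardless of where epsilon lives.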


Which Cluster Is in Quorum?

(Slide graphic: a cluster split into two partitions, one with four nodes and one with two; only the four-node partition retains a simple majority and stays in quorum.)


WHICH CLUSTER IS IN QUORUM?


Two-Node Clusters
Two-node clusters are a special case:

No majority exists in the event of a cluster interconnect partition or during a failover situation
The RDB manages this case under the covers, but it must be told that this cluster contains only two nodes:

cluster1::> cluster ha modify -configured true

See TR-3450 for more information


TWO-NODE CLUSTERS
From Ron Kownacki, author of the RDB:
"Basically, quorum majority doesn't work well when you are down to two nodes and there's a failure, so the RDB essentially locks the fact that quorum is no longer being used and enables a single replica to be artificially writable during that outage.
The reason we require a quorum (a majority) is so that all committed data is durable: if you successfully write to a majority, you know that any future majority will contain at least one instance that has seen the change, so the update is durable. If we didn't always require a majority, we could silently lose committed data. So with two nodes, the node with epsilon is a majority and the other is a minority, so you would have only one-directional failover (you need the majority). Epsilon gives you a way to get majorities where you normally wouldn't have them, but it gives only unidirectional failover, because it's static.
In two-node (high-availability) mode, we try to get bidirectional failover. To do this, we remove the configuration epsilon and make both nodes equal, and we form majorities artificially in the failover cases. So quorum is two nodes available out of the total of two nodes in the cluster (no epsilon involved), but if there's a failover, you artificially designate the survivor as the majority (and lock that fact). However, that means you can't fail over the other way until both nodes are available, they sync up, and they drop the lock; otherwise you would be discarding data."


Putting It All Together

(Slide graphic: the single-node picture again. The network and SCSI modules handle client access, the M-host handles management, and the CSM carries cluster traffic to the data module, which serves vol0, the data SVM root volume, and volumes Vol1 and Vol2; vol0 holds the RDB units: management (mgwd), VLDB, VifMgr, and BCOM.)


PUTTING IT ALL TOGETHER
