
NetApp Storage TEC - NetApp Storage - Module 2 - I

DSEP Education Alliances

2018

© Atos - For internal use


NetApp
1. Introduction
2. NetApp Architecture
3. System Administration
4. Disk Administration
5. File Access Management


Topic 1
Introduction to NetApp

Network Appliance (NetApp)
Network Appliance (NetApp) creates storage systems and software to manage company data. NetApp offers products that cater to small, medium and large companies, and can provide support.
Other main storage vendors are:
EMC
Hitachi Data Systems
HP
IBM
The NetApp Filer, also known as NetApp Fabric-Attached Storage (FAS), is a type of disk storage device which owns and controls a file system and presents files and directories over the network; it uses an operating system called Data ONTAP (based on FreeBSD).

Network Appliance (NetApp)
NetApp Filers offer the following:
Support for SAN, NAS, FC, SATA, iSCSI, FCoE and Ethernet, all on the same platform
Support for SATA, FC and SAS disk drives
Support for block protocols such as iSCSI, Fibre Channel and AoE
Support for file protocols such as NFS, CIFS, FTP, TFTP and HTTP
High availability
Easy management
Scalability

Network Appliance (NetApp)
History
NetApp was created in 1992 by David Hitz, James Lau and Michael Malcolm; the company went public in 1995 and grew rapidly in the dot-com boom. The company's headquarters are in Sunnyvale, California, US. NetApp has acquired a number of companies that helped in the development of various products. The first NetApp network appliance shipped in 1993 and was known as a filer; this product was a new beginning in data storage architecture: the device did one task and it did it extremely well. NetApp made sure that the device used industry-standard hardware rather than specialized hardware. Today's NetApp products cater to small, medium and large corporations and can be found in many blue-chip companies.

Network Appliance (NetApp)
NetApp Filer
The NetApp Filer, also known as NetApp Fabric-Attached Storage (FAS), is a data storage device. It can act as a SAN or a NAS and serves storage over the network using either file-based or block-based protocols:
File-based protocols: NFS, CIFS, TFTP, HTTP
Block-based protocols: Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), Internet SCSI (iSCSI)
The most common NetApp configuration consists of a filer (also known as a controller or head node) and disk enclosures (also known as shelves). The disk enclosures are connected by FC, parallel/serial ATA or SAS; the filer is then accessed by Linux, Unix or Windows servers via the network (Ethernet or FC). An example setup would be like the one in the diagram below.

[Diagram: example setup with a filer, attached disk shelves, and Linux/Unix/Windows servers connecting over Ethernet or FC]
Network Appliance (NetApp)
The filer runs NetApp's own adapted operating system (based on FreeBSD) called Data ONTAP; it is highly tuned for storage-serving purposes.
All filers have battery-backed NVRAM, which allows them to commit writes to stable storage quickly without waiting on the disks.
It is also possible to cluster filers to create a high-availability cluster with a private high-speed FC link; clusters can then be grouped together under a single namespace when running in the cluster mode of the Data ONTAP operating system.

Network Appliance (NetApp)
The filer is either an Intel or AMD processor-based computer using PCI; each filer has a battery-backed NVRAM adapter to log all writes for performance and to replay them in the event of a server crash. The Data ONTAP operating system implements a single proprietary file system called WAFL (Write Anywhere File Layout).
WAFL is not a file system in the traditional sense, but a file layout that supports very large, high-performance RAID arrays (up to 100TB). It provides mechanisms that enable a variety of file systems and technologies to access disk blocks. WAFL also offers:
Network Appliance (NetApp)
Snapshots (up to 255 per volume can be made)
SnapMirror (disk replication)
SyncMirror (mirrored RAID arrays for extra resilience; can be mirrored up to 100km away)
SnapLock (write once, read many; data cannot be deleted until its retention period has been reached)
Read-only copies of the file system
Read-write snapshots, called FlexClone
ACLs
Quick defragmentation

Network Appliance (NetApp)
Filers offer two RAID options; you can also create very large RAID arrays of up to 28 disks, depending on the type of filer.
RAID 4: offers single parity on a dedicated disk (unlike RAID 5)
RAID-DP (RAID 6): similar to RAID 4 but offers double parity (more resilience); two disks in the RAID group can fail.
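As a side note, the single-parity idea behind RAID 4 can be illustrated with plain XOR arithmetic: one dedicated parity disk is enough to rebuild any single failed data disk (double parity extends this to two failures). The following is a toy Python sketch of the concept, not NetApp code:

```python
from functools import reduce

def parity(blocks):
    """XOR all blocks together, byte by byte, to produce a parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    """Reconstruct a single failed block: XOR the parity with the survivors."""
    return parity(surviving_blocks + [parity_block])

# Three data disks plus one dedicated parity disk (the RAID 4 layout)
d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\xff\x00"
p = parity([d0, d1, d2])

# Simulate losing disk d1 and rebuilding it from the remaining disks
assert rebuild([d0, d2], p) == d1
```

Because XOR is its own inverse, recomputing the parity over the survivors plus the old parity block yields exactly the missing block.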

Network Appliance (NetApp)
NetApp Backup
The last point to touch on is backups; NetApp offers two types.
Dump:
Backs up files and directories
Supports level-0, incremental and differential backups
Supports single-file restore
Capable of backing up only the base snapshot copy

Network Appliance (NetApp)
SMTape:
Backs up blocks of data to tape
Supports only level-0 backup
Does not support single-file restore
Capable of backing up multiple snapshot copies in a volume
Does not support remote tape backups and restores
The filer supports either SCSI or Fibre Channel (FC) tape drives and can have a maximum of 64 mixed tape devices attached to a single storage system.

Network Appliance (NetApp)
Network Data Management Protocol (NDMP) is a standardized protocol for controlling backup, recovery and other transfers of data between primary and secondary storage devices, such as storage systems and tape libraries. It removes the need to transport the data through the backup server itself, thus enhancing speed and removing load from the backup server. By enabling NDMP support, you enable the storage system to communicate with NDMP-enabled commercial network-attached backup applications; it also provides low-level control of tape devices and medium changers.

Network Appliance (NetApp)
The advantages of NDMP are:
Provides sophisticated scheduling of data protection across multiple storage systems
Provides media management and tape inventory management services to eliminate tape handling during data protection operations
Supports data cataloging services that simplify the process of locating specific recovery data
Supports multiple topology configurations, allowing sharing of secondary storage (tape library) resources through the use of three-way network data connections
Supports security features to prevent or monitor unauthorized use of NDMP connections

Topic 2
NetApp Architecture

Network Appliance (NetApp)

NetApp Architecture
The NetApp architecture consists of hardware, the Data ONTAP operating system and the network.

Network Appliance (NetApp)

Hardware
 NetApp have a number of filers that would fit into any company and cost, the filer itself may
have the following
 can be a Intel or AMD server (up to 8 dual core processors)
 can have dual power supplies
 can handle up to 64GB RAM and 4GB NVRAM (non-volatile RAM)
 can manage up to 1176GB storage (It depends on Filer Model)
 has a maximum limit of 1176 disk drives (It depends on Filer Model)
 can connect the disk shelves via a FC loop for redundancy
 can support FCP, SATA and SAS disk drives
 has a maximum 5 PCI and 3 PCI-express slots
 has 4/8/10GbE support
 64bit support
Network Appliance (NetApp)

The filer can be attached to a number of disk enclosures (shelves) which


expands the storage allocation, these disk enclosures are attached via FC,
as mentioned above the disk enclosures can support the following disks

These are fibre channel disks, they are very fast but
FCP
expensive
Serial attached SCSI disks again are very fast but expensive
SAS
, due to replace the FC disks
Serial ATA are slow disks but are cheaper, ideal for QA and
2018 SATA
DEV environments

Network Appliance (NetApp)

One note to remember is that the filer that connects to the top module of a
shelf controls (owns) the disks in that shelf under normal circumstances
(i.e. non-failover).
The filers can make use of VIF's (Virtual Interfaces), they come in two flavors

•1 active link, others are passive, standby links


Single-mode
•Failover when link is down
VIF
•No configuration on switches
•Multiple links are active at the same time
•Load balancing and failover
Multi-mode
2018 •Load balancing based on IP address, MAC address or
VIF
round robin
•Requires support & configuration on switches
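The difference between the multi-mode balancing policies can be sketched in a few lines of Python. This is a toy model only — the link names and hashing rule are invented for illustration, and Data ONTAP's actual algorithms differ:

```python
from itertools import cycle

links = ["e0a", "e0b", "e0c"]  # hypothetical active links in a multi-mode VIF

def pick_by_ip(src_ip: str) -> str:
    """IP-based balancing: hash the source address onto one link, so a
    given client always uses the same link (session affinity)."""
    return links[sum(int(octet) for octet in src_ip.split(".")) % len(links)]

rr = cycle(links)
def pick_round_robin() -> str:
    """Round-robin balancing: rotate through all active links in turn."""
    return next(rr)

# The same client IP always maps to the same link...
assert pick_by_ip("10.0.0.7") == pick_by_ip("10.0.0.7")
# ...while round robin spreads consecutive picks across every link
assert [pick_round_robin() for _ in range(3)] == links
```

Either way, if one link fails the remaining active links keep carrying traffic, which is the failover property both VIF modes provide.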

Network Appliance (NetApp)
Software
The operating system is Data ONTAP; the latest version is currently version 8, which fully supports grid technology (GX in version 7). It is fully compatible with Intel and AMD architectures, supports 64-bit, and borrows ideas from FreeBSD.
All additional NetApp products are activated via licenses; some require the filer to be rebooted, so check the documentation.

Network Appliance (NetApp)
Management of the filer can be accessed via any of the following:
Telnet or SSH
FilerView (HTTP GUI)
System Manager (client software GUI)
Console cable
SNMP and NDMP

Network Appliance (NetApp)

Storage Terminology
When talking about storage you will probably come across two solutions:
NAS (Network Attached Storage): speaks to a file, so the protocol is a file-based one; data is made to be shared. Examples: NFS (Unix); CIFS or SMB (Windows); FTP, HTTP, WebDAV, DAFS
SAN (Storage Area Network): speaks to a LUN (Logical Unit Number) and accesses it via data blocks; sharing is difficult. Examples: SCSI, iSCSI, FC-AL/FCP
Network Appliance (NetApp)
There are a number of terminologies associated with the above solutions.
share/export (NAS): a CIFS server makes data available via shares; a Unix server makes data available via exports
Drive mapping/mounting (NAS): CIFS clients typically map a network drive to access data stored on a storage server; Unix clients typically mount the remote resource
LUN (SAN): Logical Unit Number, basically a disk presented by a SAN to a host; when attached it looks like a locally attached disk
Target (SAN): the machine that offers a disk (LUN) to another machine, in other words the SAN
Initiator (SAN): the machine that expects to see the disk (LUN), i.e. the host OS; appropriate initiator software will be required
Fabric (SAN): one or more fibre switches with targets and initiators connected to them are referred to as a fabric; Cisco, McData and Brocade are well-known fabric switch makers
Network Appliance (NetApp)
HBA (SAN): Host Bus Adapter, the hardware that connects the server or SAN to the fabric switches; there are also iSCSI HBAs
Multipathing (MPIO) (SAN): the use of redundant storage network components responsible for transferring data between the server and the storage (cabling, adapters, switches and software)
Zoning (SAN): the partitioning of a fabric into smaller subsets to restrict interference, add security and simplify management; it is like VLANs in networking

Network Appliance (NetApp)
Below is a typical SAN setup using NetApp hardware.
[Diagram: typical SAN setup using NetApp hardware]
Network Appliance (NetApp)

Disk: the physical disk itself. Normally the disk resides in a disk enclosure and has a pathname like 2a.17 (2a = SCSI adapter, 17 = disk SCSI ID). Any disks classed as spare will be used in any group to replace failed disks. Disks are assigned to a specific pool; parity disks do not contain any data.

RAID Group (Pool): normally there are three pools: 0 (normal pool), 1 (mirror pool, if SyncMirror is enabled) and spare (spare disks that can be used for growth and to replace failed disks).

Aggregate: a collection of disks that can have either of the RAID levels below; an aggregate can contain up to 1176 disks, and you can have many aggregates with different RAID levels. An aggregate can contain many volumes (see volumes below). The RAID levels are RAID-4 and RAID-DP (RAID 6, better fault tolerance). One point to remember is that an aggregate can grow but cannot shrink. The disadvantage of RAID 4 is that a bottleneck can develop on the dedicated parity disk, which is normally the first disk to fail because it is used the most; however, NVRAM helps out by only writing to disk every 10 seconds or when the NVRAM is 50% full.

Network Appliance (NetApp)

Plex: when an aggregate is mirrored it will have two plexes; when thinking of plexes, think of mirroring. A mirrored aggregate can be split into two plexes.

Volume (Flexible): more or less like a traditional volume in other LVMs; it is a logical space within an aggregate that contains the actual data, and it can be grown or shrunk as needed.

LUN: the Logical Unit Number is what is presented to the host to allow access to the volume.

WAFL: Write Anywhere File Layout is the file system used; it uses inodes just like Unix. Disks are not formatted, they are zeroed. By default WAFL reserves 10% of disk space (unreclaimable).

Snapshot: a frozen, read-only image of a volume or aggregate that reflects the state of the file system at the time the snapshot was created. Snapshot features: up to 255 snapshots per volume; can be scheduled; maximum space occupied can be specified (default 20%); file permissions are handled.
Network Appliance (NetApp)
Snapshots in the NetApp world are very fast: a snapshot simply records all the blocks associated with the files, and that data is never actually changed. If a block is changed, a new block is created and the snapshot still points to the old block. NetApp has two products called SnapDrive and SnapManager that deal with consistency problems where data has not actually been written to disk but is cached in memory buffers; you might want to take a look at these products.
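The block-pointer behaviour just described can be sketched as a toy copy-on-write volume. The class and method names are invented for illustration and do not reflect WAFL internals:

```python
class Volume:
    """Toy copy-on-write volume: a snapshot just copies the table of
    file-to-block pointers; an overwrite allocates a NEW block, so the
    snapshot keeps pointing at (and seeing) the old data."""
    def __init__(self):
        self.blocks = {}       # block id -> data (never overwritten)
        self.active = {}       # file name -> current block id
        self.next_id = 0

    def write(self, name, data):
        self.blocks[self.next_id] = data   # "write anywhere": a fresh block
        self.active[name] = self.next_id
        self.next_id += 1

    def snapshot(self):
        return dict(self.active)           # copy the pointers, not the data

    def read(self, name, snap=None):
        table = snap if snap is not None else self.active
        return self.blocks[table[name]]

vol = Volume()
vol.write("file1", b"v1")
snap = vol.snapshot()
vol.write("file1", b"v2")      # new block; the old block is untouched

assert vol.read("file1") == b"v2"          # the live file sees new data
assert vol.read("file1", snap) == b"v1"    # the snapshot still sees old data
```

This is why taking a snapshot is nearly instant: no data blocks are copied, only a small table of pointers.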

Network Appliance (NetApp)

There are three additional replication products that you can use:
SyncMirror (used primarily for data redundancy): real-time replication of data; maximum distance of up to 35km; Fibre Channel or DWDM protocol; synchronous
SnapMirror (used primarily for disaster recovery): long-distance DR and data consolidation; no limit on distance; IP protocol (WAN/LAN); asynchronous mirror (> 1 minute)
SnapVault (used primarily for backup/restore): disk-to-disk backup, restore, HSM; no limit on distance; IP protocol (WAN/LAN); asynchronous mirror (> 1 hour)

Topic 3
NetApp System
Administration

Network Appliance (NetApp)

NetApp System Administration

Basically the NetApp filer is a Unix server highly tuned to deliver large amounts of storage; the hardware is again very similar to the computer you have at home, but with extra redundancy features.

As you know, the operating system is called Data ONTAP and is based on FreeBSD. You don't need to know a great deal about Unix in order to manage and set up a NetApp filer: it comes with two excellent GUI tools, one of which is web based. It would, however, be worthwhile getting to know Unix for more difficult problems, as you will need to use the command line.
Generally the NetApp filer will be set up when you receive it; it should have the latest Data ONTAP OS installed and be ready to go.
Network Appliance (NetApp)
Accessing NetApp
Once you have your NetApp filer powered up and on the network, you can access it by any of the following common methods:
telnet/SSH
Network Appliance (NetApp)
Web Access GUI (http)

Network Appliance (NetApp)
There are a number of common session-related parameters that you may wish to tweak; there are many more than those below, so take a peek at the documentation.

Help:
ontap1> options ?

Telnet:
ontap1> options telnet
telnet.access legacy
telnet.distinct.enable on
telnet.enable off
## Enabling telnet access
ontap1> options telnet.enable on
Network Appliance (NetApp)

ontap1> options ssh


ssh.access *
ssh.enable on
ssh.idle.timeout 0
ssh.passwd_auth.enable on
ssh.port 22
ssh.pubkey_auth.enable on
ssh1.enable off
SSH ssh2.enable on
## change the idle timeout to 5 minutes
ontap1> options ssh.idle.timeout 300
## You can also use the secureadmin command to setup
SSH/SSL
secureadmin [setup|addcert|enable|disable|status]
## You also use the system manager

Network Appliance (NetApp)
HTTP:
ontap1> options http
httpd.access legacy
httpd.admin.access legacy
httpd.admin.enable on
httpd.admin.hostsequiv.enable off
httpd.admin.max_connections 512
httpd.admin.ssl.enable on
httpd.admin.top-page.authentication on
httpd.autoindex.enable off
httpd.bypass_traverse_checking off
httpd.enable off
httpd.log.format common
httpd.method.trace.enable off
httpd.rootdir XXX
httpd.timeout 300
httpd.timewait.enable off
## Enabling HTTP administration access
ontap1> options httpd.admin.enable on

Session timeout:
ontap1> options autologout
autologout.console.enable on
autologout.console.timeout 300
autologout.telnet.enable on
autologout.telnet.timeout 300
## Change the timeout values
ontap1> options autologout.telnet.timeout 300

Security:
ontap1> options trusted
trusted.hosts *
## Only allow specific hosts to administer the NetApp filer
ontap1> options trusted.hosts <host1>,<host2>
Network Appliance (NetApp)
System Configuration and Administration
NetApp filers have two privilege modes; advanced privilege allows you to access more advanced and dangerous features:
Administrative (default)
Advanced

To set the privilege:
priv set [-q] [admin | advanced]
-q = quiet, suppresses warning messages
Note: by default you are in administrative mode

Network Appliance (NetApp)
You can use the normal shutdown or reboot commands to halt or restart the NetApp filer; if your filer has an RLM or BMC, you can also start the filer in different modes.

Startup modes:
boot_ontap - boots the current Data ONTAP software release stored on the boot device
boot_primary - boots the Data ONTAP release stored on the boot device as the primary kernel
boot_backup - boots the backup Data ONTAP release from the boot device
boot_diags - boots a Data ONTAP diagnostic kernel
Note: there are other options, but NetApp will provide these as and when necessary

Shutdown:
halt [-t <mins>] [-f]
-t = shut down after the specified minutes
-f = used with HA clustering; means that the partner filer does not take over

Restart:
reboot [-t <mins>] [-s] [-r] [-f]
-t = reboot in the specified minutes
-s = clean reboot, but also power-cycle the filer (like pushing the off button)
-r = bypasses the shutdown (not clean) and power-cycles the filer
-f = used with HA clustering; means that the partner filer does not take over

Network Appliance (NetApp)
When the filer boots you have a chance to enter the boot menu [Ctrl-C], which gives you a number of options that allow you to change the system password, put the filer into maintenance mode, wipe all disks, etc.

1) Normal Boot.
2) Boot without /etc/rc.
3) Change password.
4) Clean configuration and initialize all disks.
5) Maintenance mode boot.
6) Update flash from backup config.
7) Install new software first.
8) Reboot node.
Selection (1-8)?

Normal Boot - continue with the normal boot operation
Boot without /etc/rc - boot with only default options and disable some services
Change password - change the storage system's password
Clean configuration and initialize all disks - cleans all disks and resets the filer to factory default settings
Maintenance mode boot - file system operations are disabled; limited set of commands
Update flash from backup config - restore the configuration information if it is corrupted on the boot device
Install new software first - use this if the filer does not include support for the storage array
Reboot node - restart the filer

Network Appliance (NetApp)
To check which version of Data ONTAP you have, use the version command:
version [-b]
-b = include name and version information for the primary, secondary and diagnostic kernels and the firmware

Users, groups and roles work much as they do in the Unix world; the commands and options that you should be aware of are the following.
Users - you can perform the following using the useradmin command: add, modify, delete, list
Groups - you can perform the following using the useradmin command: add, modify, delete, list
Network Appliance (NetApp)

Roles - you can perform the following using the useradmin command: add, modify, delete, list
Domainuser - you can perform the following using the useradmin command: add, delete, list, load
Diaguser - you can perform the following using the useradmin command: lock, unlock, list, load

User password options:
security.passwd.firstlogin.enable off
security.passwd.lockout.numtries 4294967295
security.passwd.rootaccess.enable on
security.passwd.rules.enable on
security.passwd.rules.everyone on
security.passwd.rules.history 6
security.passwd.rules.maximum 256
security.passwd.rules.minimum 8
security.passwd.rules.minimum.alphabetic 2
security.passwd.rules.minimum.digit 1
security.passwd.rules.minimum.symbol 0
Network Appliance (NetApp)
There are a number of commands to get system configuration information and statistics.
System Configuration:
General information: sysconfig, sysconfig -v, sysconfig -a (detailed)
Configuration errors: sysconfig -c
Display disk devices: sysconfig -d, sysconfig -A
Display RAID group information: sysconfig -V
Display aggregates and plexes: sysconfig -r
Display tape devices: sysconfig -t
Display tape libraries: sysconfig -m

Network Appliance (NetApp)

Environment Information:
General information: environment status
Disk enclosures (shelves): environment shelf [adapter], environment shelf_power_status
Chassis: environment chassis all, environment chassis list-sensors, environment chassis Fans, environment chassis CPU_Fans, environment chassis Power, environment chassis Temperature, environment chassis [PS1|PS2]

Fibre Channel Information:
Fibre Channel stats: fcstat link_stats, fcstat fcal_stats, fcstat device_map

Network Appliance (NetApp)

SAS Adapter and Expander Information:
Shelf information: sasstat shelf
Expander information: sasstat expander, sasstat expander_map, sasstat expander_phy_state
Disk information: sasstat dev_stats
Adapter information: sasstat adapter_state

Network Appliance (NetApp)

Statistical Information:
All stats: stats show
System: stats show system
Processor: stats show processor
Disk: stats show disk
Volume: stats show volume
LUN: stats show lun
Aggregate: stats show aggregate
FC: stats show fcp
iSCSI: stats show iscsi
CIFS: stats show cifs
Network: stats show ifnet
Network Appliance (NetApp)

Licensing
NetApp extra features can be enabled by licensing the product; you can perform this either via the command line or the System Manager GUI.
## display licenses
license
## Adding a license
license add <code1> <code2>
## Disabling a license
license delete <service>

Topic 4
NetApp Disk
Administration

Network Appliance (NetApp)
NetApp Disk Administration

This section covers disk administration; we will cover the basics of the following:
Storage
Disks
Aggregates (RAID options)
Volumes (FlexVol and Traditional)
FlexCache
FlexClone
Deduplication
QTrees
CIFS Oplocks
Security styles
Quotas

Network Appliance (NetApp)
The storage command can configure and administer a disk enclosure; the main storage commands are below.

Display:
storage show adapter
storage show disk [-a|-x|-p|-T]
storage show expander
storage show fabric
storage show fault
storage show hub
storage show initiators
storage show mc
storage show port
storage show shelf
storage show switch
storage show tape [supported]
storage show acp
storage array show
storage array show-ports
storage array show-luns
storage array show-config

Enable: storage enable adapter
Disable: storage disable adapter
Rename switch: storage rename <oldname> <newname>
Remove port: storage array remove-port <array_name> -p <WWPN>
Load balance: storage load balance
Power cycle:
storage power_cycle shelf -h
storage power_cycle shelf start -c <channel name>
storage power_cycle shelf completed

Network Appliance (NetApp)
Your NetApp filer will have a number of disks attached; when attached, each disk will have the following device name.

Disks

Disk name: this is the physical disk itself; normally the disk will reside in a disk enclosure, and the disk will have a pathname like 2a.17, depending on the type of disk enclosure (2a = SCSI adapter, 17 = disk SCSI ID). Any disks that are classed as spare will be used in any group to replace failed disks; they can also be assigned to any aggregate. Disks are assigned to a specific pool.
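The adapter.ID naming convention above is easy to pick apart mechanically. A toy parser (the function name is invented for illustration):

```python
def parse_disk_name(name: str):
    """Split a disk pathname like '2a.17' into its two parts:
    the SCSI adapter before the dot and the disk SCSI ID after it."""
    adapter, scsi_id = name.split(".")
    return {"adapter": adapter, "scsi_id": int(scsi_id)}

assert parse_disk_name("2a.17") == {"adapter": "2a", "scsi_id": 17}
```

Real enclosures can use other naming variants, so treat this as a sketch of the convention described here rather than a general-purpose parser.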

Network Appliance (NetApp)

There are only four types of disk in Data ONTAP:
Data: holds data stored within the RAID group
Spare: does not hold usable data but is available to be added to a RAID group in an aggregate; also known as a hot spare
Parity: stores data reconstruction information within the RAID group
dParity: stores double-parity information within the RAID group, if RAID-DP is enabled

Network Appliance (NetApp)
There are a number of disk commands that you can use.

Display:
disk show
disk show <disk_name>
disk_list
sysconfig -r
sysconfig -d
## list all unassigned/assigned disks
disk show -n
disk show -a

Adding (assigning):
## Add a specific disk to pool 1, the mirror pool
disk assign <disk_name> -p 1
## Assign all disks to pool 0; by default they are assigned to pool 0 if the "-p" option is not specified
disk assign all -p 0

Remove (spin down disk): disk remove <disk_name>
Reassign: disk reassign -d <new_sysid>
Replace:
disk replace start <disk_name> <spare_disk_name>
disk replace stop <disk_name>
Note: uses Rapid RAID Recovery to copy data from the specified file system disk to the specified spare disk; you can stop this process using the stop command

Network Appliance (NetApp)
Zero spare disks: disk zero spares
Fail a disk: disk fail <disk_name>
Scrub a disk:
disk scrub start
disk scrub stop
Sanitize:
disk sanitize start <disk_list>
disk sanitize abort <disk_list>
disk sanitize status
disk sanitize release <disk_list>
Note: release modifies the state of the disk from sanitize to spare; sanitize requires a license
Maintenance:
disk maint start -d <disk_list>
disk maint abort <disk_list>
disk maint list
disk maint status
Note: you can test the disk using maintenance mode
Swap a disk:
disk swap
disk unswap
Note: this stalls all SCSI I/O until you physically replace or add a disk; can be used on SCSI disks only
Statistics: disk_stat <disk_name>
Simulate a pulled disk: disk simpull <disk_name>
Simulate a pushed disk:
disk simpush -l
disk simpush <complete path of disk obtained from above command>
## Example
ontap1> disk simpush -l
The following pulled disks are available for pushing:
v0.16:NETAPP__:VD-1000MB-FZ-520:14161400:2104448
ontap1> disk simpush v0.16:NETAPP__:VD-1000MB-FZ-520:14161400:2104448


Network Appliance (NetApp)
Aggregates
Disks are grouped together in aggregates; these aggregates provide storage to the volume or volumes that they contain. Each aggregate has its own RAID configuration, plex structure and set of assigned disks or array LUNs. You can create traditional volumes or NetApp's FlexVol volumes (see the section on volumes below). There are two types of aggregate:
32-bit - maximum 16TB
64-bit - maximum 100TB
An aggregate has only one plex (pool 0); if you use SyncMirror (a licensed product) you can mirror the aggregate, in which case it will have two plexes (pool 0 and pool 1). Disks can be assigned to different pools, which are used for hot spares or for extending aggregates in those pools. The plexes are updated simultaneously when mirroring aggregates and need to be resynchronized if you have problems with one of the plexes.
[Diagram: mirrored aggregate with plex 0 (pool 0) and plex 1 (pool 1)]

Network Appliance (NetApp)

When using RAID 4 or RAID-DP the largest disks will be used as the parity disk(s); if you add a new, larger disk to the aggregate, it will be reassigned as the parity disk.

An aggregate can be in one of three states:
Online: read and write access to volumes is allowed
Restricted: some operations, such as parity reconstruction, are allowed, but data access is not allowed
Offline: no access to the aggregate is allowed
Network Appliance (NetApp)
The aggregate can have a number of different status values:
32-bit: this aggregate is a 32-bit aggregate
64-bit: this aggregate is a 64-bit aggregate
aggr: this aggregate is capable of containing FlexVol volumes
copying: this aggregate is currently the target of an active copy operation
degraded: this aggregate contains at least one RAID group with a single-disk failure that is not being reconstructed
double degraded: this aggregate contains at least one RAID group with a double-disk failure that is not being reconstructed (RAID-DP aggregates only)
foreign: disks that the aggregate contains were moved to the current storage system from another storage system
growing: disks are in the process of being added to the aggregate
initializing: the aggregate is in the process of being initialized
invalid: the aggregate contains no volumes and none can be added; typically this happens only after an aborted "aggr copy" operation
ironing: a WAFL consistency check is being performed on the aggregate
mirror degraded: the aggregate is mirrored and one of its plexes is offline or resynchronizing
mirrored: the aggregate is mirrored
needs check: a WAFL consistency check needs to be performed on the aggregate
normal: the aggregate is unmirrored and all of its RAID groups are functional

Network Appliance (NetApp)
out-of-date The aggregate is mirrored and needs to be resynchronized

partial At least one disk was found for the aggregate, but two or more disks are missing

raid0 The aggrgate consists of RAID 0 (no parity) RAID groups

raid4 The aggregate consists of RAID 4 RAID groups

raid_dp The aggregate consists of RAID-DP RAID groups

reconstruct At least one RAID group in the aggregate is being reconstructed

redirect Aggregate reallocation or file reallocation with the "-p" option has been started on the aggregate, read performance will be degraded

resyncing One of the mirrored aggregate's plexes is being resynchronized

snapmirror The aggregate is a SnapMirror replica of another aggregate (traditional volumes only)

trad The aggregate is a traditional volume and cannot contain FlexVol volumes.

verifying A mirror operation is currently running on the aggregate


wafl inconsistent The aggregate has been marked corrupted; contact technical support
Network Appliances (NetApp)
You can mix disk speeds and different disk types within an aggregate, provided you change the options below:

## to allow mixed speeds


options raid.rpm.fcal.enable on
Mixed disk
options raid.rpm.ata.enable on
speeds and
types
## to allow mixed disk types (SAS, SATA, FC, ATA)
options raid.disktype.enable off

Network Appliances (NetApp)
Following are the common commands that you use with aggregates

Displaying
aggr status
aggr status -r
aggr status <aggregate> [-v]

Check you have spare disks
aggr status -s

## Syntax - if no option is specified then the default is used


aggr create <aggr_name> [-f] [-m] [-n] [-t {raid0 | raid4 | raid_dp}] [-r raid_size] [-T disk_type] [-R rpm] [-L] [-B {32|64}] <disk_list>

## create aggregate called newaggr with a maximum RAID group size of 8 disks
aggr create newaggr -r 8 -d 8a.16 8a.17 8a.18 8a.19
## create aggregated called newfastaggr using 20 x 15000rpm disks
aggr create newfastaggr -R 15000 20

Adding (creating)
## create aggregate called newFCALaggr
aggr create newFCALaggr -T FCAL 15
Note:
-f = overrides the default behaviour that does not permit disks in a plex to belong to different disk
pools
-m = specifies the optional creation of a SyncMirror
-n = displays the results of the command but does not execute it
-r = maximum size (number of disks) of the RAID groups for this aggregate
-T = disk type ATA, SATA, SAS, BSAS, FCAL or LUN
-R = rpm which include 5400, 7200, 10000 and 15000
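The interplay of the -r (RAID group size) and -t (RAID type) options determines how many disks actually hold data. The arithmetic can be sketched as follows (illustrative Python, not an ONTAP interface; it ignores Data ONTAP's minimum group-size rules):

```python
def data_disks(total_disks, raid_size, raid_type="raid_dp"):
    """Approximate count of data disks in an aggregate, given the -r
    (max disks per RAID group) and -t (RAID type) options above.
    Parity overhead per group: RAID-DP = 2, RAID4 = 1, RAID0 = 0."""
    parity = {"raid_dp": 2, "raid4": 1, "raid0": 0}[raid_type]
    full_groups, remainder = divmod(total_disks, raid_size)
    groups = full_groups + (1 if remainder else 0)
    return total_disks - groups * parity

# 20 disks with raidsize 8 -> three RAID-DP groups -> 6 parity disks
print(data_disks(20, 8))  # 14
```

Larger RAID groups waste fewer disks on parity but lengthen reconstruction times, which is the trade-off the -r option controls.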

Network Appliances (NetApp)
Remove (destroying)
aggr offline <aggregate>
aggr destroy <aggregate>

Unremoving (undestroying) aggr undestroy <aggregate>
Rename aggr rename <old name> <new name>

## Syntax
aggr add <aggr_name> [-f] [-n] [-g {raid_group_name | new |all}] <disk_list>

Increase size
## add an additional disk to aggregate pfvAggr, use "aggr status" to get group name
aggr status pfvAggr -r
aggr add pfvAggr -g rg0 -d v5.25

## Add 4 x 300GB disks to aggregate aggr1
aggr add aggr1 4@300

offline aggr offline <aggregate>


online aggr online <aggregate>
restricted state aggr restrict <aggregate>
Change an aggregate's options
## to display the aggregate's options
aggr options <aggregate>

## change an aggregate's RAID type
aggr options <aggregate> raidtype raid_dp
aggr options <aggregate> raidtype raid4

## change an aggregate's RAID size
aggr options <aggregate> raidsize 4
show space usage aggr show_space <aggregate>
Mirror aggr mirror <aggregate>
Split mirror aggr split <aggregate/plex> <new_aggregate>

Network Appliances (NetApp)
Copy from one aggregate to another
## Obtain the status
aggr copy status

## Start a copy
aggr copy start <aggregate source> <aggregate destination>

## Abort a copy - obtain the operation number by using "aggr copy status"
aggr copy abort <operation number>

## Throttle the copy 10=full speed, 1=one-tenth full speed
aggr copy throttle <operation number> <throttle speed>

## Media scrub status


aggr media_scrub status
aggr scrub status

## start a scrub operation


aggr scrub start [ aggrname | plexname | groupname ]

## stop a scrub operation


aggr scrub stop [ aggrname | plexname | groupname ]

## suspend a scrub operation


aggr scrub suspend [ aggrname | plexname | groupname ]
Scrubbing (parity)
## resume a scrub operation
aggr scrub resume [ aggrname | plexname | groupname ]
Note: Starts parity scrubbing on the named online aggregate. Parity scrubbing compares the data disks to the
parity disk(s) in their RAID group, correcting the parity disk’s contents as necessary. If no name is
given, parity scrubbing is started on all online aggregates. If an aggregate name is given, scrubbing is
started on all RAID groups contained in the aggregate. If a plex name is given, scrubbing is started on
all RAID groups contained in the plex.
Look at the following system options:
raid.scrub.duration 360
raid.scrub.enable on
raid.scrub.perf_impact low
raid.scrub.schedule

Network Appliances (NetApp)
## verify status
aggr verify status

## start a verify operation


aggr verify start [ aggrname ]

## stop a verify operation


aggr verify stop [ aggrname ]
Verify (mirroring)
## suspend a verify operation
aggr verify suspend [ aggrname ]

## resume a verify operation


aggr verify resume [ aggrname ]
Note: Starts RAID mirror verification on the named online mirrored aggregate. If no name is given, then
RAID mirror verification is started on all online mirrored aggregates. Verification compares the data in
both plexes of a mirrored aggregate. In the default case, all blocks that differ are logged, but no changes
are made.
aggr media_scrub status

Note: Prints the media scrubbing status of the named aggregate, plex, or group. If no name is given, then
status is printed for all RAID groups currently running a media scrub. The status includes a
percent-complete and whether it is suspended.
Media Scrub
Look at the following system options:
raid.media_scrub.enable on
raid.media_scrub.rate 600
raid.media_scrub.spares.enable on

Network Appliances (NetApp)
Volumes
Volumes contain file systems that hold user data that is accessible using one or more of the access protocols supported by
Data ONTAP, including NFS, CIFS, HTTP, FTP, FC, and iSCSI.
Each volume depends on its containing aggregate for all its physical storage, that is, for all storage in the aggregate’s disks
and RAID groups.

A FlexVol volume is a volume that is loosely coupled to its containing aggregate. A FlexVol volume can share its containing
aggregate with other FlexVol volumes. Thus, a single aggregate can be the shared source of all the storage used by all the
FlexVol volumes contained by that aggregate.

Because a FlexVol volume is managed separately from the aggregate, you can create small FlexVol volumes (20 MB or
larger), and you can increase or decrease the size of FlexVol volumes in increments as small as 4 KB.

When a FlexVol volume is created, it reserves a small amount of extra space (approximately 0.5 % of its nominal size) from
the free space of its containing aggregate. This space is used to store the volume's metadata. Therefore, upon creation, a
FlexVol volume with a space guarantee of volume uses free space from the aggregate equal to its size × 1.005. A newly-created FlexVol volume with a space guarantee of none or file uses free space equal to 0.005 × its nominal size.
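The space accounting above is easy to check; a small sketch using the 0.5 % metadata figure from the text (illustrative only, not an ONTAP API):

```python
def aggregate_space_used(nominal_size, guarantee="volume"):
    """Free space a newly created FlexVol takes from its aggregate,
    per the ~0.5% metadata reserve described above."""
    metadata = nominal_size * 0.005
    if guarantee == "volume":
        # volume guarantee: full size is reserved up front, plus metadata
        return nominal_size + metadata
    # guarantee of "none" or "file": only the metadata reserve is taken
    return metadata

print(aggregate_space_used(1000))          # 1005.0
print(aggregate_space_used(1000, "none"))  # 5.0
```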

Network Appliances (NetApp)
There are two types of FlexVolume
32-bit
64-bit
If you want to use Data ONTAP to move data between a 32-bit volume and a 64-bit
volume, you must use ndmpcopy or qtree SnapMirror. You cannot use the vol copy
command or volume SnapMirror between a 32-bit volume and a 64-bit volume.

A traditional volume is a volume that is contained by a single, dedicated aggregate. It is tightly coupled with its containing aggregate. No other volumes can get their storage from this containing aggregate.

The only way to increase the size of a traditional volume is to add entire disks to its
containing aggregate. You cannot decrease the size of a traditional volume. The smallest
possible traditional volume uses all the space on two disks (for RAID4) or three disks (for
RAID-DP).

Traditional volumes and their containing aggregates are always of type 32-bit. You cannot
grow a traditional volume larger than 16 TB

Network Appliances (NetApp)
You can change many attributes on a volume
The name of the volume
The size of the volume (assigned only for FlexVol volumes; the size of traditional volumes
is determined by the size and number of their disks or array LUNs)
A security style, which determines whether a volume can contain files that use UNIX
security, files that use NT file system (NTFS) file security, or both types of files
Whether the volume uses CIFS oplocks (opportunistic locks)
The language of the volume
The level of space guarantees (for FlexVol volumes only)
Disk space and file limits (quotas, optional)
A Snapshot copy schedule (optional)
Whether the volume is a root volume
Every volume has a language. The language of the volume determines the
character set Data ONTAP uses to display file names and data for that volume.
Changing the language of an existing volume can cause some files to become
inaccessible.
Network Appliances (NetApp)
The language of the root volume has special significance, because it affects or determines
the following items:

Default language for all volumes


System name
Domain name
Console commands and command output
NFS user and group names
CIFS share names
CIFS user account names
Access from CIFS clients that don't support Unicode
How configuration files in /etc are read
How the home directory definition file is read
Qtrees
Snapshot copies
Volumes
Aggregates

Network Appliances (NetApp)
The following table displays the possible states for volumes.

Online Read and write access to this volume is allowed.

Restricted Some operations, such as parity reconstruction, are allowed, but data access is not allowed.

Offline No access to the volume is allowed.

Network Appliances (NetApp)
There are a number of possible status values for volumes

access denied The origin system is not allowing access. (FlexCache volumes only.)

active redirect The volume's containing aggregate is undergoing reallocation (with the -p option specified). Read performance may be reduced while the volume is in this state.

connecting The caching system is trying to connect to the origin system. (FlexCache volumes only.)

copying The volume is currently the target of an active vol copy or snapmirror operation.

degraded The volume's containing aggregate contains at least one degraded RAID group that is not being reconstructed after a single disk failure.

double degraded The volume's containing aggregate contains at least one degraded RAID-DP group that is not being reconstructed after a double disk failure.

flex The volume is a FlexVol volume.


flexcache The volume is a FlexCache volume.

foreign Disks used by the volume's containing aggregate were moved to the current storage system from another storage system.

growing Disks are being added to the volume's containing aggregate.

initializing The volume's containing aggregate is being initialized.

Network Appliances (NetApp)
invalid The volume does not contain a valid file system.
ironing A WAFL consistency check is being performed on the volume's containing aggregate.
lang mismatch The language setting of the origin volume was changed since the caching volume was created. (FlexCache volumes only.)

mirror degraded The volume's containing aggregate is mirrored and one of its plexes is offline or resynchronizing.

mirrored The volume's containing aggregate is mirrored.


needs check A WAFL consistency check needs to be performed on the volume's containing aggregate.
out-of-date The volume's containing aggregate is mirrored and needs to be resynchronized.
partial At least one disk was found for the volume's containing aggregate, but two or more disks are missing.
raid0 The volume's containing aggregate consists of RAID0 (no parity) groups (array LUNs only).
raid4 The volume's containing aggregate consists of RAID4 groups.
raid_dp The volume's containing aggregate consists of RAID-DP groups.
reconstruct At least one RAID group in the volume's containing aggregate is being reconstructed.
redirect The volume's containing aggregate is undergoing aggregate reallocation or file reallocation with the -p option. Read performance to volumes in the aggregate might be degraded.
rem vol changed The origin volume was deleted and re-created with the same name. Re-create the FlexCache volume to re-enable the FlexCache relationship. (FlexCache volumes only.)
rem vol unavail The origin volume is offline or has been deleted. (FlexCache volumes only.)

remote nvram err The origin system is experiencing problems with its NVRAM. (FlexCache volumes only.)

resyncing One of the plexes of the volume's containing mirrored aggregate is being resynchronized.

Network Appliances (NetApp)

Every qtree and volume has a security style setting—NTFS, UNIX, or


mixed. The setting determines whether files use Windows NT or UNIX
(NFS) security. How you set up security styles depends on what
protocols are licensed on your storage system.

Although security styles can be applied to volumes, they are not


shown as a volume attribute, and are managed for both volumes and
qtrees using the qtree command. The security style for a volume
applies only to files and directories in that volume that are not
contained in any qtree. The volume security style does not affect the
security style for any qtrees in that volume.
Network Appliances (NetApp)
The following table describes the three security styles and the effects of
changing them.

NTFS
Description: For CIFS clients, security is handled using Windows NTFS ACLs. For NFS clients, the NFS UID (user id) is mapped to a Windows SID (security identifier) and its associated groups. Those mapped NTFS credentials are used to determine file access, based on the NTFS ACL.
Note: To use NTFS security, the storage system must be licensed for CIFS. You cannot use an NFS client to change file or directory permissions on qtrees with the NTFS security style.
Effect of changing to this style: If the change is from a mixed qtree, Windows NT permissions determine file access for a file that had Windows NT permissions. Otherwise, UNIX-style (NFS) permission bits determine file access for files created before the change.
Note: If the change is from a CIFS storage system to a multiprotocol storage system, and the /etc directory is a qtree, its security style changes to NTFS.

UNIX
Description: Files and directories have UNIX permissions.
Effect of changing to this style: The storage system disregards any Windows NT permissions established previously and uses the UNIX permissions exclusively.

Mixed
Description: Both NTFS and UNIX security are allowed: a file or directory can have either Windows NT permissions or UNIX permissions. The default security style of a file is the style most recently used to set permissions on that file.
Effect of changing to this style: If NTFS permissions on a file are changed, the storage system recomputes UNIX permissions on that file. If UNIX permissions or ownership on a file are changed, the storage system deletes any NTFS permissions on that file.
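The mixed-style rules above can be pictured as a tiny state machine where the last style set wins. The class below is a toy illustration only (not ONTAP code; the recomputed UNIX bits are a placeholder):

```python
class MixedStyleFile:
    """Toy model of a file in a 'mixed' security-style qtree: its
    effective style is whichever permission type was set most recently."""
    def __init__(self, unix_mode=0o644):
        self.unix_mode = unix_mode
        self.ntfs_acl = None
        self.effective = "unix"

    def set_ntfs(self, acl):
        self.ntfs_acl = acl
        self.unix_mode = 0o777      # placeholder for the recomputed UNIX bits
        self.effective = "ntfs"

    def set_unix(self, mode):
        self.unix_mode = mode
        self.ntfs_acl = None        # NTFS permissions are deleted
        self.effective = "unix"

f = MixedStyleFile()
f.set_ntfs({"Everyone": "Full Control"})
f.set_unix(0o600)                   # last write wins: file is now UNIX-style
print(f.effective, f.ntfs_acl)      # unix None
```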

Network Appliances (NetApp)
Following are the commands that are used to create and control
volumes
General Volume Operations (Traditional and FlexVol)
Displaying
vol status
vol status -v (verbose)
vol status -l (display language)

Remove (destroying)
vol offline <vol_name>
vol destroy <vol_name>

Rename vol rename <old_name> <new_name>
online vol online <vol_name>
offline vol offline <vol_name>
restrict vol restrict <vol_name>

decompress
vol decompress status
vol decompress start <vol_name>
vol decompress stop <vol_name>

vol mirror volname [-n][-v victim_volname][-f][-d <disk_list>]


Note:
Mirrors the currently-unmirrored traditional volume volname, either with the specified set of disks or with the
contents of another unmirrored traditional volume victim_volname, which will be destroyed in the process.
Mirroring
The vol mirror command fails if either the chosen volname or victim_volname are flexible volumes. Flexible volumes
require that any operations having directly to do with their containing aggregates be handled via the new aggr
command suite.

Network Appliances (NetApp)
Change language vol lang <vol_name> <language>
## Display maximum number of files
maxfiles <vol_name>
Change maximum number of files
## Change maximum number of files
maxfiles <vol_name> <max_num_files>
Change root volume vol options <vol_name> root

vol media_scrub status [volname|plexname|groupname -s disk-name][-v]

Note: Prints the media scrubbing status of the named aggregate, volume, plex, or group. If
no name is given, then
status is printed for all RAID groups currently running a media scrub. The status includes a
percent-complete and whether it is suspended.
Media Scrub
Look at the following system options:

raid.media_scrub.enable on
raid.media_scrub.rate 600
raid.media_scrub.spares.enable on

Network Appliances (NetApp)
FlexVol Volume Operations (only)

## Syntax
vol create vol_name [-l language_code] [-s {volume|file|none}] <aggr_name> size{k|m|g|t}
## Create a 200MB volume using the english character set
Adding (creating) vol create newvol -l en aggr1 200M

## Create 50GB flexvol volume


vol create vol1 aggr1 50g

additional disks
## First find the aggregate the volume uses
vol container flexvol1

## add an additional disk to aggregate aggr1, use "aggr status" to get group name
aggr status aggr1 -r
aggr add aggr1 -g rg0 -d v5.25

vol size <vol_name> [+|-] n{k|m|g|t}


Resizing
## Increase flexvol1 volume by 100MB
vol size flexvol1 +100m

vol autosize vol_name [-m size {k|m|g|t}] [-I size {k|m|g|t}] on

Automatically resizing
## automatically grow in 10MB increments to a max of 500MB
vol autosize flexvol1 -m 500m -I 10m on

Determine free space and inodes
df -Ah
df -L
df -i

Network Appliances (NetApp)
Determine size vol size <vol_name>

vol options <vol_name> try_first [volume_grow|snap_delete]


Note:
If you specify volume_grow, Data ONTAP attempts to increase the volume's size before
deleting any Snapshot copies. Data ONTAP increases the volume size based on
specifications you provided using the vol autosize command.
automatic free space preservation
If you specify snap_delete, Data ONTAP attempts to create more free space by
deleting Snapshot copies, before increasing the size of the volume. Data ONTAP
deletes Snapshot copies based on the specifications you provided using the snap
autodelete command.

display a FlexVol volume's containing aggregate
vol container <vol_name>

vol clone create clone_vol [-s none|file|volume] -b parent_vol [parent_snap]

vol clone split start


vol clone split stop
vol clone split estimate
Cloning vol clone split status
Note: The vol clone create command creates a flexible volume named clone_vol on
the local filer that is a clone of a "backing" flexible volume named parent_vol. A clone
is a volume that is a writable snapshot of another volume. Initially, the clone and its
parent share the same storage; more storage space is consumed only as one volume
or the other changes.
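The "shared until changed" behaviour in the note can be pictured as copy-on-write; a minimal sketch (illustrative only; real FlexClones share WAFL blocks, not Python dicts):

```python
class CloneVolume:
    """Toy copy-on-write model of a FlexClone: the clone reads through
    to the parent's blocks until a block is overwritten locally."""
    def __init__(self, parent_blocks):
        self.parent = parent_blocks   # shared backing snapshot
        self.delta = {}               # blocks the clone has overwritten

    def read(self, i):
        return self.delta.get(i, self.parent[i])

    def write(self, i, data):
        self.delta[i] = data          # only changed blocks consume new space

    def extra_blocks_used(self):
        return len(self.delta)

clone = CloneVolume(["a", "b", "c"])
clone.write(1, "B")
print(clone.read(0), clone.read(1), clone.extra_blocks_used())  # a B 1
```

Splitting a clone (vol clone split) corresponds to copying every still-shared parent block into the clone, after which the two volumes no longer share storage.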

Network Appliances (NetApp)
Display
vol status
vol status <flexclone_name> -v
df -Lh
adding (create)
## Syntax
vol clone create clone_name [-s {volume|file|none}] -b parent_name [parent_snap]

## create a flexclone called flexclone1 from the parent flexvol1
vol clone create flexclone1 -b flexvol1
Removing (destroy)
vol offline <flexclone_name>
vol destroy <flexclone_name>
splitting
## Determine the free space required to perform the split
vol clone split estimate <flexclone_name>

## Double check you have the space
df -Ah

## Perform the split
vol clone split start <flexclone_name>

## Check up on its status
vol clone split status <flexclone_name>

## Stop the split
vol clone split stop <flexclone_name>
log file /etc/log/clone

The clone log file records the following information:
• Cloning operation ID
• The name of the volume in which the cloning operation was performed
• Start time of the cloning operation
• End time of the cloning operation
• Parent file/LUN and clone file/LUN names
• Parent file/LUN ID
• Status of the clone operation: successful, unsuccessful, or stopped, and some other details
Network Appliances (NetApp)
Space Saving

Data ONTAP has an additional feature called deduplication, which improves storage efficiency by eliminating duplicate data blocks within a FlexVol volume.

Deduplication works at the block level on the active file system, and uses the WAFL block-sharing
mechanism. Each block of data has a digital signature that is compared with all other signatures in a
data volume. If an exact block match exists, the duplicate block is discarded and its disk space is
reclaimed.

You can configure deduplication operations to run automatically or on a schedule. You can deduplicate
new and existing data, or only new data, on a FlexVol volume. You do require a license to enable
deduplication.

Data ONTAP writes all data to a storage system in 4-KB blocks. When deduplication runs for the first time on a FlexVol volume with existing data, it scans all the blocks in the FlexVol volume and creates a digital fingerprint for each of the blocks. Each of the fingerprints is compared to all other fingerprints within the FlexVol volume. If two fingerprints are found to be identical, a byte-for-byte comparison is done of all data within the block. If the byte-for-byte comparison confirms that the blocks are identical, the pointer to the data block is updated, and the duplicate block is freed.
Deduplication runs on the active file system. Therefore, as additional data is written to the deduplicated
volume, fingerprints are created for each new block and written to a change log file. For subsequent
deduplication operations, the change log is sorted and merged with the fingerprint file, and the
deduplication operation continues with fingerprint comparisons as previously described.
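The fingerprint-then-verify pass described above can be sketched in a few lines (illustrative Python, not WAFL internals; SHA-256 stands in for ONTAP's fingerprint):

```python
import hashlib

def deduplicate(blocks):
    """Hash each 4 KB block; on a fingerprint match, do a byte-for-byte
    comparison before sharing the stored block instead of writing a copy."""
    seen = {}        # fingerprint -> index into `stored`
    stored = []      # unique blocks actually kept on disk
    pointers = []    # per input block: index of the stored block it maps to
    for block in blocks:
        fp = hashlib.sha256(block).digest()
        if fp in seen and stored[seen[fp]] == block:   # byte-for-byte verify
            pointers.append(seen[fp])                  # duplicate: share it
        else:
            stored.append(block)
            seen[fp] = len(stored) - 1
            pointers.append(seen[fp])
    return pointers, stored

ptrs, stored = deduplicate([b"A" * 4096, b"B" * 4096, b"A" * 4096])
print(ptrs, len(stored))  # [0, 1, 0] 2
```

The byte-for-byte step matters because two different blocks can, in principle, share a fingerprint; the verify prevents silently merging unequal data.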

Network Appliances (NetApp)
start/restart deduplication operation
sis start -s <path>
sis start -s /vol/flexvol1

## Use a previous checkpoint
sis start -sp <path>

stop deduplication operation sis stop <path>

schedule deduplication
sis config -s <schedule> <path>
sis config -s mon-fri@23 /vol/flexvol1

Note: schedule lists the days and hours of the day when deduplication runs. The schedule can be of the following forms:
• day_list[@hour_list]
If hour_list is not specified, deduplication runs at midnight on each scheduled day.
• hour_list[@day_list]
If day_list is not specified, deduplication runs every day at the specified hours.
• -
A hyphen (-) disables deduplication operations for the specified FlexVol volume.

enabling sis on <path>
disabling sis off <path>
status sis status -l <path>
Display saved space df -s <path>
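The schedule forms accepted by sis config -s can be parsed mechanically; the sketch below is a simplified reading of the rules above (real Data ONTAP parsing may differ in detail):

```python
def parse_sis_schedule(schedule):
    """Interpret day_list[@hour_list], hour_list[@day_list], or "-"."""
    if schedule == "-":
        return None                        # deduplication disabled
    left, _, right = schedule.partition("@")
    if left[0].isdigit():                  # hour_list[@day_list]
        hours = [int(h) for h in left.split(",")]
        days = right.split(",") if right else ["every day"]
    else:                                  # day_list[@hour_list]
        days = left.split(",")
        # no hour list means midnight on each scheduled day
        hours = [int(h) for h in right.split(",")] if right else [0]
    return {"days": days, "hours": hours}

print(parse_sis_schedule("mon-fri@23"))  # {'days': ['mon-fri'], 'hours': [23]}
```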

Network Appliances (NetApp)
Qtrees
Qtrees enable you to partition your volumes into smaller segments that you can manage individually. You can set a qtree's
size or security style, back it up, and restore it.

You use qtrees to partition your data. You might create qtrees to organize your data, or to manage one or more of the
following factors: quotas, backup strategy, security style, and CIFS oplocks setting.
The following list describes examples of qtree usage strategies:

Quotas - You can limit the size of the data used by a particular project, by placing all of that project's files into a qtree and
applying a tree quota to the qtree.

Backups -You can use qtrees to keep your backups more modular, to add flexibility to backup schedules, or to limit the size
of each backup to one tape.

Security style -If you have a project that needs to use NTFS-style security, because the members of the project use
Windows files and applications, you can group the data for that project in a qtree and set its security style to NTFS, without
requiring that other projects also use the same security style.

CIFS oplocks settings - If you have a project using a database that requires CIFS oplocks to be off, you can set CIFS
oplocks to Off for that project's qtree, while allowing other projects to retain CIFS oplocks.

Network Appliances (NetApp)
The table below compares qtree with FlexVol and Traditional volumes

Functionality / Qtree / FlexVol volume / Traditional volume
Enables organizing user data: Yes / Yes / Yes
Enables grouping users with similar needs: Yes / Yes / Yes
Accepts a security style: Yes / Yes / Yes
Accepts oplocks configuration: Yes / Yes / Yes
Can be backed up and restored as a unit using SnapMirror: Yes / Yes / Yes
Can be backed up and restored as a unit using SnapVault: Yes / No / No
Can be resized: Yes (using quota limits) / Yes / Yes
Supports snapshot copies: No (qtree data can be extracted from volume snapshot copies) / Yes / Yes
Supports quotas: Yes / Yes / Yes
Can be cloned: No (except as part of a FlexVol volume) / Yes / No
Maximum number allowed: 4,995 per volume / 500 per system / 100 per system
Network Appliances (NetApp)
Qtree commands:

Display
qtree status [-i] [-v]
Note:
The -i option includes the qtree ID number in the display.
The -v option includes the owning vFiler unit, if the MultiStore license is enabled.

adding (create)
## Syntax - by default the wafl.default_qtree_mode option is used
qtree create path [-m mode]

## create a qtree called news in the /vol/users volume with 770 permissions
qtree create /vol/users/news -m 770
Remove rm -Rf <directory>
Rename mv <old_name> <new_name>
convert a directory into a qtree
## Move the directory to a different name
mv /n/joel/vol1/dir1 /n/joel/vol1/olddir

## Create the qtree
qtree create /n/joel/vol1/dir1

## Move the contents of the old directory into the new qtree
mv /n/joel/vol1/olddir/* /n/joel/vol1/dir1

## Remove the old directory
rmdir /n/joel/vol1/olddir
stats
qtree stats [-z] [vol_name]
Note: -z = zero stats

Network Appliances (NetApp)

CIFS Oplocks

CIFS oplocks reduce network traffic and improve storage system performance. However,
in some situations, you might need to disable them. You can disable CIFS oplocks for the
entire storage system or for a specific volume or qtree.

Usually, you should leave CIFS oplocks on for all volumes and qtrees. This is the default
setting. However, you might turn CIFS oplocks off under certain circumstances. CIFS
oplocks (opportunistic locks) enable the redirector on a CIFS client in certain file-sharing
scenarios to perform client-side caching of read-ahead, write-behind, and lock
information. A client can then work with a file (read or write it) without regularly
reminding the server that it needs access to the file. This improves performance by
reducing network traffic.

You might turn CIFS oplocks off on a volume or a qtree under either of the following
circumstances:

You are using a database application whose documentation recommends that CIFS oplocks be turned off.

You are handling critical data and cannot afford even the slightest data loss
Network Appliances (NetApp)

Otherwise, you can leave CIFS oplocks on.

Enabling/Disabling for the entire storage system
cifs.oplocks.enable on
cifs.oplocks.enable off

Enabling/Disabling for qtrees
qtree oplocks /vol/vol2/proj enable
qtree oplocks /vol/vol2/proj disable

Network Appliances (NetApp)
Security Styles

You might need to change the security style of a new volume or qtree. Additionally, you
might need to accommodate other users; for example, if you had an NTFS qtree and
subsequently needed to include UNIX files and users, you could change the security style
of that qtree from NTFS to mixed.
Make sure there are no CIFS users connected to shares on the qtree whose security style
you want to change. If there are, you cannot change UNIX security style to mixed or
NTFS, and you cannot change NTFS or mixed security style to UNIX.

Change the security style
## Syntax
qtree security path {unix | ntfs | mixed}

## Change the security style of /vol/users/docs to mixed
qtree security /vol/users/docs mixed
Network Appliances (NetApp)

Quotas
Quotas provide a way to restrict or track the disk space and number of files
used by a user, group, or qtree. You specify quotas using the /etc/quotas file. Quotas are
applied to a specific volume or qtree.
You can use quotas to limit resource usage, to provide notification when
resource usage reaches specific levels, or simply to track resource usage.

You specify a quota for the following reasons:

To limit the amount of disk space or the number of files that can be used by a user or
group, or that can be contained by a qtree
To track the amount of disk space or the number of files used by a user, group, or qtree,
without imposing a limit
To warn users when their disk usage or file usage is high.
Quotas can cause Data ONTAP to send a notification (soft quota) or to prevent a write
operation from succeeding (hard quota) when quotas are exceeded. When Data ONTAP
receives a request to write to a volume, it checks to see whether quotas are activated for
that volume. If so, Data ONTAP determines whether any quota for that volume (and, if the
write is to a qtree, for that qtree) would be exceeded by performing the write operation. If
any hard quota would be exceeded, the write operation fails, and a quota notification is
sent. If any soft quota would be exceeded, the write operation succeeds, and a quota
notification is sent.
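The hard/soft behaviour described above reduces to a simple decision; a minimal sketch (illustrative only, not how Data ONTAP is implemented):

```python
def check_write(current_usage, write_size, soft_limit=None, hard_limit=None):
    """Decide whether a write succeeds and whether a quota notification
    is sent, per the hard/soft quota rules described above."""
    new_usage = current_usage + write_size
    if hard_limit is not None and new_usage > hard_limit:
        return {"allowed": False, "notify": True}   # hard quota: write fails
    if soft_limit is not None and new_usage > soft_limit:
        return {"allowed": True, "notify": True}    # soft quota: warn only
    return {"allowed": True, "notify": False}

print(check_write(90, 20, soft_limit=100, hard_limit=150))
# {'allowed': True, 'notify': True}
```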
Network Appliances (NetApp)

Quotas configuration file /mroot/etc/quotas

Displaying quota report [<path>]

Activating
quota on [-w] <vol_name>
Note: -w = return only after the entire quotas file has been scanned

Deactivating quota off [-w] <vol_name>

Reinitializing
quota off [-w] <vol_name>
quota on [-w] <vol_name>

Resizing
quota resize <vol_name>
Note: this command rereads the quotas file

Deleting
edit the quotas file, then:
quota resize <vol_name>
log messaging quota logmsg

Thank You