Nimble Storage Array Introduction,
Installation, and Maintenance
Revision 2‐5
SSID: Classroom Network
Password:
1. Log in to http://university.nimblestorage.com/
2. Click "My Account"
3. Verify today's course is listed and then click "Go"
4. Ensure your status states "Enrolled" with an "X" next to it (don't click the "X")
Introductions
Name
Company
Position
Data storage background
What do you hope to get out of the course?
Topics
Effective Capacity
– Subtract RAID-6 parity, spares & system reserves from Raw Capacity to get Usable Capacity
– Add storage capacity due to inline compression (typical 30% to 75%) to get Effective Capacity
Example: Raw: 24 TB → Usable: 17 TB → Effective: 33 TB (assuming 50% compression)
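The arithmetic above can be sketched as a quick calculation. This is a minimal illustration only; the function names are ours, and the ~29% overhead is inferred from the slide's own example (24 TB raw → 17 TB usable) — the real figure varies by model.

```python
def usable_capacity(raw_tb, overhead_fraction=0.29):
    """Subtract RAID-6 parity, spares & system reserves from raw capacity."""
    return raw_tb * (1 - overhead_fraction)

def effective_capacity(usable_tb, compression=0.5):
    """Add the benefit of inline compression (typical savings 30%-75%)."""
    return usable_tb / (1 - compression)

raw = 24.0                            # CS240: 12 x 2 TB drives
usable = usable_capacity(raw)         # ~17 TB after parity/spares/reserves
effective = effective_capacity(17.0)  # 34 TB at 50% compression (slide rounds to 33)
```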
CS220 / CS220G
– CPUs: 1; DDR3 memory: 12 GB
– Ethernet ports: 6x1GbE, or 2x1GbE + 2x10GbE
– SSD cache: 4x80 GB (320 GB total)
– Data HDDs: 12x1 TB (12 TB raw); capacity 8 TB; effective capacity (0–2x) 16 TB

CS240 / CS240G
– CPUs: 1; DDR3 memory: 12 GB
– Ethernet ports: 6x1GbE, or 2x1GbE + 2x10GbE
– SSD cache: 4x160 GB (640 GB total)
– Data HDDs: 12x2 TB (24 TB raw); capacity 17 TB; effective capacity (0–2x) 33 TB
CS260 / CS260G
– CPUs: 1; DDR3 memory: 12 GB
– Ethernet ports: 6x1GbE, or 2x1GbE + 2x10GbE
– SSD cache: 4x300 GB (1.2 TB total)
– Data HDDs: 12x3 TB (36 TB raw); capacity 25 TB; effective capacity (0–2x) 50 TB

CS420(1) / CS440 / CS460
– CPUs: 2; DDR3 memory: 24 GB
– Ethernet ports: 6x1GbE, or 2x1GbE + 2x10GbE
– SSD cache: 640 GB to 2.4 TB (4x160 GB or 4x300 GB)
– Data HDDs: 12 TB / 24 TB / 36 TB raw; capacity 8 / 17 / 33 TB; effective capacity 16 / 33 / 50 TB

(1) Sold only with X2 or X4 options
[Figure: CS-Series 3U chassis front view — LEDs and 16 numbered drive bays]
Disks
– 10 or 16 hot-swappable drive bays populated with:
  » 8 or 12 SATA (with SAS interposers) or SAS disks
  » 2 or 4 solid-state drives (SSDs)
Expansion Shelf
– Raw capacity: 15 TB / 30 TB / 45 TB
– Effective capacity (with 0x–2x compression): 11–22 TB / 23–45 TB / 34–68 TB
– Flash: 160 GB / 300 GB / 600 GB
– Connectivity: 2x 6Gb SAS per IO module
– IO modules: dual hot-swappable SAS controllers
[Figure: shelf front view — SSD and HDD bays 1–16]
Hardware Components
[Figure: numbered callouts 1–8 identifying the hardware components]
[Figure: product positioning — performance vs. capacity; mainstream applications today, scale-out nodes coming soon]
[Figure: workloads mapped on performance vs. capacity — real-time analytics, VDI, Oracle, SQL Server, Exchange, and SharePoint in the mainstream-applications band; backup and DR at the capacity end]
Adding a Shelf
Adding a Shelf
– Discovering: is there data on the disks?
  » No → Available → Activate → In Use
  » Yes → Foreign → Force Activate → In Use
– Faulty
© 2012 Nimble Storage. Proprietary and confidential. Do not distribute.
1. Fill the expansion shelf until its capacity utilization matches the control head
2. Then balance capacity between them
Powering On and Off
– On: power the expansion shelves first, then the controller shelves
– Off: power off the controller shelf and then the expansion shelves

Controller Upgrade
– Provides additional processing power and memory
– Provides two CPUs, each with:
  » 6 cores
  » 12 GB of DRAM
– Scales performance
– Replaces the existing controllers
  » A CPU is not installed into current controllers
Provides additional cache
– Scales performance
There are two variations:
– X2 doubles the standard cache size
– X4 quadruples the standard cache size
Note: there is no X4 option for the CS460 arrays
Rank in order
• Performance
• Capacity
• Cost
• Reliability
Pros Cons
Pros
Sweeping
SLC vs. MLC
– MLC advantages: high density, low cost per bit
– SLC advantages: endurance, operating temperature range, low power consumption, write/erase speeds, write/erase endurance
Source: Super Talent, "SLC vs. MLC: An Analysis of Flash Memory"
Write Operation
1. Write is received by the active controller's NVRAM
Compression
Write Operation
1. Write is received by the active controller's NVRAM
2. Write is mirrored to the partner controller's NVRAM
3. Write is acknowledged
4. Write is shadow-copied to DRAM
5. System uses a modified Lempel-Ziv algorithm for inline compression
   • Variable block based; compresses all data into stripes

Universal Compression: variable-size blocks enable fast inline compression, saving 30–75%. Elimination of the read-modify-write penalty allows compression of all applications.

4.5 MB stripes: many small random writes (e.g., 2K–21K) are coalesced into 32K sequential blocks; sending many IOs as one stripe reduces IOPS between the controller and the disks.
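The array's compressor is a modified Lempel-Ziv. As a rough illustration of why LZ-family inline compression can save 30–75% on typical data, here is the same idea using Python's standard-library zlib — a stand-in for demonstration, not Nimble's actual codec:

```python
import zlib

# A 32 KB block of repetitive, text-like data compresses well with an
# LZ-family codec; incompressible (already-compressed or random) data would not.
block = (b"user=alice action=login status=ok\n" * 1000)[:32768]
compressed = zlib.compress(block, level=6)

savings = 1 - len(compressed) / len(block)
print(f"{len(block)} -> {len(compressed)} bytes ({savings:.0%} saved)")
```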
All data is written sequentially in full RAID stripes to disk. Because of compression and the stripe write, there are fewer write operations. A large stripe written to disk in one operation is ~250x faster than a "write in place" layout.

High-Capacity Disk Storage: use of low-cost, high-density HDDs coupled with compression lowers costs substantially.
Accelerated Reads
– All random writes and any "hot" data are written to the flash cache
– Hot data is served from flash, which responds rapidly to changes
– Reads from flash are 50x faster than disk (200µs vs. 10ms)
Read Operation
1. Read from shadow NVRAM (RAM)
2. If not found, check DRAM
3. If not found, read from the flash cache
   • If found, validate checksum, uncompress, and return data
4. If not found, read from disk
   • If found, validate checksum, uncompress, and return data
5. And, if cache-worthy, write to cache
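The lookup order above is a classic tiered read path. A minimal sketch of the idea — the class and method names are ours, not the array's internals, and checksum validation and decompression are omitted:

```python
class TieredReadPath:
    """Check NVRAM, then DRAM, then flash cache, then disk, in order."""

    def __init__(self):
        self.nvram, self.dram, self.flash, self.disk = {}, {}, {}, {}

    def read(self, block_id, cache_worthy=True):
        for tier in (self.nvram, self.dram, self.flash):
            if block_id in tier:
                return tier[block_id]    # fast tiers checked first
        data = self.disk[block_id]       # slowest tier: HDD
        if cache_worthy:
            self.flash[block_id] = data  # step 5: promote hot data to cache
        return data

path = TieredReadPath()
path.disk["blk1"] = b"hello"
path.read("blk1")                # served from disk, promoted to flash
assert "blk1" in path.flash      # a second read would now hit flash
```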
Performance
2 volumes, 2 test files of 30 GB each; 4 threads with 8 outstanding IOs per thread
• How might inline compression vs. full-stripe compression affect reads?
• How do you think a changed block is handled?
Data Security
Summary
Understanding IPs
Networking Terminology
[Figure: host with interface pairs eth1/eth2 connected to Controller A and Controller B via active and standby links]
MPIO
Configure flow control
Configure Flow Control on each switch port that handles iSCSI connections. If your application server is using a software iSCSI initiator and NIC combination to handle iSCSI traffic, you must also enable Flow Control on the NICs to obtain the performance benefit.

Use jumbo frames when applicable
Configure jumbo frames on each switch that handles iSCSI traffic. If your server is using a software iSCSI initiator and NIC combination to handle iSCSI traffic, you must also enable Jumbo Frames on the NICs to obtain the performance benefit (or reduce CPU overhead) and ensure consistent behavior. Do not enable Jumbo Frames on switches unless Jumbo Frames is also configured on the NICs.
VMware Settings
Review the Nimble VMware Integration Guide
Configure Round Robin (ESX 4.1 only; 4.0 will be different)
To set the default to Round Robin for all new Nimble volumes, type the following, all on one line:
esxcli nmp satp addrule --psp VMW_PSP_RR --satp VMW_SATP_ALUA --vendor Nimble
Cabling guidelines:
• Even ports to one switch
• Odd ports to the opposite switch

If a switch fails, controllers cannot perform a proper failover since their sibling interface does not have connectivity.
[Figure: host eth1/eth2 and controller ports eth5/eth6 cabled across two switches, with active and standby links]
Section 5: Installation
First Steps
Once you have this, log on to the support site (http://support.nimblestorage.com/download/download.html) and download the following:
– Latest Release Notes
– Latest User Guides
– Latest CLI Reference Guide
– Nimble Windows Toolkit
– VMware Integration Toolkit (if applicable)
– Related Best Practice Guides
Pre-Installation Checklist
Prerequisites
1) Start the "Nimble Array Setup Manager"
2) Select the array to install and click "Next"
…
6) Click "Finish"
Enter Management IP
Advanced configuration
[Figure: Controller A and Controller B rear view — data ports; management & replication ports]
Your Nimble Storage array is ready to use. Before you start using your array, there are a couple of things you should do to ensure smooth operations.

You must add the management IP address and the controller support addresses you provided to your mail server's relay list.
Post-Install Checklist
Verify with Nimble Support that an Autosupport was received and request
an Install Health-Check
– Don’t leave site without performing this step!
GUI Interface
GUI Tour
Hardware Icons
Volume is online
Volume is offline
Volume is offline due to a fault
Volume replica
Volume collection
Volume is running out of space
GUI Navigation
Links
Side menus
MIB II
MIB II Support
– Customers use SNMP to view their Nimble array with existing management software
  » e.g., SolarWinds, Nagios, Cacti, MG-SOFT MIB Browser
– MIB II is the second version of MIB
– Mandatory for every device that supports SNMP
– Support for SNMP v1 and v2, but not v3
Volumes Overview
Physical storage
resource
Thin Provisioning
[Figure: volume consuming only its written space from the RAID-6 storage pool]

Volume Reserves
[Figure: volume reserve carved from the RAID-6 storage pool]
A reservation reserves a guaranteed minimum amount of physical space from the pool for a volume.
Volume Quotas
[Figure: volume reserve and volume quota within the pool]
A quota sets the amount of a volume that can be consumed before an alert is sent and writes are disallowed.
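A quota is just an accounting threshold against the volume's consumed space. A sketch of the bookkeeping — the function name and the 90% alert threshold are ours, for illustration only:

```python
def check_write(consumed_gb, write_gb, quota_gb):
    """A quota caps consumption: alert as the limit nears, refuse past it."""
    new_total = consumed_gb + write_gb
    if new_total > quota_gb:
        return "denied"          # writes are disallowed past the quota
    if new_total > 0.9 * quota_gb:
        return "alert"           # warn before the hard limit (threshold ours)
    return "ok"

assert check_write(50, 10, 100) == "ok"
assert check_write(85, 10, 100) == "alert"
assert check_write(95, 10, 100) == "denied"
```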
Performance Policy
Initiator Groups
• Seldom use IP
Volume Collection
• VMFS consistent
snapshots
• Exchange consistent
snapshots
• SQL/Exchange uses MS
VSS framework and
requires NPM on the
Application Host – more
later
Protection Template
Creating a Volume
Demonstration and Lab
Initiator and Target
iSCSI Portal: a target's IP address and TCP port number pair (default: 3260)
Discovery: the process of an initiator asking a target portal for a list of its targets and then making those available for configuration
iSCSI IQN
Base name: iqn.2007-11.com.nimblestorage
Full target name: iqn.2007-11.com.nimblestorage:training-vob104e23787e0f74.00000002.736e4164
(Callouts 1–4 mark the iqn type/date prefix, the reversed domain name, the target-specific name, and the unique identifier suffix)
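The callouts above can be pulled apart programmatically. A small parser for the example IQN — the field names and split logic are ours, following the standard iqn.date.reversed-domain:target structure:

```python
def parse_iqn(iqn: str):
    """Split an iSCSI qualified name into its base-name fields and target suffix."""
    base, _, target = iqn.partition(":")
    prefix, date, domain = base.split(".", 2)  # "iqn", "2007-11", reversed domain
    return {"type": prefix, "date": date, "naming_authority": domain,
            "target": target or None}

fields = parse_iqn(
    "iqn.2007-11.com.nimblestorage:training-vob104e23787e0f74.00000002.736e4164")
assert fields["date"] == "2007-11"
assert fields["naming_authority"] == "com.nimblestorage"
```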
5. On the dialog that is launched for connection, click the Advanced button to specify
physical port connections as described in Understanding IP addressing on page 29.
6. Select the adapter to connect with (usually Microsoft iSCSI adapter) and the target portal
IP address to use for the connection, then click OK.
7. Leave the volume selected for Add this connection to the list of Favorite targets if you
want the system to automatically try to reconnect if the connection fails, and select Enable
Multipath if the connection should use MPIO, then click OK.
8. Click OK to close the Initiator Properties dialog.
9. Move to the Disk Management area of your operating system to configure and map the
volume. Select Control Panel > Administrative Tools. Move to Computer Management
> Storage > Disk Management.
10. Right-click and initialize the new disk (volume). Important: Use the quick format option
when initializing a volume on Windows.
Integration Guides
Only work with one Volume and one ESX host at a time
Given a vSwitch with two Physical Adapters vmnic1 and 2
configure them for iSCSI use:
1. Select the ESX Host and click on the configuration tab
2. Click on Networking in the Navigation Pane
3. Use the “Add” button to create two VMkernel ports and enable for
iSCSI and vMotion and name them iSCSI0 and iSCSI1
4. Disable NIC teaming
5. Enable iSCSI SW initiator if not already done
6. Add VMkernel Ports to iSCSI using CLI command if you are working
with ESX 4.1 or with the vSphere GUI if using ESX 5
Verification Formula
Expected paths = (ESX Hosts × Physical Ports per Host × Array Data Ports) ÷ (Count of Subnets × Switches per Subnet)
Note:
– 2 switches with the same VLAN/subnet trunked together count as 1 switch
– 2 switches with the same VLAN/subnet NOT trunked count as 2 switches
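The formula drops straight into code. A small helper for sanity-checking path counts during an install — the function name is ours:

```python
def expected_paths(esx_hosts, ports_per_host, array_data_ports,
                   subnets, switches_per_subnet):
    """(hosts x host ports x array data ports) / (subnets x switches per subnet).

    Two switches on the same VLAN/subnet count as one switch if trunked
    together, two if not.
    """
    return (esx_hosts * ports_per_host * array_data_ports) // (
        subnets * switches_per_subnet)

# Worked examples matching the slides: (2 x 2 x 4) / (1 x 1) and / (2 x 1)
assert expected_paths(2, 2, 4, 1, 1) == 16
assert expected_paths(2, 2, 4, 2, 1) == 8
```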
Example: 1 volume; (2 × 2 × 4) ÷ (1 × 1) → expected paths = 16
[Figure: two ESX hosts, Controller A and Controller B]
Example: 1 volume; (2 × 2 × 4) ÷ (2 × 1) = 16 ÷ 2 → expected paths = 8
[Figure: ESX Host 1 and ESX Host 2, each with two NICs, cabled to Switch 1 and Switch 2]

You lost a NIC or link, or misconfigured an IP?
• Where could you look to discover which paths are missing?
• The two easiest points to check would be the switch's view of the links and the array's view of the links.
Additional Troubleshooting
[Figure: Controller A and Controller B, 1 volume]
– Verify physical connectivity (draw a picture)
  » You may want to use switch commands to print connected MAC addresses and compare them with the MAC addresses of the array ports: nic --list
– Verify VLANs/subnets are correct on all ports
– Verify links are UP and IPs are correct on the array
  » In the GUI, navigate to Manage >> Array
  » CLI: ip --list
– Clear all appropriate iSCSI static connections in VMware before all rescans
Work on only one system at a time and check the following before moving to another:
– Check src/dest IP addresses of all connections on the array:
  » GUI: Monitor::Connections
  » CLI: vol --info <volume name>
– Check paths in VMware:
  » Storage Adapters >> iSCSI SW Initiator; right-click the device and select Manage Paths
– Force a failover and check that you still have the correct number of connections. As root on the active controller:
  » ctrlr --list (displays the active controller)
  » reboot --controller A or B, whichever is the active controller from above
Testing Performance
Performance Metrics
Random I/O
– I/Os per second (IOPS)
– Typically 1K–100K IOPS
– Small I/O size, e.g., 4KB
– High QD, e.g., 16
Sequential throughput
– MBytes per second (MBps)
– Typically 100–1000 MBps
– Large I/O size, e.g., 256KB
– High QD, e.g., 16
Latency
– Milliseconds (ms)
– Typically 0.1–10 ms
– Random
– Small I/O size, e.g., 4KB
– QD = 1
[Charts: max IOPS, max MBps, and min latency as functions of I/O size and queue depth]
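The three metrics are linked: throughput = IOPS × I/O size, and achievable IOPS is bounded by queue depth over latency (Little's law). A quick sanity-check calculation with illustrative numbers — the function names are ours:

```python
def throughput_mbps(iops, io_size_kb):
    """Throughput (MBps) = IOPS x I/O size."""
    return iops * io_size_kb / 1024

def max_iops(queue_depth, latency_ms):
    """Little's law bound: IOPS <= QD / latency."""
    return queue_depth / (latency_ms / 1000)

assert throughput_mbps(100_000, 4) == 390.625   # small random I/O
assert throughput_mbps(4_000, 256) == 1000.0    # large sequential I/O
assert max_iops(16, 1.0) == 16_000              # QD 16 at 1 ms latency
```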
Performance
2 volumes, 2 test files of 30 GB each; 4 threads with 8 outstanding IOs per thread
Updated the array to the current release – this should be done before
anything else
AutoSupport enabled and confirmed that it is working with Nimble
support - test via button in GUI
Confirmed the system heartbeat is working with Nimble Support
Email alerts enabled and confirmed they are working with Nimble
Support- test email alert button
Confirm VMware connection count
– Failover controllers and re-check
Section 9: Snapshots
Snapshots
[Figure: snapped data vs. new (non-snapped) data]
What is a COW (copy-on-write) snapshot?
COW Snapshots
[Figure: a changed block is copied into the snapshot reserve; snapped data and new (non-snapped) data]
What is a ROW (redirect-on-write) snapshot?
ROW Snapshots
[Figure: changed blocks are written to new locations; snapped data remains in place alongside new data + changed blocks]
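Redirect-on-write can be sketched with block maps: a snapshot freezes the current pointer table, and a changed block is simply written to a new location, leaving the snapped data untouched. This is a conceptual model only, not the array's on-disk format:

```python
class RowVolume:
    """Redirect-on-write: snapshots share unchanged blocks with the volume."""

    def __init__(self):
        self.blocks = {}     # block address -> data
        self.snapshots = []  # each snapshot is a frozen pointer table

    def snap(self):
        self.snapshots.append(dict(self.blocks))  # copy pointers, not data

    def write(self, addr, data):
        self.blocks[addr] = data  # redirect: the old data stays for snapshots

vol = RowVolume()
vol.write("B", "original")
vol.snap()                       # the 10:00 snapshot
vol.write("B", "changed")        # new data + changed block
assert vol.snapshots[0]["B"] == "original"  # roll back to the 10:00 state
assert vol.blocks["B"] == "changed"
```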
10:00 snap
If block B is changed, the original state can be recovered by rolling back to the snap taken at 10:00.
11:00 snap
The next snap taken captures the change made to block B.
Snap15: A B' C E'   (15:00 snapshot, update E)
Snap16: A B' C E''  (16:00 snapshot, update E a second time)
Snap17: A B' C E''  (17:00 snapshot)

By restoring just the pointers from the 14:00 snapshot to the active file (or filesystem or LUN), the state of the file (or filesystem or LUN) at 14:00 can be restored almost instantly, without having to move any data.

[Table: actual vs. apparent blocks stored for full backups vs. snapshots]

RARELY USED
Snapshot Reserve: an accounting for a set amount of space that will be guaranteed available for the snapshot.
Snapshot Quota: an accounting for the total amount of space a snapshot can consume.
[Figure: snapped volume with snapshot reserve and snapshot quota]
[Chart: usage (GB) compared between EqualLogic and Nimble CS-Series]
Replication Overview
What is replication and how does it work?
[Figure: replication partners connected over the network; snapshots replicated to a replica on the partner]
• Replication happens on the Management IP
• Supports multiple QoS policies
• Applies to the partner

Replication Schedules
• Replication is configured using Volume Collection schedule attributes
• Different schedules in the same collection must replicate to the same partner
One-to-One Replication
[Figure: a single volume assigned to "Hourly" and multiple volumes assigned to "Daily" replicate over the network to replicas on the partner array]

Reciprocal Replication
[Figure: two arrays replicating to each other over the network]
Many-to-One Replication
Demote
Replication Status
• Replication tab of the Volume Collection
Recovery Scenarios
Recovery Scenarios
Recovery Scenarios
Nimble + CommVault
How it works:
1. NPM schedule triggers the snapshot process
2. NPM agent talks to the MS VSS service
3. VSS tells Exchange to "quiesce" mail stores
4. VSS tells NTFS to flush the buffer cache
5. VSS tells the Nimble array to take a snapshot
6. The Nimble array captures near-instant snapshots of all volumes in the collection
7. Optional: NPM runs database verification on a predefined schedule to ensure consistency and truncates logs
8. NPM triggers WAN-efficient replication on a predefined schedule
9. Optional: existing backup software mounts the snapshot for a weekly archive copy to tape
10. When needed, snapshots provide fast restore capability

Improved protection with fast snapshots; efficient capacity and bandwidth utilization.
Return to the details page of the volume collection and click Validate to ensure:
– the username and password are correct
– the user has the correct permissions
SRM - Reprotection
Storage Replication
SRM - Failback
Designed to ensure that VMs are static and quiescent before running the recovery plan.
If the protected site is not available, the recovery plan will run to completion even if errors are encountered.
Without SCSI UNMAP: Object-1 is erased, but the server does not inform storage that the space can be reused.
With SCSI UNMAP: Object-1 is erased, storage understands, and the space is released.
KB from VMware regarding SCSI UNMAP:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1021976
Review the VMware Integration Guide 1.2, "Using Nimble's vCenter plugin", for details on using this plugin.
Hardware FRUs
Controllers
Power Supplies
Replacing a Drive
[Figure: four-step drive replacement sequence]
Replacing a Controller
[Figure: four-step controller replacement sequence]
Upgrade firmware
Failover
[Figure: the active and standby controllers exchange roles during a failover]
Proactive Wellness
Accelerate, Protect, Empower
– Comprehensive telemetry data
– 5-minute heartbeats to Nimble Support
Proactive Wellness
Accelerate, Protect, Empower
– 5-minute heartbeats to Nimble Support
– Proactive wellness, automated case creation
VPN Tunnel
– Ability to establish a secure VPN tunnel from the customer array to Nimble Support to enable real-time diagnostics and problem resolution
– Ability to download and run custom corrective scripts to prevent conditions that may result in downtime
– Customer has full control over enabling/disabling the tunnel
[Figure: customer asks "Can you please open the tunnel?"; Nimble Support uses a DSA public key to connect to the array and can view array status, array alerts, performance policies, protection templates, and snapshots]
Support Process
– Call 1-877-3NIMBLE (US, Canada only) or +1 408-636-6347 worldwide
– Email support@nimblestorage.com
Coverage
– Telephone Support: 24x7
– Email support: 8:00AM-6:00PM PST M-F
– Support website: 24x7
– Engineering Escalation: 24x7 on-call availability
SLA
– P1 telephone response less than 30 minutes (current average is <5min)
» Immediate escalation to Engineering.
– P2 response less than 4 business hours
– P3 response less than 8 business hours
Severity Definitions
– P1 – Not serving data; severe performance degradation; single controller not operational
– P2 – Performance degradation, intermittent SW faults, network degradation
– P3 – Problem or defect causing minimal business impact; request for information
Escalation Process
– Call Nimble Support for 24x7 response
– Indicate nature of issue and business impact. Request immediate escalation as desired.
– Escalation team, account team and executive management notified immediately
following problem description and Support case creation.
– Immediate diagnosis via remote diagnostics tunnel and/or WebEx with the customer by the support team and engineering resources as required.
– Case worked until resolution.
Parts Delivery
NBD Parts
– North America
» Part must be identified by 3PM PST
– Europe
» Part must be identified by 3PM CET
4 Hour Parts
– We partner with a global logistics provider with over 700 worldwide depots, and as we expand, we are constantly adding new ones
» Contact your sales team to locate your nearest depot
University.nimblestorage.com
Upon Completion
4. DO NOT close the course portal window until the following is displayed: