
FAST VP for EMC Symmetrix VMAX Theory and Best Practices for Planning and Performance

Technical Notes
P/N 300-012-014 REV A02 June 2011

This technical notes document contains information on these topics:

Executive summary ................................................................ 2
Introduction and overview ........................................................ 2
Fully Automated Storage Tiering .................................................. 3
FAST and FAST VP comparison ...................................................... 5
Theory of operation .............................................................. 6
Performance considerations ....................................................... 11
Product and feature interoperability ............................................. 14
Planning and design considerations ............................................... 18
Summary and conclusion ........................................................... 31
Appendix: Best practices quick reference ......................................... 33

Executive summary
EMC Symmetrix VMAX series with Enginuity incorporates a scalable fabric interconnect design that allows the storage array to seamlessly grow from an entry-level configuration to a 2 PB system. Symmetrix VMAX provides predictable, self-optimizing performance and enables organizations to scale out on demand in private cloud environments. VMAX automates storage operations to exceed business requirements in virtualized environments, with management tools that integrate with virtualized servers and reduce administration time in private cloud infrastructures. Customers are able to achieve always-on availability with maximum security, fully nondisruptive operations, and multi-site migration, recovery, and restart to prevent application downtime.

Enginuity 5875 for Symmetrix VMAX extends customer benefits in the following areas:

More efficiency: Zero-downtime tech refreshes with Federated Live Migration, and lower costs with automated tiering

More scalability: Up to 2x increased system bandwidth, with the ability to manage up to 10x more capacity per storage admin

More security: Built-in Data at Rest Encryption

Improved application compatibility: Increased value for virtual environments, including improved performance and faster provisioning

Information infrastructure must continuously adapt to changing business requirements. EMC Symmetrix Fully Automated Storage Tiering for Virtual Pools (FAST VP) automates tiered storage strategies, in Virtual Provisioning environments, by easily moving workloads between Symmetrix tiers as performance characteristics change over time. FAST VP performs data movements, improving performance and reducing costs, all while maintaining vital service levels.

Introduction and overview


EMC Symmetrix VMAX FAST VP for Virtual Provisioning environments automates the identification of data volumes for the purposes of relocating application data across different performance/capacity tiers within an array. FAST VP proactively monitors workloads at both the LUN and sub-LUN level in order to identify busy data that would
benefit from being moved to higher-performing drives. FAST VP will also identify less busy data that could be moved to higher-capacity drives, without existing performance being affected. This promotion/demotion activity is based on policies that associate a storage group to multiple drive technologies, or RAID protection schemes, via thin storage pools, as well as the performance requirements of the application contained within the storage group. Data movement executed during this activity is performed nondisruptively, without affecting business continuity and data availability.

Audience
This technical notes document is intended for anyone who needs to understand FAST VP theory, best practices, and associated recommendations as necessary to achieve the best performance for FAST VP configurations. This document is specifically targeted at EMC customers, sales, and field technical staff who are either running FAST VP or are considering FAST VP for future implementation. Significant portions of this document assume a base knowledge regarding the implementation and management of FAST VP. For information regarding the implementation and management of FAST VP in Virtual Provisioning environments, please refer to the Implementing Fully Automated Storage Tiering for Virtual Pools (FAST VP) for EMC Symmetrix VMAX Series Arrays Technical Note (P/N 300-012-015).

Fully Automated Storage Tiering


Fully Automated Storage Tiering (FAST) automates the identification of data volumes for the purposes of relocating application data across different performance/capacity tiers within an array. The primary benefits of FAST include:

Elimination of manually tiering applications when performance objectives change over time

Automating the process of identifying data that can benefit from Enterprise Flash Drives or that can be kept on higher-capacity, less-expensive SATA drives without impacting performance

Improving application performance at the same cost, or providing the same application performance at lower cost. Cost is defined as acquisition (both hardware and software), space/energy, and management expense

Optimizing and prioritizing business applications, allowing customers to dynamically allocate resources within a single
array

Delivering greater flexibility in meeting different price/performance ratios throughout the lifecycle of the information stored

Due to advances in drive technology, and the need for storage consolidation, the number of drive types supported by Symmetrix has grown significantly. These drives span a range of storage service specializations and cost characteristics that differ greatly. Several differences exist between the three drive technologies supported by the Symmetrix VMAX: Enterprise Flash Drive (EFD), Fibre Channel, and SATA. The primary areas in which they differ are:

Response time

Cost per unit of storage capacity

Cost per unit of storage request processing

At one extreme are EFDs, which have a very low response time, but a high cost per unit of storage capacity. At the other extreme are SATA drives, which have a low cost per unit of storage capacity, but high response times and a high cost per unit of storage request processing. In between these two extremes lie Fibre Channel drives.

Based on the nature of the differences that exist between these three drive types, the following observations can be made regarding the most suitable workload type for each drive.

Enterprise Flash Drives: EFDs are best suited for workloads that have a high back-end random read storage request density. Such workloads take advantage of both the low response time provided by the drive, and the low cost per unit of storage request processing, without requiring a lot of storage capacity.

SATA drives: SATA drives are suited to workloads that have a low back-end storage request density.

Fibre Channel drives: FC drives are the best drive type for workloads with a back-end storage request density that is not consistently high or low.

This disparity in suitable workloads presents both an opportunity and a challenge for storage administrators. To the degree it can be arranged for storage workloads to be served by the best suited drive technology, the opportunity exists to improve
application performance, reduce hardware acquisition expenses, and reduce operating expenses (including energy costs and space consumption). The challenge, however, lies in how to realize these benefits without introducing additional administrative overhead and complexity. The approach taken with FAST is to automate the process of identifying which regions of storage should reside on a given drive technology, and to automatically and nondisruptively move storage between tiers to optimize storage resource usage accordingly. This also needs to be done while taking into account optional constraints on tier capacity usage that may be imposed on specific groups of storage devices.

FAST and FAST VP comparison


EMC Symmetrix VMAX FAST and FAST VP automate the identification of data volumes for the purposes of relocating application data across different performance/capacity tiers within an array. The administration procedures used with FAST VP are very similar to those available with FAST, the major difference being that the storage pools used by FAST VP are thin storage pools. FAST operates on non-thin, or disk group provisioned, Symmetrix volumes. Data movements executed between tiers are performed at the full volume level. FAST VP operates on Virtual Provisioning thin devices. As such, data movements can be performed at the sub-LUN level, and a single thin device may have extents allocated across multiple thin pools within the array.
Note: For more information on Virtual Provisioning, please refer to the Best Practices for Fast, Simple Capacity Allocation with EMC Symmetrix Virtual Provisioning Technical Note available on Powerlink.

Because FAST and FAST VP support different device types (non-thin and thin, respectively), they can both operate simultaneously within a single array. Aside from some shared configuration parameters, the management and operation of each can be considered separately.
Note: For more information on FAST please refer to the Implementing Fully Automated Storage Tiering (FAST) for EMC Symmetrix VMAX Series Arrays technical note available on Powerlink.

The goal of FAST and FAST VP is to optimize the performance and costefficiency of configurations containing mixed drive technologies. While FAST monitors and moves storage in units of entire logical devices,
FAST VP monitors data access with much finer granularity. This allows FAST VP to determine the most appropriate tier (based on optimizing performance and cost efficiency) for each 7,680 KB region of storage. In this way, the ability of FAST VP to monitor and move data with much finer granularity greatly enhances the value proposition of automated tiering. FAST VP builds upon and extends the existing capabilities of Virtual Provisioning and FAST to provide the user with enhanced Symmetrix tiering options. The Virtual Provisioning underpinnings of FAST VP allow it to combine the core benefits of Virtual Provisioning (wide striping and thin provisioning) with the benefits of automated tiering. FAST VP more closely aligns storage access workloads with the best suited drive technology than is possible if all regions of a given device must be mapped to the same tier. At any given time the hot regions of a thin device managed by FAST VP may be mapped to an EFD tier, the warm regions may be mapped to an FC tier, and the cold regions may be mapped to a SATA tier. By more effectively exploiting drive technology specializations, FAST VP delivers better performance and greater cost efficiency than FAST. FAST VP also better adapts to shifting workload locality of reference, changes in tier allocation limits, or storage group priority, because the workload may be adjusted by moving less data. This further contributes to making FAST VP more effective at exploiting drive specializations and also enhances some of the operational advantages of FAST, including the ability to nondisruptively adjust the quality of storage service (response time and throughput) provided to a storage group.

Theory of operation
There are two components of FAST VP: Symmetrix microcode and the FAST controller. The Symmetrix microcode is a part of the Enginuity storage operating environment that controls components within the array. The FAST controller is a service that runs on the service processor.

Figure 1. FAST VP components

When FAST VP is active, both components participate in the execution of two algorithms, the intelligent tiering algorithm and the allocation compliance algorithm, to determine appropriate data placement. The intelligent tiering algorithm uses performance data collected by the microcode, as well as supporting calculations performed by the FAST controller, to issue data movement requests to the VLUN VP data movement engine. The allocation compliance algorithm enforces the upper limits of storage capacity that can be used in each tier by a given storage group, also by issuing data movement requests to the VLUN VP data movement engine. Performance time windows can be defined to specify when the FAST VP controller should collect performance data, upon which analysis is performed to determine the appropriate tier for devices. By default, this will occur 24 hours a day. Defined data movement windows determine when to execute the data
movements necessary to move data between tiers. Data movements performed by the microcode are achieved by moving allocated extents between tiers. The size of data movement can be as small as 768 KB, representing a single allocated thin device extent, but will more typically be an entire extent group, which is 7,680 KB in size. The following sections further describe each of these algorithms.

Intelligent tiering algorithm


The goal of the intelligent tiering algorithm is to use the performance metrics collected at the sub-LUN level to determine which tier each extent group should reside in and to submit the needed data movements to the Virtual LUN (VLUN) VP data movement engine. The determination of which extent groups need to be moved is performed by a task that runs within the Symmetrix array. The intelligent tiering algorithm is structured into two components: a main component which executes within Symmetrix microcode and a secondary, supporting, component that executes within the FAST controller on the service processor. The main component periodically assesses whether extent groups need to be moved in order to optimize the use of the FAST VP storage tiers. If so, the required data movement requests are issued to the VLUN VP data movement engine. When determining the appropriate tier for each extent group, the main component makes use of both the FAST VP metrics, previously discussed, and supporting calculations performed by the secondary component on the service processor. The intelligent tiering algorithm runs continuously during open data movement windows, when FAST is enabled and the FAST VP operating mode is Automatic. As such, performance-related data movements can occur continuously during an open data movement window.
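The division of labor described above can be illustrated with a much-simplified sketch of the tiering decision: rank extent groups by observed activity and fill the fastest tier first, within the capacity available to the policy. All names and data structures here are invented for the example; the actual microcode logic is far more sophisticated than this greedy pass.

```python
# Illustrative only: rank extent groups by activity score and request
# promotions/demotions, filling the fastest tier first.
def plan_movements(extent_groups, tiers):
    """extent_groups: list of (group_id, activity_score, current_tier).
    tiers: list of (tier_name, free_group_slots), ordered fastest-first.
    Returns a list of (group_id, from_tier, to_tier) movement requests."""
    moves = []
    remaining = dict(tiers)
    # Busiest extent groups are considered for the fastest tier first.
    for group_id, score, current in sorted(extent_groups,
                                           key=lambda g: g[1], reverse=True):
        for tier_name, _ in tiers:
            if remaining[tier_name] > 0:
                remaining[tier_name] -= 1
                if tier_name != current:
                    moves.append((group_id, current, tier_name))
                break
    return moves
```

For example, with one free EFD slot, the busiest group on SATA is promoted while the quieter group stays where it is.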

Allocation compliance algorithm


The goal of the allocation compliance algorithm is to detect and correct situations where the allocated capacity for a particular storage group within a thin storage tier exceeds the maximum capacity allowed by the associated FAST policy. A storage group is considered to be in compliance with its associated FAST policy when the configured capacity of the thin devices in the storage group is located on tiers defined in the policy and when the
usage of each tier is within the upper limits specified in the policy. Compliance violations may occur for multiple reasons, including:

New extent allocations performed for thin devices managed by FAST VP

Changes made to the upper usage limits for a VP tier in a FAST policy

Adding thin devices to a storage group that are themselves out of compliance

Manual VLUN VP migrations of thin devices

The compliance algorithm will attempt to minimize the amount of movement performed to correct compliance violations that may, in turn, generate movements performed by the intelligent tiering algorithm. This is done by coordinating the movement requests with the analysis performed by the intelligent tiering algorithm to determine the most appropriate extents, and the most appropriate tier, when correcting compliance violations. The compliance algorithm runs every 10 minutes during open data movement windows, when FAST is enabled and the FAST VP operating mode is Automatic.
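Conceptually, the compliance check reduces to comparing each storage group's per-tier allocation against the policy's upper limits. The sketch below is illustrative only; the dictionary-based interface is invented for the example and a tier absent from the policy is treated as having a limit of zero, which makes any allocation there a violation:

```python
# Illustrative sketch: flag tiers where a storage group's allocated
# capacity exceeds the upper usage limit set in its FAST policy.
def compliance_violations(allocated_gb, policy_limit_gb):
    """allocated_gb / policy_limit_gb: dicts keyed by tier name.
    Returns {tier: excess_gb} for each out-of-compliance tier."""
    violations = {}
    for tier, used in allocated_gb.items():
        limit = policy_limit_gb.get(tier, 0)  # tier not in policy: limit 0
        if used > limit:
            violations[tier] = used - limit   # capacity to move off the tier
    return violations
```

The excess value per tier corresponds to the amount of data the compliance movements would need to relocate to restore compliance.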

Data movement
Data movements executed by FAST VP are performed by the VLUN VP data movement engine, and involve moving thin device extents between thin pools within the array. Extents are moved via a move process only; extents are not swapped between pools. The movement of extents, or extent groups, does not change the thin device binding information. That is, the thin device will still remain bound to the pool it was originally bound to. New allocations for the thin device, as the result of host writes, will continue to come from the bound pool. To complete a move, the following must hold true:

The FAST VP operating mode must be Automatic.

The VP data movement window must be open.

The thin device affected must not be pinned.

There must be sufficient unallocated space in the thin pools included in the destination tier to accommodate the data being moved.
The destination tier must contain at least one thin pool that has not exceeded the pool reserved capacity (PRC).
Note: If the selected destination tier contains only pools that have reached the PRC limit, then an alternate tier may be considered by the movement task.

Other movement considerations include:

Only extents that are allocated will be moved.

No back-end configuration changes are performed during a FAST VP data movement, and as such no configuration locks are held during the process.

As swaps are not performed, there is no requirement for any swap space, such as DRVs, to facilitate data movement.

Data movement time windows are used to specify date and time ranges when data movements are allowed, or not allowed, to be performed. FAST VP data movements run as low-priority tasks on the Symmetrix back end. Data movement windows can be planned so as to minimize impact on the performance of other, more critical workloads. A default data movement time window excludes all data movements, 24 hours a day, 7 days a week, 365 days a year.

There are two types of data movement that can occur under FAST VP, generated by the intelligent tiering algorithm and the allocation compliance algorithm, respectively. Both types of data movement will only occur during user-defined data movement windows. Intelligent tiering algorithm related movements are requested and executed by the Symmetrix microcode. These data movements will be governed by the workload on each extent group, but will only be executed within the constraints of the associated FAST policy. That is, a performance movement will not cause a storage group to become non-compliant with its FAST policy. Allocation compliance related movements are generated by the FAST controller, and executed by the microcode. These movements bring the capacity of the storage group back within the boundaries specified by the associated policy. Performance information from the intelligent tiering algorithm is used to determine the most appropriate sub-extents to move when restoring compliance. When a compliance violation exists, the algorithm will generate a data movement request to return the allocations within the required limits.
This request will explicitly indicate which thin device extents should be moved, and the specific thin pools they should be moved to.
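The preconditions for a data movement described in this section can be summarized as a single predicate. This is an illustrative sketch with hypothetical argument names, not Symmetrix API attributes; note that, per the PRC rules, a pool whose unallocated percentage has fallen to the PRC is not a valid destination:

```python
# Illustrative predicate mirroring the movement preconditions above.
def move_allowed(mode, window_open, device_pinned, dest_pools):
    """dest_pools: list of (unallocated_pct, prc_pct, has_room) tuples,
    one per thin pool in the destination tier. has_room indicates the pool
    can accommodate the data being moved."""
    if mode != "Automatic" or not window_open or device_pinned:
        return False
    # At least one destination pool must have space and sit above its PRC.
    return any(unalloc_pct > prc_pct and has_room
               for unalloc_pct, prc_pct, has_room in dest_pools)
```

If every pool in the selected destination tier fails the PRC check, the predicate returns False, corresponding to the case where the movement task would consider an alternate tier.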

Performance considerations
Performance data for use by FAST VP is collected and maintained by the Symmetrix microcode. This data is then analyzed by the FAST controller and guidelines generated for the placement of thin device data on the defined VP tiers within the array.

Performance metrics
When collecting performance data at the LUN and sub-LUN level for use by FAST VP, the Symmetrix microcode only collects statistics related to Symmetrix back-end activity that is the result of host I/O. The metrics collected are:

Read miss

Write

Prefetch (sequential read)

The read miss metric accounts for each DA read operation that is performed. Reads to areas of a thin device that have not had space allocated in a thin pool are not counted. Also, read hits, which are serviced from cache, are not considered. Write operations are counted in terms of the number of distinct DA operations that are performed. The metric accounts for when a write is destaged; write hits, to cache, are not considered. Writes related to specific RAID protection schemes will also not be counted. In the case of RAID 1 protected devices, the write I/O is only counted for one of the mirrors. In the case of RAID 5 and RAID 6 protected devices, parity reads and writes are not counted. Prefetch operations are accounted for in terms of the number of distinct DA operations performed to prefetch data spanning a FAST VP extent. This metric considers each DA read operation performed as a front-end prefetch operation. Workload related to internal copy operations, such as drive rebuilds, clone operations, VLUN migrations, or even FAST VP data movements, is not included in the FAST VP metrics. These FAST VP performance metrics provide a measure of activity that assigns greater weight to more recent I/O requests, but are also influenced by less recent activity. By default, based on a Workload Analysis Period of 24 hours, an I/O that has just been received is weighted two times more heavily than an I/O received 24 hours previously.
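The weighting behavior just described is consistent with an exponential decay in which an I/O loses half its weight over one Workload Analysis Period. The exact formula used by the microcode is not published here, so the sketch below is an assumed model that merely reproduces the stated two-to-one ratio:

```python
# Assumed decay model: an I/O's weight halves over one Workload Analysis
# Period (24 hours by default), so a just-received I/O counts twice as
# much as one received 24 hours earlier, matching the text above.
def io_weight(age_hours, analysis_period_hours=24.0):
    return 0.5 ** (age_hours / analysis_period_hours)
```

Under this model an I/O never drops to zero weight; it simply becomes progressively less influential, which matches the statement that metrics are "also influenced by less recent activity."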
Note: Performance metrics are only collected during user-defined performance time windows. The times during which metrics are not being collected do not contribute to reducing the weight assigned to those metrics already collected.

The metrics collected at the sub-LUN level for thin devices under FAST VP control contain measurements that allow FAST VP to make separate data movement requests for each 7,680 KB unit of storage that makes up the thin device. This unit of storage consists of 10 contiguous thin device extents and is known as an extent group. In order to maintain the sub-LUN-level metrics collected by the microcode, the Symmetrix allocates one cache slot for each thin device that is under FAST VP control. When managing metadevices, cache slots are allocated for both the metahead and for each of the metamembers.
Note: A Symmetrix VMAX cache slot represents a single track of 64 KB.
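As a quick sanity check of the sizes quoted in this document (a 64 KB track per cache slot, a 768 KB thin device extent, and extent groups of 10 contiguous extents), the relationships can be expressed in a few lines; the constant names are just for illustration:

```python
# Sizes quoted in this document.
TRACK_KB = 64            # one Symmetrix VMAX cache slot / track
EXTENT_KB = 768          # smallest movable unit: one thin device extent
EXTENTS_PER_GROUP = 10   # contiguous extents per extent group

tracks_per_extent = EXTENT_KB // TRACK_KB          # 12 tracks per extent
extent_group_kb = EXTENT_KB * EXTENTS_PER_GROUP    # 7,680 KB movement unit
```

This confirms why a data movement "can be as small as 768 KB" (one extent) "but will more typically be an entire extent group, which is 7,680 KB in size."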

If a thin device is removed from FAST VP control, then the cache slot reserved for collecting and maintaining the sub-LUN statistics is released. This can be done either by removing the thin device from a storage group associated with a FAST policy or disassociating the storage group from a policy.

FAST VP tuning
FAST VP provides a number of parameters that can be used to tune the performance of FAST VP and to control the aggressiveness of the data movements. These parameters can be used to nondisruptively adjust the amount of tier storage that a given storage group is allowed to use, or to adjust the manner in which storage groups using the same tier compete with each other for space.

Relocation Rate

The Relocation Rate is a quality of service (QoS) setting for FAST VP and affects the aggressiveness of data movement requests generated by FAST VP. This aggressiveness is measured as the amount of data that will be requested to be moved at any given time, and the priority given to moving the data between pools.
Note: The rate at which data is moved between pools can also be controlled via the Symmetrix Quality of Service VLUN setting.

Pool reserved capacity

The PRC reserves a percentage of each pool included in a VP tier for non-FAST VP activities. The purpose of this is to ensure that FAST VP data movements do not fill a thin pool, and subsequently cause a new extent allocation, the result of a host write, to fail. The PRC can be set both system-wide and for each individual pool. By default, the system-wide setting is applied to all thin pools that have been included in VP tier definitions. However, this can be overridden for each individual pool by using the pool-level setting. When the percentage of unallocated space in a thin pool is equal to the PRC, FAST VP will no longer perform data movements into that pool. However, data movements may continue to occur out of the pool to other pools. When the percentage of unallocated space becomes greater than the PRC, FAST VP can begin performing data movements into that pool again.

FAST VP time windows

FAST VP utilizes time windows to define certain behaviors regarding performance data collection and data movement. There are two possible window types:

Performance time window

Data movement time window

The performance time windows are used to specify when performance metrics should be collected by the microcode. The data movement time windows define when to perform the data relocations necessary to move data between tiers. Separate data movement windows can be defined for full-LUN movement, performed by FAST and Optimizer, and sub-LUN data movement, performed by FAST VP. Both performance time windows and data movement windows may be defined as inclusion or exclusion windows. An inclusion time window indicates that the action should be performed during the defined time window. An exclusion time window indicates that the action should be performed outside the defined time window.

Performance time window

The performance time windows are used to identify the business cycle for the Symmetrix array. They specify date and time ranges (past or future) when performance samples should be collected, or not collected, for the purposes of FAST VP performance analysis. The intent of defining performance time windows is to distinguish periods of time
when the Symmetrix is idle from periods when the Symmetrix is active, and to only include performance data collected during the active periods. By default, performance metrics will be collected 24 hours a day, 7 days a week, 365 days a year.

Data movement time window

Data movement time windows are used to specify date and time ranges when data movements are allowed, or not allowed, to be performed. FAST VP data movements run as low-priority tasks on the Symmetrix back end. By default, data movement is prevented 24 hours a day, 7 days a week, 365 days a year.

Storage group priority

When a storage group is associated with a FAST policy, a priority value must be assigned to the storage group. This priority value can be between 1 and 3, with 1 being the highest priority; the default is 2. When multiple storage groups share the same policy, the priority value is used when the data contained in the storage groups is competing for the same resources in one of the associated tiers. Storage groups with a higher priority will be given preference when deciding which data needs to be moved to another tier.
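The inclusion/exclusion window semantics described above can be sketched as a simple check. The hour-based interface is a deliberate simplification of the actual date/time ranges, and the handling of mixed window types is an assumption for the example:

```python
# Illustrative check of inclusion vs. exclusion time windows.
def action_permitted(hour, windows):
    """windows: list of (start_hour, end_hour, kind), where kind is
    'inclusion' or 'exclusion' and hours use a simple 0-23 clock.
    An inclusion window permits the action inside it; an exclusion
    window forbids the action inside it."""
    for start, end, kind in windows:
        inside = start <= hour < end
        if kind == "inclusion" and inside:
            return True
        if kind == "exclusion" and inside:
            return False
    # Outside every defined window: permitted only if no inclusion
    # windows were defined (exclusion windows allow the remainder).
    return all(kind == "exclusion" for _, _, kind in windows)
```

For instance, a single 09:00-17:00 inclusion window permits the action only during business hours, while a 02:00-04:00 exclusion window permits it at all other times.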

Product and feature interoperability


FAST VP is fully interoperable with all Symmetrix replication technologies: EMC SRDF, EMC TimeFinder/Clone, TimeFinder/Snap, and Open Replicator. Any active replication on a Symmetrix device remains intact while data from that device is being moved. Similarly, all incremental relationships are maintained for the moved or swapped devices. FAST VP also operates alongside Symmetrix features such as Symmetrix Optimizer, Dynamic Cache Partitioning, and Auto-provisioning Groups.

SRDF
Thin SRDF devices, R1 or R2, can be associated with a FAST policy. Extents of SRDF devices can be moved between tiers while the devices are being actively replicated, in either synchronous or asynchronous mode. While there are no restrictions in the ability to manage SRDF devices with FAST VP, what must be considered is that data movements are
restricted to the array upon which FAST VP is operating. There is no coordination of data movements across the two sides of the link; FAST VP acts independently on the local and remote arrays. This means that, in an SRDF failover scenario, the remote Symmetrix array will have different performance characteristics than the local, production array being failed over from. Also, in an SRDF/Asynchronous environment, FAST VP data movements on the production R1 array could result in an unbalanced configuration between R1 and R2 (where the performance characteristics of the R2 device are lower than those of the paired R1 device).

TimeFinder/Clone
Both the source and target devices of a TimeFinder/Clone session can be managed by FAST VP. However, the source and target will be managed independently, and as such may end up with different extent allocations across tiers.

TimeFinder/Snap
The source device in a TimeFinder/Snap session can be managed by FAST VP. However, target device VDEVs are not managed by FAST VP.

Open Replicator for Symmetrix


The control device in an Open Replicator session, push or pull, can have extents moved by FAST VP.

Virtual Provisioning
All thin devices, whether under FAST VP control or not, may only be bound to a single thin pool. All host write generated allocations, or user requested pre-allocations, are performed to this pool. FAST VP data movements will not change the binding information for a thin device. It is possible to change the binding information for a thin device without changing any of the current extent allocations for the device. However, when rebinding a device that is under FAST VP control, the thin pool the device is being re-bound to must belong to one of the VP tiers contained in the policy the device is associated with.

Virtual Provisioning space reclamation


Space reclamation may be run against a thin device under FAST VP control. However, during the space reclamation process, no sub-LUN performance metrics will be updated, and no data movements will be
performed.
Note: If FAST VP is actively moving extents of a device, a request to reclaim space on that device will fail. Prior to issuing the space reclamation task, the device should first be pinned. This will suspend any active FAST VP data movements for the device and allow the request to succeed.

Virtual Provisioning T10 unmap

Unmap commands can be issued to thin devices under FAST VP control. The T10 SCSI unmap command for thin devices advises a target thin device that a range of blocks is no longer in use. If this range covers a full thin device extent, that extent can be deallocated and the free space returned to the pool. If the unmap command range covers only some tracks in an extent, those tracks are marked Never Written by Host (NWBH). The extent is not deallocated; however, those tracks will not have to be retrieved from disk should a read request be performed. Instead, the Symmetrix array will immediately return all zeros.
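The two outcomes described above (full-extent deallocation versus NWBH marking of partially unmapped tracks) can be sketched for a single extent. The per-track state representation is hypothetical, invented for the example; an extent is treated as 12 tracks of 64 KB, per the sizes quoted in this document:

```python
# Illustrative sketch of unmap handling within one thin device extent.
TRACKS_PER_EXTENT = 12   # 768 KB extent / 64 KB tracks

def apply_unmap(extent_tracks, first_track, track_count):
    """extent_tracks: per-track state for one extent ('data' or 'NWBH').
    Returns ('deallocate', None) if the whole extent is unmapped, else
    ('nwbh', updated_tracks) with the affected tracks marked NWBH."""
    if first_track == 0 and track_count >= TRACKS_PER_EXTENT:
        return ("deallocate", None)   # full extent: space returned to pool
    updated = list(extent_tracks)
    for t in range(first_track,
                   min(first_track + track_count, TRACKS_PER_EXTENT)):
        updated[t] = "NWBH"           # reads of these tracks return zeros
    return ("nwbh", updated)
```

In the partial case the extent stays allocated, but any read of an NWBH track can be satisfied with zeros without touching disk, which is the behavior the text describes.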

Virtual Provisioning pool management


Data devices may be added to or removed from a thin pool that is included in a FAST VP tier. FAST VP-related data movements into or out of the thin pool will continue while the data devices are being modified. When adding data devices to a thin pool, automated pool rebalancing may be run. Similarly, when data devices are disabled and removed from the pool, they will drain their allocated tracks to other enabled data devices in the pool. While both data device draining and automated pool rebalancing may be active in a thin pool that is included in a VP tier, both of these processes may affect the performance of FAST VP data movements.

Virtual LUN VP Mobility


A thin device under FAST VP control may be migrated using VLUN VP. Such a migration will result in all allocated extents of the device being moved to a single thin pool. While the migration is in progress, no FAST VP related data movements will be performed. Once the migration is complete, however, all allocated extents of the thin device will be available to be retiered.

To prevent the migrated device from being retiered by FAST VP immediately following the migration, it is recommended that the device first be pinned. To re-enable FAST VP-related data movements, the device can later be unpinned.

FAST
Both FAST and FAST VP may coexist within a single Symmetrix. FAST will only perform full-device movements of non-thin devices. As such, there will be no impact to FAST VP's management of thin devices. FAST and FAST VP do share some configuration parameters: the Workload Analysis Period, the Initial Analysis Period, and the performance time windows.

Symmetrix Optimizer
Symmetrix Optimizer operates only on non-thin devices. As such, there will be no impact on FAST VP's management of thin devices. Optimizer and FAST VP share some configuration parameters: the Workload Analysis Period, the Initial Analysis Period, and the performance time windows.

Dynamic Cache Partitioning (DCP)


Dynamic Cache Partitioning can be used to isolate storage handling of different applications. As data movements use the same cache partition as the application, movements of data on behalf of one application do not affect the performance of applications that are not sharing the same cache partition.

Auto-provisioning Groups
Storage groups created for the purposes of Auto-provisioning may also be used for FAST VP. However, while a device may be contained in multiple storage groups for the purposes of Auto-provisioning, it may only be contained in one storage group that is associated with a FAST policy (DP or VP). Should a storage group contain a mix of device types, thin and non-thin, only the devices matching the type of the FAST policy it is associated with will be managed by FAST. If both device types in an Auto-provisioning storage group are to be managed by FAST and FAST VP, respectively, then separate storage groups will need to be created. A storage group containing the non-thin devices may then be associated with a policy containing DP tiers, and a separate storage group containing the thin devices with a policy containing VP tiers.

Planning and design considerations


The following sections detail best practice recommendations for planning the implementation of a FAST VP environment. The best practices documented are based on features available in Enginuity 5875.198.148, Solutions Enabler 7.3, and Symmetrix Management Console 7.3.

FAST VP configuration parameters

FAST VP includes multiple configuration parameters that control its behavior. These include settings to determine the effect of past workloads on data analysis, quality of service for data movements, and pool space to be reserved for non-FAST VP activities. Also, performance collection and data movement time windows can be defined. The following sections describe best practice recommendations for each of these configuration parameters.
Note: For more information on each of these configuration parameters, refer to the Implementing Fully Automated Storage Tiering for Virtual Pools (FAST VP) for EMC Symmetrix VMAX Series Arrays technical note available on Powerlink.

Performance time window

The performance time windows specify date and time ranges when performance metrics should be collected, or not collected, for the purposes of FAST VP performance analysis. By default, performance metrics are collected 24 hours a day, every day. Time windows may be defined, however, to include only certain days or days of the week, as well as to exclude other time periods. As a best practice, the default performance time window should be left unchanged. However, if there are extended periods of time when the workloads managed by FAST VP are not active, these time periods should be excluded.
Note: The performance time window is applied system-wide. If multiple applications are active on the array, but active at different times, then the default performance time window behavior should be left unchanged.

Data movement time window

Data movement time windows are used to specify date and time ranges when data movements are allowed, or not allowed, to be performed. The best practice recommendation is that, at a minimum, the data movement window should allow data movements for the same period of time that the performance time windows allow data collection. This will allow FAST VP to react more quickly and more dynamically to any changes in workload that occur on the array. Unless there are specific time periods to avoid data movements, during a backup window, for example, it may be appropriate to set the data movement window to allow FAST VP to perform movements 24 hours a day, every day.
Note: If there is a concern about the possible impact of data movements occurring during a production workload, then the FAST VP Relocation Rate can be used to minimize this impact.

Workload Analysis Period
The Workload Analysis Period (WAP) determines the degree to which FAST VP metrics are influenced by recent, and less recent, host activity that takes place while the performance time window is open. The longer the workload analysis period, the greater the weight assigned to less recent host activity. The best practice recommendation for the workload analysis period is to use the default value of 7 days (168 hours).
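One simple way to picture the effect of the WAP is as a decaying weight on older activity samples. The exponential decay below is an assumption for illustration only; the actual Enginuity weighting algorithm is not published, and the sample data is invented.

```python
# Hypothetical illustration: a longer Workload Analysis Period gives
# more weight to older samples. The exponential decay model and the
# sample values are assumptions, not the real FAST VP algorithm.
import math

def weighted_rate(samples, wap_hours):
    """samples: list of (age_hours, io_rate) pairs, newest first."""
    num = den = 0.0
    for age, rate in samples:
        w = math.exp(-age / wap_hours)   # older samples decay more
        num += w * rate
        den += w
    return num / den

# A device that was very busy four days ago but is quiet now:
samples = [(0, 100.0), (24, 100.0), (96, 1000.0)]
short = weighted_rate(samples, 24)    # short WAP: old burst mostly ignored
long_ = weighted_rate(samples, 168)   # default 7-day WAP: burst still counts
print(short < long_)
```

With the short window the old burst barely influences the score; with the default 168-hour window it still dominates, which is why the WAP controls how quickly FAST VP "forgets" past workload.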
Initial Analysis Period

The Initial Analysis Period (IAP) defines the minimum amount of time a thin device should be under FAST VP management before any performance-related data movements should be applied. This parameter should be set to a long enough value so as to allow sufficient data samples for FAST VP to establish a good characterization of the typical workload on that device.
At the initial deployment of FAST VP, it may make sense to set the initial analysis period to at least 24 hours to ensure that a typical daily workload cycle is seen. However, once FAST VP data movement has begun, setting the IAP back to the default of 8 hours will allow newly associated devices to benefit from FAST VP movement recommendations more quickly.
FAST VP Relocation Rate

The FAST VP Relocation Rate (FRR) is a quality of service (QoS) setting for FAST VP. The FRR affects the aggressiveness of data movement requests generated by FAST VP. This aggressiveness is measured as the amount of data that will be requested to be moved at any given time, and the priority given to moving the data between pools. Setting the FRR to the most aggressive value, 1, will cause FAST VP to attempt to move the most data it can, as quickly as it can. Depending on the amount of data to be moved, an FRR of 1 is more likely to cause impact to host I/O response times, due to the additional back-end overhead generated by the FAST VP data movements. However, the distribution of data across tiers will be completed in a shorter period of time. Setting the FRR to the least aggressive value, 10, will cause FAST VP to greatly reduce the amount of data that is moved and the pace at which it is moved. This setting will cause no impact to host response time, but the final distribution of data will take longer. Figure 2 shows the same workload, 1,500 IOPS of type OLTP2, being run on an environment containing two FAST VP tiers, Fibre Channel (FC) and Enterprise Flash (EFD). The same test was carried out with three separate relocation rates: 1, 5, and 8. With an FRR of 1, an initial increase in response time is seen at the two-hour mark, when FAST VP data movement was initiated, while no increase in response time was seen when the relocation rate was set to 8. However, the steady state for the response time is reached in a much shorter period of time with the lower, more aggressive, setting.


Figure 2. Example workload with varying relocation rates

The default value for the FRR is 5. However, the best practice recommendation for the initial deployment of FAST VP is to start with a more conservative value for the relocation rate, perhaps 7 or 8. The reason for this is that when FAST VP is first enabled, the amount of data to be moved is likely to be greater than when FAST VP has been running for some time. At a later date, when the amount of data movement between tiers is seen to be less, the FRR can be set to a more aggressive level, possibly 2 or 3. This will allow FAST VP to adjust to small changes in workload more quickly.
Pool Reserved Capacity

The Pool Reserved Capacity (PRC) reserves a percentage of each pool included in a VP tier for non-FAST VP activities. When the percentage of unallocated space in a thin pool is equal to the PRC, FAST VP will no longer perform data movements into that pool. The PRC can be set both as a system-wide setting and for each individual pool. If the PRC has not been set for a pool, or the PRC for the pool has been set to NONE, then the system-wide setting is used. For the system-wide setting, the best practice recommendation is to use the default value of 10 percent.

For individual pools, if thin devices are bound to the pool, the best practice recommendation is to set the PRC based on the lowest allocation warning level for that thin pool. For example, if a warning is triggered when a thin pool has reached an allocation of 80 percent of its capacity, then the PRC should be set to 20 percent. This will ensure that the remaining 20 percent of the pool will only be used for new host-generated allocations, and not FAST VP data movements. If no thin devices are bound to the pool, or are going to be bound, then the PRC should be set to the lowest possible value, 1 percent.
Note: If the PRC is increased, causing a thin pool to be within the PRC limit, FAST VP will not automatically start moving data out of the pool. The PRC value only affects the ability of FAST VP to move data into a pool.
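The PRC check described above reduces to a simple comparison. The sketch below models it with hypothetical pool sizes and a dictionary stand-in for pool settings; it is not a Solutions Enabler API.

```python
# Sketch of the Pool Reserved Capacity rule: FAST VP stops moving data
# into a pool once the pool's unallocated percentage drops to the
# effective PRC. Pool structures and sizes here are hypothetical.
def fast_vp_can_move_into(pool, system_prc=10):
    prc = pool.get("prc") or system_prc      # per-pool value overrides system-wide
    free_gb = pool["total_gb"] - pool["allocated_gb"]
    unallocated_pct = 100.0 * free_gb / pool["total_gb"]
    return unallocated_pct > prc

fc_pool = {"total_gb": 1000, "allocated_gb": 850, "prc": 20}     # 15% free
sata_pool = {"total_gb": 2000, "allocated_gb": 1700, "prc": None}  # falls back to 10%
print(fast_vp_can_move_into(fc_pool), fast_vp_can_move_into(sata_pool))
```

Both pools are 15 percent unallocated, but only the FC pool (PRC 20) blocks further FAST VP movements; the SATA pool uses the system-wide 10 percent and still accepts data.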

FAST VP policy configuration

A FAST VP policy groups between one and three VP tiers and assigns an upper usage limit for each storage tier. The upper limit specifies the maximum amount of the capacity of a storage group associated with the policy that can reside on that particular tier. The upper capacity usage limit for each storage tier is specified as a percentage of the configured, logical capacity of the associated storage group.

Figure 3. Storage tier, policy, storage group association

The usage limit for each tier must be between 1 percent and 100 percent. When combined, the upper usage limits for all thin storage tiers in the policy must total at least 100 percent, and may be greater than 100 percent. Creating a policy with a total upper usage limit greater than 100 percent allows flexibility in the configuration of a storage group, whereby data may be moved between tiers without necessarily having to move a corresponding amount of other data within the same storage group.

The ideal FAST VP policy would be 100 percent EFD, 100 percent FC, and 100 percent SATA. Such a policy would provide the greatest amount of flexibility to an associated storage group, as it would allow 100 percent of the storage group's capacity to be promoted or demoted to any tier within the policy. While ideal, operationally it may not be appropriate to deploy the 100/100/100 policy. There may be reasons to limit access to a particular tier within the array. As an example, it may be appropriate to limit the amount of a storage group's capacity that can be placed on EFD. This may be used to prevent a single storage group, or application, from consuming all of the EFD resources. In this case, a policy containing just a small percentage for EFD would be recommended. Similarly, it may be appropriate to restrict the amount of SATA capacity a storage group will utilize. Some applications, which can become inactive from time to time, may require a minimum level of performance when they become active again. For such applications a policy excluding the SATA tier could be appropriate. The best way to determine appropriate policies for a FAST VP implementation is to examine the workload skew for the application data to be managed by FAST VP. The workload skew describes an asymmetry in data usage over time, meaning a small percentage of the data on the array may be servicing the majority of the workload on the array. One tool that provides insight into this workload skew is Tier Advisor.
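The policy rules above (one to three tiers, per-tier limits of 1-100 percent, combined total of at least 100 percent) can be captured in a few lines. The tier names and example policies below are hypothetical.

```python
# Minimal validation of the FAST VP policy rules described above:
# 1-3 tiers, each limit between 1 and 100 percent, and a combined
# total of at least 100 percent. Tier names are examples.
def validate_policy(limits):
    if not 1 <= len(limits) <= 3:
        return False
    if any(not 1 <= pct <= 100 for pct in limits.values()):
        return False
    return sum(limits.values()) >= 100

print(validate_policy({"EFD": 100, "FC": 100, "SATA": 100}))  # ideal 100/100/100
print(validate_policy({"EFD": 5, "FC": 65, "SATA": 100}))     # capped EFD usage
print(validate_policy({"EFD": 10, "FC": 50}))                 # totals only 60: invalid
```

The second example reflects the capped-EFD scenario from the text: the group can place at most 5 percent of its capacity on EFD, yet the policy remains valid because the limits still total over 100 percent.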
Tier Advisor

Tier Advisor is a utility, available to EMC technical staff, that estimates the performance and cost of mixing drives of different technology types (EFD, FC, and SATA) within Symmetrix VMAX storage arrays. Tier Advisor can examine performance data collected from Symmetrix, VNX, or CLARiiON storage arrays and determine the workload skew at the full-LUN level. It can also estimate the workload skew at the sub-LUN level. With this information, Tier Advisor can model an optimal storage array configuration by allowing interactive experimentation with different storage tiers and storage policies until the desired cost and performance preferences are achieved.

Thin device binding

Each thin device, whether under FAST VP control or not, may only be bound to a single thin pool. All host-write-generated allocations, or user-requested pre-allocations, are performed from this pool. FAST VP data movements do not change the binding information for a thin device. From an ease-of-management and reporting perspective, it is recommended that all thin devices be bound to a single pool within the Symmetrix array. In determining the appropriate pool, both performance requirements and capacity management should be taken into consideration, as well as the use of thin device preallocation and the system write pending limit.
Note: Unless there is a very specific performance need, it is not recommended to bind thin devices under FAST VP control to the EFD tier.

Performance consideration

If no advance knowledge of the expected workload on newly written data is available, the best practice recommendation is to bind all thin devices to a pool within the second highest tier. In a three-tier configuration (EFD, FC, and SATA), this would imply the Fibre Channel tier. In a FC and SATA configuration, this would imply the SATA tier. Once the data has been written, and space allocated in the pool, FAST VP will then make the decision to promote or demote the data as appropriate. If it is known that newly allocated tracks will not be accessed by a host application for some time after the initial allocation, then it may be appropriate to bind the thin devices to the SATA tier.
Capacity management consideration

Binding all thin devices to a single pool will, ultimately, cause that single pool to be oversubscribed. This could potentially lead to issues as the pool fills up. Host writes to unallocated areas of a thin device will fail if there is insufficient space in the bound pool. In an EFD, FC, and SATA configuration, if the FC tier is significantly smaller than the SATA tier, binding all thin devices to the SATA tier will reduce the likelihood of the bound pool filling up. If performance needs require the thin devices to be bound to FC, the Pool Reserved Capacity, the FAST VP policy configuration, or a combination of both can be used to alleviate a potential pool-full condition.
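The oversubscription arithmetic behind this concern is straightforward: subscription is the sum of bound thin-device capacities relative to the pool's configured capacity. Device counts and sizes below are hypothetical.

```python
# Rough subscription arithmetic for a bound pool. Oversubscription
# means the total capacity of bound thin devices exceeds the pool's
# configured capacity; sizes here are hypothetical examples.
def subscription_pct(pool_gb, bound_device_gbs):
    return 100.0 * sum(bound_device_gbs) / pool_gb

fc_pool_gb = 10_000
devices = [500] * 40                  # forty 500 GB thin devices bound to FC
pct = subscription_pct(fc_pool_gb, devices)
print(pct)                            # 200.0 -> the pool is 2x oversubscribed
```

A 200 percent subscribed pool only becomes a problem as allocations approach its physical capacity, which is exactly the condition the PRC and policy limits are used to manage.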
Preallocation

A way to avoid writes to a thin device failing due to a pool being fully allocated is to preallocate the thin device when it is bound. However, the performance requirements of newly written data should be considered prior to using preallocation. When FAST VP performs data movements, only allocated extents are moved. This applies not only to extents allocated as the result of a host write, but also to extents that have been preallocated. These preallocated extents will be moved even if no data has yet been written to them.
Note: When moving preallocated, but unwritten, extents no data is actually moved. The pointer for the extent is simply redirected to the pool in the target tier.

Preallocated, but unwritten, extents will show as inactive and, as such, will be demoted to the lowest tier included in the associated FAST VP policy. When these extents are eventually written to, the write performance will be that of the tier they have been demoted to. A best practice recommendation is to not preallocate thin devices managed by FAST VP. Preallocation should only be used selectively, for those devices that can never tolerate a write failure due to a full pool.
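The demotion logic this describes can be sketched as a toy placement function. The activity threshold and scoring are invented for illustration; only the zero-activity rule (inactive extents sink to the lowest tier in the policy) comes from the text.

```python
# Sketch of why preallocation interacts badly with FAST VP: extents
# with no recorded activity, including preallocated never-written ones,
# are demoted to the lowest tier in the policy. The activity threshold
# (100) is an arbitrary stand-in for the real FAST VP metrics.
def target_tier(extent_activity, tiers):
    """tiers ordered highest to lowest, e.g. ['EFD', 'FC', 'SATA']."""
    if extent_activity == 0:
        return tiers[-1]            # inactive -> lowest tier in the policy
    return tiers[0] if extent_activity > 100 else tiers[1]

tiers = ["EFD", "FC", "SATA"]
print(target_tier(0, tiers))        # preallocated, unwritten extent -> SATA
print(target_tier(500, tiers))      # busy extent -> EFD
```

The first call shows the consequence the text warns about: a preallocated extent lands on SATA, so its first host write performs at SATA speed.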
System write pending limit

FAST VP is designed to throttle back data movements, both promotions and demotions, as the write pending count approaches the system write pending limit. This throttling gives an even higher priority to host I/O to ensure that tracks marked as write pending are destaged appropriately.
Note: By default the system write pending limit on a Symmetrix VMAX running Enginuity 5875 is set to 75 percent of the available cache.

If the write pending count reaches 60 percent of the write pending limit, FAST VP data movements will stop. As the write pending count decreases below this level, data movements will automatically restart. A very busy workload running on SATA disks, with the SATA disks at, or near, 100 percent utilization and causing a high write pending count, will impede FAST VP from promoting active extents to the FC and/or EFD tiers. In an environment with a high write workload, it is recommended to bind thin devices to a pool in the FC tier.
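The throttling rule reduces to two percentages. In the sketch below, the 75 percent write pending limit and the 60 percent-of-limit stop threshold come from the text; the cache sizes are hypothetical.

```python
# Sketch of the movement-throttling rule described above: FAST VP
# stops moving data once write pendings reach 60% of the system write
# pending limit (default 75% of cache). Cache sizes are hypothetical.
def fast_vp_movements_allowed(wp_slots, cache_slots,
                              wp_limit_pct=75, stop_pct=60):
    wp_limit = cache_slots * wp_limit_pct / 100.0
    return wp_slots < wp_limit * stop_pct / 100.0

cache = 1_000_000
print(fast_vp_movements_allowed(400_000, cache))  # 40% of cache: below the 45% stop point
print(fast_vp_movements_allowed(500_000, cache))  # 50% of cache: movements stop
```

With the defaults, movements halt once write pendings exceed 45 percent of total cache (60 percent of the 75 percent limit), and resume automatically as the count drains below that point.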
Migration

When performing a migration to thin devices, it is possible that the thin devices will become fully allocated as a result of the migration. As such, the thin devices being migrated to should be bound to a pool that has sufficient capacity to contain the full capacity of each of the devices. Virtual Provisioning zero space reclamation can be used following the migration to deallocate zero data copied during the migration. Alternatively, the front-end zero detection capabilities of SRDF and Open Replicator can be used during the migration.


Rebinding a thin device

It is possible to change the binding information for a thin device without moving any of the current extent allocations for the device. This is done by a process called rebinding. Rebinding a thin device will increase the subscription level of the pool the device is being bound to, and decrease the subscription level of the pool it was previously bound to. However, the allocation levels in both pools will remain unchanged.
Note: When rebinding a device that is under FAST VP control, the thin pool the device is being re-bound to must belong to one of the VP tiers contained in the policy the device is associated with.
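The bookkeeping described here, where subscription follows the device but allocations stay put, can be sketched as follows. The dictionaries are hypothetical stand-ins, not a Solutions Enabler API.

```python
# Illustrative bookkeeping for a rebind: the device's subscribed
# capacity moves to the new pool, while allocated capacity stays
# where it is. Pool structures are hypothetical, not an EMC API.
def rebind(device, pools, new_pool):
    old_pool = device["bound_to"]
    pools[old_pool]["subscribed_gb"] -= device["size_gb"]
    pools[new_pool]["subscribed_gb"] += device["size_gb"]
    device["bound_to"] = new_pool      # allocated extents are NOT moved

pools = {"FC": {"subscribed_gb": 500, "allocated_gb": 300},
         "SATA": {"subscribed_gb": 200, "allocated_gb": 150}}
dev = {"size_gb": 100, "bound_to": "FC"}
rebind(dev, pools, "SATA")
print(pools["FC"]["subscribed_gb"], pools["SATA"]["subscribed_gb"])
print(pools["FC"]["allocated_gb"], pools["SATA"]["allocated_gb"])
```

After the rebind, subscription shifts (FC 500 to 400, SATA 200 to 300) while both pools' allocated capacities are unchanged, matching the behavior in the text.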

RAID protection considerations

When designing a Virtual Provisioning configuration, and particularly when choosing a RAID protection strategy, both device-level performance and availability implications should be carefully considered. For more information on these considerations, refer to the Best Practices for Fast, Simple Capacity Allocation with EMC Symmetrix Virtual Provisioning technical note available on Powerlink. FAST VP does not change these considerations and recommendations from a performance perspective. What FAST VP does change, however, is that a single thin device can now have its data spread across multiple tiers, of varying RAID protection and drive technology, within the array. Because of this, the availability of an individual thin device will not be based just on the availability characteristics of the thin pool the device is bound to. Instead, availability will be based on the characteristics of the tier with the lowest availability. While performance and availability requirements will ultimately determine the configuration of each tier within the Symmetrix array, as a best practice it is recommended to choose RAID 1 or RAID 5 protection on EFDs. The faster rebuild times of EFDs provide higher availability for these protection schemes on that tier. It is also recommended to use either RAID 1 or RAID 6 on the SATA tier, due to the slower rebuild times of SATA drives (compared to EFD and FC) and the increased chance of a dual drive failure leading to data unavailability with RAID 5 protection.
Note: For new environments being configured in anticipation of implementing FAST VP, or for existing environments having additional tiers added, it is highly recommended that EMC representatives be engaged to assist in determining the appropriate RAID protection schemes for each tier.

Drive configuration

The VMAX best practice configuration guideline for most customer configurations recommends an even, balanced distribution of physical disks across the disk adapters (DAs). This is of particular relevance for Enterprise Flash Drives (EFDs), which are each able to support thousands of I/Os per second, and are therefore able to create a load on the DA equivalent to that of multiple hard disk drives. An even distribution of I/O across all DAs is optimal to maximize their capability, and the overall capability of the VMAX. However, there will be scenarios where it is not appropriate to evenly distribute disk resources across DAs. An example is where the customer desires partitioning and isolation of disk resources to separate customer environments and workloads within the VMAX. Generally, it is appropriate to configure more, smaller EFDs rather than fewer, larger EFDs, to spread the I/O load as wide as possible. For example, on a 2-engine VMAX with 16 DAs, if the EFD raw capacity requirement was 3.2 TB, it would be more optimal to configure 16 x 200 GB EFDs than 8 x 400 GB EFDs, as each DA could then have one EFD configured on it.

Storage group priority

When a storage group is associated with a FAST policy, a priority value must be assigned to the storage group. This priority value can be between 1 and 3, with 1 being the highest priority; the default is 2. When multiple storage groups are associated with FAST VP policies, the priority value is used when the data contained in the storage groups is competing for the same resources on one of the FAST VP tiers. Storage groups with a higher priority will be given preference when deciding which data needs to be moved to another tier. The best practice recommendation is to use the default priority of 2 for all storage groups associated with FAST VP policies. These values may then be modified if it is seen, for example, that a high-priority application is not getting sufficient resources from a higher tier.
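The drive-count arithmetic in the EFD sizing example above can be sketched quickly; the 3.2 TB requirement, drive sizes, and 16-DA count all follow the example in the text.

```python
# The EFD drive-count arithmetic from the sizing example above: for a
# fixed raw capacity, smaller drives spread I/O across more DAs.
# Figures follow the 2-engine, 16-DA example in the text.
def drives_per_da(raw_gb_required, drive_gb, num_das):
    drives = raw_gb_required // drive_gb
    return drives / num_das

raw_gb = 3200                          # 3.2 TB raw EFD requirement
print(drives_per_da(raw_gb, 200, 16))  # 16 x 200 GB -> 1.0 EFD per DA
print(drives_per_da(raw_gb, 400, 16))  # 8 x 400 GB  -> 0.5, half the DAs carry no EFD
```

With 200 GB drives every DA carries exactly one EFD; with 400 GB drives half the DAs carry no EFD load at all, which is why the smaller-drive layout is preferred.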
SRDF

FAST VP has no restrictions in its ability to manage SRDF devices. However, it must be considered that data movements are restricted to the array upon which FAST VP is operating. There is no coordination of data movements; FAST VP acts independently on both the local and remote arrays. As a general best practice, FAST VP should be employed for both R1 and R2 devices, particularly if the remote R2 array has also been configured with tiered storage capacity. Similar FAST VP tiers and policies should be configured at each site.
Note: Each SRDF configuration will present its own unique behaviors and workloads. Prior to implementing FAST VP in an SRDF environment it is highly recommended that EMC representatives be engaged to assist in determining the appropriate application of FAST VP on both the local and remote Symmetrix VMAX arrays.

Prior to following this general best practice, the information in the following sections should be considered.
Note: The following sections assume that SRDF is implemented with all Symmetrix arrays configured for Virtual Provisioning (all SRDF devices are thin devices) and installed with the minimum Enginuity version capable of running FAST VP.

FAST VP behavior

For SRDF R1 devices, FAST VP will promote and demote extent groups based on the read and write activity experienced on the R1 devices. Meanwhile, the SRDF R2 devices will typically only experience write activity during normal operations. As such, FAST VP is likely to promote only R2 device extents that are experiencing writes. If there are R1 device extents that only experience read activity, no writes, then the corresponding extents on the R2 devices will see no I/O activity. This will likely lead to these R2 device extents being demoted to the SATA tier, assuming this was included in the FAST VP policy.
SRDF operating mode

EMC best practices, for both synchronous and asynchronous modes of SRDF operation, recommend implementing a balanced configuration on both the R1 and R2 Symmetrix arrays. Ideally, data on each array would be located on devices configured with the same RAID protection type, on the same drive type. As FAST VP operates independently on each array, and also promotes and demotes data at the sub-LUN level, there is no guarantee that such a balance will be maintained. In SRDF synchronous (SRDF/S) mode, host writes are transferred synchronously from R1 to R2. These writes are only acknowledged to the host when the data has been received into cache on the remote, R2 array. These writes to cache are then destaged asynchronously to disk on the R2.
Note: For more information on SRDF/S, see the EMC Solutions Enabler Symmetrix SRDF Family CLI Product Guide available on Powerlink.

In an unbalanced configuration, where the R2 data resides on a lower-performing tier than on the R1, performance impact may be seen at the host if the number of write pendings builds up and writes to cache are delayed on the R2 array. With FAST VP this will typically not cause a problem, as the promotions that occur on the R2 side will be the result of write activity. Areas of the thin devices under heavy write workload are likely to be promoted, and maintained, on the higher-performing tiers on the R2 array. In SRDF asynchronous (SRDF/A) mode, host writes are transferred asynchronously in pre-defined time periods, or delta sets. At any given time there will be three delta sets in effect: the capture set, the transmit/receive set, and the apply set.
Note: For more information on SRDF/A, see the EMC Solutions Enabler Symmetrix SRDF Family CLI Product Guide available on Powerlink.

A balanced SRDF configuration is more important for SRDF/A as data cannot transition from one delta set to the next until the apply set has completed destaging to disk. If the data resides on a lower-performing tier on the R2 array, compared to the R1, then the SRDF/A cycle time may elongate and eventually cause the SRDF/A session to drop. Similarly to SRDF/S mode, in most environments, this may not be a large issue as the data under write workload will be promoted and maintained on the higher-performing tiers. SRDF/A DSE (delta set extension) should be considered to prevent SRDF/A sessions from dropping should a situation arise where writes propagated to the R2 array are being destaged to a lower tier, potentially causing an elongation of the SRDF/A cycle time.
Note: For more information on SRDF/A DSE, see the Best Practices for EMC SRDF/A Delta Set Extension technical note available on Powerlink.

SRDF failover

As FAST VP works independently on both the R1 and R2 arrays, it should be expected that the data layout will be different on each side. If an SRDF failover operation is performed, and host applications are brought up on the R2 devices, it should also be expected that the performance characteristics on the R2 will be different from those on the R1. In this situation it will take FAST VP some period of time to adjust to the change in workload and start promotion and demotion activities based on the mixed read and write workload.
Note: This same behavior would be expected following an SRDF personality swap, when applications are brought up on the devices that were formerly R2 devices.

SRDF bi-directional

Best practice recommendations change slightly in a bi-directional SRDF environment, where each Symmetrix array has both R1 and R2 devices configured. In this case the R1 and R2 devices on the same array will be under different workloads: read and write for R1, and write-only for R2. In this scenario, it is recommended to reserve the EFD tier for R1 device usage. This can be done by excluding any EFD tiers from the policies associated with the R2 devices. Should a failover be performed, the EFD tier can then be added, dynamically, to the policy associated with the R2 devices.
Note: This recommendation assumes that the lower tiers are of sufficient capacity and I/O capability to handle the expected SRDF write workload on the R2 devices.

Once again, it should be expected that some period of time will pass before performance on the R2 devices will be similar to that of the R1 devices prior to the failover.
EFD considerations

If there is a difference in the configuration of the EFD tier on the remote array, then it is recommended not to include the EFD tier in the FAST VP policies on the R2 array. Examples of a configuration difference include either fewer EFDs or no EFDs. Similarly, if the EFD configuration on the R2 array does not follow the best practice guideline of being balanced across DAs, do not include the EFD tier on the R2 side.
Note: This recommendation assumes that the lower tiers are of sufficient capacity and I/O capability to handle the expected SRDF write workload on the R2 devices.

Summary and conclusion

EMC Symmetrix VMAX series with Enginuity incorporates a scalable fabric interconnect design that allows the storage array to seamlessly grow from an entry-level configuration to a 2 PB system. Symmetrix VMAX provides predictable, self-optimizing performance and enables organizations to scale out on demand in Private Cloud environments.

Information infrastructure must continuously adapt to changing business requirements. FAST VP automates tiered storage strategies, in Virtual Provisioning environments, by easily moving workloads between Symmetrix tiers as performance characteristics change over time. FAST VP performs data movements, improving performance and reducing costs, all while maintaining vital service levels.

EMC Symmetrix VMAX FAST VP for Virtual Provisioning environments automates the identification of data volumes for the purpose of relocating application data across different performance/capacity tiers within an array. FAST VP proactively monitors workloads at both the LUN and sub-LUN level in order to identify busy data that would benefit from being moved to higher-performing drives. FAST VP will also identify less busy data that could be moved to higher-capacity drives, without existing performance being affected.

Promotion/demotion activity is based on policies that associate a storage group with multiple drive technologies, or RAID protection schemes, via thin storage pools, as well as the performance requirements of the application contained within the storage group. Data movement executed during this activity is performed nondisruptively, without affecting business continuity and data availability.

There are two components of FAST VP: Symmetrix microcode and the FAST controller. The Symmetrix microcode is a part of the Enginuity storage operating environment that controls components within the array. The FAST controller is a service that runs on the service processor.
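As an illustration of the sub-LUN monitoring described above, the promotion side of the decision can be sketched as follows. This is a hypothetical model only, not the actual intelligent tiering algorithm (whose internals are not published); the function and extent-group names are invented for the example.

```python
# Toy model: rank extent groups by recent back-end I/O activity and
# select the busiest ones for the highest (EFD) tier, up to the
# capacity that tier can hold. Not the Enginuity implementation.

def plan_promotions(extent_io_rates, efd_capacity_extents):
    """extent_io_rates: {extent_group_id: back-end I/Os per second}
    Returns the extent groups chosen for the EFD tier."""
    ranked = sorted(extent_io_rates, key=extent_io_rates.get, reverse=True)
    return set(ranked[:efd_capacity_extents])

rates = {"eg1": 850, "eg2": 5, "eg3": 420, "eg4": 60}
print(plan_promotions(rates, 2))  # the two busiest extent groups
```

In the real array the candidates are further constrained by the policy's per-tier usage limits and by the data movement time windows.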
FAST VP uses two distinct algorithms, one performance-oriented and one capacity allocation-oriented, in order to determine the appropriate tier a device should belong to. The intelligent tiering algorithm considers the performance metrics of all thin devices under FAST VP control, and determines the appropriate tier for each extent group. The allocation compliance algorithm is used to enforce the per-tier storage capacity usage limits. Data movements executed by FAST VP are performed by the VLUN VP data movement engine, and involve moving thin device extents between
thin pools within the array. Extents are moved via a move process only; extents are not swapped between pools.

Performance data for use by FAST VP is collected and maintained by the Symmetrix microcode. This data is then analyzed by the FAST controller, and guidelines are generated for the placement of thin device data on the defined VP tiers within the array. When collecting performance data at the LUN and sub-LUN level for use by FAST VP, the Symmetrix microcode only collects statistics related to Symmetrix back-end activity that is the result of host I/O.

FAST VP provides a number of parameters that can be used to tune the performance of FAST VP and to control the aggressiveness of the data movements. These parameters can be used to nondisruptively adjust the amount of tier storage that a given storage group is allowed to use, or to adjust the manner in which storage groups using the same tier compete with each other for space.

FAST VP is fully interoperable with all Symmetrix replication technologies: EMC SRDF, EMC TimeFinder/Clone, TimeFinder/Snap, and Open Replicator. Any active replication on a Symmetrix device remains intact while data from that device is being moved. Similarly, all incremental relationships are maintained for the moved devices. FAST VP also operates alongside Symmetrix features such as Symmetrix Optimizer, Dynamic Cache Partitioning, and Auto-provisioning Groups.
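The allocation compliance algorithm described above can be sketched as follows. This is a simplified model, not the Enginuity implementation; the tier names, limits, and capacities are illustrative.

```python
# Hypothetical sketch of FAST VP's allocation compliance check: given a
# policy's per-tier capacity limits and a storage group's current
# per-tier allocation, report any tier whose usage exceeds its limit.

def compliance_violations(policy_limits_pct, allocated_gb):
    """policy_limits_pct: {tier: max % of SG capacity allowed on tier}
    allocated_gb: {tier: GB of the SG currently allocated on tier}"""
    total = sum(allocated_gb.values())
    violations = {}
    for tier, pct in policy_limits_pct.items():
        limit_gb = total * pct / 100.0
        used = allocated_gb.get(tier, 0.0)
        if used > limit_gb:
            violations[tier] = used - limit_gb  # GB to move off this tier
    return violations

policy = {"EFD": 10, "FC": 100, "SATA": 100}
alloc = {"EFD": 150.0, "FC": 500.0, "SATA": 350.0}
print(compliance_violations(policy, alloc))  # EFD is over its 10% limit
```

A violation such as the one above would be resolved by demoting extents from the EFD tier via the VLUN VP data movement engine.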

Appendix: Best practices quick reference

The following provides a quick reference to the general best practice recommendations for planning the implementation of a FAST VP environment. The best practices documented are based on features available in Enginuity 5875.198.148, Solutions Enabler 7.3, and Symmetrix Management Console 7.3. For more detail on these recommendations, and other considerations, see Planning and design considerations.

FAST VP configuration parameters

FAST VP includes multiple configuration parameters that control its behavior. The following sections describe best practice recommendations for each of these configuration parameters.
Performance time window

Use the default performance time window to collect performance metrics 24 hours a day, every day.
Data movement time window

Create a data movement window that allows data movements for the same period of time that the performance time window allows data collection.
Workload Analysis Period

Use the default workload analysis period of 7 days (168 hours).


Initial Analysis Period

At the initial deployment of FAST VP, set the initial analysis period to at least 24 hours to ensure, at a minimum, that a typical daily workload cycle is seen.
FAST VP Relocation Rate

For the initial deployment of FAST VP, start with a conservative value for the relocation rate, such as 7 or 8 (lower values are more aggressive). At a later date the FRR can be gradually lowered to a more aggressive level, such as 2 or 3.
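The FRR scale runs from 1 (most aggressive) to 10 (least aggressive). The actual throttling mechanism inside Enginuity is not published; the following is only a toy model of that inverse relationship, useful for reasoning about the conservative-to-aggressive tuning path recommended above.

```python
# Illustrative model only: shows the inverse relationship between the
# FAST VP Relocation Rate (1 = most aggressive, 10 = least aggressive)
# and the pace of data movement. Not actual Enginuity behavior.

def relative_movement_pace(frr, max_pace=1.0):
    if not 1 <= frr <= 10:
        raise ValueError("FRR must be between 1 and 10")
    return max_pace * (11 - frr) / 10.0

# Conservative initial setting vs. a later, more aggressive one:
print(relative_movement_pace(8))  # 0.3 of maximum pace
print(relative_movement_pace(2))  # 0.9 of maximum pace
```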
Pool Reserved Capacity

For individual pools with bound thin devices, set the PRC based on the lowest allocation warning level for that thin pool. For pools with no bound thin devices, set the PRC to 1 percent.

FAST VP policy configuration

The ideal FAST VP policy would be 100 percent EFD, 100 percent FC,
and 100 percent SATA. While ideal, operationally it may not be appropriate to deploy the 100/100/100 policy. There may be reasons to limit access to a particular tier within the array. The best way to determine appropriate policies for a FAST VP implementation is to examine the workload skew for the application data to be managed by FAST VP, by using a tool such as Tier Advisor.

Thin device binding

If no advance knowledge of the expected workload on newly written data is available, the best practice recommendation is to bind all thin devices to a pool within the second-highest tier. In a three-tier configuration this would imply the FC tier. It is not recommended to bind thin devices under FAST VP control to the EFD tier. A best practice recommendation is to not preallocate thin devices managed by FAST VP.

RAID protection considerations

For EFD, choose RAID 1 or RAID 5 protection. For FC, choose RAID 1, RAID 5, or RAID 6 protection. For SATA, choose RAID 1 or RAID 6 protection.

Drive configuration

Where possible, balance physical drives evenly across DAs. Configure more, smaller EFDs rather than fewer, larger EFDs, to spread the I/O load as widely as possible.

Storage group priority

The best practice recommendation is to use the default priority of 2 for all storage groups associated with FAST VP policies.

SRDF

As a general best practice, FAST VP should be employed for both R1 and R2 devices, particularly if the remote R2 array has also been configured with tiered storage capacity. Similar FAST VP tiers and policies should be configured at each site.
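The policy guidance above can be sketched as a simple validity check. The specific constraints used here (up to three tiers per policy, each limit between 1 and 100 percent, limits totaling at least 100 percent so the storage group can always be fully placed) are assumptions drawn from the product documentation, not Solutions Enabler output; treat the sketch as illustrative.

```python
# Sketch of assumed FAST VP policy validity rules; tier names and the
# exact constraints are assumptions for illustration only.

def valid_policy(tier_limits_pct):
    """tier_limits_pct: {tier_name: max % of SG capacity on that tier}"""
    if not 1 <= len(tier_limits_pct) <= 3:
        return False
    if any(not 1 <= p <= 100 for p in tier_limits_pct.values()):
        return False
    return sum(tier_limits_pct.values()) >= 100

print(valid_policy({"EFD": 100, "FC": 100, "SATA": 100}))  # True
print(valid_policy({"EFD": 5, "FC": 30, "SATA": 50}))      # False: totals 85
```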


Copyright © 2010, 2011 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink. All other trademarks used herein are the property of their respective owners.
