Topics Covered
Share experiences gained deploying SQL Server and SAN for a highly available data warehouse. Emphasis on:
The intersection of SAN and SQL Server technologies, not on large database implementation or on data warehouse best practices
Project overview; best practices in a SAN environment; remote-site fail-over using EMC SRDF and SQL Server log shipping
Client
Storage Database Implementation
Consultants
Geo-spatial Software
Application Requirements
A large (46 TB total storage) geo-spatial data warehouse for 2 USDA sites: Salt Lake City and Fort Worth
Provide database fail-over and fail-back between remote sites
SAN Implementation
Understand your throughput, response time, and availability requirements, as well as potential bottlenecks and issues
Work with your storage vendor:
Get best practices and design advice on LUN size, sector alignment, etc.
Understand the available back-end monitoring tools
Do not try to over-optimize; keep the LUN, filegroup, and file design simple, if possible
SAN Implementation
Balance I/O across all HBAs when possible using balancing software (e.g., EMC's PowerPath):
Provides redundant data paths
Offers the most flexibility and is much easier to design than static mapping
Some vendors now offer implementations that use Microsoft's MPIO (multi-path I/O), permitting more flexibility in heterogeneous storage environments
Some configurations offer dynamic growth of existing LUNs for added flexibility (e.g., Veritas Volume Manager or SAN vendor utilities)
Managing growth
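The core idea behind path-balancing software such as PowerPath or MPIO can be illustrated with a toy round-robin scheduler — a minimal sketch only, not vendor code; the path names are hypothetical:

```python
from itertools import cycle

def balance_io(requests, hba_paths):
    """Assign each I/O request to an HBA path in round-robin order,
    approximating how balancing software spreads load across
    redundant data paths."""
    path_cycle = cycle(hba_paths)
    return {req: next(path_cycle) for req in requests}

# Example: 8 I/O requests spread evenly across 4 HBA paths
paths = ["hba0", "hba1", "hba2", "hba3"]
result = balance_io([f"io{i}" for i in range(8)], paths)
```

Real multipathing drivers also weigh queue depth and path health, but the effect is the same: every path carries a share of the load, and any path can fail without losing access.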
SAN Implementation 3
Benchmarking the I/O System
Before implementing SQL Server, benchmark the SAN to shake out hardware/driver problems
Test a variety of I/O types and sizes:
Combinations of read/write and sequential/random
Include I/O sizes of at least 8K, 64K, 128K, and 256K
Ensure test files are significantly larger than the SAN cache (at least 2 to 4 times)
Test each I/O path individually and in combination to cover all paths
Ideally, throughput (MB/s) scales linearly as paths are added
Save the benchmark data for comparison when SQL Server is being deployed
SAN Implementation 4
Benchmarking the I/O System
Share results with your vendor: is performance reasonable for the configuration?
SQLIO.exe is an internal Microsoft tool
SAN Implementation
Create at least as many volumes as the number of CPUs; these could be volumes created by dividing a dynamic disk, or separate LUNs
Internal file structures require synchronization, so consider the number of processors on the server
The number of data files should be >= the number of processors
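The files-per-processor rule above can be captured in a one-line helper — a sketch of the sizing rule stated on this slide, with the function name being my own:

```python
def recommended_data_files(cpu_count, files_per_cpu=1):
    """SQL Server's internal allocation structures are synchronized
    per file; at least one data file per processor reduces
    contention on those structures."""
    return max(1, cpu_count * files_per_cpu)

# An 8-way server would get at least 8 data files in the filegroup
files_needed = recommended_data_files(8)
```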
Data Requirements
23 terabytes of EMC SAN storage per site (46 TB total storage)
2 primary SQL Servers and 2 fail-over servers per site
15 TB of image data in SQL Server at the Salt Lake City site, with fail-over to Fort Worth
3 TB of vector data in SQL Server at the Fort Worth site, with fail-over to Salt Lake City
80 GB of daily updates that must be processed and pushed to the fail-over site
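A quick back-of-the-envelope check on the daily update volume, using the ~16 GB/hour WAN transfer rate reported later in this deck:

```python
def transfer_hours(data_gb, rate_gb_per_hour):
    """Hours needed to push a given data volume across the WAN link."""
    return data_gb / rate_gb_per_hour

# 80 GB of daily updates at ~16 GB/hour leaves a comfortable
# window inside a 24-hour cycle
daily_window = transfer_hours(80, 16)  # 5.0 hours
```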
Solution
Combination of SRDF and SQL Server log shipping:
Initial synchronization using SRDF
Push updates using SQL Server log shipping
Use SRDF incremental update to fail back after a fail-over
Use SRDF to move log backups to the remote site
Hardware Infrastructure
Site Configuration (identical at each site)
The EMC SAN is partitioned into Hyper-Volumes and Meta-Volumes (collections of Hyper-Volumes) through BIN file configuration
All drives are either mirrored or RAID 7+1
Hypers and/or Metas are masked to hosts and are viewable as LUNs by the OS
EMC devices are identified by Sym ID
EMC devices are defined as R1, R2, Local, or BCV devices in the BIN file
SQL Server databases are replicated by database, or by groupings of databases if TSIMSNAP2 is used.
[Diagram: R1 devices for Group A (Database 1) and Group B (Database 2) on the Primary Host mirror to R2 devices on the Fail-over Host]
Process Overview
Initial synchronization using SRDF in Adaptive Copy mode (all database files)
Use TSIMSNAP(2) to split the SRDF group after synchronization is complete
Restore fail-over databases using TSIMSNAP(2) after splitting the SRDF mirror
Use SQL Server log shipping to push all updates to the fail-over server (after initial sync)
The fail-over database is up and running at all times, giving you confidence that the fail-over server is working
Planning
Install SQL Server and system databases on the Primary and Fail-over servers (on local, non-replicated devices)
Create user databases on R1 devices (MDF, NDF, and LDF) on the Primary host
Don't share devices among databases if you need to keep databases independent for fail-over and fail-back (important)
Database volumes can be drive letters or mount points
Initial Step
Create databases on R1 devices and load data
Create an SRDF group for the database on R1
Set the group to Adaptive Copy mode
Establish the SRDF mirror to R2
Wait until the Adaptive Copy is synchronized
Use the TSIMSNAP command to split the SRDF group after device synchronization is complete; use TSIMSNAP2 for multiple databases
TSIMSNAP writes metadata about the databases to R1, which is used for recovering the databases on the R2 host
Break Mirror
Verify SQL Server is installed and running on the Fail-over host
Mount the R2 volumes on the remote host
Run the TSIMSNAP RESTORE command on the Fail-over host, specifying either standby (read-only) or norecovery mode
The database is now available for log shipping on the fail-over host
The SRDF mirror is now broken, but changed tracks are still tracked (for incremental mirroring and/or fail-back)
Place the log shipping volume on a separate R1 device (not the same as the database R1)
Create a log backup maintenance plan to back up logs to the log shipping volume, which is an R1 device
Set R1 to Adaptive Copy mode
Establish the R1/R2 mirror; logs are automatically copied to R2
Create a BCV (mirror) of R2
Schedule a script that splits and mounts the BCV, then restores logs to the SQL Server database(s)
Flush, un-mount, and re-establish the BCV mirror after the logs have been restored
Initial synchronization using SRDF in Adaptive Copy mode
Use TSIMSNAP(2) to split the SRDF group after synchronization is complete
Use SQL Server log shipping to push updates to the fail-over server
The fail-over database is up and running at all times, giving you confidence that the fail-over server is working
Fail-over Process
Fail-over type and required action:
Read-only: no server action; clients would need to point to the fail-over server
Full update: SQL command: RESTORE DATABASE DBName WITH RECOVERY
Fail-back Process
Fail-back source and required action:
From a read-only fail-over: none required; point clients to the Primary
From a full-update fail-over: follow the steps below
1. Run the SYMRDF UPDATE command to copy from R2 to R1 in Adaptive Copy mode
2. Detach the database on R2 after the update is complete
3. Flush and un-mount the volumes on R2
4. Run SYMRDF FAILBACK to replicate the final changes back to R1 and write-enable R1
5. Mount the R1 volumes
6. Attach the database on the Primary host
Closing Observations
So far, SQL Server 2000 has met the high-availability objectives
Network traffic across the WAN was minimized by shipping only SQL Server log copies once the initial synchronization was completed
The dual Nishan fiber-to-IP switches allowed data transfer at about 16 GB/hour, taking full advantage of the DS3; this transfer rate easily met the USDA's needs for initial synchronization, daily log shipping, and the fail-back process
The working read-only version of the fail-over database meant that administrators always knew the status of their fail-over system
The USDA implementation did not require a large number of BCV volumes, as some other replication schemes do
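The "full advantage of the DS3" claim can be sanity-checked arithmetically: a DS3/T3 circuit runs at 44.736 Mbit/s, so the observed 16 GB/hour corresponds to roughly 80% link utilization — a sketch of that calculation:

```python
DS3_MBIT_PER_S = 44.736  # standard DS3/T3 line rate

def ds3_capacity_gb_per_hour():
    """Theoretical DS3 throughput in (decimal) GB per hour."""
    return DS3_MBIT_PER_S * 1e6 * 3600 / 8 / 1e9  # ~20.13 GB/hour

def link_utilization(observed_gb_per_hour):
    """Fraction of the theoretical DS3 capacity actually achieved."""
    return observed_gb_per_hour / ds3_capacity_gb_per_hour()

util = link_utilization(16)  # ~0.79, i.e. ~80% of the DS3
```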
Closing Observations
After the R1/R2 mirror has been split, SRDF continues to track updates to R1 (from normal processing) and R2 (from the log restore process); SRDF can then ship only the modified tracks during fail-back or re-synchronization
This process is called an Incremental Establish or Incremental Fail-back, and it is much more efficient than a Full Establish or Full Fail-back
After fail-back, the R1 and R2 devices will be in sync and ready for log shipping startup with a minimal amount of effort
Since the SRDF operations (initial synchronization, fail-back, and log shipping) all run in Adaptive Copy mode, performance on the primary server is not impacted
Software
SQL Server 2000 Enterprise Edition
Windows 2000 / Windows 2003 Server
EMC SYM Command Line Interface
EMC Resource Pack
Call To Action
Understand your HA requirements
Work with your SAN vendor to architect and design the SQL Server deployment
Plan your device and database allocation before requesting a BIN file
Decide whether to share devices among databases (using TSIMSNAP or TSIMSNAP2); this decision affects convenience, space, and the flexibility of operations
For more information, please email SCDLITE@microsoft.com You can download all presentations at www.microsoft.com/usa/southcentral/
© 2004 Microsoft Corporation. All rights reserved. This presentation is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY.