
HP P4000 setup and configuration guide for Oracle environments

Technical white paper

Table of contents

Overview
  Introduction
  P4000 SAN overview
  Oracle overview
Choosing RAID and Network RAID levels
  Server volume management
  Node RAID levels
  Network RAID levels
Summary
  Recommendations
HP Converged Infrastructure

Overview
Introduction
Fast, reliable access to data is critical to the success of businesses today. As a business grows, so do the size and complexity of the data it depends on. IT organizations are tasked with maintaining service levels in the face of growth and increased complexity. The P4000 storage area network (SAN) delivers a modular design that can grow with a business, the configuration flexibility that complex environments need, and a centralized management interface that simplifies storage administration tasks. This paper provides configuration guidelines for running the P4000 effectively in Oracle database environments.

Oracle's relational database management system allows databases to be designed and used in many different ways. Oracle databases are used in online transaction processing (OLTP), data warehouse systems, and many other implementations. Each application of this technology has its own set of business requirements. A development database may not have the same high-availability requirements as a data warehouse system. An OLTP application has I/O requirements that differ from those of a content delivery system. P4000 SANs can be configured to meet the service-level requirements of an Oracle application while still delivering impressive return on investment (ROI) and ease of use.

P4000 SAN overview


P4000 SANs use SAN/iQ virtualization technology to pool the storage capacity of physical storage nodes. Each storage node consists of a disk array, processor, memory, multiple network interface cards (NICs), and dual power supplies. The individual storage nodes are disk arrays that support RAID 10, RAID 5, or RAID 6.

Each physical storage system has two TCP/IP NICs, and there are three ways the NICs can be bonded. The recommended method for NIC bonding on the storage nodes is Adaptive Load Balancing (ALB), which provides increased performance and active-passive network redundancy. For other NIC bonding options, refer to the P4000 SAN solution user guide.

The SAN/iQ software manages clusters of nodes as a single storage pool. Volumes are created as a stripe that spans all the nodes in the cluster. SAN/iQ protects against a node failure with Network RAID. Network RAID 10 writes each block to two different nodes in the cluster, so the block remains available even if one node fails. Network RAID 10 meets Oracle's default recommendation of Stripe and Mirror Everything (SAME). SAN/iQ 9 delivers new features that keep management tasks simple even for complex configurations:

• Automated online upgrade management
• Transparent volume space allocation, switching between thin and full provisioning
• SNMP and email notification of alert conditions
• Host server cluster management to simplify volume presentation
• Best practice analyzer to provide information about configurations that can increase SAN reliability and performance

Storage administrators have the option to use other redundancy levels if Network RAID 10 does not meet their service-level needs. Choose a RAID level based on the performance and the degree of fault tolerance required for the volume.

Figure 1: Network RAID 10 write pattern

Network RAID 10 data is striped and mirrored across two storage systems. Network RAID 10 is the default data protection level assigned when creating a volume, as long as there are two or more storage nodes in the cluster. Network RAID 10 is generally the best choice for applications that write to the volume frequently and cannot tolerate storage system failures.
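To make the write pattern concrete, here is a minimal Python sketch that stripes blocks round-robin across the nodes of a cluster and mirrors each block on the next node. It only illustrates the idea of Figure 1; it is not SAN/iQ's actual placement algorithm, and the node names and block labels are invented.

```python
# Illustrative Network RAID 10 write pattern: stripe blocks round-robin
# across cluster nodes and mirror each block on the next node.
# A sketch only, not SAN/iQ's real placement logic.

def network_raid10_placement(blocks, nodes):
    """Map each block to a (primary_node, mirror_node) pair."""
    n = len(nodes)
    placement = {}
    for i, block in enumerate(blocks):
        primary = nodes[i % n]
        mirror = nodes[(i + 1) % n]   # second copy lands on the next node
        placement[block] = (primary, mirror)
    return placement

blocks = ["A", "B", "C", "D", "E", "F"]
nodes = ["node1", "node2", "node3"]
for block, (p, m) in network_raid10_placement(blocks, nodes).items():
    print(f"block {block}: primary={p}, mirror={m}")
# Any single node failure leaves at least one copy of every block online.
```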

Figure 2: Network RAID 10+1 write pattern

Network RAID 10+1 data is striped and mirrored across three or more storage systems. Data in volumes configured with Network RAID 10+1 remains available even if any two storage nodes become unavailable, making it the best choice for environments that require that level of protection. Network RAID 10+2 is also available, providing a four-way mirror.

Figure 3: Network RAID 5 write patterns

Network RAID 5 divides the data into stripes; each stripe is stored on three storage nodes, with parity data stored on a fourth node. With SAN/iQ 9.0, only three storage nodes are required for Network RAID 5. Network RAID 5 volumes are thin provisioned by default. Network RAID 5 is best for volumes used mostly for sequential read workloads, and its space efficiency makes it a good choice for archived data.

1. P1 is parity data for blocks A, B, C
2. P2 is parity data for blocks D, E, F
3. P3 is parity data for blocks G, H, I
4. P4 is parity data for blocks J, K, L
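Parity protection of this kind is conventionally built on XOR: the parity block is the XOR of the data blocks in a stripe, so any one lost block can be rebuilt from the survivors. The Python sketch below demonstrates the principle with toy byte strings; it illustrates the general technique, not SAN/iQ's internal implementation.

```python
# XOR parity: P1 = A ^ B ^ C, so any single missing block can be rebuilt
# by XOR-ing the remaining blocks with the parity. Toy data for illustration.

from functools import reduce

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte blocks together."""
    return bytes(reduce(lambda x, y: x ^ y, chunk) for chunk in zip(*blocks))

a, b, c = b"AAAA", b"BBBB", b"CCCC"   # data blocks on three nodes
p1 = xor_blocks(a, b, c)              # parity block P1 on a fourth node

# If the node holding block B fails, rebuild B from A, C, and P1:
rebuilt_b = xor_blocks(a, c, p1)
assert rebuilt_b == b
print("rebuilt B:", rebuilt_b)
```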

Figure 4: Network RAID 6

Network RAID 6 divides the data into stripes and stores parity data on two nodes per stripe. It performs best with sequential reads. With Network RAID 6, data remains accessible even if two storage nodes become unavailable. Prior to SAN/iQ 9.0, six storage nodes were required for Network RAID 6; with SAN/iQ 9.0, only five storage nodes are required.

1. P1 is parity for data blocks A, B, C, D
2. P2 is parity for data blocks E, F, G, H
3. P3 is parity for data blocks I, J, K, L
4. P4 is parity for data blocks M, N, O, P
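One way to compare the Network RAID levels described above is by the fraction of raw cluster capacity that remains usable. The sketch below derives rough figures from the mirror counts and parity layouts just described; actual usable capacity depends on cluster size, stripe layout, and SAN/iQ version, so treat the output as a planning approximation.

```python
# Approximate usable fraction of raw capacity per Network RAID level,
# based on the mirror counts and parity layouts described in this paper.
# Real figures vary with cluster size and SAN/iQ version.

def usable_fraction(level: str, stripe_nodes: int) -> float:
    if level == "NR10":    # two copies of every block
        return 1 / 2
    if level == "NR10+1":  # three copies
        return 1 / 3
    if level == "NR10+2":  # four-way mirror
        return 1 / 4
    if level == "NR5":     # one parity block per stripe
        return (stripe_nodes - 1) / stripe_nodes
    if level == "NR6":     # two parity blocks per stripe
        return (stripe_nodes - 2) / stripe_nodes
    raise ValueError(f"unknown level: {level}")

raw_tb = 48.0  # hypothetical cluster: 4 nodes x 12 TB raw
for level, n in [("NR10", 4), ("NR10+1", 4), ("NR5", 4), ("NR6", 5)]:
    print(f"{level}: ~{raw_tb * usable_fraction(level, n):.1f} TB usable")
```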

Oracle overview
Oracle I/O profiles commonly fall into two categories: OLTP and data warehouse. OLTP loads are generally more read intensive than write intensive and access random locations in the database. OLTP accesses use small block sizes, 8 KB for example. Data warehouse applications issue large sequential reads of larger blocks; block sizes of 1 MB or more are common in data warehouse databases. Oracle databases also contain internal structures that have specific I/O profiles (see Table 1). Some considerations to keep in mind when fine-tuning an Oracle environment on a P4000 SAN are:

• OLTP and data warehouse databases benefit from separation. In some cases, heavy random access loads can perturb the cache and affect the throughput of sequential access jobs that are running simultaneously. If this becomes an issue, consider scheduling the loads to run at different times or moving the databases to separate clusters.

• Control files are ideal candidates for high Network RAID levels. They are critical to an Oracle environment and they generate relatively small amounts of disk I/O.

• Redo logs and archive logs should reside on different volumes to avoid contention during a log switch. Redo logs are common bottlenecks in I/O-intensive Oracle applications. Transactions may be held up if log switches are slow.

• I/O performance of a data warehouse or a content streaming solution benefits from the greater throughput of a 10GbE host connection. 10GbE connectivity is available on the P4800 G2, and as an upgrade to the P4500 and P4300.

• An OLTP solution can improve random I/O performance by increasing the number of P4000 nodes in the cluster. Potential I/O performance increases linearly with the addition of nodes. The optimum number of storage nodes in a cluster ranges up to 10 (refer to the P4000 user guide for details).

• As a database grows, the Oracle cache hit ratio can go down. The Oracle buffer cache might need to be increased in order to realize the performance benefits of adding storage nodes to accommodate a growing database (a worked hit ratio calculation follows this list).

• Archive logs should be placed on large, thin-provisioned volumes. The amount of space archive logs consume grows between backups. P4000 SANs have a thin provisioning feature that does not allocate physical space to a volume until it is written to, which allows for unexpected growth without stranding storage. Configurations that work well with thin provisioning include raw volumes, Oracle ASM, and VxFS file systems.

• The options chosen for your most heavily accessed data should reflect the service level you are delivering. Higher Network RAID levels have a write performance overhead, but provide excellent protection from multiple failures.
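As noted in the buffer cache consideration above, the classic Oracle buffer cache hit ratio is derived from three v$sysstat counters: logical reads are db block gets plus consistent gets, and the ratio is one minus physical reads divided by logical reads. A short Python example with made-up counter values:

```python
# Classic Oracle buffer cache hit ratio from v$sysstat-style counters.
# The counter values below are invented for illustration.

def buffer_cache_hit_ratio(db_block_gets, consistent_gets, physical_reads):
    logical_reads = db_block_gets + consistent_gets
    return 1.0 - physical_reads / logical_reads

ratio = buffer_cache_hit_ratio(db_block_gets=120_000,
                               consistent_gets=2_400_000,
                               physical_reads=180_000)
print(f"buffer cache hit ratio: {ratio:.1%}")   # ~92.9%
```

A falling ratio as the database grows is one signal that the buffer cache, rather than the SAN, has become the bottleneck.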
Table 1: Oracle data I/O profiles

Oracle Data | I/O Profile | Description
Control files | Random read/write | Contains data about structure and transaction consistency
System, undo, and so on | Random read/write | Oracle internal structures
Temporary table space | Random read/write | Sort segments
Redo logs | Sequential read/write | Circular transaction logs
Archive logs | Sequential write | Archived copies of full redo logs
OLTP application data | Random access, more reads than writes | Small block size; real time; CPU and memory intensive
Data warehouse application data | Large block size, sequential access, predominantly reads | Reporting and batch transactions; disk read intensive

Choosing RAID and Network RAID levels


Oracle recommends SAME as a best practice for configuring storage; SAME is also referred to as RAID 10. Configuring the individual nodes as RAID 10 arrays and using Network RAID 10 fits the SAME recommendation. Increasing the number of nodes and disks in a cluster allows stripes to span more devices, improving I/O throughput. The P4000 has a comprehensive set of configuration options that can be used to further tune the SAN to meet specific availability and performance requirements. There are potentially three layers of configuration to consider:

• Server volume management
• Node RAID levels
• Network RAID levels

Server volume management


SAN/iQ virtualizes physical node hardware into RAID-protected or network-replicated volumes. Server operating systems access the volumes as simple iSCSI disks. The SAN/iQ Centralized Management Console can be used to change the size, provisioning, and Network RAID level of volumes dynamically, or to present new volumes to the server. When volumes are expanded, partition tables, file systems, and the OS driver stack must all recognize the disk size change before Oracle can use the additional space. The server's ability to recognize and use additional space dynamically varies by OS and implementation; refer to OS-specific documentation for the steps necessary to recognize changes in a disk's size.

Presenting additional iSCSI disks is a more common method of increasing the amount of storage available to a server. Volume management software on the server is often used to manage and allocate space further. Examples of server-level volume managers are LVM, Veritas VxVM, and Oracle ASM. Using volume management software to carve up and distribute iSCSI disks works well with Oracle and the P4000. Many volume managers allow the administrator to mirror disks at the OS level, but creating mirrors with volume managers is unnecessary because of the protection the P4000 SAN already provides. Key server volume manager considerations include the following:

• A server's ability to recognize changes to volume sizes dynamically varies by OS and implementation.
• RAID 0 striping of volumes at the server layer does not affect IOPS.
• Use external redundancy when creating Oracle ASM disk groups. VxVM plexes and LVM mirrors are unnecessary because of the redundancy provided by the P4000. Creating redundancy with server volume managers can slow performance and increase the amount of space used.
• Many small LUNs provide better IOPS than fewer large LUNs (a simplified model follows this list).
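One simplified way to see the last point: each iSCSI LUN carries its own queue of outstanding I/Os, so more LUNs allow more concurrency. The sketch below applies Little's law under an assumed fixed per-LUN queue depth and service time; it is a model, not a measurement, and real results depend on the initiator, network, and array.

```python
# Simplified model of the IOPS ceiling for N LUNs: with a fixed per-LUN
# queue depth and service time, throughput = concurrency / latency
# (Little's law). The numbers are assumptions; real behavior varies widely.

def iops_ceiling(luns: int, queue_depth: int, service_ms: float) -> float:
    outstanding = luns * queue_depth            # total I/Os in flight
    return outstanding * (1000.0 / service_ms)  # I/Os completed per second

for luns in (1, 4, 16):
    est = iops_ceiling(luns, queue_depth=32, service_ms=5.0)
    print(f"{luns:2d} LUNs -> ~{est:,.0f} IOPS ceiling")
```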

Node RAID levels


RAID devices combine physical disks into larger logical devices to protect against disk failure within a node. The RAID levels should be chosen based on how well they meet your space and data availability requirements:

• RAID 10 combines mirroring data within pairs of disks and striping data across the pairs. This gives the node the performance benefit of striping data across many devices and the fault tolerance of mirrored data. The storage capacity of a RAID 10 node is half of the total physical capacity of the node.

• RAID 5 provides fault tolerance by striping data across all of the disks and writing parity data for each stripe. The blocks containing parity data are distributed across the array. The storage capacity of a RAID 5 node is equal to the total capacity of the enclosure minus the capacity of a single disk.

• RAID 6 calculates parity for each stripe as in RAID 5 and additionally creates a second parity block for each stripe. This allows the node to tolerate two physical disk failures. The available storage space of a RAID 6 node equals the total capacity of the enclosure minus the capacity of two hard disks.

P4000 SANs are preconfigured at RAID 5 by default. Nodes dedicated to Oracle databases should be set to RAID 10 prior to adding them to a cluster. Refer to the P4000 user guide for details on reconfiguring RAID levels.
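For capacity planning, the three rules above reduce to simple arithmetic. The sketch below computes usable capacity for a hypothetical 12-disk node with 450 GB drives; the disk count and size are examples only.

```python
# Usable node capacity per RAID level, following the rules above:
# RAID 10 keeps half the raw capacity, RAID 5 gives up one disk's worth,
# and RAID 6 gives up two.

def node_usable_tb(raid_level: int, disks: int, disk_tb: float) -> float:
    raw = disks * disk_tb
    if raid_level == 10:
        return raw / 2
    if raid_level == 5:
        return raw - disk_tb
    if raid_level == 6:
        return raw - 2 * disk_tb
    raise ValueError(f"unsupported RAID level: {raid_level}")

# Hypothetical node: 12 disks of 0.45 TB each (5.4 TB raw).
for level in (10, 5, 6):
    print(f"RAID {level}: {node_usable_tb(level, disks=12, disk_tb=0.45):.2f} TB usable")
```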

Network RAID levels


The Network RAID level protects against the failure of a single node or multiple nodes. Network RAID 10 is the recommended best practice for a cluster of two or more nodes. The Network RAID level can be chosen individually for each volume to meet that volume's requirements, so the additional storage that higher levels consume is used only where it is needed.

Summary
P4000 SAN solutions solve the data storage and retrieval problems of complex Oracle environments by delivering the features needed to adapt and perform in a dynamic IT landscape. The modular design of highly available Network Storage Modules avoids single points of failure while allowing for future expansion.

Recommendations
Recommendations for Oracle on P4000 SAN solutions:

• Oracle's best practice of SAME can be met with nodes in RAID 10 and volumes in Network RAID 10.
• Place archive logs on large, thin-provisioned volumes.
• If concurrently running instances are in I/O contention, place the databases on separate clusters.
• Overall performance can be increased by adding nodes to a cluster.
• Using server-based volume managers to add redundancy is unnecessary.
• Using server-based volume managers to group iSCSI LUNs in RAID 0 has no performance impact.
• Use external redundancy with Oracle ASM disk groups.
• Under an OLTP-type load, more small LUNs deliver higher IOPS than a few large LUNs.
• 10GbE SANs deliver greater I/O throughput for large block sequential reads than 1GbE SANs.
• ALB is the most flexible NIC bonding method for P4000 nodes. ALB provides increased bandwidth and fault tolerance without special switch configurations.

HP Converged Infrastructure
Unleash the potential of your infrastructure today; be ready for the future: http://h18004.www1.hp.com/products/solutions/converged/main.html

HP Storage for Oracle: www.hp.com/storage/oracle
LeftHand SAN solutions: http://hp.com/go/p4000
HP and Oracle alliance: http://www.hp.com/go/oracle
HP ActiveAnswers for Oracle applications: http://h71019.www7.hp.com/ActiveAnswers/cache/383733-0-0-0-121.html

To learn how you can adapt and perform in a dynamic IT landscape, visit http://hp.com/go/p4000.

© Copyright 2010–2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Oracle is a registered trademark of Oracle Corporation and/or its affiliates.

4AA0-2015ENW, Created January 2010; Updated June 2011, Rev. 2