
HP Business Intelligence Solutions

HP Reference Configurations for Oracle 10g Data Warehouses Release 2

1TB HP-UX SMP Intermediate Performance Configuration


Validation

HP Business Intelligence Advanced Technology Center


Cupertino, California
1 Overview ............................................................................................................................... 4
2 Intended Audience................................................................................................................... 4
3 Reference Configurations Overview ............................................................................................ 4
3.1 Overview ........................................................................................................................ 4
3.2 Configuration Groups........................................................................................................ 5
4 1TB HP-UX SMP Intermediate Performance Configuration................................................................ 6
5 Server Configuration ................................................................................................................ 6
5.1 Server Processor ............................................................................................................... 6
5.2 Server Memory................................................................................................................. 7
5.3 Server I/O Cards ............................................................................................................. 7
5.4 Server Miscellaneous......................................................................................................... 8
5.4.1 Internal Storage .......................................................................................................... 8
5.4.2 Power Supplies ........................................................................................................... 8
5.4.3 USB DVD Drive........................................................................................................... 8
6 Storage Configuration .............................................................................................................. 9
6.1 SAN Configuration ........................................................................................................... 9
6.1.1 SAN Switch ............................................................................................................... 9
6.1.2 Zoning ...................................................................................................................... 9
6.1.3 EVA Hardware ........................................................................................................... 9
6.1.4 EVA Configuration .................................................................................................... 10
6.2 Miscellaneous ................................................................................................................ 11
7 OS Configuration .................................................................................................................. 11
7.1 HP-UX 11i v3 Mass Storage Stack Improvements.................................................................. 11
7.1.1 Performance Improvements ......................................................................................... 11
7.1.2 Manageability Improvements....................................................................................... 12
7.1.3 Scalability Improvements ............................................................................................ 13
7.1.4 Availability Improvements ........................................................................................... 13
7.2 Kernel Parameters........................................................................................................... 13
7.3 Miscellaneous OS Configurations and Patches..................................................................... 14
7.4 Volume Management ...................................................................................................... 15
7.5 Networking ................................................................................................................... 15
8 Oracle Configuration and Best Practices.................................................................................... 15
8.1 Oracle Installation Best Practices ....................................................................................... 15
8.2 Oracle ASM, Data and Storage Design.............................................................................. 16
8.3 Oracle Parameters .......................................................................................................... 17
8.4 Oracle DM/DW Best Practices.......................................................................................... 18
8.4.1 Data Compression..................................................................................................... 18
8.4.2 Data Partitioning ....................................................................................................... 19
8.4.3 Parallelism ............................................................................................................... 19
8.5 Miscellaneous ................................................................................................................ 19
9 BI Workload......................................................................................................................... 19
9.1.1 Schema ................................................................................................................... 20
9.1.2 Data Volume and Population....................................................................................... 20
9.1.3 Queries and Query Streams........................................................................................ 21
10 Performance Data ................................................................................................................ 21
10.1 Base System Test Results................................................................................................ 21
10.1.1 Setup ................................................................................................................... 21
10.1.2 Tests..................................................................................................................... 22
10.1.3 Performance Data................................................................................................... 22
10.1.4 Test conclusions and projections ............................................................................... 22
10.1.5 Additional observations........................................................................................... 23
10.2 Overview of Oracle Performance Testing ......................................................................... 23
10.3 Data Load .................................................................................................................. 24
10.4 Single Query Runs ....................................................................................................... 24
10.5 Concurrency Tests ........................................................................................................ 24
10.6 Miscellaneous Tests ...................................................................................................... 26
Appendix A Useful Install Guides and Configuration Guides ................................................................ 27
Appendix B Component List for the 1TB Medium Reference Configuration .............................................. 28

1 Overview
This document is a product of the ongoing HP/Oracle Reference Configuration (RC) effort in HP. It is
a performance validation as well as a compilation of best practices, performance metrics, and
general observations regarding one specific RC, the 1TB HP-UX SMP Intermediate Performance
configuration.

By documenting these areas, the intent is to validate the performance of this specific RC, as well as
recommend best practices and facilitate the design of BI implementations in order to accomplish the
following goals.
• Validate the performance of this RC.
• Provide a real-world proof point, based on a retail industry schema (7 fact tables and 17
dimension tables), 39 business queries, and approximately 1 TB of raw data.
• Ensure solid and efficient design – a balanced configuration.
• Reduce the complexity and time surrounding design choices.
• Decrease time to implementation.
• Further the knowledge of the BI design community.

It is important to note that we do not intend to duplicate existing install and design documentation
within this document. Instead, we will complement these existing documents with the intent of
providing any RC specific install and design information when it is missing from the existing
documentation. A list of install and configurations guides can be found in Appendix A. The majority
of these guides are delivered with the products.

2 Intended Audience
This paper has been created for technologists who understand and work with HP-UX/Oracle
infrastructure solutions for BI. Specifically, the following specialists will find this paper useful.
• BI Architects
• Capacity planners
• Performance specialists
• Project delivery participants for HP-UX/Oracle BI infrastructure implementations

3 Reference Configurations Overview


3.1 Overview

HP has developed a comprehensive set of BI Reference Configurations in order to:


1. Decrease time to implementation
a. Decrease planning cycles
i. Simplify the architectural design process
ii. Reduce the complexity of the architecture
b. Focus efforts around viable architectures
c. Present balanced architectures as an easily achieved foundation
2. Increase customer success
a. Reduce risk
b. Reduce architectural errors
c. Ensure sufficient IO throughput and query processing while maximizing performance
and decreasing IO bottlenecks

These reference configurations are based on:


• Over 13 years of measuring BI workloads
• Numerous measurements made during customer BI POC’s and benchmarks
• The TPC-H benchmark
• Proven, real-world BI deployments

The HP Reference Configurations are designed to meet requirements specifically for BI databases
rather than those requirements more applicable to transaction-based (OLTP) databases. These
configurations use off-the-shelf products to support a broad range of BI workloads. Furthermore, they
are designed to scale according to customers’ demands for growth.

Achieving balanced system I/O throughput, specifically for the BI workload, was a key objective in
developing the configurations. A balanced system maximizes implementation wide BI performance.
The configurations are designed to avoid bottlenecks and achieve balanced I/O throughout all system
components, from storage to the server(s).

In addition to taking into account the typical Oracle data expansion from raw (atomic) data, the HP BI
reference configurations also provide for RAID-based storage availability and data distribution across
disks to maximize I/O throughput and performance. Storage array type and sizing were specifically
chosen for best fit with the workload intensity, as well as OS, server, and price considerations.

Each configuration provides a “best-fit” start point that can then be optimized based on customers’
specific data loading, reporting and query profiles. The configurations apply to the enterprise data
warehouse (EDW), data warehouse (DW) and data mart (DM) infrastructures. These configurations
specifically exclude operational data stores (ODS) and online analytical processing (OLAP).

IT planners, IT architects, and solutions integrators can use these configurations to make platform
decisions quickly, deploy standard products rapidly at low risk, and obtain predictable data
warehouse performance.

3.2 Configuration Groups


The reference configurations are grouped in the following manner.

1. There are 3 architectural families.


a. HP-UX / SMP / Integrity / Oracle
b. HP-UX / cluster / Integrity / Oracle RAC
c. Linux / cluster / ProLiant / Oracle RAC
2. All architectural families have the same 3 “Performance Level” Categories.
a. Advanced Performance Category
i. Optimized for balanced IO and highest performance
ii. Performance and cost are higher
b. Intermediate Performance Category
i. Optimized for balanced IO and high performance
ii. Total performance has been slightly decreased while reducing cost
significantly
c. Entry Level Performance Category
i. Optimized for Price/Performance
ii. Performance and cost are lower than the Advanced Performance and
Intermediate Performance categories.
3. All 3 Categories are divided into 6 columns delineated by Raw (atomic) data size
a. 500GB
b. 1TB
c. 2TB
d. 4TB
e. 7TB
f. 10TB

This paper will explore only one RC, the 1TB SMP Intermediate Performance configuration.

4 1TB HP-UX SMP Intermediate Performance Configuration


Refer to Figure a for an architectural view of the implementation we built and tested to validate this
specific RC.

It is a relatively small implementation consisting of a midrange server, a midrange array, a midsized
SAN switch, and a small Storage Management Appliance.

While it is a midsized implementation, it is important to note that it can handle TB+ sized data
volumes and reasonably complex BI schemas and queries, an assertion we believe we've proven in
this exercise.

rx3600 server
• (2) 1.6 GHz Itanium processors – 2 sockets, each with 2 cores
• 48 GB memory
• (2) 4Gb dual-port FC HBAs

EVA8000 array
• (84) 72 GB drives at 15,000 RPM
• (6) disk enclosures
• (2) controllers
• (2) pairs of internal switches

SAN Switch 4/16 Fibre Channel connections
• (8) 4Gb FC links from the EVA8000 to the switch
• (4) 4Gb FC links from the rx3600 server to the switch

DL380 G5 (Storage Management Appliance)

Figure a – 1TB SMP Medium RC hardware general configuration

5 Server Configuration
5.1 Server Processor
The rx3600 was populated with (2) 1.6 GHz Montecito 18M processors, each containing 2 cores.
This will fill both available processor slots in the server.

Figure b – Montecito processor

5.2 Server Memory


The server was populated with (6) 8GB memory modules for a total of 48 GB memory. Each memory
module is composed of (4) 2GB DIMMs for a total of 24 DIMMs in the server.

A 24 DIMM memory carrier board was used. All 24 DIMM slots were populated, filling up both sides
of the board.

5.3 Server I/O Cards


Two dual-port 4Gb FC interface cards were installed in the server. These cards are also referred to as
HBA (Host Bus Adapter) cards. They connect the server to the fabric (SAN switch) in this
configuration. The SAN switch in turn provides access to the EVA8000 storage array, completing the
server/storage data transfer subsystem.

Two different backplanes are available for this server. Choose the PCI-X 2.0 I/O backplane as it is
required for use with these FC cards.

Populate slots 3 and 4, on the backplane, with the 2 FC cards. These are the fastest, full speed PCI
slots on the backplane, rated at 266 MHz. Using these slots will ensure the maximum I/O throughput
through this part of the I/O subsystem.

Slot 3 device path = 0/7/1.

Slot 4 device path = 0/3/1.

If additional FC cards need to be added in the future, slots 5 and 6 should be chosen. They are the
next fastest I/O slots, rated at 133 MHz.

Figure c – 4Gb dual port FC card

5.4 Server Miscellaneous


5.4.1 Internal Storage
A minimum of (2) 146 GB internal drives should be installed. Choose a speed of 10,000 RPM or
better. Install these drives in slot IDs 1 and 2.

These drives should be configured with RAID 1, also called IM (Integrated Mirror).

The internal drives should be RAID 1 configured before installing the OS, or any other software.
Configuring with RAID 1 will provide the best reliability for the internal drive based subsystem of the
server. If a disk in an IM fails, the hot swap capability enables the volume to be easily restored by
replacing the faulty drive. The firmware then automatically re-mirrors to the replaced disk.

From the initial console menu, select the EFI Shell (Extensible Firmware Interface). EFI is an Intel
standard interface that sits between the firmware and the OS. EFI contains native and shell
commands.

Use the drvcfg command, a driver configuration protocol GUI found in the EFI shell, to configure the
IM feature (RAID 1). From the EFI shell, use the drvcfg -s command to start the configuration process.
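
A minimal sketch of the EFI shell session follows; the driver and controller handle numbers (25 and
27) are hypothetical examples, so list your own handles first.

  Shell> drvcfg                  # list driver/controller handle pairs that support configuration
   Drv[25] Ctrl[27] Lang[eng]    # example output line for the SAS controller
  Shell> drvcfg -s 25 27         # start the configuration utility for that pair
                                 # in the utility: select the adapter, choose Create IM Volume,
                                 # add the two internal disks, and commit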

5.4.2 Power Supplies


The server can have one or two hot swap power supplies. We recommend that two power supplies
be installed on the server. Along with their hot swap capability, this provides optimal power reliability
for the server.

5.4.3 USB DVD Drive


The rx3600 uses a USB DVD drive, which will be necessary for software installs. The device files will
be located in /dev/deviceFileSystem/Usb/MassStorage/…. .

6 Storage Configuration
6.1 SAN Configuration
6.1.1 SAN Switch

The SAN switch installation guide was relatively short and the installation followed the guide without
requiring additional steps.

6.1.2 Zoning

With the use of the EVA array and the SAN switch come design considerations regarding zoning.
We decided not to use zoning because it wasn't structurally necessary and performance would not be
affected. Our reasoning follows.

• Zoning enforcement (authorization) wasn’t germane to this exercise and was not used.
Customers that have authorization needs can of course implement zoning enforcement.
• Zoning by application wasn’t needed for this exercise. It also wouldn’t be typical in a stand
alone DW environment at a customer site.
• As this was a small configuration, with a manageable number of paths to LUNs, multiple
zones were not needed. In HP-UX 11i v3, 32 paths per LUN are supported. If we had
needed more than 32 I/O paths per LUN for this specific RC, we would have created the
necessary zones.

That being said, HP requires, for support, that an implementation such as this have a minimum of two
zones.
1. Zone 1: consisting of the DL 380 G5 SMA (Storage Management Appliance)
adapters and the EVA controller ports.
2. Zone 2: consisting of the EVA controller ports and the rx3600 HBA’s.

The fact that we did not create these zones had no bearing on the performance of our RC.

6.1.3 EVA Hardware

The key physical characteristics of the EVA8000 array can be seen in Table 1 below.

EVA physical characteristic     Value
Number of disk mechanisms       (84) 15k 72GB
Number of disk enclosures       6
Number of controllers           2
Device FC-AL switches           4
Cache per controller pair       8 GB
Host ports                      (8) 4 Gb/sec FC ports

Table 1 – Key physical characteristics for the EVA8000 array



6.1.4 EVA Configuration

Based on our analysis and benchmarking, along with results from other benchmarks, the following
EVA configurations were deemed optimal for this RC implementation.

HP Command View EVA is installed on the HP OpenView Storage Management Server (AKA Storage
Management Appliance; the DL380 G5). Command View was used to configure the EVA8000 array
as follows.
• Create one disk group consisting of all 84 drives. This yields the optimal performance
compared to multiple disk groups.
• Efficiency also improves if the disk group consists of an even number of drives (84 drives)
and the number of drives is a multiple of the number of disk enclosures (6 enclosures x 14
drives = 84 drives).
• Specify RAID 5 for each LUN, as it’s being configured. Performance data and conclusions on
RAID 1 vs. RAID 5 can be found in section 10.1 below.

• 4 LUNs (AKA VDISKS), each 500 GB in size, were created in this disk group. The 500GB
size was chosen for several reasons.
1. Equal sized LUNs ensure that IO throughput and performance will be balanced.
2. A minimum of 2 LUNs is needed for this implementation to balance the load across
the 2 controllers in the EVA8000 storage array.
3. Each controller should always have the same number of equal sized LUNs.
4. The 500 GB size chosen was deemed a logical choice regarding
manageability and future growth, as too many small LUNs decrease manageability.
• It is necessary to note the WWIDs (World Wide IDs) of the LUNs once they've been created.
These WWIDs will be shown in Command View. The WWIDs will be needed when
identifying the associated LUNs to Oracle ASM. Use the HP-UX management tool SAM or
SMH (the SAM replacement) to identify the DSF names associated with the WWIDs for the LUNs
that have been added. scsimgr can also be used to obtain the WWIDs for the LUNs. Execute
the following command: 'scsimgr -p get_attr all_lun -a device_file -a wwid'.
• LUN 0 and LUN 1 were assigned to Controller A. Configured as Preferred Path A with
failover/fallback to Path B.
• LUN 2 and LUN 3 were assigned to Controller B. Configured as Preferred Path B with
failover/fallback to Path A.
The LUNs are now balanced across both array controllers with failover enabled. The EVA array will
assign LUNs in a round robin fashion to controllers but the failover/fallback paths need to be
specifically configured.

• Add the rx3600 host.


• Add all rx3600 host-based FC adapter WWNs (World Wide Names) that are visible.
The WWNs for the server FC adapters can be found on the rx3600 server by executing
'/opt/fcms/bin/fcmsutil /dev/fcdN', where N is the adapter instance number.

• Assign the LUNs to the host system.


• It is optimal, for performance reasons, to zero out the 4 LUNs by using the following
command: 'dd if=/dev/zero of=/dev/rdisk/diskxx bs=256k'. This forces the EVA array to
allocate all the blocks in the LUNs, which speeds up subsequent (a scripted sketch follows this
list):
o tablespace creation
o tablespace autoextends
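
The zeroing of the four LUNs can be scripted and run in parallel; a minimal sketch, assuming
hypothetical DSF names disk28 through disk31 (substitute the names reported for your LUNs):

  # Zero out all four LUNs in parallel; each dd runs until the LUN is full
  for d in disk28 disk29 disk30 disk31
  do
    dd if=/dev/zero of=/dev/rdisk/$d bs=256k &
  done
  wait    # block until all four dd streams have completed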

6.2 Miscellaneous
• Leave read caching enabled. This enables prefetching when a sequential read stream is
detected, increasing performance.
• In numerous tests, both baseline and with actual query workloads, there was no significant
difference in the throughput of RAID 1 vs. RAID 5 for a (mostly) read-only workload. Thus
RAID 5 was chosen for its price/performance advantage over RAID 1. If the best statistical
reliability is an absolute requirement, then RAID 1 should be chosen, making allowance for
the additional storage required by this level of RAID.

7 OS Configuration
HP-UX 11i v3 was used for this RC validation.

Starting in the early 1990s, HP undertook an analysis of the DW/BI workload in order to specifically
improve the performance of HP-UX for this workload. Over the years, tuning and functionality
improvements have made HP-UX aware of important BI workload requirements, so that minimal effort
is required to set up HP-UX for BI implementations. Following this lengthy tradition, improvements
have been made to the HP-UX 11i v3 Mass Storage Stack. Many of these improvements have a
direct bearing on BI implementations, especially larger ones.

7.1 HP-UX 11i v3 Mass Storage Stack Improvements


There were major improvements made to the kernel Mass Storage Stack (MSS) regarding
performance, manageability, scalability, and availability in HP-UX 11i v3. These improvements have
made a major impact on the I/O throughput, ease of management, and usability of HP-UX 11i v3 for
large BI implementations.

7.1.1 Performance Improvements

7.1.1.1 Native Multi-pathing


Transparent, native multi-pathing was added to the MSS. One important multi-pathing benefit is the
ability to use multiple paths to a single LUN in order to provide load balancing of I/Os across all
usable paths to a LUN. I/O requests are transparently distributed across all available paths to a
LUN using a choice of load balancing policies. Load balancing policies determine how a path is
selected for each individual I/O request. Not all of these policies are useful for the BI workload, so
we'll discuss all policies and the reasoning behind our choices.

• least command load policy – the path with the least number of outstanding I/O requests is
selected from all available paths; it is suitable when lunpaths have asymmetric I/O
performance characteristics
The least command load policy was chosen for this RC implementation as it exhibited the best
performance for this typical "read heavy" BI workload. From a logical perspective, it also appears to
be the most flexible fit for the workload. In lab tests, it has been shown to balance I/Os better when
the workload has a mixture of I/O sizes and/or variations in path performance.

• round-robin policy – (default policy) a path is selected in a round-robin manner from all
available paths; it is suitable when lunpaths have similar I/O operation characteristics.

Congestion (overloaded paths) and/or mixed-size I/Os can lead to unbalanced I/O. This is
workload dependent. That being said, we found the round-robin policy to be only slightly less efficient
than the least command load policy.

• cell-local round-robin policy – a path belonging to a specific cell in a cell based server, on
which the I/O request was initiated, is selected in a round-robin manner from all available
paths; it is suitable for a cell based system with significant latencies for non-local access
operations.
The server in these tests, the rx3600, was not a cell based system. Thus, this policy is not applicable.

• path lock down policy – a single path is selected for all I/O requests to the LUN.
This policy doesn’t provide the transparent parallelism needed for the typical BI workload.

• preferred path policy – similar to the path lock down policy with the addition that it provides
for automatic path failover when the preferred path fails.
This policy doesn’t provide the transparent parallelism needed for the typical BI workload.

7.1.1.2 Boot/scan Improvements


The HP-UX 11i v3 MSS fully parallelizes the probing of HBAs, targets, and LUNs in order to
significantly reduce system boot times and system I/O scans.

This is significant for high-end BI implementations. These high-end BI implementations, which contain
large storage farms housing large Data Warehouses and Marts, create the need for numerous I/O
components, all of which need to be scanned for typical boot and scan tasks.

Reducing the time to boot a system, e.g. after a yearly planned maintenance downtime, means that
the system will be up and providing access to business users in a minimum amount of time. Reducing
I/O scan times on live systems eases maintenance time for the system administrator, reducing the time
they need to spend on typical tasks.

7.1.2 Manageability Improvements

7.1.2.1 Agile Addressing


One of the most visible changes to the HP-UX 11i v3 MSS is the introduction of agile addressing
(AKA persistent LUN binding). Agile addressing is the ability to address a LUN with the same Device
Special File (DSF) regardless of the location of that LUN. Thus, the device file for that LUN stays the
same even if the LUN is moved between FC cards (HBA’s), moved between ports on a switch, or
presented in a different manner between host and target port.

Agile addressing removes several significant shortcomings of pre-11i v3 legacy DSF management, all
related to the non-persistence of LUN bindings. Legacy-style DSF management entailed several
labor-intensive management tasks, especially in typical high-end BI implementations with large
storage subsystems.

Thus, the administrator can create a single (persistent) DSF for each unique LUN in the server, no
matter how many lunpaths are associated with the LUN or if any of the lunpaths change. Thus, LUN
DSFs are immune to key SAN topology changes. We found this new functionality to be an incredible
time saver and elegant in its design.

7.1.2.2 Native Multi-pathing


Adding native multi-pathing to the MSS eliminated the need for SecurePath, PowerPath, AutoPath,
DMP, and pvlinks. This single native solution replaced the need for numerous vendor and device
specific multi-pathing drivers. System administrators no longer have to learn myriad commands, from
multiple vendors, to manage this environment.

7.1.2.3 Miscellaneous
• Automatic monitoring
• Dynamic discovery of lunpaths and LUNs
• All existing relevant commands have been enhanced to understand LUN multi-pathing

7.1.3 Scalability Improvements

I/O throughput in HP-UX 11i v3, for the BI workload, has increased significantly from HP-UX 11i v2,
largely due to the MSS improvements.

Additionally, looking at Table 2, the following limits have been increased for key MSS components.

Functionality                            HP-UX 11i v3    HP-UX 11i v2
Maximum LUNs per system (tested)         16,000          8,000
Maximum paths per LUN                    32              8
Maximum lunpaths per system (tested)     64,000          16,000
Maximum storage controllers              16,000,000      256

Table 2 – HP-UX 11i v3 MSS scalability improvements

7.1.4 Availability Improvements

The improvements to the HP-UX 11i v3 MSS included new features increasing system availability as
well.
• lunpath monitoring and reporting improvements
• automatic failover and recovery of lunpaths
• lunpath authentication – avoids corruption by detecting abnormal LUN behavior, triggering a
failure report, failing over pending I/Os, and preventing any more I/Os to the LUN in
question
All of these improvements, which are automated, also improve manageability by relieving the
system administrator of several tasks.

7.2 Kernel Parameters


Only those kernel parameters which were changed are listed in Table 3 below.

Please note that these kernel parameters are workload dependent. Workloads that vary significantly
will often require different parameter values.

Nevertheless, the following list is a good starting point for kernel parameters that should be
investigated for modification.

Tunable            Default        Used
aio_max_ops        2048           16384
lcpu_attr          0              1
max_thread_proc    256            2048
maxdsiz_64bit      0x100000000    0x800000000
maxfiles           2048           4096
maxssiz            8388608        134217728
maxssiz_64bit      268435456      1073741824
maxtsiz            100663296      1073741824
maxtsiz_64bit      1073741824     4294967296
maxuprc            256            3780
msgmnb             16384          65536
msgmni             512            4200
msgtql             1024           5120
ncsize             8976           36672
nflocks            4096           4200
ninode             8192           120000
semmni             2048           4200
semmns             4096           8400
semmnu             256            4196
semume             100            512
shmmax             0x40000000     0xC00000000
shmmni             400            2048
shmseg             300            512
STRMSGSZ           0              65535
vps_ceiling        16             64

Table 3 – Key changed HP-UX kernel parameters
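
These tunables can be set with the kctune command on HP-UX 11i v3; a minimal sketch for a few of
the Table 3 values (kctune persists each new value and reports whether a reboot is required):

  kctune aio_max_ops=16384
  kctune max_thread_proc=2048
  kctune ninode=120000
  kctune shmmax=0xC00000000
  kctune aio_max_ops         # display a single tunable to verify the change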

7.3 Miscellaneous OS Configurations and Patches

• 24GB paging configured


• 4k page size

Before installing Oracle, remove all legacy DSFs in HP-UX by executing ‘rmsf –L’ in the HP-UX shell.
This is necessary so that the new Agile Addressing will be used for DSFs. The new raw devices will
now reside in /dev/rdisk instead of the legacy /dev/rdsk directory.

Using scsimgr, set the MSS load balancing policy to least command load for all LUNs with multiple
paths. Execute the following command:
'scsimgr save_attr -N /escsi/esdisk -a load_bal_policy=least_cmd_load'
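
The saved default and the policy in effect on a given LUN can be checked afterwards; a small
sketch, assuming a hypothetical DSF /dev/rdisk/disk28:

  scsimgr get_saved_attr -N /escsi/esdisk -a load_bal_policy   # saved class-level default
  scsimgr get_attr -D /dev/rdisk/disk28 -a load_bal_policy     # current policy on one LUN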

The following patches are required for this RC implementation.

• ALUA patches (I/O subsystem patches to allow Asymmetric Logical Unit Access (ALUA) – an
explanation of ALUA functionality can be found in Section 10.1.5)
o PHCO_36250
o PHKL_36249
o UNOF_99334
• English CDE must be installed to enable the OUI (Oracle Universal Installer) to work

7.4 Volume Management


Oracle ASM was used for volume management. It is discussed in Oracle subsection 8.2 below.

As ASM works on raw devices, the Oracle database did not reside on an HP-UX file system for this
exercise.

7.5 Networking
Nothing out of the ordinary had to be added or changed in the networking configuration for this implementation.

8 Oracle Configuration and Best Practices


Please note that this section is not meant to reiterate the information contained in the
Oracle Database – Data Warehousing Guide – 10g Release 2. Instead, we've attempted to capture
best practices and specific data germane to installing Oracle 10gR2 on HP-UX 11i v3 for this specific
RC.

8.1 Oracle Installation Best Practices


The use of Oracle ASM is strongly recommended in order to simplify storage management as it
pertains to Oracle. We used ASM in this RC implementation.

ASM and Oracle database installations are relatively straightforward. Nevertheless, we observed a
few best practices that we have noted below.

• It's recommended to create an Oracle product base directory and then install ASM and
Oracle in separate homes. This simplifies upgrading and patching the Oracle
database separately from ASM. This point is especially important if multiple Oracle instances
exist on the same server.
• Install and configure ASM first, before installing the Oracle database. This allows the Oracle
database to easily use ASM for storage management.
• Listener processes are needed for both ASM and Oracle, in order to provide connections to
the Oracle Enterprise Manager (OEM). Additionally, the listener is needed by numerous other
applications that need to access the Oracle database over the network, e.g. Siebel, Business
Objects, etc. When installing ASM first, from within the Oracle Universal Installer (OUI), do
not choose the default port for the listener. Choose another port for ASM. Later, you will need
to choose the default port for the Oracle database so that the OEM users will connect
properly to the Oracle database.
• To enable the SCHED_NOAGE policy (decreasing latch waits and sleeps), to enable
asynchronous I/O usage, and to use HPM (reliable user-mode IPC), the dba group has to be
assigned the following privileges with the setprivgrp command, ‘setprivgrp dba MLOCK
RTPRIO RTSCHED’. To make these changes persistent over reboots, store these privileges as a
line in /etc/privgroup; ‘dba MLOCK RTPRIO RTSCHED’.
• Ensure that /dev/async exists (it does by default) and is enabled properly on HP-UX, with a mode of 666.
/dev/async is needed by Oracle for efficient asynchronous I/O.
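
A short sketch of these two steps combined (the privilege grant and the /dev/async check), run as
root with standard HP-UX commands:

  # Grant the privileges for the current boot and persist them across reboots
  setprivgrp dba MLOCK RTPRIO RTSCHED
  echo "dba MLOCK RTPRIO RTSCHED" >> /etc/privgroup
  getprivgrp dba              # confirm the three privileges are listed

  # Confirm the asynchronous I/O device node exists with mode 666
  ls -l /dev/async
  chmod 666 /dev/async        # only needed if the mode is wrong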

8.2 Oracle ASM, Data and Storage Design


ASM installation and configuration was straightforward. Furthermore, in cooperation with the
improved HP-UX 11i v3 MSS, the effort involved became simple and quick.

Before configuring and creating structures within ASM, the following tasks must be accomplished to
allow ASM to access necessary storage structures.

Enabling ASM to make use of necessary LUNs requires the following steps.

• Previously, when creating the LUNs on the EVA array specifically for the Oracle database
environment, WWIDs were noted for each specific LUN. This was explained in detail in
section 6.1.4.
• The scsimgr command can be used to find the DSFs ( /dev/rdisk/disk…. ) associated with
these WWIDs. Run 'scsimgr -p get_attr all_lun -a device_file -a wwid' to get a list of the
WWIDs, DSFs, etc.
• Change the ownership of each raw device file, representing a LUN associated with the
Oracle database environment, to the oracle user: 'chown oracle:dba /dev/rdisk/disk…'.
Then change the file mode of each raw device file to rw-rw----: 'chmod 660
/dev/rdisk/disk…'.

Executing these tasks will allow ASM to discover, identify, and use these storage resources.
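
A loop form of the ownership and mode changes, assuming the same hypothetical DSF names
disk28 through disk31 used earlier:

  # Hand the four LUN DSFs to the oracle user so ASM can discover them
  for d in disk28 disk29 disk30 disk31
  do
    chown oracle:dba /dev/rdisk/$d
    chmod 660 /dev/rdisk/$d
  done
  ls -l /dev/rdisk   # verify owner oracle:dba, mode rw-rw----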

ASM was created with one disk group, an easy and manageable configuration. All LUNs were
assigned to this single ASM disk group. Since all LUNs were created with equal sizes, optimal I/O
balance across the LUNs, within ASM, is assured.

The ASM disk group can be created during the installation of ASM, within OUI, if the design is
already known. Otherwise, the disk group can be created at a later time using OEM or SQLPlus.

ASM provides the following choices for redundancy, listed in Table 4.

Disk Group Type        Supported Mirroring Levels     Default Mirroring Level
Normal redundancy      2-way, 3-way, Unprotected      2-way
High redundancy        3-way                          3-way
External redundancy    Unprotected (none)             Unprotected (none)

Table 4 – ASM redundancy types

When creating the disk group, do not use any ASM redundancy as the EVA8000 array already
provides redundancy. Specifically choose ‘external redundancy’. If you choose any other level of
ASM redundancy, database performance will degrade significantly.
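
A minimal SQL*Plus sketch of the disk group creation, run against the ASM instance as SYSDBA; the
disk group name DATA and the DSF names are hypothetical:

  -- Point ASM discovery at the agile DSF directory
  ALTER SYSTEM SET asm_diskstring = '/dev/rdisk/disk*';

  -- One disk group, no ASM mirroring - the EVA8000 already provides RAID protection
  CREATE DISKGROUP data EXTERNAL REDUNDANCY
    DISK '/dev/rdisk/disk28',
         '/dev/rdisk/disk29',
         '/dev/rdisk/disk30',
         '/dev/rdisk/disk31';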

• Create the Oracle database control files and default structures (e.g. system, sysaux, redo
logs, etc.) in this disk group.
• Create all Oracle tablespaces in this disk group.

There are two options, relating to datafiles, when creating tablespaces, listed in Table 5.

Tablespace option    Datafiles    Management
Default              Multiple     Nontrivial – multiple files
Bigfile              One          Simple – one file

Table 5 – Tablespace variants

We analyzed the performance of the two options and observed no difference in performance if
datafile creation (default tablespace) was performed serially. But creating datafiles serially takes a
long time for a database this size. Thus, we created the datafiles in parallel, to save time. But when
we ran serial scans on the large fact tables resident in a tablespace created with 'parallel creation'
datafiles, performance degraded by 20%-30%. Our findings were consistent over several runs. If the
default tablespace creation is chosen, serial datafile creation must be chosen to preserve table scan
performance.

Thus, for time saving and performance reasons, choose the 'bigfile' option for tablespace creation, as
it also eases management.

We created a single 1.2 TB ‘bigfile tablespace’ for the tables and indexes. A single 500 GB
temporary tablespace was also created in this ASM disk group. These sizes were chosen based on
data volumes and workload projections and proved to be solid choices during the testing phase.
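
A sketch of the two tablespace creations; the tablespace names and the DATA disk group are
hypothetical, and the sizes match those described above:

  -- 1.2 TB bigfile tablespace for tables and indexes
  CREATE BIGFILE TABLESPACE dw_data
    DATAFILE '+DATA' SIZE 1200G;

  -- 500 GB bigfile temporary tablespace in the same ASM disk group
  CREATE BIGFILE TEMPORARY TABLESPACE dw_temp
    TEMPFILE '+DATA' SIZE 500G;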

We chose locally managed tablespaces for better performance, better use of space, and easier
management. A locally managed tablespace tracks all extent information in the tablespace itself using
a bitmap, obviating the need to look elsewhere for pertinent information needed to manage the
tablespace.

8.3 Oracle Parameters


Only those Oracle parameters which were changed are listed below, in Table 6.
HP Reference Configurations for Oracle 10g Data Warehouses Release 2.0 Page 18

Please note that these Oracle parameters are workload dependent. Workloads that vary significantly
will often require different parameter values. Nevertheless, the following list is a good starting point
for Oracle parameters that should be investigated for modification.

Parameter                        Default               Used
audit_trail                      none                  FALSE
compatible                       10.0.0                10.2.0.1.0
cpu_count                        auto-set by Oracle    8
db_block_size                    8192                  16384
db_file_multiblock_read_count    platform dependent    64
db_cache_size                    calculated            5G
db_files                         200                   1200
optimizer_mode                   all_rows              choose
parallel_adaptive_multi_user     TRUE                  FALSE
parallel_max_servers             derived               32
parallel_min_servers             0                     8
pga_aggregate_target             calculated            24G
processes                        40 to OS dependent    1024
shared_pool_size                 derived               4G

Table 6 – Key changed Oracle parameters
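
The dynamic parameters among these can be set through the server parameter file; a brief sketch
(db_block_size is fixed at database creation and cannot be altered afterwards):

  ALTER SYSTEM SET db_file_multiblock_read_count = 64 SCOPE=BOTH;
  ALTER SYSTEM SET parallel_adaptive_multi_user = FALSE SCOPE=BOTH;
  ALTER SYSTEM SET pga_aggregate_target = 24G SCOPE=BOTH;
  ALTER SYSTEM SET shared_pool_size = 4G SCOPE=SPFILE;   -- applied at the next restart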

8.4 Oracle DM/DW Best Practices


8.4.1 Data Compression

We ran a compression checking program on all the data sets. The data for the fact tables, which
comprised most of the total raw data, didn’t contain a high enough occurrence of repetitious data
strings, thus the individual data sets didn’t compress enough to warrant using the compression
functionality. Please note that this is an unusual data set that has a low degree of repeated data, thus
the low compression ratio.

Even though our data sets didn't contain sufficient redundancy, we recommend that data compression
opportunities be explored for all large tables contained in implementation data sets. While not all
tables' data will show an attractive compression ratio (approximately 25% or more for large tables),
many do, and the testing required is minimal. An efficient compression checking tool can be found on
Oracle's OTN website.
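
Where the OTN tool is not at hand, the check can be approximated by hand; a sketch assuming a
hypothetical fact table named sales_fact:

  -- Build uncompressed and compressed copies of a ~1% block sample
  CREATE TABLE sales_sample NOCOMPRESS AS
    SELECT * FROM sales_fact SAMPLE BLOCK (1);
  CREATE TABLE sales_sample_comp COMPRESS AS
    SELECT * FROM sales_sample;

  -- Compression ratio ~ uncompressed blocks / compressed blocks
  SELECT segment_name, blocks
  FROM   user_segments
  WHERE  segment_name IN ('SALES_SAMPLE', 'SALES_SAMPLE_COMP');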

There is a penalty when loading data with compression turned on. The penalty has been measured
and varies in relation to the compression ratio. The penalty for reading compressed data has been
repeatedly measured at between 1% and 2%. Data compression should not be used for a workload that
exhibits a high degree of writes and updates; it is not recommended for any OLTP workload and
should be used with caution for the ODS workload.

There are two major benefits of using data compression when large data sets show an attractive
compression ratio. With more data compressed into an individual data block, fewer data blocks
need to be accessed and moved into memory, thus reducing I/Os per query, and ultimately per
workload, and improving performance. Data compression also reduces storage needs, especially for
midsized and large BI structures.

8.4.2 Data Partitioning

Data partitioning has been an essential and useful function of Oracle for years, especially for
high-end BI structures. Composite partitioning has proven especially useful, particularly range/hash
partitioning. For our implementation, range (on date) was chosen as the primary partitioning value
with hashing into subpartitions. It is recommended to make the number of hash partitions a multiple of
the number of CPUs on the server.

We tested a number of range values (e.g. month, day) and found that partitioning by day yielded the
best query results.
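
A minimal sketch of a table created with day-range partitions and hash subpartitions; the table,
column, and partition names are illustrative, and 8 hash subpartitions is a multiple of this server's
4 cores:

  CREATE TABLE store_sales (
    sale_date  DATE,
    item_id    NUMBER,
    store_id   NUMBER,
    amount     NUMBER(12,2)
  )
  PARTITION BY RANGE (sale_date)
  SUBPARTITION BY HASH (item_id) SUBPARTITIONS 8
  ( PARTITION p20070101 VALUES LESS THAN (DATE '2007-01-02'),
    PARTITION p20070102 VALUES LESS THAN (DATE '2007-01-03')
    -- ... one partition per day; generate the full list with a script
  )
  PARALLEL;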

The optimum level of partitioning will vary across diverse BI environments; one size won’t fit all
implementations. It is advisable to balance the granularity of partitioning with the management
required of fine grained partitioning. Thus, it is necessary to take into account how many partitions
each level of granularity will produce and weigh that against the performance gains.
Customers are advised to validate query plans of long-running and intensive queries to ensure that
table partitions are used effectively.
Partitioning by date range on certain tables can also simplify management as tasks can now be
focused on a partition basis instead of on an entire table. Examples of partition based management
include rolling window operations such as adding a current day or month’s data and dropping the
oldest month’s data.

8.4.3 Parallelism

The Degree of Parallelism (DOP) needs to be carefully managed in this environment, especially for the
larger queries and at high concurrency levels.

• We created all tables with the parallel switch, turning on parallelism.
• We set parallel_min_servers to 8 so that startup costs are not incurred.
• We set parallel_max_servers to 32 to minimize context switching.
• We turned off parallel_adaptive_multi_user (FALSE) to maximize the parallel efficiency of
long running queries. (These settings are sketched after this list.)
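
A sketch of these settings in SQL, continuing the illustrative store_sales table from section 8.4.2;
SCOPE=SPFILE applies the values at the next instance restart:

  ALTER TABLE store_sales PARALLEL;   -- let the optimizer choose the DOP
  ALTER SYSTEM SET parallel_min_servers = 8 SCOPE=SPFILE;
  ALTER SYSTEM SET parallel_max_servers = 32 SCOPE=SPFILE;
  ALTER SYSTEM SET parallel_adaptive_multi_user = FALSE SCOPE=SPFILE;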

8.5 Miscellaneous
The version of Oracle used for this exercise was:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – 64-bit HP-UX Itanium.

Oracle patch number 5086089, relating to shmmax, needs to be loaded for this RC.

9 BI Workload
A workload suitable for simulating Data Warehouse and Data Mart workloads was used for this
exercise. The workload consisted of a Retail Industry schema, data, and queries; representing a
retailer with presence in stores, through catalogs, and on the Internet.

9.1.1 Schema
The database schema used was a dimensional model, with snowflaking, consisting of 7 fact tables
and 17 dimension tables, for a total of 24 tables. As is usual with these types of retail schemas, a
number of the fact tables share the same dimension tables, which enables joins or unions between all
large fact tables.

This schema is more representative of a large Data Mart, Super Mart, or a smaller Data Warehouse.
It is not a true Enterprise Data Warehouse where every available set of corporate data is dumped into
the database structure, spanning every business unit and logical entity in the Enterprise.

9.1.2 Data Volume and Population

The raw data set spans 5 years for the main fact tables (3 Sales tables) and slightly longer for the
secondary fact tables (3 Returns tables). Data skew exists over the course of a calendar year, which is
the norm for retailers. The skew manifests itself with the smallest volume of data for the first 7 months
of each year, approximately double the data for the next 3 months, and lastly more than double the
previous 3 months for the last 2 months of the year. More than half the data for each year is located
in the last 2 months of each year.

The row counts for the fact tables can be found in Table 7 below.

Table           Number of rows/table
Fact Table 1    2.9 Billion
Fact Table 2    1.4 Billion
Fact Table 3    782 Million
Fact Table 4    720 Million
Fact Table 5    288 Million
Fact Table 6    144 Million
Fact Table 7    72 Million

Table 7 – Table row counts

The total raw data size, measured in the actual flat files used to load the database, was 918 GB.

9.1.3 Queries and Query Streams

A query suite of 39 different queries was used for the testing. The queries were grouped in 3 different
categories to enable the simulation of different user types. The categories and descriptions are listed
in Table 8 below.

Query category    Description

Entry level       Queries that typically access one fact table and a number of related
                  dimension tables. The query makes use of partition pruning on the fact table
                  and typically accesses a window of 1 month to a maximum of a quarter of
                  data. The use of enough and proper constraints ensures a quick turnaround.
Intermediate      Queries that typically access 1-2 of the large fact tables and a number of the
                  related dimension tables. The queries typically access 1 year or more of the fact
                  table data.
Advanced          The queries are similar to the Intermediate queries, but due to the business
                  questions being asked, full table scans on the fact tables are performed.

Table 8 – Query category descriptions


A query management tool is used to create a query stream of all the queries in a specific category.
The queries are placed in random order within the stream. This was done for all 3 query categories,
creating multiple query streams that perform only Entry level, Intermediate, or Advanced category
queries. The streams are then simultaneously submitted through a concurrency simulation tool,
effectively simulating a concurrent query environment.
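
The concurrency simulation tool itself is not reproduced here; conceptually, the submission step
reduces to launching one database session per stream, sketched below with hypothetical stream
script and account names:

  # Launch one sqlplus session per stream file and wait for all to finish
  for s in stream01 stream02 stream03 stream04 stream05 \
           stream06 stream07 stream08 stream09 stream10
  do
    sqlplus -s dw_user/password @${s}.sql > ${s}.log 2>&1 &
  done
  wait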

10 Performance Data
Extensive performance data was gathered from the System Under Test (SUT). Two distinct types of
testing were accomplished:

1. Base hardware I/O testing to gather baseline numbers and identify any possible I/O
bottlenecks.
2. Oracle performance testing.

10.1 Base System Test Results


10.1.1 Setup

Diskbench v1.1 was used to perform I/O testing. We specifically wanted to compare VRAID1 vs.
VRAID5, using different block sizes, on the EVA8000 array. Additionally, we tested new MSS
functionality - the two relevant load balancing policies and the ALUA patch set.

A single EVA disk group was created for all 84 disks. Four 500GB LUNs were created on the EVA
disk group and presented to the host. The LUNs were then “zeroed out” to ensure full allocation of all
the blocks using the dd command; ‘dd if=/dev/zero of=/dev/rdisk/diskxx bs=256k’.

10.1.2 Tests

Tests were done with the following combinations.

• With VRAID1, with the ALUA patches, and different block sizes
• With VRAID1, without the ALUA patches, and different block sizes
• With VRAID5, with the ALUA patches, and different block sizes
• With VRAID5, without the ALUA patches, and different block sizes
• VRAID1 for sequential reads, sequential writes, and random reads
• VRAID5 for sequential reads, sequential writes, and random reads
• With the round robin I/O load balancing policy
• With the least command load I/O load balancing policy

10.1.3 Performance Data

The typical read-predominant BI workload is neither strictly sequential nor strictly random.
Characteristics of both are evident and are definitely workload dependent.

It is important to note that, since disk striping is employed for data distribution and data layout, even
read-predominant BI workloads do not have a strictly sequential read signature; nor do they have
a strictly random one. Instead, the read patterns sit somewhere in between. Some would call it
a 'disjoint sequential workload', predicated upon the 1MB blocks of associated data striped across
disk mechanisms.

We tested the EVA8000 for the combinations above to understand the performance "bookends", i.e.
pure sequential and pure random. Understanding the peak performance of each variant allowed us to
tune the system towards whichever combination and throughput value proved superior. These bookends
were important for understanding what was possible, for the EVA8000 array, within the confines of a
sterile test. The results helped narrow down design choices for database design and performance testing.

10.1.4 Test conclusions and projections

Based upon these RAID level lab tests for a read-predominant workload, and several other customer
benchmarks we've observed over the last few years, we believe there is little significant performance
difference between VRAID1 and VRAID5 on EVA8000 arrays. Depending upon the workload being
tested, either RAID level has been seen to outperform the other, but only by statistically insignificant
margins, for a largely read-only workload with larger block sizes.

For performance reasons, we recommend using 1MB block sizes with a BI workload that is read
predominant. At this block size, it is apparent that the ALUA patches make a significant difference
with both VRAID1 and VRAID5. Both RAID levels can be seen to produce roughly equivalent
performance using 1MB block sizes.

The choice of RAID ultimately comes down to price vs. reliability for a read-predominant workload.
RAID 5 (20% overhead for 4D+1P) is cheaper than RAID 1 (50% overhead for mirroring),
but RAID 1 provides superior reliability.

10.1.5 Additional observations

When configuring a LUN on the EVA array, one controller needs to be specified as active and the
second controller needs to be specified as the failover controller. Note that the second controller is
passive, i.e. slower for I/Os issued to the associated LUN.
Without the ALUA patches, the OS would issue I/Os equally between the active and passive
controllers for each LUN with round-robin load balancing. Using least command load, more I/Os
would be issued to the active controller for the LUN, but up to 25% of I/Os would still be issued to the
passive controller.
Loading the ALUA patches eliminates or severely limits the issuance of I/Os to the passive
controller of the LUN, based on the load balancing policy.
• round robin policy - no I/Os were issued to the passive controller.
• least command load policy - approximately 1 in a million I/Os will be issued to the passive
controller. This minuscule issuance of I/Os is used for polling purposes.

10.2 Overview of Oracle Performance Testing


We intentionally designed these performance exercises to mimic the experience and expenditure of a
typical customer for this specific implementation.

We did not “squeeze blood from a stone” and we used “off the shelf” components.

• There were no benchmark specials.


• Summary tables (materialized views) were not used.
• “In column” aggregates were not used.
• Bitmap join indexes were not used.
• Query hints were not used.
• No unreleased software or unreleased patches were used.

Thus, the performance that was achieved should be easily accomplished by most customers.

Streams of queries were used throughout Oracle performance testing. A stream is a collection of
queries to be run in order. To simulate concurrency, several streams are run concurrently.

There were 5 different query runs.

1. Power – the queries were run in serial order with no Degree of Parallelism (DOP) limits
2. Baseline – the queries were run in serial order with DOP limits (similar to the DOP limits of
the concurrency runs)
3. 10 Streams – to simulate 10 concurrent queries, a workload was composed consisting of 7
Entry level streams, 2 Intermediate streams, and 1 Advanced stream.
4. 20 Streams – to simulate 20 concurrent queries, a workload was composed consisting of 14
Entry level streams, 4 Intermediate streams, and 2 Advanced streams.
5. 40 Streams – to simulate 40 concurrent queries, a workload was composed consisting of 28
Entry level streams, 8 Intermediate streams, and 4 Advanced streams.

The numbers were chosen based on a fairly representative workload for the given concurrency value.

10.3 Data Load


The complete data load took 7 hours 28 minutes and was CPU bound. This equates to a load rate of
approximately 35 MB/sec (918 GB of raw data / 26,880 seconds).

10.4 Single Query Runs


During single query runs only one query stream is run and thus, only one query is active on the system
at any given time. This was done for each of the different query categories in order to get an atomic
query performance result.

During the power run no limitation was put on the DOP for any query.

During the baseline run, only one query was active on the system at any given time. But during this
run, the system resource usage for queries was limited by setting the DOP for the query. The DOP was
set differently depending on the category of the query, as listed below in Table 9.

Query category    DOP
Entry level       1
Intermediate      4
Advanced          8

Table 9 – DOP per query category

These serial query performance tests were performed in order to understand the standalone
performance of each individual query. We could then understand the change in performance per
query as we increased the query concurrency level.

Results of the two single query stream runs are displayed with the concurrency results in graph A
below. As can be seen, running the queries with no limits on the DOP provided better performance.

10.5 Concurrency Tests


During concurrency runs, a number of query streams representing each query category were executed
simultaneously to simulate a concurrent user environment. The composition, by query category, of the
3 concurrent runs can be seen in Table 10 below. The concurrent stream compositions were chosen
based on a fairly representative workload for the given concurrency value.

Concurrent Streams    Entry Level Streams    Intermediate Streams    Advanced Streams
10                    7                      2                       1
20                    14                     4                       2
40                    28                     8                       4

Table 10 – Concurrent stream composition

The queries in each stream were executed back to back, each starting upon completion of the
previous one, without any "user think time".

The result of all 5 query runs, showing average query run time in seconds, per each query in that run,
is displayed below in Graph A.

As can be seen, query response time increases with a higher concurrency level, especially for
Intermediate and Advanced queries. The workload scales very nicely to approximately 20 concurrent
queries. Scaling between 20 and 40 concurrent queries deviated slightly from the ideal but remained
good.

[Graph A plots the average query run time in seconds (y-axis, 0 to 8,000) for the Entry, Intermediate,
and Advanced query categories across the Power, Baseline, 10 Streams, 20 Streams, and 40 Streams
runs.]

Graph A – Query workload run times

Additionally the system throughput was captured during the concurrency tests. The average and peak
throughputs were measured and are within expected ranges.

The detailed performance results are available under NDA through your HP technical account
representatives.

Once again, while still acceptable, the throughput at 40 concurrent queries has begun to taper off. At
40 concurrent queries, higher context switching and CPU contention were observed with performance
measurement tools. Additionally, there were virtually no I/O waits at 40 concurrent queries. The
workload was becoming CPU bound.

We believe there is additional concurrency headroom before performance becomes an issue because
of CPU contention. The system couldn’t tolerate a doubling of concurrency, but we felt an additional
20% growth was reasonable.

Please keep in mind that the server only had 4 Itanium cores and the workload is designated as an
Intermediate workload. Thus, a doubling of the workload could easily be accommodated on the next
highest server, the rx6600, which is still a midsized server. Furthermore, the judicious use of summary
tables, aggregates, and hints could improve response times tremendously.

10.6 Miscellaneous Tests

To determine the effect of multithreading for BI in concurrent environments, the same concurrency test
with 20 query streams was performed with and without multithreading enabled. Results are shown
below, in Graph B.

Entry level queries were unaffected. But the more resource-intensive Intermediate and Advanced
queries clearly benefited from the use of multithreading. Thus, we recommend that multithreading be
enabled for all but the simplest BI workloads, especially at higher concurrency levels.

[Graph B plots the average query time in seconds (y-axis, 0 to 4,500) with 20 concurrent streams,
with and without multithreading, for the Entry Level, Intermediate, and Advanced query categories.]

Graph B – Impact of multithreading on the query workload



Appendix A Useful Install Guides and Configuration Guides

SERVER

• HP Integrity rx3600 Installation Guide


• HP Integrity, HP 9000 and carrier grade servers Configuration Guide (HP Internal and Channel
Partner only)

OS

• HP-UX 11iv3 Installation and Update Guide (5991-6460)


• HP-UX System Administrator’s Guide (volume 1-4)
• The Next Generation Mass Storage Stack HP-UX 11iv3
• scsimgr – SCSI Management and diagnostics utility on HP-UX 11iv3
• HP-UX 11iv3 Mass Storage Device Naming
• HP-UX 11iv3 Mass Storage I/O Performance Improvements
• HP-UX 11iv3 Native Multi-Pathing for Mass Storage

STORAGE

EVA

• HP StorageWorks 4000/6000/8000 Enterprise Virtual Array Hardware Configuration Guide


• HP StorageWorks 4000/6000/8000 Enterprise Virtual Array user guide
• HP StorageWorks Command View EVA Installation Guide (T3724-96075)
• HP StorageWorks Command View EVA User Guide (T3724-96061)
• HP StorageWorks 4000/6000/8000 Enterprise Virtual Array connectivity for HP-UX installation
and reference guide (5697-5253)

SAN

• HP StorageWorks SAN Design Reference Guide (AA-RW86M-TE)


• HP StorageWorks SAN Switch installation guide (AA-RWF3A-TE)
• Brocade Web Tools Administrator’s Guide (53-0000194-01)

ORACLE

• Oracle Database – Installation Guide – 10g Release 2 (10.2) for HP-UX Itanium (B25293-01)
• Oracle Database – Administrator’s Reference – 10g Release 2 (10.2) for UNIX-Based Operating
Systems (B15658-05)
• Oracle Database – Data Warehousing Guide – 10g Release 2 (10.2) (B14223-02)
• Oracle Database –Reference – 10g Release 2 (10.2) (B14237-02)

Appendix B Component List for the 1TB Medium Reference Configuration

This component list is valid as of September, 2007.

Architecture                  SMP
Server                        rx3600
Nodes                         1
Sockets/Cores per node        2/4c
Memory per node               48GB
FC Cards per node             2xDP 4Gb/s
Node Interconnect             N/A
SAN Switch                    SAN 4/16
Storage array                 EVA8000 (84)
Usable Storage TB             4.5
OS                            HP-UX
Estimated Concurrent Users    40-80
RBIP                          4.5

Table 11 - 1TB Medium SMP RC

NOTE:
RAID 5 – all arrays
RBIP = Relative BI Performance

QUANTITY PART NUMBER DESCRIPTION


System
1 AB463A HP Integrity rx3600 base system
1 AD295A PCI-X 2.0 Backplane
1 AD052A Hot swap power supply for N+1 redundancy
1 337972-B21 HP Smart Array P600 Controller
2 375861-B21 146 GB SAS 10,000 rpm Hard Drive (2.5")
2 AB577A 1.6GHz/18MB dual-core
1 AD125A 24 DIMM Memory Carriers
6 AB565A 8GB DDR2 memory quad (4 x 2GB DIMMs)
2 AB379A Dual-Channel 4 Gb PCI-X
SAN
1 AF002A HP Universal Rack 10642 G2 Shock Rack
1 AF002A 001 Factory Express Base Racking
1 A7985A HP StorageWorks 4/16 SAN Switch
1 A7985A 0D1 Factory integrated
1 AF423A HP R3000 2U L6-20 NA UPS
1 AF423A 0D1 Factory integrated
1 AF062A HP 10K G2 600W Stabilizer Kit
1 AF062A B01 Include with complete system
1 AF054A HP 10642 G2 Sidepanel Kit
1 AF054A 0D1 Factory integrated
1 340653-001 HP UPS3 KVA 20A 250V US Power Cord
1 340653-001 0D1 Factory integrated
1 252663-B24 HP 16A High Voltage Modular PDU
1 252663-B24 0D1 Factory integrated
6 221692-B21 Storage Works LC/LC 2m Cable
6 221692-B21 0D1 Factory integrated
1 120672-B21 HP 9000 Series Ballast Option Kit
1 120672-B21 0D1 Factory integrated
2 HA113A1 HP Installation Service
1 HA113A1 5BY Rack and Rack Options Installation
1 HA113A1 5FE 2/16 FC switch Installation
2 U5466S HP Care Pack
70 U5466S 4NK Total Education One Svc


16 A7446B HP 4GB SW Single Pack SFP Transceiver
2 HA111A3 HP 3y Proactive 24 Service
1 HA111A3 400 Proactive San Services
1 HA111A3 7GV UPS < 6KVA HW Support
1 HA111A3 8LA 2/16 FC Switch Support
1 AD520B HP EVA8000 2C6D-A 60Hz 42U Cabinet
84 364621-B23 HP StorageWorks 146GB 15K FC HDD
84 364621-B23 0D1 Factory integrated
8 221692-B22 5m SW LC/LC FC Cable ALL
8 221692-B22 0D1 Factory integrated
1 T3724E HP CV EVA 7.0 Rep Sol Mgr 3.0 Media Kit
1 T5183A HP CV EVA 8k Series Unlimited Lic
2 142257-003 HP 10A IEC320 C14-C13 10ft/3m PDU Cable
1 AG453A HP DL380 G5 Base Storage Server
1 HA113A1 5D6 HP Install. NAS/Proliant StorSvr SVC
1 A8003A HP FC2242SR PCI-e DC HBA
1 HA111A3 300 Proactive Storage Services
1 HA111A3 84N HP DL380 G4 Base/SAN SS Support
1 HA111A3 8CL CV EVA5K/8K Unlim LTU Support
84 HA111A3 8HH EVA FC 146GB/300GB HDD Support
1 HA111A3 8WM EVA8000 2C6D Support
1 HA124A1 HP Technical Installation Startup SVC
1 HA124A1 5AS HP Startup Command View EVA (CV EVA) Svc
Table 12 - 1TB Medium SMP RC BOM

© 2007 Hewlett-Packard Development Company, L.P. The information contained


herein is subject to change without notice. The only warranties for HP products and
services are set forth in the express warranty statements accompanying such
products and services. Nothing herein should be construed as constituting an
additional warranty. HP shall not be liable for technical or editorial errors or
omissions contained herein.
Itanium is a trademark or registered trademark of Intel Corporation or its
subsidiaries in the United States and other countries.
