
Enterprise Storage Architecture

Planning
Technical Infrastructure for HANA

Raiffeisenring 44
D-68789 St. Leon-Rot
SAP Active Global Support
July 2013
Version 2.2

© 2013 SAP AG

Enterprise Storage Architecture Planning

Table of Contents
1 Introduction
1.1 SAP's vantage point towards an Enterprise Storage Architecture
1.2 Main issues tackled by SAP services for Storage Infrastructures
1.3 Goals of this Document
2 Shared Storage Infrastructure
2.1 Why not use shared Storage Systems for SAP HANA?
3 Storage Systems
3.1 Frontend Adapter
3.2 Processor Complex
3.3 Disk Adapter
3.4 Physical Disks
3.5 Redundant Array of Independent Disks (RAID)
3.6 How Storage Systems provide storage for Host Systems
4 Interconnection of Storage- and Host Systems
4.1 Fiber Channel Protocol
4.2 Storage Network
5 Managing Storage on Host Systems
5.1 Logical Volume Manager
5.2 File System
5.3 Cluster File System
5.4 Database System
5.5 Storage management on Host Systems is crucial for optimal I/O Performance
6 The optimal configuration of the I/O Stack
6.1 Threefold Striping
6.2 Managing the Data Growth
6.3 Assessment of the suggested I/O Stack Configuration
7 Monitoring the I/O Stack
7.1 Configuration Analysis
7.2 Performance Analysis
7.3 Summary Monitoring the I/O Stack
8 Implementing SAP HANA on shared Enterprise Storage
8.1 HANA Persistence Layer
8.2 HANA I/O Pattern
8.3 HANA Storage Connector API
8.4 HANA on an optimal configured I/O Stack
9 References


1 Introduction

Storage, or more precisely the storage system, is the common denominator for all SAP systems, all of which require disk space to store their databases and files.
In the technical infrastructure, storage systems must be considered as dedicated computers with special-purpose disk storage management functions, specifically designed to consolidate the storage demands of multiple hosts. Storage is presented to the host systems in the form of Logical Units (called LUNs) or files, whereas the physical layers remain completely hidden. The functions of storage systems include:
- Reliability and Availability: All components of the storage system are redundant, so that in case of a malfunction the remaining component can take over the work. The data on the physical disks is protected by redundancy (RAID, Redundant Array of Independent Disks), meaning that the storage system always keeps redundant data to reconstruct the data in case of a failed disk.
- Scalability: In case of increasing storage demand, the capacity of the storage system can easily be expanded without impact on operation.
- Serviceability: Specifically designed tools are provided to ease storage management functions, such as capacity expansion or monitoring of system components.
- Performance: The storage system's capability to stripe data across multiple physical disks, together with memory-cached I/O and special-purpose CPUs for the disk-to-cache (disk adapter) as well as the cache-to-host interfaces (host adapter), offers business-critical, high I/O throughput.
- Storage systems provide fast copy functions for data copies within the same (local) and between (remote) storage systems, enabling the implementation of solutions for data high availability, system (landscape) cloning and backup/restore.

1.1 SAP's vantage point towards an Enterprise Storage Architecture

Basically, all storage systems with the above-described properties are suitable for SAP systems, regardless of the deployed disk type (Hard Disk Drives (HDDs) with rapidly rotating discs (platters) coated with magnetic material, or Solid State Disks (SSDs) without any moving mechanical components, using flash memory to store data), the transmission technology (Fiber Channel, Serial Attached SCSI, SATA), the RAID protection (level 5, 6, 10), the interconnection architecture of disk adapters, cache and host adapters, or the protocols used to attach storage to hosts. Due to the selected technologies, all storage systems will have limitations with regard to their I/O performance capabilities, but based on a storage sizing that considers both capacity and performance requirements, the storage vendors will provide suitable storage systems.
Due to the reasonable - since cost efficient - consolidation of databases on common resources, it is essential
to balance all data across all storage system components to achieve optimal performance. No matter
how many storage systems are used, the best performance can only be achieved if all available storage
system components are evenly used.


1.2 Main issues tackled by SAP services for Storage Infrastructures


- Storage infrastructure does not fulfill I/O performance requirements: The provided storage fulfills the capacity requirements in terms of space needed to store all data, but the storage system components such as physical disks, disk adapters, cache or storage system adapters to the network are not sufficient to fulfill the I/O performance requirements. Key I/O performance indicators (KPIs) are throughput (measured in I/O operations per second (IOPS)), bandwidth (measured in transferred MB per second (MB/s)) and latency (the time that elapses until the successful completion of an I/O operation, measured in milliseconds (ms)).
- Insufficient I/O performance for some SAP systems: Although the provided storage systems fulfill the performance requirements of the storage sizing, some SAP systems suffer from insufficient I/O performance.
- I/O performance deterioration after data growth: Customers start SAP operation with an initially optimal storage system configuration, but with the growth of data the I/O performance deteriorates.
- Configuration and performance analysis of the complete I/O stack needed: In case business users of SAP systems suspect the storage infrastructure to be the culprit for unsatisfying overall performance, be prepared to conduct an analysis of the entire I/O stack on database system, operating system and storage system level, consisting of configuration and performance analyses.
- What is an optimal configuration of the I/O stack? Customers are planning to redeploy their storage systems and are looking for suggestions on how to optimally configure the I/O stack.

1.3 Goals of this Document

With this document we would like to establish a common understanding with regard to the building blocks and
physical components of an Enterprise Storage Architecture. If necessary, we will introduce a normalized
naming of Enterprise Storage components to ease the comparison of certain vendor solutions.
It is not our intention to explain in detail the ever-evolving technologies and hardware components that are deployed in state-of-the-art storage infrastructures. For those who are interested in a deeper dive, we provide references to enable them to explore these broad topics.
Since in SAP Business Suite Systems most of the I/O operations are performed by the database server, we
will focus on this type of server.
Dealing with the main issues described above leads to topics that need to be carefully considered when planning the Enterprise Storage Architecture:
- Determine design guidelines for the I/O stack that are optimal regarding I/O performance and management of system growth; these guidelines are provided in chapter 6, The optimal configuration of the I/O Stack.
- Establish a monitoring infrastructure for the key components of the storage systems as well as for the entire I/O stack; refer to chapter 7, Monitoring the I/O Stack.
Feedback is very welcome; please send it to Siegfried.Schmidt@sap.com.


2 Shared Storage Infrastructure

To illustrate the advantages of a common use of storage resources by the systems of the SAP Business Suite, see the following graphic.

One approach for providing an IT infrastructure for the SAP Business Suite could be one host system for each Business Suite system. Each Host System is equipped with sufficient CPU power, Main Memory and Storage (SSD and HDD) resources to fulfill the needs of the hosted Business Suite system.
With regard to CPU power and Main Memory resources it is undoubtedly the right decision to run the database services for each productive Business Suite system on its own Host System to get the best performance. The Application Servers as well as the non-productive Database Servers may share CPU power and Main Memory resources of Host Systems; this consolidation is provided by Server Virtualization Technologies.
A key property of Virtualization Technologies is, among others, the efficient use of the acquired physical resources, which is achieved by provisioning virtual rather than physical resources. Why not apply the same concept to storage: hiding all physical layers and taking advantage of storage technologies to solve typical challenges such as managing system growth or performance bottlenecks?


Take advantage of a shared Storage Infrastructure


Managing the system growth might be a challenge in an IT infrastructure where each Host System has its
own storage. Consider that a Business Suite system on one Host may be out of storage space, while systems
on other hosts do not need all provided space. In a non-shared storage environment, access to storage resources on other hosts can be enabled through a network file system, but since this will most likely increase
I/O latency, additional storage will generally be provided on the host of the requesting Business Suite system.
The latter leads to an inefficient utilization of purchased resources.
Whenever additional storage resources will be provided, this provision must not interrupt the availability of the
Business Suite system. Since simply adding physical disks may lead to an accumulation of new data on the
new disks, the management of system growth must consider a balanced distribution of data across all
available disks to avoid hot-spots.
Performance bottlenecks on storage level will occur whenever single components such as SSDs or HDDs are overstrained. As described above, this may be caused by an unbalanced distribution of data or by insufficient resources. The latter is the case when, for example, customers decide to purchase a few high-capacity disks instead of a larger number of smaller disks to fulfill their storage needs.
Hiding the physical storage layers from the application will simplify the storage configuration. However,
finding bottlenecks will get more complicated. Depending on whether a performance bottleneck was found on database or on storage level, questions must be answered such as: On which physical disk is a certain database object stored? Or: Which database objects are stored on overstrained physical disks? This shows that the analysis of I/O bottlenecks not only requires performance monitors for storage components, but also methods that determine how database objects are mapped onto the I/O stack.
Service Level Agreements such as constant availability and non-disruptive backup are typical for an IT
infrastructure that is used for SAP Business suite systems. Availability is the main feature of storage
systems and will be accomplished by redundancy: all components of a storage system are redundant, so that
in case of a malfunction, the remaining component can take over the work. The data on the physical disks for
example is protected by redundancy (RAID - Redundant Array of Independent Disks). Storage systems
provide fast copy functions available for data copies within the same (local) and between (remote) storage
systems, enabling the implementation of solutions for data high availability, disaster protection and
backup/restore.

Bottom line
For all systems that share the same storage resources, Storage Systems provide features that help to
manage the system growth as well as I/O performance. Moreover, they provide properties to enable
implementing solutions for data high availability, disaster protection and backup/restore. We therefore
recommend shared Storage Systems for all SAP Business Suite Systems.


Without a doubt, the above illustration shows a very simplified storage system, but it depicts the basic ideas
of shared storage systems. For all SAP Business Suite systems storage is provided here in the form of
Logical Units (called LUNs). The LUNs are provided from Storage Pools and each LUN is evenly distributed
across all physical disks belonging to a Storage Pool. The figure above shows one Storage Pool containing
HDDs and SSDs, representing a newer feature of Storage Systems: Storage Tiering.
Each tier of storage consists of physical disks with the same performance characteristics. For Storage
Tiering at least two types of physical disks with different performance characteristics are necessary, such as
SSD and HDD. Storage Tiering provides an automatic relocation of most frequently used data on physical
disks with highest performance capabilities.
In the following chapter we will discuss in some more detail the architecture of a Storage System and its
basic components. Essentially Storage Systems consist of Physical Disks that are connected via Disk
Adapters and a Backplane with the Controllers. The Controllers provide, among other things, functions for redundant (RAID) data storage, and therefore they are often called RAID-Controllers. In some publications even the entire Storage System is called a RAID-Controller, but in this document we will use the term Storage System. All Storage Systems have in common that all Read and Write I/O operations pass through the Cache (a main memory layer). Access to Storage System LUNs is provided via Network Interfaces: the Frontend Adapters.


2.1 Why not use shared Storage Systems for SAP HANA?

Many SAP customers are already taking advantage of the features provided by shared storage systems. The
acquired storage resources are efficiently used by their business systems, so that each system gets the
required capacity and I/O performance. The storage resources are centrally managed and monitored, and if
capacity or performance needs to be increased, additional resources will be added without interrupting
system availability. To ensure business continuity, IT solutions have been implemented for high availability
and disaster recovery that very often are based on specific capabilities of shared storage systems.
With the introduction of SAP HANA complete IT landscapes consisting of CPU, memory, network
components and storage have been delivered to the customers. These Appliances have been configured
to precisely meet the customer's requirements and are equipped with components that perfectly fit together.
The good thing about appliances is that the vendors of hardware and software use components that are coordinated and certified, while the customer can use the appliance easily: just plug and play.
From the perspective of data center integration, however, the Appliance Solution causes additional effort. Established and reliable solutions for high availability and disaster recovery cannot be used if they are based on specific capabilities of shared storage systems. The procurement of additional storage for the appliance may be quite different from that for shared storage systems, and systems that don't run on the Appliance do not benefit from it. So a number of customers with a state-of-the-art shared storage infrastructure prefer integrating SAP HANA into this existing infrastructure.
As of May 2013, SAP opened HANA for a tailored data center integration (see references for further
information). SAP provides a procedure that allows the customer and his storage provider to check whether
his current shared storage infrastructure is suitable for SAP HANA. If all I/O requirements are met, then the
advantages of a shared storage infrastructure can be used for SAP HANA.


3 Storage Systems

The goal of this chapter is to describe a storage system architecture containing all components that need to be considered when determining design guidelines for the I/O stack and establishing a monitoring infrastructure.

3.1 Frontend Adapter

Frontend Adapters are devices that act as interfaces between the storage system and the host systems, or between the storage system and a network connecting hosts and storage in case the hosts are not directly attached to the storage system. The corresponding devices on Host System level are the Host Bus Adapters (HBAs).
In accordance with the Open Systems Interconnection (OSI) model, both the Frontend Adapter and Host Bus
Adapter belong to the lowest, the physical layer. Their function is to convert the electrical signals of the
storage system (or host system) into serial signals that are passed through the network between storage and
host system.
The Frontend Adapter connects to the storage system's inter-component backplane. A Frontend Adapter can have more than one port, and for the management of I/O operations it contains CPUs and memory. In technical specifications, Frontend Adapters are characterized by their maximum bandwidth, which is specified in Gigabits per second (Gb/s). Storage system monitors provide information about the currently achieved bandwidth to assess whether a Frontend Adapter is overstrained; some storage system manufacturers additionally provide the utilization of the Frontend Adapter CPUs for this assessment.
There are several manufacturers of Frontend Adapters and the corresponding devices (HBAs) on the host
systems, and when planning the storage infrastructure it is important to consider that these must be matched.


Luckily, storage vendors take over this task by testing the interoperability of all kinds of combinations of these devices in their labs.

3.2 Processor Complex

The Processor Complex is the main component of an Enterprise Storage System and due to availability
requirements, this complex consists of redundant components. To explain this redundancy, we selected a storage system model that is configured like a server cluster. The clusters consist of dedicated CPUs and main memory (cache), and they are interconnected using a high-speed data transmission technology characterized by low latency and high bandwidth.
Both clusters serve I/O requests and should one cluster not be available, host systems get access to their
data using the remaining cluster.
All functions of a storage system are provided by the Micro-Code (so to speak, the operating system) that is
operated on each cluster system. These functions include for example: fast copy functions for data copies
within the same (local) and between (remote) storage systems, algorithms for storing data with redundancy
(RAID) and across many physical disks, provision of Logical Units (LUNs) and LUN-masking (meaning
visibility and accessibility of individual LUNs to specific hosts) or Cache algorithms to detect for example
sequential access patterns and start pre-fetching data ahead of I/O requests.
Cache is crucial for optimal I/O performance. Consider that ALL I/O operations pass through the cache!
READ I/O operations will be processed in the following basic steps: (1) The cache is queried to determine if the requested data is in cache. (2) If the data is not in cache, it will be retrieved from the physical disks and transferred to the Cache (STAGED). (3) Data is sent from cache to the requesting application buffer.
WRITE I/O operations will be processed in the following basic steps: (1) Data is stored in the Cache. (2) The storage system signals the Host that the I/O has completed. (3) RAID algorithms will be processed and the data will be transferred (asynchronously) from Cache to physical Disks (DESTAGED).
The latter means that WRITE I/O operations requested by a host benefit most from the cache, since the host can continue operation as soon as the data has arrived in the cache and been confirmed by the storage system as successfully processed. Since data is transferred asynchronously from cache to physical disk, the storage system provides safeguards (the cache is battery-buffered) so that no data is lost even if the power fails.
Since WRITE and READ operations share the cache, storage systems allow only a part of the cache to be filled with modified data that has not yet been written to disk, the Write Pending Limit. Whenever this limit is reached, the storage system must switch to a deferred write mode to destage the data, and this causes a considerable degradation of write performance. Storage system monitors provide information about this performance-crucial cache property. To avoid deferred write situations for WRITE-intensive database objects, such as DB system LOGs, it is recommended to distribute these objects across many physical disks.
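The write-back behavior described above can be sketched in a few lines of Python. This is illustrative only; the cache size and the 50% Write Pending Limit are assumptions for the example, not values of any specific storage system.

# Minimal write-back cache model: writes are acknowledged from cache and the
# system falls into "deferred write" once the assumed Write Pending Limit is hit.
class WriteBackCache:
    def __init__(self, cache_mb=1024, write_pending_limit=0.5):
        self.limit_mb = cache_mb * write_pending_limit  # Write Pending Limit (assumed 50%)
        self.pending_mb = 0.0                           # modified data not yet destaged

    def write(self, mb):
        """Host write: fast path unless the pending limit is exceeded."""
        self.pending_mb += mb
        if self.pending_mb > self.limit_mb:
            return "deferred write: host I/O throttled until destage frees cache"
        return "acknowledged from cache"

    def destage(self, mb):
        """Asynchronous transfer of modified data from cache to physical disks."""
        self.pending_mb = max(0.0, self.pending_mb - mb)

cache = WriteBackCache()
print(cache.write(400))   # acknowledged from cache
print(cache.write(200))   # exceeds the 512 MB limit -> deferred write
cache.destage(300)
print(cache.write(50))    # acknowledged again after destaging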


3.3 Disk Adapter

Disk Adapters provide the paths from the Processor Complex to the physical disks. For redundancy each
physical disk is reachable via two different Disk Adapters. Data will be transferred on a network between Disk
Adapter and physical disks, mainly using the Fiber Channel (FC) protocol; other common protocols are SATA (Serial Advanced Technology Attachment) or SAS (Serial Attached SCSI, where SCSI stands for Small Computer System Interface). The network topology is mainly a switched fabric or an arbitrated loop.
In an Arbitrated Loop topology data flows through all disks before arriving at either end of the Disk Adapter.
The full loop is required to participate in data transfer, and loop stability can be affected by physical disk
failures.
In a switched Fabric topology the switches enable a direct data flow between Disk Adapter and physical
disks (direct point-to-point links), therefore a failure on one physical disk will not impair the data transfer to the
remaining disks.
Like Frontend Adapters, the Disk Adapters are characterized by their maximum bandwidth, which is specified in Gigabits per second (Gb/s). Storage system monitors provide information about the currently achieved bandwidth to assess whether a Disk Adapter is overstrained.

3.4 Physical Disks

Physical Disks as depicted in the storage system model above are Hard Disk Drives (HDDs) with a stack
of rapidly rotating discs (platters) coated with magnetic material.
Between the platters are the disk arms, and at their tips are the READ/WRITE heads, which can be positioned on the tracks of a platter. The READ/WRITE heads always have the same distance from the spindle, and all tracks beneath the current READ/WRITE head position are called a Cylinder. The capacity (measured in Gigabytes (GB)) of a physical disk depends on (1) the number of platters, (2) the number of tracks per platter (Track Density) and (3) the number of bits of data that fit on a given surface area of a platter (Areal Density).
Since manufacturers agreed on certain form factors for physical disks (5.25, 3.5 or 2.5 inches), it is mainly the development of the Areal Density that contributes to higher capacities of physical disks. As the outer tracks on a platter can hold more data than the inner tracks, the algorithms used to store data first fill the outer tracks to optimize the sustained Transfer Rate (measured in MB/s); high sustained Transfer Rates are beneficial for sequential I/O patterns.
The Access Time (measured in milliseconds (ms)) of random I/O operations depends on (1) Latency, the
time it takes the disk to rotate until the data appear beneath the READ/WRITE head, (2) Seek Time, the time
it takes the READ/WRITE head to be positioned on the right track and (3) Transfer Time, the time needed to
transfer data between the physical disk and the cache.

Currently used physical disks have a rotational speed of 15,000 rpm (rotations per minute). Assuming that the data will on average appear beneath the READ/WRITE head after half a rotation, the Latency will be 2 ms (= 1 minute / rotational speed / 2 = 60,000 ms / 15,000 rpm / 2). The technical specifications of a certain physical disk provide the Seek Time, and currently installed disks have a Seek Time of about 3 ms. The Transfer Time is typically a fraction of a millisecond. Neglecting the Transfer Time, a single physical disk can serve roughly 1 s / (Latency + Seek Time) = 1000 ms / (2 ms + 3 ms) = 200 I/O operations per second in theory, or about 170 IOPS in practice.
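As a quick plausibility check, these rule-of-thumb numbers can be reproduced with a few lines of Python. The rotational speed and seek time are the values from the text; the transfer time is an assumed fraction of a millisecond.

# Rule-of-thumb service rate of a single rotating disk (example values from the text).
rpm = 15000            # rotational speed
seek_ms = 3.0          # average seek time
transfer_ms = 0.3      # transfer time per I/O (assumed fraction of a millisecond)

latency_ms = 60000 / rpm / 2          # half a rotation on average -> 2 ms
service_ms = latency_ms + seek_ms + transfer_ms
iops = 1000 / service_ms

print(f"rotational latency:   {latency_ms:.1f} ms")   # 2.0 ms
print(f"service time per I/O: {service_ms:.1f} ms")   # 5.3 ms
print(f"approx. IOPS per disk: {iops:.0f}")           # roughly 190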
Although manufacturers attempt to improve the throughput (IOPS), the sustained Transfer Rate (MB/s) and the Access Time of a physical disk by adding cache to the disk, this is not sufficient to meet the demands of SAP Business Suite systems. Even if the entire physical disk consists of flash memory without any moving mechanical components, as in Solid State Disks (SSDs), single disks will most likely not meet the I/O performance needs.

3.5 Redundant Array of Independent Disks (RAID)

The introduction of the Redundant Array of Independent Disks (RAID) concept helped to overcome the
performance limitations of single physical disks.
RAID 0 (Striping) is the concept for increasing I/O performance. A certain number of physical disks is grouped into one disk array, and physical storage is provided as LUNs (Logical Units), which are evenly distributed across the disk array. Data that is transmitted to a LUN with one I/O operation is broken into as many pieces of equal size as there are disks within the group, and these pieces in turn are evenly distributed across all physical disks. Although the I/O performance is increased by a factor equal to the number of disks in the group, this improvement comes at a high price: if one disk of a stripe set fails, all data is lost!
RAID 10 (Mirroring and Striping) combines data protection and performance improvement. First, several mirror pairs of two physical disks are created, and then data is striped across these pairs. As long as no entire mirror pair fails, up to half of all disks of a disk array can fail without losing data. Read I/O operations on a LUN provided from a RAID 10 disk array are served from all disks, Write I/O operations only from half of them. Due to the mirroring, the usable storage capacity is only half of the procured capacity; 50% of the capacity is used for data protection.
RAID 5 (Parity Protection and Striping) is a compromise between costly data protection and performance improvement. Only the storage capacity of one physical disk is used for redundant information, meaning the disk array consists of n+1 physical disks. All data transferred to a LUN of a RAID 5 disk array is broken into n pieces of equal size, and a parity block (the redundant information needed to rebuild data) is created. Finally, all data pieces and the parity block are evenly distributed across all physical disks. In case one physical disk fails, all data can be rebuilt using the information stored on the remaining n physical disks. In a RAID 5 protected disk array consisting of n data + 1 parity disks, (1 - n/(n+1)) * 100 = 100/(n+1) is the percentage of capacity used for protection; for example, in a RAID 5 (7+1) configured disk array, 12.5% of the capacity is used for data protection.
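To make the capacity overhead of the RAID levels explicit, here is a small illustrative calculation in Python; the disk count and disk size are arbitrary example values, not a recommendation.

# Capacity overhead of common RAID levels for an example array of 8 x 600 GB disks.
disks, size_gb = 8, 600
raw_gb = disks * size_gb

raid10_usable = raw_gb / 2                 # mirroring: 50% of the capacity for protection
raid5_usable = (disks - 1) * size_gb       # n data + 1 parity disk
raid5_overhead_pct = 100 / disks           # = (1 - n/(n+1)) * 100, here 12.5%

print(f"raw capacity:        {raw_gb} GB")
print(f"RAID 10 usable:      {raid10_usable:.0f} GB (50% for protection)")
print(f"RAID 5 (7+1) usable: {raid5_usable} GB ({raid5_overhead_pct:.1f}% for protection)")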


The above described RAID levels are the most common. However, there are many other levels that are not
covered in this document. Levels that are not described include: RAID 3, 4, 6 (other types of parity protection)
or nested (hybrid) types, such as RAID 51.
Basically, all RAID levels that provide data protection are suitable for SAP Business Suite systems. The performance capabilities of a RAID level depend on the manufacturer-specific implementation.

3.6 How Storage Systems provide storage for Host Systems

The LUNs, which are provided from RAID configured disk arrays, can be directly supplied to Host Systems
using block-based transmission protocols such as Fibre Channel (FC), SCSI, iSCSI, ATA over Ethernet
(AoE), or HyperSCSI (bypasses the internet protocol suite (TCP/IP) and works directly over Ethernet). A
Storage Infrastructure that provides storage as devices (LUNs), using a block-based protocol is called
Storage Area Network (SAN). Since mainly the FC protocol is used in these infrastructures, Fiber Channel
has become a synonym for SAN.
Alternatively, storage can be provided to Host Systems using file-based transmission protocols such as NFS
(Network File System, popular on UNIX systems) or SMB/CIFS (Server Message Block/Common Internet File
System, used with MS Windows systems). A Storage Infrastructure that provides storage as files, using a file-based protocol, is called Network Attached Storage (NAS).
The conceptual difference between NAS and SAN is that NAS appears to the client operating system (OS) as
a file server, whereas a device (LUN) available through a SAN still appears to the client OS as a device. The
storage system devices are managed either on storage system level (in case of NAS) or on operating system
level (in case of SAN) by utilities for Logical Volume Management (LVM) and File System (FS) Management.
We will explain these levels of the I/O stack in chapter: Managing Storage on Host Systems.


4 Interconnection of Storage- and Host Systems

As mentioned in the introduction, we focus in this document on host systems that provide database services
and that are interconnected with storage systems using a dedicated network. Unlike networks between peer systems (the server network: client to server, server to server), in a storage network non-peer host and shared storage systems communicate with each other, and each host considers storage as its private property. Both network types may use similar hardware, such as copper or optical cables for data transmission, a similar topology and similar network components, but they differ in the data transmission protocols used.

4.1 Fiber Channel Protocol

To standardize the communication in networks, the International Organization for Standardization (ISO)
provided the Open Systems Interconnection (OSI) model. It is a way of subdividing a communication system
into smaller parts called layers. Similar communication functions are grouped into logical layers. A layer
provides services to its upper layer while it receives services from the layer below.
Since mainly the Fiber Channel Protocol (FCP) is used in storage networks, we will briefly introduce the OSI
model and relate the Fiber Channel (FC) layers to the OSI layers.
The OSI Physical Layer (FC-0) defines electrical and physical specifications for devices (Frontend
Adapter, Switches, Host Bus Adapter). In particular, it defines the relationship between a device and a
transmission medium, such as a copper or optical cable. This relationship includes the layout of pins,
voltages, cable specifications, and more. On a Fiber Channel, data is transmitted serially; at the time of writing this document, a speed (bandwidth) of 4 or 8 Gb/s was standard.
The OSI Data Link Layer (FC-1) provides functions and procedures to transfer data between network
entities. This layer also detects and corrects errors that might occur in the Physical Layer. These functions
are enabled by signal encoding: signals that are transmitted serially never arrive at exactly identical intervals, therefore the receiver must synchronize regularly with the transmitter. For this synchronization, the data stream clock can be used. In all serial transmission techniques the data stream clock can be derived thanks to a special encoding, e.g. each byte (8 bits) is translated into a 10-bit character (8b/10b encoding). This explains why an 8 Gb/s FC adapter never reaches a bandwidth of 1 Gigabyte per second (GB/s), but at most 20% less (0.8 GB/s). Manufacturers are working on encodings with less overhead.
The OSI Network Layer (FC-2) provides functions and procedures for transferring variable length data
sequences. Large sequences are divided by the transmitter into several frames. A sequence is provided to
the next higher layer after all frames have arrived. If a frame gets lost, today's FC networks retransmit the entire sequence. Since a transmitter may only transmit as many frames as the receiver can process (credit method), it is important to coordinate the credits between HBA and Frontend Adapter (end-to-end credits) or between Switch and HBA/Frontend Adapter (buffer-to-buffer credits).
The OSI Transport Layer (FC-3) provides transparent transfer of data between users and additional
services to the upper layers, such as multipathing, striping, mirroring, compression and encryption. Today's FC products do not provide these additional services; currently, software above the Fiber Channel protocol stack and the switches provide, for example, multipathing features.

The upper OSI layers (FC-4), the Upper Layer Protocols, define which protocol is used to exchange data
between devices. A Fiber Channel network (layers FC-0 up to FC-3) can be used for multiple protocols, such
as SCSI, Internet Protocol (IP) or FICON. The protocol for SCSI (=Small Computer System Interface) on a
Fiber Channel network is called Fiber Channel Protocol (FCP).

4.2 Storage Network

The interconnection of Storage- and Host Systems will be provided by a storage network consisting of a
fabric of interface devices (Frontend Adapter, Host Bus Adapter) and Switches, that couple together network
segments via optical cables. The data is transmitted according to the specifications of the selected protocol.

For each database host system, access paths between an HBA port and a Frontend Adapter port are defined; this is called Zoning. In case multiple paths are defined to access storage, storage system manufacturers offer Multipathing Software, providing access to storage should one path fail, as well as I/O workload balancing. The Multipathing Software (driver) has to be installed on the host operating system.
Each host considers the access paths and underlying hardware as its private property. This means that each host has point-to-point connections from its HBA ports to Frontend Adapter ports, and, as depicted above, in case a switch or Frontend Adapter port has a malfunction, access is provided through an alternate path. However, in case Host Systems share Frontend Adapter ports, they may impair each other. Therefore, the Storage Network for SAP Business Suite systems with the highest I/O performance requirements should be configured in such a way that these systems do not share Frontend Adapter ports.


5 Managing Storage on Host Systems

In the following we assume that storage is provided to the Operating System of a Database Host as devices (LUNs) using a block-based protocol, that is, storage from a Storage Area Network (SAN). On Operating System level, we will use the more general term Host Devices for the LUNs provided by the SAN.

In the previous chapter we learned that the Host Bus Adapter can buffer data (frames, see OSI Network
Layer), and that the number of buffered frames must be coordinated with the capabilities of the network
device connected with the HBA (buffer-to-buffer credits). Manufacturers of Host Bus Adapters provide the software for managing I/O operations on HBA level: the HBA Driver.
There is also a driver for the management of I/O operations on Host Devices; the Host Device Driver manages for each Host Device a buffer that can queue the I/O operations requested by an application. Since many Host Devices share an HBA, their queue lengths must be coordinated with the buffer capabilities of the HBA.
Both the I/O buffers on HBA and on Host Device level are crucial for I/O performance, and from the database application's point of view, the more of each buffer is available for I/O operations on a database file, the better the I/O performance will be.
Distributing the I/O requests for a Host Device across multiple HBAs provides more HBA buffers; this is achieved by I/O Multipathing. The I/O Multipathing Driver provides access to SAN storage should one path fail, as well as I/O workload balancing across all available paths.
The number of Host Device queues available for I/O operations on DB files can be increased by distributing these objects across multiple Host Devices, and this is achieved by the Logical Volume Manager.
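Why the per-device queue length must be coordinated with the HBA buffer capability can be illustrated with a simple comparison of aggregate outstanding I/Os against an assumed HBA limit; all numbers below are made-up example values, not vendor defaults.

# Aggregate outstanding I/Os of all Host Devices behind the HBAs must not
# exceed what the HBA ports can buffer (example values, not vendor defaults).
host_devices = 16
queue_depth_per_device = 32     # per-LUN queue depth (assumed)
hba_queue_limit = 2048          # maximum outstanding I/Os per HBA port (assumed)
hba_ports = 2                   # multipathing spreads the load across both ports

worst_case_outstanding = host_devices * queue_depth_per_device
per_port = worst_case_outstanding / hba_ports

print(f"worst-case outstanding I/Os: {worst_case_outstanding}")
print(f"per HBA port with multipathing: {per_port:.0f} of {hba_queue_limit}")
if per_port > hba_queue_limit:
    print("queue depths must be reduced or more HBA ports provided")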


5.1 Logical Volume Manager

Like the Storage System, the Logical Volume Manager (LVM) provides methods for increasing I/O
performance and data protection. Since Storage System RAID configurations perfectly combine data
protection and performance improvement, this is very often considered as sufficient and the LVM capabilities
will be left aside.
Yes, the storage system redundancy algorithms (RAID) are undoubtedly sufficient for data protection, but
only the additional striping of data across Host Devices will enable the utilization of multiple Host Device
queues. To achieve this, Host Devices will be combined in a Volume Group (VG) and Logical Volumes
(LVs) will be created that are distributed across the Host Devices belonging to the VG.
When creating Logical Volumes in a Volume Group, storage will be allocated on the Host Devices in
contiguous chunks (often called partitions or extents), having a size of 1 MB and up to several GBs. This
size is defined per Volume Group, meaning that the chunks of all Logical Volumes in a VG have the same
size.
Only if these chunks are allocated in a striped fashion (next chunk on next Host Device) will the data be balanced across all Host Devices of a Volume Group, and consequently I/O requests can take advantage of multiple Host Device queues. If all storage of a Volume Group is exhausted and more is needed due to database growth, the Volume Group must be extended by multiple Host Devices to keep a balanced distribution of Logical Volumes and a utilization of multiple Host Device queues. Ideally, the Volume Group should always be extended by the same number of Host Devices, for example as many as initially used. We suggest creating striped Logical Volumes for the database files containing data and indexes.
Another way to allocate the chunks of storage is to first use the entire storage of one Host Device before allocating storage on the next Host Device of a VG. This concatenated allocation may lead to an accumulation of the most frequently used data on a single Host Device, and the I/O performance will then be limited to the performance of one Host Device. Concatenated Logical Volumes are therefore inappropriate for the DB objects of SAP Business Suite systems.
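The difference between striped and concatenated allocation can be sketched as a toy model in Python; the chunk count and device names are arbitrary examples.

# Toy model: distributing logical-volume chunks across Host Devices.
host_devices = ["hdisk0", "hdisk1", "hdisk2", "hdisk3"]   # example device names
chunks = list(range(12))                                   # 12 LV chunks to place

# Striped allocation: next chunk on next Host Device (round robin).
striped = {hd: [c for c in chunks if c % len(host_devices) == i]
           for i, hd in enumerate(host_devices)}

# Concatenated allocation: fill one Host Device completely before using the next.
per_device = len(chunks) // len(host_devices)
concatenated = {hd: chunks[i * per_device:(i + 1) * per_device]
                for i, hd in enumerate(host_devices)}

print("striped:     ", striped)       # every device gets an interleaved share of the chunks
print("concatenated:", concatenated)  # the first chunks (often the hot data) sit on one device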
For some database objects, such as the performance-crucial Redo LOGs, the granularity of the chunks allocated on Host Devices (> 1 MB) is not sufficient, since the Redo LOG Write I/O blocks are often smaller than the allocated chunks, and therefore the I/O performance might be limited to the performance (queue) of one Host Device. Luckily, the LVM also provides block-level striping for LVs, with block sizes from 4 KB up to several MB, helping to overcome this performance limitation.
We suggest providing a separate Volume Group for the creation of the Logical Volumes that will be used for Redo LOG files. Each Logical Volume is again allocated in a striped fashion. Additionally, all Host Devices in the Volume Group build a stripe set (the number of Host Devices is often called the stripe width), and the blocks of the stripe set, each with the same stripe size, are balanced across all Host Devices of the VG.
These block-level striped Logical Volumes use the queues of all <n> Host Devices belonging to the Volume Group, since I/O operations requested by the database are chopped into <n> same-sized blocks, which in turn are processed in parallel, for WRITE as well as READ I/O operations. The parallel
processing improves the I/O performance significantly. For best Redo LOG I/O performance it is essential to
distribute these objects across many Host Devices.
The Logical Volume Manager and Storage Systems have one more concept in common, the data mirroring,
enabling the implementation of solutions for data high availability.
The basic concept of mirroring is simple: keep the data in different locations to eliminate the possibility of data loss in a disk or block failure situation. From the high-availability point of view, each data mirror should be located on separate physical disks, and separate I/O adapters should be used. Ideally, the physical disks should be located in separate storage systems to protect against data loss in a power failure situation.
As mentioned in chapter Storage Systems, these have functions that allow copies within and between storage systems; data is copied from one storage system LUN to another. Data mirroring on Logical Volume Manager level, also known as Host Level Mirroring, is achieved by copying data from one Logical Volume to another, and both LVs can be physically stored on separate storage systems.
Both data mirroring concepts, on Storage System and on Logical Volume Manager level, are equally suitable for a data high availability solution; however, mirroring on Storage System level is completely transparent to the Operating System and does not require any resources of the database host.
The next level above the Logical Volume Manager is the file system. Before discussing more details about file systems, it should be mentioned that for database applications, file systems are not needed to achieve optimal I/O performance.
Basically, the Logical Volumes can be provided to the database system as so-called raw devices, and data blocks are then directly transferred between the database buffers and the Logical Volumes (Direct I/O). To achieve
best performance, database algorithms try to keep data in the buffers (main memory) as long as possible and
perform I/O operations asynchronously whenever possible (Asynchronous I/O). Since database systems
coordinate the concurrent data access of multiple processes (by database locking), there is no need for
locking I/O operations (Concurrent I/O).
File systems are preferred to raw devices because they provide advantages such as secure administration, but no matter which file system type is implemented, it should provide the key performance capabilities of raw devices, that is, direct I/O, asynchronous I/O and concurrent I/O.


5.2 File System

Today's File Systems must meet different challenges: they must be fast, robust and ideally infinitely scalable, and they must provide capabilities supporting data center operations, such as seamless file system growth and backup/restore. Note: the term file system is often equated with the software that manages file systems.
Data (database objects) are stored in files, and for fast access the file system management organizes the files in a tree structure. The elements of the tree are called I-nodes. The node at the top is called the root, and the nodes at the bottom are called leaves. Only the leaves contain data, whereas the upper nodes contain organizational information: pointers to nodes or leaves.
The pointers are located in contiguously allocated chunks of space, the extents. The extents that build the leaves are of variable size (a multiple of the file system block size). The ability to allocate extents reduces the administrative burden for growing file systems and improves performance.
The file system management keeps the tree balanced (B-tree), so that all paths from the root node to the leaves have the same length and are as short as possible. The latter provides fast access to that part of a file containing the data requested by the application (here the database system).
Changes to DB files due to insert, update and delete operations may lead to a re-balancing of a B-tree, and the file system management may need to temporarily lock I-nodes. This I-node locking can have a severe impact on the I/O performance; to avoid it, the file system option Concurrent I/O should be activated whenever available.
Ideally, change operations on DB files should not block any database process. To achieve this, the file system option Asynchronous I/O should be activated whenever available. With this option, I/O operations run in the background and do not block database processes. This improves performance, since I/O operations and database processing can run simultaneously.
Since file systems manage files of any application (not only databases), they also cache data changes in main memory to improve I/O performance. To prevent losing cached data in the event of a system crash, file systems must provide measures that guarantee data consistency. This robustness is achieved by logging Write I/O operations, and these logs are kept in file system journals.
Although file system caching generally improves the I/O performance, it is unsuitable for database applications. The database system buffers all data in main memory; therefore, additional file system caching wastes memory, while double copying (storage to file system cache and file system cache to DB buffer) wastes CPU time. For file systems containing data, indexes and Redo LOG data, we suggest activating the option Direct I/O whenever available.
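How direct I/O is enabled differs per operating system and file system (mount option, file system parameter, or a per-file flag set by the application). As one illustration, on Linux a process can request direct I/O per file with the O_DIRECT flag. The sketch below is illustrative only: the path is arbitrary, the 4 KiB alignment is an assumption about the device's logical block size, and the underlying file system must support O_DIRECT.

import mmap
import os

BLOCK = 4096  # assumed logical block size; O_DIRECT needs aligned, block-sized buffers

# mmap returns a page-aligned buffer, which satisfies the alignment requirement.
buf = mmap.mmap(-1, BLOCK)
buf.write(b"A" * BLOCK)

path = "/var/tmp/directio_demo.dat"  # arbitrary example path
try:
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
except OSError as exc:
    print(f"direct I/O not supported here: {exc}")
else:
    os.write(fd, buf)   # data goes to storage without passing through the OS page cache
    os.close(fd)
    os.unlink(path)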


File systems are created with a certain size, and due to database growth it may be necessary to increase the size of a file system. Assuming that the Logical Volume on which the file system is mounted (or the Volume Group) has spare storage resources, the size of a file system can be increased online without any impact on operation. Decreasing the size is not supported by some file system types, and administrators must implement their own procedures, executing the steps: backup of the files, creation of a new, smaller file system, and restore of the saved files.
Backup and Restore are the daily business of every computing center. The backup of file systems may be a challenge if business application users cannot afford any downtime, since the backup requires a consistent state of the file systems. Luckily, file systems (more precisely, the file system management in cooperation with the Logical Volume Management) offer the Snapshot capability, providing a fast and consistent point-in-time copy.
To obtain a Snapshot, the file system offers a command that freezes all I/O operations, creates a copy of all pointers (I-nodes) providing access to the current data, and finally thaws I/O operations. No actual data is copied, which makes the process very fast; after I/O operations are thawed, all data changes are written to new locations while the old data (the point-in-time copy) is kept. The point-in-time copy is consistent and can be copied to a backup medium or used to create a system clone.

5.3 Cluster File System

So far we discussed file systems as part of the Operating System kernel, supporting the management of application data, which is stored in files and organized in file directories; so to say, single-OS File Systems.
Today's ever-increasing needs for performance and high availability, especially for database applications, are fulfilled by the power of multiple host systems sharing a common database. To keep the common database (files) consistent, a kind of coordinator is needed, wrapped around the single-OS file systems and synchronizing their activities. This functionality is provided by a Cluster File System.
Cluster File Systems rely on an inter-host-system network, which is characterized by low latency and high bandwidth. Basically, Cluster File Systems consist of three components: (1) the Cluster Framework describes which Host Systems belong to a cluster and take over functions of the Cluster File System, (2) the Locking Mechanism ensures that no two Host Systems can simultaneously make changes on the same database file, and (3) the Fencing ensures that a malfunction on one Host System does not destroy any data.
The currently available Cluster File Systems can be distinguished by the physical location of the common
database. The most common Cluster File System solutions use a shared storage infrastructure, and storage is provided using a block-based protocol (SAN). In other solutions, the storage is provided from the
local physical disks of the Host Systems that build the cluster.
Another way to allow multiple host systems to share a common database is to interpose a dedicated coordinator that provides the files used by the database system to the host systems. This File Server and the Database Host Systems are interconnected via the Server Network, and the file system is installed only on the File Server: the Network File System (NFS).
Basically, the File Server can be regarded as a single-OS File System: storage may be provided from a storage network using a block-based protocol (SAN) and is managed by the File Server's Logical Volume Manager. The File System manages all files and provides them to the database Host Systems. The File Server must also provide a Locking Mechanism ensuring that no two Host Systems can simultaneously make changes on the same file.
As discussed in chapter Storage Systems, the functionality of the File server can be part of the storage
infrastructure, which is then called Network Attached Storage (NAS).

5.4 Database System

At the top of the I/O stack are the database objects: tables, indexes and Redo LOGs. For storing tables and indexes, Database Systems provide a concept, the table space, that combines these objects. The table spaces in turn consist of chunks of space (called extents, segments or superblocks) that are stored in files. Usually table spaces consist of many database files, and, as discussed in the previous chapter, the DB files are distributed across multiple file directories (these directories are also called file systems).
If multiple DB files are available for a table space, Database Systems balance the extents across the DB files (next extent in next DB file); this is called extent-based striping. Ideally, each table space consists of as many same-sized DB files as file systems exist, and each DB file is stored in a different file system. Due to this provision of DB files and the extent-based striping, all tables and indexes will be evenly distributed across all file systems. For growing table spaces, full stripes of same-sized DB files, one file in each file system, should be provided to keep the balanced data distribution.
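A toy model of extent-based striping could look as follows; the file-system and file names are invented, and the round-robin placement is the point of the sketch.

# Toy model of extent-based striping: the next table-space extent is placed
# in the next DB file, so all file systems receive the same share of extents.
from collections import defaultdict

file_systems = ["/sapdata1", "/sapdata2", "/sapdata3", "/sapdata4"]   # example names
db_files = [f"{fs}/tsdata.data{i + 1}" for i, fs in enumerate(file_systems)]

extents_per_file = defaultdict(int)
for extent_no in range(100):                       # 100 new extents for a table space
    target = db_files[extent_no % len(db_files)]   # next extent in next DB file
    extents_per_file[target] += 1

for db_file, count in extents_per_file.items():
    print(f"{db_file}: {count} extents")           # 25 extents per DB file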


5.5 Storage management on Host Systems is crucial for optimal I/O Performance

At the beginning of this chapter we emphasized that it is crucial for I/O performance to use as many I/O buffers of the Host Devices and Host Bus Adapters as possible. We discussed which components of the I/O stack (Multipathing Driver, Logical Volume Manager and Database System) help to achieve this goal on the Host Systems.
Only the consideration of storage management capabilities on both Host System and Storage System level will help us to determine design guidelines for the I/O stack that are optimal regarding I/O performance and management of system growth.


6 The optimal configuration of the I/O Stack

From now on, we assume that the reader is familiar with the building blocks and physical components of an Enterprise Storage Architecture, which were described in the previous chapters. We will now present design guidelines for the entire I/O stack, which we consider to be optimal, since the data distribution techniques used assure a uniform utilization of all available physical resources.

6.1 Threefold Striping

In the following, we assume that storage is provided from a Storage Area Network (LUNs using a block-based protocol). We will describe the I/O stack configuration from bottom to top.
First level of Striping
On Storage System level, the physical disks are organized in groups of disks. The LUNs that are provided by the disk groups are RAID-configured, using a RAID level that offers both data protection and data distribution. The LUNs are evenly distributed (striped) across the physical disks of a RAID group, and if the Storage System has the feature of combining RAID groups in a Storage Pool, the provided LUNs are additionally striped across multiple RAID groups.


For production SAP Business Suite Systems access to a LUN is provided via multiple paths (at least two),
and the zoning between Frontend Adapter ports and Host Bus Adapter ports deploys multiple different
adapters on storage and host system level. On the Host System a Multipathing Driver is installed,
providing access to storage should one path fail as well as I/O workload balancing.
Second level of Striping
On Host System level, the Host Devices (LUNs) are managed by the Logical Volume Manager and organized in Volume Groups. The Logical Volumes consist of contiguous chunks of data that are allocated in a striped fashion (next chunk on next Host Device) across all Host Devices belonging to the Volume Group. The Logical Volumes that are used for the DB system Redo LOG file systems are additionally block-level striped. When choosing the chunk and block sizes, it should be considered that smaller sizes lead to a better distribution.
The File Systems are mounted on Logical Volumes (there is always a 1:1 relationship between a File System and a Logical Volume), and the number of File Systems is arbitrary, BUT should NOT be increased due to increasing storage demands. The file systems are implemented with the following mount options: direct I/O, asynchronous I/O and concurrent I/O, to achieve the performance capabilities of raw devices.
In the above model, each file system is built on a separate Volume Group, and when the Volume Group is increased by further Host Devices, the additional capacity is clearly associated with one Logical Volume and its file system. The use of multiple Volume Groups may be necessary if the Logical Volume Manager specifies limits for a Volume Group, such as the size of a Logical Volume, the total number of manageable chunks or the number of Host Devices. If no Volume Group limits exist and administrators prefer to provide all Logical Volumes from one Volume Group, this is also suitable, BUT at least the Redo LOG file systems should be stored in their own Volume Group(s).
Third level of Striping
To enable extent-based striping on Database System level, <n> SAPDATA file systems are created. The DB table spaces are created with a uniform extent size, and storage is provided as <n> same-sized DB files, each stored in a separate file system.

6.2 Managing the Data Growth

Two tasks must be managed in case the storage demands grow: the provision of more physical storage and the extension of table spaces on the already allocated storage. Both tasks should be easy to manage without any impact on operation, in particular without performance degradation or downtime.
Extend physical storage
Each Volume Group is extended by the same number of Host Devices as originally used: a full stripe of <m> same-sized LUNs. The new Host Devices do not necessarily need to have the same size as the initially used Host Devices. To keep the balanced utilization of storage system components, the new LUNs must be balanced across the RAID groups or storage pools.
If not done automatically by the Logical Volume Manager, the Logical Volume must be increased by the provided capacity. Finally, the file system that is mounted on the Logical Volume can be increased.

Extend Table Spaces


Add one new file to each file system: a full stripe of <n> same-sized files. If a very slowly growing table space should not be extended by <n> files, it is better to resize all existing files uniformly instead of adding just one new file.
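The growth rules above (always add a full stripe of <m> LUNs and a full stripe of <n> DB files) can be turned into a small planning helper. This is only a sketch with invented sizes, not a sizing tool.

# Sketch: plan a "full stripe" extension so the balanced distribution is kept.
# All sizes are invented example values.
stripe_width_luns = 8        # <m>: LUNs per Volume Group stripe
lun_size_gb = 256            # size of the new, same-sized LUNs
sapdata_file_systems = 8     # <n>: SAPDATA file systems / DB files per table-space stripe

def plan_extension(additional_space_gb):
    """Round the requested space up to full stripes of same-sized LUNs."""
    stripe_gb = stripe_width_luns * lun_size_gb
    stripes = -(-additional_space_gb // stripe_gb)   # ceiling division
    return {
        "new_luns": stripes * stripe_width_luns,
        "new_capacity_gb": stripes * stripe_gb,
        "new_db_files_per_tablespace": sapdata_file_systems,  # one file per file system
    }

print(plan_extension(3000))
# {'new_luns': 16, 'new_capacity_gb': 4096, 'new_db_files_per_tablespace': 8}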

6.3 Assessment of the suggested I/O Stack Configuration

Due to the suggested threefold striping, all performance-crucial components of the I/O stack (Host Device queues, Host Bus Adapter buffers, multiple I/O paths, and on the Storage System the Frontend Adapters, Cache and Processor Complex, Disk Adapters and physical disks) are evenly utilized.
Data growth can be managed without impact on operation, and as long as the guidelines are followed, the balanced utilization of all performance-crucial components is maintained.

7 Monitoring the I/O Stack

Think of a situation where the business users of SAP systems are not satisfied with the overall performance,
and since the transactions or business processes that they run are I/O intensive (processing a huge amount
of Read and Write I/O operations) the storage infrastructure is suspected to be the culprit.
After reading the previous chapters, it should be clear that potential I/O bottlenecks are not necessarily
caused by the components of the storage infrastructure, but can be caused on all levels of the I/O stack.
Since the storage infrastructure is used by many systems, it should be noted that the systems may interfere with each other. In case the storage infrastructure executes data replications between storage systems, due to an implemented High Availability solution, the I/O performance of a system may even become impaired by systems that operate on a different storage system (see chapter Storage Network).
This makes clear that for monitoring the I/O stack it is important to get a complete picture of how a system is mapped to the components of the I/O stack, and which other systems use these components as well. The detailed configuration analysis allows pinpointing the part of the I/O stack that causes performance bottlenecks, and to detect these bottlenecks, detailed performance analyses are needed.
The monitors and tools providing raw data for the I/O stack analyses depend on the DB System, Operating System and Storage System in use. Since any combination of these systems can be deployed and each system level has many monitors, we will not describe specific monitors or tools, but rather the information they must supply to enable an end-to-end I/O performance analysis. Usually the levels of the I/O stack are administrated by different experts and special authorizations are needed for the monitors and tools; it is therefore essential for the analysis of the entire I/O stack that all experts cooperate.

7.1 Configuration Analysis

To get a complete picture of how the SAP system is configured on the DB host and on the Storage system, the configuration analysis comprises the mapping of the application system's Database objects (Tables, Indexes, LOGs) to the different parts of the I/O stack, such as DB files, File Systems, Logical Volumes, Volume Groups, Host Devices and Host Adapters, and finally to the Storage System components like Frontend Adapters, Cache, Disk Adapters, RAID groups and physical disks.
This analysis will show how far the current configuration differs from the design guidelines.
Database System
The DB System maintains system tables that contain information describing how tables and indexes are
distributed across the DB files that belong to a table space. This information shows, for example, whether extent-based striping is achieved on database level or whether the most frequently used tables and indexes are accumulated in a few DB files.
Operating System
Detailed information about the Host System hardware is needed to assess whether the available computing power can fulfill the requirements of the business application. This information comprises the equipment with
CPUs, Memory, Interface cards to the server network and Host Bus Adapter interfaces to the SAN. For all these components the current parameter settings are needed as well.
For File Systems, detailed settings such as mount options and block sizes are needed, and the data gathered from the Logical Volume Manager must in particular show how Logical Volumes are built on Host Devices, the I/O block sizes and the queue length defined for Host Devices.
Finally, information is required from the Multipath Driver to enable the analysis of I/O paths, such as the selected I/O distribution policy, HBA buffer length, used HBA ports and used storage system frontend ports.
Storage System
Only the storage administration knows all Host Systems that share the storage infrastructure, how many
storage systems are used and how the storage systems are related to each other.
For each storage system that may impair the performance of the selected Host System, inventory information
is needed, such as Frontend Adapters, Cache (main memory size, thresholds), Disk Adapters, physical disks
(interface, size, RPM) and RAID configuration (which physical disks belong to a group and which RAID level
is implemented). For each LUN, information is needed about the size and the assignment to RAID group, Disk Adapters and Frontend Adapters; in case LUNs are provided from storage pools that combine RAID groups, this information is needed as well.
For all Host Systems that share Storage Systems with the selected Host System, at least their names and their zoning to Frontend Adapter ports are needed.
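Purely as an illustration of the end-to-end mapping described in this section, the following sketch shows one possible way to record, per DB file, the I/O stack components it uses; the field names and example values are our own placeholders and not an SAP or vendor data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IOStackMapping:
    """Traces one DB file down the I/O stack (placeholder structure)."""
    db_file: str
    file_system: str
    logical_volume: str
    volume_group: str
    host_devices: List[str] = field(default_factory=list)   # LUNs
    hba_ports: List[str] = field(default_factory=list)
    frontend_adapter_ports: List[str] = field(default_factory=list)
    raid_groups: List[str] = field(default_factory=list)

# Hypothetical entry; real names come from the DB, OS and storage monitors.
example = IOStackMapping(
    db_file="sapdata1/data1.dbf",
    file_system="/oracle/SID/sapdata1",
    logical_volume="lv_sapdata1",
    volume_group="vg_sapdata1",
    host_devices=["LUN_0001", "LUN_0002"],
    hba_ports=["hba0", "hba1"],
    frontend_adapter_ports=["FA-1A", "FA-2B"],
    raid_groups=["RG-01", "RG-02"],
)
```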

7.2 Performance Analysis

As mentioned in the introduction of this chapter, since it is mostly business users who complain about unsatisfactory I/O performance, the performance analyses should be conducted during periods with peak business workload, such as Month End Closing or the load of data into Business Warehouse cubes; any artificial workload that is generated outside business periods will not be helpful.
The data needed for the analyses must be collected in parallel on all levels of the I/O stack: DB System, Operating System and Storage System. For this, monitors are available that provide comprehensive performance metrics. Depending on the level of the I/O stack, the metrics will be collected in different intervals, since, for example, small intervals on storage system level can affect the performance of all systems that share the storage systems. The different interval lengths must be taken into account when interpreting the data. We recommend using an interval length of 5 minutes on DB and storage system level, and 10 seconds on operating system level.
The goal of the analysis is to pinpoint which part of the I/O stack causes performance bottlenecks. Since performance optimization on each level of the I/O stack needs special expertise, it is crucial to identify the level that is causing bottlenecks as fast as possible, in order to involve the right experts.
A suitable performance indicator is the I/O latency (the time that elapses until the successful completion of
an I/O operation, measured in milliseconds (ms)) that can be derived on all levels of the I/O stack for most of
the components. Latency on the different I/O stack levels should be compared in periods with significant I/O
workload, which can be derived on all levels as throughput (measured in I/O operations per second (IOPS))
and bandwidth (measured in transferred MB per second (MB/s)).
If, for comparable periods, the latency on the current level is significantly higher than on the next level below, the bottleneck may be caused by components between the current level and the level below. Therefore it is reasonable to start the performance analysis on operating system level, to pinpoint the level that causes the bottleneck faster.
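This comparison logic can be sketched in a few lines; the latency figures and the 1.5x threshold below are illustrative assumptions only, real values come from the monitors on each level.

```python
# Average I/O latency (ms) per level of the I/O stack, ordered top to bottom.
# Illustrative numbers only.
latency_ms = {
    "DB System": 12.0,
    "Host Device (OS)": 11.5,
    "Storage Frontend": 3.0,
    "Storage Backend (disks)": 2.5,
}
THRESHOLD = 1.5  # arbitrary factor; tune to the environment

levels = list(latency_ms)
for upper, lower in zip(levels, levels[1:]):
    if latency_ms[upper] > THRESHOLD * latency_ms[lower]:
        print(f"Suspect components between '{upper}' and '{lower}': "
              f"{latency_ms[upper]} ms vs. {latency_ms[lower]} ms")
```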
Operating System
If the latency for Read or Write I/O operations measured on DB System level is significantly higher than the latency measured on Operating System (Host Device) level, the bottleneck is very likely on components above the Host Devices or even on DB System level.
The memory and CPU utilization must be analysed to verify whether the Database Host is Memory or CPU bound. If this is not the case, the distribution of the I/O workload across Host Devices may not be optimal and a few Host Devices must serve most of the I/O requests, which is indicated by a high utilization of these Host Devices. The latter will lead to unpredictable latency: just acceptable at one moment and poor I/O service times shortly after. Another reason for high I/O latency on DB System level might be an improper block size on file system level.
If the I/O latency on both DB System and Operating System level is unsatisfactory, then the bottleneck is very likely on Host Device level or on components below, that is, the SAN level. Host Devices might be highly utilized, have an unsuitable queue size, or the used Host Bus Adapters are overloaded. The HBA overload may be caused by too many Host Devices that use the same HBA, HBA buffers that are too small or a multipathing driver that does not properly balance the I/O workload.
Storage System
A high utilization of the used Frontend Adapters will lead to high I/O latency. If the Frontend Adapters are shared with other Host Systems, their I/O workload may be the reason for the high utilization and consequently for the unsatisfactory I/O latency.
A high latency of Write I/O operations may be caused by cache shortages, meaning the cache is filled with modified data that has not yet been written to disk (Write Pending Limit reached). This is very likely caused by a bottleneck in the storage backend, and the I/O performance of the LUNs must be analyzed. If the Write I/O latency is high but the Write Pending Limit is not reached, a synchronous replication of data to the cache of a remote storage system may be the reason, caused either by network bottlenecks or by the Write Pending Limit being reached on the remote storage system.
LUNs with unsatisfactory I/O latency may be accumulated on the same RAID groups (physical disks) or stored on too few RAID groups, or LUNs of other Host Systems with high I/O workload may use the same RAID groups.

7.3 Summary: Monitoring the I/O Stack

There are many monitors and tools for monitoring the I/O stack, but few that allow an end-to-end analysis starting at the application and going down through the components of the database, operating and storage system. Therefore it is important to select, for the implemented infrastructure, exactly those monitors and tools that enable this end-to-end analysis.
The monitor and tool selection is made by the administrators who are responsible for the different levels of the I/O stack. On database level the monitors can be queries on statistical tables, and on operating system level, commands that collect metrics from the kernel. Storage system manufacturers often provide administrators with complete tool suites for configuration and monitoring of their systems. Whatever monitors and tools are chosen, the administrators must prove that they can perform the end-to-end configuration and performance analysis outlined above.

8 Implementing SAP HANA on shared Enterprise Storage

In the previous chapters the building blocks and physical components of an Enterprise Storage Architecture were described, and we outlined design guidelines for an optimal configuration of database objects on the entire I/O stack. In this chapter we will provide guidelines for the implementation of SAP HANA on storage that is provided from a Storage Area Network, that is, LUNs using a block-based protocol.
To develop these guidelines, we first need to know which objects SAP HANA saves on physical disk storage, and for this we give an overview of the SAP HANA persistence layer. The components of the I/O stack provide measures to optimize sequential or random I/O operations with small or large I/O blocks, therefore we discuss the SAP HANA I/O patterns and the processes that generate them. Finally we introduce the Storage Connector API, the SAP HANA built-in solution for file access sharing and fencing of storage that is needed for HANA scale-out solutions.

8.1 HANA Persistence Layer

The SAP HANA database services (Index Server, Name Server, Statistics Server and XS Server) save data on the physical storage level. For each database service, HANA distinguishes two types of data that must be stored: transactional Redo LOG information and all other DATA.
The XS server is a lightweight application server that is integrated into SAP HANA. The Statistics Server
evaluates information about status, performance and resource consumption from all components belonging
to the system. It stores the monitoring and alert information in its own database tables. From there the
information can be accessed by administrative tools such as the SAP HANA studio.
With regard to availability and consistency of business data, only the Name and Index Server are relevant. The Name Server knows the topology, that is, which tables, table replicas, or partitions of tables are located on which index server, either in a single-host or in a distributed multi-host (scale-out) HANA environment. The Index Server manages the business data, the database tables. Multiple Index Servers can be active, and each maintains its own subset of tables (shared nothing).
HANA DATA
The database tables can be stored row by row (Row Store) or column by column (Column Store). The Row Store works directly with pages (blocks of data), while the Column Store uses an additional abstraction layer, the containers.
There are different types of containers. Virtual files provide a file-like interface. Virtual files are used by the
column store to write and read the data (main storage) and the delta logs (delta storage) of columnar tables.
Other types of containers such as fixed size entry container, var size entry container and B* trees are
used internally by the persistence layer. Data of all different types of containers will finally be passed as
pages to the lower layers.
Each HANA database service (Index-, Name-, Statistics- and XS Server) stores the pages, whether from Row Store or Column Store, in its own Data Volume. From the perspective of the Linux operating system, the Data Volume is a physical file. The physical files are managed by the HANA Page I/O module, which partitions each file into so-called Superblocks, each having a size of 64 MB.

The HANA Page I/O module ensures that each superblock contains only pages of the same size. Pages that are used for Row Store tables always have a size of 16 KB, while pages of the Column Store are 4 KB, 16 KB, 64 KB, 256 KB, 1 MB, 4 MB or 16 MB. The pages of the Row Store do not share superblocks with 16 KB pages that come from the Column Store. When the HANA Page I/O module has to write data, it will try to make the pages as large as possible.
Apart from the business data, other information is stored on the Data Volumes, such as the Undo LOG, the Restart Record, the List of open Transactions and other technical data.
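Since each 64 MB superblock holds pages of a single size, the number of pages per superblock follows directly from the page size; the short sketch below simply spells out this arithmetic for the page sizes listed above.

```python
SUPERBLOCK_KB = 64 * 1024                              # 64 MB superblock
PAGE_SIZES_KB = [4, 16, 64, 256, 1024, 4096, 16384]    # 4 KB ... 16 MB

for page_kb in PAGE_SIZES_KB:
    pages = SUPERBLOCK_KB // page_kb
    print(f"{page_kb:>6} KB pages -> {pages:>5} pages per 64 MB superblock")
```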
HANA LOG
Each HANA database service (Index-, Name-, Statistics- and XS Server) has its own Log Volumes that are
used to record changes. As in other transactional DB systems, HANA will use this information after a system
crash to redo all completed transactions.
At least 2 Log Volumes, called Log Segments, will initially be created, so that logging can continue while a full log segment is archived. Additional Log Segments can be created if needed. The Log Segments are represented on Linux operating system level as physical files, and the HANA Logger writes pages with a size of 4 KB, 16 KB, 64 KB, 256 KB or 1 MB into these files.

8.2 HANA I/O Pattern

In this section we will arrange the processes that generate I/O operations by their I/O frequency: first
processes that continuously generate I/O operations, and then processes that periodically or rarely generate
I/O operations.
Redo LOG writing
The information that is needed to redo a committed transaction must be successfully written to the physical storage level before processing can continue; the continuous Redo LOG write I/O operations are therefore synchronous. The LOG write I/O pattern is sequential, and depending on the commit frequency and the filling state of the Log Buffer, Redo LOG I/O blocks are between 4 KB and 1 MB. The major part of the performance-crucial LOG I/O operations processes 4 KB blocks, therefore measures should be taken that optimize sequential writes of 4 KB I/O blocks with regard to I/O latency.
Savepoint writing
The persistence layer periodically performs savepoints. During the savepoint operation, modified data pages in the page cache are written to the physical storage level. Buffered redo log entries are flushed to the physical storage level as well. The purpose of performing savepoints is to speed up the restart, since the redo log need not be processed from the beginning but only from the last savepoint position. Periodic savepoints are therefore essential to be prepared for a fast restart. By default savepoints are processed every 5 minutes; the savepoint period is adaptable.
The periodic Savepoint write I/O operations are asynchronous. Savepoints generate some load on Log Volumes, but the main load is on Data Volumes. The Savepoint I/O pattern is sequential and I/O blocks between 4 KB and 16 MB are processed. Due to HANA's practice of making the pages as large as possible, rather large I/O blocks can be expected. Therefore measures should be taken that optimize I/O bandwidth (the processed MB/s) rather than throughput (IOPS).

Delta Merge writing


HANA keeps columnar organized data in the main storage, a memory area containing data that is compressed and organized for highest read performance. Modified data is stored in a memory area related to the main storage, the delta storage, so that the performance properties of the main storage are retained.
The purpose of the delta merge operation is to move modified data that is collected in the delta store into the
read optimized main storage. During the merge operation, the complete main storage is rewritten to disk. The
delta merge is performed asynchronously to transactions that made changes. Certain column store events
trigger a delta merge, such as: (1) the number of lines in delta storage exceeds the specified limit or (2) the
memory consumption of the delta storage exceeds the specified limit.
The regular Delta Merge write I/O operations are asynchronous. Delta Merges generate load on Data Volumes. The Delta Merge I/O pattern is sequential and I/O blocks between 4 KB and 16 MB are processed. Due to HANA's practice of making the pages as large as possible, rather large I/O blocks can be expected. Therefore measures should be taken that optimize I/O bandwidth (the processed MB/s) rather than throughput (IOPS).
Backup processing
Two kinds of Backup are distinguished: the Log Backup and the Data Backup.
Log Backups are automated local processes that are triggered and executed by each HANA server (Index-,
Name-, Statistics- and XS Server) autonomously. The Log Backup is performed (1) when a log segment is
full, (2) when a configured time limit is exceeded or (3) after startup of a server.
Data Backups are coordinated for all HANA servers by the Backup Manager. The backup manager first tells all HANA servers to perform a global savepoint. The global savepoint is required to get a system-wide consistent snapshot of the database. When the global savepoint is complete, a database-internal snapshot is created based on this savepoint. This means that the pages belonging to this savepoint will not be overwritten by subsequent savepoints. Now the data volumes on the persistence layer contain a snapshot with the frozen consistent state of the database.
In the following phase of the backup procedure, the backup manager tells the backup executors of the servers to write the content of the previously created snapshot to the backup files. Each server reads the pages of the snapshot from the persistence layer and writes them to the backup files.
The regular Backup I/O operations are asynchronous. The first phase of Data Backups (Savepoints) generates write I/O load, some on Log Volumes but the main load on Data Volumes. The I/O pattern is sequential and I/O blocks between 4 KB and 16 MB are processed.
During the Log Backup or the second phase of the Data Backup, large I/O blocks are sequentially read and
then written to the backup files. Since usual database operations (Savepoint writing, Delta Merges) continue
during the second phase of the Data Backup, backup Read I/O operations may compete with Write I/O
operations.
Since Backups sequentially process large I/O blocks, measures should be taken that optimize I/O
bandwidth (the processed MB/s).

Read I/O operations


Read I/O operations are rare events, as they are mainly performed during backups. Of course, data will be read from the persistence layer during system startup, and during usual database processing read I/O operations will be performed to load rarely used tables.
In a multi-host HANA environment, a high read I/O load occurs during failover, when a standby HANA host takes over the work. The time needed to fail over to the standby host depends on the size of the Row Store, since HANA can start processing only after the Row Store is completely loaded. The latter is only relevant if the HANA server node that is hosting the Row Store is affected by the server failure.
Application transactions, and hence system users, must wait until these rare Read I/O operations are completed. The I/O pattern is sequential and I/O blocks between 4 KB and 16 MB are processed, therefore measures should be taken that optimize I/O bandwidth (the processed MB/s).
Bottom line
From the perspective of the I/O layer SAP HANA performs mainly sequential write I/O operations. On the
Data Volumes mainly large I/O blocks are processed, while on Log Volumes mainly 4 KB I/O blocks are
processed.
To provide best I/O performance for HANA, I/O operations on the Data Volumes should be optimized for
high bandwidth (MB/s) and due to the synchronous Log write I/O operations on Log Volumes, these should
be optimized for low latency (ms).

8.3 HANA Storage Connector API

In HANA scale-out solutions, where one HANA database deploys a cluster of hosts and one standby host is ready to take over in case an active host fails, a layer is required that coordinates the access to Data and Log Volumes. As discussed in chapter 5.3, this layer can be a Cluster File System that provides the following components:
The Cluster-Framework that describes which Host Systems belong to a cluster and take over functions
of the Cluster File System.
The Locking Mechanism that ensures that no two Host Systems can simultaneously make changes on
the same file.
The Fencing that ensures that a malfunction on one Host System does not destroy any data.
The HANA Storage Connector API provides exactly this functionality. The Storage Connector manages 2 LUNs on each HANA host: one LUN is used for the Data Volumes and the other for the Log Volumes of the HANA servers (Index-, Name-, Statistics- and XS Server) running on the host. SAP offers a ready-to-use implementation of this Storage Connector API for all storage subsystems attached via Fiber Channel, using native Linux multipathing and supporting the SCSI-3 protocol; in particular, the Persistent Reservation (PR) feature of the SCSI-3 protocol is used for fencing.
In case an active host of a HANA cluster fails, HANA calls the appropriate Storage Connector API method, to
(1) allow the storage device driver to re-mount the required Data and Log LUNs to the standby host and (2)
fence off these LUNs from the failed host.

If the Storage Connector does not suffice, for example, because of a shared storage approach or the lack of
SCSI-3 persistent reservations, a custom Storage Connector can be implemented.
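As an illustration only, the following Python skeleton sketches the kind of operations such a custom connector has to cover during a failover; the class and method names are our own placeholders and do not reflect the actual Storage Connector API, which is documented in the SAP HANA Fiber Channel Storage Connector Admin Guide (see References).

```python
# Hypothetical skeleton, NOT the SAP HANA Storage Connector API.
class CustomStorageConnector:
    """Attach, detach and fence the Data and Log LUNs of a HANA host."""

    def attach(self, host, data_lun, log_lun):
        # Make the Data and Log devices available on the host that takes over,
        # e.g. by mounting them via vendor- or cluster-specific tooling.
        raise NotImplementedError

    def detach(self, host, data_lun, log_lun):
        # Release the devices on a host that no longer runs the HANA services.
        raise NotImplementedError

    def fence(self, failed_host, data_lun, log_lun):
        # Ensure the failed host can no longer write to the LUNs, e.g. via
        # SCSI-3 persistent reservations or storage-side LUN masking.
        raise NotImplementedError
```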

8.4 HANA on an optimally configured I/O Stack

For single-host SAP HANA systems, the configuration guidelines described in Chapter 6 (The optimal configuration of the I/O Stack) are basically applicable.
HANA requires for its servers (Index-, Name-, Statistics- and XS Server) 2 separate storage devices: one for the Data Volumes and a second for the Log Volumes. These 2 storage devices can be Logical Volumes (LV) that are managed by the Linux Logical Volume Manager (LVM), and each Logical Volume should be provided from a separate Volume Group (VG) consisting of LUNs from the Storage Area Network.
The Logical Volumes consist of contiguous chunks of data that should be allocated in a striped fashion (next chunk on next Host Device = LUN) across all Host Devices belonging to the Volume Group. Since HANA partitions the Data Volumes into Superblocks of 64 MB, the LV chunks should have the same size. Due to this measure, the performance capabilities of many LUNs can be utilized for I/O operations on the Data Volumes.
The Logical Volume used for HANA Log Volumes should be prepared for parallel I/O operations on all LUNs
that belong to the Volume Group to increase bandwidth and to minimize I/O latency of Log writes. This will be
achieved by block level striped Logical Volumes. With this implementation the LUNs of the Volume Group
build a stripe set (the number of LUNs is the stripe width) and the blocks of the stripe set (each block has
the same stripe size) are balanced across all LUNs of the VG.
The parameters for the optimization of the I/O performance on Log Volumes are stripe width and stripe
size, and these should be adapted to the expected peak write load. The following graphic gives an idea how
the I/O bandwidth (MB/s) on a Log Volume depends on the LV stripe width and stripe size, assuming that the
write I/O latency on the LUNs is 2 ms (consider that larger I/O blocks may have a higher latency).
[Graphic: I/O bandwidth (MB/s) on a Log Volume as a function of LV stripe width and stripe size.]
In this example the stripe width varies between 4 LUNs and 10 LUNs, and the stripe size between 4 KB and 64 KB. The assumed average write I/O latency is realistic, and in our analyses of storage infrastructures even a better write I/O latency was measured.
If the expected peak write load of all Log Volumes that are stored on the LOG Logical Volumes is 250 MB/s, then stripe width = 8 LUNs and stripe size = 64 KB will be sufficient, and in case the average write I/O latency is 1 ms, a bandwidth of 500 MB/s will be reached.

Bandwidth [MB/s] = (Stripe Size [KB] / Latency [ms] * 1000) / 1024 * Stripe Width
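As a quick cross-check of this formula, the following sketch reproduces the two figures quoted above (250 MB/s at 2 ms and 500 MB/s at 1 ms for a stripe width of 8 LUNs and a 64 KB stripe size):

```python
def log_bandwidth_mb_per_s(stripe_size_kb, latency_ms, stripe_width):
    # Bandwidth [MB/s] = (Stripe Size [KB] / Latency [ms] * 1000) / 1024 * Stripe Width
    return (stripe_size_kb / latency_ms * 1000) / 1024 * stripe_width

print(log_bandwidth_mb_per_s(64, 2.0, 8))   # 250.0 MB/s
print(log_bandwidth_mb_per_s(64, 1.0, 8))   # 500.0 MB/s
```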

In a multi-host SAP HANA scale-out solution, where the Storage Connector API is used, the Linux Logical Volume Manager cannot be used; this is the current state at the time of writing this document (SAP HANA SP6, July 2013).
This means that the Logical Volume Manager based optimizations cannot be applied. Thus the 2 storage devices that are required for each HANA host are LUNs from the Storage Area Network, and all HANA I/O performance demands must be met at this level.

The graphic shows the HANA I/O stack on one host of a multi-host scale-out solution (the i-th host). Two LUNs (Host Devices) are provided from the Storage Area Network: on the Data Host Device (blue) the file system /hana/data/<sid>/mnt<i> (<sid> = SAP system identifier; <i> = host number) is mounted, and on the Log Host Device (red) the file system /hana/log/<sid>/mnt<i>. The Linux ext3 block-based file system is used.
For every HANA service there is a subdirectory on both file systems; in the example above, the subdirectory /hdb<j> for the nameserver and /hdb<j+1> for the indexserver. Every HANA service stores its DATAvolume<m> in the DATA file system and its Log Volumes LOGsegment<m>_<n> in the LOG file system.
In each SAP HANA system there is one master nameserver that owns the topology and distribution data.
This data is replicated to all other nameservers, called slave nameservers. The slave name servers write the
replicated data to a cache in shared memory from where the indexservers of the same instance can read it.
The master name server has its own persistence where it stores name server data (topology, distribution
data). The slave name servers have no persistence as they are only holding replicated data.
In a HANA scale-out solution it is best practice to run only one indexserver on each host per SID.
How do we get the I/O performance required by HANA services?


The following considerations are valid for both single-host and multi-host (scale-out) HANA solutions. As discussed in chapter 5 (Managing Storage on Host Systems), it is crucial for I/O performance to use as many I/O buffers of Host Devices and Host Bus Adapters (HBAs) as possible, and to provide sufficient I/O paths for the LUNs; these are the connections from HBAs via Switches to Frontend Adapter Ports of the storage system.
Paths
For data availability, at least 2 paths should be defined for each LUN, providing access to SAN storage should one path fail, as well as I/O workload balancing across all available paths. On Linux operating system level, the Device-Mapper-Multipath (dm-multipath) module will be used for this purpose. For the distribution of I/O requests across the available paths, dm-multipath offers 3 path_selector algorithms:
round-robin loops through every path in the path group, sending the same amount of I/O load to each.
queue-length sends the next bunch of I/O down the path with the least amount of outstanding I/O.
service-time chooses the path for the next bunch of I/O based on the amount of outstanding I/O to the
path and its relative throughput (I/O operations per second).
Up to Linux kernel 2.6.31, the default number of I/O operations to route to a path before switching to the next path in the same path group was 1000, specified by the parameter rr_min_io. Starting with Linux kernel 2.6.32, the new parameter rr_min_io_rq was introduced, which also specifies the number of I/O operations to route to a path before switching to the next path in the same path group, but for request-based dm-multipath; its default is 1.
Assessment and suggestions
Although the round-robin path_selector algorithm is suggested as the default, notice that this algorithm does not consider the impact of I/O load on components (HBAs or Frontend Adapter Ports) that are also used by other paths. The other two path_selector algorithms do consider the utilization of the components used for the path; we therefore suggest using the path_selector algorithm queue-length for the Data LUNs and service-time, which takes the I/O service time into account, for the Log LUNs.
Clearly, 1000 I/O operations before switching to the next path is not optimal for applications that sequentially process large I/O blocks (such as HANA), since this may lead to a high utilization of the HBA and Frontend Adapter Port used for the path, while components used on other paths are idle. Switching to another path after every I/O operation does not lead to considerable overhead (storage vendor benchmarks confirm this), which is why the new parameter rr_min_io_rq with default = 1 was introduced.
We suggest checking which setting the storage vendors propose for their storage systems. If there are no proposals, start for the HANA LUNs with rr_min_io_rq = 10 (respectively rr_min_io = 10) and consider increasing this value in case the bandwidth (MB/s) is not sufficient.
How many paths are actually sufficient depends on the performance of the components (4Gbit, 8Gbit, ..., HBAs and Frontend Adapter Ports) and their availability. It is best practice to use no more than 4 paths for a LUN, and optimally the paths share neither HBAs nor Frontend Adapter Ports of the storage system. Since the I/O patterns on HANA Data and Log LUNs are the same, they can share the paths. In case the continuous Log write I/O operations on the Log LUNs are impaired by the periodic writes on the Data LUNs, use separate paths for Log and Data LUNs.

I/O Queues
The queue length is the number of I/O operations executed in parallel; for Host Bus Adapters the maximum is 2048. The sum of the queue lengths of all Host Devices that share a Host Bus Adapter must not be greater than the HBA queue length.
Notice that HBAs of different host systems may share Frontend Adapter Ports of the storage system. If this is the case, the I/O load on an HBA configured with the maximum queue length may impair the performance of I/O operations that are generated on another host system. The information about the exact number of parallel I/O operations that can be processed on a certain type of Frontend Adapter can only be provided by the storage vendors.
Example
Published benchmarks of Frontend Adapter cards from different vendors show that 8 Gbit cards can process a maximum of about 3200 I/O operations in parallel (queue length). In this example we will assume that 4 storage system Frontend Adapter cards are used for our HANA database, each with a maximum of 3200 parallel I/O operations.
We further assume a HANA scale-out solution with 2 active hosts plus 1 standby host, and that 4 Host Bus Adapter cards are used on each host. Each of the two LUNs on a host makes use of all 4 HBAs, so each LUN has 4 paths.
How should the queue length for the 4 LUNs and 8 HBAs be set, assuming that HANA generates the same I/O load on all LUNs?
According to our assumption, the maximum number of parallel I/O operations on storage level is 4 * 3200 = 12800. We further assumed that the I/O load is equivalent on both active hosts, therefore each host can process 12800 / 2 = 6400 and each of the 4 HBAs 6400 / 4 = 1600 parallel I/O operations. Thus the queue length for all HBAs will be 1600 and for each LUN = 800.
Due to the continuous load on the LOG LUNs, we might consider defining a greater queue length for these, to favor Redo LOG write I/O operations during peak load periods.
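The queue length arithmetic of this example can be written down directly; the figures (4 Frontend Adapter cards with a queue depth of 3200, 2 active hosts, 4 HBAs and 2 LUNs per host) are the assumptions made above.

```python
# Assumptions taken from the example above.
fa_cards, fa_queue_depth = 4, 3200   # storage Frontend Adapter cards and their queue depth
active_hosts = 2                     # 2 active + 1 standby host
hbas_per_host = 4
luns_per_host = 2                    # one Data and one Log LUN, each using all 4 HBAs

total_parallel_io = fa_cards * fa_queue_depth            # 12800
per_host = total_parallel_io // active_hosts             # 6400
hba_queue_length = per_host // hbas_per_host             # 1600 per HBA (below the maximum of 2048)
lun_queue_length = hba_queue_length // luns_per_host     # 800 per LUN, as derived in the text

print(hba_queue_length, lun_queue_length)                # 1600 800
```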

Optimizations on Storage System level


SAP HANA performs mainly sequential write I/O operations and apart from Backups, Read I/O operations are
rare events. If data is actually read, then this happens during the initial system start or when switching to the
standby host.
As discussed in chapter 3 (Storage Systems), ALL I/O operations pass through the cache of the storage system, but the data that HANA wants to read is very likely not in the cache. Therefore all measures that provide as much write cache as possible will be beneficial for HANA write I/O operations, but most enterprise storage systems do not distinguish between read and write cache: all I/O operations share the cache. Storage systems allow only a part of the cache to be filled with modified data that has not yet been written to disk (destaged), the so-called Write Pending Limit. Whenever this limit is reached, the storage system must switch to deferred write mode to destage the data, and this causes a considerable degradation of write performance.
Consequently Cache optimization means to avoid deferred writes and to write modified data as fast as
possible to physical disks.
Optimize Storage System Backend I/O performance
Basically this will be achieved by LUNs that are distributed across many physical disks and by a balanced utilization of all Disk Adapters. For this, storage systems distribute (stripe) LUNs evenly across the physical disks of a RAID group, and in case the Storage System can combine RAID groups into a storage pool, the provided LUNs are additionally striped across multiple RAID groups.
If the storage system offers a Storage Tiering feature, we suggest using it. The storage tiers consist of physical disks with the same performance characteristics: highest performance is provided in the tier equipped with SSDs (Solid State Disks), followed by tiers with Fiber Channel (FC) disks, fast SAS (Serial Attached SCSI) disks and/or SATA disks. Storage Tiering provides an automatic relocation of the most frequently used data to the physical disks with the highest performance capabilities.
Notice that the HANA database is very likely not the only application that uses the storage pool of the storage
system, therefore the pool must be equipped with as many physical disks as needed to meet the I/O
requirements of all applications.
Impact on I/O performance by synchronous data replication between storage systems
The synchronous replication of data changes from the primary to a secondary storage system will increase the Write I/O latency by about 1 ms per 300 km cable length (speed of light). In addition, deferred write occurrences on the secondary storage system, caused by insufficient backend I/O performance, will also increase the latency of synchronous Write I/O operations, that is, of HANA Log writes on the primary storage system. The measures used to optimize the backend I/O performance of the primary storage system must therefore also be applied to the secondary storage system.

9 References
Introduction to Storage Area Networks and System Networking, IBM Redbook SG24-5470-04, Jon Tate, Pall Beck, Hector Hugo Ibarra, Shanmuganathan Kumaravel, Libor Miklas, November 2012
iX kompakt, Storage von SSD bis Cloud, Susanne Nolte et al., February 2011
Overview SAP HANA tailored data center integration, link to www.saphana.com
SAP HANA Fiber Channel Storage Connector Admin Guide, Thomas Weichert, April 2013

© 2013 SAP AG. All rights reserved.


SAP, R/3, SAP NetWeaver, Duet, PartnerEdge, ByDesign, SAP BusinessObjects Explorer, StreamWork, and other SAP products and
services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and other
countries.
Business Objects and the Business Objects logo, BusinessObjects, Crystal Reports, Crystal Decisions, Web Intelligence, Xcelsius, and
other Business Objects products and services mentioned herein as well as their respective logos are trademarks or registered
trademarks of Business Objects Software Ltd. Business Objects is an SAP company.
Sybase and Adaptive Server, iAnywhere, Sybase 365, SQL Anywhere, and other Sybase products and services mentioned herein as
well as their respective logos are trademarks or registered trademarks of Sybase, Inc. Sybase is an SAP company.
All other product and service names mentioned are the trademarks of their respective companies. Data contained in this document
serves informational purposes only. National product specifications may vary.
These materials are subject to change without notice. These materials are provided by SAP AG and its affiliated companies (SAP
Group) for informational purposes only, without representation or warranty of any kind, and SAP Group shall not be liable for errors or
omissions with respect to the materials. The only warranties for SAP Group products and services are those that are set forth in the
express warranty statements accompanying such products and services, if any. Nothing herein should be construed as constituting an
additional warranty.
