SAP DBA Cockpit
Flight Plans for DB2 LUW Database Administrators
DB2 is now the database most recommended for use with SAP applications, and DB2
skills are now critical for all SAP technical professionals. The most important tool
within SAP for database administration is the SAP DBA Cockpit, which provides a
more extensive administrative interface for DB2 than for any other database. This book
steps through every aspect of the SAP DBA Cockpit for DB2. Readers will quickly
learn how to use the SAP DBA Cockpit to perform powerful DB2 administration tasks
and performance analysis. The book provides both DB2 beginners and experts with an
invaluable reference to the wealth of information accessible from within the SAP
DBA Cockpit for DB2. It makes it easy for SAP NetWeaver administrators, consultants,
and DBAs to understand the strengths of DB2 for SAP, and to leverage those
strengths within their own unique application environments.
PATRICK ZENG
JEREMY BROUGHTON
Certified DB2 Solutions Expert
SAP Certified Basis Consultant for DB2 on NetWeaver 2004
Certified SAP Technology Consultant
MC Press Online, LP
125 N. Woodland Trail
Lewisville, TX 75077
SAP DBA Cockpit
Flight Plans for
DB2 LUW Database Administrators
Eduardo Akisue
Jeremy Broughton
Liwen Yeow
Patrick Zeng
Every attempt has been made to provide correct information. However, the publisher and the author
do not guarantee the accuracy of the book and do not assume responsibility for information in-
cluded in or omitted from it.
IBM is a registered trademark of International Business Machines Corporation in the United States,
other countries, or both. DB2 is a registered trademark of International Business Machines Corpo-
ration in the United States, other countries, or both. All other product names are trademarked or
copyrighted by their respective manufacturers.
This publication is protected by copyright, and permission must be obtained from the publisher
prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by
any means, electronic, mechanical, photocopying, recording, or otherwise.
ISBN: 978-1-58347-089-3
Acknowledgments
The authors would like to express their gratitude for the technical contributions
received from the following colleagues:
At IBM:
Guiyun Cao
Martin Mezger
Karl Fleckenstein
At SAP AG:
Torsten Ziegler
Ralf Stauffer
Andreas Zimmermann
Steffen Siegmund
Britta Bachert
Contents
Performance: Active Inplace Table Reorganizations . . . . . . . 41
Performance: History–Database . . . . . . . . . . . . . . . . . . 41
Performance: History–Tables . . . . . . . . . . . . . . . . . . . 42
Performance Warehouse . . . . . . . . . . . . . . . . . . . . . . 43
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Updating Statistics . . . . . . . . . . . . . . . . . . . . . . . 86
Table Reorganization. . . . . . . . . . . . . . . . . . . . . . 87
Custom Job Scripts . . . . . . . . . . . . . . . . . . . . . . . 87
The DBA Log . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Back-end Configuration . . . . . . . . . . . . . . . . . . . . . . 89
SQL Script Maintenance. . . . . . . . . . . . . . . . . . . . . . 90
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Monitoring Settings . . . . . . . . . . . . . . . . . . . . . . . 128
Automatic Maintenance Settings . . . . . . . . . . . . . . . . . 130
Automatic Backups . . . . . . . . . . . . . . . . . . . . . . 130
Automatic RUNSTATS. . . . . . . . . . . . . . . . . . . . 131
Automatic REORG . . . . . . . . . . . . . . . . . . . . . . 132
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
The Dump Directory . . . . . . . . . . . . . . . . . . . . . . . 168
The DB2 Help Center . . . . . . . . . . . . . . . . . . . . . . 169
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Foreword
This is a remarkable book, written by IBM experts with in-depth knowledge of
SAP on DB2. The authors draw on profound experience gained not only from
their work with many customers who adopted DB2 for their SAP applications,
but also from their very close cooperation with SAP development. Based on the
analogy of a pilot’s need to know about the controls of his aircraft, this book
takes you through the entire world of DB2 monitoring and administration. You
will find it a useful introduction if you are new to SAP on DB2, and you will also
be able to use it as a reference if you are an experienced DBA.
The SAP DBA Cockpit is one of many visible proof points of the excellent inte-
gration of SAP solutions with IBM DB2. This book will familiarize you with ev-
erything you need to know to operate IBM DB2 optimally with your SAP
solution. In a tutorial-like, easy-to-read style, it takes you from the basic controls
to advanced monitoring and tuning, and at the same time provides you with useful
background information about DB2. Even better, it is simply fun to read.
—Torsten Ziegler
SAP Manager
DB2 LUW Platform Development
Chapter 1
The SAP DBA Cockpit
Piloting a large commercial aircraft requires a great deal of skill. Pilots must
understand how the adjustments they make to the aircraft components affect
the flight of the airplane. Balancing lift and drag, speed and altitude, yaw and
wind are all important parts of a safe, comfortable flight. However, a huge
amount of technology also operates and manages the individual aircraft compo-
nents. A pilot who flew the aircraft without knowing what the technology does
could disrupt automated flight operations. Similarly, if the technology were not
leveraged specifically for the aircraft flight requirements, flight operations could
become more difficult. To ensure an efficient and comfortable flight, an adept pi-
lot must understand both the high-level operation of the aircraft and the underly-
ing technology that operates the components.
CHAPTER 1: The SAP DBA Cockpit
Administrators can now easily access all of the database key performance indica-
tors (KPIs) and make changes to improve system performance from within the
same dialog screens. The most important information for SAP administrators is
now at their fingertips, and the database administrative tasks can often be exe-
cuted with a few simple mouse clicks. This single DBA Cockpit interface simpli-
fies monitoring and maintenance tasks, and can reduce the overall time spent on
database administration.
The DBA Cockpit contains two main sections: a large detailed display on the
right, and a small navigation menu on the left. Figure 1.1 shows the System Con-
figuration screen, which is the initial dialog screen displayed by running the
DBACOCKPIT transaction. This can also be displayed at any time by clicking the
System Configuration button, just above the left navigation menu.
Figure 1.1: The SAP DBA Cockpit for DB2 has a large display area on the right and a small
navigation menu on the left.
The right display window contains a list of all the database systems that are con-
figured for monitoring from the DBA Cockpit. The left navigation menu contains
folders for navigating into the database function groups.
The left navigation frame of SAP Enhancement Package 1 for SAP NetWeaver
7.0 contains two additional entries. The first links the user directly to the
DB2 LUW main page of the SAP Developer Network (SDN), allowing the
user to browse the SDN directly within the SAP GUI. The other entry
launches the new web browser-based DBA Cockpit. Several of the new features
of the DBA Cockpit are now launched as WebDynpro browser applications.
When one of these is clicked in the SAP GUI-based DBA Cockpit, the corre-
sponding WebDynpro screen will automatically launch in the browser. The Start
WebDynpro GUI menu entry launches the main page of the web browser-based
DBA Cockpit, similar to the DBACOCKPIT transaction in the SAP GUI.
The contents of the left navigation menu may differ slightly among different ver-
sions of SAP BASIS, in order to leverage new functionality available in the latest
releases of SAP and DB2. This book illustrates the latest features available in the
DBA Cockpit in SAP Enhancement Package 1 for SAP NetWeaver 7.0.
Remote connections can be established using the database information from the
System Landscape Directory (SLD). Alternatively, they can be configured manu-
ally from within the DBA Cockpit, using the DB Connections button at the top of
the left navigation menu. From the System Configuration screen, simply click the
SLD System Import button. This provides a graphical interface to select and reg-
ister the unregistered SAP systems into the cockpit. This allows the entire SAP
system landscape to be centrally managed in the SLD, and provides a simple way
to register any new or changed systems in your central DBA Cockpit.
Alternatively, click the Add button to manually register new databases into the
cockpit. This allows administrators to register even non-SAP systems. Therefore,
the DBA Cockpit can provide a single administrative GUI for every SAP and
non-SAP database in your IT landscape.
Summary
The SAP DBA Cockpit for DB2 is a powerful interface for SAP pilots to cen-
trally manage the DB2 database operations of their SAP systems. It provides a
single point of administration for every DB2 database in your organization. The
SAP DBA Cockpit for DB2 gives administrators fast and easy access to all of the
most important DB2 database information, all from within the familiar look and
feel of SAP GUI.
Chapter 2
Performance Monitoring
Are You Flying a Glider or a Jet?
CHAPTER 2: Performance Monitoring
Probably the most important performance indicator is the buffer pool hit ratio. This
can be calculated by comparing the number of logical and physical reads.
Alternatively, it can be displayed by double-clicking one of the partitions to view
the database snapshot data from that partition. On each partition, the index hit
ratio should be about 98 percent, and the data hit ratio should be 95 to 98 percent.
Figure 2.1: The performance characteristics of the DB2 database partitions are shown in
the Performance: Partition Overview screen.
Administrators should try to balance I/O as evenly as possible across all parti-
tions in the system. The easiest way to achieve this is to distribute all large or
heavily accessed tables across all partitions. However, for very large systems
with a very high number of partitions, it might be impractical to distribute tables
thinly across all partitions. In this case, heavily accessed tables can be balanced
equally across subsets of partitions. For example, one heavily accessed InfoCube
can reside on partitions 1 through 9, and another heavily accessed InfoCube can
reside on partitions 10 through 19. The most important point is to try to keep da-
tabase size and I/O activity as balanced as possible across all partitions, so that
the database leverages the full processing capacity of all partitions equally.
The database snapshot dialog groups its statistics into the following tabs:

• Buffer pool
• Cache
• Asynchronous I/O
• Direct I/O
• Real-time statistics
• Locks and deadlocks
• Logging
• Calls
• Sorts
• XML storage
Figure 2.2: This tab of the database snapshot dialog displays statistics about the buffer
pool.
High buffer quality is probably one of the most important criteria for perfor-
mance. If an agent can find the pages it needs already in memory, I/O wait is re-
duced and response time improves. For peak performance, overall buffer quality
for the entire database should be above 95 percent, with data hit ratios above 95
percent and index hit ratios above 98 percent. Hit ratios can be improved by
increasing buffer pool size, compressing the database, improving cluster ratios for
SAP NetWeaver BW, or by optimizing buffer pool allocation, which can be done
automatically by the DB2 Self Tuning Memory Manager (STMM).
Buffer pool hit ratios depend on the ratio of logical and physical reads. Each re-
quest for a page of table or index data is referred to as a logical read. In a
well-tuned system, the majority of logical read requests will be satisfied from the
buffer pool, resulting in buffer pool hits. If a page is not in the buffer pool, a
buffer pool miss occurs, and the page must be read from disk, which is called a
physical read. The buffer pool quality is the ratio of the number of page requests
found in the buffer pool to the total number of logical read requests.
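The calculation just described can be expressed directly in code. A minimal sketch in Python; the function and argument names are illustrative, not the exact DB2 monitor element names:

```python
def buffer_pool_quality(logical_reads: int, physical_reads: int) -> float:
    """Percentage of logical read requests satisfied from the buffer pool.

    logical_reads  - total page requests (data + index)
    physical_reads - pages that had to be fetched from disk (buffer pool misses)
    """
    if logical_reads == 0:
        return 100.0  # no activity yet; treat as perfect
    return 100.0 * (logical_reads - physical_reads) / logical_reads

# Example: 200,000 logical reads, 6,000 of which went to disk
quality = buffer_pool_quality(200_000, 6_000)
print(f"{quality:.1f}%")  # 97.0% -- below the 98 percent index target
```

The same calculation can be run separately for data pages and index pages to compare against their respective 95 and 98 percent targets.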
Physical reads and writes are unavoidable, because new transactions are always
reading and writing new data to the database. However, a properly configured da-
tabase will perform most disk I/O asynchronously and in parallel, thereby mini-
mizing the I/O wait experienced by the client and maintaining high buffer
quality. Physical reads and writes can either be synchronous or asynchronous, de-
pending on which DB2 agent (process or thread) performs the I/O operation.
Synchronous I/O is performed directly by the database agent working on behalf
of the client connection, and asynchronous I/O is performed by the DB2
prefetchers and page cleaners. The statistic labeled “Average Time for the Physi-
cal Reads and Physical Writes” on the DBA Cockpit indicates the I/O subsystem
performance. An average physical read time above 10ms and/or an average phys-
ical write time above 5ms indicates an I/O subsystem that is not performing
optimally.
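The 10 ms and 5 ms thresholds above can be turned into a quick health check. A sketch, assuming the inputs are the average physical read and write times (in milliseconds) shown in the cockpit:

```python
READ_MS_LIMIT = 10.0   # avg physical read time above this suggests slow I/O
WRITE_MS_LIMIT = 5.0   # avg physical write time above this suggests slow I/O

def io_subsystem_ok(avg_read_ms: float, avg_write_ms: float) -> bool:
    """True if average physical I/O times are within the recommended limits."""
    return avg_read_ms <= READ_MS_LIMIT and avg_write_ms <= WRITE_MS_LIMIT

print(io_subsystem_ok(8.2, 3.1))   # healthy I/O subsystem
print(io_subsystem_ok(14.5, 3.1))  # reads too slow
```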
Synchronous reads occur when an agent reads a page of data from disk itself,
rather than signaling the prefetchers to read the page asynchronously. This occurs
most frequently during random requests for single pages, which are common in
OLTP applications operating on single rows of data via an index. However, this
may also occur if the prefetchers are all busy with other prefetch requests.
Each synchronous read request results in I/O wait at the client, because the agent
processing the SQL statement must directly perform a read from disk before it
can continue query processing. For single-row access, it is just as efficient for the
agent to read the single page itself. However, for prefetch requests involving
multiple pages, it is far more efficient to have the prefetchers read these pages in
the background.
The physical writes specify the number of pages written from buffer pool to disk.
Similar to a read, a write can be either synchronous or asynchronous, depending
on the agent that performs it. Asynchronous writes are performed in the back-
ground by the DB2 page cleaners at specific checkpoints. These are far more effi-
cient than synchronous writes, which are performed directly by the DB2 agents
to make room in the buffer pool for new data pages being accessed by that agent.
DB2 can perform page cleaning in two different ways: Standard Page Cleaning
or Proactive Page Cleaning. By default, all new SAP installations use Standard
Page Cleaning.
Whenever one of the two Standard Page Cleaning thresholds, CHNGPGS_THRESH or
SOFTMAX, is exceeded, the DB2 page cleaners begin writing changed pages from
the buffer pool(s) to disk. This avoids LSN gap situations, and ensures that there
is room in the buffer pool for future prefetch requests.
Proactive Page Cleaning is enabled by setting the following DB2 registry variable:

db2set DB2_USE_ALTERNATE_PAGE_CLEANING=ON
Using Proactive Page Cleaning, the page cleaners no longer respond to the
CHNGPGS_THRESH parameter. Rather than keeping a percentage of the buffer
pool clean, this alternate method only uses SOFTMAX, and DB2 keeps track of
good victim pages and their locations in the buffer pool. Good victim pages in-
clude those that have been recently written to disk and are unlikely to be read
again soon. If either an LSN gap occurs, or the number of good victim pages drops
below an acceptable threshold, the page cleaners are triggered. They proceed to
search the buffer pool, write out pages, and keep track of these new good victim
pages. The page cleaners will not only write out pages in an LSN gap situation,
but will also write pages that are likely to enter an LSN gap situation soon, based
on the current level of activity.
When the database agents need to read new data into the buffer pool, the
prefetchers read the list of good victim pages, rather than searching through the
buffer pool for victims. This tends to spread writes more evenly, by writing
smaller amounts more frequently. By spreading the page cleaner write operations
over a greater period of time, and avoiding buffer pool searches for victim pages,
high-update workloads might see performance improvements.
Since most SAP workloads on DB2 9.5 have been found to perform marginally
better using Standard Page Cleaning, we recommend using it for all SAP applica-
tions. Future changes to Proactive Page Cleaning might increase its usage within
SAP. For now, though, if you have a uniquely heavy-update workload that you
think might benefit from Proactive Page Cleaning, test the change thoroughly to
determine the effect on performance before enabling it in the production system.
The No Victim Buffers element in the DBA Cockpit can help evaluate whether
you have enough page cleaners when using Proactive Page Cleaning. This ele-
ment displays the number of times a database agent was unable to find
pre-selected victim pages in the buffer pool during a prefetch request, and in-
stead, needed to search through the buffer pool for suitable victim pages. If this
element is high relative to the number of logical reads, the database page cleaners
are not keeping up with the changes occurring in the database, and more page
cleaners are likely required.
If Proactive Page Cleaning is off, and you are using Standard Page Cleaning, the
No Victim Buffers monitor element can be safely ignored. In the default configu-
ration, Standard Page Cleaning is triggered by CHNGPGS_THRESH and SOFTMAX,
and the prefetchers will usually search the buffer pool to find suitable victim
pages. Therefore, you can expect this monitor element to be large.
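One way to apply the rule of thumb above is to compare the two counters directly. A sketch; the 2 percent threshold is an assumption chosen for illustration, not an SAP or IBM recommendation:

```python
def needs_more_page_cleaners(no_victim_buffers: int, logical_reads: int,
                             threshold: float = 0.02) -> bool:
    """Heuristic for Proactive Page Cleaning: a high ratio of failed
    victim-page lookups (No Victim Buffers) to logical reads suggests
    the page cleaners are falling behind and more should be added."""
    if logical_reads == 0:
        return False
    return no_victim_buffers / logical_reads > threshold

print(needs_more_page_cleaners(50_000, 1_000_000))  # 5% ratio: cleaners lagging
print(needs_more_page_cleaners(5_000, 1_000_000))   # 0.5% ratio: acceptable
```

Remember that this check is only meaningful with Proactive Page Cleaning enabled; under Standard Page Cleaning a large No Victim Buffers value is expected.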
Synchronous Writes
If the database must read data from disk into a buffer pool, and there are no free
pages remaining in the buffer pool, DB2 must make room, by replacing existing
data pages (victims) with the data pages being read. If these victim buffer pool
pages contain changed data, these pages must be written to disk before they are
swapped out of memory. In this case, the pages are written to disk synchronously
by the DB2 agent processing the SQL statement.
Synchronous writes always result in I/O wait at the client, because the write
operation must occur synchronously, before the buffer pool page can be
victimized (replaced with a new page from disk). A large percentage of
synchronous write operations indicates that the DB2 page cleaners are not
operating effectively. This might be due to slow disks or unbalanced I/O in the
storage system, or the system might require more page cleaners to handle the
system workload.
For most transactional systems, temporary table space I/O should be fairly low,
since most calculations should be performed in memory. SAP NetWeaver BW
systems might show larger temporary table space I/O, but large values here might
still indicate inefficient queries or a need to create higher-level aggregates to
improve query performance.
Figure 2.3: The Cache tab displays the Catalog Cache and Package Cache statistics.
A high catalog cache hit ratio is even more important in multi-partition SAP
NetWeaver BW systems. In a partitioned SAP NetWeaver BW system, the
system catalog tables all reside on the Administration Partition (partition 0).
Therefore, if other partitions need to read system catalog information from disk,
they must request this information from partition 0, which inserts into the catalog
cache on partition 0, and then sends the information to the catalog cache on the
other partition. Caching most of the system catalog information at each partition
avoids both disk I/O and network I/O, and reduces the workload on the
Administration Partition. All of these contribute to better performance.
The default catalog cache size in new SAP installations is 2,560 4KB pages.
Well-configured systems should have a hit ratio of 98 percent and experience no
overflows. If overflows occur, DB2 must allocate more memory from database
shared memory into the catalog cache. Then, when some table descriptor and au-
thorization information is no longer needed for active transactions, it is removed
from memory, and the cache is reduced to its configured size. This involves extra
overhead in the system, and should be avoided by increasing the catalog cache size.
The total number of overflows and the high-water mark can be used together
with the cache quality to determine whether or not the default size is adequate for
your workload. The catalog cache size is set by the CATALOGCACHE_SZ database
configuration parameter. To view or change this parameter in the DBA Cockpit,
click Configuration → Database → Database Memory.
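The catalog cache quality itself is derived from the lookup and insert counters: every insert represents a lookup that could not be served from the cache. A sketch, based on the DB2 monitor elements for catalog cache lookups and inserts (the same calculation applies to the package cache counterparts):

```python
def catalog_cache_hit_ratio(lookups: int, inserts: int) -> float:
    """Catalog cache quality: share of lookups served without a fresh insert.

    lookups - cat_cache_lookups: all catalog cache lookup attempts
    inserts - cat_cache_inserts: lookups that had to load data into the cache
    """
    if lookups == 0:
        return 100.0
    return 100.0 * (1 - inserts / lookups)

ratio = catalog_cache_hit_ratio(500_000, 4_000)
print(f"{ratio:.1f}%")  # 99.2% -- above the 98 percent target, no resize needed
```

If the ratio falls below 98 percent, or overflows occur, increasing CATALOGCACHE_SZ (configured in 4 KB pages) is the usual remedy.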
Static SQL statements are embedded in application programs. These statements must
be precompiled and bound into a package, which gets stored in the DB2 system cata-
log tables. SAP does not use static SQL, so this will not be discussed further.
By default, the package cache size in new SAP installations is dynamically con-
figured and adjusted by DB2, as part of its Self Tuning Memory Manager
(STMM) feature. This allows DB2 to adjust the size of this cache to optimize
overall performance, based on your changing workload. Package cache hit ratio
should remain above 98 percent, and overflows should not occur. The package
cache size is set by the PCKCACHESZ database configuration parameter. To view
or change the package cache size in the DBA Cockpit, click Configuration →
Database → Self-Tuning Memory Manager.
Larger catalog and package cache sizes might be required if the workload in-
volves a large number of SQL statements accessing many different database ob-
jects. However, in most cases, it is recommended that you keep the package
cache size set to AUTOMATIC, and let DB2 STMM configure the size based on
your current available memory and optimal overall system performance.
Asynchronous I/O
The third tab in the Database Performance Monitor is Asynchronous I/O, shown
in Figure 2.4. This displays information on the I/O reads and writes that use
background read and write operations to perform disk I/O to and from the DB2
buffer pools, using the DB2 prefetchers and page cleaners. Asynchronous I/O op-
erations anticipate application I/O requirements, and operate in the background to
minimize I/O wait. Therefore, well-performing systems should perform the ma-
jority of disk I/O asynchronously.
Asynchronous I/O is performed by the DB2 prefetchers and page cleaners. The
number of prefetchers and page cleaners should be configured to drive the physical
disks in the underlying storage system to full capacity. This is set by two
database configuration parameters: NUM_IOSERVERS for prefetchers and
NUM_IOCLEANERS for page cleaners. Both are found in the cockpit under
Configuration → Database I/O.
New SAP installations default both of these parameters to AUTOMATIC. This allows
DB2 to calculate the optimal number of prefetchers and page cleaners, when the
database is activated, based on internal sizing formulas.
The formula for page cleaners ensures that they are evenly distributed across all
partitions in a partitioned SAP NetWeaver BW system, and that there are never
more page cleaners than CPUs. This prevents asynchronous page cleaning from
affecting normal transaction processing performance. Ideally, both asynchronous
read and write times should be less than 5 ms.
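The page-cleaner property just described (cleaners spread across the host's partitions, never more cleaners than CPUs) can be approximated as follows. Treat this as an illustration of the constraint, not the literal version-specific DB2 calculation:

```python
from math import ceil

def approx_num_iocleaners(num_cpus: int, partitions_on_host: int) -> int:
    """Approximation of AUTOMATIC page-cleaner sizing: divide the host's
    CPUs among its database partitions, with at least one cleaner each,
    and never more cleaners than CPUs."""
    cleaners = max(1, ceil(num_cpus / partitions_on_host))
    return min(cleaners, num_cpus)

print(approx_num_iocleaners(16, 4))  # 16 CPUs shared by 4 partitions: 4 each
print(approx_num_iocleaners(2, 4))   # CPU-starved host: floor of one cleaner
```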
Figure 2.4: The Asynchronous I/O tab shows statistics for background disk I/O performed
by the DB2 prefetchers and page cleaners.
Direct I/O
Direct I/O is involved whenever a DB2 agent reads from disk or writes to disk,
without using the DB2 buffer pools. Direct I/O is performed in units, the smallest
being a 512-byte disk sector. Direct reads always occur when the database reads
LONG or LOB data, and when a database backup is performed. Direct writes al-
ways occur when LONG or LOB data is written to disk, and when database re-
store and load operations are performed.
The Direct I/O tab of the DBA Cockpit screen is shown in Figure 2.5. Direct I/O
should be extremely fast, because it operates on entire disk sectors. Therefore,
read and write times should generally be under 2ms. The average I/O per request
should be proportional to the average size of the LOB columns in the database.
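Because direct I/O is counted in 512-byte sectors, the expected average I/O per request can be estimated from typical LOB sizes. A small sketch:

```python
SECTOR_BYTES = 512  # smallest unit of DB2 direct I/O

def sectors_per_request(avg_lob_bytes: int) -> int:
    """Whole 512-byte sectors needed to read or write one LOB value."""
    return -(-avg_lob_bytes // SECTOR_BYTES)  # ceiling division

print(sectors_per_request(10_240))  # 20 sectors for a 10 KB LOB
print(sectors_per_request(100))     # 1 sector minimum for any small value
```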
Figure 2.5: The Direct I/O tab displays statistics for database disk I/O that is
not buffered in memory by the DB2 buffer pools.
The information available in the DBA Cockpit, shown in Figure 2.6, is valuable
for determining the performance impact of real-time statistics (RTS). It might
suggest the need for more structured statistics collection for some tables in the system.
Figure 2.6: The Real-Time Statistics tab shows details related to RTS statistics
collection.
The statistics cache is a portion of the catalog cache used to store real-time statis-
tics information. If RTS is being frequently triggered, a larger catalog cache
might be required.
The final piece of data for RTS is based on statistics fabrication (or statistics esti-
mation). If a sampled RUNSTATS table or index scan consumes too much time,
then new metadata stored in the data and index manager system catalog tables is
used to estimate the current table statistics. Those statistics are immediately made
available in memory for all other queries to use until a RUNSTATS is performed
on the table. In the cockpit, statistics estimation is displayed by the number of
statistics collections during query compilation, and the time spent during query
compilation.
The default isolation level for most SAP applications is Uncommitted Read,
which allows the highest level of concurrency within the database. SAP transac-
tion integrity is managed within the SAP application. One SAP transaction may
involve multiple database transactions, each of which is committed into the SAP
update tables. While one SAP transaction updates data in the update tables, other
SAP transactions are reading committed data from the tables containing the per-
manent, committed data. Therefore, concurrent SAP transactions always read
committed data. When an SAP transaction is finally committed, those update ta-
ble records are applied to the target database tables by the SAP update work pro-
cesses, and other transactions then see the committed changes from the entire
SAP transaction.
One potential exception to the UR default isolation level occurs when accessing
cluster or pool tables. Since reading a single logical row may involve reading
multiple physical rows, more restrictive locking might be required. SAP first tries
to read the logical row with UR. If this does not produce a consistent read of all
physical rows, SAP will read again, first trying Cursor Stability (CS), and if
necessary, finally reading with Read Stability (RS), which guarantees read
consistency for all physical rows in
the logical record. However, inconsistent reads on logical rows using UR rarely
occur, and most cluster/pool table reads succeed the first time with UR.
Database locks are stored in a portion of database memory called the lock list.
When row locks are acquired, they are added to this lock list. If the size of the
row locks exceeds the size of the lock list, DB2 will convert multiple row locks
on a single table into a single table lock. This lock escalation frees up space in
the lock list for other row locks. However, it can also reduce concurrent access to
the table involved in the escalation. At best, this might reduce performance for
applications accessing that table; at worst, it might result in increased lock waits
or deadlock scenarios in other concurrently running applications.
Normal lock escalations allow read access to the locked tables, but force writes to
wait for the application holding the lock to commit. Exclusive lock escalations
also disallow reads, thereby reducing concurrency even further. Therefore,
administrators should try to completely avoid lock escalations, by ensuring that
the lock list is large enough to contain the locks for the concurrent activity in the
SAP system.
The size of the lock list is set by the LOCKLIST database configuration parameter,
which can be found in the cockpit under Configuration à Database à
Self-Tuning Memory Manager. Lock list utilization can be calculated using the
lock_list_in_use monitor element and the lock list size. If utilization is high, con-
sider increasing the lock list size. These details can be easily found within the
Locks and Deadlocks section of the SAP DBA Cockpit for DB2, which is shown
in Figure 2.7.
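The utilization calculation just mentioned can be sketched as follows; LOCKLIST is configured in 4 KB pages, while the lock_list_in_use monitor element reports bytes:

```python
PAGE_BYTES = 4096  # LOCKLIST is configured in 4 KB pages

def lock_list_utilization(lock_list_in_use_bytes: int,
                          locklist_pages: int) -> float:
    """Percentage of the configured lock list currently holding locks."""
    total_bytes = locklist_pages * PAGE_BYTES
    return 100.0 * lock_list_in_use_bytes / total_bytes

# Example: 6 MB of locks held in an 8 MB (2,048-page) lock list
util = lock_list_utilization(6_291_456, 2_048)
print(f"{util:.0f}%")  # 75% -- high enough to consider a larger LOCKLIST
```

Sustained high utilization means escalations are likely; increasing LOCKLIST (or letting STMM size it) removes the pressure.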
lock escalations and optimize overall system performance. Normally, lock esca-
lation is extremely rare for databases with a properly configured lock list or for
databases using STMM.
Figure 2.7: The Locks and Deadlocks tab displays information on lock management and
deadlock occurrences.
If there is only one active transaction, DB2 will adjust this to a large percentage.
However, if many applications are holding locks, this percentage might need to
be lower to avoid a scenario where one application consumes most of the lock
list, while the others quickly run out of space in the lock list and are forced to es-
calate. Properly configuring the LOCKLIST and MAXLOCKS parameters or using
STMM will prevent lock escalations.
If lock escalations are occurring, then abnormally large values can be expected for
the lock wait monitor elements, too. If lock escalations occur, other applications ac-
cessing that same table must wait for the escalating application to commit. In addi-
tion, if more applications are waiting for table locks to be released, there is a
greater possibility that one of these waiting applications will already be holding a
lock that will be requested by the escalating application. This would result in a
deadlock, with each application waiting for locks already held by the other.
Large lock wait values without lock escalations or deadlocks might indicate that
custom applications are not efficiently committing their units of work. Custom
applications should try to hold locks for as little time as possible, by performing
efficient SQL statements and accessing only required records, and by performing
related updates together, followed immediately by committing the unit of work.
Infrequent commits can hold locks excessively long, and increase lock wait
scenarios.
A lock timeout occurs when an application waits to acquire a lock for longer than
the LOCKTIMEOUT database configuration parameter, which is set to 3,600 sec-
onds (1 hour). This default value is much larger than any application should be
required to wait for locks. If a lock timeout occurs, an application has probably
hung in the middle of a unit of work, and is holding locks abnormally. In this
scenario, an administrator will likely need to identify the hung database agent,
and manually terminate that application, for example with the DB2 FORCE
APPLICATION command:

db2 "force application (<application-handle>)"
This will cause a rollback and release the locks currently held by that application.
Logging
The transactional log files of the database maintain database integrity by contain-
ing a physical copy, on disk, of all committed database transactions. When data is
updated, the changes are made directly in the DB2 buffer pool, and logged in the
DB2 log buffer. When a transaction commits, each entry in the log buffer must
be successfully written from the log buffer to the log files before the commit
returns successfully to the client. Since writes to the log files occur synchro-
nously with each commit, fast SAP dialog response times depend on fast writes
to the DB2 transactional log files.
DB2 contains two kinds of log files: primary and secondary. The number and
size of these log files are set with the LOGPRIMARY, LOGSECOND, and LOGFILSIZ
database configuration parameters. Primary log files are pre-allocated when the
database is created. Secondary log files are allocated on demand, whenever ac-
tive transactions exceed the total size of the primary log files. Therefore, the total
size allocated to primary log files should be large enough to hold all the log re-
cords expected from concurrent transactions during normal database activity.
Secondary log files should only be required for infrequent spikes in activity,
which may require additional log space.
Logging can be configured for either circular or archive logging. Circular log-
ging reuses primary log files once they no longer contain log records required for
crash recovery, which means that point-in-time recovery is not possible with cir-
cular logging. Therefore, circular logging is not suitable for production systems.
Production systems require archive logging, which ensures that all log files pro-
duced during the entire lifetime of the database are saved, and that point-in-time
recovery is always possible. When a primary log file becomes full, it is archived
(copied) by DB2 to the locations set in the LOGARCHMETH1 and LOGARCHMETH2
database configuration parameters. Once the log file is no longer needed for
crash recovery, it is renamed to the next log file sequence number, and its header
is re-initialized for re-use. During normal workloads in properly configured sys-
tems, the next empty primary log file usually already exists when the current log
file becomes full, and a transaction spanning multiple log files rarely incurs the
overhead of allocating the next log file.
The Logging tab, shown in Figure 2.8, displays the number and size of log files
available and allocated in the system. If the database is using secondary logs, you
can see the number currently allocated, and the maximum secondary log file
space used by the database.
Figure 2.8: The Logging tab displays information on log file consumption and logging I/O.
This information can help determine if the primary log space is adequate for your
current workload. In general, we recommend that the log file system should be
1.5 times the size of all primary and secondary log files configured for your sys-
tem. This ensures enough space for all configured log files, plus extra space for
inactive (online archive) logs waiting to be archived, or new logs being formatted
for future use.
If secondary log space is being used consistently, logging overhead may be re-
duced by allocating more primary log space. This is done by either increasing the
number of primary log files or increasing the log file size. First, always ensure
that the log file system is large enough to contain all of the configured logs. The
cockpit also displays the database application ID with the oldest uncommitted
transaction. This can help identify long-running transactions that might need
attention.
The Log Buffer Consumption section is valuable for determining the effective-
ness of page cleaning. The LSN Gap value specifies the percentage of the
SOFTMAX checkpoint that is currently consumed in log files by dirty pages. This
includes pages that have been changed in a buffer pool by both committed and
uncommitted transactions, but which have not yet been written to disk in the ta-
ble spaces. If this is above 100 percent, the page cleaners are unable to keep up
with the transaction workload on the system, and more page cleaners might be re-
quired. The Restart Range value is similar, but corresponds to the percentage of
SOFTMAX occupied in the log files by committed transactions. Statements in this
Restart Range will need to be rolled forward during crash recovery. Again, if this
is greater than 100 percent, more page cleaners might be required.
The I/O characteristics of the log file system are also provided. The Log Pages
Read displays the physical log file page reads required during rollback operations
in the database, and the Log Pages Written displays the pages of transactional
data written into the log files. The transaction commit time depends on the log
file system’s write performance. Therefore, having the fastest log file system
possible minimizes dialog response time. A well-performing system should have
log file system write times below 2 ms.
Ideally, very few log buffer overflows should occur. This indicates the number of
times any database agent has waited for log buffer flushes in order to write into
the log buffer. These can occur when large transactions produce a series of log
records larger than the buffer, or when high transaction volumes consume the en-
tire buffer with many smaller log records simultaneously. When this occurs, all
in-flight transactions must wait for the log buffer to be written to disk before they
can continue writing log records into the buffer. This introduces I/O wait into all
in-flight transactions and hurts performance. For optimal performance, the log
buffer should be large enough to avoid overflows during normal workloads.
Calls
The Calls tab, shown in Figure 2.9, contains a summary of the different types of
SQL statements issued, and their performance impact on the SAP system. This
displays the number of rows read, deleted, inserted, selected, and updated. These
can be compared to the number of DML and DDL statements executed and their
execution time, to understand the average number of rows read per SQL state-
ment, and the time spent processing those statements within the database.
Figure 2.9: The Calls tab displays how different types of SQL statements contribute to the
load on the database.
The Hash Joins section shows some interesting statistics on the hash join opera-
tions performed by the database. DB2 performs hash joins when large amounts of
data are joined by equality predicates on columns of the same data type (for ex-
ample, tab1.colA = tab2.colB). First, the inner table is scanned, and the relevant
rows are copied into memory and partitioned by a hash function. The hash func-
tion is then applied to the rows from the outer table, and the join predicates are
compared only for inner and outer table rows hashing to the same partition.
If the hash join data exceeds sort heap memory, DB2 will consume temporary ta-
ble space on disk to compute the join. Obviously, performance will be better if
this can be avoided, and instead, the join can be done entirely within a sort heap.
If the total hash join data exceeds the sort heap by less than 10 percent, this
counts as a small overflow. If the number of small overflows is greater than 10
percent of the total overflows, avoiding these small overflows with a larger sort
heap may improve performance. If a single partition of data from the hashing
function (the set of rows hashing to the same value) is larger than the sort heap, a
hash loop results. When this occurs, the intermediate join of that one section of
data overflows to temporary table space, causing extra disk I/O for the join of in-
dividual hash partitions.
For performance reasons, always try to minimize the number of hash loops and
hash join overflows. With DB2 9.5, the sort heap memory parameters default to
automatic settings using the DB2 Self-Tuning Memory Manager. This allows
DB2 to automatically adjust the available sort heap memory to avoid unnecessary
hash join overflows or hash loops.
Sorts
The Sorts tab, shown in Figure 2.10, displays memory usage and overflows from
database sorts. The Sort Overflows value is probably the most important one on
this tab. Transactional systems should have less than one percent of total sorts over-
flowing from sort memory to temporary table space. BW systems may have more,
but overall, sort memory should be configured to avoid most sort overflows.
Figure 2.10: The Sorts tab shows the memory consumed by database sort operations.
The private and shared sort heap parameters can be compared with the current al-
located memory and high-water mark, to determine whether the sort memory
heaps are properly configured. DB2 9.5 defaults to automatic shared sort memory
and the Self-Tuning Memory Manager. This allows DB2 to manage sort memory
allocation based on overall system requirements, which avoids unnecessary sort
memory allocation and prevents most sort overflows.
XML Storage
The XML Storage tab provides I/O characteristics for XML Storage Objects
(XDA). This is only valid for database tables using the XML data type to lever-
age the DB2 PureXML features for storing and accessing XML documents na-
tively in XML format.
As of the writing of this book, SAP does not use DB2 PureXML features.
Therefore, this tab is really only relevant for non-SAP databases cataloged
into the cockpit, or for user tables created manually by SAP customers.
Performance: Schemas
There should be very few schemas existing within an SAP database. The vast ma-
jority of database access is done through the SAP connection users, which default
to SAP<SAPSID> for ABAP systems, and SAP<SAPSID>DB for Java systems. The
only other users who generally connect are the SAP admin user, <SAPSID>ADM,
and the DB2 instance owner, DB2<DBSID>.
The Schemas dialog screen can be used to identify the activity of users
connecting to any database partition from outside the SAP application. I/O
performance characteristics of reads and writes can be monitored for each
schema.
The buffer pool snapshot provides the logical and physical read statistics for the
data, index, and temporary table spaces on all database partitions. If different
buffer pools have been created for different database objects, this provides an
easy interface to compare the individual statistics for each buffer pool on each
database partition.
The initial screen contains a list of all visible buffer pools created in the system,
along with an overview of their hit ratios and read characteristics.
Double-clicking on any buffer pool partition returns a more detailed buffer pool
snapshot for that particular buffer pool on that particular partition, as shown in
Figure 2.11. This displays the data and index read statistics, buffer quality, and
utilization state of the buffer pool. It also includes tabs showing the detailed
asynchronous and direct I/O operations, and performance characteristics for this
buffer pool. All of these details are important for proper performance tuning of
each individual buffer pool.
Figure 2.11: The Buffer Pool Snapshot displays detailed I/O information for an individual
buffer pool.
As a safety net, DB2 is also pre-configured with hidden buffer pools for each
possible page size (4K, 8K, 16K, and 32K). These hidden buffer pools ensure
that an appropriate buffer pool is always available. These hidden buffer pools
may be used if the system does not contain enough memory to allocate the de-
fined buffer pools, errors allocating the buffer pools occur during the database
activation, or if anything in the database performs I/O using a page size without a
corresponding user-defined buffer pool. Since these hidden buffer pools are only
16 pages in size, performance will likely suffer if they are used. An entry is
logged in the notification log whenever a hidden buffer pool is used.
First, the most frequently accessed table spaces should have the highest buffer
pool hit ratios. Table spaces with a high number of logical reads should have a
buffer pool quality of at least 95 to 98 percent. The frequently accessed index ta-
ble spaces (with names ending in I) are especially critical for high hit ratios.
Next, the physical read and write times for all table spaces should be fairly fast.
Ideally, both read and write times should be under 5ms. If all table spaces have
slower I/O, you might simply have slow disks. However, this might also be a
sign of disk contention, especially if more frequently accessed table spaces are
slower than others. To improve performance, spread the data across a greater
number of physical disks, or move one or more frequently accessed tables to a
new table space on a new series of disks. The Tablespace Snapshot can be used,
together with the Operating System Monitor → Detailed analysis menu →
Disk Snapshot (from transaction ST06), to lay out table spaces and balance data-
base I/O evenly across all SAPDATA file systems.
Figure 2.12: The Tablespace Snapshot displays the I/O characteristics of all table spaces.
Similar to the previous buffer pool snapshot, double-clicking any row displays a
more detailed table space snapshot for the chosen table space and partition. This
snapshot shows the detailed buffer pool statistics, and the asynchronous and di-
rect I/O operations and performance characteristics.
page. When this row is accessed, DB2 must perform two I/O reads instead of
one: the first to read the pointer from the original location, and the second to read
the data from the pointer.
Figure 2.13: The Table Snapshot dialog displays data access characteristics of individual
tables.
Also, if table space analysis has indicated unbalanced I/O, the table snapshot can
be used to identify the most frequently accessed tables. If several heavily ac-
cessed tables reside in the same table space, I/O can be balanced by separating
these tables into different table spaces on different sets of physical disks.
Double-clicking any application in the initial list displays a detailed snapshot for
that single application. Shown in Figure 2.14, this snapshot displays all of the key
application statistics, organized conveniently into unique screen tabs.
Figure 2.14: The Application Snapshot contains many tabs for accessing detailed informa-
tion on the resource consumption of the database applications.
The first Application tab describes the application on the host, and displays the
client user and SAP application server executing this application. The Agents tab
describes the number of agents, processing time, and memory usage for this ap-
plication. Note that with DB2 9.5, the parameters for the number of agents in the
database default to automatic, and are dynamically maintained by DB2 to opti-
mize memory utilization and performance.
The Buffer Pool tab displays the application’s detailed data, index and temporary
table space read statistics, and buffer pool quality. The read statistics can indicate
the I/O efficiency of the queries in this application. The performance details of
the non-buffered I/O (e.g. LOB access, backup and restore) are shown in the
Direct I/O tab.
The Locks and Deadlocks, Calls, Sorts, and Cache tabs contain the same infor-
mation as the database performance tabs, except that the details are specific to the
currently selected application. If an application is holding too many locks, caus-
ing lock escalations, or involved in deadlocks, consider looking more closely at
the application coding and SQL. A properly coded application will hold as few
locks as possible, commit frequently so that locks are released quickly, and
avoid performing unnecessary calculations inside SQL units of work. The SQL
statements should also try to reduce the amount of data accessed during a query,
and return only the rows relevant to the application.
The Unit of Work tab displays the length of time and log space consumption of
the current transaction. The Statement tab shows the statistics of the current state-
ment within the current unit of work. The Statement Text tab displays the current
SQL statement being executed. This screen also contains buttons to load the
optimizer execution plan for the statement, or to view the ABAP source code for
the program executing this SQL statement. These tools can be used to analyze the
program logic and SQL execution plans, to ensure efficient SQL and indexed ac-
cess to the data pages being fetched.
system. The Performance: SQL Cache Snapshot screen, shown in Figure 2.15, al-
lows administrators to easily identify the queries that are consuming the largest
amount of resources.
Figure 2.15: The SQL Cache Snapshot shows the execution time and resource consump-
tion of queries that have run in the system.
In the screen, the columns listing numbers of executions, total execution time,
and average execution time allow the DBA to identify the queries that take the
most execution time. The buffer pool hit ratio is given for each query, to identify
how much disk I/O the query is causing.
The next few columns provide valuable information about SQL query quality and
I/O quality. The Rows Read/Rows Processed column gives a ratio of how many
rows must be read to identify the rows required for the final result set. The BP
Gets/Rows Processed column indicates the number of pages that must be ac-
cessed from the buffer pool to read the final result set. The BP Gets/Execution
column provides the number of pages read from buffer pool per query execution.
If the number of rows read or the ratio of rows read to rows processed is high, the
index advisor might help to identify a better index, to reduce the number of rows
evaluated by the query. If the BP gets are high, clustering the table differently
might improve performance, or a table reorganization might help to reduce the
number of pages read from disk.
The last few columns of the Performance: SQL Cache Snapshot screen provide in-
formation on sorting. A large number of rows written for a query indicates sort
overflows to disk in the temporary table space. The cockpit also displays
the total number of sorts, number of sort overflows, and total time spent sorting
during the query. If sort overflows are occurring, and the total sort time is a signifi-
cant portion of the average execution time, further analysis of the query, indexes,
and potentially sort parameters might be required to try to reduce sort overflows.
Click the Explain button, and the optimizer execution plan is displayed, showing
the query cost and join methods. From there, click the Details button to open a
new window with all of the detailed optimizer data, including all indexes and da-
tabase objects accessed, join methods, and cardinality estimates for each join.
Click the Index Advisor button, and the DB2 Advisor is run to suggest new,
optimal indexes to optimize data access for this query. (Both the Optimizer Ex-
plain and the Index Advisor interfaces are explained in detail in Chapter 8.)
If the first application were to then request a lock already held by the second, the
two applications would enter a deadlock scenario. In this state, both applications
are waiting for locks held by the other, and neither can proceed. Deadlocks can
affect any relational database. They are usually caused by infrequent or missing
commit statements within custom applications.
The active lock waits and deadlock scenarios can be seen through the Perfor-
mance: Lock Waits and Deadlocks screen shown in Figure 2.16. The screen lists
the database agents and lock types involved in all active lock waits and dead-
locks, and includes buttons to view the last SQL statement from each unit of
work involved in these scenarios. This provides real-time analysis of the applica-
tions causing locking issues.
Figure 2.16: The Lock Waits and Deadlocks dialog displays the current lock wait and dead-
lock scenarios that are actively occurring in the system.
Performance: History–Database
Catching performance problems in action is a reactive process. All administrators
should try to be proactive about monitoring performance trends and taking action
to prevent potential problems before they occur. Having easy access to these his-
torical trends makes proactive analysis much easier, and SAP can be configured
to collect this historical information when the system is registered into the DBA
Cockpit.
Figure 2.17: Daily historical performance data can be analyzed in this dialog.
Clicking any single day in the list displays the details gathered for each monitor
element periodically throughout the day. This can be viewed in two different
tabs. The Snapshot tab provides the details of each individual sample throughout
the day. The Interval tab displays only deltas. Therefore, it will contain entries
only for times when one or more monitor element values changed from their
previous values.
Performance: History–Tables
The performance history of individual tables is also available for proactive plan-
ning. For each table on each database partition, the Performance: History–Tables
screen displays the rows read and written, overflow records accessed, and page
reorganizations. This information can be displayed for each day, week, or month.
Both short- and long-term trends for table access can be easily analyzed, provid-
ing the DBA with the information needed to proactively plan for system changes
to accommodate changing workloads.
Performance Warehouse
The new SAP Database Performance Warehouse provides an integrated historical
performance analysis model for both the database and the SAP applications. Da-
tabase performance data is extracted and loaded from all SAP systems into a cen-
tral SAP NetWeaver BW warehouse. Historical performance data can then be
mined, trended, and analyzed, using powerful SAP NetWeaver BW interfaces
with charts, dashboards, and drill-down capabilities.
The ABAP cockpit contains a Reporting link for analyzing performance data and
a Configuration link for setting up the Performance Warehouse reporting parame-
ters. The Reporting screen links directly into the Performance Reporting
WebDynpro. An example of the data is given in Figure 2.18. This illustrates his-
toric buffer pool quality over a two-week period. This data clearly displays recur-
ring trends that can identify areas that might benefit from tuning.
Only this brief introduction to the Performance Warehouse will be given in this
book. More documentation on the Performance Warehouse can be found on the
SAP Service Marketplace or SDN.
Figure 2.18: The SAP Performance Warehouse displays detailed reports on historical
performance and resource consumption trends.
Summary
The performance section of the DBA Cockpit provides a comprehensive, single
interface for all DB2/SAP database performance monitoring and tuning. All of
the most important information is easily accessible, and displayed in an intuitive,
meaningful way.
Since all the information is in one location, it is easy to drill down from database
monitors, to table space and table monitors, to application monitors, and even
right down to SQL statement monitors. Administrators can start with a wide fo-
cus and methodically narrow that focus to the exact source of the problem under
investigation. The best part is that the tool is part of SAP, so both SAP basis and
database administrators can leverage this powerful tool in a familiar interface, to
get the best performance from their SAP systems.
Chapter 3
Storage Management
Flying Efficiently with Heavy Cargo
DBAs can spend a lot of time designing and planning the layout of table spaces
for a storage subsystem. Today’s advanced storage subsystems offer many
choices on how physical disk volumes can be grouped into RAID arrays, and
within these arrays, how logical volumes (LUNs) can be defined and made avail-
able as usable storage to the database. Designing the placement of table spaces
can be more like an art than a science. The problem in spending so much time on
an elaborate design is that it is only appropriate for the quantity of data and
workload at a given point in time. As the system matures and evolves, so must
the storage layout.
As companies adapt their SAP systems for future business needs, such as adding
additional modules, the amount of data inevitably grows. Therefore, the data ac-
cess pattern will evolve, rendering the initial data layout design obsolete. To keep
the system running optimally, time-consuming and intrusive administrative tasks
might be required regularly, to re-evaluate and re-optimize the data layout. Often,
a simpler, more generic storage layout, like that provided by DB2’s automatic
storage feature, provides a better solution for high performance and low mainte-
nance throughout the entire lifetime of the SAP application.
DB2 table spaces store their data in physical storage objects known as containers.
A table space can span one or more containers. Data within a table space is
striped evenly across all of its containers.
DB2 uses two table space types: System Managed Space (SMS) and
Database Managed Space (DMS). With SMS, the storage allocation within the
table space containers is managed by the operating system (OS). Containers are
OS directories, and a unique file exists in each container for each database object
residing in that table space. By default, I/O to these table spaces will be buffered
by the file system, and the sizes of the files in the containers will be extended or
reduced, depending on the quantity of data stored in the database objects. Addi-
tion and deletion of containers in SMS is only possible during a redirected
restore.
With DMS, the storage allocation within the table space containers is managed
by DB2. The containers are either pre-allocated files or raw devices. I/O to these
pre-allocated containers is handled mainly by the database, with little or no OS
overhead. The OS is only involved when pre-allocated file containers are ex-
tended or reduced. Also, addition and deletion of containers is possible online via
DDL statements.
To simplify the administration of the table spaces, all table space containers
should be spread as widely as possible on all disk spindles. Although an
elaborately designed layout might briefly provide a slight performance benefit (of
perhaps five percent), this simpler approach will provide a more consistent I/O
pattern over time. It will also be less vulnerable to additions of new SAP mod-
ules, or changes in function and workload. DB2 has also introduced a feature
called Automatic Storage, in which the database is given a pool of storage (gener-
ally two or more file systems), from which table space containers will be allo-
cated. Automatic Storage is fundamentally a combination of DMS table spaces
(used for the System Catalog and User table spaces) and SMS table spaces (used
for Temporary table spaces).
In the DBA Cockpit, SAP has not only made the monitoring of database perfor-
mance metrics available, as described in Chapter 1, but it has also made the
maintenance of table spaces, tables, and indexes available in the SPACES section.
Automatic Storage
Automatic Storage is the default storage layout when installing DB2 with SAP
NetWeaver 7.0 and higher. During installation, SAPinst will use sapdata1 through
sapdata4 as storage paths. Depending on the storage subsystem and the number
of LUNs/file systems available, additional storage paths can be added at that
time. Administrators can view the DB2 storage paths from the DBA Cockpit, as
shown in Figure 3.1.
Once the database has been created, additional storage paths can be added, if the
original file systems containing the table spaces are getting full. Adding new stor-
age paths at this time will create a new stripe set of storage for all table spaces.
A new stripe set will not cause a rebalancing of data from the older set of con-
tainers into the new storage. The containers in the previous stripe set will be
filled before the new stripe set begins to be used. Therefore, to provide equiva-
lent performance, ensure that the I/O capacity of each stripe set is the same. This
requires a similar number of disk spindles in each stripe set. The simplest way to
achieve this is to always keep everything the same. Each time storage is extended
by adding new automatic storage paths, add the same number of sapdata file sys-
tems, always using identical LUNs from the storage system.
Figure 3.1: Automatic Storage storage paths can be managed from within SAP.
To add new storage paths, click the Add button and enter the new file systems in the
dialog shown in Figure 3.2. The new storage locations must exist and be accessi-
ble by the database. The bottom of the panel will display the DDL for the ALTER
DATABASE statement, as confirmation of the changes made.
Figure 3.2: Click the Add button to add a new storage path.
Table Spaces
Table spaces in SAP can either be of Automatic Storage or DMS/SMS type. By
default, SAP NetWeaver 7.0 or higher will create the system catalog table space
and all data table spaces using Automatic Storage, and the temporary table spaces
using SMS. DMS/SMS table spaces can still be created for user data, even if the
database uses Automatic Storage.
As shown in Figure 3.3, the Tablespace screen displays the table spaces accord-
ing to their type, in either the Automatic Storage tab or DMS/SMS tab.
Detailed data about both the logical and physical storage consumption for each
table space is displayed in the following columns:
• Regular table spaces are the default for SMS, but they can also be used
for DMS. They have smaller limits for maximum size and slots (rows
per page) than Large table spaces, and cannot contain LONG/LOB
data.
• Large table spaces are the default for DMS table spaces. They are only
allowed for DMS. They can contain both user data and LONG/LOB
data.
• Page Size—The page size for table spaces can be allocated in 4KB, 8KB,
16KB, and 32KB sizes.
• KB Free—This represents the amount of space in the table space that was
allocated, but does not contain any data pages.
Adding a new table space in Automatic Storage requires the DBA to navigate
and modify three tabs. However, the DBA must first provide a new table space
name, beginning with Z or Y, for user customized objects.
Figure 3.5: Specify the technical settings when creating new table spaces.
The settings in the Size of I/O Units area of Figure 3.5 will influence how DB2
will store the data on disk and access it. The SAP default is 16KB pages, two
pages per extent and prefetch size of automatic. The automatic value in the
Prefetch Size is a computed value based on the number of containers, the number
of disk spindles, and the extent size. The formula used for this calculation is as
follows:
Prefetch size =
(number of containers) *
(number of physical disks per container) *
(extent size)
Disk Performance values are predefined with default values. A different buffer
pool can also be assigned to this new table space, although you should maintain
only one buffer pool if all table spaces are of the same page size. DB2 requires
at least one buffer pool of the corresponding page size for each page size used
by table spaces.
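For example, before creating a 16KB table space in a database that so far only has 4KB buffer pools, a matching buffer pool must be created first. A sketch in DB2 CLP syntax, with made-up object names (this requires a connection to the database):

```shell
# Hypothetical names (BP16K, ZDATA16); requires a DB2 connection.
db2 "CREATE BUFFERPOOL BP16K SIZE AUTOMATIC PAGESIZE 16 K"
db2 "CREATE LARGE TABLESPACE ZDATA16 PAGESIZE 16 K MANAGED BY AUTOMATIC STORAGE BUFFERPOOL BP16K"
```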
Figure 3.6: Define the storage parameters for new table spaces.
Figure 3.7: DB2 defines the containers itself for Automatic Storage table spaces.
Figure 3.8: The table spaces for DMS/SMS are listed here.
When adding a new table space under DMS/SMS, the only difference in the Tech-
nical Settings is that AutoStorage is not the default, as you can see in Figure 3.9.
In the Containers tab for DMS/SMS, shown in Figure 3.10, the container infor-
mation is now required. For DMS containers, you must specify a full path and
file name for each container. For SMS, specify a directory for each container.
Figure 3.10: Container definitions are required for DMS and SMS table spaces.
Containers
To view all the related containers for the table spaces, select the Containers op-
tion. The screen shown in Figure 3.11 will be displayed.
Figure 3.11: The Containers screen displays the containers for all table spaces.
The Containers screen displays storage parameters and statistics in the following
columns:
• Stripe Set—The stripe set to which containers belong determines the set of
containers across which DB2 will evenly distribute the data. In Automatic
Storage, when additional storage pools are added through “ALTER
DATABASE…ADD STORAGE ON…,” or via the DBA Cockpit, a new stripe
set is automatically created. In DMS table spaces, “ALTER TABLE
SPACE…BEGIN NEW STRIPE SET…” will create a new stripe set.
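The two statements mentioned above might look as follows in CLP syntax; the paths, table space name, and container size here are made up:

```shell
# Automatic Storage: adding a storage path creates a new stripe set.
db2 "ALTER DATABASE ADD STORAGE ON '/db2/PRD/sapdata5'"

# DMS: explicitly start a new stripe set with an additional container
# (container size given in pages).
db2 "ALTER TABLESPACE ZDATA16 BEGIN NEW STRIPE SET (FILE '/db2/PRD/ZDATA16.container004' 10240)"
```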
• Container Name—This contains the full path and file name of the
container for DMS, or a full directory path name for SMS.
• Tables that have Type-1 indexes, which were used in DB2 v7 and older,
before Type-2 indexes were introduced to improve concurrency
In the Tables and Indexes screen, tables that meet the criteria in the filter are dis-
played. The tables displayed here also depend on a set of DB6 tables that
are populated when the REORGCHK FOR ALL TABLES job is run in the DBA
Planning Calendar. (Select Jobs → DBA Planning Calendar.) If this job has
never been run, no tables may be displayed.
The following columns are represented in the screen, which is shown in Figure
3.13:
Figure 3.13: The Tables and Indexes dialog displays the storage characteristics of the indi-
vidual database tables.
In older versions of SAP NetWeaver, you can run REORGCHK from this screen by
clicking the REORGCHK button, located on the Application menu bar near the
top of the screen. This opens a window, shown in Figure 3.14, that allows adminis-
trators to run a REORGCHK on stale tables. Newer versions of SAP NetWeaver 7.0 no
longer have a REORGCHK button. Instead, a REORGCHK is executed every time the
table is loaded in Single Table Analysis.
• Last REORG Check—This is the date and time REORGCHK was last run.
• Total Table Size—This value represents the size of regular and long data
in the table, in kilobytes.
• Total Index Size—This value represents the size of all indexes for the
table, in kilobytes.
• Free Space Reserved—This is the percentage of free space in the table’s
allocated pages.
• F1 Overflow Rows—This is the percentage of overflow rows.
• F2 Table Size/Allocated Space—This is the percentage of general
fragmentation in the table.
• F3 Full Pages/Allocated Pages—This is the percentage of full pages
fragmentation.
• REORG Pending—This indicates if REORG is pending.
• Last REORG of Table—This is when REORG was last run.
• Runtime of Last REORG—This is the elapsed time of the last REORG.
The System Catalog area of the Table tab contains these values:
• Last Runstats—This indicates when RUNSTATS was last run against the table.
• Tablespace—This is the table space to which the table belongs.
• Cardinality—This is the number of rows in the table.
• Overflow Records—This is the number of rows that span two or more
pages.
• No. of Pages with Data—This value represents pages that contain table data.
Figure 3.15: Storage details for individual tables are available in Single Table Analysis.
Figure 3.16: Index storage statistics are also available in Single Table Analysis.
Figure 3.17: The Table Structure tab displays the columns and their data types.
The data in the left side depends on the configuration of AutoRunstats. When
AutoRunstats is enabled (which is the default for SAP NetWeaver 7.0 installa-
tions), the screen appears as shown in Figure 3.18. The Statistics Attributes area
of the screen contains the following:
The RUNSTATS profile for the table is displayed in the right side of the screen.
This displays the type and detail of statistics collected on the table. The Table
Analysis Method shows the options that RUNSTATS will use to collect statistics
for the table. The Index Analysis Method shows the options that RUNSTATS will
use to collect statistics for the indexes.
Figure 3.18: The RUNSTATS Control tab shows how DB2 collects statistics for
this table.
Figure 3.19: The Index Structures tab displays the database data type and size of the col-
umns in each index on this table.
The following key pieces of information are shown within the Availability and
Other Technical Information section:
• Large RIDs—Is the table using large RIDs? If the value is PENDING, the
table supports large RIDs (that is, the table is in a large table space), but at
least one of the indexes for the table has not yet been reorganized or
rebuilt. Therefore, that index is still using smaller, 4-byte RIDs. It must be
reorganized to convert it to the larger, 6-byte RIDs.
• Large Slots—Does the table support more than 255 rows per page? If the
value is PENDING, the table supports large slots (that is, the table is in a
large table space), but there has not yet been an offline table reorganization
or a table truncation operation. Therefore, the table is still using a
maximum of 255 rows per page.
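For a table whose Large RIDs value shows PENDING, the remaining indexes can be converted by rebuilding them with an index reorganization; a sketch with a hypothetical table name:

```shell
# Rebuild all indexes on the table so they switch to 6-byte large RIDs.
db2 "REORG INDEXES ALL FOR TABLE SAPR3.ZMYTABLE ALLOW WRITE ACCESS"
```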
Figure 3.20: The table size and status are available in the Table Status tab.
The Compression Status tab is shown in Figure 3.21. If compression has been en-
abled on the table, the Compression Details area of the tab displays the compres-
sion statistics. Otherwise, if a compression check has been executed on the table,
the Compression Check Results can be used to evaluate the potential benefits of
compressing that table. The following information is displayed in this section
when the table is enabled for compression, and compression has already been ap-
plied to the data rows of the table:
The Compression Check Results section shows the estimation of what can be ex-
pected if compression were enabled on the data rows of the table. It contains the
following values:
• Rows too Small—This is the number of rows that were too small to be
used for compression calculations.
Figure 3.21: DB2 Deep Compression statistics are shown in the Compression Status tab.
The Application menu bar, shown in Figure 3.22, provides options to run
RUNSTATS, REORG, and Compression on the table. Select one of the buttons to
start the action.
Figure 3.22: Some database utilities can be run from the Single Table Analysis Application
menu bar.
When RUNSTATS is selected to run in the background, the dialog shown in
Figure 3.23 is displayed, with choices on how statistics will be collected for both
the table and index. Once the statistics collection method has been chosen, the
job can be run once, or repeated on a schedule from the Recurrence tab.
Virtual Tables
Virtual tables were introduced by SAP to save disk storage and help improve the
performance of many utilities, such as Automatic RUNSTATS and Automatic
REORG. The concept of virtual tables is simple. Do not materialize (create) an
empty table in the database. Instead, just logically define it in the SAP DDIC.
When the first row is inserted into a virtual table, the SAP Database Support
Layer (DBSL) determines that the table does not yet exist in the database. It is-
sues the CREATE TABLE statement to materialize the table before inserting that
first row.
SAP systems contain thousands of empty tables. Each empty table may consume
as many as 11 extents (22 pages or 352K, with the default 16K page size and ex-
tent size of 2). These extents are consumed by the following:
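The 352K figure follows directly from the defaults quoted above, as this small calculation shows:

```shell
extents=11            # extents an empty table may consume
pages_per_extent=2    # SAP default extent size
page_kb=16            # SAP default page size, in KB

pages=$((extents * pages_per_extent))
kb=$((pages * page_kb))
echo "${pages} pages, ${kb}KB per empty table"
```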
The first screen in the Virtual Table tab, shown in Figure 3.25, lists all of the vir-
tual tables in the system. To materialize a virtual table manually, select it and
click the Materialize button.
Figure 3.25: The Virtual Tables tab contains the list of virtual tables.
The second tab in the Virtual Table screen, shown in Figure 3.26, lists all the
empty tables that are eligible to be converted to virtual tables. If the Convert
Empty Tables button is selected, all eligible tables will be dropped from the da-
tabase and re-created as virtual tables in a background job. Eligible tables match
the following criteria:
• Empty
• Non-volatile
• No partitioning key defined
• Non-MDC
Figure 3.26: Empty tables that can be virtualized are listed in the Candidates for
Virtualization tab.
Historical Analysis
The History Overview screen provides a general overview of the size and quan-
tity of table spaces, tables, and indexes in the database. The Database and
Tablespaces tab of the screen is shown in Figure 3.27.
Figure 3.27: A database size overview is provided in the Database and Tablespaces
tab of the History Overview.
• Last Analysis—This date and time indicates when the last analysis was run
to collect the history information of the database objects.
• Free Space—This is the amount of free space (in kilobytes) in all the table
spaces.
Figure 3.28: The Tables and Indexes tab displays the size of the tables and indexes.
Figure 3.29: The Space tab displays the change history of database and
table space storage consumption.
Double-clicking a row in the list of tables and indexes shown in Figure 3.30 dis-
plays a detailed history that documents the item’s size changes over time.
Figure 3.30: The Tables and Indexes tab displays the historical storage consumption for ta-
bles and indexes.
In the example in Figure 3.31, the Delta Tables value for 07/03/2008 was nega-
tive, indicating that some tables were deleted from the database. In this case, they
were converted to virtual tables.
Figure 3.31: Historic details of a database’s size changes are available here.
Figure 3.32: Historical size changes for the tables and indexes are displayed here.
Selecting any object and double-clicking its row provides more historical data for
that object, as shown in Figure 3.33.
Summary
Managing database storage, monitoring database object size, and planning stor-
age capacity are all key operations for ensuring the stable, efficient, and
cost-effective operation of any SAP system. The SAP DBA Cockpit provides
easy access to many of the most important DB2 features for storage management.
Regular maintenance tasks, such as reorganization and statistics collection, are all
easily executed on-demand, scheduled as repeating jobs, or enabled for automatic
DB2 maintenance with a couple of mouse clicks. Powerful performance and
space-optimization features, such as compression and virtual tables, are also fully
integrated by SAP into the DB2 cockpit, making the unique benefits of DB2 easy
to implement in an otherwise complex environment.
Chapter 4
Job Scheduling
Flying on Auto-Pilot
When a remote system is registered in the DBA Cockpit, you must select the
Collect Central Planning Calendar Data checkbox to allow SAP to
update the central calendar with the job status for this system. Then, schedule the
Central Calendar Log Collector job to run every morning on the system normally
used for monitoring. This will collect and consolidate all the remote systems’ cal-
endar data on that one SAP system.
Figure 4.1: The Central Calendar shows all database jobs for all registered SAP systems.
The central planning calendar shows a single entry for each system with a job
scheduled for that day, in the format “001 <SID> 001,” where <SID> is the SAP
system ID. The first number indicates the number of jobs scheduled for that day
for the given SID. The second number indicates the number of those jobs that
have finished with the same, highest status severity. The severity will be indi-
cated with a color code for that cell in the calendar.
You can easily see which systems and jobs have had warnings or errors. Dou-
ble-click any date on the calendar to view the details of all the jobs scheduled on
all systems for that date. Double-clicking any specific entry takes you to the
DBA Planning Calendar for that date and SAP system. This allows administra-
tors to view the detailed job logs, and then modify or re-execute those jobs.
When a new SAP system is installed, no recurring jobs are initially scheduled.
The administrator must determine the pattern of jobs required for that system.
These jobs may, or may not, run in parallel. Therefore, the schedule must take
into account dependencies between the jobs and their impact to the system. There
are also a few database-related jobs that run regularly in every SAP system:
Keep these jobs in mind when planning the DBA background jobs in the
calendar.
Figure 4.2: The DBA Planning Calendar provides scheduling and monitoring of background
database jobs.
The DBA Planning Calendar provides a wizard to help with the initial setup of
the recurring administration tasks on each SAP system. To run the wizard, click
the Pattern Setup button. This wizard steps through the setup of a backup sched-
ule, automatic table REORG, and the scheduling of the REORGCHK for all Tables
job. For each job, reasonable default times are provided, but these can be
changed as desired. The remaining jobs can either be scheduled from the list of
common jobs in the Action Pad next to the calendar, or created and scheduled in
the calendar as a command line processor (CLP) script.
• Calculates the size of tables and indexes, which is used for creating
incremental space consumption history
• Can perform compression estimates, starting with SAP BASIS 7.0 SP12
All calculated data is stored in SAP database tables and displayed in the DBA
Cockpit under Space → Tables and Indexes. Therefore, the REORGCHK for all
Tables job should be scheduled to run weekly, to ensure that accurate data is dis-
played in the cockpit. SAP recommends excluding the compression check from
the recurring job, because a compression check of all (potentially over 50,000)
tables can take a long time. If you need a full compression check, schedule it
once during low workload hours, or use the /ISIS/ZCOMP ABAP report, at-
tached to SAP Note 980067. (See SAP Note 1268290 for the most recent recom-
mendations about the REORGCHK for all Tables job.)
Scheduling Backups
The calendar’s Action Pad now contains four options for backup:
You can schedule full backups, or a combination of delta, incremental, and full
backups to satisfy your time and recovery requirements. All backups scheduled
through the DBA Planning Calendar are now done online.
Options can be specified in the calendar’s job scheduler to archive each log file
to two different locations on tape (the Double Store option), overwrite expired
tapes, or eject the tape at the end of the operation. You should always keep two
redundant copies of each archive log file. Therefore, either set LOGARCHMETH1
and LOGARCHMETH2 to different file systems, or use the Double Store option of
the Tape Manager to keep two copies of each log file on tape.
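Setting both archive methods to different file systems can be sketched as follows (the paths and database name are made up):

```shell
# Archive every log file to two separate file systems for redundancy.
db2 "UPDATE DB CFG FOR PRD USING LOGARCHMETH1 DISK:/db2/PRD/log_archive1 LOGARCHMETH2 DISK:/db2/PRD/log_archive2"
```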
Updating Statistics
By default, DB2 updates statistics automatically using its Real Time Statistics
(DB2 9.5) and Automatic RUNSTATS features. Every two hours, a daemon pro-
cess checks tables for change activity and updates the table statistics, if neces-
sary. With DB2 9.5 Real Time Statistics, if the optimizer determines that
statistics are too stale to provide acceptable query performance, it invokes statis-
tics collection or estimation during query optimization. This removes almost
any need for administrators to worry about table statistics.
By default in SAP, both regular and distribution statistics are collected for all ta-
bles, and detailed statistics are collected for all indexes using sampling. This aug-
ments the regular table statistics with additional range histogram data for all
columns of the tables, and collects detailed statistics for the indexes by sampling
the individual index keys in each index. For SAP NetWeaver BW tables, table
statistics are only collected on key columns. If a different statistics collection
method is desired for certain tables, administrators can either update the statistics
profile using the DB2 RUNSTATS command, or schedule the RUNSTATS and
REORGCHK for Single Table job, which allows the RUNSTATS parameters to be
tailored specifically for that RUNSTATS invocation.
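The SAP default collection described above corresponds roughly to the following RUNSTATS invocation (the schema and table name are hypothetical):

```shell
# Regular plus distribution statistics for the table, and sampled
# detailed statistics for all of its indexes.
db2 "RUNSTATS ON TABLE SAPR3.ZMYTABLE WITH DISTRIBUTION AND SAMPLED DETAILED INDEXES ALL"
```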
Table Reorganization
There are several jobs and maintenance settings for reorganizing tables. Although
there is an Automatic REORG job in the planning calendar, the native DB2 auto-
matic REORG is recommended instead for tables smaller than 1GB. This is ex-
plained further in Chapter 6. For larger tables, REORG jobs can be run on demand,
or scheduled periodically through the calendar.
Since larger tables are excluded from automatic REORG, a REORGCHK must be per-
formed on these tables periodically, to determine when a REORG is required. This
can be done by scheduling the REORGCHK for All Tables job. This job is also a pre-
requisite for the correct functioning of the Space Tables and Indexes dialog screen
in the DBA Cockpit (formerly transaction DB02). Therefore, this must be scheduled
to occur regularly in every SAP system. You should run it at least once weekly.
The Action Pad then contains three additional table REORG jobs:
• REORG and RUNSTATS for Set of Tables—This job allows the administrator
to enter a list of tables to be periodically reorganized. The REORG can be
done in either offline (read-only) or online mode.
• REORG and RUNSTATS of Flagged Tables—This job reads the flagged table
details from the REORGCHK for All Tables job, and generates a list of tables
to be reorganized. Since the list of tables is generated when the job is
scheduled, this job does not recur; it must be scheduled each time it is to be
executed. The administrator can select all or part of the list for offline table
reorganization, and specify an optional maximum runtime for this job.
• REORG of Tables in Tablespace(s)—This job allows the administrator to
select one or more table spaces for reorganization. An offline REORG will
then run on all tables in that table space. The administrator can again select
a maximum runtime for this job.
new scripts, load scripts from text files, or select predefined CLP scripts created
from the SQL Script Maintenance dialog screen. These custom scripts can then
be scheduled in the calendar in the same manner as any other job.
All jobs are listed on the DBA Planning Calendar with a color code to specify
status. Any entry on the calendar can be clicked to display its details. Future jobs
can be modified. Completed jobs can only be viewed, and also contain a tab to
display the Job Log. The Job Log contains the status messages and output pro-
duced by the background job. This provides a good first point of problem deter-
mination for jobs that do not complete successfully.
Figure 4.3: The DBA Log displays the status of database background jobs.
The display defaults to the list of jobs executed during the current week. Previous
weeks can be displayed by double-clicking dates from the calendar. The display
can also be filtered by severity, by clicking the status icons in the
Summary. This gives administrators a very easy way to view weekly job status
and identify any jobs that did not complete successfully.
Back-end Configuration
The Change Back End Configuration screen, shown in Figure 4.4, provides an in-
terface to control the execution of the DBA Planning Calendar’s background jobs
on different SAP systems. For each system, a unique background server, user,
and job priority can be configured.
Figure 4.4: The Change Back End Configuration dialog configures the server, priority, and
user for executing background database jobs.
Figure 4.5: SQL Script Maintenance allows administrators to store frequently run SQL
scripts within SAP, so they can be scheduled easily in the DBA Planning Calendar.
More commonly, the saved SQL scripts can be scheduled from the DBA
Planning Calendar, by selecting the CLP Script job from the Action Pad. The
saved scripts can be selected from a drop-down list, and then scheduled to recur
as required. The status of these jobs is then displayed in the DBA Log. To view
detailed results of a job, double-click it from the DBA Planning Calendar or from
the Job Overview (transaction SM37).
Summary
The DBA Cockpit for DB2 contains all the functionality administrators need to
easily schedule and monitor any type of recurring database task in SAP systems.
Predefined jobs and centralized monitoring greatly simplify normal SAP data-
base maintenance, and the custom repository provides the flexibility to easily de-
fine, maintain, and schedule more complex database maintenance tasks.
Chapter 5
SAP applications store their data in the underlying database, in this case DB2.
Objects like application and technology tables, ABAP programs, and user
data (customizations and transactional data) are all stored in DB2 database ob-
jects. Therefore, administrators need to protect the database, so it can be recov-
ered in case of a problem (such as a user or application error) or a major
catastrophe (such as a disk crash).
The first step to enable the necessary protection to the data is to enable archival
logging for the SAP database. To do that, the DBA needs to configure the
CHAPTER 5: Backup and Recovery
The second step to protect the database is to take backups regularly, so that fewer
transaction log files are applied to recover the database to the latest consistent
point in time. Backing up the database is a recurring task. The best way to manage
these backups is to schedule jobs in the DBA Cockpit.
Finally, it is also crucial that the DBA validates the backup procedure by check-
ing the message logs and, most importantly, through planned restores on a
test system. We recommend a restore test once every three months.
• Few database backups—With few database backups, the DBA relies more
on the transaction logs for a possible database recovery. The problem with
this approach is that there could be many transaction logs to apply, so
recovery could take longer.
Another fact worth noting is that the backup utility spawns multiple DB2 agents,
to increase parallelism during the backup procedure. DB2 uses at most one agent
per table space. In some cases, therefore, you might consider separating some
tables (usually the largest ones) into their own table spaces, to increase parallelism
during the backup of the database.
Utility Throttling
As a DB2 administrator, you must perform some regular maintenance tasks to
keep the database running at optimal performance while protecting its data. Many
of these maintenance tasks are performed through DB2 utilities (in both online
and offline mode), such as these:
The fact that these utilities must execute regularly causes a dilemma for the
DBA, since they consume system resources and can affect the performance of the
database. You can opt to run these utilities in an offline maintenance window, but
in a 24x7 world, such windows are getting smaller or are even non-existent.
Therefore, in most cases, these utilities must execute online with user transac-
tions. Your challenge is to minimize their impact on the system.
DB2 provides a feature called adaptive utility throttling, which allows mainte-
nance utilities to run concurrently with user transactions while keeping
their system resource consumption within controlled limits. Before running utilities in
throttled mode, the DBA has to set a database manager configuration parame-
ter called UTIL_IMPACT_LIM. This parameter dictates the overall limit, at the in-
stance level, on the impact that all utilities together may have. Values for this
parameter range from one to 100, expressed as a percentage of allowable impact on
the workload within the DB2 instance. For example, setting this parameter to 100
means that all utilities run in unthrottled mode.
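Setting the instance-wide limit to SAP's standard ten percent can be sketched as:

```shell
# Cap the combined impact of all throttled utilities at 10%.
db2 "UPDATE DBM CFG USING UTIL_IMPACT_LIM 10"
```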
Once this instance-wide limit is specified, you can run utilities in throttled mode
when they are started or after they have started running. To run in throttled mode,
a utility must also be invoked with a non-zero priority. For example, to run the
backup utility in throttled mode, specify the following option when launching the
BACKUP command:
The UTIL_IMPACT_PRIORITY option accepts values between one and 100, with
one representing the lowest priority, and 100 the highest. If the
UTIL_IMPACT_PRIORITY keyword is specified with no priority, the backup will
run with the default priority of 50. If UTIL_IMPACT_PRIORITY is not specified, the
backup will run in unthrottled mode.
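Putting the pieces together, a throttled online backup might be launched like this (the database name and target path are made up):

```shell
# Online, compressed backup throttled at the default priority of 50.
# The ONLINE option requires archive logging to be enabled.
db2 "BACKUP DATABASE PRD ONLINE TO /backup/PRD COMPRESS UTIL_IMPACT_PRIORITY 50"
```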
If another utility were running at the same time in throttled mode (for ex-
ample, a RUNSTATS with priority 50), the combined impact of both utilities
would be kept within the UTIL_IMPACT_LIM limit. The utility with the higher
priority would get more of the available resources.
The DBA also has the option to specify the backup priority directly on the DBA
Cockpit, when the backup job is scheduled through the DBA Planning Calendar
(described in the following section). Again, this will only have an effect if the
UTIL_IMPACT_LIM (impact policy) has been set to a value other than 100. (SAP’s
standard configuration has this parameter set at ten percent.)
When one of these backup actions is dropped into the calendar, a new window
pops up, Schedule a New Action, shown in Figure 5.1. Backup options are speci-
fied in this window.
In the Action Description area of this window, the DBA can redefine the action,
date, and time. In the Action Parameters tab, you can choose different options for
the backup, such as these for Backup Mode:
Note that online backups are only possible when log archival mode is enabled.
(Archive logging ensures that log files are saved when they fill up, and are not
reused.)
The TRACKMOD database parameter needs to be set to YES to use the options for
incremental backups.
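Enabling the parameter can be sketched as follows (the database name is made up); note that a full backup is then needed before the first incremental backup can run:

```shell
# Track modified pages so incremental and delta backups become possible.
db2 "UPDATE DB CFG FOR PRD USING TRACKMOD YES"
```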
Clicking the Compress checkbox in the tab means the backup image is created in
compressed format. There are also several optimization parameters on the tab:
These parameters are not mandatory. If they are not specified, DB2 will choose
optimal values for them. The remaining two parameters in the tab are as follows:
Once the backup options are specified, add the task to the DBA Planning Calen-
dar using the Add button. The backup can be monitored in the Job Log tab,
shown in Figure 5.2.
Figure 5.2: The Job Log screen can be used to monitor the progress of the backup.
If you have access to a terminal and can log into the machine where the database
server resides (as db2<SID> or another user with the necessary authority), you can
display more details about the backup job using the LIST UTILITIES command.
The output of this command, shown in Figure 5.3, includes interesting informa-
tion about all utilities that are running at the moment. For backups, some of the
information presented includes the database name, description, state, throttling
mode, and percentage complete.
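The command can be issued, for example, as:

```shell
# Show all running utilities on this instance, including progress.
db2 "LIST UTILITIES SHOW DETAIL"
```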
Figure 5.3: The LIST UTILITIES command can also be used to monitor the progress of the
backup.
With database backups, logs archived, and validation tests, you now provide the
necessary protection to the SAP database. Should a recovery be needed, you
would restore the database using one of the backup images, and then apply the
logs up to the time of interest.
Multi-partition Databases
To handle a multi-partition database, the DBA Cockpit offers the options to back
up each partition individually or to back up all of them in one single job. On DB2
9.5, there is a new feature called single system view, in which a multi-partition
To back up all partitions of a database in a single job, just select the option All in
the Partition field (only available when the database is multi-partition), displayed
in the Schedule a New Action window.
On DB2 9.5, this type of backup is integrated into the backup utility, so fewer
configurations are necessary. This backup option is available in the DBA
Planning Calendar (Action Pad) as Snapshot Backup.
The recovery history file can grow very quickly, so DBAs might have to prune
some of the old information. This is done with the command PRUNE HISTORY.
The num_db_backups parameter specifies the number of backups to keep active in the history file. Once
the number of backups exceeds this value, the oldest backups are marked as ex-
pired in the history file. The entries for these expired backups are then deleted
from the history file when the next backup is performed.
The rec_his_retentn parameter specifies the number of days to keep backup infor-
mation in the history file. This configuration parameter should be set to a value
compatible with the value of num_db_backups. For example, if num_db_backups
is set to a large value, rec_his_retentn should be large enough to support that
number of backups. The PRUNE HISTORY <timestamp> command will remove
backup information from the history file for backups older than the value of this
parameter. If this value is set to -1, num_db_backups determines the expiration of
history file entries.
If the following command is run, the archived log files will also be removed
from the archive storage location:
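The command referred to here is presumably the PRUNE variant with the AND DELETE clause; a sketch, with an example timestamp:

```shell
# Prune history entries older than the timestamp and, with AND DELETE,
# also remove the associated archived log files from archive storage.
db2 "PRUNE HISTORY 20080701 AND DELETE"
```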
However, the DB2 backup images still need to be manually deleted after they expire.
With DB2 9.5, DBAs can also set the database configuration parameter
auto_del_rec_obj=on, which enables DB2 to automatically do the following oper-
ations when either the PRUNE HISTORY AND DELETE or BACKUP commands are
run:
Setting these parameters allows DB2 9.5 administrators to simply schedule nor-
mal backups. When those backups complete, DB2 will automatically maintain
the required number of backups, archived logs, and history entries, and automati-
cally delete anything that has become expired.
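Enabling this behavior can be sketched as follows (the database name is made up):

```shell
# Automatically delete expired backup images, load copies, and
# archived logs during BACKUP or PRUNE HISTORY ... AND DELETE.
db2 "UPDATE DB CFG FOR PRD USING AUTO_DEL_REC_OBJ ON"
```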
The DBA Cockpit also categorizes the backup executions using a color scheme.
An execution displayed in green means that it finished successfully. If a backup
execution is red, it was aborted with an error. The DBA can then diagnose the
backup failure using the Diagnostics option of the DBA Cockpit (discussed in
Chapter 8).
Figure 5.4: The execution status of previous database backups can be checked here.
The display also indicates the log chain to which each log file belongs. A log chain is a DB2 feature used to control log files that have
the same name but different contents. With this feature, the DBA doesn’t need to
manually control which logs to apply in a recovery scenario. DB2 manages this
automatically.
Figure 5.5: The log files that have been archived are displayed here.
Logging Parameters
The Logging Parameters screen shows information about the transaction log
files. Transaction log files are used to keep track of database transactions, and
they are required in recovery scenarios such as crash recovery and roll-forward
recovery.
This screen is divided into multiple tabs: Log Directory, ARCHMETH1, and possibly ARCHMETH2. (The ARCHMETH2 tab is displayed when you have enabled two methods to archive log files.) The archival methods are controlled by the database configuration parameters LOGARCHMETH1 and LOGARCHMETH2.
From here, you can also monitor the space used and available in the file system.
This monitoring is necessary to avoid “log full” error messages, when there is no
more space available for new log files. You can set the blk_log_dsk_ful database
configuration parameter, so that the DB2 database manager will repeatedly at-
tempt to create the new log file until the file is successfully created, instead of re-
turning “disk full” errors.
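For example, blocking behavior on a full log disk can be enabled as follows (a sketch; <DBSID> is a placeholder for the database alias):

```
-- Applications wait while DB2 retries creating the next log file,
-- instead of receiving "disk full" errors
UPDATE DB CFG FOR <DBSID> USING blk_log_dsk_ful YES
```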
For performance reasons, the log directory should also be mounted on separate
disks, preferably on RAID 10 LUNs.
Figure 5.6: The Log Directory tab displays information about log files, as well as log space
usage.
Figure 5.7: The ARCHMETH1 tab displays information about the logs saved by the log ar-
chive method specified.
Summary
Database backups and log file management are essential activities to protect the
SAP system against unplanned situations. Planned situations, such as system
cloning, also rely on backups and log file activities. Such activities can be
easily scheduled and monitored through the DBA Cockpit, as described in this
chapter.
Chapter 6
Configuration
Optimize Your Flight Patterns
All of the variables and configuration parameters in these areas have default val-
ues supplied by DB2. However, the DB2 default values will usually not meet the
performance required by SAP systems. Therefore, SAP provides its own set of
default or recommended values for these variables and configuration parameters.
SAP default values can be obtained from SAP notes, one for each supported DB2 version.
In these notes, some parameter values are recommended by SAP and should not
be changed. Other parameter values, though, are initial values that should be ad-
justed according to the particular system workload, as well as the hardware
resources available. These SAP default values will also be set automatically during
the SAP installation.
Autonomic computing is one of the strategic directions of the DB2 product. The
ultimate goal is for DB2 to become self-configuring, self-healing, self-optimizing,
and self-protecting: in short, a zero-administration database. By sensing and
responding to situations as they occur, autonomic computing shifts the burden of
managing a database system from database administrators to DB2 technology,
greatly reducing the total cost of ownership (TCO).
In addition to these DB2 variables and configuration parameters, the DBA Cockpit
also provides maintenance tools for other areas of database and system
configuration. All of these tools are organized into sections under the
Configuration menu.
The general information about the database includes the database name, the in-
stance name, the database version, and the fix pack level. If the database is in-
stalled as a High Availability Disaster Recovery (HADR) database, the detailed
HADR status information will also be displayed here.
Figure 6.1: The Overview screen shows general information about the database and the
operating system.
For some database manager configuration parameters, the database manager must
be stopped (db2stop) and restarted (db2start) for the new parameter values to
take effect. Other parameters can be changed online. These are called
configurable online configuration parameters. Some parameters support the
AUTOMATIC value, which means the database manager will tune the runtime
value automatically based on the current system workload and the system
resources available.
Choose Configuration → Database Manager, and you will be able to view and
maintain database manager configuration parameters. All parameters are nicely
grouped in a tree structure, as shown in Figure 6.2. To view parameters belong-
ing to a particular group, such as Memory, click its name to expand the tree.
Each parameter has a short description, a technical name, the current value, and
the deferred value. The current value is the active value stored in the memory,
while the deferred value is the value stored in the configuration file on the disk,
which will not take effect until the next time the database manager (or instance)
is restarted.
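The same current-versus-deferred behavior can be observed from the DB2 command line. A hedged sketch (the parameter and value are arbitrary examples):

```
-- The new value is written to disk but becomes active only after
-- the instance is restarted (db2stop / db2start)
UPDATE DBM CFG USING numdb 20 DEFERRED
-- Shows both the current (in-memory) and delayed (on-disk) values;
-- requires an instance attachment
GET DBM CFG SHOW DETAIL
```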
Figure 6.2: View and maintain database manager configuration parameters here.
Note that some parameter values are associated with a unit. For example, the
parameter INSTANCE_MEMORY is measured in units of 4KB pages. If this parameter
is set to 250,000, its actual value is 250,000 multiplied by 4KB, i.e., 1,000MB.
2. Click the “Display <-> Change” button, and enter the new
configuration parameter values. Some configuration parameters are
enabled for automatic value adjustment. In this case, the checkbox
AUTOMATIC is displayed. If you select it, the value will automatically be
maintained by DB2. You can also enter the new value, which will be
used as the starting value for automatic adjustment.
Table 6.1 lists some parameters that require tuning after the system is installed.
For other parameter settings, please refer to the SAP notes mentioned earlier in
this chapter.
The Database
There are a large number of configuration parameters defined at the database level.
Some parameters are informational, as they show the database attributes (such as
database codepage) and the database states (such as backup pending and
roll-forward pending). Most of the other parameters are configurable, as they are
used to control system resource utilization (CPU, memory, and disk I/O), transac-
tion logging, log file management, database automatic maintenance, database
high availability, and so on.
Like the database manager configuration parameters (DBM CFG), most of the
database parameters (DB CFG) are configurable online. In addition, many parameters
can be simply set to AUTOMATIC so that DB2 will tune the values dynamically.
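As an illustration (not an SAP default), enabling the self-tuning memory manager and handing individual consumers over to it might look like this:

```
UPDATE DB CFG FOR <DBSID> USING self_tuning_mem ON
UPDATE DB CFG FOR <DBSID> USING database_memory AUTOMATIC
-- Individual consumers, such as the lock list, can also be self-tuned
UPDATE DB CFG FOR <DBSID> USING locklist AUTOMATIC
```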
Choose Configuration → Database, and you will be able to view and maintain
database configuration parameters. As you can see in Figure 6.4, all parameters
are nicely grouped in a tree structure similar to the database manager configura-
tion parameters. The same interface layout is used to view and modify the param-
eter values.
You might also notice the little Show Value History icon beside the configu-
ration parameters in the Self-Tuning Memory Manager group. By clicking the
icon, you will see the value change history for the corresponding parameter. The
result for a parameter is displayed in a separate window. By default, the value
history information is displayed as a chart, as shown in Figure 6.5. To switch to a
tabular view, click the List button. To limit the history time frame, choose From
date and/or To date.
Figure 6.5: Clicking the Show Value History icon for an STMM configuration parameter dis-
plays a chart of value history information.
With the DBA Cockpit, it is easy to compare the database configuration parame-
ter settings for multiple partitions. On the Configuration: Database–Display
screen, click the Compare button. Select the partitions that you want to compare
in the Select Partitions to Compare pop-up window, and then click Compare. A
comparison of the parameter values across the selected partitions is displayed,
as shown in Figure 6.6.
Figure 6.6: Clicking the Compare button on the Database–Display screen displays this
comparison.
Registry Variables
Two types of variables can be maintained in the Registry Variables section of da-
tabase configuration: operating system environment variables and DB2 profile
registry variables. These variables control how to start up and run the database
manager. Only a handful of variables need to be set in the OS environment. Most
variables can now be set in the centrally controlled DB2 profile registry.
Environment Variables
In an SAP database instance, you will find some DB2-related OS environment
variables defined in the db2<dbsid>, <sid>adm, and sap<sid> user profiles,
such as these:
DB2INSTANCE=db2<dbsid>
INSTHOME=/db2/db2<dbsid>
These OS environment variables are defined automatically during the SAP in-
stance installation, and will not be changed. Hence, no ongoing maintenance is
required on the environment variables.
Registry Variables
Registry variables are centrally controlled by DB2 profile registries, which
exist at several levels, including the instance node, instance, and global levels.
DB2 configures the operating environment by checking for registry values and
environment variables, and resolving them in the following order:
1. Environment variables set with the set command (or the export
command on UNIX platforms).
2. Registry values set with the instance node level profile (using the db2set
-i <instance name> <nodenum> command).
3. Registry values set with the instance level profile (using the db2set -i
command).
4. Registry values set with the global level profile (using the db2set -g
command).
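The corresponding db2set invocations can be sketched as follows (db2<dbsid> and the node number 0 are placeholders):

```
db2set -g DB2_WORKLOAD=SAP
db2set -i db2<dbsid> DB2_WORKLOAD=SAP
db2set -i db2<dbsid> 0 DB2_WORKLOAD=SAP
db2set -all
```

The first three commands set the variable at the global, instance, and instance node levels, respectively; db2set -all lists all variables together with the scope at which each one is defined.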
As you can see in Figure 6.7, the environment variables and the DB2 profile reg-
istry variables are displayed on the same screen. They are identified by different
“scopes.”
Figure 6.7: Environment variables and the DB2 profile registry variables are displayed on
the same screen.
You will notice that the first registry variable is DB2_WORKLOAD, which is an ag-
gregate variable. An aggregate registry variable allows several registry variables
to be grouped as a configuration that is identified by another registry variable
name. As of DB2 9.5, the only valid aggregate registry variable is
DB2_WORKLOAD. When DB2_WORKLOAD is set to the value SAP, the DB2 engine
implicitly sets a list of registry variables, depending on the current DB2 version and
fix pack, to the values that are optimized for SAP systems. These variables,
shown in Figure 6.8, can influence different areas of the database manager, such
as the DB2 optimizer, locking behavior, table object creation, and MDC usage.
These variables and their respective values are chosen by the SAP and IBM DB2
development team to optimize the database manager for SAP applications, based
on the team’s customer experience and knowledge of the SAP applications. They
cannot be changed in the DBA Cockpit screen because they are not intended to
be tuned by customers. Some of these variables are even undocumented. The
workload values can be superseded by explicitly setting these registry variables
to different values. However, this should only be done on the advice of SAP
global support or IBM DB2 support, to address a specific need. In general, SAP
customers only need to ensure DB2_WORKLOAD is set to SAP.
Parameter Changes
Choose Configuration → Parameter Changes, and you will be able to view the
current and previous settings of the registry variables, database manager, and da-
tabase configuration parameters. You can also view the date and time of the
change. This feature can help DBAs keep track of the parameter’s change
history.
The initial screen, shown in Figure 6.9, only displays the active values for the
variables and configuration parameters. To see the change history, select History
in the Parameter field. You can also specify the period of the change history, as
well as the Parameter Type, which can be set to either Registry Variables, DB
Manager, or Database.
The parameter change history data is collected by a standard DBA job, “Collec-
tion of DB/DBM Config History,” on an hourly basis. The data collected is saved
in an SAP table and can be displayed on this screen.
Figure 6.9: The initial Parameter Changes screen displays the active values for the variables
and configuration parameters.
By default, the SAP installation program (SAPinst) will only create a database
with a single partition (partition number 0000). Therefore, all predefined partition
groups will be defined on this partition initially, as shown in Figure 6.10.
Figure 6.10: All predefined partition groups will be initially defined on parti-
tion 0000.
After you add a new partition, you can use the Edit button on this screen to mod-
ify the existing partition group, or use the Add button to define a new partition
group. You can also use the Delete button to remove a partition group on which
no table space exists.
Buffer Pools
A buffer pool is an area of main memory that has been allocated by the database
manager for the purpose of caching table and index data as it is read from disk. A
DB2 database can have one or multiple buffer pools.
Unlike other memory pools in the database, a buffer pool is considered a data-
base object, and its size is not controlled by a configuration parameter. To create
a new buffer pool, change the size of an existing buffer pool, or delete an existing
buffer pool, choose Configuration → Buffer Pools.
By default, the SAP installation program (SAPinst) creates a default buffer pool
named IBMDEFAULTBP, with a 16K page size, as shown in Figure 6.11. Buffer
pools usually take up the biggest portion of the database shared memory. You
can set a buffer pool either to a fixed size or to AUTOMATIC. If the buffer
pool size is set to AUTOMATIC, and STMM is enabled, the actual buffer pool size
will be tuned by DB2 automatically, in response to workload requirements.
When you create a new table space, you need to associate it with a buffer pool of
the same page size. Therefore, if you have table spaces created on different page
sizes, you have to create multiple buffer pools corresponding to those page sizes.
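As a sketch (the names BP16K and TS16K are invented for illustration), a buffer pool and a table space with matching page sizes could be created as:

```
CREATE BUFFERPOOL BP16K SIZE AUTOMATIC PAGESIZE 16K
CREATE TABLESPACE TS16K PAGESIZE 16K MANAGED BY AUTOMATIC STORAGE BUFFERPOOL BP16K
```

With SIZE AUTOMATIC and STMM enabled, DB2 adjusts the pool size at runtime, as described above.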
To view the buffer pool size, page size, associated partitions, and table spaces,
double-click the buffer pool from the list shown in Figure 6.11. Detailed informa-
tion about the buffer pool will be displayed, as shown in Figure 6.12.
We recommend that you enable the DB2 automatic statistics feature for an SAP
system. To do this, either update the database configuration parameter
AUTO_RUNSTATS, or select Configuration → Automatic Maintenance Settings
in the DBA Cockpit.
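On the command line, this could be sketched as follows; note that AUTO_RUNSTATS only takes effect when its parent switches are also on:

```
UPDATE DB CFG FOR <DBSID> USING auto_maint ON
UPDATE DB CFG FOR <DBSID> USING auto_tbl_maint ON
UPDATE DB CFG FOR <DBSID> USING auto_runstats ON
```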
There are some special tables whose cardinality and content can vary greatly at
runtime. These tables are called volatile tables. For volatile tables, statistics data
collected by RUNSTATS often becomes inaccurate. Therefore, the statistics of these
tables should not be collected and should not be used by the optimizer. Volatile
tables are marked in the DB2 system catalog table, so that the optimizer can iden-
tify these tables. The automatic statistics feature will not apply to these tables.
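A table is marked volatile with an ALTER TABLE statement; the schema and table name below are only an example:

```
-- Tells the optimizer to favor index access regardless of the statistics
ALTER TABLE sap<sid>.vbdata VOLATILE CARDINALITY
```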
File Systems
Choose Configuration → File Systems, and a list of file systems is displayed, as
shown in Figure 6.14. The information displayed on this screen can help you to
determine how much free space is available in these file systems. (This function
is not available for systems monitored using a remote database connection.)
Figure 6.14: The File Systems screen can help you to determine how much free space is
available.
Data Classes
A data class is used by the SAP DDIC (Data Dictionary) to define the physical
area of the database (i.e., the table space) in which the table should be created.
On DB2 LUW databases, each data class is mapped to two table spaces, the Data
Tablespace and the Index Tablespace.
This function can be used to maintain the relationship between a data class and
DB2 table spaces. It is only available for SAP ABAP systems.
Choose Configuration → Data Classes. A list of SAP ABAP data classes and
their corresponding DB2 table spaces is displayed, as shown in Figure 6.15. On
this screen, you can click the Edit button to modify the data class and table
spaces mapping, the Add button to create a new data class as well as its associa-
tion to table spaces, or the Delete button to drop a data class.
Figure 6.15: A list of SAP ABAP data classes and their corresponding DB2 table spaces is
displayed here.
A table space must be created before it can be associated with a data class. To
create a table space from the DBA Cockpit, select Space → Tablespaces. A new
data class name must also conform to the SAP naming convention. (For details,
see SAP Note 46272.)
Monitoring Settings
Choose Configuration → Monitoring Settings to set the path of the user-defined
function (UDF) library, and to change the retention periods for the history data.
A few DB2 UDFs developed by SAP are required for monitoring remote DB2 database
systems through the DBA Cockpit. These UDFs are packaged in a shared library
file named db6pmudf, which is part of the SAP kernel.
On the Configuration: Monitoring Settings screen, you need to set the path for
this library, as shown in Figure 6.16. Normally, this path should be the standard
SAP kernel path, “/usr/sap/<SID>/D*/exe.” To be sure about this, click the Test
button to test the UDF library loading.
Figure 6.16: Set the path for the UDFs’ library here.
During the SAP installation, SAP defines a number of standard DBA jobs, such
as “Collection of DB Performance History,” “Collection of DB/DBM Config
History,” and “Collection of Bufferpool History.” The history data collected by
these jobs will be saved to internal SAP tables. You can specify the retention pe-
riod of history data on the screen shown in Figure 6.17.
It is also a good practice to archive the DB2 diagnostic log file “db2diag.log”
regularly, so that it will not grow to an unmanageable size. Do this by clicking
the Switch Weekly checkbox for this file. The current “db2diag.log” will be
saved under a new name with a timestamp, and a new “db2diag.log” file will be
created automatically.
Automatic Backups
Automatic database backups help to ensure that your database is backed up prop-
erly and regularly, so that you don’t have to worry about when to back up or
know the syntax of the DB2 BACKUP command. An automatic database backup
can be either online or offline. It is triggered by predefined conditions, based on
the considerations of database recoverability and performance impact. Using the
Starting Conditions area of the Automatic Backup tab shown in Figure 6.18, you
can choose a predefined condition or customize the condition by specifying the
number of days and amount of log space created since the last backup. You also
need to specify the backup media.
Figure 6.18: Choose a predefined starting condition or customize the condition here.
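The underlying database configuration switch can be sketched as follows; the starting conditions and backup media themselves are maintained on the tab shown above:

```
-- Enable the automatic database backup feature
UPDATE DB CFG FOR <DBSID> USING auto_db_backup ON
```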
Automatic RUNSTATS
Automatic statistics collection can improve the database performance by main-
taining up-to-date table statistics. This feature is fully supported and works very
well with SAP systems. Therefore, you should enable automatic RUNSTATS for
all SAP systems. Automatic statistics collection is a background process that runs
approximately every two hours. The process evaluates all active tables, to check
whether or not tables require statistics to be updated. It then schedules RUNSTATS
jobs for tables whose statistics are out of date. The background RUNSTATS jobs al-
ways run in online and throttled mode, which means they do not affect the normal ac-
cess to the tables.
By default, automatic RUNSTATS jobs collect the basic table statistics with distri-
bution information and detailed index statistics using sampling. (The RUNSTATS
command is issued, specifying the WITH DISTRIBUTION and SAMPLED DETAILED
INDEXES ALL options.) You can customize the type of statistics collected by en-
abling statistics profiling, which uses information about previous database activ-
ity to determine which statistics are required by the database workload. You can
also customize the type of statistics collected for a particular table by creating
your own statistics profile for that table. As you can see in Figure 6.19, volatile
tables are excluded from automatic RUNSTATS.
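The statement that automatic RUNSTATS issues, as described above, can also be run manually; the table name is a placeholder:

```
RUNSTATS ON TABLE sap<sid>.<tabname> WITH DISTRIBUTION AND SAMPLED DETAILED INDEXES ALL
```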
Automatic REORG
Automatic reorganization determines the need for reorganization on tables and
indexes by using the REORGCHK formulas. It periodically evaluates tables and in-
dexes that have had their statistics updated, to see if reorganization is required. If
so, it internally schedules reorganization on the table and indexes.
Since the reorganization of large tables will generally take a long time, you
should enable automatic reorganization only on small tables. SAP has defined a
policy to select tables for automatic reorganization. The policy is based on the ta-
ble size. The default table filter size is set to 1GB, although this can be changed
on the Automatic REORG tab. A filter size of 1GB allows tables smaller than
that to qualify for automatic reorganization. Larger tables need to be reorganized
manually, using the DBA Cockpit's Jobs → DBA Planning Calendar or
Space → Single Table Analysis. If you want to specify a more granular table
filter policy, you need to use the DB2 Control Center tool.
All automatic maintenance activities will only occur within a specified time pe-
riod, called the maintenance window. An online maintenance window is used to
specify the time period for performing online activities, such as automatic
RUNSTATS, online automatic database backup, or online automatic index reorga-
nization. An offline maintenance window is used to specify the time period for
performing offline activities, such as offline automatic database backup and
offline table reorganization. Both online and offline maintenance windows can be
defined on the General tab of the Automatic Maintenance Settings screen, shown
in Figure 6.21.
Summary
Database configuration is critical to system performance, and to ensure smooth
operations. In an SAP environment, the database configuration must be tuned to
meet the demands of SAP applications, and to be consistent with SAP system
configuration, such as SAP Data Classes and the ABAP Dictionary (DDIC).
The SAP DBA Cockpit provides easy tools to help maintain every area of data-
base configuration and the database-specific SAP configuration. The joint
IBM-SAP development team has made a huge effort to optimize DB2 databases
for SAP applications and to enhance the autonomic computing features of the DB2
database. The goal is to make DB2 a zero-administration database, so
that DBAs can concentrate on higher value work, and thus lower the total cost of
ownership (TCO).
Chapter 7
The CCMS alert monitors for the DB2 database are integrated into the Alerts
section of the DBA Cockpit. All database alert monitoring, the alert mes-
sage history, and some alert configuration parameters are now easily accessible
here. The monitors include thresholds for disk space consumption, memory utili-
zation, buffer pool quality, locking, database backup, and log archival. If the da-
tabase exceeds the defined thresholds, emails can automatically notify
administrators, who can then implement corrections before the system is affected.
First, however, background monitoring must be activated. Execute transaction RZ21
and click Technical Infrastructure → Local Method Execution → Activate
Background Dispatching. Then, return to RZ21; in the Methods section, select
Method Definitions and click the Display Overview button. Search for, and dou-
ble-click either CCMS_OnAlert_Email or CCMS_OnAlert_Email_V2. Config-
ure the Parameters tab with the proper email sender, recipients, subject, etc. Then,
the specified recipients will be alerted via email when an alert threshold is crossed.
CHAPTER 7: The Alert Monitor
The CCMS system in SAP comes with pre-configured alert categories, parame-
ters, and thresholds for the DB2 database. Experienced users may modify this
configuration or change threshold values in transaction RZ21. In most cases,
though, we recommend keeping the default values for these thresholds.
Figure 7.1: The Alert Monitor displays a clear overview of overall system health.
Administrators can drill down through the categories to the individual monitor el-
ements, see status messages, and compare current values with the assigned
threshold values. For more detail, load the CCMS Monitor Sets (transaction
RZ20), and drill down through SAP CCMS Monitor Templates → Database →
DB2 Universal Database for NT/UNIX. You will be able to view the complete
monitor element details for the database.
Figure 7.2: The Alert Message Log displays the history of alert messages.
Alert Configuration
The Alert Configuration screen provides access to the alert threshold properties
from transaction RZ21. The main screen, shown in Figure 7.3, provides a list of
all alert monitors and threshold values.
Figure 7.3: The Alert Configuration screen displays a list of database alert monitor elements
from SAP CCMS.
Double-click any individual row to see the detailed information on that monitor
element, including threshold value details and data collection schedules. Through
this screen, shown in Figure 7.4, you can enable or disable email notification for
certain monitor thresholds, and activate or deactivate monitor elements. For ele-
ments not related to performance (such as the backup elements), the alert thresh-
olds can also be configured within the DBA Cockpit. However, for any of the
elements related to performance, attribute and threshold value maintenance must
be done within transaction RZ21.
Figure 7.4: Alert thresholds can be changed here for elements not related to database per-
formance.
Summary
The integration of the SAP CCMS database monitor elements into the DBA
Cockpit alert monitor simplifies the process of proactive problem analysis. Ev-
erything is easily visible within a single transaction, and automatic alert notifica-
tion ensures that the proper people are notified as soon as warning and error
thresholds are crossed. This allows problems to be caught and prevented before
they affect the system.
Chapter 8
Database Diagnostics
Dealing with Air Turbulence
One of the tasks that a DBA must perform every day is monitoring the health
of the database to look for possible problems and inconsistencies. No database
is perfect, and administrators will face a challenge sooner or
later. What differentiates database managers from one another is the way they
deal with challenges, based on the mechanisms and tools available. In that sense,
DB2 and SAP offer a variety of tools that can help the DBA quickly identify, di-
agnose, and solve a problem.
The deep integration of DB2 and SAP is showcased again in the DBA Cockpit’s
diagnostic option. It is composed of many tools that you can use to troubleshoot
diverse problems, such as database security, query performance, concurrency,
and inconsistencies between ABAP and database objects.
Figure 8.1: The Audit Log displays information about actions performed at the database
level.
By default, changes that happened in the current week are displayed. However,
the calendar can be used to choose a different week. DBAs can also change the
number of days of messages displayed. The fields listed in the Audit Log are explained
in Table 8.1.
Field       Meaning
Command     The command performed (SQL, add table space, delete table space,
            edit configuration)
However, there are special situations that can bring the performance down for a
particular application, or sometimes even affect the performance of the entire
system. In such cases, the DBAs must apply their knowledge to analyze and re-
solve the performance issue using diagnostic tools, historic data for comparison,
and their best judgment.
Before an SQL statement is executed, it must go through a compilation phase. One
of the components involved in this phase is the DB2 cost-based optimizer.
For query processing, one of the tasks performed by the optimizer is to develop di-
verse strategies, called access plans, to process the SQL statement. The optimizer
assigns a certain cost (its best estimate of the resource usage for the query)
to each plan, measured in an abstract IBM unit called the timeron. The optimizer then
chooses the plan with the lowest cost, and follows its execution strategy.
Of course, the optimizer chooses the plan based on the information available, so
providing correct information is vital for a good optimizer decision. Some data
used by the optimizer include the following:
• Statistics in system catalog tables. (If statistics are not current, update them
using the RUNSTATS command, or configure the AUTO_RUNSTATS feature
through the DBA Cockpit.)
• Configuration parameters.
• Bind options.
• The query optimization class.
• Available CPU and memory resources.
The execution strategy can include such factors as which objects will be used to
execute the query (index or table scan), the join methods (nested loop, hash,
merge, etc.) when the query involves multiple tables, the access order of the
objects, and the use of auxiliary tables.
The EXPLAIN option of the DBA Cockpit allows the administrator to generate the
access plan used by the optimizer in a particular query. Based on this informa-
tion, you can study the internal characteristics of the objects involved, and take
the proper actions. Some of these actions can include the following:
Figure 8.2: The EXPLAIN option allows you to display the SQL access plan.
Notice that the information in Figure 8.2 is displayed in a tree format, containing
the operators and objects used in the query. The cost of the access plan is also
displayed in timerons, as well as the optimization level and the degree of
parallelism.
A set of extra options is provided via buttons at the top of the screen. If you need
to study the access plan in more detail, or if you need to collect data to send to
SAP support, use these buttons, as follows:
• Details—When you click this button, you will see very detailed
information about the query execution plan. CPU speed, buffer pool size,
optimization level, optimized statement, and estimated number of rows are
just some of the details displayed. If you select an operator, only
information related to that operator is displayed.
Another parameter that can be changed for testing purposes is the query
degree. A degree of one (the default) means that no intra-partition
parallelism (parallelism inside the partition) is used. A value greater than
that might activate intra-partition parallelism, provided that this
functionality is enabled at the database manager level.
• Edit—This button allows you to edit the original query and explain it
again.
web browser, as shown in Figure 8.3. However, it contains basically the same op-
tions as the traditional version.
Figure 8.3: Here is an access plan displayed in the new version of EXPLAIN.
There might be some situations, however, in which the ABAP dictionary is not in
sync with the database. Some objects might be defined in the dictionary, but
don’t exist in DB2, and vice versa.
The administrator can use the Diagnostics option of the DBA Cockpit to check if
there are any inconsistencies between the ABAP dictionary and the database.
Figure 8.4: Discrepancies between the ABAP dictionary and the database are displayed
here.
are accessing and modifying the database. There are application development
guidelines that specifically deal with avoiding deadlocks, including these:
DB2 has mechanisms that monitor and resolve deadlock situations at specific in-
tervals, dictated by the database configuration parameter DLCHKTIME. When a
deadlock is detected, the database manager resolves the situation by randomly
picking one of the participating applications (the victim) to roll back, which al-
lows the other application to continue.
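DLCHKTIME is specified in milliseconds and can be inspected and changed from the CLP; something along these lines (the 10-second value shown is the usual default):

```
-- Display the current deadlock check interval
GET DB CFG FOR <SID>
--   Interval for checking deadlock (ms)   (DLCHKTIME) = 10000

-- Shorten the interval to five seconds
UPDATE DB CFG FOR <SID> USING DLCHKTIME 5000
```

A shorter interval detects deadlocks faster at the cost of slightly more monitoring overhead.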
Here are the steps to follow to display the deadlocks that must be analyzed:
Chapter 2 for more information.) If so, the information about these deadlocks can
be analyzed using the historical information collected by the Deadlock Monitor. The
information is displayed when the screen is refreshed or the Monitor is stopped.
As shown in Figure 8.6, the deadlocks recorded are displayed, and each occur-
rence can be expanded into more detail. Information on each occurrence is con-
tained in a root folder called “Deadlock Victim: <application that got rolled
back>.” Inside the folder, there is a summary of the agents involved in the dead-
lock. Information about the agents includes the client PID, host, authorization ID,
and waiting lock information (table, type, mode, etc.). Special arrow buttons can
be used to expand and collapse the detailed information.
Figure 8.6: This is an example of a deadlock situation captured by the Deadlock Monitor.
To find out about the SQL statements involved in the deadlock, click the State-
ments History button. This information can also be viewed separately, for each
agent involved in the scenario. Click the Agent Details button, and the Agent
Details window opens. This window has two tabs:
• Locks Held—This tab shows information about the locks held by the agent
and the locks that are needed (waiting).
The SQL statement history is one of the most important pieces of information to
diagnose a deadlock scenario. As you can see in Figure 8.7, it contains the full
stack of SQL statements executed by the agent in the transaction involved in the
deadlock. By looking at the statements involved, the administrator can easily find
which ABAP program or report generated the SQL, and then talk to the devel-
oper of the application. The problem might not necessarily be caused by the pro-
gram found here, but the developer and the administrator can work together to
see if more commit points can be introduced so locks are released faster, or
whether more significant changes need to be made.
Figure 8.7: The statement history information can also be viewed here.
The DBA Cockpit offers an interface to the CLP, which allows administrators to
run SQL statements and some administrative commands. To access the interface,
select Diagnostics → SQL Command Line. The administrative commands that
can be executed through this interface are the ones supported by the ADMIN_CMD
stored procedure. This procedure is used by applications to run administrative
commands using the SQL CALL statement. Figure 8.8 shows an example of the
commands that can be executed in this interface.
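For instance, a RUNSTATS command could be issued through ADMIN_CMD like this (the table SAPSR3.T000 is only an illustrative choice):

```sql
-- Run an administrative command through the ADMIN_CMD stored procedure,
-- the same mechanism the SQL Command Line interface relies on
CALL SYSPROC.ADMIN_CMD(
  'RUNSTATS ON TABLE SAPSR3.T000 WITH DISTRIBUTION AND DETAILED INDEXES ALL'
);
```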
Figure 8.8: You can execute SQL and administrative commands in the SQL Command
Line interface.
Thankfully, the DBA Cockpit comes to the rescue again. One of the most inter-
esting features provided in the Diagnostics option is the Index Advisor. The In-
dex Advisor is a subset of the DB2 Design Advisor. It is used to help you find
better indexes to support your workload. You can use the Index Advisor to create
virtual indexes and to let DB2 recommend indexes for an SQL statement.
After defining virtual indexes, you can explain the query again and have the
optimizer consider the virtual indexes, as well as the existing ones, when building
the access plan. If the optimizer selects a virtual index (whether user-defined or
recommended by the Index Advisor), you can create such an index in the data-
base, with the touch of a button.
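The same engine can also be driven from the command line with the db2advis utility; a sketch (the database name, statement, and time limit are placeholders):

```
-- Ask the Design Advisor for index recommendations (-m I) for a single
-- statement, limiting the advisor run to five minutes
db2advis -d <SID> -m I -t 5 \
  -s "SELECT * FROM SAPSR3.VBAK WHERE ERDAT = CURRENT DATE"
```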
In Figure 8.10, for example, the Index Advisor is recommending one new index
to support the execution of the query, and there is one user-defined virtual index.
Figure 8.10: The EXPLAIN option is shown here with existing, recommended, and
user-defined indexes.
You can compare the plans and the costs (in timerons), and based on the EXPLAIN
outputs, decide whether or not to create the indexes in the database and the
ABAP dictionary. To do that, just click the appropriate button (the magic wand)
next to the recommended index, and fill out the index description information.
For this reason, tracing the DBSL layer can give the DBA a good idea of the
SQL statements that can be affecting the performance of the database. SAP pro-
vides a way to use a cumulative trace of the database interface, so that the infor-
mation collected can be analyzed by the administrator later.
To use the cumulative trace, you must first activate it. There are two different
ways to do this:
• Activate the trace dynamically via profile parameter. Run transaction RZ11
and set the profile parameter dbs/db6/dbsl_cstrace = 1.
Note that you might have to restart the SAP system if all work processes are to be
traced. Configuration through the profile parameter is dynamic, but not
permanent.
Once the trace is active, all SAP work processes that use the database interface
write trace data to the table DB6CSTRACE in the table space PSAPUSER1D. The data
collected can be analyzed directly in the DBA Cockpit, by selecting Diagnostics
→ Cumulative SQL Trace. Alternatively, you can run report RSDB6CSTRACE using
transaction SA38, and analyze the data from there.
No statements are displayed when the trace has never been activated. After the
trace is activated and SQL statements are being logged, click the Refresh button
to refresh the window.
The actions PREPARE, EXECUTE, and FETCH are summed up in tabs, as shown in
Figure 8.11, and can be evaluated separately.
Figure 8.11: Trace information collected by the Cumulative SQL Trace facility helps DBAs
in their performance monitoring activities.
In this detailed view, the administrator has the option to run the EXPLAIN facil-
ity; click the corresponding button to display the access plan. (For more
information on how to activate the cumulative SQL trace, refer to SAP Note 139286.)
(DBSL). Run the transaction. The trace information will be logged in the
work directory of the instance.
Note that the trace directory must exist and be accessible, for all of these meth-
ods. Refer to “SAP Note 31707” for more details on how to activate the sequen-
tial DBSL trace.
The DBA Cockpit’s Deadlock Monitor can help analyze the occurrences of dead-
locks. SAP provides another way to track deadlocks. The DBSL deadlock trace
can be enabled in the following ways:
• Dynamically activate the DBSL deadlock trace for all work processes via
transaction RZ11, by changing the profile parameter
dbs/db6/dbsl_trace_deadlock_time = <seconds>. SAP recommends a time
interval of 20 to 26 seconds. The other parameter is
dbs/db6/dbsl_trace_dir = <tracepath>.
• Activate the trace for all processes of a LOGON session. Set the following
environment variables for user <sid>adm:
DB6_DBSL_TRACE_DEADLOCK_TIME = <time in seconds> and
DB6_DBSL_TRACE_DIR = <path>.
The default trace path is /tmp/TraceFiles for UNIX and \\sapmnt\TraceFiles for
Windows. (Refer to “SAP Note 175036” for more information about the DBSL
deadlock trace.)
To access information on the sequential DBSL trace and the DBSL deadlock
trace, choose Diagnostics → DBSL Trace Directory in the navigation frame of
the DBA Cockpit.
Figure 8.12 shows that the trace directory is set to the default, /tmp/TraceFiles. A
subdirectory <SID> is created under the trace directory, which is where the trace
files are generated. Notice that there are sequential trace files
(TraceFile<Appl-ID>.txt) and Deadlock Trace files (DeadlockTrc<App-ID>.txt)
in this directory, since both traces are using the default directory. To see the con-
tents of each file directly from here, double-click it.
Figure 8.12: You can see the trace files generated here.
Trace Status
SAP provides three different ways to trace the Database Support Layer: cumula-
tive SQL trace, sequential DBSL trace, and deadlock trace. These traces work in-
dependently of each other, so one trace can be activated while the others remain
disabled. They can also all be activated at the same time.
You can check if a cumulative DBSL trace is activated by checking whether new
records are being inserted in table sap<SID>.DB6CSTRACE. For sequential and
deadlock traces, check whether files are being updated or created in the trace di-
rectory. You can also check environment variables and profile parameters.
None of this is really necessary, however, because the DBA Cockpit provides a
very convenient way to check which DBSL traces are active at the moment. To
access this information, just select Diagnostics → Trace Status.
In the example in Figure 8.13, you can see that all three DBSL traces are cur-
rently activated. For the sequential trace, some options can be updated from this
same screen.
Figure 8.13: All three DBSL traces are currently activated here.
Besides checking the status of the traces, you can also activate and deactivate
traces dynamically from this window, by using the corresponding icons. The DBSL
trace requires the Trace Level information before being activated, and the deadlock
trace requires the Detection Interval value.
To get an overall look at the health of the database, you can use two
diagnostic files. The first one is the Database Notification Log (also known as the
Administration Notification Log), which is located in the directory specified by
the DIAGPATH database manager configuration parameter. The name of the file is
<instance name>.nfy. Since it is an ASCII file, it can be opened directly on the
database server machine, using an editor.
The DB2 database manager writes the following kinds of information to the Ad-
ministration Notification Log:
A database administrator can use this information to diagnose problems, tune the
database, or simply monitor the database.
To access the Database Notification Log directly from the DBA Cockpit, choose
Diagnostics → Database Notification Log. You can filter what messages get
displayed by choosing the date and the starting time. You can also filter by the
severity of the messages, which can vary from informational to error.
The level of detail reported in the Database Notification Log is controlled by the
NOTIFYLEVEL database manager configuration parameter. It ranges from zero to
four. The default value of three is appropriate for most systems.
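Both the log location and the detail level can be checked and adjusted from the CLP; for example (the path shown is a typical SAP layout, not a guarantee):

```
-- Where diagnostic files are written, and how verbose they are
GET DBM CFG
--   Diagnostic data directory path   (DIAGPATH)    = /db2/<SID>/db2dump
--   Notify Level                     (NOTIFYLEVEL) = 3

-- Temporarily raise the detail level while diagnosing a problem
UPDATE DBM CFG USING NOTIFYLEVEL 4
```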
The db2diag.log file can grow very big even at the default level, so from time to
time, the administrator should archive it. DB2 offers a tool for that, called
db2diag. By using the -A option (db2diag -A), the current db2diag.log file gets a
timestamp appended to it, and a new log file is created.
To access the contents of the db2diag.log file directly from the DBA Cockpit,
choose Diagnostics → Database Diag Log. You can also filter which messages
to display. Filters are available for date, time, and severity of the message.
The db2diag.log can also be accessed directly on the database server machine, since it
is an ASCII file. We usually recommend this method, since you can use OS com-
mands like grep (in UNIX/Linux systems) to apply other filters on the db2diag.log
file. Alternatively, you can use the db2diag tool, which provides grep-like and
tail-like functionality (among others), so more restrictive filters can be applied.
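A few representative invocations, as a sketch (consult db2diag -h for the full option list):

```
# Archive the current db2diag.log: the old file gets a timestamp
# appended to its name, and a fresh log is started
db2diag -A

# Show only entries of level Severe
db2diag -l Severe

# grep-like filtering on record fields
db2diag -g "level=Error"
```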
DB2 Logs
The DB2 Logs option, shown in Figure 8.14, is available in DB2 version 9.5.
This option shows the combined information of the Database Notification Log,
the Database Diagnostic Log, and the Statistics Log (information generated by
the autonomic computing daemon, db2acd). There are several filters you can ap-
ply to display only a subset of the information:
After applying the filters, press the Find button to refresh the messages.
Figure 8.15: You can view the DB2 diagnostic files here.
The DB2 Help Center can be accessed through a browser. It can also be accessed
directly from the DBA Cockpit by choosing Diagnostics → DB2 Help Center.
Figure 8.16: The DB2 documentation can be viewed directly from the DBA Cockpit.
Summary
Just like a pilot must deal with air turbulence during a flight, a DBA must deal
with problems that might occur in the database. The DBA Cockpit provides
diverse tools to quickly diagnose the most common problems in a SAP database,
such as ABAP consistency, SQL performance, and concurrency. For other prob-
lems, the DBA Cockpit provides a convenient way to access FODC information
captured by DB2. Even novice DB2 administrators can easily access these vital
files without needing to log onto the database server machine and know their
locations.
Chapter 9
New Features
Flying into the Future
The previous chapters have outlined the current benefits of the integrated SAP
DBA Cockpit for DB2 LUW. In this chapter, you will see that these benefits con-
tinue to grow as the SAP-DB2 partnership continues to mature.
SAP Enhancement Package 1 for SAP NetWeaver 7.0 integrates DB2 9.5 Work-
load Management into the SAP kernel. SAP delivers a predefined WLM configu-
ration proposal, which defines workloads and service classes for each unique
work process type. This basic configuration can then be enhanced by creating
one additional workload and service class, which can prioritize work based on
the SAP user, SAP transaction, or SAP application server.
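Under the covers, such a configuration is expressed in DB2 9.5 WLM DDL. A minimal, hypothetical sketch (the service class, workload, and application names are all invented for illustration):

```sql
-- A service class for dialog work, plus a workload that maps
-- connections from one (hypothetical) application to it
CREATE SERVICE CLASS SAP_DIALOG;

CREATE WORKLOAD WL_DIALOG
  APPLNAME('dw.sapC11_DVEBMGS00')
  SERVICE CLASS SAP_DIALOG;

GRANT USAGE ON WORKLOAD WL_DIALOG TO PUBLIC;
```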
Figure 9.1: Workloads and service classes from DB2 Workload Management are now inte-
grated into SAP.
The service class priorities can be maintained within the General tab in the bot-
tom half of the display. The Statistics tab contains detailed information and
graphical histograms displaying performance characteristics of the applications
that have run within that service class.
Critical Activities
The Critical Activities screen, shown in Figure 9.2, provides an administrative in-
terface for the thresholds defined for WLM. There is one area to maintain and
configure thresholds for various database activities, and another to view histori-
cal information on threshold violations.
Figure 9.2: Threshold violations can be viewed within the Critical Activities screen.
The thresholds define the Service Level Agreements for the system. The thresh-
old violations allow administrators to quickly identify performance problems re-
lated to these SLAs, and then take measures to resolve any issues.
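In DB2 terms, such an SLA could be expressed as a WLM threshold; a hypothetical sketch (the names and the ten-minute limit are illustrative only):

```sql
-- Stop any activity in the (hypothetical) SAP_DIALOG service class
-- that runs longer than ten minutes
CREATE THRESHOLD TH_DIALOG_RUNTIME
  FOR SERVICE CLASS SAP_DIALOG ACTIVITIES
  ENFORCEMENT DATABASE
  WHEN ACTIVITYTOTALTIME > 10 MINUTES
  STOP EXECUTION;
```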
Finally, the SAP WLM Setup Status provides an overview of the WLM configu-
ration. This displays the status of the various WLM configuration steps, and dis-
plays the areas of WLM that have been successfully set up.
BI Administration
DB2 provides several key, unique features to improve the performance and man-
ageability of large SAP NetWeaver BW data warehouses. Here are two of these
key features:
Due to the importance of these features, SAP has integrated DB2 DPF and MDC
tooling into the DBA Cockpit, within a folder named either “BW Administra-
tion” or “Wizards,” depending on the release of SAP being used.
BI Data Distribution
DB2 table spaces are created in partition groups. When an SAP NetWeaver BW
system is installed on a partitioned DB2 database, the objects in the BW table
spaces may be distributed across multiple database partitions. If a DBA changes
the partition layout (usually by adding partitions to the BW partition groups), the
data residing in those table spaces needs to be redistributed, so that the same
amount of data resides on each partition. This ensures that each partition has
nearly the same workload when processing large BW reports. For example, if a
partition group with four partitions is altered to add two new partitions, the data
previously distributed across the original four partitions must be redistributed
across all six partitions.
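Behind the wizard, the work corresponds roughly to these DB2 statements (the partition group name and partition numbers are invented for illustration):

```sql
-- Add two new partitions to a (hypothetical) BW partition group ...
ALTER DATABASE PARTITION GROUP NGRP_BW ADD DBPARTITIONNUMS (4, 5);

-- ... then spread the existing data evenly across all six partitions
REDISTRIBUTE DATABASE PARTITION GROUP NGRP_BW UNIFORM;
```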
The BW Data Distribution Wizard in the DBA Cockpit provides a very simple
interface for this process, shown in Figure 9.3. First, you select the partitions for
each partition group from a grid of checkboxes. Next, the wizard defines tempo-
rary table space containers, based on the default SAP container paths. Finally,
you schedule the redistribution job to run during low system usage.
The wizard immediately alters the partition groups, creates the temporary table
space containers, and schedules the redistribution job in the DBA Planning Cal-
endar. Once the redistribution job completes, the partition layout changes are
done.
Figure 9.3: The BI Data Distribution wizard guides users through the steps required to re-
partition a DB2 SAP BW system.
Creating proper MDC indexes can greatly improve SAP NetWeaver BW perfor-
mance. However, finding the best columns for the MDC index on a BW object
can be challenging. The MDC index will benefit performance most if its columns
are frequently used as query restrictions in the WHERE clause of many large BW
queries. Therefore, optimal MDC index selection requires you to search through
the SQL cache for BW object queries, and identify the frequently used columns.
Those columns will be the best candidates for MDC dimensions on that table.
Then, you must identify the cardinality (number of unique values) of each poten-
tial MDC dimension. High-cardinality columns might not be desirable, because
DB2 will allocate one extent for each unique combination of MDC index values.
Therefore, if a unique index is included in the MDC index, each extent will only
contain one row, resulting in wasted disk space. The best MDC index columns
are low-cardinality columns frequently used in query restrictions. This can im-
prove performance without increasing table size.
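Column cardinalities can be read from the catalog after RUNSTATS, and the chosen dimensions then go into an ORGANIZE BY clause. A hypothetical sketch (the table and column names are invented):

```sql
-- Inspect candidate dimension cardinalities (valid after RUNSTATS)
SELECT COLNAME, COLCARD
  FROM SYSCAT.COLUMNS
 WHERE TABSCHEMA = 'SAPSR3' AND TABNAME = 'SALES_FACT'
 ORDER BY COLCARD;

-- A table clustered on two low-cardinality dimensions
CREATE TABLE SALES_FACT_MDC (
  CALMONTH INTEGER       NOT NULL,
  REGION   CHAR(3)       NOT NULL,
  REVENUE  DECIMAL(15,2)
) ORGANIZE BY DIMENSIONS (CALMONTH, REGION);
```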
DB2 contains several Advisors, which are able to assist you with some of the
more intensive tasks. DB2 has both a traditional Index Advisor (discussed in the
previous chapter), and an MDC Index Advisor. The MDC Index Advisor collects
queries run on selected tables, analyzes their characteristics, and recommends op-
timal MDC indexes to improve performance without increasing disk consump-
tion. This takes into account all queries run during the collection period, and
greatly reduces the effort involved in MDC index creation.
SAP Enhancement Package 1 for SAP NetWeaver 7.0 includes a graphical inter-
face to the DB2 MDC Index Advisor in the DBA Cockpit, under BI Administration
→ MDC Advisor. The Input tab, shown in Figure 9.4, contains methods for
collecting and analyzing queries for specific BW objects (InfoCube FACT tables
and the active table of DataStore Objects).
Figure 9.4: Add InfoProviders to the MDC Advisor, and let DB2 recom-
mend beneficial MDC indexes.
1. Click the Add InfoProvider button to input the BW object(s) you want
to analyze.
3. Execute the BW reports that run on the objects being analyzed. The
queries that execute against the selected BW objects will be stored in
database tables in the SYSTOOLS table space.
5. Select the BW object(s) to analyze, and click the Analyze button to start
the query analysis process. The analysis is scheduled as a background
job, which can be monitored through the DBA Planning Calendar. Once
the analysis job completes, the MDC Advisor displays the results and
deletes any saved BW temporary tables and query information.
The MDC proposals can be viewed in the Result tab, shown in Figure 9.5. The
recommended MDC index is listed beneath each analyzed InfoProvider, with es-
timates for performance and space consumption. The Estimated Improvement
gives the overall performance improvement expected for all queries on that
InfoProvider. The Estimated Space Increase specifies the percentage that the
InfoProvider may increase in size. The MDC Advisor will only recommend
MDC indexes with an estimated space increase of less than 10 percent. The pro-
posed MDC indexes can then be implemented from transaction RSA1.
Figure 9.5: The Results tab contains the MDC Indexes recommended by DB2.
Summary
These pages have presented countless examples of DB2 administration integrated
into the core SAP NetWeaver technology. SAP DBAs can perform almost any
DB2 administrative task through standard SAP transactions. This integration sim-
plifies many SAP database administration tasks, and eases the transition from
other relational databases to DB2. The partnership between DB2 and SAP, and
the complete integration of DB2 into the DBA Cockpit, are two of the many rea-
sons why DB2 is the preferred and recommended database for SAP systems.
The DBA Cockpit provides SAP database administrators a single interface for al-
most all DB2 monitoring and administration, such as the following:
As IBM releases new DB2 technology, new features are continually integrated
into the SAP DBA Cockpit. This enables SAP database administrators to easily
exploit the new technology in their SAP systems.
To close with one final airline pilot analogy: fly the latest and greatest jet. Select
the cockpit that allows you the most control to perfect the performance of your
aircraft. Pilot the best technology, which is integrated completely and optimized
specifically for your cockpit. Launch your SAP business systems into the future
on DB2.