
SAP DBA

COCKPIT
Flight Plans for DB2 LUW
Database Administrators
DB2 is now the database most recommended for use with SAP applications, and DB2
skills are now critical for all SAP technical professionals. The most important tool
within SAP for database administration is the SAP DBA Cockpit, which provides a
more extensive administrative interface on DB2 than any other database. This book
steps through every aspect of the SAP DBA Cockpit for DB2. Readers will quickly
learn how to use the SAP DBA Cockpit to perform powerful DB2 administration tasks
and performance analysis. This book provides both DB2 beginners and experts an
invaluable reference for the abundance of information accessible from within the SAP
DBA Cockpit for DB2. It makes it easy for SAP NetWeaver administrators, consultants,
and DBAs to understand the strengths of DB2 for SAP, and how to leverage those
strengths within their own unique application environments.

EDUARDO AKISUE
Certified DB2 9 Administrator
Certified Informix Administrator
Certified SAP Technology Consultant
SAP Certified OS/DB Migration Consultant

LIWEN YEOW
Certified Technology Associate–System Administration (DB2) for SAP NetWeaver 7.0
SAP Certified Technology Consultant for DB/OS Migration

PATRICK ZENG
Certified DB2 Solutions Expert
Certified SAP Technology Consultant

JEREMY BROUGHTON
SAP Certified Basis Consultant for DB2 on NetWeaver 2004
SAP Certified OS/DB Migration Consultant
MC Press Online, LP
125 N. Woodland Trail
Lewisville, TX 75077
SAP DBA Cockpit
Flight Plans for
DB2 LUW Database Administrators

Eduardo Akisue
Jeremy Broughton
Liwen Yeow
Patrick Zeng

Foreword by Torsten Ziegler


SAP DBA Cockpit: Flight Plans for DB2 LUW Database Administrators
Eduardo Akisue, Jeremy Broughton, Liwen Yeow, Patrick Zeng
Foreword by Torsten Ziegler
October 2009

© 2009 IBM Corporation. All rights reserved.


Portions © MC Press Online, LP.

Every attempt has been made to provide correct information. However, the publisher and the author do not guarantee the accuracy of the book and do not assume responsibility for information included in or omitted from it.

IBM is a registered trademark of International Business Machines Corporation in the United States, other countries, or both. DB2 is a registered trademark of International Business Machines Corporation in the United States, other countries, or both. All other product names are trademarked or copyrighted by their respective manufacturers.

This publication is protected by copyright, and permission must be obtained from the publisher
prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by
any means, electronic, mechanical, photocopying, recording, or likewise.

For information regarding permissions or special orders, please contact:


MC Press
Corporate Offices
125 N. Woodland Trail
Lewisville, TX 75077 USA

For information regarding sales and/or customer service, please contact:


MC Press
P.O. Box 4300
Big Sandy, TX 75755-4300 USA

ISBN: 978-158347-089-3
About the Authors

Eduardo Akisue is a member of the WW DB2 SAP Technical Sales Enablement and Support team. Prior to his current role, he worked for many years supporting DB2 and Informix customers in the Latin America region. He is a Certified DB2 9 Administrator, a Certified Informix Administrator, and an Informix “dial-up” Engineer. He is also a Certified SAP Technology Consultant and an SAP Certified OS/DB Migration Consultant. Eduardo can be reached at akisue@us.ibm.com.

Jeremy Broughton is a Technical Enablement Specialist for IBM DB2 and SAP. He has worked within the IBM DB2 Development Lab for 10 years, first developing infrastructure and tooling for DB2 development, and then rewriting internal DB2 code to optimize compilation performance and development agility. For the past three years, Jeremy has been dedicated to helping SAP professionals leverage the strengths of DB2 within SAP implementations. He has assisted with proofs of concept, provided consulting to customers implementing SAP on DB2, and presented numerous workshops around the world teaching DB2 administration and migration methodology for SAP systems. He is an SAP Certified Basis Consultant for DB2 on NetWeaver 2004, and an SAP Certified OS/DB Migration Consultant. Jeremy can be reached at jeremyb@ca.ibm.com.
Liwen Yeow is the WW SAP Technical Sales Manager for DB2 Distributed Platforms. He has been with IBM since 1988 and has worked in the SAP field since 1995 in multiple capacities: as part of DB2 Service, as an SAP Consultant for DB2, as a Customer Advocate for many of the large SAP-DB2 customers, and as Manager of the IBM-SAP Integration and Support Center. In his current role, he is responsible for the enablement of the Technical Pre-Sales teams and provides guidance to the Sales teams in SAP sales opportunities. He is a Certified Technology Associate–System Administration (DB2) for SAP NetWeaver 7.0, and an SAP Certified Technology Consultant for DB/OS Migration. Liwen can be reached at yeow@ca.ibm.com.

Patrick Zeng was a member of the WW DB2 SAP Technical Sales Enablement and Support team and currently works as a DBA at Bank of America. He has many years of experience supporting SAP and DB2 customers. He is a Certified DB2 Solutions Expert and a Certified SAP Technology Consultant. Patrick can be reached at patrick.pucheng.zeng@gmail.com.

Torsten Ziegler has been the Development Manager for SAP NetWeaver on IBM DB2 for Linux, UNIX, and Windows since 2001. After having worked in other industries, he joined SAP as a developer in 1997. In his current role, he is responsible for development, maintenance, and development support for all DB2-specific components of SAP NetWeaver and applications based on NetWeaver. He can be reached at torsten.ziegler@sap.com.
Acknowledgments

The authors would like to express their gratitude for the technical contributions
received from the following colleagues:

At IBM:
Guiyun Cao
Martin Mezger
Karl Fleckenstein

At SAP AG:
Torsten Ziegler
Ralf Stauffer
Andreas Zimmermann
Steffen Siegmund
Britta Bachert
Contents

Foreword by Torsten Ziegler. . . . . . . . . . . . . . . . . . . vi

Chapter 1: The SAP DBA Cockpit . . . . . . . . . . . . . . . . . . . . . . 1


Central Monitoring of Remote Systems . . . . . . . . . . . . . . 5
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Chapter 2: Performance Monitoring . . . . . . . . . . . . . . . . . . . . . 6


Performance: Partition Overview . . . . . . . . . . . . . . . . . . 7
Performance: Database Snapshot . . . . . . . . . . . . . . . . . . 9
The Buffer Pool . . . . . . . . . . . . . . . . . . . . . . . . 10
The Catalog Cache and Package Cache . . . . . . . . . . . . 15
Asynchronous I/O . . . . . . . . . . . . . . . . . . . . . . . 18
Direct I/O. . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Real-Time Statistics (RTS). . . . . . . . . . . . . . . . . . . 20
Locks and Deadlocks. . . . . . . . . . . . . . . . . . . . . . 21
Logging. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Sorts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
XML Storage . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Performance: Schemas. . . . . . . . . . . . . . . . . . . . . . . 31
Performance: Buffer Pool Snapshot . . . . . . . . . . . . . . . . 31
Performance: Tablespace Snapshot . . . . . . . . . . . . . . . . 33
Performance: Table Snapshot . . . . . . . . . . . . . . . . . . . 34
Performance: Application Snapshot . . . . . . . . . . . . . . . . 36
Performance: SQL Cache Snapshot . . . . . . . . . . . . . . . . 37
Performance: Lock Waits and Deadlocks . . . . . . . . . . . . . 39

Performance: Active Inplace Table Reorganizations . . . . . . . 41
Performance: History–Database . . . . . . . . . . . . . . . . . . 41
Performance: History–Tables . . . . . . . . . . . . . . . . . . . 42
Performance Warehouse . . . . . . . . . . . . . . . . . . . . . . 43
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

Chapter 3: Storage Management . . . . . . . . . . . . . . . . . . . . . . 45


Automatic Storage . . . . . . . . . . . . . . . . . . . . . . . . . 47
Table Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
The Technical Settings Tab . . . . . . . . . . . . . . . . . . 51
The Storage Parameters Tab . . . . . . . . . . . . . . . . . . 53
The Containers Tab . . . . . . . . . . . . . . . . . . . . . . 54
DMS/SMS Table Spaces . . . . . . . . . . . . . . . . . . . . 54
Containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Tables and Indexes. . . . . . . . . . . . . . . . . . . . . . . . . 57
Single Table Analysis . . . . . . . . . . . . . . . . . . . . . . . 60
The Table Tab . . . . . . . . . . . . . . . . . . . . . . . . . 61
The Indexes Tab . . . . . . . . . . . . . . . . . . . . . . . . 63
The Table Structures Tab . . . . . . . . . . . . . . . . . . . 64
The RUNSTATS Control Tab . . . . . . . . . . . . . . . . . 65
The Index Structures Tab . . . . . . . . . . . . . . . . . . . 67
The RUNSTATS Profile Tab . . . . . . . . . . . . . . . . . 67
The Table Status Tab. . . . . . . . . . . . . . . . . . . . . . 67
The Compression Status Tab. . . . . . . . . . . . . . . . . . 69
Virtual Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Historical Analysis . . . . . . . . . . . . . . . . . . . . . . . . 75
The Database and Table Spaces . . . . . . . . . . . . . . . . 77
Tables and Indexes . . . . . . . . . . . . . . . . . . . . . . . 78
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

Chapter 4: Job Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . 81


The Central Calendar . . . . . . . . . . . . . . . . . . . . . . . 81
The DBA Planning Calendar . . . . . . . . . . . . . . . . . . . 83
REORGCHK for All Tables . . . . . . . . . . . . . . . . . . 84
Scheduling Backups . . . . . . . . . . . . . . . . . . . . . . 85
Archiving Log Files to a Tape Device . . . . . . . . . . . . . 86

Updating Statistics . . . . . . . . . . . . . . . . . . . . . . . 86
Table Reorganization. . . . . . . . . . . . . . . . . . . . . . 87
Custom Job Scripts . . . . . . . . . . . . . . . . . . . . . . . 87
The DBA Log . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Back-end Configuration . . . . . . . . . . . . . . . . . . . . . . 89
SQL Script Maintenance. . . . . . . . . . . . . . . . . . . . . . 90
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

Chapter 5: Backup and Recovery . . . . . . . . . . . . . . . . . . . . . . 92


The Backup Strategy. . . . . . . . . . . . . . . . . . . . . . . . 93
Utility Throttling. . . . . . . . . . . . . . . . . . . . . . . . . . 94
Scheduling Backups in the DBA Cockpit . . . . . . . . . . . . . 95
Multi-partition Databases . . . . . . . . . . . . . . . . . . . 99
Advanced Backup Technology . . . . . . . . . . . . . . . . 100
The DB2 Recovery History File . . . . . . . . . . . . . . . 100
The Backup and Recovery Overview Screen . . . . . . . . . . 102
The Database Backup Tab . . . . . . . . . . . . . . . . . . 102
The Archived Log Files Tab . . . . . . . . . . . . . . . . . 102
Logging Parameters . . . . . . . . . . . . . . . . . . . . . . . 103
The Log Directory . . . . . . . . . . . . . . . . . . . . . . 104
The ARCHMETH1 Tab . . . . . . . . . . . . . . . . . . . 105
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

Chapter 6: Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 106


The Overview Screen. . . . . . . . . . . . . . . . . . . . . . . 108
The Database Manager . . . . . . . . . . . . . . . . . . . . . . 109
The Database . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Registry Variables . . . . . . . . . . . . . . . . . . . . . . . . 117
Environment Variables . . . . . . . . . . . . . . . . . . . . 118
Registry Variables . . . . . . . . . . . . . . . . . . . . . . 118
Parameter Changes . . . . . . . . . . . . . . . . . . . . . . . . 121
Database Partition Groups . . . . . . . . . . . . . . . . . . . . 122
Buffer Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Special Tables Regarding RUNSTATS . . . . . . . . . . . . . 125
File Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Data Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

Monitoring Settings . . . . . . . . . . . . . . . . . . . . . . . 128
Automatic Maintenance Settings . . . . . . . . . . . . . . . . . 130
Automatic Backups . . . . . . . . . . . . . . . . . . . . . . 130
Automatic RUNSTATS. . . . . . . . . . . . . . . . . . . . 131
Automatic REORG . . . . . . . . . . . . . . . . . . . . . . 132
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

Chapter 7: The Alert Monitor . . . . . . . . . . . . . . . . . . . . . . . . 135


The Alert Monitor. . . . . . . . . . . . . . . . . . . . . . . 136
The Alert Message Log . . . . . . . . . . . . . . . . . . . . 137
Alert Configuration . . . . . . . . . . . . . . . . . . . . . . 138
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

Chapter 8: Database Diagnostics . . . . . . . . . . . . . . . . . . . . . 140


The Audit Log . . . . . . . . . . . . . . . . . . . . . . . . . . 141
The EXPLAIN Option . . . . . . . . . . . . . . . . . . . . . . 142
The New Version of EXPLAIN . . . . . . . . . . . . . . . . . 146
Missing Tables and Indexes . . . . . . . . . . . . . . . . . . . 147
The Deadlock Monitor . . . . . . . . . . . . . . . . . . . . . . 149
Creating the Deadlock Monitor . . . . . . . . . . . . . . . 151
Enabling the Deadlock Monitor . . . . . . . . . . . . . . . 151
Analyzing the Information Collected . . . . . . . . . . . . . 151
Stopping the Deadlock Monitor . . . . . . . . . . . . . . . 154
Resetting or Dropping the Deadlock Monitor . . . . . . . . 154
The SQL Command Line. . . . . . . . . . . . . . . . . . . . . 154
The Index Advisor . . . . . . . . . . . . . . . . . . . . . . . . 156
Indexes Recommended by DB2 . . . . . . . . . . . . . . . 157
Creating Virtual Indexes . . . . . . . . . . . . . . . . . . . 157
The Cumulative SQL Trace . . . . . . . . . . . . . . . . . . . 159
The DBSL Trace Directory. . . . . . . . . . . . . . . . . . . . 161
The Sequential DBSL Trace . . . . . . . . . . . . . . . . . 161
The Deadlock Trace. . . . . . . . . . . . . . . . . . . . . . 162
Trace Status. . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
The Database Notification Log. . . . . . . . . . . . . . . . . . 165
The Database Diagnostic Log . . . . . . . . . . . . . . . . . . 166
DB2 Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167

The Dump Directory . . . . . . . . . . . . . . . . . . . . . . . 168
The DB2 Help Center . . . . . . . . . . . . . . . . . . . . . . 169
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169

Chapter 9: New Features . . . . . . . . . . . . . . . . . . . . . . . . . . 171


Workload Management (WLM) . . . . . . . . . . . . . . . . . 171
Workloads and Service Classes. . . . . . . . . . . . . . . . 172
Critical Activities . . . . . . . . . . . . . . . . . . . . . . . 173
BI Administration . . . . . . . . . . . . . . . . . . . . . . . . 174
BI Data Distribution . . . . . . . . . . . . . . . . . . . . . 174
The MDC Advisor . . . . . . . . . . . . . . . . . . . . . . 176
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179

Foreword
This is a remarkable book, written by IBM experts who have in-depth knowledge about SAP on DB2. The authors gained their profound experience not only from their work with many customers who adopted DB2 for their SAP applications, but also from their very close cooperation with SAP development. Based on the analogy of a pilot's need to know about the controls of his aircraft, this book takes you through the entire world of DB2 monitoring and administration. You will find it a useful introduction if you are new to SAP on DB2, and you will also be able to use it as a reference if you are an experienced DBA.

The SAP DBA Cockpit is one of many visible proof points of the excellent integration of SAP solutions with IBM DB2. This book will familiarize you with everything you need to know to operate IBM DB2 optimally with your SAP solution. In a tutorial-like, easy-to-read style, it takes you from the basic controls to advanced monitoring and tuning, and at the same time provides you with useful background information about DB2. And even more, it is just fun to read.

I hope you will find it as useful and enjoyable as I did.

—Torsten Ziegler
SAP Manager
DB2 LUW Platform Development

Chapter 1

The SAP DBA Cockpit


A Pilot Must Know the Controls

Just as a pilot must know the aircraft cockpit, a database
administrator must know the SAP database administration
tools. The SAP DBA Cockpit is the central database
administration interface for SAP systems on all databases.
The DBA Cockpit for DB2 provides administrators
with a more comprehensive administration and
monitoring tool for SAP databases.

Piloting a large commercial aircraft requires a great deal of skill. Pilots must understand how the adjustments they make to the aircraft components affect the flight of the airplane. Balancing lift and drag, speed and altitude, yaw and wind are all important parts of a safe, comfortable flight. However, a huge amount of technology also operates and manages the individual aircraft components. A pilot who flew the aircraft without knowing what the technology does could disrupt automated flight operations. Similarly, if the technology were not leveraged specifically for the aircraft flight requirements, flight operations could become more difficult. To ensure an efficient and comfortable flight, an adept pilot must understand both the high-level operation of the aircraft and the underlying technology that operates the components.

Considering the operation of the database technology within an SAP application, administrators and pilots have similar skill requirements. Operating SAP applications without considering the optimizations within the database technology can cause inefficiencies, and configuring the database without considering the unique SAP application workload characteristics can produce unstable, sub-optimal performance results. Adept SAP administrators must understand how to best leverage the database technology specifically for the workloads of their SAP systems. Traditionally, this is where administrative consoles have come up short. Database administration consoles were too generic to focus on application-specific requirements, and application administration consoles were not specific enough to fully leverage the database. SAP and IBM took huge steps to bridge this gap, though, with the development of the SAP DBA Cockpit for DB2. The result is a complete graphical interface for monitoring and administering the database, all within a single transaction in the SAP application.

Administrators can now easily access all of the database key performance indicators (KPIs) and make changes to improve system performance from within the same dialog screens. The most important information for SAP administrators is now at their fingertips, and the database administrative tasks can often be executed with a few simple mouse clicks. This single DBA Cockpit interface simplifies monitoring and maintenance tasks, and can reduce the overall time spent on database administration.

The DBA Cockpit contains two main sections: a large detailed display on the right, and a small navigation menu on the left. Figure 1.1 shows the System Configuration screen, which is the initial dialog screen displayed by running the DBACOCKPIT transaction. This can also be displayed at any time by clicking the System Configuration button, just above the left navigation menu.

Figure 1.1: The SAP DBA Cockpit for DB2 has a large display area on the right and a small
navigation menu on the left.

The right display window contains a list of all the database systems that are configured for monitoring from the DBA Cockpit. The left navigation menu contains the following folders for navigating into database function groups:

• Performance—Display performance statistics for monitoring database memory, disk I/O, application resource usage, query execution, and more.

• Space—Review historical and real-time storage usage for table spaces, containers, and individual tables, and perform administrative functions to alter the logical and physical storage layout of the SAP database.

• Backup and Recovery Operations—Review historical backup and log archival information, and real-time log file system statistics.

• Database Configuration—Display and update database configuration parameters, configure partition groups and buffer pools, and adjust monitoring and automatic maintenance settings.

• Job Scheduling—Create, schedule, and monitor periodic jobs from a planning calendar.

• Alert Monitoring—View key database health alert statuses and messages, and enable notification for database alert threshold violations.

• Diagnostic Functions—View and filter messages from the database diagnostic logs, view optimizer access plans and recommended indexes for SQL statements, run SQL commands, view DB2 online help, and more.

• Workload Management—Set up, maintain, and monitor the different workloads and service classes configured for the SAP system in DB2's Workload Management.

• BW Administration—Change data distribution and analyze Multi-Dimensional Clustering in partitioned SAP NetWeaver BW databases.

The left navigation frame of SAP Enhancement Package 1 for SAP NetWeaver 7.0 contains two additional screens. The first entry links the user directly into the DB2 LUW main web page in the SAP Developers Network (SDN), allowing the user to browse the SDN from directly within the SAP GUI. The other screen launches the new web browser-based DBA Cockpit. Several of the new features of the DBA Cockpit are now launched as WebDynpro browser applications. When one of these is clicked in the SAP GUI-based DBA Cockpit, the corresponding WebDynpro screen will automatically launch in the browser. The Start WebDynpro GUI menu entry launches the main page of the web browser-based DBA Cockpit, similar to the DBACOCKPIT transaction in the SAP GUI.

The contents of the left navigation menu may differ slightly among different versions of SAP BASIS, in order to leverage new functionality available in the latest releases of SAP and DB2. This book illustrates the latest features available in the DBA Cockpit in SAP Enhancement Package 1 for SAP NetWeaver 7.0.

Central Monitoring of Remote Systems


The DBA Cockpit allows administrators to configure connections to every SAP
system from a single DBA Cockpit session. A Solution Manager instance or a
standalone SAP NetWeaver instance can be installed for administrators to use for
central monitoring and administration. You should keep this SAP system at the
most current SAP release level, to maximize backward compatibility and make
the most advanced DBA Cockpit features available for all systems.

Remote connections can be established using the database information from the System Landscape Directory (SLD). Alternatively, they can be configured manually from within the DBA Cockpit, using the DB Connections button at the top of the left navigation menu. From the System Configuration screen, simply click the SLD System Import button. This provides a graphical interface to select and register the unregistered SAP systems into the cockpit. This allows the entire SAP system landscape to be centrally managed in the SLD, and provides a simple way to register any new or changed systems in your central DBA Cockpit.

Alternatively, click the Add button to manually register new databases into the
cockpit. This allows administrators to register even non-SAP systems. Therefore,
the DBA Cockpit can provide a single administrative GUI for every SAP and
non-SAP database in your IT landscape.

Summary
The SAP DBA Cockpit for DB2 is a powerful interface for SAP pilots to centrally manage the DB2 database operations of their SAP systems. It provides a single point of administration for every DB2 database in your organization. The SAP DBA Cockpit for DB2 gives administrators fast and easy access to all of the most important DB2 database information, all from within the familiar look and feel of SAP GUI.

Chapter 2

Performance Monitoring
Are You Flying a Glider or a Jet?

The DBA Cockpit performance monitors provide
a simple interface to easily access all of the key
performance data for the DB2 database.
By understanding the DBA Cockpit information
and integrating it with the other performance data
available within SAP, administrators can more
effectively optimize the performance of their SAP systems.

Performance tuning can be a very complicated task, involving many different areas of the SAP application. The database is one of the key areas, and the SAP DBA Cockpit for DB2 can greatly reduce the effort of monitoring and tuning it. The DBACOCKPIT transaction efficiently organizes the database performance statistics into the following sections, containing easily accessible screens and tabs for important, related information:

• Performance: Partition Overview
• Performance: Database Snapshot
• Performance: Schemas
• Performance: Buffer Pool Snapshot
• Performance: Tablespace Snapshot

• Performance: Table Snapshot
• Performance: Application Snapshot
• Performance: SQL Cache Snapshot
• Performance: Lock Waits and Deadlocks
• Performance: Active Inplace Table Reorganizations
• Performance: History—Database
• Performance: History—Tables

Everything needed by a database administrator is only a click or two away.

Performance: Partition Overview


Database Partitioning Feature (DPF) is one of the key DB2 features for improving the performance of SAP NetWeaver BW systems. DPF allows an SAP NetWeaver BW database to scale out incrementally on lower-cost hardware, or grow massive data warehouses across multiple, large servers. The goal of database partitioning is to divide the database workload evenly across multiple partitions, perhaps on different physical machines, so that long-running SQL statements can be "divided and conquered." If the workload is balanced evenly across all partitions, all then operate on an equal share of the data and process their intermediate result sets in about the same amount of time. This equal division of processing minimizes the overall response time and maximizes performance.

To access the partition overview, shown in Figure 2.1, click Performance → Partitions in the navigation frame of the DBA Cockpit. This displays the most important performance statistics for each active partition in the current SAP NetWeaver BW system. For each partition, this overview shows the total number and size of the buffer pools, key I/O read and write characteristics, SQL statement executions, and package cache statistics. Ideally, a well-balanced system will have similar values on each partition for all of these characteristics.

Probably the most important performance indicator is the buffer pool hit ratio. This can be calculated by comparing the number of logical and physical reads. Alternatively, it can be displayed by double-clicking one of the partitions to view the database snapshot data from that partition. On each partition, the index hit ratio should be about 98 percent, and the data hit ratio should be 95 to 98 percent.
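
Outside the cockpit, the same per-partition hit ratios can be estimated directly from the DB2 snapshot data. The query below is a minimal sketch assuming the SYSIBMADM.SNAPBP administrative view available in DB2 9.x; the columns are the standard snapshot monitor elements:

SELECT dbpartitionnum,
       -- hit ratio = 1 - (physical reads / logical reads)
       DEC(100 * (1 - FLOAT(SUM(pool_data_p_reads)) /
                      NULLIF(SUM(pool_data_l_reads), 0)), 5, 2)  AS data_hit_pct,
       DEC(100 * (1 - FLOAT(SUM(pool_index_p_reads)) /
                      NULLIF(SUM(pool_index_l_reads), 0)), 5, 2) AS index_hit_pct
  FROM sysibmadm.snapbp
 GROUP BY dbpartitionnum
 ORDER BY dbpartitionnum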

Figure 2.1: The performance characteristics of the DB2 database partitions are shown in
the Performance: Partition Overview screen.

Administrators should try to balance I/O as evenly as possible across all partitions in the system. The easiest way to achieve this is to distribute all large or heavily accessed tables across all partitions. However, for very large systems with a very high number of partitions, it might be impractical to distribute tables thinly across all partitions. In this case, heavily accessed tables can be balanced equally across subsets of partitions. For example, one heavily accessed InfoCube can reside on partitions 1 through 9, and another heavily accessed InfoCube can reside on partitions 10 through 19. The most important point is to try to keep database size and I/O activity as balanced as possible across all partitions, so that the database leverages the full processing capacity of all partitions equally.

Partitioned SAP NetWeaver BW databases have unique package cache requirements. Since all application servers connect to the Administration Partition (partition 0), all SAP basis function-related SQL statements will only be compiled and performed on partition 0. Therefore, the Administration Partition requires a bigger package cache than other data partitions. Package cache quality should be 95 to 98 percent on each partition.

Performance: Database Snapshot


The database performance dialog of the DBA Cockpit is the equivalent of running the ST04 transaction code. This screen, shown in Figure 2.2, contains tabs for each of the following key database performance indicators (KPIs):

• Buffer pool
• Cache
• Asynchronous I/O
• Direct I/O
• Real-time statistics
• Locks and deadlocks
• Logging
• Calls
• Sorts
• XML storage

By default, the database performance monitor displays database statistics since the last system reset. The system can be manually reset at any time by clicking the Reset button at the top of the screen. To the right of the Reset button, you will find a Since Reset button and a Since DBM Start button. These toggle the statistics between the values since the last reset, and the values since the start of the database manager (the DB2 instance).
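
For reference, clicking Reset in the cockpit corresponds roughly to resetting the DB2 snapshot monitor counters, which can also be done from a DB2 command line session (a sketch, not necessarily how the cockpit implements its baseline):

db2 reset monitor all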

Figure 2.2: This tab of the database snapshot dialog displays statistics about the buffer
pool.

The Buffer Pool


Disk I/O is relatively slow compared to other types of database operations. Therefore, if a database reduces disk I/O and performs most disk I/O operations in the background (asynchronous), performance generally improves. On the other hand, if an SQL statement is forced to wait for disk I/O (synchronous), performance generally declines. Administrators should strive for high buffer quality, fast physical I/O, and few synchronous reads. All of this information is available in the DBA Cockpit buffer pool statistics, shown in Figure 2.2.

High buffer quality is probably one of the most important criteria for performance. If an agent can find the pages it needs already in memory, I/O wait is reduced and response time improves. For peak performance, overall buffer quality for the entire database should be above 95 percent, with data hit ratios above 95 percent and index hit ratios above 98 percent. Hit ratios can be improved by increasing buffer pool size, compressing the database, improving cluster ratios for SAP NetWeaver BW, or by optimizing buffer pool allocation, which can be done automatically by the DB2 Self Tuning Memory Manager (STMM).

Buffer pool hit ratios depend on the ratio of logical and physical reads. Each request for a page of table or index data is referred to as a logical read. In a well-tuned system, the majority of logical read requests will be satisfied from the buffer pool, resulting in buffer pool hits. If a page is not in the buffer pool, a buffer pool miss occurs, and the page must be read from disk, which is called a physical read. The buffer pool quality is the ratio of the number of page requests found in the buffer pool to the total number of logical read requests.
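
For example, if applications issue 1,000,000 logical read requests and 40,000 of those pages must be read physically from disk, the buffer pool quality is (1,000,000 − 40,000) / 1,000,000 = 96 percent.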

Physical reads and writes are unavoidable, because new transactions are always reading and writing new data to the database. However, a properly configured database will perform most disk I/O asynchronously and in parallel, thereby minimizing the I/O wait experienced by the client and maintaining high buffer quality. Physical reads and writes can either be synchronous or asynchronous, depending on which DB2 agent (process or thread) performs the I/O operation. Synchronous I/O is performed directly by the database agent working on behalf of the client connection, and asynchronous I/O is performed by the DB2 prefetchers and page cleaners. The statistic labeled "Average Time for the Physical Reads and Physical Writes" on the DBA Cockpit indicates the I/O subsystem performance. An average physical read time above 10 ms and/or an average physical write time above 5 ms indicates an I/O subsystem that is not performing optimally.

Asynchronous reads are performed in the background by the DB2 prefetchers, which anticipate the needs of the applications, and load, from disk into buffer pools, the pages that are likely to be required. In most cases, the prefetchers read these pages just before they are needed. For example, during a full table select, the prefetchers will populate the buffer pool with all of the pages containing data for that table, so that when the agent tries to access that data, it is already available in memory.

Synchronous reads occur when an agent reads a page of data from disk itself,
rather than signaling the prefetchers to read the page asynchronously. This occurs
most frequently during random requests for single pages, which are common in
OLTP applications operating on single rows of data via an index. However, this
may also occur if the prefetchers are all busy with other prefetch requests.

Each synchronous read request results in I/O wait at the client, because the agent
processing the SQL statement must directly perform a read from disk before it
can continue query processing. For single-row access, it is just as efficient for the
agent to read the single page itself. However, for prefetch requests involving
multiple pages, it is far more efficient to have the prefetchers read these pages in
the background.

A properly configured system performs most read operations asynchronously and minimizes overall system I/O wait. If a large percentage of read operations are synchronous, it might indicate that the prefetchers are not doing their job effectively. This might be due to slow disks or an inefficient database layout, or the system might just require more prefetchers to satisfy the database workload.

The physical writes statistic specifies the number of pages written from the buffer pool to disk. Similar to a read, a write can be either synchronous or asynchronous, depending on the agent that performs it. Asynchronous writes are performed in the background by the DB2 page cleaners at specific checkpoints. These are far more efficient than synchronous writes, which are performed directly by the DB2 agents to make room in the buffer pool for new data pages being accessed by that agent.

DB2 can perform page cleaning in two different ways: Standard Page Cleaning or Proactive Page Cleaning. By default, all new SAP installations use Standard Page Cleaning.

Standard Page Cleaning


Using Standard Page Cleaning, page cleaners will asynchronously write data to
disk whenever one of the following occurs:

• CHNGPGS_THRESH is exceeded.—The database configuration parameter CHNGPGS_THRESH specifies the maximum percentage of changed pages allowed within a DB2 buffer pool. Once a buffer pool reaches this percentage of changed pages, the DB2 page cleaners are signaled to write those changed pages to disk in the table space containers. This parameter is set to 40 percent by SAPinst. To find it in the cockpit, click Configuration → Database → Optimization.

• SOFTMAX is exceeded.—The database configuration parameter SOFTMAX specifies the maximum total size of changed pages in the buffer pool that have not yet been written to disk. You can find this parameter in the cockpit by clicking Configuration → Database → Logging. It is specified as a percentage of one log file in size, and is set to 300 by SAPinst. This means that the buffer pool can contain a maximum of three log files' worth of changes (300 percent of one log file). Once this parameter is exceeded, the database enters a log sequence number (LSN) gap situation, and the page cleaners are signaled to begin writing those changed pages from the buffer pool to disk in the table space containers.

Whenever either of these two thresholds is exceeded, the DB2 page cleaners begin writing changed pages from the buffer pool(s) to disk. This avoids LSN gap situations, and ensures that there is room in the buffer pool for future prefetch requests.
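
Both thresholds can also be inspected and changed from a DB2 command line on a UNIX-style shell. A sketch using the SAPinst defaults mentioned above, with <SID> standing in for your database name:

db2 get db cfg for <SID> | grep -iE "chngpgs_thresh|softmax"
db2 update db cfg for <SID> using CHNGPGS_THRESH 40 SOFTMAX 300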

Proactive Page Cleaning


DB2 also has another method of page cleaning, Proactive Page Cleaning, which
is not currently used by default by SAP. Performance testing has indicated that
Standard Page Cleaning currently performs marginally better for most SAP
workloads. However, for OLTP systems with very update-intensive workloads,
performance might improve slightly by enabling Proactive Page Cleaning in the
DB2 profile registry:

db2set DB2_USE_ALTERNATE_PAGE_CLEANING=ON

Using Proactive Page Cleaning, the page cleaners no longer respond to the CHNGPGS_THRESH parameter. Rather than keeping a percentage of the buffer pool clean, this alternate method only uses SOFTMAX, and DB2 keeps track of good victim pages and their locations in the buffer pool. Good victim pages include those that have been recently written to disk and are unlikely to be read again soon. If either an LSN gap occurs, or the number of good victim pages drops below an acceptable threshold, the page cleaners are triggered. They proceed to search the buffer pool, write out pages, and keep track of these new good victim pages. The page cleaners will not only write out pages in an LSN gap situation, but will also write pages that are likely to enter an LSN gap situation soon, based on the current level of activity.

When the database agents need to read new data into the buffer pool, the
prefetchers read the list of good victim pages, rather than searching through the
buffer pool for victims. This tends to spread writes more evenly, by writing
smaller amounts more frequently. By spreading the page cleaner write operations
over a greater period of time, and avoiding buffer pool searches for victim pages,
high-update workloads might see performance improvements.

Since most SAP workloads on DB2 9.5 have been found to perform marginally better using Standard Page Cleaning, we recommend using it for all SAP applications. Future changes to Proactive Page Cleaning might increase its usage within SAP. For now, though, if you have a uniquely heavy-update workload that you think might benefit from Proactive Page Cleaning, test the change thoroughly to determine the effect on performance before enabling it in the production system.

The No Victim Buffers element in the DBA Cockpit can help evaluate whether you have enough page cleaners when using Proactive Page Cleaning. This element displays the number of times a database agent was unable to find pre-selected victim pages in the buffer pool during a prefetch request, and instead needed to search through the buffer pool for suitable victim pages. If this element is high relative to the number of logical reads, the database page cleaners are not keeping up with the changes occurring in the database, and more page cleaners are likely required.

If Proactive Page Cleaning is off, and you are using Standard Page Cleaning, the No Victim Buffers monitor element can be safely ignored. In the default configuration, Standard Page Cleaning is triggered by CHNGPGS_THRESH and SOFTMAX, and the prefetchers will usually search the buffer pool to find suitable victim pages. Therefore, you can expect this monitor element to be large.

Synchronous Writes
If the database must read data from disk into a buffer pool, and there are no free pages remaining in the buffer pool, DB2 must make room by replacing existing data pages (victims) with the data pages being read. If these victim buffer pool pages contain changed data, these pages must be written to disk before they are swapped out of memory. In this case, the pages are written to disk synchronously by the DB2 agent processing the SQL statement.

Synchronous writes always result in I/O wait at the client, because the write
operation must occur synchronously, before the buffer pool page can be
victimized (replaced with a new page from disk). A large percentage of
synchronous write operations indicates that the DB2 page cleaners are not
operating effectively. This might be due to slow disks or unbalanced I/O in the
storage system, or the system might require more page cleaners to handle the
system workload.

Temporary Table Space I/O


The DBA Cockpit also contains I/O characteristics for the temporary table spaces, displaying the temporary logical and physical reads for both data and indexes. The logical reads display the total number of read requests for temporary table space data. The physical reads display the number of read requests that were not satisfied from the buffer pool, and therefore had to be read physically from disk.

For most transactional systems, temporary table space I/O should be fairly low,
since most calculations should be performed in memory. SAP NetWeaver BW
systems might show larger temporary table space I/O, but large values here might
still indicate inefficient queries or a need to create higher-level aggregates to
improve query performance.

The Catalog Cache and Package Cache


The second tab in the DBA Cockpit database performance monitor is the Cache
tab, shown in Figure 2.3. This tab displays the details for the database catalog
cache and the package cache.

Figure 2.3: The Cache tab displays the Catalog Cache and Package Cache statistics.

The Catalog Cache


The catalog cache is a portion of database memory that is dedicated to caching access to table descriptor and authorization information from the database system catalog tables. These table descriptors include the table information used by DB2 during query optimization. When this data is accessed, it is first read from disk into the catalog cache, and then the database agents requesting this data read it from memory. Therefore, high hit ratios on this buffer are important for performance. If the most frequently accessed system catalog details can be cached in memory, unnecessary disk reads can be avoided.

A high catalog cache hit ratio is even more important in multi-partition SAP
NetWeaver BW systems. In a partitioned SAP NetWeaver BW system, the
system catalog tables all reside on the Administration Partition (partition 0).
Therefore, if other partitions need to read system catalog information from disk,
they must request this information from partition 0, which inserts into the catalog
cache on partition 0, and then sends the information to the catalog cache on the
other partition. Caching most of the system catalog information at each partition
avoids both disk I/O and network I/O, and reduces the workload on the
Administration Partition. All of these contribute to better performance.

The default catalog cache size in new SAP installations is 2,560 4KB pages. Well-configured systems should have a hit ratio of 98 percent and experience no overflows. If overflows occur, DB2 must allocate more memory from database shared memory into the catalog cache. Then, when some table descriptor and authorization information is no longer needed for active transactions, it is removed from memory, and the cache is reduced to its configured size. This involves extra overhead in the system, and should be avoided by increasing the catalog cache size.

The total number of overflows and the high-water mark can be used together with the cache quality to determine whether or not the default size is adequate for your workload. The catalog cache size is set by the CATALOGCACHE_SZ database configuration parameter. To view or change this parameter in the DBA Cockpit, click Configuration → Database → Database Memory.
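
The overflow count and hit ratio can also be read outside the cockpit. A minimal sketch against the SYSIBMADM.SNAPDB administrative view of DB2 9.x:

SELECT cat_cache_lookups,
       cat_cache_inserts,
       cat_cache_overflows,
       -- quality = 1 - (inserts / lookups)
       DEC(100 * (1 - FLOAT(cat_cache_inserts) /
                      NULLIF(cat_cache_lookups, 0)), 5, 2) AS cat_cache_hit_pct
  FROM sysibmadm.snapdb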

The Package Cache


The package cache is another important area of database memory. It is dedicated to caching compiled static and dynamic SQL statements and optimizer access plans. When a new dynamic SQL statement is executed, the DB2 optimizer compiles it, computes an access plan for reading the data pages required to satisfy the query, and then caches this information in the package cache. The database agents executing SQL statements then read this access plan from memory. If the same query is executed multiple times, the access plan can be read from memory, which avoids repeating the compilation and optimization phase of query processing.

Static SQL statements are embedded in application programs. These statements must be precompiled and bound into a package, which gets stored in the DB2 system catalog tables. SAP does not use static SQL, so this will not be discussed further.

By default, the package cache size in new SAP installations is dynamically configured and adjusted by DB2, as part of its Self Tuning Memory Manager (STMM) feature. This allows DB2 to adjust the size of this cache to optimize overall performance, based on your changing workload. The package cache hit ratio should remain above 98 percent, and overflows should not occur. The package cache size is set by the PCKCACHESZ database configuration parameter. To view or change the package cache size in the DBA Cockpit, click Configuration → Database → Self-Tuning Memory Manager.
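
If the parameter has been set to a fixed value and you want to hand it back to STMM, it can be returned to AUTOMATIC from the command line as well (a sketch; <SID> is a placeholder for your database name):

db2 update db cfg for <SID> using PCKCACHESZ AUTOMATIC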

Larger catalog and package cache sizes might be required if the workload involves a large number of SQL statements accessing many different database objects. However, in most cases, it is recommended that you keep the package cache size set to AUTOMATIC, and let DB2 STMM configure the size based on your current available memory and optimal overall system performance.

Asynchronous I/O
The third tab in the Database Performance Monitor is Asynchronous I/O, shown in Figure 2.4. This displays information on the I/O reads and writes that use background read and write operations to perform disk I/O to and from the DB2 buffer pools, using the DB2 prefetchers and page cleaners. Asynchronous I/O operations anticipate application I/O requirements, and operate in the background to minimize I/O wait. Therefore, well-performing systems should perform the majority of disk I/O asynchronously.

Asynchronous I/O is performed by the DB2 prefetchers and page cleaners. The number of prefetchers and page cleaners should be configured to drive the physical disks in the underlying storage system to full capacity. This is set by two database configuration parameters: NUM_IOSERVERS for prefetchers and NUM_IOCLEANERS for page cleaners. Both are found in the cockpit under Configuration → Database I/O.

New SAP installations default both of these parameters to AUTOMATIC. This allows DB2 to calculate the optimal number of prefetchers and page cleaners, when the database is activated, based on the following formulae:

NUM_IOSERVERS  = MAX( MAX over all table spaces
                      ( parallelism setting × MAX # of containers in a stripe set ), 3 )

NUM_IOCLEANERS = MAX( CEIL( # CPUs / # local logical partitions ) − 1, 1 )

The parallelism setting for prefetchers refers to the DB2_PARALLEL_IO registry variable, which tells DB2 the number of physical disks assembled into the containers in each table space. This ensures that the number of prefetchers is always greater than or equal to the number of disks available to any one table space, which enables asynchronous prefetch requests to drive every available disk in parallel.
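
For example, assuming every table space container resides on a six-disk array, the registry variable could be set as follows (a sketch; the correct value depends on your actual storage layout):

db2set DB2_PARALLEL_IO=*:6

Here, the asterisk applies the setting to all table spaces, and 6 is the assumed number of physical disks behind each container.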

The formula for page cleaners ensures that they are evenly distributed across all
partitions in a partitioned SAP NetWeaver BW system, and that there are never
more page cleaners than CPUs. This prevents asynchronous page cleaning from
affecting normal transaction processing performance. Ideally, both asynchronous
read and write times should be less than 5 ms.

Figure 2.4: The Asynchronous I/O tab shows statistics for background disk I/O performed
by the DB2 prefetchers and page cleaners.

Direct I/O
Direct I/O is involved whenever a DB2 agent reads from disk or writes to disk without using the DB2 buffer pools. Direct I/O is performed in units, the smallest being a 512-byte disk sector. Direct reads always occur when the database reads LONG or LOB data, and when a database backup is performed. Direct writes always occur when LONG or LOB data is written to disk, and when database restore and load operations are performed.

The Direct I/O tab of the DBA Cockpit screen is shown in Figure 2.5. Direct I/O
should be extremely fast, because it operates on entire disk sectors. Therefore,
read and write times should generally be under 2ms. The average I/O per request
should be proportional to the average size of the LOB columns in the database.

Figure 2.5: The Direct I/O tab displays statistics for database disk I/O that is
not buffered in memory by the DB2 buffer pools.

Real-Time Statistics (RTS)


The concept of Real-Time Statistics (RTS) was first introduced in DB2 9.5. SAP Enhancement Package 1 for SAP NetWeaver 7.0 now contains a performance monitoring screen for this new DB2 feature. RTS allows DB2 to trigger either statistics collection or estimation during query compilation, if table statistics are either absent or stale. If statistics collection would exceed 5 seconds, it is done in the background. Otherwise, it may even be done synchronously during query compilation, depending on the cost of the query relative to the cost of the statistics collection. This feature ensures that recent statistics are always available for queries, and that performance is never excessively bad due to stale statistics.
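
RTS is switched through the automatic maintenance hierarchy of the database configuration. Assuming the DB2 9.5 parameter name AUTO_STMT_STATS (whose parent parameters AUTO_MAINT, AUTO_TBL_MAINT, and AUTO_RUNSTATS must also be ON), it can be checked and enabled roughly like this, with <SID> as a placeholder:

db2 get db cfg for <SID> | grep -i auto_stmt_stats
db2 update db cfg for <SID> using AUTO_STMT_STATS ON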

The information available in the DBA Cockpit, shown in Figure 2.6, is valuable
for determining the performance impact of RTS. It might suggest the need for
more structured statistics collection for some tables in the system.

Figure 2.6: The Real-Time Statistics tab shows details related to RTS statistics
collection.

The statistics cache is a portion of the catalog cache used to store real-time statistics information. If RTS is being frequently triggered, a larger catalog cache might be required.

Asynchronously collected statistics occur when synchronous statistics collection during query compilation would take longer than 5 seconds. Rather than consuming this time synchronously during query compilation, statistics collection is instead started as a background job, so that subsequent queries will benefit from newer statistics.

Synchronous statistics collection occurs when a RUNSTATS is triggered to collect statistics during query compilation. This RUNSTATS may or may not be sampled, depending on the RUNSTATS profile for the table and the time estimate for statistics collection. The end user might experience a maximum of 5 seconds extra time running this query, due to the synchronous RUNSTATS. The number of synchronous RUNSTATS occurrences and the total time consumed by those occurrences are displayed in the cockpit.

The final piece of data for RTS is based on statistics fabrication (or statistics estimation). If a sampled RUNSTATS table or index scan consumes too much time, then new metadata stored in the data and index manager system catalog tables is used to estimate the current table statistics. Those statistics are immediately made available in memory for all other queries to use until a RUNSTATS is performed on the table. In the cockpit, statistics estimation is displayed by the number of statistics collections during query compilation, and the time spent during query compilation.

Locks and Deadlocks


Whenever table records are accessed, DB2 places locks on those records to maintain transaction integrity and ensure that two transactions cannot update the same data at the same time. The type of lock used by DB2 depends on the isolation level defined for the application accessing those records. Traditional DB2 locking involves the following isolation levels, ordered by increasingly restrictive locking:

• UR (Uncommitted Read)—Read operations do not acquire any locks. Uncommitted updates of other transactions can be read immediately.

• CS (Cursor Stability)—Read-only locks are placed on the current record being accessed by a cursor. If that record contains an uncommitted update, the read of that row must wait until that update is committed. This ensures that the application cannot read uncommitted data, and that the current position of the cursor cannot be changed while the application is accessing it.

• RS (Read Stability)—Read-only locks are placed on the entire result set retrieved within a unit of work, and those locks are held until the unit of work is committed or rolled back. This ensures that any row read during a unit of work (UOW) remains unchanged until the UOW commits, and that the application cannot read uncommitted changes from other transactions.

• RR (Repeatable Read)—Read-only locks are placed on all records referenced during the processing of the current UOW. This includes all rows in the result set, plus any rows evaluated and excluded due to WHERE clause restrictions in the query. This ensures that new rows do not appear in the result set, existing rows remain unchanged, and uncommitted updates from other transactions cannot be read.

The default isolation level for most SAP applications is Uncommitted Read, which allows the highest level of concurrency within the database. SAP transaction integrity is managed within the SAP application. One SAP transaction may involve multiple database transactions, each of which is committed into the SAP update tables. While one SAP transaction updates data in the update tables, other SAP transactions are reading committed data from the tables containing the permanent, committed data. Therefore, concurrent SAP transactions always read committed data. When an SAP transaction is finally committed, those update table records are applied to the target database tables by the SAP update work processes, and other transactions then see the committed changes from the entire SAP transaction.
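
At the SQL level, DB2 accepts the isolation level as a statement-level clause. A minimal illustration of an Uncommitted Read query (the table and columns here are examples only):

SELECT mandt, vbeln FROM vbak WITH UR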

One potential exception to the UR default isolation level occurs when accessing
cluster or pool tables. Since reading a single logical row may involve reading
multiple physical rows, more restrictive locking might be required. SAP first tries
to read the logical row with UR. If this does not produce a consistent read of all
physical rows, SAP will read again, first trying CS, and if necessary, finally
reading with RS, which will guarantee read consistency for all physical rows in
the logical record. However, inconsistent reads on logical rows using UR rarely
occur, and most cluster/pool table reads succeed the first time with UR.

Database locks are stored in a portion of database memory called the lock list.
When row locks are acquired, they are added to this lock list. If the size of the
row locks exceeds the size of the lock list, DB2 will convert multiple row locks
on a single table into a single table lock. This lock escalation frees up space in
the lock list for other row locks. However, it can also reduce concurrent access to
the table involved in the escalation. At best, this might reduce performance for
applications accessing that table; at worst, it might result in increased lock waits
or deadlock scenarios in other concurrently running applications.

Normal lock escalations allow read access to the locked tables, but force writes to
wait for the application holding the lock to commit. Exclusive lock escalations
also disallow reads, thereby reducing concurrency even further. Therefore,
administrators should try to completely avoid lock escalations, by ensuring that
the lock list is large enough to contain the locks for the concurrent activity in the
SAP system.

The size of the lock list is set by the LOCKLIST database configuration parameter,
which can be found in the cockpit under Configuration → Database →
Self-Tuning Memory Manager. Lock list utilization can be calculated using the
lock_list_in_use monitor element and the lock list size. If utilization is high, con-
sider increasing the lock list size. These details can be easily found within the
Locks and Deadlocks section of the SAP DBA Cockpit for DB2, which is shown
in Figure 2.7.
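
Outside the cockpit, the same figure can be derived manually. The following is a
minimal sketch, assuming command-line access and the SYSIBMADM administrative
views available in DB2 9 and later; LOCK_LIST_IN_USE is reported in bytes, while
LOCKLIST is configured in 4KB pages:

db2 "SELECT LOCK_LIST_IN_USE FROM SYSIBMADM.SNAPDB"
db2 get db cfg for <DBSID> | grep -i LOCKLIST

Utilization, as a percentage, is then lock_list_in_use / (LOCKLIST × 4096) × 100.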

By default, SAPinst enables DB2 STMM. Therefore, LOCKLIST is set to
AUTOMATIC, allowing DB2 to dynamically adjust the size of the lock list to avoid


lock escalations and optimize overall system performance. Normally, lock esca-
lation is extremely rare for databases with a properly configured lock list or for
databases using STMM.

Figure 2.7: The Locks and Deadlocks tab displays information on lock management and
deadlock occurrences.

Another parameter automatically tuned by STMM is MAXLOCKS. This specifies
the maximum percentage of the lock list that can be consumed by a single appli-
cation before lock escalation will occur for locks held by that application. Using
STMM, DB2 can automatically adjust this percentage, depending on the number
of concurrent transactions and the number of locks held by each concurrent unit
of work.

If there is only one active transaction, DB2 will adjust this to a large percentage.
However, if many applications are holding locks, this percentage might need to
be lower to avoid a scenario where one application consumes most of the lock
list, while the others quickly run out of space in the lock list and are forced to es-
calate. Properly configuring the LOCKLIST and MAXLOCKS parameters or using
STMM will prevent lock escalations.
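
If these parameters are not already under STMM control, they can be switched to
automatic tuning. A sketch, assuming the database name is the usual <DBSID>:

db2 update db cfg for <DBSID> using LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC

Note that self-tuning of these parameters also requires the SELF_TUNING_MEM
database configuration parameter to be ON, which is the SAP default.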


If lock escalations are occurring, then abnormally large values can be expected for
the lock wait monitor elements, too. If lock escalations occur, other applications ac-
cessing that same table must wait for the escalating application to commit. In addi-
tion, if more applications are waiting for table locks to be released, there is a
greater possibility that one of these waiting applications will already be holding a
lock that will be requested by the escalating application. This would result in a
deadlock, with each application waiting for locks already held by the other.

Large lock wait values without lock escalations or deadlocks might indicate that
custom applications are not efficiently committing their units of work. Custom
applications should try to hold locks for as little time as possible, by performing
efficient SQL statements and accessing only required records, and by performing
related updates together, followed immediately by committing the unit of work.
Infrequent commits can hold locks excessively long, and increase lock wait
scenarios.

A lock timeout occurs when an application waits to acquire a lock for longer than
the LOCKTIMEOUT database configuration parameter, which is set to 3,600 sec-
onds (1 hour). This default value is much longer than any application should be
required to wait for locks. If a lock timeout occurs, an application has probably hung
in the middle of a unit of work, and is holding locks abnormally. In this scenario,
an administrator will likely need to identify the hung database agent, and manu-
ally terminate that application using a command:

db2 force application (appl_handle)

This will cause a rollback and release the locks currently held by that application.
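
The application handle itself can be identified either from the cockpit's
application snapshot or, as a sketch from the command line, with:

db2 list applications show detail

This lists each connected application with its handle and status, so the hung
agent can be confirmed before it is forced.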

Logging
The transactional log files of the database maintain database integrity by contain-
ing a physical copy, on disk, of all committed database transactions. When data is
updated, the changes are made directly in the DB2 buffer pool, and logged in the
DB2 log buffer. When a transaction commits, each entry in the log buffer must
be successfully written from the log buffer to the log files before the commit


returns successfully to the client. Since writes to the log files occur synchro-
nously with each commit, fast SAP dialog response times depend on fast writes
to the DB2 transactional log files.

DB2 contains two kinds of log files: primary and secondary. The number and
size of these log files are set with the LOGPRIMARY, LOGSECOND, and LOGFILSIZ
database configuration parameters. Primary log files are pre-allocated when the
database is created. Secondary log files are allocated on demand, whenever ac-
tive transactions exceed the total size of the primary log files. Therefore, the total
size allocated to primary log files should be large enough to hold all the log re-
cords expected from concurrent transactions during normal database activity.
Secondary log files should only be required for infrequent spikes in activity,
which may require additional log space.
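
As a sketch of how these parameters can be reviewed and adjusted from the
command line on UNIX-like systems (the values shown are illustrative only, not
sizing recommendations; LOGFILSIZ is given in 4KB pages):

db2 get db cfg for <DBSID> | grep -iE "LOGPRIMARY|LOGSECOND|LOGFILSIZ"
db2 update db cfg for <DBSID> using LOGPRIMARY 60 LOGSECOND 40 LOGFILSIZ 16380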

Logging can be configured for either circular or archive logging. Circular log-
ging reuses primary log files once they no longer contain log records required for
crash recovery, which means that point-in-time recovery is not possible with cir-
cular logging. Therefore, circular logging is not suitable for production systems.

Production systems require archive logging, which ensures that all log files pro-
duced during the entire lifetime of the database are saved, and that point-in-time
recovery is always possible. When a primary log file becomes full, it is archived
(copied) by DB2 to the locations set in the LOGARCHMETH1 and LOGARCHMETH2
database configuration parameters. Once the log file is no longer needed for
crash recovery, it is renamed to the next log file sequence number, and its header
is re-initialized for re-use. During normal workloads in properly configured sys-
tems, the next empty primary log file usually already exists when the current log
file becomes full, and a transaction spanning multiple log files rarely incurs the
overhead of allocating the next log file.
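
A minimal sketch of enabling archive logging to a disk location (the path is
hypothetical):

db2 update db cfg for <DBSID> using LOGARCHMETH1 DISK:/db2/<DBSID>/log_archive

In practice, LOGARCHMETH1 often points to a backup tool instead (for example, a
TSM or VENDOR destination), and switching from circular to archive logging puts
the database into backup-pending state, requiring an offline backup before use.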

The Logging tab, shown in Figure 2.8, displays the number and size of log files
available and allocated in the system. If the database is using secondary logs, you
can see the number currently allocated, and the maximum secondary log file
space used by the database.


Figure 2.8: The Logging tab displays information on log file consumption and logging I/O.

This information can help determine if the primary log space is adequate for your
current workload. In general, we recommend that the log file system should be
1.5 times the size of all primary and secondary log files configured for your sys-
tem. This ensures enough space for all configured log files, plus extra space for
inactive (online archive) logs waiting to be archived, or new logs being formatted
for future use.
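
As a worked example under assumed values: 60 primary plus 40 secondary log
files of 16,380 4KB pages each occupy roughly 100 × 64MB = 6.4GB, so the log
file system should be sized at about 1.5 × 6.4GB ≈ 9.6GB.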

If secondary log space is being used consistently, logging overhead may be re-
duced by allocating more primary log space. This is done by either increasing the
number of primary log files or increasing the log file size. First, always ensure
that the log file system is large enough to contain all of the configured logs. The
cockpit also displays the database application ID with the oldest uncommitted
transaction. This can help identify long-running transactions that might need
attention.

The Log Buffer Consumption section is valuable for determining the effective-
ness of page cleaning. The LSN Gap value specifies the percentage of the
SOFTMAX checkpoint that is currently consumed in log files by dirty pages. This


includes pages that have been changed in a buffer pool by both committed and
uncommitted transactions, but which have not yet been written to disk in the ta-
ble spaces. If this is above 100 percent, the page cleaners are unable to keep up
with the transaction workload on the system, and more page cleaners might be re-
quired. The Restart Range value is similar, but corresponds to the percentage of
SOFTMAX occupied in the log files by committed transactions. Statements in this
Restart Range will need to be rolled forward during crash recovery. Again, if this
is greater than 100 percent, more page cleaners might be required.

The I/O characteristics of the log file system are also provided. The Log Pages
Read displays the physical log file page reads required during rollback operations
in the database, and the Log Pages Written displays the pages of transactional
data written into the log files. The transaction commit time depends on the log
file system’s write performance. Therefore, having the fastest log file system
possible minimizes dialog response time. A well-performing system should have
log file system write times below 2 ms.

Ideally, very few log buffer overflows should occur. This indicates the number of
times any database agent has waited for log buffer flushes in order to write into
the log buffer. These can occur when large transactions produce a series of log
records larger than the buffer, or when high transaction volumes consume the en-
tire buffer with many smaller log records simultaneously. When this occurs, all
in-flight transactions must wait for the log buffer to be written to disk before they
can continue writing log records into the buffer. This introduces I/O wait into all
in-flight transactions and hurts performance. For optimal performance, the log
buffer should be large enough to avoid overflows during normal workloads.

Calls
The Calls tab, shown in Figure 2.9, contains a summary of the different types of
SQL statements issued, and their performance impact on the SAP system. This
displays the number of rows read, deleted, inserted, selected, and updated. These
can be compared to the number of DML and DDL statements executed and their
execution time, to understand the average number of rows read per SQL state-
ment, and the time spent processing those statements within the database.


Figure 2.9: The Calls tab displays how different types of SQL statements contribute to the
load on the database.

The Hash Joins section shows some interesting statistics on the hash join opera-
tions performed by the database. DB2 performs hash joins when large amounts of
data are joined by equality predicates on columns of the same data type (for ex-
ample, tab1.colA = tab2.colB). First, the inner table is scanned, and the relevant
rows are copied into memory and partitioned by a hash function. The hash func-
tion is then applied to the rows from the outer table, and the join predicates are
then only compared for inner and outer table rows hashing to the same partition.

If the hash join data exceeds sort heap memory, DB2 will consume temporary ta-
ble space on disk to compute the join. Obviously, performance will be better if
this can be avoided, and instead, the join can be done entirely within a sort heap.
If the total hash join data exceeds the sort heap by less than 10 percent, this
counts as a small overflow. If the number of small overflows is greater than 10
percent of the total overflows, avoiding these small overflows with a larger sort
heap may improve performance. If a single partition of data from the hashing
function (the set of rows hashing to the same value) is larger than the sort heap, a
hash loop results. When this occurs, the intermediate join of that one section of
data overflows to temporary table space, causing extra disk I/O for the join of in-
dividual hash partitions.


For performance reasons, always try to minimize the number of hash loops and
hash join overflows. With DB2 9.5, the sort heap memory parameters default to
automatic settings using the DB2 Self-Tuning Memory Manager. This allows
DB2 to automatically adjust the available sort heap memory to avoid unnecessary
hash join overflows or hash loops.

Sorts
The Sorts tab, shown in Figure 2.10, displays memory usage and overflows from
database sorts. The Sort Overflows value is probably the most important one on
this tab. Transactional systems should have less than one percent of total sorts over-
flowing from sort memory to temporary table space. BW systems may have more,
but overall, sort memory should be configured to avoid most sort overflows.

Figure 2.10: The Sorts tab shows the memory consumed by database sort operations.

The private and shared sort heap parameters can be compared with the current al-
located memory and high-water mark, to determine whether the sort memory
heaps are properly configured. DB2 9.5 defaults to automatic shared sort memory
and the Self-Tuning Memory Manager. This allows DB2 to manage sort memory
allocation based on overall system requirements, which avoids unnecessary sort
memory allocation and prevents most sort overflows.
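
If a system has been switched away from these defaults, sort memory can be
returned to self-tuning with a command like the following sketch; for STMM to
manage sort memory, both parameters must be AUTOMATIC, SELF_TUNING_MEM must
be ON, and the instance-level SHEAPTHRES should be 0:

db2 update db cfg for <DBSID> using SORTHEAP AUTOMATIC SHEAPTHRES_SHR AUTOMATIC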


XML Storage
The XML Storage tab provides I/O characteristics for XML Storage Objects
(XDA). This is only valid for database tables using the XML data type to lever-
age the DB2 PureXML features for storing and accessing XML documents na-
tively in XML format.

As of the writing of this book, SAP currently does not use DB2 PureXML fea-
tures. Therefore, this tab is really only valid for non-SAP databases cataloged
into the cockpit, or for user tables created manually by SAP customers.

Performance: Schemas
There should be very few schemas within an SAP database. The vast ma-
jority of database access is done through the SAP connection users, which default
to SAP<SAPSID> for ABAP systems, and SAP<SAPSID>DB for Java systems. The
only other users who generally connect are the SAP admin user, <SAPSID>ADM,
and the DB2 instance owner, DB2<DBSID>.

The Schemas dialog screen can be used to identify the activity of users
connecting to any database partition from outside the SAP application. I/O
performance characteristics of reads and writes can be monitored for each
schema.

Performance: Buffer Pool Snapshot


The default installation of SAP on DB2 creates all table spaces using 16KB
pages. By default, the only visible buffer pool is IBMDEFAULTBP, which is also
created with 16KB pages. If table spaces with other page sizes are created, then
buffer pools corresponding to each additional page size must be created, too. However,
SAP recommends keeping everything in your system at a uniform page size of
16KB. This simplifies configuration and avoids the additional complexity in-
volved when joining tables with different page sizes.
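
If a nondefault page size cannot be avoided, a matching buffer pool has to exist
first. A minimal sketch (the buffer pool name is hypothetical):

db2 "CREATE BUFFERPOOL BP8K SIZE AUTOMATIC PAGESIZE 8K"

With SIZE AUTOMATIC, the new pool participates in STMM tuning rather than
requiring a manually fixed size.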

The buffer pool snapshot provides the logical and physical read statistics for the
data, index, and temporary table spaces on all database partitions. If different


buffer pools have been created for different database objects, this provides an
easy interface to compare the individual statistics for each buffer pool on each
database partition.

The initial screen contains a list of all visible buffer pools created in the system,
along with an overview of their hit ratios and read characteristics.
Double-clicking on any buffer pool partition returns a more detailed buffer pool
snapshot for that particular buffer pool on that particular partition, as shown in
Figure 2.11. This displays the data and index read statistics, buffer quality, and
utilization state of the buffer pool. It also includes tabs showing the detailed
asynchronous and direct I/O operations, and performance characteristics for this
buffer pool. All of these details are important for proper performance tuning of
each individual buffer pool.

Figure 2.11: The Buffer Pool Snapshot displays detailed I/O information for an individual
buffer pool.


As a safety net, DB2 is also pre-configured with hidden buffer pools for each
possible page size (4K, 8K, 16K, and 32K). These hidden buffer pools ensure
that an appropriate buffer pool is always available. These hidden buffer pools
may be used if the system does not contain enough memory to allocate the de-
fined buffer pools, errors allocating the buffer pools occur during the database
activation, or if anything in the database performs I/O using a page size without a
corresponding user-defined buffer pool. Since these hidden buffer pools are only
16 pages in size, performance will likely suffer if they are used. An entry is
logged in the notification log whenever a hidden buffer pool is used.

Performance: Tablespace Snapshot


Evenly distributed I/O and fast access to the most frequently accessed data are
critical for performance. The performance statistics for each individual table
space can help administrators identify data hot spots or inadequate buffer pool
configurations. The Performance: Tablespace Snapshot screen, shown in Figure
2.12, displays the I/O statistics of each table space on each partition.

First, the most frequently accessed table spaces should have the highest buffer
pool hit ratios. Table spaces with a high number of logical reads should have a
buffer pool quality of at least 95 to 98 percent. The frequently accessed index ta-
ble spaces (with names ending in I) are especially critical for high hit ratios.

Next, the physical read and write times for all table spaces should be fairly fast.
Ideally, both read and write times should be under 5ms. If all table spaces have
slower I/O, you might simply have slow disks. However, this might also be a
sign of disk contention, especially if more frequently accessed table spaces are
slower than others. To improve performance, spread the data across a greater
number of physical disks, or move one or more frequently accessed tables to a
new table space on a new series of disks. The Tablespace Snapshot can be used,
together with the Operating System Monitor → Detailed analysis menu →
Disk Snapshot (from transaction ST06), to lay out table spaces and balance data-
base I/O evenly across all SAPDATA file systems.


Figure 2.12: The Tablespace Snapshot displays the I/O characteristics of all table spaces.

Similar to the previous buffer pool snapshot, double-clicking any row displays a
more detailed table space snapshot for the chosen table space and partition. This
snapshot shows the detailed buffer pool statistics, and the asynchronous and di-
rect I/O operations and performance characteristics.

Performance: Table Snapshot


Page reorganizations and overflow record accesses are two key performance indi-
cators for individual tables. Page reorganizations occur when an insert or update
is done to a data page that contains enough free space for the new data, but the
free space is fragmented within that page. Before the insert or update is per-
formed, the single page of data is reorganized to consolidate the free space at the
end of the page, and then the insert or update proceeds. This extra overhead can
hurt insert and update performance. However, if an update is being done, and a
page reorganization cannot reclaim enough contiguous space for the updated
row, the row must be moved to a new page. An overflow record (or pointer) is
then created to point from the original location to the new location on the other


page. When this row is accessed, DB2 must perform two I/O reads instead of
one: the first to read the pointer from the original location, and the second to read
the data from the pointer.

If a table contains a large number of page reorganizations or overflow accesses,
both of these problems can be fixed by reorganizing the table. Double-clicking
any table from the screen in Figure 2.13 loads the Single Table Analysis screen
(explained fully in Chapter 3). The table reorganization can be executed from
Single Table Analysis, or scheduled through the DBA Planning Calendar
(discussed in Chapter 4).

Figure 2.13: The Table Snapshot dialog displays data access characteristics of individual
tables.

Also, if table space analysis has indicated unbalanced I/O, the table snapshot can
be used to identify the most frequently accessed tables. If several heavily ac-
cessed tables reside in the same table space, I/O can be balanced by separating
these tables into different table spaces on different sets of physical disks.


Performance: Application Snapshot


The main dialog of the Performance: Application Snapshot screen displays a
summary list of all the database applications with active connections to the data-
base. This overview gives descriptions of the applications, their status, their
buffer quality, and the number of reads performed. Almost all of these will corre-
spond to SAP work processes.

Double-clicking any application in the initial list displays a detailed snapshot for
that single application. Shown in Figure 2.14, this snapshot displays all of the key
application statistics, organized conveniently into unique screen tabs.

Figure 2.14: The Application Snapshot contains many tabs for accessing detailed informa-
tion on the resource consumption of the database applications.

The first Application tab describes the application on the host, and displays the
client user and SAP application server executing this application. The Agents tab


describes the number of agents, processing time, and memory usage for this ap-
plication. Note that with DB2 9.5, the parameters for the number of agents in the
database default to automatic, and are dynamically maintained by DB2 to opti-
mize memory utilization and performance.

The Buffer Pool tab displays the application’s detailed data, index and temporary
table space read statistics, and buffer pool quality. The read statistics can indicate
the I/O efficiency of the queries in this application. The performance details of
the non-buffered I/O (e.g. LOB access, backup and restore) are shown in the
Direct I/O tab.

The Locks and Deadlocks, Calls, Sorts, and Cache tabs contain the same infor-
mation as the database performance tabs, except that the details are specific to the
currently selected application. If an application is holding too many locks, caus-
ing lock escalations, or involved in deadlocks, consider looking more closely at
the application coding and SQL. A properly coded application will hold as few
locks as possible, commit as frequently as possible so that locks are released
quickly, and avoid performing unnecessary calculations inside units of work. The SQL statements should
also try to reduce the amount of data accessed during a query, and only return the
rows of relevance for the application.

The Unit of Work tab displays the length of time and log space consumption of
the current transaction. The Statement tab shows the statistics of the current state-
ment within the current unit of work. The Statement Text tab displays the current
SQL statement being executed. This screen also contains buttons to load the
optimizer execution plan for the statement, or to view the ABAP source code for
the program executing this SQL statement. These tools can be used to analyze the
program logic and SQL execution plans, to ensure efficient SQL and indexed ac-
cess to the data pages being fetched.

Performance: SQL Cache Snapshot


If administrators are going to spend their valuable time tuning the performance of
individual queries, then it makes sense to focus on the queries that most affect the


system. The Performance: SQL Cache Snapshot screen, shown in Figure 2.15, al-
lows administrators to easily identify the queries that are consuming the largest
amount of resources.

Figure 2.15: The SQL Cache Snapshot shows the execution time and resource consump-
tion of queries that have run in the system.

In the screen, the columns listing numbers of executions, total execution time,
and average execution time allow the DBA to identify the queries that take the
most execution time. The buffer pool hit ratio is given for each query, to identify
how much disk I/O the query is causing.

The next few columns provide valuable information about SQL query quality and
I/O quality. The Rows Read/Rows Processed column gives a ratio of how many
rows must be read to identify the rows required for the final result set. The BP
Gets/Rows Processed column indicates the number of pages that must be ac-
cessed from the buffer pool to read the final result set. The BP Gets/Execution
column provides the number of pages read from buffer pool per query execution.
If the number of rows read or the ratio of rows read to rows processed is high, the
index advisor might help to identify a better index, to reduce the number of rows


evaluated by the query. If the BP gets are high, clustering the table differently
might improve performance, or a table reorganization might help to reduce the
number of pages read from disk.

The last few columns of the Performance: SQL Cache Snapshot screen provide in-
formation on sorting. A query that displays a large number of rows written indi-
cates sort overflows to disk in the temporary table space. The cockpit also displays
the total number of sorts, number of sort overflows, and total time spent sorting
during the query. If sort overflows are occurring, and the total sort time is a signifi-
cant portion of the average execution time, further analysis of the query, indexes,
and potentially sort parameters might be required to try to reduce sort overflows.

Click the Explain button, and the optimizer execution plan is displayed, showing
the query cost and join methods. From there, click the Details button to open a
new window with all of the detailed optimizer data, including all indexes and da-
tabase objects accessed, join methods, and cardinality estimates for each join.
Click the Index Advisor button, and the DB2 Advisor is run to suggest new
indexes to optimize data access for this query. (Both the Optimizer Ex-
plain and the Index Advisor interfaces are explained in detail in Chapter 8.)

Performance: Lock Waits and Deadlocks


A lock wait occurs when one application acquires a lock on a database object,
and then another application requests an incompatible lock on that same database
object. When this occurs, the second application must wait for the first to release
its lock, through either a commit or a rollback. The time an application spends
waiting to acquire a lock is its lock wait time.

If the first application were to then request a lock already held by the second, the
two applications would enter a deadlock scenario. In this state, both applications
are waiting for locks held by the other, and neither can proceed. Deadlocks can
affect any relational database. They are usually caused by infrequent or missing
commit statements within custom applications.

DB2 resolves deadlocks automatically, by periodically checking for their existence
and, when one is found, selecting one of the deadlocked applications to


roll back. The frequency of deadlock checks is set by the database configuration
parameter DLCHKTIME, which defaults to 300,000 ms (5 minutes) in SAP
NetWeaver 7.0 systems. The rolled back application fails with a SQL0911N error,
and all of its locks are released. This allows the other application to acquire its
locks and proceed.

The active lock waits and deadlock scenarios can be seen through the Perfor-
mance: Lock Waits and Deadlocks screen shown in Figure 2.16. The screen lists
the database agents and lock types involved in all active lock waits and dead-
locks, and includes buttons to view the last SQL statement from each unit of
work involved in these scenarios. This provides real-time analysis of the applica-
tions causing locking issues.

Figure 2.16: The Lock Waits and Deadlocks dialog displays the current lock wait and dead-
lock scenarios that are actively occurring in the system.


Performance: Active Inplace Table Reorganizations


Reorganizations of large tables can take a long time to run. DB2 provides the
ability to perform online table reorganizations in-place in the table space, so that
it is not necessary to consume the table size in temporary table space to perform
the reorganization. The Performance: Active Inplace Table Reorganizations
screen allows the DBA to monitor and administer any active in-place reorganiza-
tion jobs. There are even buttons to allow the DBA to pause, resume, or stop
active table reorganizations, if necessary.
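
Under the covers, these actions correspond to the INPLACE options of the REORG
command. A sketch, using a hypothetical table name:

db2 "REORG TABLE sapr3.zsales INPLACE ALLOW WRITE ACCESS"
db2 "REORG TABLE sapr3.zsales INPLACE PAUSE"
db2 "REORG TABLE sapr3.zsales INPLACE RESUME"

Because an in-place REORG moves rows within the existing table space, it runs
longer than a classic offline REORG, but it requires no temporary table space and
can be interrupted at any time.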

Performance: History–Database
Catching performance problems in action is a reactive process. All administrators
should try to be proactive about monitoring performance trends and taking action
to prevent potential problems before they occur. Having easy access to these his-
torical trends makes proactive analysis much easier, and SAP can be configured
to collect this historical information when the system is registered into the DBA
Cockpit.

The Performance: History–Database screen shown in Figure 2.17 displays many
key historical performance indicators. The main screen displays the average read
and write times, the number of reads and writes, and the number of commits and
rollbacks, as well as information on lock waits, lock escalations, and deadlocks.
This information can be displayed in two ways. Select Total Day to display the
averages and totals for each day over the configured monitoring period. Select
Peak to display the maximum value for the day for all measured monitor ele-
ments. This gives a simple way to identify average and peak performance trends
over time.


Figure 2.17: Daily historical performance data can be analyzed in this dialog.

Clicking any single day in the list displays the details gathered for each monitor
element periodically throughout the day. This can be viewed in two different
tabs. The Snapshot tab provides the details of each individual sample throughout
the day. The Interval tab displays only deltas. Therefore, it will contain only en-
tries for times when one or more monitor element values changed from their previ-
ous values.

Performance: History–Tables
The performance history of individual tables is also available for proactive plan-
ning. For each table on each database partition, the Performance: History–Tables
screen displays the rows read and written, overflow records accessed, and page
reorganizations. This information can be displayed for each day, week, or month.
Both short- and long-term trends for table access can be easily analyzed, provid-
ing the DBA with the information needed to proactively plan for system changes
to accommodate changing workloads.


Performance Warehouse
The new SAP Database Performance Warehouse provides an integrated historical
performance analysis model for both the database and the SAP applications. Da-
tabase performance data is extracted and loaded from all SAP systems into a cen-
tral SAP NetWeaver BW warehouse. Historical performance data can then be
mined, trended, and analyzed, using powerful SAP NetWeaver BW interfaces
with charts, dashboards, and drill-down capabilities.

The ABAP cockpit contains a Reporting link for analyzing performance data and
a Configuration link for setting up the Performance Warehouse reporting parame-
ters. The Reporting screen links directly into the Performance Reporting
WebDynpro. An example of the data is given in Figure 2.18. This illustrates his-
toric buffer pool quality over a two-week period. This data clearly displays recur-
ring trends that can identify areas that might benefit from tuning.

Only this brief introduction to the Performance Warehouse will be given in this
book. More documentation on the Performance Warehouse can be found on the
SAP Service Marketplace or SDN.

Figure 2.18: The SAP Performance Warehouse displays detailed reports on historical
performance and resource consumption trends.


Summary
The performance section of the DBA Cockpit provides a comprehensive, single
interface for all DB2/SAP database performance monitoring and tuning. All of
the most important information is easily accessible, and displayed in an intuitive,
meaningful way.

Since all the information is in one location, it is easy to drill down from database
monitors, to table space and table monitors, to application monitors, and even
right down to SQL statement monitors. Administrators can start with a wide fo-
cus and methodically narrow that focus to the exact source of the problem under
investigation. The best part is that the tool is part of SAP, so both SAP basis and
database administrators can leverage this powerful tool in a familiar interface, to
get the best performance from their SAP systems.

Chapter 3

Storage Management
Flying Efficiently with Heavy Cargo

Properly managing the volume of data within your SAP
system is key for efficient performance. The DBA Cockpit
gives you all of the storage statistics and growth
characteristics you need to optimize database
storage, now and for the future.

Storage is the most frequently overlooked aspect of database performance
configuration, yet it can significantly contribute to how well a database per-
forms, because disk I/O is the slowest part of any computer interface. A poorly
configured data layout will ultimately be the constricting bottleneck, regardless
of how well the SQL statement is formed or what access plan is chosen.

DBAs can spend a lot of time designing and planning the layout of table spaces
for a storage subsystem. Today’s advanced storage subsystems offer many
choices on how physical disk volumes can be grouped into RAID arrays, and
within these arrays, how logical volumes (LUNs) can be defined and made avail-
able as usable storage to the database. Designing the placement of table spaces
can be more like an art than a science. The problem in spending so much time on
an elaborate design is that it is only appropriate for the quantity of data and


workload at a given point in time. As the system matures and evolves, so must
the storage layout.

As companies adapt their SAP systems for future business needs, such as adding
additional modules, the amount of data inevitably grows. Therefore, the data ac-
cess pattern will evolve, rendering the initial data layout design obsolete. To keep
the system running optimally, time-consuming and intrusive administrative tasks
might be required regularly, to re-evaluate and re-optimize the data layout. Often,
a simpler, more generic storage layout, like that provided by DB2’s automatic
storage feature, provides a better solution for high performance and low mainte-
nance throughout the entire lifetime of the SAP application.

DB2 table spaces store their data in physical storage objects known as containers.
A table space can span one or more containers. Data within the table spaces are
striped evenly across all containers for that table space.

DB2 uses two types of table space concepts: System Managed Space (SMS) and
Database Managed Space (DMS). With SMS, the storage allocation within the
table space containers is managed by the operating system (OS). Containers are
OS directories, and a unique file exists in each container for each database object
residing in that table space. By default, I/O to these table spaces will be buffered
by the file system, and the sizes of the files in the containers will be extended or
reduced, depending on the quantity of data stored in the database objects. Addi-
tion and deletion of containers in SMS is only possible during a redirected
restore.

With DMS, the storage allocation within the table space containers is managed
by DB2. The containers are either pre-allocated files or raw devices. I/O to these
pre-allocated containers is handled mainly by the database, with little or no OS
overhead. The OS is only involved when pre-allocated file containers are ex-
tended or reduced. Also, addition and deletion of containers is possible online via
DDL statements.

To simplify the administration of the table spaces, all table space containers
should be spread as widely as possible on all disk spindles. Although an


elaborately designed layout might briefly provide a slight performance benefit (of
perhaps five percent), this simpler approach will provide a more consistent I/O
pattern over time. It will also be less vulnerable to additions of new SAP mod-
ules, or changes in function and workload. DB2 has also introduced a feature
called Automatic Storage, in which the database is given a pool of storage (gener-
ally two or more file systems), from which table space containers will be allo-
cated. Automatic Storage is fundamentally a combination of DMS table spaces
(used for the System Catalog and User table spaces) and SMS table spaces (used
for Temporary table spaces).

In the DBA Cockpit, SAP has not only made the monitoring of database perfor-
mance metrics available, as described in Chapter 2, but it has also made the
maintenance of table spaces, tables, and indexes available in the SPACES section.

Automatic Storage
Automatic Storage is the default storage layout when installing DB2 with SAP
NetWeaver 7.0 and higher. During installation, SAPinst will use sapdata1 through
sapdata4 as storage paths. Depending on the storage subsystem and the number
of LUNs/file systems available, additional storage paths can be added at that
time. Administrators can view the DB2 storage paths from the DBA Cockpit, as
shown in Figure 3.1.

Once the database has been created, additional storage paths can be added, if the
original file systems containing the table spaces are getting full. Adding new stor-
age paths at this time will create a new stripe set of storage for all table spaces.

A new stripe set will not cause a rebalancing of data from the older set of con-
tainers into the new storage. The containers in the previous stripe set will be
filled before the new stripe set begins to be used. Therefore, to provide equiva-
lent performance, ensure that the I/O capacity of each stripe set is the same. This
requires a similar number of disk spindles in each stripe set. The simplest way to
achieve this is to always keep everything the same. Each time storage is extended
by adding new automatic storage paths, add the same number of sapdata file sys-
tems, always using identical LUNs from the storage system.


Figure 3.1: Automatic Storage storage paths can be managed from within SAP.

To add new storage paths, just click the Add button and enter the new file systems in the
dialog shown in Figure 3.2. The new storage locations must exist and be accessi-
ble by the database. The bottom of the panel will display the DDL for the ALTER
DATABASE statement, as confirmation of the changes made.
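
The equivalent DDL, as a sketch with hypothetical path names, is:

db2 "ALTER DATABASE ADD STORAGE ON '/db2/<DBSID>/sapdata5', '/db2/<DBSID>/sapdata6'"

Adding the paths in a multiple that matches the existing stripe sets keeps the
I/O capacity of each stripe set comparable, as discussed earlier.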

Figure 3.2: Click the Add button to add a new storage path.


Table Spaces
Table spaces in SAP can either be of Automatic Storage or DMS/SMS type. By
default, SAP NetWeaver 7.0 or higher will create the system catalog table space
and all data table spaces using Automatic Storage, and the temporary table spaces
using SMS. DMS/SMS table spaces can still be created for user data, even if the
database uses Automatic Storage.

As shown in Figure 3.3, the Tablespace screen displays the table spaces accord-
ing to their type, in either the Automatic Storage tab or DMS/SMS tab.

Figure 3.3: Table spaces are arranged according to type.

Detailed data about both the logical and physical storage consumption for each
table space is displayed in the following columns:

• Contents—The Contents column shows whether the table space was
created as a Regular, Large, System Temporary, or User Temporary table
space:


» Regular table spaces are the default for SMS, but they can also be used
for DMS. They have smaller limits for maximum size and slots (rows
per page) than Large table spaces, and cannot contain LONG/LOB
data.

» Large table spaces are the default for DMS table spaces. They are only
allowed for DMS. They can contain both user data and LONG/LOB
data.

» System Temporary table spaces store the derived temporary tables
used by DB2 for sorts or joins that are too large to perform in main
memory.

» User Temporary table spaces store declared global temporary tables,
which are used by SAP NetWeaver BW to improve reporting
performance.

• TS State—The state of a table space is usually Normal, but it could be in
some other state, such as Quiesce, Backup, or Rollforward.

• KB Total—This is the allocated size of the table space, in kilobytes.

• Page Size—The page size for table spaces can be allocated in 4KB, 8KB,
16KB, and 32KB sizes.

• No. containers—This is the number of containers that have been allocated
for the table space.

• KB Free—This represents the amount of space in the table space that was
allocated, but does not contain any data pages.

• High-Water Mark—For DMS, this represents the current size as
represented by the page number of the first free extent following the last
allocated extent of a table space.

• Percent Used—This represents the total amount of space consumed up to
the high-water mark, in relation to the total allocated table space size.


• Pending Free Pages—This is the number of pages in a table space that
would become free if all pending transactions were committed or rolled
back, and new space were requested for an object.

Maintenance of table spaces can be performed by selecting one of the three
choices shown in Figure 3.4: Change, Add, or Delete. Changing an existing table
space should be done if any permitted technical setting needs to be modified
(such as autoresize or prefetch size). Adding a new table space to DB2 can be
done here. It should be followed by the addition of this new table space to an SAP
data class in Configuration → Data Class. This will ensure consistency between
the DB2 table spaces and the SAP DDIC. Deletion of a table space will be re-
flected both at the database level and in the SAP data dictionary.

Figure 3.4: Table space maintenance can be performed directly from within SAP.

Adding a new table space in Automatic Storage requires the DBA to navigate
and modify three tabs. However, the DBA must first provide a new table space
name, beginning with Z or Y, for user customized objects.

The Technical Settings Tab


Notice in Figure 3.5 that AutoStorage is automatically selected if you create table
spaces from within the Automatic Storage tab. Next, you select the table space
content type (Regular, Large, System Temporary, or User Temporary), as de-
scribed in the previous section of this chapter.


Figure 3.5: Specify the technical settings when creating new table spaces.

The settings in the Size of I/O Units area of Figure 3.5 will influence how DB2
will store the data on disk and access it. The SAP default is 16KB pages, an
extent size of two pages, and a prefetch size of automatic. The automatic value in the
Prefetch Size is a computed value based on the number of containers, the number
of disk spindles, and the extent size. The formula used for this calculation is as
follows:

Prefetch size =
(number of containers) *
(number of physical disks per container) *
(extent size)
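
As a worked example under assumed values: a table space with four containers,
each mapped to a single physical disk, and the SAP default extent size of two
pages would get a computed prefetch size of 4 × 1 × 2 = 8 pages (128KB with
16KB pages).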


Disk Performance values are predefined with default values. A different buffer
pool could also be assigned to this new table space, although you should maintain
only one buffer pool if all table spaces are of the same page size. DB2 requires
that there be at least one buffer pool of the corresponding page size for each page
size used by table spaces.

The Storage Parameters Tab


The Storage Parameters tab, shown in Figure 3.6, is used to determine the Initial
Allocated Size of the table space, the incremental growth when the table space
encounters a “table space full” condition (in either kilobytes or percentage), and
if there is a maximum size to which the table space can extend. If no maximum
size is specified, the table space will grow until it either reaches the maximum ta-
ble space size or consumes all the storage available in the file system. With the
default 16KB page size, the maximum table space size is 8TB in DB2 9 and 9.5,
and 32TB in DB2 9.7.

Figure 3.6: Define the storage parameters for new table spaces.


The Containers Tab


In Automatic Storage, the database determines the number of containers based on
the number of storage paths assigned. Therefore, adding containers is not permit-
ted in Automatic Storage table spaces, as you can see in Figure 3.7.

Figure 3.7: DB2 defines the containers itself for Automatic Storage table spaces.

DMS/SMS Table Spaces


The DMS/SMS tab of the Tablespaces screen is shown in Figure 3.8. As you can
see, the information displayed is very similar to that of the Automatic Storage tab.

Figure 3.8: The table spaces for DMS/SMS are listed here.


When adding a new table space under DMS/SMS, the only difference in the Tech-
nical Settings is that AutoStorage is not the default, as you can see in Figure 3.9.

Figure 3.9: Add a table space in the Technical Settings tab.

In the Containers tab for DMS/SMS, shown in Figure 3.10, the container infor-
mation is now required. For DMS containers, you must specify a full path and
file name for each container. For SMS, specify a directory for each container.

Figure 3.10: Container definitions are required for DMS and SMS table spaces.


Containers
To view all the related containers for the table spaces, select the Containers op-
tion. The screen shown in Figure 3.11 will be displayed.

Figure 3.11: The Containers screen displays the containers for all table spaces.

The Containers screen displays storage parameters and statistics in the following
columns:

• Stripe Set—The stripe set to which containers belong determines the set of
containers across which DB2 will evenly distribute the data. In Automatic
Storage, when additional storage pools are added through “ALTER
DATABASE…ADD STORAGE ON…,” or via the DBA Cockpit, a new stripe
set is automatically created. In DMS table spaces, “ALTER TABLE
SPACE…BEGIN NEW STRIPE SET…” will create a new stripe set.

When adding storage to the database, it is always recommended to either
extend all containers in the current stripe set by the same amount, or create
a new stripe set. This keeps the containers balanced and avoids the data
movement and I/O caused by rebalancing.


• Container Name—This contains the full path and file name of the
container for DMS, or a full directory path name for SMS.

• KB Total—This is the total allocated size, in kilobytes.

• Pages Total—This is the number of allocated pages. The size reported in
the previous column depends on the table space page size, which can be
found in the table space technical settings.

• Accessible—If the table space is in a Normal state, all containers should be
accessible.

• FS ID/FS Free Size—These two columns relate to the file system on
which the container resides. If Auto Resize is enabled for the table spaces
occupying the file system, ensure that there is enough space in the file
system for the table spaces to grow.
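
The same container information can also be retrieved from the command line. A
sketch, where 4 is a hypothetical table space ID taken from the Tablespaces
screen:

db2 list tablespace containers for 4 show detail

This reports each container's name, type (file, directory, or device), and its
total and usable pages.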

Tables and Indexes


An SAP ERP 6.0 system contains over 50,000 tables. You can display all these
tables in the Tables and Indexes section, or you can limit what appears. When
this section is first selected, the pop-up filter shown in Figure 3.12 is displayed.
You can use this filter to narrow the choices of which tables you would like to
see, choosing from the following criteria:

• A certain table space


• A specific table, or table names that match a pattern
• Tables greater than or equal to a given size
• Flagged tables, which are tables or indexes that have exceeded the
threshold for reorganization
• Large RIDs, which refers to displaying tables in large table spaces that
have not yet been enabled for large RIDs
• Tables that have a status of Not Available
• Tables that have a status of Reorg Pending


• Tables that have Type-1 indexes, which were used in DB2 v7 and older,
before Type-2 indexes were introduced to improve concurrency

Figure 3.12: Selection criteria can filter the tables displayed.

In the Table and Indexes screen, tables that meet the criteria in the filter are dis-
played. The tables displayed here are also dependent on a set of DB6 tables that
are populated when the REORGCHK FOR ALL TABLES job is run in the DBA
Planning Calendar. (Select Jobs → DBA Planning Calendar.) If this job has
never been run, no tables might be displayed.

The following columns are represented in the screen, which is shown in Figure
3.13:

• Schema—This is the schema of the tables.

• Table Name—This is the name of tables that qualified.


• F1 REORGCHK formula—This value represents the overflow rows, as a
percentage of total rows in the table.
• F2 REORGCHK formula—This value represents the table size divided by
allocated space, as a percentage.
• F3 REORGCHK formula—This value represents full pages divided by
allocated pages, as a percentage.
• Table Flagged—If this is flagged, the table needs to be reorganized.
• Index Flagged—If this is flagged, the indexes on this table need to be
reorganized.
• Size—This is the table size, in kilobytes.
• REORG Check Date—This is the date when REORGCHK was last run
against the table, or when RUNSTATS was executed from dmdb6srp.
• REORG Check Time—This is the time when REORGCHK was last run
against the table or when RUNSTATS was executed from dmdb6srp.

Figure 3.13: The Tables and Indexes dialog displays the storage characteristics of the indi-
vidual database tables.


In older versions of SAP NetWeaver, you can run REORGCHK from this screen by
clicking the REORGCHK button, which is located on the Application menu bar, near the
top of the screen. This opens a window, shown in Figure 3.14, to allow adminis-
trators to run a REORGCHK on stale tables. New versions of SAP NetWeaver 7.0 no
longer have a REORGCHK button. Instead, a REORGCHK is executed every time the
table is loaded in Single Table Analysis.

Figure 3.14: You can run REORGCHK from the Application menu bar.

Single Table Analysis


Have you ever wanted to know all the details related to a table? In the Single Ta-
ble Analysis screen, you can see all the technical information and statistical data
of both the table and related indexes, and have the ability to run maintenance
against it (e.g., RUNSTATS, REORGS, and Compression). You can also get to this
screen from Space → Tables and Indexes, by selecting a table.


The Table Tab


The table’s information is categorized into tabs for easy viewing of related data.
The Table tab is shown in Figure 3.15. The information in the REORG Check Sta-
tistics area of the Table tab is as follows:

• Last REORG Check—This is the date and time REORGCHK was last run.
• Total Table Size—This value represents the size of regular and long data
in the table, in kilobytes.
• Total Index Size—This value represents the size of all indexes for the
table, in kilobytes.
• Free Space Reserved—This is the percentage of free space in the table’s
allocated pages.
• F1 Overflow Rows—This is the percentage of overflow rows.
• F2 Table Size/Allocated Space—This is the percentage of general
fragmentation in the table.
• F3 Full Pages/Allocated Pages—This is the percentage of full pages
fragmentation.
• REORG Pending—This indicates if REORG is pending.
• Last REORG of Table—This is when REORG was last run.
• Runtime of Last REORG—This is the elapsed time of the last REORG.

The System Catalog area of the Table tab contains these values:

• Last Runstats—This indicates when RUNSTATS was last run against table.
• Tablespace—This is the table space to which the table belongs.
• Cardinality—This is the number of rows in the table.
• Overflow Records—This is the number of rows that span two or more
pages.
• No. of Pages with Data—This value represents pages that contain table data.


• Total Number of Pages—This is the total number of pages consumed by the
table.

• Value Compression—If this is flagged, column-value compression is used.

• Row Compression—If this is flagged, row compression is used.

• VOLATILE—If a table is marked as volatile, RUNSTATS is never run
against it, as the cardinality of the table changes constantly. VBDATA is
an example of a volatile table.

• Pooled, Cluster or Import/Export Table—This flag indicates whether the
table is defined as a pooled table, a cluster, or an import/export table in the
ABAP dictionary (e.g., CDCLS).

Figure 3.15: Storage details for individual tables are available in Single Table Analysis.


The Indexes Tab


The Indexes tab, shown in Figure 3.16, displays the statistical information for in-
dexes. This information is similar to the table content, except that the REORGCHK
formulas are used to determine the cluster ratio and the index space fragmenta-
tion, as follows:

• F7 Ratio of Deleted Index Entries—With Type-2 indexes, it is possible
that RID entries in the index are pseudo-deleted. (Keys have been marked
as deleted when the row is deleted or updated.) This value is the number of
pseudo-deleted RIDs on non-pseudo-empty pages.
• F8 Ratio of Deleted Index Leafs—This is the number of pseudo-empty leaf
pages, over the number of leaf pages.
The System Catalog area of the Indexes tab contains the following statistics:

• Number of Leaves—This number indicates the leaf pages in the index
B*Tree.
• Number of Levels—This is the maximum number of pages traversed from
the index root page to a leaf page.
• Sequential Pages—This is the number of leaf pages physically located
on disk in index key order, without large gaps between them.
• Density—This indicates the relative density of the sequential pages, as a
proportion of the total number of index pages.
• First Key Cardinality—This is the number of unique values in the first
column of the index.
• First 2 Key Cardinality— This is the number of unique values in the first
two columns of the index.
• First 3 Key Cardinality— This is the number of unique values in the first
three columns of the index.
• First 4 Key Cardinality— This is the number of unique values in the first
four columns of the index.
• Full Key Cardinality— This is the number of unique values in all columns
of the index.


Figure 3.16: Index storage statistics are also available in Single Table Analysis.

The Table Structures Tab


The Table Structures tab, shown in Figure 3.17, holds the table columns’ defini-
tions. Note that the column data type is based on DB2’s definition, not the ABAP
column data type.

Figure 3.17: The Table Structure tab displays the columns and their data types.


The RUNSTATS Control Tab


The RUNSTATS Control tab is divided into two sections. The left side of the tab
controls the scheduling and execution of statistics collection. The right side indi-
cates the statistics profile (the type of statistics collected) for the table.

The data in the left side depends on the configuration of AutoRunstats. When
AutoRunstats is enabled (which is the default for SAP NetWeaver 7.0 installa-
tions), the screen appears as shown in Figure 3.18. The Statistics Attributes area
of the screen contains the following:

• Not VOLATILE (AutoRUNSTATS Included)—This indicates that the table
is eligible for AutoRunstats.

• VOLATILE (AutoRUNSTATS Excluded)—The table is volatile, so it is not
eligible for AutoRunstats.

If AutoRunstats is disabled, a Scheduling section appears in the tab along with a
Statistics Attributes section. The Scheduling section provides the following
information:

• Automatically—Statistics are collected by CCMS jobs scheduled from the
DBA Planning Calendar.

• On User Request—CCMS does not process this table. Statistics must be
collected manually by the user.

• Statistics Is Out-of-Date—Statistics are stale, and RUNSTATS is
recommended.

• Deviation—This value is the difference between the cardinality statistics
and the estimated cardinality based on insert and delete activities.

• Collect Data for Application Monitor—The table is monitored by the
Application Monitor (ST07).


With AutoRunstats disabled, the Statistics Attributes are as follows:

• Statistics—Statistics are collected for this table. If the table is currently marked as volatile, it will be changed to not volatile as soon as statistics are collected for the first time.

• No Statistics and Volatile—The table is marked as volatile in the system catalog, and no statistics will be collected.

The RUNSTATS profile for the table is displayed in the right side of the screen.
This displays the type and detail of statistics collected on the table. The Table
Analysis Method shows the options that RUNSTATS will use to collect statistics
for the table. The Index Analysis Method shows the options that RUNSTATS will
use to collect statistics for the indexes.

Figure 3.18: The RUNSTATS Control tab shows how DB2 collects statistics for
this table.


The Index Structures Tab


The Index Structures tab, shown in Figure 3.19, lists the index columns. If the table contains more than one index, use the navigation buttons to proceed to the next index.

Figure 3.19: The Index Structures tab displays the database data type and size of the col-
umns in each index on this table.

The RUNSTATS Profile Tab


RUNSTATS can be executed with a previously stored statistics profile to gather
statistics for a table or a statistics view. This profile must have previously been
created with the SET PROFILE option of the RUNSTATS command. If such a profile
exists, it will be displayed in this tab. Click the RUNSTATS with Profile button at
the bottom of this screen to execute RUNSTATS with this profile.
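As an illustrative sketch (the schema and table name here are hypothetical), a profile can be stored and later reused from the DB2 command line as follows:

   RUNSTATS ON TABLE sapsr3.mytable
     WITH DISTRIBUTION AND SAMPLED DETAILED INDEXES ALL
     SET PROFILE

   -- Later, re-collect statistics using the stored profile:
   RUNSTATS ON TABLE sapsr3.mytable USE PROFILE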

The Table Status Tab


The Table Status tab, shown in Figure 3.20, holds all the size and technical infor-
mation of the table. The Physical Size and Logical Size sections break down the
space consumed physically and logically by the different object types of the table.

The following key pieces of information are shown within the Availability and
Other Technical Information section:


• Inplace REORG Status—If running an online, in-place REORG, the status could be one of the following: ABORTED, EXECUTING, NULL (no REORG is currently running), or PAUSED.

• Large RIDs—Is the table using large RIDs? If the value is PENDING, the
table supports large RIDs (that is, the table is in a large table space), but at
least one of the indexes for the table has not yet been reorganized or
rebuilt. Therefore, that index is still using smaller, 4-byte RIDs. It must be
reorganized to convert it to the larger, 6-byte RIDs.

• Large Slots—Does the table support more than 255 rows per page? If the
value is PENDING, the table supports large slots (that is, the table is in a
large table space), but there has not yet been an offline table reorganization
or a table truncation operation. Therefore, the table is still using a
maximum of 255 rows per page.

Figure 3.20: The table size and status are available in the Table Status tab.


The Compression Status Tab


DB2 introduced table data compression in DB2 9.1, and further enhanced it with Automatic Data Compression (ADC) in DB2 9.5. Index and temporary table compression are not supported as of DB2 9.5. ADC enables DB2 to automatically compress data that is loaded into a table created with the COMPRESS YES option, without running a REORG to build the compression dictionary.

The Compression Status tab is shown in Figure 3.21. If compression has been en-
abled on the table, the Compression Details area of the tab displays the compres-
sion statistics. Otherwise, if a compression check has been executed on the table,
the Compression Check Results can be used to evaluate the potential benefits of
compressing that table. The following information is displayed in this section
when the table is enabled for compression, and compression has already been ap-
plied to the data rows of the table:

• Current Dictionary Size—This is the compression dictionary size, in bytes. The compression dictionary has a maximum size of 150 KB, and can store up to 4,096 compression symbols per table object.

• Approximate Percentage of Pages Saved—This is the percentage of data pages saved by compression.

The Compression Check Results section shows the estimation of what can be ex-
pected if compression were enabled on the data rows of the table. It contains the
following values:

• Estimated Saved Pages—This is the estimated percentage of pages that would be saved by compressing the table.

• Estimated Saved Bytes—Commonly referred to as the compression ratio, this is the estimated percentage of bytes that would be saved by compressing the table.

• Rows too Small—This is the number of rows that were too small to be
used for compression calculations.


Figure 3.21: DB2 Deep Compression statistics are shown in the Compression Status tab.

The Application menu bar, shown in Figure 3.22, provides options to run
RUNSTATS, REORG, and Compression on the table. Select one of the buttons to
start the action.

Figure 3.22: Some database utilities can be run from the Single Table Analysis Application
menu bar.

When RUNSTATS is selected to run in the background, the dialog shown in Figure 3.23 is displayed, with choices on how statistics will be collected for both the table and its indexes. Once the statistics collection method has been chosen, the job can be run once, or repeated on a schedule from the Recurrence tab.


Figure 3.23: Set the RUNSTATS parameters when scheduling background statistics collection.

To check whether a table is a good candidate for compression, schedule a background compression check job. The resulting estimate will be displayed in the Compression Check Results section of the Compression Status tab. This action will not create the compression dictionary, turn on the COMPRESS flag, or compress any rows.

To enable compression, select the corresponding button. The dialog box in Figure 3.24 will prompt you for the method of compression to use. The Status section indicates whether the COMPRESS YES flag was set on the table, either through CREATE TABLE or ALTER TABLE, and whether a compression dictionary already existed. There are then two options for enabling compression:

• Enable Compression—This will build the compression dictionary, but leave all existing data in the table uncompressed. New or changed data will be eligible for compression.


• Enable Compression and Run REORG—This will build the compression dictionary, run an offline REORG against the table, and compress all existing rows in the table. All new rows added to the table are eligible for data compression.

Figure 3.24: Select whether to compress just new rows, or both new and existing rows.
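Behind the scenes, these two actions map to standard DB2 statements. A minimal sketch, assuming a hypothetical table SAPSR3.MYTABLE:

   -- Mark the table as compressed; new or changed rows become eligible
   ALTER TABLE sapsr3.mytable COMPRESS YES

   -- Build (or rebuild) the dictionary and compress all existing rows
   REORG TABLE sapsr3.mytable RESETDICTIONARY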

Virtual Tables
Virtual tables were introduced by SAP to save disk storage and help improve the
performance of many utilities, such as Automatic RUNSTATS and Automatic
REORG. The concept of virtual tables is simple. Do not materialize (create) an
empty table in the database. Instead, just logically define it in the SAP DDIC.
When the first row is inserted into a virtual table, the SAP Database Support
Layer (DBSL) determines that the table does not yet exist in the database. It is-
sues the CREATE TABLE statement to materialize the table before inserting that
first row.

SAP systems contain thousands of empty tables. Each empty table may consume
as many as 11 extents (22 pages or 352K, with the default 16K page size and ex-
tent size of 2). These extents are consumed by the following:


• One Extent Map Page (EMP) extent
• One data extent
• Two extents for the index object
• One page for each index
• Two extents for a LONG field object
• Four extents for a LOB object

The first tab in the Virtual Tables screen, shown in Figure 3.25, lists all of the virtual tables in the system. To materialize a virtual table manually, select it and click the Materialize button.

Figure 3.25: The Virtual Tables tab contains the list of virtual tables.


The second tab in the Virtual Tables screen, shown in Figure 3.26, lists all the empty tables that are eligible to be converted to virtual tables. If the Convert Empty Tables button is selected, all eligible tables will be dropped from the database and re-created as virtual tables in a background job. Eligible tables match the following criteria:

• Empty
• Non-volatile
• No partitioning key defined
• Non-MDC

Figure 3.26: Empty tables that can be virtualized are listed in the Candidates for
Virtualization tab.


Historical Analysis
The History Overview screen provides a general overview of the size and quan-
tity of table spaces, tables, and indexes in the database. The Database and
Tablespaces tab of the screen is shown in Figure 3.27.

Figure 3.27: A database size overview is provided in the Database and Tablespaces
tab of the History Overview.

This tab provides the following statistics:

• Last Analysis—This date and time indicates when the last analysis was run
to collect the history information of the database objects.

• Total Number—This is the number of table spaces in the database.


• Total Size—This is the total size of all table spaces.

• Free Space—This is the amount of free space (in kilobytes) in all the table
spaces.

• Used Space—This is the amount of space used, as a percentage of total space.

• Minimum Free Space in a Tablespace—This is the free space of the table space with the lowest amount of free space (in kilobytes). This information is of little value if the table spaces use automatic storage or have auto-resize enabled, as they will be resized when they fill up.

• Maximum Used Space in a Tablespace—This is the percentage of used space of the table space holding the most data.

• Database Partitions—This value provides the number of partitions in a multi-partitioned SAP NetWeaver BW database.

The Tables and Indexes tab of the History screen, shown in Figure 3.28, provides an overview of the quantity and space consumed by the database's tables and indexes.

Figure 3.28: The Tables and Indexes tab displays the size of the tables and indexes.


The Database and Table Spaces


The History—Database and Tablespaces screen, shown in Figure 3.29, displays the change history for the database and its table spaces. You can use the information here to help plan capacity.

Figure 3.29: The Space tab displays the change history of database and table space storage consumption.

Double-clicking a row in the list of tables and indexes shown in Figure 3.30 dis-
plays a detailed history that documents the item’s size changes over time.

Figure 3.30: The Tables and Indexes tab displays the historical storage consumption for ta-
bles and indexes.


In the example in Figure 3.31, the Delta Tables value for 07/03/2008 was nega-
tive, indicating that some tables were deleted from the database. In this case, they
were converted to virtual tables.

Figure 3.31: Historic details of a database’s size changes are available here.

Tables and Indexes


The History—Tables and Indexes screen, shown in Figure 3.32, provides access to the historical data of tables and indexes. Initially, this screen displays all the tables and indexes found in the database with their respective sizes and changes, together with REORGCHK information.


Figure 3.32: Historical size changes for the tables and indexes are displayed here.

Double-clicking any object's row displays more historical data for that object, as shown in Figure 3.33.

Figure 3.33: Double-click a table to see its historical size changes.


Summary
Managing database storage, monitoring database object size, and planning stor-
age capacity are all key operations for ensuring the stable, efficient, and
cost-effective operation of any SAP system. The SAP DBA Cockpit provides
easy access to many of the most important DB2 features for storage management.
Regular maintenance tasks, such as reorganization and statistics collection, are all
easily executed on-demand, scheduled as repeating jobs, or enabled for automatic
DB2 maintenance with a couple of mouse clicks. Powerful performance and
space-optimization features, such as compression and virtual tables, are also fully
integrated by SAP into the DB2 cockpit, making the unique benefits of DB2 easy
to implement in an otherwise complex environment.

Chapter 4

Job Scheduling
Flying on Auto-Pilot

The DBA Planning Calendar saves time and reduces
administrative effort by giving DBAs a simple interface to
schedule the most common repeating maintenance jobs,
and an efficient way to create, save, and schedule
complex database administration tasks.

Automating tasks is one of the easiest ways to reduce workload. SAP on DB2 provides an integrated interface for both job scheduling and monitor-
ing. Administrators can monitor all SAP systems on the central planning calen-
dar, or modify recurring database jobs through the DBA Planning Calendar.
There is even an interface to create and store custom scripts, and then schedule
those scripts in the calendar.

The Central Calendar


The SAP Central Calendar, shown in Figure 4.1, displays the complete list of
user-defined database background jobs scheduled on all SAP systems configured
for central monitoring. It is a read-only, holistic view that provides administrators
a single point from which they can monitor database jobs for remote systems


running different versions of SAP, different relational databases, and databases for Java stack and non-SAP systems.

When a remote system is registered in the DBA Cockpit, you must select the Collect Central Planning Calendar Data checkbox to allow SAP to update the central calendar with the job status for this system. Then, schedule the Central Calendar Log Collector job to run every morning on the system normally used for monitoring. This will collect and consolidate all the remote systems' calendar data on that one SAP system.

Figure 4.1: The Central Calendar shows all database jobs for all registered SAP systems.

The central planning calendar shows a single entry for each system with a job
scheduled for that day, in the format “001 <SID> 001,” where <SID> is the SAP
system ID. The first number indicates the number of jobs scheduled for that day
for the given SID. The second number indicates the number of those jobs that
have finished with the same, highest status severity. The severity will be indi-
cated with a color code for that cell in the calendar.


You can easily see which systems and jobs have had warnings or errors. Dou-
ble-click any date on the calendar to view the details of all the jobs scheduled on
all systems for that date. Double-clicking any specific entry takes you to the
DBA Planning Calendar for that date and SAP system. This allows administra-
tors to view the detailed job logs, and then modify or re-execute those jobs.

The DBA Planning Calendar


The DBA Planning Calendar, shown in Figure 4.2, is used to automate regularly re-
curring database jobs. Any recurring database process can be automated through
the calendar. However, many of the most common jobs are predefined in an Action
Pad within the calendar. Administrators need only drag and drop the jobs from the
Action Pad to the calendar to schedule their execution on the SAP system.

When a new SAP system is installed, no recurring jobs are initially scheduled. The administrator must determine the pattern of jobs required for that system. These jobs may or may not run in parallel. Therefore, the schedule must take into account dependencies between the jobs and their impact on the system. There are also a few database-related jobs that run regularly in every SAP system:

• Collection of database performance history—This job runs every two hours, starting at 00:00.

• Monitoring database and database manager configuration parameter changes—This job runs daily at 08:00, 13:00, and 19:00.

• Collection of table and index space history—This job runs weekly on Sunday, at 12:00.

Keep these jobs in mind when planning the DBA background jobs in the
calendar.


Figure 4.2: The DBA Planning Calendar provides scheduling and monitoring of background
database jobs.

The DBA Planning Calendar provides a wizard to help with the initial setup of
the recurring administration tasks on each SAP system. To run the wizard, click
the Pattern Setup button. This wizard steps through the setup of a backup sched-
ule, automatic table REORG, and the scheduling of the REORGCHK for all Tables
job. For each job, reasonable default times are provided, but these can be
changed as desired. The remaining jobs can either be scheduled from the list of
common jobs in the Action Pad next to the calendar, or created and scheduled in
the calendar as a command line processor (CLP) script.

REORGCHK for All Tables


The REORGCHK for all Tables job must be scheduled in all DB2 SAP systems.
This job performs much more than just the standard DB2 REORGCHK. It is very
important for the proper display of Space data in the cockpit (formerly transac-
tion DB02). This job performs the following tasks:


• Executes the DB2 REORGCHK tool to obtain REORG recommendations for tables and indexes

• Calculates the size of tables and indexes, which is used for creating incremental space consumption history

• Determines special situations for tables (e.g., REORG_PENDING)

• Can perform compression estimates, starting with SAP BASIS 7.0 SP12

All calculated data is stored in SAP database tables and displayed in the DBA Cockpit under Space → Tables and Indexes. Therefore, the REORGCHK for all Tables job should be scheduled to run weekly, to ensure that accurate data is displayed in the cockpit. SAP recommends excluding the compression check from
the recurring job, because a compression check of all (potentially over 50,000)
tables can take a long time. If you need a full compression check, schedule it
once during low workload hours, or use the /ISIS/ZCOMP ABAP report, at-
tached to SAP Note 980067. (See SAP Note 1268290 for the most recent recom-
mendations about the REORGCHK for all Tables job.)

Scheduling Backups
The calendar’s Action Pad now contains four options for backup:

• Database Backup into TSM—Back up to Tivoli Storage Manager.

• Database Backup to Device—Back up to a file system directory or tape device.

• Database Backup with Vendor Library—Back up to a different storage manager, such as Veritas NetBackup.

• Snapshot Backup—Back up using flash copy and split mirror.

You can schedule full backups, or a combination of delta, incremental, and full
backups to satisfy your time and recovery requirements. All backups scheduled
through the DBA Planning Calendar are now done online.


Archiving Log Files to a Tape Device


If archive logging is configured to a disk location in LOGARCHMETH1, the DB2
Tape Manager can be used to move those archived logs to a tape device. The Ar-
chive Log Files to Tape job can be scheduled periodically in the DBA Planning
Calendar to perform this task. The default behavior of the Tape Manager is to
copy log files to the specified tape device, and then remove the log files from the
file system specified in the LOGARCHMETH1 database configuration parameter.

Options can be specified in the calendar’s job scheduler to archive each log file
to two different locations on tape (the Double Store option), overwrite expired
tapes, or eject the tape at the end of the operation. You should always keep two
redundant copies of each archive log file. Therefore, either set LOGARCHMETH1
and LOGARCHMETH2 to different file systems, or use the Double Store option of
the Tape Manager to keep two copies of each log file on tape.
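Under the covers, this job drives the DB2 Tape Manager command-line tool. A rough sketch (the tape device path is an assumption for illustration; verify the db2tapemgr syntax for your platform and release):

   db2tapemgr DATABASE <SID> DOUBLE STORE ON /dev/rmt0.1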

Updating Statistics
By default, DB2 updates statistics automatically using its Real Time Statistics
(DB2 9.5) and Automatic RUNSTATS features. Every two hours, a daemon pro-
cess checks tables for change activity and updates the table statistics, if neces-
sary. With DB2 9.5 Real Time Statistics, if the optimizer determines that
statistics are too stale to provide acceptable query performance, it invokes statis-
tics collection or estimation during query optimization. This removes almost any need for administrators to worry about table statistics.

By default in SAP, both regular and distribution statistics are collected for all ta-
bles, and detailed statistics are collected for all indexes using sampling. This aug-
ments the regular table statistics with additional range histogram data for all
columns of the tables, and collects detailed statistics for the indexes by sampling
the individual index keys in each index. For SAP NetWeaver BW tables, table
statistics are only collected on key columns. If a different statistics collection
method is desired for certain tables, administrators can either update the statistics
profile using the DB2 RUNSTATS command, or schedule the RUNSTATS and
REORGCHK for Single Table job, which allows the RUNSTATS parameters to be
tailored specifically for that RUNSTATS invocation.


Table Reorganization
There are several jobs and maintenance settings for reorganizing tables. Although there is an Automatic REORG job in the planning calendar, the native DB2 automatic REORG is recommended instead for tables smaller than 1GB. This is explained further in Chapter 6. For larger tables, REORG jobs can be run on demand, or scheduled periodically through the calendar.

Since larger tables are excluded from automatic REORG, a REORGCHK must be performed on these tables periodically, to determine when a REORG is required. This can be done by scheduling the REORGCHK for All Tables job. This job is also a prerequisite for the correct functioning of the Space → Tables and Indexes dialog screen in the DBA Cockpit (formerly transaction DB02). Therefore, it must be scheduled to run regularly in every SAP system, at least once weekly.

The Action Pad then contains three additional table REORG jobs:

• REORG and RUNSTATS for Set of Tables—This job allows the administrator
to enter a list of tables to be periodically reorganized. The REORG can be
done in either offline (read-only) or online mode.
• REORG and RUNSTATS of Flagged Tables—This job reads the flagged table
details from the REORGCHK for All Tables job, and generates a list of tables
to be reorganized. Since the list of tables is generated when the job is
scheduled, this job does not recur; it must be scheduled each time it is to be
executed. The administrator can select all or part of the list for offline table
reorganization, and specify an optional maximum runtime for this job.
• REORG of Tables in Tablespace(s)—This job allows the administrator to
select one or more table spaces for reorganization. An offline REORG will
then run on all tables in that table space. The administrator can again select
a maximum runtime for this job.

Custom Job Scripts


Custom command line processor (CLP) scripts can be written, saved, and scheduled for recurring maintenance or administrative tasks not available in the Action Pad. Simply select the CLP Script job from the Action Pad, and you can write


new scripts, load scripts from text files, or select predefined CLP scripts created
from the SQL Script Maintenance dialog screen. These custom scripts can then
be scheduled in the calendar in the same manner as any other job.

It is convenient to create background-job CLP scripts in the SQL Script Maintenance screen and save these scripts within SAP. You can then select these scripts from a drop-down list when scheduling recurring CLP script jobs.

All jobs are listed on the DBA Planning Calendar with a color code to specify
status. Any entry on the calendar can be clicked to display its details. Future jobs
can be modified. Completed jobs can only be viewed, and also contain a tab to
display the Job Log. The Job Log contains the status messages and output pro-
duced by the background job. This provides a good first point of problem deter-
mination for jobs that do not complete successfully.

The DBA Log


The DBA Log, shown in Figure 4.3, displays a color-coded list of database back-
ground jobs run on the current system. It contains the start and end times, the ac-
tion performed, and the return code status of the job.

Figure 4.3: The DBA Log displays the status of database background jobs.


The display defaults to the list of jobs executed during the current week. Previous weeks can be displayed by double-clicking dates from the calendar. The display can also be filtered by severity, by clicking the status icons in the Summary. This gives administrators a very easy way to view weekly job status and identify any jobs that did not complete successfully.

Back-end Configuration
The Change Back End Configuration screen, shown in Figure 4.4, provides an in-
terface to control the execution of the DBA Planning Calendar’s background jobs
on different SAP systems. For each system, a unique background server, user,
and job priority can be configured.

Figure 4.4: The Change Back End Configuration dialog configures the server, priority, and
user for executing background database jobs.


SQL Script Maintenance


The SQL Script Maintenance screen, shown in Figure 4.5, provides an SAP repository for custom-coded DB2 CLP scripts. Administrators can add, modify, or delete custom SQL scripts. All saved scripts appear in a list in this screen, from where they can be selected and executed. The user is prompted for the SAP system on which to run the script, and the output is then displayed in the SAP GUI screen.

Figure 4.5: SQL Script Maintenance allows administrators to store frequently run SQL
scripts within SAP, so they can be scheduled easily in the DBA Planning Calendar.

More commonly, the saved SQL scripts can be scheduled from the DBA
Planning Calendar, by selecting the CLP Script job from the Action Pad. The
saved scripts can be selected from a drop-down list, and then scheduled to recur
as required. The status of these jobs is then displayed in the DBA Log. To view
detailed results of a job, double-click it from the DBA Planning Calendar or from
the Job Overview (transaction SM37).


Summary
The DBA Cockpit for DB2 contains all the functionality administrators need to
easily schedule and monitor any type of recurring database task in SAP systems.
Predefined jobs and centralized monitoring greatly simplify normal SAP data-
base maintenance, and the custom repository provides the flexibility to easily de-
fine, maintain, and schedule more complex database maintenance tasks.

Chapter 5

Backup and Recovery


Reviewing Your Flight Logs

The DBA Cockpit Backup and Recovery option displays
all the information that a DBA needs to verify the
successful operation of the database backup
and log archival processes.

SAP applications store their data in the underlying database, in this case DB2. Objects like application and technology tables, ABAP programs, and user data (customizations and transactional data) are all stored in DB2 database objects. Therefore, administrators need to protect the database, so it can be recovered in case of a problem (such as a user or application error) or a major catastrophe (such as a disk crash).

Two components work in conjunction to protect the database: database backups and transaction log files. Backups can recover the database up to the point when the backup was taken. Log files can recover the remaining transactions that were committed after the backup. Transaction log files work in a circular mode by default, meaning that they are not archived; old transactions are overwritten by new ones.

The first step to enable the necessary protection to the data is to enable archival logging for the SAP database. To do that, the DBA needs to configure the LOGARCHMETH1, and optionally, LOGARCHMETH2, database configuration parameters and take a full offline backup of the database. When archival logging is enabled for the database, the transaction logs are automatically archived to the method(s) specified in these parameters, so no manual intervention is necessary.
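A minimal sketch of these two steps from the DB2 command line (the archive and backup paths are only examples; substitute your SAP system ID for <SID>):

   UPDATE DB CFG FOR <SID> USING LOGARCHMETH1 DISK:/db2/<SID>/log_archive

   -- The database is now in backup pending state; a full offline
   -- backup activates archival logging:
   BACKUP DATABASE <SID> TO /db2/<SID>/backup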

The second step to protect the database is to take backups regularly, so that fewer transaction log files must be applied to recover the database to the latest consistent point in time. Backing up the database is a recurring task. The best way to program these backups is to schedule jobs in the DBA Cockpit.

Finally, it is also crucial that the DBA validates the backup procedure by check-
ing the message logs, and most importantly, through programmed restores on a
test system. We recommend a restore test once every three months.

The Backup Strategy


The DBA needs to define a backup strategy for the SAP database based on differ-
ent options and considerations:

• Frequent database backups—More backup images mean less dependency on transaction logs for recovery. However, backups consume resources like memory and I/O, so they can affect the performance of the database. Using the backup utility in throttled mode is an option to alleviate the performance impact.

• Few database backups—With few database backups, the DBA relies more on the transaction logs for a possible database recovery. The problem with this approach is that there could be many transaction logs to apply, so recovery could take longer.

• Use of full and incremental backups—The combination of full and incremental backups can be used in the backup strategy. Incremental backups tend to be faster and smaller than full backups, but do not contain all the data in one single backup image to restore the database completely.


Another fact worth noting is that the backup utility spawns multiple DB2 agents to increase parallelism during the backup procedure. DB2 uses at most one agent per table space. In some cases, therefore, you might consider separating some tables (usually the largest ones) into their own table spaces, to increase parallelism during the backup of the database.

Utility Throttling
As a DB2 administrator, you must perform some regular maintenance tasks to
keep the database running at optimal performance while protecting its data. Many
of these maintenance tasks are performed through DB2 utilities (in both online
and offline mode), such as these:

• BACKUP, a data backup utility covered in this section
• REORG, which defragments tables and indexes
• RUNSTATS, for statistics collection
• REBALANCE, which rebalances extents among all containers of a table space

The fact that these utilities must execute regularly causes a dilemma for the
DBA, since they consume system resources and can affect the performance of the
database. You can opt to run these utilities in an offline maintenance window, but
in a 24x7 world, such windows are getting smaller or are even non-existent.
Therefore, in most cases, these utilities must execute online with user transac-
tions. Your challenge is to minimize their impact on the system.

DB2 provides a feature called adaptive utility throttling, which allows maintenance utilities to run concurrently with other user transactions, while keeping their system resource consumption within controlled limits. Before running utilities in throttled mode, the DBA has to enable a database manager configuration parameter called UTIL_IMPACT_LIM. This parameter dictates the overall limit, at the instance level, on the impact that all throttled utilities together can have. Values for this parameter range from one to 100, and the unit of measure is a percentage of allowable impact on the workload within this DB2 instance. For example, setting this parameter to 100 means that all utilities run in unthrottled mode.


Once this instance-wide limit is specified, you can run utilities in throttled mode
when they are started or after they have started running. To run in throttled mode,
a utility must also be invoked with a non-zero priority. For example, to run the
backup utility in throttled mode, specify the following option when launching the
BACKUP command:

backup database <SID> util_impact_priority 60

The UTIL_IMPACT_PRIORITY option accepts values between one and 100, with
one representing the lowest priority, and 100 the highest. If the
UTIL_IMPACT_PRIORITY keyword is specified with no priority, the backup will
run with the default priority of 50. If UTIL_IMPACT_PRIORITY is not specified, the
backup will run in unthrottled mode.

If another utility were running at the same time in throttled mode (for example, a RUNSTATS with priority 50), the combined impact of both utilities would stay within the UTIL_IMPACT_LIM limit. The utility with the higher priority would get more of the available resources.
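A sketch of how this looks from the command line (the utility ID shown is hypothetical; take it from the LIST UTILITIES output on your system):

   -- Set the instance-wide impact limit to 10 percent:
   UPDATE DBM CFG USING UTIL_IMPACT_LIM 10

   -- Find the ID of a running utility, then re-throttle it:
   LIST UTILITIES SHOW DETAIL
   SET UTIL_IMPACT_PRIORITY FOR 2 TO 60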

The DBA also has the option to specify the backup priority directly on the DBA
Cockpit, when the backup job is scheduled through the DBA Planning Calendar
(described in the following section). Again, this will only have an effect if the
UTIL_IMPACT_LIM (impact policy) has been set to a value other than 100. (SAP’s
standard configuration has this parameter set at ten percent.)

Scheduling Backups in the DBA Cockpit


As described in Chapter 4, database tasks are scheduled through the DBA
Planning Calendar. Four backup options are offered in the Action Pad:

• Database Backup into TSM—Back up to Tivoli Storage Manager.

• Database Backup to Device—Back up to a file system directory or tape device.

• Database Backup with Vendor Library—Back up to a different storage manager, such as Veritas NetBackup.


• Snapshot Backup—Back up using storage copy technology, such as Flash Copy and Split Mirror.

When one of these backup actions is dropped into the calendar, a new window, Schedule a New Action, pops up, as shown in Figure 5.1. Backup options are specified in this window.

Figure 5.1: Schedule a new backup action here.

In the Action Description area of this window, the DBA can redefine the action,
date, and time. In the Action Parameters tab, you can choose different options for
the backup, such as these for Backup Mode:


• Online—The database is available for other applications, but the backup image is not consistent. Log files must be applied to bring the database to a normal state, in case of recovery.

• Online Include Logs—The database is available for other applications, and the backup image produced contains all the information necessary to bring the database to a consistent state, should a recovery be necessary. There is no dependency on separate log files to bring the database to a normal state.

Note that online backups are only possible when log archival mode is enabled.
(Archive logging ensures that log files are saved when they fill up, and are not
reused.)

There are three options on the tab for Backup Type:

• Full—The entire database is backed up.

• Incremental—Only changes since the last full backup are copied.

• Incremental Delta—Only changes since the last successful backup (whether full or not) are copied.

The TRACKMOD database parameter needs to be set to YES to use the options for
incremental backups.
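For reference, a sketch of the equivalent CLP commands (the target directory is only an example):

   -- Required once before incremental backups can be taken:
   UPDATE DB CFG FOR <SID> USING TRACKMOD YES

   -- Online full backup whose image includes the needed log files:
   BACKUP DATABASE <SID> ONLINE TO /db2/<SID>/backup INCLUDE LOGS

   -- Online incremental backup (changes since the last full backup):
   BACKUP DATABASE <SID> ONLINE INCREMENTAL TO /db2/<SID>/backup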

Clicking the Compress checkbox in the tab means the backup image is created in
compressed format. There are also several optimization parameters on the tab:

• Number of Buffers—This is the number of backup buffers used for the backup operation.
• Buffer Size—This is the size of each backup buffer.
• Parallelism—This is the number of table spaces that can be read in parallel by the backup utility.


These parameters are not mandatory. If they are not specified, DB2 will choose optimal values itself. The remaining two parameters in the tab are as follows:

• Priority—The backup will run in throttled mode, with the priority specified, where one represents the lowest priority, and 100 represents the highest. Throttling allows the administrator to regulate the performance impact of the backup operation.

• Device/Directory—This is the device or directory path specified to store the backup image created.

Once the backup options are specified, add the task to the DBA Planning Calen-
dar using the Add button. The backup can be monitored in the Job Log tab,
shown in Figure 5.2.

Figure 5.2: The Job Log screen can be used to monitor the progress of the backup.


If you have access to a terminal and can log into the machine where the database
server resides (as db2<SID> or another user with the necessary authority), you can
display more details about the backup job using the LIST UTILITIES command.
The output of this command, shown in Figure 5.3, includes interesting informa-
tion about all utilities that are running at the moment. For backups, some of the
information presented includes the database name, description, state, throttling
mode, and percentage complete.

Figure 5.3: The LIST UTILITIES command can also be used to monitor the progress of the
backup.

With database backups, archived logs, and validation tests, you now provide the necessary protection for the SAP database. Should a recovery be needed, you would restore the database using one of the backup images, and then apply the logs up to the time of interest.

Multi-partition Databases
To handle a multi-partition database, the DBA Cockpit offers the options to back
up each partition individually or to back up all of them in one single job. On DB2
9.5, there is a new feature called single system view, in which a multi-partition

database is managed similarly to a single-partition one, in terms of backups and configuration of parameters.

To back up all partitions of a database in a single job, just select the option All in
the Partition field (only available when the database is multi-partition), displayed
in the Schedule a New Action window.

Advanced Backup Technology


Today, SAP databases often grow very large. It is common to see SAP systems on the order of tens of terabytes. With databases this large, instead of using the regular DB2 backup utility, you might use backups taken with storage technologies like FlashCopy. In this case, DB2 provides a means to suspend I/O operations on the database, so storage commands can be applied to copy the LUNs used by the SAP database. For recovery, DB2 provides the db2inidb command.

On DB2 9.5, this type of backup is integrated into the backup utility, so less configuration is necessary. This backup option is available in the DBA Planning Calendar (Action Pad) as Snapshot Backup.

The DB2 Recovery History File


Every DB2 database contains a history file. This file is used to record database activities, such as database and log file backups, database restores, table space management, table loads, and table reorganizations. The contents of this file can be displayed with the LIST HISTORY command. For example, this is the command to see information about all backups for a particular database:

list history backup all for db <SID>

The recovery history file can grow very quickly, so DBAs might have to prune
some of the old information. This is done with the command PRUNE HISTORY.

The num_db_backups and rec_his_retentn database parameters can be used to manage the amount of information kept in the history file. The num_db_backups

parameter specifies the number of backups to keep active in the history file. Once
the number of backups exceeds this value, the oldest backups are marked as ex-
pired in the history file. The entries for these expired backups are then deleted
from the history file when the next backup is performed.

The rec_his_retentn parameter specifies the number of days to keep backup information in the history file. This configuration parameter should be set to a value compatible with the value of num_db_backups. For example, if num_db_backups is set to a large value, rec_his_retentn should be large enough to support that number of backups. The PRUNE HISTORY <timestamp> command will remove backup information from the history file for backups older than the value of this parameter. If this value is set to -1, num_db_backups determines the expiration of history file entries.

If the following command is run, the archived log files will also be removed
from the archive storage location:

PRUNE HISTORY <timestamp> AND DELETE

However, the DB2 backup images still need to be manually deleted after they expire.

With DB2 9.5, DBAs can also set the database configuration parameter
auto_del_rec_obj=on, which enables DB2 to automatically do the following oper-
ations when either the PRUNE HISTORY AND DELETE or BACKUP commands are
run:

• Delete expired backup images from the file system.
• Delete corresponding expired archived log files from the archive media.
• Prune the expired entries from the history file.

Setting these parameters allows DB2 9.5 administrators to simply schedule nor-
mal backups. When those backups complete, DB2 will automatically maintain
the required number of backups, archived logs, and history entries, and automati-
cally delete anything that has become expired.
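A sketch of such a retention setup, with illustrative values (keep three backup generations and 30 days of history):

   UPDATE DB CFG FOR <SID> USING NUM_DB_BACKUPS 3
   UPDATE DB CFG FOR <SID> USING REC_HIS_RETENTN 30
   UPDATE DB CFG FOR <SID> USING AUTO_DEL_REC_OBJ ON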


The Backup and Recovery Overview Screen


The Backup and Recovery Overview screen displays information about past backups and log archival activity. These two types of information are presented on two different tabs: Database Backup and Archived Log Files.

The Database Backup Tab


The Database Backup tab, shown in Figure 5.4, contains information about backups that were scheduled and have been processed. By default, the DBA Cockpit displays backup information from the last 30 days. (This can be changed to see older information.) For details, double-click the backup execution of choice.

The DBA Cockpit also categorizes the backup executions using a color scheme.
An execution displayed in green means that it finished successfully. If a backup
execution is red, it was aborted with an error. The DBA can then diagnose the
backup failure using the Diagnostics option of the DBA Cockpit (discussed in
Chapter 8).

Figure 5.4: The execution status of previous database backups can be checked here.

The Archived Log Files Tab


The Archived Log Files tab, shown in Figure 5.5, displays information about transaction log files that were moved from the active log directory to the archive destination, such as a different directory or a storage manager (TSM, Veritas, Legato, etc.). Notice that the DBA Cockpit also displays the log chain to which a log file belongs. A log chain is a DB2 feature used to control log files that have the same name but different contents. With this feature, the DBA doesn't need to manually control which logs to apply in a recovery scenario. DB2 manages this automatically.

Figure 5.5: The log files that have been archived are displayed here.

Logging Parameters
The Logging Parameters screen shows information about the transaction log
files. Transaction log files are used to keep track of database transactions. These
files are used in recovery scenarios (crash or roll-forward recovery). The follow-
ing recovery scenarios require log files:

• Crash recovery—This is the procedure to recover the database to a consistent state when the database aborts in an abnormal way. Log files containing non-committed transactions, or transactions that have been committed but not yet applied to the table spaces, are used in this case. These logs are called active log files.

• Roll-forward recovery—This procedure is used when recovering the database from a backup image and log files. In this scenario, log files that have been archived are also used. (A command-line sketch of this scenario follows this list.)
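As a minimal sketch of a roll-forward recovery, assuming a hypothetical backup timestamp and directory:

   RESTORE DATABASE <SID> FROM /db2/<SID>/backup TAKEN AT 20090701120000

   -- Replay archived and active logs, then return to normal state:
   ROLLFORWARD DATABASE <SID> TO END OF LOGS AND COMPLETE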

This screen is divided into multiple tabs: Log Directory, ARCHMETH1, and possibly ARCHMETH2. (The ARCHMETH2 tab will be displayed when you have enabled two methods to archive log files.) The archival methods are controlled by the database configuration parameters LOGARCHMETH1 and LOGARCHMETH2.

The Log Directory


The information displayed in the Log Directory tab, shown in Figure 5.6, relates to active and online archive log files. Some of the data provided here includes the directory name, the number of files and directories, the first active log file (for crash-recovery purposes), the size of the log files, and the number of primary and secondary logs.

From here, you can also monitor the space used and available in the file system.
This monitoring is necessary to avoid “log full” error messages, when there is no
more space available for new log files. You can set the blk_log_dsk_ful database
configuration parameter, so that the DB2 database manager will repeatedly at-
tempt to create the new log file until the file is successfully created, instead of re-
turning “disk full” errors.
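A one-line sketch of that setting:

   UPDATE DB CFG FOR <SID> USING BLK_LOG_DSK_FUL YES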

For performance reasons, the log directory should also be mounted on separate
disks, preferably on RAID 10 LUNs.

Figure 5.6: The Log Directory tab displays information about log files, as well as log space
usage.


The ARCHMETH1 Tab


The ARCHMETH1 tab, shown in Figure 5.7, displays information about the archi-
val method specified in the database configuration (parameter LOGARCHMETH1).
This parameter specifies the location of the archived transaction logs.
LOGARCHOPT1 specifies log archive options, which are used when the log files
are archived to Tivoli Storage Manager.

Figure 5.7: The ARCHMETH1 tab displays information about the logs saved by the log ar-
chive method specified.

Summary
Database backups and log file management are essential activities to protect the SAP system against unplanned situations. Planned situations, such as system cloning, also rely on backups and log file activities. Such activities can be easily scheduled and monitored through the DBA Cockpit, as described in this chapter.

Chapter 6

Configuration
Optimize Your Flight Patterns

The default DB2 configuration settings are optimized for
normal SAP workloads and provide the best possible
performance “out of the box.” The DBA Cockpit provides
DB2 DBAs with an SAP interface to further tune their
configurations for the unique workloads on their SAP systems.

Database configuration is one of the areas that greatly influences database performance. Therefore, it is a major area of database performance tuning. A well-configured database environment can ensure the database manager runs smoothly in a multi-user system and responds to each user application quickly, using the resources available on the database server effectively and efficiently. With DB2 LUW, four areas of a database environment can be configured:
With DB2 LUW, four areas of a database environment can be configured:

1. Operating system environment variables
2. DB2 profile registry variables
3. Database manager configuration parameters
4. Database configuration parameters

All of the variables and configuration parameters in these areas have default val-
ues supplied by DB2. However, the DB2 default values will usually not meet the


performance required by SAP systems. Therefore, SAP provides its own set of
default or recommended values for these variables and configuration parameters.
SAP default values can be obtained from the following SAP notes, one for each
supported DB2 version, respectively:

• 584952, “DB6: DB2 UDB ESE Version 8 Standard Parameter Settings”
• 899322, “DB6: DB2 9 Standard Parameter Settings”
• 1086130, “DB6: DB2 9.5 Standard Parameter Settings”

In these notes, some parameter values are recommended by SAP and should not be changed. Other parameter values, though, are initial values that should be adjusted according to the particular system workload, as well as the hardware resources available. These SAP default values will also be set automatically during the SAP installation.

On one hand, a large number of variables and configuration parameters can be tuned. This book only discusses the most important ones. Configuration parameters also vary between DB2 versions; our discussion is based on the latest DB2 version, 9.5. On the other hand, many configuration parameters can simply be set to AUTOMATIC, so that DB2 can automatically set the parameter values, or tune them dynamically based on the system workload and resources.

Autonomic computing is one of the strategic directions of the DB2 product. The ultimate goal for DB2 is to become self-configuring, self-healing, self-optimizing, and self-protecting. Hence, DB2 becomes a zero-administration database. By sensing and responding to situations that occur, autonomic computing shifts the burden of managing a database system from database administrators to DB2 technology. This greatly reduces the total cost of ownership (TCO).

In addition to these DB2 variables and configuration parameters, the DBA Cockpit also provides maintenance tools for other areas of database and system configuration. All of these tools are organized in the following sections under the Configuration menu:


• Overview
• Database Manager
• Database
• Registry Variables
• Parameter Changes
• Database Partition Groups
• Buffer Pools
• Special Table Regarding RUNSTATS
• File Systems
• Data Classes
• Monitoring Settings
• Automatic Maintenance Settings

The Overview Screen


Choose Configuration → Overview, and you will be able to view the general information about the database and the operating system, as shown in Figure 6.1.

The general information about the database includes the database name, the instance name, the database version, and the fix pack level. If the database is installed as a High Availability Disaster Recovery (HADR) database, the detailed HADR status information will also be displayed here.

Figure 6.1: The Overview screen shows general information about the database and the
operating system.


The Database Manager


A number of configuration parameters are defined on the database manager (or
instance) level. These parameters control the database execution environment,
database diagnostics and monitoring options, database security, system resource
utilization (CPU and memory), network connectivity, etc.

For some database manager configuration parameters, the database manager must
be stopped (db2stop) and restarted (db2start) for the new parameter values to
take effect. Other parameters can be changed online. These are called
configurable online configuration parameters. Some parameters support the
AUTOMATIC value, which means the database manager will tune the runtime
value automatically based on the current system workload and the system re-
source available.

Choose Configuration → Database Manager, and you will be able to view and maintain database manager configuration parameters. All parameters are nicely grouped in a tree structure, as shown in Figure 6.2. To view parameters belonging to a particular group, such as Memory, click its name to expand the tree.

Each parameter has a short description, a technical name, the current value, and the deferred value. The current value is the active value stored in memory, while the deferred value is the value stored in the configuration file on disk, which will not take effect until the next time the database manager (or instance) is restarted.


Figure 6.2: View and maintain database manager configuration parameters here.

Note that some parameter values are associated with a unit. For example, the parameter INSTANCE_MEMORY is measured in units of 4KB. If this parameter is set to 250,000, its actual value is 250,000 multiplied by 4KB, i.e., 1,000MB.

To change a parameter value, follow these steps:

1. Double-click the parameter that you want to change. Detailed information about this parameter is displayed in a new group box in the lower part of your screen, as shown in Figure 6.3.

Figure 6.3: Detailed information about a parameter is displayed here.


2. Click the “Display <-> Change” button, and enter the new configuration parameter values. Some configuration parameters are enabled for automatic value adjustment. In this case, the AUTOMATIC checkbox is displayed. If you select it, the value will automatically be maintained by DB2. You can also enter a new value, which will be used as the starting value for automatic adjustment.

3. Click the Execute Change button to confirm the change.
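The same kind of change can also be made from the DB2 command line; a minimal sketch, using one of the parameters from Table 6.1:

   -- Apply the new value immediately for an online-configurable parameter:
   UPDATE DBM CFG USING MON_HEAP_SZ AUTOMATIC IMMEDIATE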

Table 6.1 lists some parameters that require tuning after the system is installed.
For other parameter settings, please refer to the SAP notes mentioned earlier in
this chapter.

Table 6.1: Examples of Database Manager Configuration Parameters

INSTANCE_MEMORY
Description: This parameter specifies the maximum amount of memory that can be allocated for a database partition.
Recommended value: <value> 4KB. The <value> should be the total amount of physical memory that is allowed to be consumed by the database manager. In a partitioned database, this is the memory allocated by a single partition.

SHEAPTHRES
Description: This parameter is an instance-wide soft limit on the total amount of memory that can be consumed by private sorts at any given time.
Recommended value: 0. When this parameter is set to zero, no private sort will occur. All sort operations should be done in the database shared memory, not in the agent private memory. Allocating sort heap from the database shared memory allows the sort heap size to be tuned by DB2 automatically.

MON_HEAP_SZ
Description: This parameter determines the amount of memory, in pages, to allocate for database system monitor data.
Recommended value: AUTOMATIC. The monitor heap can increase as needed until the INSTANCE_MEMORY limit is reached.

INTRA_PARALLEL
Description: This parameter specifies whether the database manager can use intra-partition parallelism.
Recommended value: This parameter should only be turned ON in SAP NetWeaver BW and related systems, such as APO and SEM, based on a single-partition database.

MAX_QUERYDEGREE
Description: This parameter specifies the maximum degree of intra-partition parallelism that is used for any SQL statement executing on this instance of the database manager.
Recommended value: 1 or <value>. If INTRA_PARALLEL = NO, this parameter should be one. Otherwise, use a value equal to the number of CPUs allocated to the database partition.

The Database
There are a large number of configuration parameters defined on database level.
Some parameters are informational, as they show the database attributes (such as
database codepage) and the database states (such as backup pending and
roll-forward pending). Most of the other parameters are configurable, as they are
used to control system resource utilization (CPU, memory, and disk I/O), transac-
tion logging, log file management, database automatic maintenance, database
high availability, and so on.

In a partitioned database (DPF), each partition is an independent runtime environment, because DPF is based on a shared-nothing architecture. Therefore, each partition of the same database has its own set of configuration parameters. The value for the same database configuration parameter could vary from partition to partition, although it is recommended to maintain uniform parameter values among all partitions belonging to the same database.

Like the database manager configuration parameters (DBM CFG), most of the database parameters (DB CFG) are configurable online. In addition, many parameters can simply be set to AUTOMATIC, so that DB2 will tune the values dynamically.

In particular, starting in V9.1, DB2 introduced a new memory-tuning feature that simplifies the task of memory configuration by automatically setting values for several memory configuration parameters. This feature is called the Self-Tuning Memory Manager (STMM). When enabled, the memory tuner dynamically distributes available memory resources among several memory consumers, including the sort heap, the package cache, the lock list, and the buffer pools, in response to significant changes in workload characteristics.

Choose Configuration → Database, and you will be able to view and maintain database configuration parameters. As you can see in Figure 6.4, all parameters are nicely grouped in a tree structure, similar to the database manager configuration parameters. The same interface layout is used to view and modify the parameter values.

Figure 6.4: All parameters are grouped in a tree structure.


You might also notice the little Show Value History icon beside the configu-
ration parameters in the Self-Tuning Memory Manager group. By clicking the
icon, you will see the value change history for the corresponding parameter. The
result for a parameter is displayed in a separate window. By default, the value
history information is displayed as a chart, as shown in Figure 6.5. To switch to a
tabular view, click the List button. To limit the history time frame, choose From
date and/or To date.

Figure 6.5: Clicking the Show Value History icon for an STMM configuration parameter dis-
plays a chart of value history information.

In a multi-partitioned database environment, each database partition has its own set of database configuration parameters. In general, we recommend that all database partitions have the same parameter values if the workload and the system resources are the same on these partitions.

With the DBA Cockpit, it is easy to compare the database configuration parameter settings for multiple partitions. On the Configuration: Database–Display screen, click the Compare button. Select the partitions that you want to compare in the Select Partitions to Compare pop-up window, and then click Compare. A Configuration: Database–Compare Partitions screen will be displayed, as shown in Figure 6.6.

Figure 6.6: Clicking the Compare button on the Database–Display screen displays this
comparison.


Table 6.2 highlights some important parameters related to database memory settings.

Table 6.2: STMM (Self-Tuning Memory Manager) Parameters

DATABASE_MEMORY
Description: This parameter specifies the amount of memory that is reserved for the database shared memory region. If this amount is less than the amount calculated from the individual memory parameters (for example, LOCKLIST, utility heap, buffer pools, and so on), the larger amount will be used.
Recommended value: AUTOMATIC. When it is set to AUTOMATIC, the initial database shared memory allocation is the configured size of all heaps and buffer pools defined for the database. The memory will be increased as needed.

LOCKLIST
Description: This parameter indicates the amount of memory that is allocated to the lock list. There is one lock list per database, and it contains the locks held by all applications concurrently connected to the database.
Recommended value: AUTOMATIC. This parameter is enabled for self-tuning.

MAXLOCKS
Description: This parameter defines a percentage of the lock list held by an application that must be filled before the database manager performs lock escalation.
Recommended value: AUTOMATIC. This parameter is enabled for self-tuning. The value of LOCKLIST is tuned together with the MAXLOCKS parameter; therefore, enabling self-tuning of the LOCKLIST parameter automatically enables self-tuning of the MAXLOCKS parameter.

PCKCACHESZ
Description: This parameter is used for caching sections of static and dynamic SQL statements on a database.
Recommended value: AUTOMATIC. This parameter is enabled for self-tuning.

SORTHEAP
Description: This parameter defines the maximum number of memory pages to be used for sorts.
Recommended value: AUTOMATIC. This parameter is enabled for self-tuning. Self-tuning of SORTHEAP is allowed only when the sort heap is allocated from the database shared memory, i.e., for shared sorts.

SHEAPTHRES_SHR
Description: This parameter represents a soft limit on the total amount of database shared memory that can be used by sort memory consumers at any one time.
Recommended value: AUTOMATIC. This parameter is enabled for self-tuning. See more details under the database manager configuration parameter SHEAPTHRES in Table 6.1.

SELF_TUNING_MEM
Description: This parameter determines whether the memory tuner will dynamically distribute available memory resources, as required, between memory consumers that are enabled for self-tuning.
Recommended value: ON. This parameter enables memory self-tuning. Because self-tuning redistributes memory between different memory areas, there must be at least two memory areas enabled for self-tuning to occur.
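On the command line, enabling STMM and placing these consumers under its control would look roughly like the following sketch (standard DB2 CLP syntax; <DBSID> is a placeholder for your database name):

# run in a shell as the instance owner
db2 update db cfg for <DBSID> using SELF_TUNING_MEM ON
db2 update db cfg for <DBSID> using DATABASE_MEMORY AUTOMATIC
db2 update db cfg for <DBSID> using LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC
db2 update db cfg for <DBSID> using SORTHEAP AUTOMATIC SHEAPTHRES_SHR AUTOMATIC
db2 update db cfg for <DBSID> using PCKCACHESZ AUTOMATIC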

Registry Variables
Two types of variables can be maintained in the Registry Variables section of da-
tabase configuration: operating system environment variables and DB2 profile
registry variables. These variables control how to start up and run the database
manager. Only a handful of variables need to be set in the OS environment. Most
variables can now be set in the centrally controlled DB2 profile registry.


Environment Variables
In an SAP database instance, you will find some DB2-related OS environment
variables being defined in db2<dbsid>, <sid>adm, and sap<sid> user profiles,
such as these:

DB2INSTANCE=db2<dbsid>
INSTHOME=/db2/db2<dbsid>

These OS environment variables are defined automatically during the SAP in-
stance installation, and will not be changed. Hence, no ongoing maintenance is
required on the environment variables.

Registry Variables
Registry variables are centrally controlled by DB2 profiles. There are four profile registries:

• The DB2 Instance Level Profile Registry—Most of the DB2 environment variables are placed within this registry. The environment variable settings for a particular instance are kept in this registry. Values defined at this level override their settings at the global level.

• The DB2 Global Level Profile Registry—If an environment variable is not set for a particular instance, this registry is used. This registry is visible to all instances pertaining to a particular copy of DB2 ESE. One global-level profile exists in the installation path.

• The DB2 Instance Node Level Profile Registry—This registry level contains variable settings specific to a database partition in a partitioned database environment. Values defined at this level override their settings at the instance and global levels.

• The DB2 Instance Profile Registry—This registry contains a list of all instance names associated with the current copy. Each installation has its own list. You can see the complete list of all the instances available on the system by running db2ilist.

DB2 configures the operating environment by checking for registry values and
environment variables, and resolving them in the following order:

1. Environment variables set with the set command (or the export
command on UNIX platforms).

2. Registry values set with the instance node level profile (using the db2set
-i <instance name> <nodenum> command).

3. Registry values set with the instance level profile (using the db2set -i
command).

4. Registry values set with the global level profile (using the db2set -g
command).
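For illustration, here is a minimal sketch of how the db2set command maps to these levels. The variable DB2_WORKLOAD (discussed below) is used purely as an example, and db2<dbsid> and partition number 0 are placeholders:

db2set -all                               # list variables from all registry levels
db2set -g DB2_WORKLOAD=SAP                # global level profile
db2set -i db2<dbsid> DB2_WORKLOAD=SAP     # instance level profile
db2set -i db2<dbsid> 0 DB2_WORKLOAD=SAP   # instance node level, partition 0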

Choose Configuration → Registry Variables, and you will be able to view these variables.


As you can see in Figure 6.7, the environment variables and the DB2 profile reg-
istry variables are displayed on the same screen. They are identified by different
“scopes.”

Figure 6.7: Environment variables and the DB2 profile registry variables are displayed on
the same screen.

You will notice that the first registry variable is DB2_WORKLOAD, which is an aggregate variable. An aggregate registry variable allows several registry variables to be grouped as a configuration that is identified by another registry variable name. As of DB2 9.5, the only valid aggregate registry variable is DB2_WORKLOAD. When DB2_WORKLOAD is set to the value SAP, the DB2 engine implicitly sets a list of registry variables, depending on the current DB2 version and fix pack, to values that are optimized for SAP systems. These variables, shown in Figure 6.8, can influence different areas of the database manager, such as the DB2 optimizer, locking behavior, table object creation, and MDC usage.

These variables and their respective values are chosen by the SAP and IBM DB2
development team to optimize the database manager for SAP applications, based
on the team’s customer experience and knowledge of the SAP applications. They
cannot be changed in the DBA Cockpit screen because they are not intended to
be tuned by customers. Some of these variables are even undocumented. The
workload values can be superseded by explicitly setting these registry variables
to different values. However, this should only be done on the advice of SAP
global support or IBM DB2 support, to address a specific need. In general, SAP
customers only need to ensure DB2_WORKLOAD is set to SAP.
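To verify the setting from the command line, a quick check might look like this sketch (on a correctly configured SAP system, the second command should simply return SAP):

db2set DB2_WORKLOAD=SAP   # normally already set by the SAP installer
db2set DB2_WORKLOAD       # verify the current value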


Figure 6.8: Here is the DB2 workload optimized for SAP.

Parameter Changes
Choose Configuration → Parameter Changes, and you will be able to view the current and previous settings of the registry variables, database manager, and database configuration parameters. You can also view the date and time of each change. This feature helps DBAs keep track of the parameter change history.

The initial screen, shown in Figure 6.9, only displays the active values for the
variables and configuration parameters. To see the change history, select History
in the Parameter field. You can also specify the period of the change history, as
well as the Parameter Type, which can be set to either Registry Variables, DB
Manager, or Database.


The parameter change history data is collected by a standard DBA job, “Collec-
tion of DB/DBM Config History,” on an hourly basis. The data collected is saved
in an SAP table and can be displayed on this screen.

Figure 6.9: The initial Parameter Changes screen displays the active values for the variables
and configuration parameters.

Database Partition Groups
In an SAP NetWeaver BW or BW-based system, you can use the DB2 database partitioning feature (DPF) to deploy the SAP database on multiple partitions. This supports the high performance and scalability required by a large data warehouse.


In a multi-partitioned database, you can use a partition group to define a set of partitions on which a table space can be created. A table created within that table space can be distributed across this group of partitions.

Choose Configuration → Database Partition Groups, and you will be able to view and maintain database partition groups.

By default, the SAP installation program (SAPinst) will only create a database with a single partition (partition number 0000). Therefore, all predefined partition groups will initially be defined on this partition, as shown in Figure 6.10.

Figure 6.10: All predefined partition groups will be initially defined on parti-
tion 0000.

After you add a new partition, you can use the Edit button on this screen to mod-
ify the existing partition group, or use the Add button to define a new partition
group. You can also use the Delete button to remove a partition group on which
no table space exists.

Buffer Pools
A buffer pool is an area of main memory that has been allocated by the database
manager for the purpose of caching table and index data as it is read from disk. A
DB2 database can have one or multiple buffer pools.


Unlike other memory pools in the database, a buffer pool is considered a database object, and its size is not controlled by a configuration parameter. To create a new buffer pool, change the size of an existing buffer pool, or delete an existing buffer pool, choose Configuration → Buffer Pools.

By default, the SAP installation program (SAPinst) creates a default buffer pool named IBMDEFAULTBP, with a 16K page size, as shown in Figure 6.11. Buffer pools usually take up the biggest portion of the database shared memory. You can set a buffer pool either to a fixed size or to AUTOMATIC. If the buffer pool size is set to AUTOMATIC, and STMM is enabled, the actual buffer pool size will be tuned by DB2 automatically, in response to workload requirements.

Figure 6.11: This is the default buffer pool.

When you create a new table space, you need to associate it with a buffer pool of
the same page size. Therefore, if you have table spaces created on different page
sizes, you have to create multiple buffer pools corresponding to those page sizes.
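For example, here is a hedged sketch of creating an additional buffer pool for a hypothetical 8K page size (BP8K is an invented name) and of switching the default buffer pool to automatic sizing:

db2 "CREATE BUFFERPOOL BP8K SIZE AUTOMATIC PAGESIZE 8 K"   # hypothetical 8K pool
db2 "ALTER BUFFERPOOL IBMDEFAULTBP SIZE AUTOMATIC"         # let STMM size the default pool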

In a partitioned database, a buffer pool will be created on all database partitions by default. However, you can also specify the partition group in which a buffer pool will be created.

To view the buffer pool size, page size, associated partitions, and table spaces,
double-click the buffer pool from the list shown in Figure 6.11. Detailed informa-
tion about the buffer pool will be displayed, as shown in Figure 6.12.


Figure 6.12: The buffer pool’s detailed information is displayed here.

Special Tables Regarding RUNSTATS
Before a SQL statement can be executed by DB2, it needs to be compiled, so that an execution plan can be generated by the DB2 optimizer. To generate an efficient execution plan, the optimizer needs to have intimate knowledge about the tables involved in the SQL statement, such as the table size, table cardinality, and data distribution. This information is called table statistics. Table statistics change when table content is modified, for example, when new rows are inserted, or existing rows are updated or deleted. Hence, the table statistics need to be refreshed from time to time, so that the optimizer has up-to-date information on which to base the execution plan.

Table statistics can be refreshed manually by using DB2’s RUNSTATS command, or automatically by using DB2’s automatic statistics feature. The updated statistics will be stored in the database catalog tables.


We recommend that you enable the DB2 automatic statistics feature for a SAP system. To do this, either update the database configuration parameter AUTO_RUNSTATS, or select Configuration → Automatic Maintenance Settings in the DBA Cockpit.

There are some special tables whose cardinality and content can vary greatly at run time. These tables are called volatile tables. For volatile tables, statistics collected by RUNSTATS often become inaccurate. Therefore, the statistics of these tables should not be collected and should not be used by the optimizer. Volatile tables are marked in the DB2 system catalog, so that the optimizer can identify them. The automatic statistics feature will not apply to these tables.

To mark a table as volatile, use DB2’s ALTER TABLE…VOLATILE command. Alternatively, in the DBA Cockpit, select Space → Single Table Analysis → Runstats Control.
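A minimal sketch of the SQL form follows; the table name SAPR3.VBDATA is used only as an example of a typical volatile SAP table, and your schema may differ:

db2 "ALTER TABLE SAPR3.VBDATA VOLATILE CARDINALITY"       # mark the table as volatile
db2 "ALTER TABLE SAPR3.VBDATA NOT VOLATILE CARDINALITY"   # revert, if ever needed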

To see a list of volatile tables, choose Configuration → Special Tables Regarding RUNSTATS. A list similar to Figure 6.13 will be displayed.

Figure 6.13: A list of volatile tables is shown here.


File Systems
Choose Configuration → File Systems, and a list of file systems is displayed, as shown in Figure 6.14. The information displayed on this screen can help you to determine how much free space is available in these file systems. (This function is not available for systems monitored using a remote database connection.)

Figure 6.14: The File Systems screen can help you to determine how much free space is
available.

Data Classes
A data class is used by the SAP DDIC (Data Dictionary) to define the physical
area of the database (i.e., the table space) in which the table should be created.
On DB2 LUW databases, each data class is mapped to two table spaces, the Data
Tablespace and the Index Tablespace.

This function can be used to maintain the relationship between a data class and
DB2 table spaces. It is only available for SAP ABAP systems.

Choose Configuration → Data Classes. A list of SAP ABAP data classes and their corresponding DB2 table spaces is displayed, as shown in Figure 6.15. On this screen, you can click the Edit button to modify the data class and table space mapping, the Add button to create a new data class as well as its association to table spaces, or the Delete button to drop a data class.


Figure 6.15: A list of SAP ABAP data classes and their corresponding DB2 table spaces is
displayed here.

A table space must be created before it can be associated with a data class. To create a table space from the DBA Cockpit, select Space → Tablespaces. A new data class name must also conform to the SAP naming convention. (For details, see "SAP Note 46272.")

Monitoring Settings
Choose Configuration → Monitoring Settings to set the path of the user-defined function (UDF) library, and to change the retention periods for the history data.

There are a few DB2 UDFs developed by SAP. They are required for monitoring remote DB2 database systems through the DBA Cockpit. These UDFs are packaged in a shared library file named db6pmudf, which is part of the SAP kernel. On the Configuration: Monitoring Settings screen, you need to set the path for this library, as shown in Figure 6.16. Normally, this path should be the standard SAP kernel path, "/usr/sap/<SID>/D*/exe." To verify it, click the Test button to test the UDF library loading.


Figure 6.16: Set the path for the UDFs’ library here.

During the SAP installation, SAP defines a number of standard DBA jobs, such
as “Collection of DB Performance History,” “Collection of DB/DBM Config
History,” and “Collection of Bufferpool History.” The history data collected by
these jobs will be saved to internal SAP tables. You can specify the retention pe-
riod of history data on the screen shown in Figure 6.17.

Figure 6.17: Specify the retention period of history data here.

It is also a good practice to archive the DB2 diagnostic log file "db2diag.log" regularly, so that it will not grow to an unmanageable size. Do this by clicking the Switch Weekly checkbox for this file. The current "db2diag.log" will be saved under a new name with a timestamp, and a new "db2diag.log" file will be created automatically.

Automatic Maintenance Settings
DB2 provides automatic maintenance capabilities for performing database backups, keeping statistics current, and reorganizing tables and indexes as necessary, to reduce the cost of database administration. Performing maintenance activities on your databases is essential in ensuring that they are optimized for performance and recoverability. These automatic maintenance capabilities are fully integrated into the SAP DBA Cockpit. To enable and configure these capabilities, use the functions provided by Configuration → Automatic Maintenance Settings.

Automatic Backups
Automatic database backups help to ensure that your database is backed up prop-
erly and regularly, so that you don’t have to worry about when to back up or
know the syntax of the DB2 BACKUP command. An automatic database backup
can be either online or offline. It is triggered by predefined conditions, based on
the considerations of database recoverability and performance impact. Using the
Starting Conditions area of the Automatic Backup tab shown in Figure 6.18, you
can choose a predefined condition or customize the condition by specifying the
number of days and amount of log space created since the last backup. You also
need to specify the backup media.

Figure 6.18: Choose a predefined starting condition or customize the condition here.


In general, automatic backup should be enabled on small databases or on development and test systems. For a production database, schedule a backup job at a specified time and frequency through the DBA Cockpit’s Jobs → DBA Planning Calendar.

Automatic RUNSTATS
Automatic statistics collection can improve the database performance by main-
taining up-to-date table statistics. This feature is fully supported and works very
well with SAP systems. Therefore, you should enable automatic RUNSTATS for
all SAP systems. Automatic statistics collection is a background process that runs
approximately every two hours. The process evaluates all active tables, to check
whether or not tables require statistics to be updated. It then schedules RUNSTATS
jobs for tables whose statistics are out of date. The background RUNSTATS jobs al-
ways run in online and throttled mode, which means they do not affect the normal ac-
cess to the tables.

By default, automatic RUNSTATS jobs collect the basic table statistics with distri-
bution information and detailed index statistics using sampling. (The RUNSTATS
command is issued, specifying the WITH DISTRIBUTION and SAMPLED DETAILED
INDEXES ALL options.) You can customize the type of statistics collected by en-
abling statistics profiling, which uses information about previous database activ-
ity to determine which statistics are required by the database workload. You can
also customize the type of statistics collected for a particular table by creating
your own statistics profile for that table. As you can see in Figure 6.19, volatile
tables are excluded from automatic RUNSTATS.
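In other words, the statement issued in the background is equivalent to the following sketch (the table name is an example only):

db2 "RUNSTATS ON TABLE SAPR3.MARA WITH DISTRIBUTION AND SAMPLED DETAILED INDEXES ALL"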

Figure 6.19: Volatile tables are excluded from automatic RUNSTATS.


Automatic REORG
Automatic reorganization determines the need for reorganization on tables and
indexes by using the REORGCHK formulas. It periodically evaluates tables and in-
dexes that have had their statistics updated, to see if reorganization is required. If
so, it internally schedules reorganization on the table and indexes.

Automatic reorganization on a table is always performed in offline mode, which means any write access to the table currently being reorganized is not allowed. On the other hand, automatic reorganization on an index can be performed in either online or offline mode, which can be selected on the tab shown in Figure 6.20.

Since the reorganization of large tables will generally take a long time, you should enable automatic reorganization only on small tables. SAP has defined a policy to select tables for automatic reorganization. The policy is based on table size. The default table filter size is set to 1GB, although this can be changed on the Automatic REORG tab. A filter size of 1GB allows tables smaller than that to qualify for automatic reorganization. Larger tables need to be reorganized manually, using the DBA Cockpit’s Jobs → DBA Planning Calendar or Space → Single Table Analysis. If you want to specify a more granular table filter policy, you need to use the DB2 Control Center tool.

Figure 6.20: Set automatic reorganization options here.


All automatic maintenance activities will only occur within a specified time pe-
riod, called the maintenance window. An online maintenance window is used to
specify the time period for performing online activities, such as automatic
RUNSTATS, online automatic database backup, or online automatic index reorga-
nization. An offline maintenance window is used to specify the time period for
performing offline activities, such as offline automatic database backup and
offline table reorganization. Both online and offline maintenance windows can be
defined on the General tab of the Automatic Maintenance Settings screen, shown
in Figure 6.21.

Figure 6.21: Set automatic maintenance settings here.

Summary
Database configuration is critical to system performance, and to ensure smooth
operations. In an SAP environment, the database configuration must be tuned to
meet the demands of SAP applications, and to be consistent with SAP system
configuration, such as SAP Data Classes and the ABAP Dictionary (DDIC).


The SAP DBA Cockpit provides easy tools to help maintain every area of data-
base configuration and the database-specific SAP configuration. The joint
IBM-SAP development team has made a huge effort to optimize DB2 databases
for SAP applications and to enhance the autonomic computing features of DB2
database. The goal is to make a DB2 database a zero-administration database, so
that DBAs can concentrate on higher value work, and thus lower the total cost of
ownership (TCO).

Chapter 7

The Alert Monitor
Avoiding Air Turbulence

The DBA Cockpit for DB2 allows SAP DB2 administrators to monitor their CCMS database alerts and thresholds in the same transaction used for database diagnostics and performance monitoring. Thus, the cockpit makes it easier to maintain the health of your SAP database.

The CCMS alert monitors for the DB2 database are integrated into the Alerts section of the DBA Cockpit. All database alert monitoring, the alert message history, and some alert configuration parameters are now easily accessible here. The monitors include thresholds for disk space consumption, memory utilization, buffer pool quality, locking, database backup, and log archival. If the database exceeds the defined thresholds, emails can automatically notify administrators, who can then implement corrections before the system is affected. First, however, background monitoring must be activated. Execute transaction RZ21 and choose Technical Infrastructure → Local Method Execution → Activate Background Dispatching. Then, return to RZ21; in the Methods section, select Method Definitions and click the Display Overview button. Search for, and double-click, either CCMS_OnAlert_Email or CCMS_OnAlert_Email_V2. Configure the Parameters tab with the proper email sender, recipients, subject, etc. The specified recipients will then be alerted via email when an alert threshold is crossed.


The CCMS system in SAP comes with pre-configured alert categories, parame-
ters, and thresholds for the DB2 database. Experienced users may modify this
configuration or change threshold values in transaction RZ21. In most cases,
though, we recommend keeping the default values for these thresholds.

The Alert Monitor
The DBA Cockpit provides an overview of all database alert monitor elements under Alerts → Alert Monitor. This screen, shown in Figure 7.1, displays an easily readable, color-coded, hierarchical list of alert categories and monitors for all DB2 database partitions on the current SAP system. Elements operating in their normal range appear with green squares. The warning thresholds are flagged yellow, and the error thresholds are flagged red.

Figure 7.1: The Alert Monitor displays a clear overview of overall system health.

Administrators can drill down through the categories to the individual monitor elements, see status messages, and compare current values with the assigned threshold values. For more detail, load the CCMS Monitor Sets (transaction RZ20), and drill down through SAP CCMS Monitor Templates → Database → DB2 Universal Database for NT/UNIX. You will be able to view the complete monitor element details for the database.

The Alert Message Log
SAP saves the history of alert messages in the Alert Message Log, shown in Figure 7.2. By default, this screen displays all of the error and warning alerts from the previous week, ordered by date and time. The Summary section provides an overview of the number of alerts for each category and severity. The Current Selection provides the ability to filter alert logs based on Severity, Category, Object, and Attribute. Historical alert messages can be accessed for very specific objects, to identify any trends or recurring issues.

Figure 7.2: The Alert Message Log displays the history of alert messages.


Alert Configuration
The Alert Configuration screen provides access to the alert threshold properties
from transaction RZ21. The main screen, shown in Figure 7.3, provides a list of
all alert monitors and threshold values.

Figure 7.3: The Alert Configuration screen displays a list of database alert monitor elements
from SAP CCMS.

Double-click any individual row to see the detailed information on that monitor
element, including threshold value details and data collection schedules. Through
this screen, shown in Figure 7.4, you can enable or disable email notification for
certain monitor thresholds, and activate or deactivate monitor elements. For ele-
ments not related to performance (such as the backup elements), the alert thresh-
olds can also be configured within the DBA Cockpit. However, for any of the
elements related to performance, attribute and threshold value maintenance must
be done within transaction RZ21.


Figure 7.4: Alert thresholds can be changed here for elements not related to database per-
formance.

Summary
The integration of the SAP CCMS database monitor elements into the DBA
Cockpit alert monitor simplifies the process of proactive problem analysis. Ev-
erything is easily visible within a single transaction, and automatic alert notifica-
tion ensures that the proper people are notified as soon as warning and error
thresholds are crossed. This allows problems to be caught and prevented before
they affect the system.

Chapter 8

Database Diagnostics
Dealing with Air Turbulence

The database diagnostics in the DBA Cockpit for DB2 give DB2 database administrators an integrated set of powerful tools for problem determination, application optimization, and reference documentation.

One of the tasks that a DBA must perform every day involves monitoring the health of the database to look for possible problems and inconsistencies. No database is perfect, and administrators will face a challenge sooner or later. What differentiates database managers from one another is the way they deal with challenges, based on the mechanisms and tools available. In that sense, DB2 and SAP offer a variety of tools that can help the DBA quickly identify, diagnose, and solve a problem.

The deep integration of DB2 and SAP is showcased again in the DBA Cockpit’s
diagnostic option. It is composed of many tools that you can use to troubleshoot
diverse problems, such as database security, query performance, concurrency,
and inconsistencies between ABAP and database objects.


The Audit Log
The Audit Log, shown in Figure 8.1, keeps track of actions executed against the database from the DBA Cockpit. Such changes include SQL statements (whether executed successfully or not); configuration changes made at the database manager (instance) and database level; and table space creation and deletion. Changes performed using native DB2 tools (the CLP, for instance) are not tracked here.

Figure 8.1: The Audit Log displays information about actions performed at the database level.

By default, changes that happened in the current week are displayed. However, the calendar can be used to choose a different week. DBAs can also change the number of days of messages displayed. The fields listed in the Audit Log are explained in Table 8.1.


Table 8.1: Audit Log Fields

Date: Start date of action
Time: Start time of action
System: System affected
Action: Type of action
Command: Command (SQL, add table space, delete table space, edit configuration)
Object: Object modified
User: SAP user who performed the action

The EXPLAIN Option
As described in Chapter 2, the DBA Cockpit offers many different views under the Performance options, which allow the DBA to quickly analyze whether the system is performing optimally. Under normal circumstances, the key database performance indicators displayed by ST04 give the DBA a very good idea of what needs to be tuned in the database to maintain good overall performance in the system.

However, there are special situations that can bring the performance down for a
particular application, or sometimes even affect the performance of the entire
system. In such cases, the DBAs must apply their knowledge to analyze and re-
solve the performance issue using diagnostic tools, historic data for comparison,
and their best judgment.

One of the most important parts of performance troubleshooting is isolating the problem. In many cases, a performance problem can be isolated to a poorly performing SQL statement. This is the granularity for which you should aim. Once the bad SQL is discovered, you can use diagnostic tools to analyze it deeply. The EXPLAIN option of the DBA Cockpit plays a major role here. In DB2, every DML (select, insert, update, and delete) statement sent by an application goes through a compilation phase. One of the components involved in this phase is the DB2 cost-based optimizer.

For query processing, one of the tasks performed by the optimizer is to develop di-
verse strategies, called access plans, to process the SQL statement. The optimizer
attributes a certain cost (optimizer’s best estimate of the resource usage for a query)
to each plan, using an arbitrary IBM unit called timerons. The optimizer then
chooses the plan with the lowest cost, and follows its execution strategy.

Of course, the optimizer chooses the plan based on the information available, so
providing correct information is vital for a good optimizer decision. Some data
used by the optimizer include the following:

• Statistics in system catalog tables. (If statistics are not current, update them
using the RUNSTATS command or configure the AUTO RUNSTATS feature
through the DBA Cockpit.)
• Configuration parameters.
• Bind options.
• The query optimization class.
• Available CPU and memory resources.

The execution strategy can include such factors as which objects will be used to
execute the query (index or table scan), the join methods (nested loop, hash,
merge, etc.), whether the query involves multiple tables, the access order of the
objects, and the use of auxiliary tables.

The EXPLAIN option of the DBA Cockpit allows the administrator to generate the
access plan used by the optimizer in a particular query. Based on this informa-
tion, you can study the internal characteristics of the objects involved, and take
the proper actions. Some of these actions can include the following:

• Statistics collection of objects included in the plan—Outdated statistics information might lead the optimizer to choose the wrong plan. For instance, it might decide to run a full table scan rather than using an index because it doesn’t have the correct row count information.


• Table or index reorganization—Tables and indexes get fragmented over time. This might prevent the optimizer from being able to select the optimal access plan. For example, suppose a query that has been performing well suddenly starts to take a long time to execute. By checking the EXPLAIN output, you conclude that the optimizer is not choosing the best index for the query. In this case, the optimizer might be picking a different index because the optimal one has become too fragmented and contains more levels than the current one. Reorganizing the original index will fix this problem.

• Creation of new indexes—By looking at the EXPLAIN output and analyzing the predicates involved in the query, you might find that a new index will improve execution time. The index advisor can also be used in these situations, which allows DB2 itself to offer recommendations about new indexes. (This is explained in more detail in the next section of this chapter.)

To access the EXPLAIN facility, choose Diagnostics → EXPLAIN. (Alternatively, you can call it from the Performance option, via Performance → Applications and Performance → SQL Cache, or from transaction ST05.) Using Diagnostics → EXPLAIN, you can paste in a SQL statement, click the Explain button, and retrieve an access plan similar to the one shown in Figure 8.2.

Figure 8.2: The EXPLAIN option allows you to display the SQL access plan.


Notice that the information in Figure 8.2 is displayed in a tree format, containing
the operators and objects used in the query. The cost of the access plan is also
displayed in timerons, as well as the optimization level and the degree of
parallelism.
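Outside the DBA Cockpit, a comparable plan can be produced with DB2’s native explain tools. The following is only a sketch; it assumes the explain tables have already been created in the database, and the query is an arbitrary example:

db2 connect to <DBSID>
db2 "EXPLAIN PLAN FOR SELECT * FROM SAPR3.T000"   # populate the explain tables
db2exfmt -d <DBSID> -1 -o explain.out             # format the most recent plan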

A set of extra options is provided via buttons at the top of the screen. If you need to study the access plan in more detail, or if you need to collect data to send to SAP support, use these buttons, as follows:

• Details—When you click this button, you will see very detailed information about the query execution plan. CPU speed, buffer pool size, optimization level, the optimized statement, and the estimated number of rows are just some of the details displayed. If you select an operator, only information related to that operator is displayed.

• Optimizer—You might be able to change the access plan by specifying optimizer parameters, like the optimization level and query degree. These options can be accessed through the Optimizer button. Optimization levels range from zero to nine, and define how much optimization effort (use of resources) is necessary to generate the access plan. The higher the number, the more resources the optimizer uses to create the access plan. The default optimization level is five, which is adequate in most cases. Higher optimization levels might be used in very complex queries, but the compilation time and memory usage can increase significantly, as well. In this option, however, you might increase the optimization level and re-explain the query just to see if it would make a difference; no changes are actually made.

Another parameter that can be changed for testing purposes is the query
degree. A degree of one (the default) means that no intra-partition
parallelism (parallelism inside the partition) is used. A value greater than
that might activate intra-partition parallelism, provided that this
functionality is activated at the database manager.


• DB Catalog—When you select a database object like a table or an index and click this button, a window with the object’s characteristics is displayed. This information is retrieved from the DB2 system catalog, which is a set of internal tables that contain metadata information about the database objects. Some tables used in this option are SYSCAT.TABLES, SYSCAT.INDEXES, and SYSCAT.COLUMNS.

• Dictionary—This button displays the ABAP dictionary definition for the table chosen in the access plan.

• Test Execute—This button lets you execute a query using different optimizer options (which are set using the Optimizer button), so you can test the real execution time of the query. Other pieces of information, like buffer pool accesses and lock waits, are also provided.

• Tree Info—Additional information can be displayed or hidden in the access plan tree with this button.

• Edit—This button allows you to edit the original query and explain it again.

• Collect—Sometimes, even an experienced DBA might need to seek help diagnosing a poorly performing query. The DBA Cockpit provides a very convenient way to collect the necessary information to send to SAP support. By clicking just one button, you can collect information such as the DB2 version, configuration files, table structure, statistics, and the explain information. Each piece of information is copied to a file, so you can zip the files up and quickly send them to SAP support. This type of functionality is also provided through the DB2 support tool, called db2support.

The New Version of EXPLAIN
The DBA Cockpit also offers a new version of the EXPLAIN facility, developed with WebDynpro technology. In this case, the EXPLAIN option is displayed in a web browser, as shown in Figure 8.3. However, it contains basically the same options as the traditional version.

Figure 8.3: Here is an access plan displayed in the new version of EXPLAIN.

Missing Tables and Indexes
The DBA Cockpit also allows administrators to do a consistency check in the SAP system. In SAP, objects like tables, indexes, and views are defined in the ABAP dictionary, and then the necessary objects are created in the underlying database.

There might be some situations, however, in which the ABAP dictionary is not in sync with the database. Some objects might be defined in the dictionary, but don’t exist in DB2, and vice versa.

The administrator can use the Diagnostics option of the DBA Cockpit to check if
there are any inconsistencies between the ABAP dictionary and the database.


Access this option by choosing Diagnostics → Missing Tables and Indexes. The results will look similar to Figure 8.4.

Figure 8.4: Discrepancies between the ABAP dictionary and the database are displayed here.

The information displayed in Figure 8.4 includes the following:

• Objects missing in the database—The ABAP dictionary might contain objects that do not exist in the database. This could be caused by an error in the database during creation of the object, or by somebody with enough privileges dropping an object after its creation. In this scenario, the DBA Cockpit allows the creation of the missing object in the database, thus avoiding transport errors.

• Unknown objects in the ABAP dictionary—Objects that are not known by the ABAP dictionary do not belong to SAP. These are objects created directly at the database level. For this reason, the list displayed here is purely informational; no action can be taken from the DBA Cockpit. (This usually should not occur in SAP systems, because all objects should always be created in the ABAP dictionary.)


• Inconsistent objects—The definitions of objects (tables, indexes, and views) in the ABAP dictionary might not be the same as in the database catalog, and vice versa. The database contains an internal catalog with the definition of all database objects, and in some cases, this might not be in sync with the ABAP dictionary. The administrator should review these inconsistencies and take action.

• Other checks—Other consistency checks are performed, including checking whether the primary indexes of the tables defined in the ABAP dictionary were created exclusively in the database instance, and whether there are objects in the SAP base tables that cannot be described (or not described completely) in the ABAP dictionary.

• Optional indexes—This check is related to secondary indexes. It reports mismatches between the ABAP dictionary and the database regarding secondary indexes.

Click the Refresh button to run a new consistency check.

The Deadlock Monitor
DB2 uses internal memory structures called locks to provide concurrency control and prevent uncontrolled data access by multiple applications. Locks must be acquired by applications when they need to access or modify data. Occasionally, DBAs can face a situation called a deadlock, when two or more applications are trying to acquire locks that are not compatible with those already held by other applications. Each application is waiting for a lock that is owned by a different application, but at the same time, the waiting application itself is holding a lock that is wanted by others. In this situation, none of the applications can advance until one gives up a lock. Concurrency and performance problems will inevitably occur in systems with frequent deadlocks.

Although the consequence of many deadlocks is reflected in the database performance, the reason for their existence is mostly attributed to the applications that are accessing and modifying the database. There are application development guidelines that specifically deal with avoiding deadlocks, including these:

• Perform frequent commits so locks can be released.
• Avoid lock escalations by not locking too many rows.
• Use less-strict isolation levels.
• Avoid too many reads before a write.
• Modify tables in a certain order in all applications.

On SAP systems, most deadlocks are caused by customized programs (Z-programs), rather than standard SAP code.

DB2 has mechanisms that monitor and resolve deadlock situations in specific in-
tervals, dictated by the database configuration parameter DLCHKTIME. When a
deadlock is detected, the database manager resolves the situation by randomly
picking one of the participating applications (the victim) to roll back, which al-
lows the other application to continue.
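The check interval can be inspected and adjusted from the command line; a hedged sketch follows (the value 10000, i.e., 10 seconds, is an example only, and <DBSID> is a placeholder):

db2 get db cfg for <DBSID> | grep -i DLCHKTIME    # current deadlock check interval (ms)
db2 update db cfg for <DBSID> using DLCHKTIME 10000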

In a system with many deadlocks, it is important to understand what might be causing these undesirable situations. The Deadlock Monitor option in the DBA Cockpit can help analyze past deadlocks and study the SQL statements involved, so that corrective actions can be taken. This feature is especially useful when the deadlock can be reproduced. The DBA Cockpit displays the deadlock occurrences in a user-friendly way, making them very straightforward to analyze and diagnose.

Here are the steps to follow to display the deadlocks that must be analyzed:

1. Create the Deadlock Monitor.
2. Enable the Deadlock Monitor.
3. Analyze the information collected by the Deadlock Monitor.
4. Stop the Deadlock Monitor.
5. Reset or drop the Deadlock Monitor.


Creating the Deadlock Monitor
It is necessary to create a Deadlock Monitor if one does not already exist in the system. Choose Diagnostics → Deadlock Monitor and click the Create Deadlock Monitor button. If the system does not have a monitor, a wizard will help you create one, as shown in Figure 8.5. The wizard asks for some inputs, like the buffer size for the monitor (28,000 pages is recommended) and the table space used for the tables written to by the monitor (a dedicated table space is recommended).

Figure 8.5: Use the wizard to create the Deadlock Monitor.

Enabling the Deadlock Monitor
Once the Deadlock Monitor is created, it needs to be started. To do this, click the Start Monitor button.

Analyzing the Information Collected
After running the system for a while, you can check whether deadlocks were detected in the system using the Performance option of the DBA Cockpit. (See Chapter 2 for more information.) If so, the information about these deadlocks can be analyzed using the historic information collected by the Deadlock Monitor. The information is displayed when the screen is refreshed or the Monitor is stopped.

As shown in Figure 8.6, the deadlocks recorded are displayed, and each occur-
rence can be expanded into more detail. Information on each occurrence is con-
tained in a root folder called “Deadlock Victim: <application that got rolled
back>.” Inside the folder, there is a summary of the agents involved in the dead-
lock. Information about the agents includes the client PID, host, authorization ID,
and waiting lock information (table, type, mode, etc.). Special arrow buttons can
be used to expand and collapse the detailed information.

Figure 8.6: This is an example of a deadlock situation captured by the Deadlock Monitor.

To find out about the SQL statements involved in the deadlock, click the State-
ments History button. This information can also be viewed separately, for each
agent involved in the scenario. Click the Agent Details button, and the Agent
Details window opens. This window has two tabs:


• Locks Held—This tab shows information about the locks held by the agent and the locks that are needed (waiting).

• Statement History—This tab lists the SQL statements executed by this particular agent.

The SQL statement history is one of the most important pieces of information to
diagnose a deadlock scenario. As you can see in Figure 8.7, it contains the full
stack of SQL statements executed by the agent in the transaction involved in the
deadlock. By looking at the statements involved, the administrator can easily find
which ABAP program or report generated the SQL, and then talk to the devel-
oper of the application. The problem might not necessarily be caused by the pro-
gram found here, but the developer and the administrator can work together to
see if more commit points can be introduced so locks are released faster, or
whether more significant changes need to be made.

Figure 8.7: The statement history information can also be viewed here.


Stopping the Deadlock Monitor
Gathering deadlock information constantly will cause overhead in the system. Therefore, the administrator should only enable the Deadlock Monitor for the period of time needed to get information about deadlocks. To stop it, click the Stop Monitor button.

Resetting or Dropping the Deadlock Monitor
When a new study of deadlocks is necessary, the administrator can opt to delete the old information collected (assuming that the old occurrences have been resolved) by clicking the Reset button. Afterwards, only the relevant, new information is displayed. The administrator can also drop the monitor by choosing the Monitor menu option and selecting Drop Monitor. If you drop the monitor, it will have to be created again if another deadlock analysis is necessary.

The SQL Command Line
DB2 offers a variety of tools that can be used to access the database. Probably the most frequently used of these tools is the Command Line Processor, also known as the CLP. The CLP is a character-based interface that accepts multiple commands, including SQL statements.

The DBA Cockpit offers an interface to the CLP, which allows administrators to run SQL statements and some administrative commands. To access the interface, select Diagnostics → SQL Command Line. The administrative commands that can be executed through this interface are the ones supported by the ADMIN_CMD stored procedure. This procedure is used by applications to run administrative commands using the SQL CALL statement. Figure 8.8 shows an example of the commands that can be executed in this interface.
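For reference, here is a minimal sketch of such a call issued directly through the CLP (the table name is illustrative only):

db2 "CALL SYSPROC.ADMIN_CMD('RUNSTATS ON TABLE SAPR3.T000 WITH DISTRIBUTION AND SAMPLED DETAILED INDEXES ALL')"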


Figure 8.8: You can execute SQL and administrative commands in the SQL Command
Line interface.

The administrative commands in the following list are supported by ADMIN_CMD. This interface will most commonly be used for SQL statements, since most administrative operations can be performed graphically using other options of the DBA Cockpit. Commands not supported by the ADMIN_CMD procedure will fail. SQL statements that modify data are not permitted, either.


• ADD CONTACT
• ADD CONTACTGROUP
• AUTOCONFIGURE
• BACKUP (online only)
• DESCRIBE
• DROP CONTACT
• DROP CONTACTGROUP
• EXPORT
• FORCE APPLICATION
• IMPORT
• INITIALIZE TAPE
• LOAD
• PRUNE HISTORY/LOGFILE
• QUIESCE DATABASE
• QUIESCE TABLESPACES FOR TABLE
• REDISTRIBUTE
• REORG INDEXES/TABLE
• RESET ALERT CONFIGURATION
• RESET DATABASE CONFIGURATION
• RESET DATABASE MANAGER CONFIGURATION
• REWIND TAPE
• RUNSTATS
• SET TAPE POSITION
• UNQUIESCE DATABASE
• UPDATE ALERT CONFIGURATION
• UPDATE CONTACT
• UPDATE CONTACTGROUP
• UPDATE DATABASE CONFIGURATION
• UPDATE DATABASE MANAGER CONFIGURATION
• UPDATE HEALTH NOTIFICATION CONTACT LIST
• UPDATE HISTORY

The Index Advisor
As explained earlier, the lack of proper indexes on a table can cause severe performance problems for a query or a set of queries. Depending on how heavily the table is accessed, the performance degradation can spread to the entire system.

Thankfully, the DBA Cockpit comes to the rescue again. One of the most interesting features provided in the Diagnostics option is the Index Advisor. The Index Advisor is a subset of the DB2 Design Advisor. It is used to help you find better indexes to support your workload. You can use the Index Advisor to create virtual indexes and to let DB2 recommend indexes for a SQL statement.


Indexes Recommended by DB2
Click the Recommend Indexes button in the Index Advisor to have DB2 recommend indexes for the SQL statement specified in the text field.

Creating Virtual Indexes
A virtual index is a user-defined index that only exists “virtually” within the Index Advisor. It does not yet exist in the database. To create a virtual index in the Index Advisor, click the Add Virtual Index button. As shown in Figure 8.9, a new window pops up, in which you specify the schema, table, and columns that are part of the virtual index.

Figure 8.9: Use the Index Advisor to define a virtual index.

After defining virtual indexes, you can explain the query again and have the optimizer consider the virtual indexes, as well as the existing ones, when building the access plan. If the optimizer selects a virtual index (whether user-defined or recommended by the Index Advisor), you can create such an index in the database with the touch of a button.

In Figure 8.10, for example, the Index Advisor is recommending one new index
to support the execution of the query, and there is one user-defined virtual index.

Figure 8.10: The EXPLAIN option is shown here with existing, recommended, and
user-defined indexes.

Right beneath the indexes is an EXPLAIN button, which can be selected to re-explain the query using different options:

• Only existing indexes
• Existing and recommended indexes
• Existing, recommended, and user-defined indexes


You can compare the plans and the costs (in timerons), and based on the EXPLAIN
outputs, decide whether or not to create the indexes in the database and the
ABAP dictionary. To do that, just click the appropriate button (the magic wand)
next to the recommended index, and fill out the index description information.

The Cumulative SQL Trace
SAP business applications, such as ERP, CRM, and SRM, run on top of SAP NetWeaver, which is the SAP application platform. All SAP business applications are database-independent, so that access to the data is transparent. The core component of the SAP NetWeaver platform is the Web Application Server. It is the component that directly interfaces with the database. Therefore, it has a layer of code that abstracts the differences of the native databases, and provides a common database interface to higher layers of SAP code. This abstraction layer is called the Database Support Layer (DBSL). SQL statements coming from business applications go through the DBSL and are translated into native DB2 SQL (in most cases).

For this reason, tracing the DBSL layer can give the DBA a good idea of the
SQL statements that can be affecting the performance of the database. SAP pro-
vides a way to use a cumulative trace of the database interface, so that the infor-
mation collected can be analyzed by the administrator later.

To use the cumulative trace, you must first activate it. There are two different
ways to do this:

• Activate the trace dynamically via profile parameter. Run transaction RZ11 and set the profile parameter dbs/db6/dbsl_cstrace = 1.

• Activate the trace using an environment variable: DB6_DBSL_CSTRACE=1.

Note that you might have to restart the SAP system if all work processes are to be
traced. Configuration through the profile parameter is dynamic, but not
permanent.
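A sketch of the environment-variable method for a UNIX <sid>adm login, assuming a Bourne-type shell:

# as <sid>adm, before starting the work processes to be traced
export DB6_DBSL_CSTRACE=1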


All SAP processes that use the database interface and are executed from this point on write trace data to the table DB6CSTRACE in the table space PSAPUSER1D. The data collected can be analyzed directly in the DBA Cockpit, by selecting Diagnostics → Cumulative SQL Trace. Alternatively, you can run report RSDB6CSTRACE using transaction SA38, and analyze the data from there.

No statements are displayed when the trace has never been activated. After the
trace is activated and SQL statements are being logged, click the Refresh button
to refresh the window.

The actions PREPARE, EXECUTE, and FETCH are summed up in tabs, as shown in
Figure 8.11, and can be evaluated separately.

Figure 8.11: Trace information collected by the Cumulative SQL Trace facility helps DBAs
in their performance monitoring activities.

To display more detailed information, double-click a line, or select a line and click the Details icon. The following information is displayed:

• Statement information—Information is provided about the SQL statement, the application server where the statement was executed, and all of the ABAP reports in which the statement can be found. The source code of the ABAP report can also be accessed from here.

• Time histograms—Histograms display the distribution of execution times for the selected SQL statement.

From this detailed view, the administrator can also run the EXPLAIN facility: click the EXPLAIN button to display the access plan. (For more information on how to activate the cumulative SQL trace, refer to “SAP Note 139286.”)

The DBSL Trace Directory


SAP provides two other ways to trace the Database Support Layer (DBSL), in
addition to the cumulative trace option: the sequential DBSL trace and the dead-
lock DBSL trace. The cumulative trace is suitable for performance analysis work
performed over long periods of time (where information is aggregated). The se-
quential and deadlock traces are mostly used for shorter periods of time, when
the problem has been isolated to a certain degree.

The Sequential DBSL Trace


The sequential DBSL trace logs all important function calls sent from the data-
base interface in R/3 programs (for example, disp+work, tp, R3trans, and so on)
to the database. Trace data is logged in log files at the operating system level. By
default, the trace files are stored in the /tmp/TraceFiles directory on UNIX sys-
tems or in the \usr\sap\TraceFiles folder on Windows systems.

The sequential trace can be activated using three different methods:

• Using transaction SM50 for individual work processes (disp+work)—From SM50, select the work process to be traced, and choose Process → Trace Components. Then, select trace level 3 and the component database (DBSL). Run the transaction. The trace information will be logged in the work directory of the instance.

• For all processes of a LOGON session—This method allows tracing for different SAP processes, like disp+work, tp, and saplicense. The administrator enables a set of environment variables for the <sid>adm user. Some variables include DB6_DBSL_TRACE, DB6_DBSL_TRACE_DIR, DB6_DBSL_TRACE_FLUSH, and DB6_DBSL_TRACE_STRING (a sketch follows below).

• Using SAP profile parameters—The trace can be activated using the profile parameter dbs/db6/dbsl_trace = <tracelevel> (where 3 is the highest trace level). The remaining optional parameters are set with the above-mentioned environment variables.

Note that for all of these methods, the trace directory must exist and be accessible. Refer to “SAP Note 31707” for more details on how to activate the sequential DBSL trace.
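
As an illustration, a LOGON-session activation on UNIX might set the variables as follows (a sketch only; the variable names come from the list above, and the trace level and directory values are examples):

   # As user <sid>adm (Bourne/Korn shell syntax)
   export DB6_DBSL_TRACE=3                       # trace level 3 is the highest
   export DB6_DBSL_TRACE_DIR=/tmp/TraceFiles     # directory must exist and be accessible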

The Deadlock Trace


As explained earlier, deadlocks can affect the concurrency and performance of a
database. DB2 provides internal mechanisms to alleviate these unwanted
scenarios.

The DBA Cockpit’s Deadlock Monitor can help analyze the occurrences of dead-
locks. SAP provides another way to track deadlocks. The DBSL deadlock trace
can be enabled in the following ways:

• Dynamically activate the DBSL deadlock trace for all work processes via
transaction RZ11, by changing the profile parameter
dbs/db6/dbsl_trace_deadlock_time = <seconds>. SAP recommends a time
interval of 20 to 26 seconds. The other parameter is
dbs/db6/dbsl_trace_dir = <tracepath>.

• Activate the trace for all processes of a LOGON session. Set the following
environment variables for user <sid>adm:
DB6_DBSL_TRACE_DEADLOCK_TIME = <time in seconds> and
DB6_DBSL_TRACE_DIR = <path>.

The default trace path is /tmp/TraceFiles for UNIX and \\sapmnt\TraceFiles for
Windows. (Refer to “SAP Note 175036” for more information about the DBSL
deadlock trace.)
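
A session-level activation on UNIX might therefore look like this (illustrative values only; the variable names are the ones given above):

   # As user <sid>adm (Bourne/Korn shell syntax)
   export DB6_DBSL_TRACE_DEADLOCK_TIME=20    # detection interval in seconds
   export DB6_DBSL_TRACE_DIR=/tmp/TraceFiles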

To access information on the sequential DBSL trace and the DBSL deadlock trace, choose Diagnostics → DBSL Trace Directory in the navigation frame of the DBA Cockpit.

Figure 8.12 shows that the trace directory is set to the default, /tmp/TraceFiles. A
subdirectory <SID> is created under the trace directory, which is where the trace
files are generated. Notice that there are sequential trace files
(TraceFile<Appl-ID>.txt) and Deadlock Trace files (DeadlockTrc<App-ID>.txt)
in this directory, since both traces are using the default directory. To see the con-
tents of each file directly from here, double-click it.

Figure 8.12: You can see the trace files generated here.

Trace Status
SAP provides three different ways to trace the Database Support Layer: the cumulative SQL trace, the sequential DBSL trace, and the deadlock trace. These traces work independently of each other, so one trace can be active while the others are disabled, or all three can be active at the same time.

You can check if a cumulative DBSL trace is activated by checking whether new
records are being inserted in table sap<SID>.DB6CSTRACE. For sequential and
deadlock traces, check whether files are being updated or created in the trace di-
rectory. You can also check environment variables and profile parameters.
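
For instance, a quick manual check of the cumulative trace from the DB2 command line might look like this (illustrative only; substitute your system ID for <SID>). Run the statement twice a few minutes apart; a growing count means the trace is active:

   db2 "SELECT COUNT(*) FROM SAP<SID>.DB6CSTRACE"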

None of this is really necessary, however, because the DBA Cockpit provides a
very convenient way to check which DBSL traces are active at the moment. To
access this information, just select Diagnostics → Trace Status.

In the example in Figure 8.13, you can see that all three DBSL traces are cur-
rently activated. For the sequential trace, some options can be updated from this
same screen.

Figure 8.13: All three DBSL traces are currently activated here.

Besides checking the status of the traces, you can also activate and deactivate traces dynamically from this window by using the corresponding icon. The DBSL trace requires the Trace Level information before being activated, and the deadlock trace requires the Detection Interval value.

The Database Notification Log


So far, you have seen many different diagnostic tools that can help solve specific problems, like ABAP dictionary consistency, deadlocks, and performance. However, there are other areas in the database that can potentially report a warning or an error. It would not be feasible to have additional options in the Diagnostics folder for each one of them.

Therefore, to have an overall look at the health of the database, you can use two
diagnostic files. The first one is the Database Notification Log (also known as the
Administration Notification Log), which is located in the directory specified by
the DIAGPATH database manager configuration parameter. The name of the file is
<instance name>.nfy. Since it is an ASCII file, it can be opened directly on the
database server machine, using an editor.

The DB2 database manager writes the following kinds of information to the Ad-
ministration Notification Log:

• The status of DB2 utilities, such as REORG and BACKUP
• Client application errors
• Service class changes
• Licensing activity
• Log file paths and storage problems
• Monitoring and indexing activities
• Table space problems

A database administrator can use this information to diagnose problems, tune the
database, or simply monitor the database.

To access the Database Notification Log directly from the DBA Cockpit, choose Diagnostics → Database Notification Log. You can filter which messages get displayed by choosing the date and the starting time. You can also filter by the severity of the messages, which can vary from informational to error.

The level of detail reported in the Database Notification Log is controlled by the
NOTIFYLEVEL database manager configuration parameter. It ranges from zero to
four. The default value of three is appropriate for most systems.
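
For reference, the parameter can be inspected and changed with standard DB2 commands, as in this sketch (the value 3 shown is the default):

   db2 get dbm cfg | grep -i notify        # UNIX/Linux; shows the current NOTIFYLEVEL
   db2 update dbm cfg using NOTIFYLEVEL 3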

The Database Diagnostic Log


The other diagnostic file used by DB2 is the Database Diagnostic Log
(db2diag.log), which is probably the most important file used to troubleshoot
problems with the database. This file is also located in the directory specified by
the DIAGPATH database manager configuration parameter. Its level of detail is
controlled by the DIAGLEVEL database manager configuration parameter. The
DIAGLEVEL accepts values from zero to four as well, and the default value of
three is suitable for most systems. A higher level (four) should only be used for a
very short period of time and when explicitly requested by SAP support, since at
this level, DB2 records many details and the file can grow very quickly.

The db2diag.log file can grow very large even at the default level, so the administrator should archive it from time to time. DB2 offers a tool for this, called db2diag. Using the –A option (db2diag –A), the current db2diag.log file gets a timestamp appended to its name, and a new log file is created.

To access the contents of the db2diag.log file directly from the DBA Cockpit,
choose Diagnostics → Database Diag Log. You can also filter which messages
to display. Filters are available for date, time, and severity of the message.

The db2diag.log can also be accessed directly on the database server machine, since it
is an ASCII file. We usually recommend this method, since you can use OS com-
mands like grep (in UNIX/Linux systems) to apply other filters on the db2diag.log
file. Alternatively, you can use the db2diag tool, which provides grep-like and
tail-like functionality (among others), so more restrictive filters can be applied.
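
For example, the following db2diag invocations are commonly useful (a sketch; the filter value is illustrative):

   db2diag -g "level=Severe"   # grep-like: show only severe messages
   db2diag -f                  # tail-like: follow new messages as they are written
   db2diag -A                  # archive the current db2diag.log and start a new one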

DB2 Logs
The DB2 Logs option, shown in Figure 8.14, is available in DB2 version 9.5.
This option shows the combined information of the Database Notification Log,
the Database Diagnostic Log, and the Statistics Log (information generated by
the autonomic computing daemon, db2acd). There are several filters you can ap-
ply to display only a subset of the information:

• Log Facility—Choose from the main logs (Diagnostic and Notification), the Statistics Log, or all logs.
• Record Type—Choose from diagnostic records, event records, or all records.
• Minimum Impact Level—Choose from the options Critical, Immediate, Potential, Unlikely, None, or All.
• Messages From…To—Specify a range of date and time.

After applying the filters, press the Find button to refresh the messages.

Figure 8.14: Check the DB2 message logs here.

The Dump Directory


DB2 uses an internal mechanism to collect diagnostic information automatically
when database errors occur. The term used for the set of diagnostic files collected
is First Occurrence Data Capture, or simply FODC. The files captured are located in the directory specified by the DIAGPATH database manager configuration parameter.
These are some of the files collected:

• Database Notification Log
• Database Diagnostic Log (db2diag.log)
• DB2 event logs
• FODC packages
• DB2DART directory
• STMM log directory

To access these diagnostic files in the DBA Cockpit, select Diagnostics → Dump Directory. To open a specific file, just double-click it. As shown in Figure 8.15, the directory where these files are located (DIAGPATH) is also displayed. This gives you the option to log onto the machine where the database server resides, and access the files directly.

Figure 8.15: You can view the DB2 diagnostic files here.

The DB2 Help Center


You can find a lot of documentation about DB2 online, including all the manuals,
which can be downloaded in PDF format. However, you might spend a lot of
time searching for specific information, if each individual PDF file has to be
opened and searched. For this reason, IBM has created an online tool called the
DB2 Help Center (also known as the DB2 Information Center). The DB2 Help
Center, shown in Figure 8.16, provides a searching capability based on keywords
across all DB2 manuals, so minimal time is spent researching a command’s syn-
tax or getting information about a certain database feature.

The DB2 Help Center can be accessed through a browser. It can also be accessed directly from the DBA Cockpit by choosing Diagnostics → DB2 Help Center.

Figure 8.16: The DB2 documentation can be viewed directly from the DBA Cockpit.

Summary
Just like a pilot must deal with air turbulence during a flight, a DBA must deal
with problems that might occur in the database. The DBA Cockpit provides diverse tools to quickly diagnose the most common problems in an SAP database, such as ABAP consistency, SQL performance, and concurrency. For other problems, the DBA Cockpit provides a convenient way to access FODC information captured by DB2. Even novice DB2 administrators can easily access these vital files without needing to log onto the database server machine or know their locations.

Troubleshooting database problems can be intimidating and time-consuming for most administrators, but the DBA Cockpit leverages the most important DB2 diagnostic tools in an easy-to-use graphical interface. This significantly simplifies problem analysis and reduces resolution time.

Chapter 9

New Features
Flying into the Future

SAP and IBM continue to jointly develop new technology and integrate new DB2 features into the DBA Cockpit. DB2 SAP users benefit from having timely access to the latest and greatest database technology, integrated seamlessly into their SAP applications.

New technology is continuously being developed by both SAP and IBM, and the partnership between SAP and IBM allows SAP users to exploit this new technology immediately. For example, the latest versions of SAP and DB2 provide users more control over resource allocation and better support for performance tuning.

The previous chapters have outlined the current benefits of the integrated SAP
DBA Cockpit for DB2 LUW. In this chapter, you will see that these benefits con-
tinue to grow as the SAP-DB2 partnership continues to mature.

Workload Management (WLM)


DB2 9.5 includes new Workload Management (WLM) features that help ensure Service Level Agreements (SLAs) for overall system performance are met. Using WLM, client requests are grouped into workloads defined in DB2, based on connection attributes. These workloads are mapped to different service classes, which define the resource limits, alert thresholds, and priorities of those workloads within the database.

SAP Enhancement Package 1 for SAP NetWeaver 7.0 integrates DB2 9.5 Work-
load Management into the SAP kernel. SAP delivers a predefined WLM configu-
ration proposal, which defines workloads and service classes for each unique
work process type. This basic configuration can then be enhanced by creating
one additional workload and service class, which can prioritize work based on
the SAP user, SAP transaction, or SAP application server.
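
To give a feel for the DB2 objects underneath this configuration, a generic workload-to-service-class mapping might be defined with SQL like the following (a simplified sketch only, not the SAP-delivered configuration; the object names and priority values are invented for illustration):

   -- A service class with elevated agent and prefetch priority
   CREATE SERVICE CLASS SAP_DIALOG_HIGH
     AGENT PRIORITY -10 PREFETCH PRIORITY HIGH;

   -- Map connections from a given executable to that service class
   CREATE WORKLOAD WL_DIALOG APPLNAME('disp+work')
     SERVICE CLASS SAP_DIALOG_HIGH;

   -- Allow connections to run under the new workload
   GRANT USAGE ON WORKLOAD WL_DIALOG TO PUBLIC;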

Workloads and Service Classes


Configuration of the WLM settings is integrated into the DBA Cockpit, which
then launches the Workloads and Service Classes content area in a web
browser-based user interface, as shown in Figure 9.1. The Overview tab provides
a graphical view of the workloads and their associated service classes. The
Workloads tab displays the details of each workload, including service class
mappings and workload status. The Service Classes tab (shown in Figure 9.1)
displays the status of each service class, and its agent and pre-fetch priorities.

Figure 9.1: Workloads and service classes from DB2 Workload Management are now inte-
grated into SAP.

The service class priorities can be maintained within the General tab in the bot-
tom half of the display. The Statistics tab contains detailed information and
graphical histograms displaying performance characteristics of the applications
that have run within that service class.

Critical Activities
The Critical Activities screen, shown in Figure 9.2, provides an administrative in-
terface for the thresholds defined for WLM. There is one area to maintain and
configure thresholds for various database activities, and another to view histori-
cal information on threshold violations.

Figure 9.2: Threshold violations can be viewed within the Critical Activities screen.

The thresholds define the Service Level Agreements for the system. The thresh-
old violations allow administrators to quickly identify performance problems re-
lated to these SLAs, and then take measures to resolve any issues.
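
Under the covers, such a threshold corresponds to DB2 WLM DDL along these lines (an invented example, not an SAP-delivered threshold; it reuses the hypothetical service class from the earlier sketch):

   -- Stop any activity in the service class that runs longer than 30 minutes,
   -- and collect its activity data for later analysis
   CREATE THRESHOLD TH_LONG_RUNNERS
     FOR SERVICE CLASS SAP_DIALOG_HIGH ACTIVITIES
     ENFORCEMENT DATABASE
     WHEN ACTIVITYTOTALTIME > 30 MINUTES
     COLLECT ACTIVITY DATA STOP EXECUTION;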

Finally, the SAP WLM Setup Status provides an overview of the WLM configuration. It shows the status of the various WLM configuration steps and indicates which areas of WLM have been successfully set up.

BI Administration
DB2 provides several unique features to improve the performance and manageability of large SAP NetWeaver BW data warehouses. Here are two of the most important:

• Database Partitioning Feature (DPF)—DPF allows large database tables to be distributed across multiple database partitions, perhaps on multiple physical servers. This can drastically improve performance and manageability, because each partition only operates on the portion of the data that resides on that partition.

• Multi-Dimensional Clustering (MDC)—MDC allows the rows of SAP NetWeaver BW objects to be clustered on disk by multiple key columns. Each data page on disk only contains rows for one unique combination of MDC column values. Any query containing restrictions on any of these columns only reads the data pages containing relevant rows, and every row on those pages is relevant to the query. This drastically reduces I/O during large BW reports, and can improve performance by orders of magnitude.

Due to the importance of these features, SAP has integrated DB2 DPF and MDC
tooling into the DBA Cockpit, within a folder named either “BW Administra-
tion” or “Wizards,” depending on the release of SAP being used.

BI Data Distribution
DB2 table spaces are created in partition groups. When an SAP NetWeaver BW
system is installed on a partitioned DB2 database, the objects in the BW table
spaces may be distributed across multiple database partitions. If a DBA changes the partition layout (usually by adding partitions to the BW partition groups), the data residing in those table spaces needs to be redistributed, so that the same amount of data resides on each partition. This ensures that each partition has nearly the same workload when processing large BW reports. For example, if a partition group with four partitions is altered to add two new partitions, the data previously distributed across the original four partitions must be redistributed across all six partitions.

Data redistribution is an online utility. It is throttled by the UTIL_IMPACT_LIM database manager configuration parameter setting. However, it may require the movement of a large amount of data, and therefore take a very long time to run. It is usually recommended that you redistribute data during a maintenance window, or during periods of low system usage.

There are several steps involved in changing the partitioning scheme of a database:

1. Alter the partition group(s) to add or remove partitions.
2. Create temporary table space containers on all new partitions.
3. Redistribute the existing data across the new partition layout.
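
In native DB2 terms, steps 1 and 3 correspond roughly to the following commands (a sketch only; the partition group name and partition numbers are invented, and the container creation of step 2 must still happen before the redistribution):

   -- Step 1: add two new partitions to an existing partition group;
   -- table space containers for them are defined afterward (step 2)
   ALTER DATABASE PARTITION GROUP NGRP_BW_FACT
     ADD DBPARTITIONNUMS (4, 5) WITHOUT TABLESPACES;

   -- Step 3: spread the existing data evenly across all six partitions
   REDISTRIBUTE DATABASE PARTITION GROUP NGRP_BW_FACT UNIFORM;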

The BW Data Distribution Wizard in the DBA Cockpit provides a very simple
interface for this process, shown in Figure 9.3. First, you select the partitions for
each partition group from a grid of checkboxes. Next, the wizard defines tempo-
rary table space containers, based on the default SAP container paths. Finally,
you schedule the redistribution job to run during low system usage.

The wizard immediately alters the partition groups, creates the temporary table
space containers, and schedules the redistribution job in the DBA Planning Cal-
endar. Once the redistribution job completes, the partition layout changes are
done.

Figure 9.3: The BI Data Distribution wizard guides users through the steps required to re-
partition a DB2 SAP BW system.

The MDC Advisor


As mentioned earlier, DB2 Multi-Dimensional Clustering is a type of clustered
index that can be created on SAP NetWeaver BW objects. MDC indexes point to
extents rather than rows, and every row within an MDC extent contains the same
MDC column values. When large SAP NetWeaver BW reports are generated,
MDC indexes can drastically reduce the number of pages read from disk, because
each page only contains rows for the requested values of the MDC indexed
columns.

Creating proper MDC indexes can greatly improve SAP NetWeaver BW perfor-
mance. However, finding the best columns for the MDC index on a BW object
can be challenging. The MDC index will benefit performance most if its columns
are frequently used as query restrictions in the WHERE clause of many large BW
queries. Therefore, optimal MDC index selection requires you to search through
the SQL cache for BW object queries, and identify the frequently used columns.
Those columns will be the best candidates for MDC dimensions on that table.

Then, you must identify the cardinality (number of unique values) of each potential MDC dimension. High-cardinality columns might not be desirable, because DB2 will allocate one extent for each unique combination of MDC dimension values. If a unique column were included in the MDC index, each extent would only contain one row, resulting in wasted disk space. The best MDC index columns are low-cardinality columns frequently used in query restrictions; these can improve performance without increasing table size.
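
For background, MDC is specified in DB2 with the ORGANIZE BY clause of CREATE TABLE. A generic (non-SAP) example might look like this, clustering on two low-cardinality columns:

   -- Each extent then holds rows for exactly one (region, month) combination
   CREATE TABLE SALES_FACT (
     SALES_REGION  CHAR(3)       NOT NULL,
     SALES_MONTH   INTEGER       NOT NULL,
     REVENUE       DECIMAL(15,2)
   )
   ORGANIZE BY DIMENSIONS (SALES_REGION, SALES_MONTH);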

DB2 contains several Advisors, which are able to assist you with some of the
more intensive tasks. DB2 has both a traditional Index Advisor (discussed in the
previous chapter), and an MDC Index Advisor. The MDC Index Advisor collects
queries run on selected tables, analyzes their characteristics, and recommends op-
timal MDC indexes to improve performance without increasing disk consump-
tion. This takes into account all queries run during the collection period, and
greatly reduces the effort involved in MDC index creation.

SAP Enhancement Package 1 for SAP NetWeaver 7.0 includes a graphical interface to the DB2 MDC Index Advisor in the DBA Cockpit, under BI Administration → MDC Advisor. The Input tab, shown in Figure 9.4, contains methods for collecting and analyzing queries for specific BW objects (InfoCube FACT tables and the active table of DataStore Objects).

Figure 9.4: Add InfoProviders to the MDC Advisor, and let DB2 recommend beneficial MDC indexes.

The steps for query analysis are as follows:

1. Click the Add InfoProvider button to input the BW object(s) you want to analyze.

2. Select one or more InfoProviders to analyze, and click the Start Collection button to begin collecting query information from those objects.

3. Execute the BW reports that run on the objects being analyzed. The queries that execute against the selected BW objects will be stored in database tables in the SYSTOOLS table space.

4. After the reports finish, select the BW object(s) with a status of RUNNING, and click the Stop Collection button to halt their collection processes.

5. Select the BW object(s) to analyze, and click the Analyze button to start the query analysis process. The analysis is scheduled as a background job, which can be monitored through the DBA Planning Calendar. Once the analysis job completes, the MDC Advisor displays the results and deletes any saved BW temporary tables and query information.

The MDC proposals can be viewed in the Result tab, shown in Figure 9.5. The
recommended MDC index is listed beneath each analyzed InfoProvider, with es-
timates for performance and space consumption. The Estimated Improvement
gives the overall performance improvement expected for all queries on that
InfoProvider. The Estimated Space Increase specifies the percentage that the
InfoProvider may increase in size. The MDC Advisor will only recommend
MDC indexes with an estimated space increase of less than 10 percent. The pro-
posed MDC indexes can then be implemented from transaction RSA1.

Figure 9.5: The Results tab contains the MDC Indexes recommended by DB2.

Summary
These pages have presented countless examples of DB2 administration integrated
into the core SAP NetWeaver technology. SAP DBAs can perform almost any
DB2 administrative task through standard SAP transactions. This integration sim-
plifies many SAP database administration tasks, and eases the transition from
other relational databases to DB2. The partnership between DB2 and SAP, and
the complete integration of DB2 into the DBA Cockpit, are two of the many rea-
sons why DB2 is the preferred and recommended database for SAP systems.

The DBA Cockpit provides SAP database administrators a single interface for al-
most all DB2 monitoring and administration, such as the following:

• Monitoring key performance indicators
• Space management and administration tasks
• Analysis of backup and recovery processes
• Changing DB2 configuration parameters
• Defining database partitions for SAP NetWeaver BW and redistributing data
• Optimizing buffer pool memory allocation
• Configuring automatic RUNSTATs, REORGs, and backups
• Creating, scheduling, and monitoring both standard and custom database jobs in the DBA Planning Calendar
• Monitoring CCMS database health alerts and thresholds
• Auditing DBACOCKPIT activity
• Performing consistency checks between SAP and DB2 metadata
• Executing SQL commands
• Analyzing optimizer access plans
• Recommending and testing traditional and MDC indexes
• Tracing database calls
• Viewing diagnostic log files
• Accessing the online DB2 Help Center

As IBM releases new DB2 technology, new features are continually integrated
into the SAP DBA Cockpit. This enables SAP database administrators to easily
exploit the new technology in their SAP systems.

To close with one final airline pilot analogy: fly the latest and greatest jet. Select
the cockpit that allows you the most control to perfect the performance of your
aircraft. Pilot the best technology, which is integrated completely and optimized
specifically for your cockpit. Launch your SAP business systems into the future
on DB2.
