
IBM DB2 UDB for Oracle

DBAs
Student Manual

Education Services

Course #: L1-251.4
IBM Part #: Z251-1601-00
November 7, 2003
Copyright, Trademarks, Disclaimer of Warranties, and
Limitation of Liability
Copyright IBM Corporation 2003.
IBM Software Group
One Rogers Street
Cambridge, MA 02142

All rights reserved. Printed in the United States.

IBM and the IBM logo are registered trademarks of International Business Machines Corporation.

The following are trademarks or registered trademarks of International Business Machines Corporation in the United States,
other countries, or both:
AIX, Answers OnLine, APPN, AS/400, BookMaster, C-ISAM, Client SDK, Cloudscape, DataBlade, DataJoiner, DataPropagator, DB2, DB2 Connect, DB2 Extenders, DB2 Universal Database, Distributed Database Connection Services, Distributed Relational Database Architecture, DPI, DRDA, Dynamic Scalable Architecture, Dynamic Server, Dynamic Server.2000, Dynamic Server with Advanced Decision Support Option, Dynamic Server with Extended Parallel Option, Dynamic Server with J/Foundation, Dynamic Server with Universal Data Option, Dynamic Server with Web Integration Option, Dynamic Server, Workgroup Edition, Enterprise Storage Server, FFST/2, Foundation.2000, Illustra, Informix, Informix 4GL, Informix Extended Parallel Server, Informix Internet Foundation.2000, MaxConnect, MVS, MVS/ESA, Net.Data, NUMA-Q, ON-Bar, OnLine Dynamic Server, OS/2, OS/2 WARP, OS/390, OS/400, PTX, QBIC, QMF, RAMAC, Red Brick, Red Brick Data Mine, Red Brick Decision Server, Red Brick Decisionscape, Red Brick Design, Red Brick Mine Builder, Red Brick Ready, Red Brick Systems, Relyon Red Brick, S/390, Sequent, SP, SystemView, Tivoli, TME, UniData, UniData and Design, Universal Data Warehouse Blueprint, Universal Database, Universal Web Connect, UniVerse, Virtual Table Interface, Visionary, VisualAge, Web Integration Suite, WebSphere

Microsoft, Windows, Windows NT, SQL Server, and the Windows logo are trademarks of Microsoft Corporation in the United
States, other countries, or both.

Java, JDBC, and all Java-based trademarks are trademarks or registered trademarks of Sun Microsystems, Inc. in the United
States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

All other product or brand names may be trademarks of their respective companies.

The information contained in this document has not been submitted to any formal IBM test and is distributed on an "as is" basis
without any warranty, either express or implied. The use of this information or the implementation of any of these techniques is
a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational
environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that
the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do
so at their own risk. The original repository material for this course has been certified as being Year 2000 compliant.

This document may not be reproduced in whole or in part without the prior written permission of IBM.

Note to U.S. Government Users: Documentation related to restricted rights. Use, duplication, or disclosure is subject to
restrictions set forth in the GSA ADP Schedule Contract with IBM Corp.

Course Description
This course provides cross-training for a DB2 UDB Administrator and DBA and is geared toward
students with prior experience as either an Oracle System Administrator or DBA. Students will
use DB2 UDB version 7.2 or 8.1, so they should have equivalent Oracle experience with version
8 or 9i. During the course, students will build and run a DB2 UDB database using data made
available in Oracle unload format.

Objectives
At the end of this course, you will be able to:

Understand Oracle and DB2 UDB terminology differences


Create a DB2 UDB instance
Recreate an Oracle database in DB2 UDB
Migrate Oracle data to a DB2 UDB database
Understand Oracle and DB2 UDB indexing differences
Compare Oracle and DB2 UDB constraint methods
Perform DB2 UDB backup and recovery tasks
Explore DB2 UDB performance tuning methods

Prerequisites
To maximize the benefits of this course, we require that you have met the following prerequisites:
Understanding and use of relational database elements
Understanding and use of SQL statements
Oracle System Administration or DBA knowledge
UNIX/Linux systems knowledge
Microsoft Windows knowledge

Acknowledgments
Course Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Glen Mules
Contributing Course Developer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Bob Bernard
Technical Review Team . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bob Bernard, Nora Sokolof
Course Production Editor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Susan Dykman

Further Information
To find out more about IBM education solutions and resources, please visit the IBM Education
website at http://www-3.ibm.com/software/info/education.
Additional information about IBM Data Management education and certification can be found at
http://www-3.ibm.com/software/data/education.html.
To obtain further information regarding IBM Informix training, please visit the IBM Informix
Education Services website at http://www-3.ibm.com/software/data/informix/education.

Comments or Suggestions
Thank you for attending this training class. We strive to build the best possible courses, and we
value your feedback. Help us to develop even better material by sending comments, suggestions
and compliments to dmedu@us.ibm.com.

Table of Contents
Module 1 Differences Between Oracle and IBM DB2 Instances
Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
DB2 UDB Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
DB2 UDB Terminology (cont.) . . . . . . . . . . . . . . . . . . . . . . . 1-5
Oracle Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
DB2 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
DB2 Architecture (cont.) . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
Configuration Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-11
Environment and Registry Variables . . . . . . . . . . . . . . . . . 1-12
Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-14
DAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-16
DAS (cont.) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-17
Client Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-18
Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-19
Concurrency Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-20
DB2 UDB Isolation Levels . . . . . . . . . . . . . . . . . . . . . . . . . 1-22
DB2 UDB and Oracle Terminology Comparison . . . . . . . . 1-25
Additional DB2 UDB Terminology . . . . . . . . . . . . . . . . . . . 1-26
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-27

Module 2 Client Connectivity


Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Run-Time Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-3
Client Configuration Assistant . . . . . . . . . . . . . . . . . . . . . . . 2-4
Command Line Processor Design . . . . . . . . . . . . . . . . . . . . 2-6
Administration Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-8
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-10

Module 3 Creating a DB2 UDB Instance
Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
Requirements to Create an Instance . . . . . . . . . . . . . . . . . . 3-3
The SYSADM User and Group . . . . . . . . . . . . . . . . . . . . . . 3-4
Fenced User . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
Creating the DAS Instance . . . . . . . . . . . . . . . . . . . . . . . . . 3-6
The db2icrt Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
The db2icrt Command in Detail . . . . . . . . . . . . . . . . . . . . . .3-8
The Instance Directory Structure . . . . . . . . . . . . . . . . . . . . . 3-9
Initializing the Instance . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-10
User Authentication and Instance Authorities . . . . . . . . . . 3-11
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-13

Module 4 Creating a Database


Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Buffer Pool Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Attributes of DB2 UDB Databases . . . . . . . . . . . . . . . . . . . . 4-5
Database Schemas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-6
Authority to Create Databases . . . . . . . . . . . . . . . . . . . . . . 4-7
Authorities versus Privileges . . . . . . . . . . . . . . . . . . . . . . . . 4-8
Create a Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-10
CREATE DATABASE Actions . . . . . . . . . . . . . . . . . . . . . . 4-11
Database Subdirectory Structure . . . . . . . . . . . . . . . . . . . 4-12
Database Configuration File . . . . . . . . . . . . . . . . . . . . . . . 4-13
Creating Default Table Spaces . . . . . . . . . . . . . . . . . . . . . 4-14
Creating System Catalog Tables . . . . . . . . . . . . . . . . . . . . 4-15
Granting Database Administrator Authority . . . . . . . . . . . . 4-16
Privileges Granted to PUBLIC . . . . . . . . . . . . . . . . . . . . . . 4-17
Database Startup and Shutdown . . . . . . . . . . . . . . . . . . . . 4-18
QUIT vs. TERMINATE vs. CONNECT RESET . . . . . . . . . 4-20
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-21

Module 5 Planning Disk Usage


Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
Privileges and Authorizations . . . . . . . . . . . . . . . . . . . . . . . 5-3
Typical DB2 UDB Storage Diagram . . . . . . . . . . . . . . . . . . 5-4
Table Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
SMS Disk Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-7
DMS Disk Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-10

Adding and Extending Containers in a DMS Table Space 5-12
Characteristics of SMS and DMS Table Spaces . . . . . . . . 5-14
Table-Space Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-15
Containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-16
Extent Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-18
Creating Table Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-19
Basic Disk Storage Requirements . . . . . . . . . . . . . . . . . . . 5-21
Basic Disk Storage Requirements . . . . . . . . . . . . . . . . . . . 5-24
Monitoring Disk Storage Usage . . . . . . . . . . . . . . . . . . . . . 5-26
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27

Module 6 Data Type Mapping


Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2
DB2 UDB Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
DB2 Numeric Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . 6-4
Mapping the Oracle Number Data Type . . . . . . . . . . . . . . . 6-6
DB2 String Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-8
Oracle Character Data Types . . . . . . . . . . . . . . . . . . . . . . . 6-9
Large Object String Data Types . . . . . . . . . . . . . . . . . . . . 6-10
DB2 UDB Date-time Data Types . . . . . . . . . . . . . . . . . . . . 6-11
Other Oracle Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . 6-12
Oracle Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
Handling Nulls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-17
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-18

Module 7 Creating Tables and Views


Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-2
Authorizations and Privileges . . . . . . . . . . . . . . . . . . . . . . . 7-4
Create Table Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-6
Altering Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-7
Temporary Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-8
Creating Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-10
Monitoring Disk Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-12
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-13

Module 8 Data Migration Methods: Loading Tables
Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2
Data Copy Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-3
Privileges and Authorities Needed . . . . . . . . . . . . . . . . . . . 8-4
Import Data File Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-5
Import Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-6
Using Import . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-7
Import Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-8
Load Input Data Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-9
Load Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-10
Load Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-13
Using Load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-14
Load Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-17
Checking for Constraints Violations . . . . . . . . . . . . . . . . . . 8-18
LOAD QUERY Command . . . . . . . . . . . . . . . . . . . . . . . . . 8-19
Monitoring Disk Usage After Load . . . . . . . . . . . . . . . . . . . 8-20
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-21

Module 9 Accessing Data Through Indexes


Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-2
Benefits of Using Indexes . . . . . . . . . . . . . . . . . . . . . . . . . . 9-3
Costs of Using Indexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-4
Types of DB2 UDB Indexes . . . . . . . . . . . . . . . . . . . . . . . . .9-7
Type-2 Indexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-8
Multi-dimensional Clustering . . . . . . . . . . . . . . . . . . . . . . . 9-10
Multi-dimensional Clustering (cont.) . . . . . . . . . . . . . . . . . 9-12
Multi-dimensional Clustering (cont.) . . . . . . . . . . . . . . . . . 9-13
Indexes: SMS or DMS? . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-15
DB2 UDB and Oracle Indexing Differences . . . . . . . . . . . . 9-16
Creating the Index First . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-20
Example Index Placement . . . . . . . . . . . . . . . . . . . . . . . . . 9-22
Create Index Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-24
About Explain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-25
Using Visual Explain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-27
Visual Explain Output Example . . . . . . . . . . . . . . . . . . . . . 9-28
Visual Explain Details Report . . . . . . . . . . . . . . . . . . . . . . 9-29
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-30

Module 10 Using Constraints to Manage Business Requirements
Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-2
Types of Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-3
Constraint Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-6
Referential Constraint Delete Rules . . . . . . . . . . . . . . . . . 10-7
Check Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-8
Constraint Syntax Similarities & Differences . . . . . . . . . . . 10-9
Informational Constraints . . . . . . . . . . . . . . . . . . . . . . . . . 10-10
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-11

Module 11 Using DB2 Tools and Utilities


Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-2
Overview of DB2 UDB Tools . . . . . . . . . . . . . . . . . . . . . . . 11-3
DB2 & SQL Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-5
db2look & Obtaining DDL Schemas . . . . . . . . . . . . . . . . . 11-7
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-16

Module 12 Managing Backup and Recovery


Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-2
Topics Covered . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-3
Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-4
Types of Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-5
Planning the Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-8
Configuration Parameters . . . . . . . . . . . . . . . . . . . . . . . . . 12-9
Configuration Parameters (cont.) . . . . . . . . . . . . . . . . . . 12-10
Backing Up a Database . . . . . . . . . . . . . . . . . . . . . . . . . . 12-11
Restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-12
Restoring a Table Space . . . . . . . . . . . . . . . . . . . . . . . . . 12-13
Roll Forward . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-14
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-15

Module 13 Performance Monitoring and Tuning


Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-2
General Performance Issues . . . . . . . . . . . . . . . . . . . . . . . 13-3
Tuning Oracle versus DB2 UDB . . . . . . . . . . . . . . . . . . . . 13-4
DB2 UDB Memory Elements . . . . . . . . . . . . . . . . . . . . . . . 13-5
Tuning is Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-6
RUNSTATS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-7
RUNSTATS Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-9

REORG INDEXES/TABLE . . . . . . . . . . . . . . . . . . . . . . . 13-10
REORG Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-11
REORGCHK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-13
Buffer Pool Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . 13-16
Page Cleaning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-19
Tuning Buffer Pool Parameters . . . . . . . . . . . . . . . . . . . . 13-21
Disk Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-22
Page Size Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-23
Extent Size Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-25
Prefetch Size Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-27
Process Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-28
Tuning MAXAGENTS and MAXAPPLS . . . . . . . . . . . . . . 13-29
DB2 UDB Self-tuning Capability . . . . . . . . . . . . . . . . . . . 13-30
Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-31
The AUTOCONFIGURE Command . . . . . . . . . . . . . . . . 13-32
Monitoring the Server/Database . . . . . . . . . . . . . . . . . . . 13-34
Snapshot Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-36
Snapshot Switch Settings . . . . . . . . . . . . . . . . . . . . . . . . 13-37
Snapshot Example Output . . . . . . . . . . . . . . . . . . . . . . . . 13-39
Event Monitors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-42
Event Monitor Example . . . . . . . . . . . . . . . . . . . . . . . . . . 13-44
Performance Configuration Wizard . . . . . . . . . . . . . . . . . 13-48
Health Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-50
Memory Visualizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-52
Memory Visualizer Panel . . . . . . . . . . . . . . . . . . . . . . . . . 13-53
Other Data Management Tools . . . . . . . . . . . . . . . . . . . . 13-55
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-58

Module 14 Course Summary


Course Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-2
Where to Go From Here . . . . . . . . . . . . . . . . . . . . . . . . . . 14-3
Description of DB2 UDB Courses . . . . . . . . . . . . . . . . . . . 14-4
Description of DB2 UDB Advanced Courses . . . . . . . . . . . 14-6
To Enroll in Courses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-9
DB2 UDB Technical Documents . . . . . . . . . . . . . . . . . . . 14-10
Additional Technical Documents . . . . . . . . . . . . . . . . . . . 14-11
Evaluation Sheet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-12

Appendix A Oracle and DB2 UDB Comparisons

Appendix B Data Types Comparison Chart

Appendix C Example import and load Utilities Results

Appendix D Example Configuration Parameters

Appendix E Additional Reference Information

Appendix F The StoresDB Database

Appendix LE Lab Exercises Environment


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . LE-2
Client Setup (Windows) . . . . . . . . . . . . . . . . . . . . . . . . . . LE-3
DB2 Server Setup (Windows) . . . . . . . . . . . . . . . . . . . . . LE-4
DB2 Server Setup (UNIX/Linux) . . . . . . . . . . . . . . . . . . . LE-5
DB2 Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . LE-7
DB2 Command Line Syntax . . . . . . . . . . . . . . . . . . . . . . LE-9
DB2 Online Reference . . . . . . . . . . . . . . . . . . . . . . . . . . LE-10
Starting a Command Line Session . . . . . . . . . . . . . . . . LE-11
QUIT vs. TERMINATE vs. CONNECT RESET . . . . . . . LE-12
List CLP Command Options . . . . . . . . . . . . . . . . . . . . . LE-13
Modify CLP Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . LE-15
Input File - No Operating System Commands . . . . . . . . LE-16
Input File - Operating System Commands . . . . . . . . . . LE-17

Module 1

Differences Between Oracle and IBM DB2 Instances
Differences Between Oracle and IBM DB2 Instances 02-2003 1-1


© 2002, 2003 International Business Machines Corporation
Objectives

At the end of this module, you will be able to:


Explain the DB2 UDB terminology and architecture
Contrast the DB2 UDB terminology and architecture with that of
an Oracle server
Describe the purpose of the DB2 UDB Administration Server
Describe the purpose of the runtime client
Describe the purpose of the administration client
Describe the differences between DB2 UDB security and Oracle
security
Describe the differences between DB2 UDB concurrency
control and Oracle concurrency control


This course provides an introduction to DB2 UDB administration, covering both server administration
and DBA functionality, for Oracle administrators. The functionality covered is that of DB2 UDB
on UNIX/Linux/Windows (Intel) environments, and it is aimed at transactional (OLTP) systems
rather than data warehouse or business intelligence (OLAP) systems.
Other courses provide the information specific to DW/BI systems and DB2 UDB Enterprise -
Extended Edition (EEE) servers. Those courses build on the material covered here.



DB2 UDB Terminology

An instance in DB2 UDB:


Is one application of the DB2 UDB code
Is similar in concept to an instance in Oracle
Can support multiple databases

Table space is:


A logical space
Similar in concept to tablespace in Oracle


Instance
An instance in DB2 UDB has similar meaning as an instance in Oracle. Each instance in DB2
UDB refers to one set of processes that link back to the installed binary files in the DB2 UDB
directory. An instance includes:
Memory usage
Processor usage
Disk usage
Oracle restricts the definition of an instance to the processes and memory components. Thus,
with Oracle 9i, an instance consists of a number of background processes:
smon (system monitor)
pmon (process monitor)
dbwr (database writer)
lgwr (log writer)
ckpt (checkpoint)



and a system global area (SGA) that provides an allocation of shared memory that is available to
all database users. An instance provides the mechanism to access a set of Oracle database files.
An Oracle instance can only access one database, but multiple instances can access the same
database (Oracle Parallel Server Option). An Oracle database is a collection of physical
operating system files; an instance can mount and open a single database at a time (but does not
have to open the same database each time that it is started).
With DB2 UDB, each instance can be assigned multiple databases. Also, as with Oracle, there
can be multiple instances on one UNIX host. In DB2 UDB the instance is often referred to as the
Database Manager (DBM) because it manages all of the databases within it.
Oracle Enterprise Manager (OEM) in Oracle 9i, installed on a Windows client with
its own database on the server, provides some of the features and functionality of the DB2 DBM
combined with the DB2 Command Center. Since the differences are more striking than the
similarities, we will not compare their functionality in detail, but will concentrate on DB2
UDB instance management.
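To make the DBM idea concrete, the instance and its databases can be inspected from an interactive DB2 command line processor (CLP) session. This is a hedged sketch only; the exact command output varies by DB2 UDB release:

```sql
-- From an interactive DB2 CLP session (start with "db2" at a shell prompt)
GET DBM CFG                -- show the database manager (instance) configuration
LIST DATABASE DIRECTORY    -- list the databases cataloged under this instance
```

Because one DB2 UDB instance can own several databases, LIST DATABASE DIRECTORY may return multiple entries, whereas an Oracle instance mounts a single database at a time.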

Table space
A table space is logical space allocated for storing table data and indexes. This logical space is
comprised of one or more physical containers (either files, devices, or directories). You can
create any number of table spaces, but three are created by default when you create the database.
These are SYSCATSPACE, TEMPSPACE1, and USERSPACE1.
You might want to add other table spaces to partition data from indexes or to use as a temporary
table space. You will learn more about table-space usage in a later module.
In DB2 terminology we talk of or write about a table space (two words), whereas Oracle
generally uses tablespace (one word). In SQL and administrative statements for both servers,
the technical reference to TABLESPACE is one word (e.g., ALTER TABLESPACE ...).
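To make the naming point concrete, here is a hedged sketch of creating a simple table space on each server. The object names, paths, and sizes are invented for illustration:

```sql
-- Oracle 9i sketch: a locally managed tablespace over a data file
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/app_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

-- DB2 UDB sketch: a DMS table space over a file container
-- (container size is given in pages; 25600 pages x 4 KB = 100 MB)
CREATE TABLESPACE app_data
  MANAGED BY DATABASE
  USING (FILE '/db2/cont/app_data01' 25600);
```

In both dialects the keyword is the single word TABLESPACE, even though the DB2 documentation writes "table space" as two words.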



DB2 UDB Terminology (cont.)

A Container is:
A physical storage location
Similar to a data file in Oracle

Extent size is:


The unit of contiguous space within a container

Buffer pool is:


Used to cache table data
Used to cache indexes


Container
A container is a physical storage location that is similar to a segment of a data file in Oracle. For
DB2 UDB, this location can be a directory, if assigned to an SMS table space, or a file or device,
if assigned to a DMS table space.
An SMS (system managed space) table space is one that is managed by the operating system as
part of its file system manager. A DMS (database managed space) table space is managed by the
database server instance.
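The SMS/DMS distinction shows up directly in the CREATE TABLESPACE syntax. The following is a sketch with invented paths and sizes:

```sql
-- SMS: the operating system's file system manages space in a directory container
CREATE TABLESPACE sms_space
  MANAGED BY SYSTEM
  USING ('/db2/sms_space');

-- DMS: the database server manages space in a pre-allocated file container
-- (5000 is the container size in pages)
CREATE TABLESPACE dms_space
  MANAGED BY DATABASE
  USING (FILE '/db2/dms_space01' 5000);
```

An SMS container grows as the file system allows; a DMS container is allocated up front, much like sizing an Oracle data file.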

Extent size
An extent is a unit of space within a container of a table space. The extent size is configured to
be divisible by pages and is defined when a table space is created. In DB2 UDB, the extent size
is configured for each table space.
In Oracle terminology, tablespaces are collections of data files and tables, indexes, and other
database objects are placed within these tablespaces. Storage is allocated as extents consisting of
contiguous collections of Oracle data blocks (equivalent to the DB2 UDB page). Prior to Oracle
9i, the blocksize (2KB, 4KB, 8KB, and occasionally 16KB or 32KB) was determined by a single



setting in INIT.ora when the database was created. With Oracle 9i, different tablespaces can be
based on different blocksizes and extents can be managed independently of the tablespaces
setting (CREATE TABLESPACE ... EXTENT MANAGEMENT LOCAL AUTOALLOCATE).

Buffer pool
Similar to Oracle database buffers, the DB2 UDB database buffer pool is used to cache table and
index pages. In DB2 UDB, a buffer pool is exclusive to one database and is not shared across the
databases supported by the instance. When creating a database, a default buffer pool is also
created for that database. In addition, a database can have multiple buffer pools to take
advantage of the following two features.
First, DB2 UDB table spaces can have different page sizes. Since disk I/O requires that the page
size of the buffer pool match the page size of the table space, a separate buffer pool is required
for each different page size used in any table spaces.
Second, a table space in DB2 UDB can be assigned its own exclusive buffer pool. This
facilitates the caching of data for specific tables.
These DB2 UDB options are discussed in detail in the module on table spaces.
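Because a buffer pool's page size must match the page size of the table spaces that use it, creating a table space with a non-default page size means first creating a matching buffer pool. A sketch with invented names and sizes:

```sql
-- A buffer pool of 10000 pages, each 8 KB
CREATE BUFFERPOOL bp8k SIZE 10000 PAGESIZE 8K;

-- A table space with the same 8 KB page size, assigned exclusively to that pool
CREATE TABLESPACE ts8k
  PAGESIZE 8K
  MANAGED BY SYSTEM
  USING ('/db2/ts8k')
  BUFFERPOOL bp8k;
```

Assigning a dedicated buffer pool to one table space is one way to keep a hot table's pages cached.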



Oracle Architecture
[Slide diagram: main components of the Oracle server. The instance comprises the SGA (shared pool with library cache and data dictionary cache, database buffer cache, redo log buffer) and the background processes PMON, SMON, DBWR, LGWR, CKPT, and others; each server process has its own PGA. A listener (configured in listener.ora) connects user and remote user processes (via tnsnames.ora) to the database, whose files include data files, control files, redo log files, archived log files, the parameter file (init.ora), and the password file.]

Memory Structures
Oracle creates and uses memory structures to complete several jobs. Thus, for example, memory
is used to store program code being executed and data that is shared among users. Two basic
memory structures are associated with the Oracle server: the system global area (which includes
the database and redo log buffers, and the shared pool) and the program global area.

The System Global Area


The System Global Area (SGA) is a shared memory region allocated by Oracle that contains
data and control information for one Oracle instance. An Oracle instance contains the SGA and
the background processes. The SGA is allocated when an instance starts and deallocated when
the instance shuts down. Each instance that is started has its own SGA.
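From the Oracle side, the SGA's current allocation can be inspected with a dictionary query along these lines (a sketch; the exact components reported vary by Oracle release):

```sql
-- Summarize SGA component sizes for the running Oracle instance
SELECT name, value
  FROM v$sga;
```

Typical rows include Fixed Size, Variable Size, Database Buffers, and Redo Buffers, with values in bytes.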



The Program Global Area
The Program Global Area (PGA) is a memory buffer that contains data and control information
for a server process. A PGA is created by Oracle when a server process is started. The
information held in a PGA depends on the configuration of Oracle.

Processes
Processes are jobs or tasks that work in the memory of these computers. A process is a "thread
of control," or a mechanism in an operating system that can execute a series of steps; some
operating systems use the terms "job" or "task" to describe this mechanism. The Oracle database
system has two general types of processes: user processes and Oracle processes.

User Processes
A user process is created and maintained to execute the software code of an application program
(such as a PRO*C program) or an Oracle tool (such as SQL*PLUS). The user process also
manages the communication with the server processes. User processes communicate with the
server processes through the program interface.

Oracle Processes
Oracle processes are called by other processes to perform functions on behalf of the invoking
process. A server process is created to handle requests from connected user processes. In
addition, there is a set of background processes for each instance.
The background processes are:
DBW0 (Database Writer) writes data from database buffer cache to data files.
LGWR (Log Writer) registers changes in redo log buffer to redo log files.
SMON (System Monitor) checks for consistency in the database.
PMON (Process Monitor) cleans up resources when one of the Oracle processes fails.
CKPT (Checkpoint) updates database status information in the control files and data
files whenever changes in buffer cache are permanently recorded in the database.



DB2 Architecture

[Diagram: DB2 architecture — clients linked with the UDB client library; the $DB2INSTANCE instance with its registry, database directory file, database manager (DBM) config file, and diagnostic file; system controller and watchdog processes; local and remote listeners and other processes; coordinator agents with their agent memory and subagents; idle agents; and fenced UDF processes.]

On the client side, either local or remote applications, or both, are linked with the DB2 UDB
client library. Local clients communicate using shared memory and semaphores; remote clients
use a protocol such as Named Pipes (NPIPE), TCP/IP, NetBIOS, or SNA.
On the server side, activity is controlled by engine dispatchable units (EDUs). In the above and
on the next page, EDUs are shown as circles or groups of circles.

Processes
EDUs are implemented as threads in a single process on Windows-based platforms and as
processes on UNIX (single-threaded). DB2 agents are the most common type of EDUs. These
agents perform most of the SQL processing on behalf of applications. Prefetchers and page
cleaners are other common EDUs.
A set of subagents might be assigned to process the client application requests. Multiple
subagents can be assigned if the machine where the server resides has multiple processors or is
part of a partitioned database.
All agents and subagents are managed using a pooling algorithm that minimizes the creation and
destruction of EDUs.



DB2 Architecture (cont.)
[Diagram: DB2 database-level architecture — two databases (database1, database2), each with its own database (DB) config file; per-database logger, deadlock detector, prefetchers, and page cleaners; log buffers, buffer pools (4 kb and 32 kb pages), locklist, packages, and other memory areas; table spaces with directory, file, and device containers; and primary and secondary log files.]

Shared Memory
Buffer pools are areas of database server memory where database pages of user table data, index
data, and catalog data are temporarily moved and can be modified.
The configuration of the buffer pools, as well as prefetcher and page cleaner EDUs, controls
how quickly data can be accessed and how readily available it is to applications.
Prefetchers retrieve data from disk and move it into the buffer pool before applications
need the data. Agents of the application send asynchronous read-ahead requests to a
common prefetch queue. As prefetchers become available, they implement those
requests by using big-block or scatter-read input operations to bring the requested pages
from disk to the buffer pool.
Page cleaners move data from the buffer pool back out to disk. Page cleaners are
background EDUs that are independent of the application agents. They look for pages
from the buffer pool that are no longer needed and write the pages to disk. Page
cleaners ensure that there is room in the buffer pool for the pages being retrieved by the
prefetchers.
Without the independent prefetchers and the page cleaner EDUs, the application agents would
have to do all of the reading and writing of data between the buffer pool and disk storage.
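The number of prefetchers and page cleaners is configurable per database. As a rough sketch (the database name sample and the values shown are only illustrative), the NUM_IOSERVERS and NUM_IOCLEANERS parameters in the DB configuration file control them:

db2 UPDATE DB CFG FOR sample USING NUM_IOSERVERS 4
db2 UPDATE DB CFG FOR sample USING NUM_IOCLEANERS 2

Larger values allow more I/O parallelism at the cost of additional processes or threads.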



Configuration Files

With DB2 UDB, there is one configuration file for the instance:
Contains parameter values for that instance

Each database also has its own configuration file:


Contains parameters for one database

Oracle uses one parameter file (or init file, i.e., initxxxxx.ora) for
database settings for the whole instance (with approximately 200
possible settings in Oracle 8 and 260 settings in Oracle 9i).
Additional control files are used for networking and connecting:
LISTENER.ora, SQLNET.ora, TNSNAMES.ora, ...


Oracle configuration
Oracle uses one parameter file for the instance, and it tells the instance where to find the
instance control files. In DB2 UDB, there is one configuration file for the instance, but each
database also has its own configuration file.

DBM configuration file


Each DB2 UDB instance has a configuration file that contains parameter values for that
instance. These are instance level parameters, which control the use of all the databases in that
instance.

DB configuration file
Every DB2 UDB database also has a configuration file that contains parameters for just that one
database. The various databases in an instance are configured separately within the bounds of
the instance.
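Both files can be examined and changed from the Command Line Processor. For example (a sketch; the database name sample and the parameter values are illustrative only):

db2 GET DBM CFG
db2 UPDATE DBM CFG USING SYSADM_GROUP dbadm1
db2 GET DB CFG FOR sample
db2 UPDATE DB CFG FOR sample USING LOGPRIMARY 10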



Environment and Registry Variables

Environment Variables:
Very few (DB2INSTANCE, PATH)
Only take effect after instance is restarted
Set manually
Stored in the db2profile and userprofile files on UNIX

Registry Variables:
More than 100 in number
Take effect immediately
Set with the db2set command
Stored in the profile.env file on UNIX


Environment variables
Unlike Oracle, the operating environment for DB2 UDB does not rely heavily on environment
variables. In fact, there are only two environment variables that are needed to operate a DB2
UDB instance.
DB2INSTANCE operates much the same as the Oracle variable ORACLE_SID; it
stores the name of the current instance.
PATH is a UNIX variable and it must contain the directory where the DB2 UDB binary
files are installed. With an Oracle server, the path to the Oracle binaries is found in
ORACLE_PATH and other product files are found via ORACLE_HOME.
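On UNIX, both variables are typically set by sourcing the db2profile script created in the instance owner's sqllib directory. For example (the instance name db2inst1 and its home directory are illustrative):

. /home/db2inst1/sqllib/db2profile
echo $DB2INSTANCE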

Registry variables
Registry variables do not have an equivalent feature in Oracle and are unique to DB2 UDB.
Registry variables function like environment variables with the major advantage being that they
take effect immediately and do not require the instance to be restarted. Registry variables are set
using the db2set command and are stored in a file called profile.env in the sqllib directory for
the instance.



The dividing line between environment and registry variables is hazy because many of the
registry variables can also be set in the environment, but when set in the environment, they
require the instance to be restarted before they take effect.
db2set examples:
db2set
DB2ACCOUNTNAME=BOB-LTOP\db2admin
DB2INSTOWNER=BOB-LTOP
DB2PORTRANGE=60000:60003
DB2INSTPROF=C:\IBM\SQLLIB
DB2COMM=TCPIP

db2set -g
DB2_DOCCDPATH=C:\IBM\SQLLIB\
DB2SYSTEM=BOB-LTOP
DB2PATH=C:\IBM\SQLLIB
DB2INSTDEF=DB2
DB2ADMINSERVER=DB2DAS00

db2set -all
[e] DB2PATH=C:\IBM\SQLLIB
[i] DB2ACCOUNTNAME=BOB-LTOP\db2admin
[i] DB2INSTOWNER=BOB-LTOP
[i] DB2PORTRANGE=60000:60003
[i] DB2INSTPROF=C:\IBM\SQLLIB
[i] DB2COMM=TCPIP
[g] DB2_DOCCDPATH=C:\IBM\SQLLIB\
[g] DB2SYSTEM=BOB-LTOP
[g] DB2PATH=C:\IBM\SQLLIB
[g] DB2INSTDEF=DB2
[g] DB2ADMINSERVER=DB2DAS00

For More Information


The environment and registry variables are listed and described in the DB2 UDB
Administration Guide: Performance manual, Appendix A.



Package

Contains the SQL statements used in an application


Is stored in the system catalog of the database
Is created by the BIND or PREP statement
Contains the optimized query plan for each SQL statement
An Oracle package is a completely different concept


Package
Applications that connect to DB2 UDB databases use packages to execute the SQL statements.
One application has one associated package, which contains all of the SQL statements found in
that application, plus the optimized query plan for each SQL statement.
A package is created using either the BIND or PREP command, which examines the application
looking for SQL statements and then creates an optimized query plan for each SQL statement
found. When the application runs and executes an SQL statement, the associated SQL statement
is located in the package and the optimized query plan for that SQL statement is used to access
the data.
The SQL statements that exist in the package are considered STATIC SQL statements, since
their query plans remain the same until the next BIND or PREP command is executed against
the application.
Example of the PREP and BIND stages:
db2 PREP <filename> VERSION V1.1
db2 BIND <filename>



Note: Bind is not required for ODBC/CLI connections. Also, RUNSTATS and some
FixPaks may require a REBIND.

There is no equivalent concept in Oracle to IBM DB2's package.

Packages in Oracle
A package in Oracle is a collection of procedures and functions bundled together.



DAS

DAS: Database Administration Server


Special DB2 UDB instance to perform administration tasks
Created automatically at installation time
Executes administration and monitoring tasks requested by
remote clients
Executes scheduled jobs
Collects information for DB2 UDB Discovery
No equivalent in Oracle


DAS
The DB2 Administration Server (DAS) is a special instance of DB2 UDB that keeps track of
other instances of DB2 UDB. It is automatically created and configured when DB2 UDB is
initially installed on the host machine and is automatically started whenever the host machine is
booted. The DAS provides the following specific functions:
Enables remote administration and monitoring of DB2 UDB instances
Provides a scheduler that is used to execute user-defined jobs. These jobs may include
operating system commands.
Allows DB2 UDB Discovery to return information to remote clients
Queries the operating system for user and group information

Note: In DB2 UDB version 8, the DAS is not an instance, but a separate process that
manages instances.

In Oracle, each instance is atomic, having nothing to do with any other instance that may be
running on the same hardware. There is no concept of a DAS in Oracle.



DAS (cont.)

To manually create the DAS:


dasicrt ServerName (UNIX)
db2admin create (INTEL)
Starting and stopping the DAS:
db2admin start
db2admin stop
Listing the DAS:
db2set -g DB2ADMINSERVER
Removing the DAS:
dasidrop ServerName (UNIX)
db2admin drop (INTEL)


DAS
Although the DAS is automatically created at installation time and is automatically started when
the system boots, you can also manually create, start, stop, list, and remove the DAS.
The DAS is the connection between the GUI tools on the client and those on the server. If the
DAS is not installed or has been stopped, you cannot connect to the database(s) using the GUI
tools. Since there can be only one DAS running on the server, multiple versions of DB2 UDB,
such as V6.1 or V7.2, connect through the same DAS.
Some customers do not want to use the GUI administration tools and their applications connect
through JDBC (GUI uses ODBC), so they do not create the DAS. Most customers want the GUI
interface, however.



Client Tools

Run-Time Client:
Must be installed on every client workstation used to access the
database server
Contains the support necessary to connect to the server using
ODBC, JDBC and the Command Line Processor (CLP)
Supported communication protocols are APPC, IPX/SPX,
named pipes, NetBIOS, and TCP/IP
Administration Client:
Installed on a client workstation and consists of a suite of GUI
tools that provide remote administration of databases and
instances


DB2 UDB Run-Time Client


The DB2 UDB Run-Time Client must be installed on every client workstation used to access the
database server. It contains the support necessary to connect to the server using ODBC, JDBC
and the Command Line Processor (CLP). The supported communication protocols are APPC,
IPX/SPX, named pipes, NetBIOS, and TCP/IP.

DB2 UDB Administration Client


The Administration Client is installed on a client workstation and consists of a suite of GUI
tools that provide remote administration of databases and instances.
More client information is included in the next module.



Security

Instance security provided by:


User logon authentication
Instance configuration parameters

Database security:
Provided by the GRANT SQL statement


DB2 UDB Instance Security


Security on the DB2 UDB instance begins at the instance level instead of the database level, as
is the case with Oracle. The DB2 UDB instance uses the group definitions of the operating
system to determine what authority a user has on the instance. Thus, the SYSADM_GROUP
parameter in the DBM configuration file of the instance names the UNIX group that a user must
belong to in order to have system administration authority (SYSADM) on the instance.

Database Security
The permissions on the database are all granted using the GRANT SQL statement. A user that
has SYSADM authority on the instance grants the initial database administrator (DBADM)
authority on a database. From that point on, the user with DBADM authority on the database can
grant further database object permissions, such as select permission on a table, to other users.
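For example (a sketch; the user and table names are hypothetical), a user with SYSADM authority might issue:

db2 GRANT DBADM ON DATABASE TO USER mary

after which mary, holding DBADM authority, can grant object-level permissions:

db2 "GRANT SELECT ON TABLE employee TO USER joe"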



Concurrency Control

DB2 UDB and Oracle differ considerably in how they handle:


Concurrency and locks
Isolation levels
Transactions
Logging


The Problem
Database and transaction processing differ considerably among the various vendors and require
potentially different approaches by developers. One of the most significant differences users
notice when they port applications from Oracle to DB2 is the difference in concurrency control
between the two databases. Here we address the locking behavior of each database and
thereby introduce you to how to map application behavior from Oracle to DB2 UDB.
Some Oracle applications, when ported to DB2, appear to behave identically, and the topic of
concurrency can be ignored. However, if your applications involve frequent accesses to the same
tables, you may find that your applications behave differently.
To get the best results, sometimes it is worth redesigning applications to achieve the best
concurrency in DB2 UDB. Understanding concurrency control in DB2 UDB helps you know
how to rework an application.

Concurrency and Locks


For both Oracle and DB2, a transaction is an atomic unit of work, in which all changes are either
committed or rolled back. However, while both DB2 and Oracle have row-level locking, there
are differences in when the locks are acquired.
In general terms, there are shared locks and exclusive locks. To update a row, the database server
needs to acquire an exclusive lock on that row first. When a share lock is acquired for an object
on behalf of an application, other applications can also acquire a share lock on the object, but
requests for exclusive locks are denied. An exclusive lock, on the other hand, blocks other
applications from acquiring locks, even share locks, on the object. The time an application is
blocked because of the unavailability of a lock is called the lock wait time.
In Oracle, when an application requests a row to be fetched, no locks are acquired. In DB2
UDB, when an application issues a read request, a share lock is acquired on the row. A DB2
UDB application may acquire more share locks based on the isolation level of the application
and the access plan of the query. For example, for a table scan, a share lock may be acquired for
each row touched by the table scan, which may contain more rows than the result set.
Alternatively, one table lock may be acquired.
Due to these differences, ported applications that have updaters and readers accessing the same
data concurrently from the same table may experience more lock wait time in DB2, but with a
more consistent view of the data.

Isolation Levels
Before looking at how to improve the concurrency of ported applications, it is useful to have a
quick description of the differences between DB2 UDB's and Oracle's implementations of
concurrency control.
Oracle implements an optimistic view of locking. The Oracle assumption is that in most cases,
the data fetched by an application is unlikely to be changed by another application. It is up to the
application to take care of the situation in which the data is modified by another concurrent
application.
For example, when an Oracle application starts an update transaction, the old version of the data
is kept in the rollback segment. When any other application makes a read request for the data, it
gets the version from the rollback segment. Once the update transaction commits, the rollback
segment version is erased and all other applications see the new version of the data. Different
readers of the data may hold a different value for the same row, depending on whether the data is
fetched before or after the update commits. Hence, it is also called the Oracle versioning
technique.
To ensure read consistency in Oracle, the application must issue SELECT FOR UPDATE. In this
case, other updaters and other SELECT FOR UPDATE requests are blocked.
DB2 UDB has a suite of concurrency control schemes to suit the needs of applications. An
application can set the level of isolation to provide the proper level of concurrency. One of these
isolation levels can be used:



DB2 UDB Isolation Levels

DB2 UDB has four isolation levels:


Repeatable Read (RR)
Read Stability (RS)
Cursor Stability (CS)
Uncommitted Read (UR)


Here is a brief description of each isolation level. For more details, consult the DB2 UDB
Administration Guide.

Repeatable Read (RR)


This is the highest level of isolation. It blocks other applications from changing data that has
been read in the RR transaction, until the transaction commits or rolls back. If you fetch from a
cursor twice in the same transaction, you are guaranteed to get the same answer set. Phantom
rows (rows that show up in the second execution of the query but not the first) cannot occur.

Read Stability (RS)


Like RR, this level of isolation guarantees that the rows read by the application remain
unchanged in a transaction. However, it does not prevent the introduction of phantom rows.



Cursor Stability (CS)
This level guarantees only that a row of a table cannot be changed by another application while
your cursor is positioned on that row. This means the application can trust the data it reads by
fetching from the cursor and updating it. This is the default.

Uncommitted Read (UR)


This level is commonly known as a dirty read. When a UR transaction issues a read, it reads the
only version of the data that DB2 UDB has, even though the data may have been read, inserted,
or updated by another transaction. The data is labeled as dirty because if the updater were to roll
back, the UR transaction would have read data that was never committed. Unlike Oracle, DB2
does not use the rollback segment to store the old version of the data. Access to the data is
controlled using locks. This is the only type of read access where DB2 UDB does not acquire a
share lock on the object; hence, it does not prevent the data from being updated by another
application.
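For example (a sketch; the database and table names are hypothetical), the Command Line Processor can be told which isolation level to use before connecting:

db2 CHANGE ISOLATION TO UR
db2 CONNECT TO sample
db2 "SELECT workdept, salary FROM employee"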

Comparisons with Oracle


Oracle's implementation most resembles Read Stability (RS) in DB2 UDB for writers and most
resembles Uncommitted Read (UR) for readers. Note that DB2 UDB UR is not the same as
Oracle's versioning. UR reads uncommitted data if there is a transaction in progress, as opposed
to the last committed version of the data read by Oracle. Suppose an application is reading a column from a
row that is being modified by another transaction from a value of 3 to a value of 5. In Oracle,
the application reads 3. In DB2 UDB, UR reads 5. If the update transaction commits, then DB2
has the right data. If the update transaction rolls back, then Oracle has the right data.
With the exception of UR transactions, DB2 UDB can guarantee that the data an application
reads does not change under it. The application can trust the data it fetched. This behavior can
simplify the application design.
On the other hand, because DB2 UDB requests an exclusive lock on behalf of the application
during an update, no other applications can read the row (except when the UR isolation level is
used). This can reduce concurrency in the system if there are a lot of applications attempting to
access the same data at the same time.



Performance Issues
To increase the concurrency of the system, commit your transactions often, including read-only
transactions. If possible, reschedule the applications that compete for access to the same table.
Also, use Uncommitted Read transactions where read consistency is not an issue. Use Cursor
Stability whenever possible for other applications.
Some applications do cross-table consistency checking in the application code instead of using
the referential integrity (RI) constraints on the tables. This approach can increase the number of
locks acquired by DB2 because the applications need more cursors and fetch more data. Use of
RI constraints can reduce the lock-wait time by reducing the number of cursors and the number
of rows fetched in the application, and hence reduce the amount of locking done by the database.
Other performance issues concerning locks will be discussed in Module 8.
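As an illustration of declaring an RI constraint instead of coding the cross-table check in the application (the table and column names are hypothetical):

db2 "ALTER TABLE employee ADD FOREIGN KEY (dept_no) REFERENCES department (dept_no)"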



DB2 UDB and Oracle Terminology Comparison

DB2 UDB                 Oracle                   Comments

Instance                Instance                 Processes and shared memory. For Oracle, one
                                                 instance serves one database.
Database                Database                 Physical structure containing data.
Table space             Tablespace               Contains actual database data.
Container               Data files               Entities inside table spaces; hold
                                                 objects/segments.
Extents                 Extents                  Entities inside objects/segments.
Pages                   Data blocks              Smallest storage entity.
DBM & DB                INIT.ora files &         For DB2, each instance (DBM) and database (DB)
configuration files     control files            has its own set of configuration parameters.
                                                 For Oracle, these provide the location of files
                                                 that supply configuration values.
Buffer pools            Data cache               Buffer data from table spaces to reduce disk I/O.
Shared memory           SGA                      Shared memory for the database server. For
                                                 Oracle, there is one SGA. For DB2, there is
                                                 shared memory at the DBM (instance) level and
                                                 for each active database.


For More Information


See the Appendix A of this training manual for more terminology comparisons.



Additional DB2 UDB Terminology

Some additional terms used in DB2 UDB documents:


DASD Disk drive device
SCALAR A single value that relates to other values on a scale
SARGABLE A contraction of "searchable argument"
PREDICATE A condition, such as in a WHERE clause
FIXPAK An intermediate revision of DB2 UDB between
version releases
DARI An obsolete term for a stored procedure


These are a few terms that are used freely in the IBM publications, and need some definition.
DASD (direct access storage device) is simply another term for a disk drive device.
SCALAR is a property assigned to a variable. It means that the value of the variable is singular,
as opposed to a range or a set, and it has a relationship to other singular values on a scale or line.
For example, the value 3 would be scalar since it is singular, and it is greater than 2 and less
than 4 on a number line. In like manner, the value "c" is scalar since it can be compared to
other characters in alphabetical order.
SARGABLE is a contraction of the two words "searchable argument." For example, the name
"Smith" is sargable in the last_name column of the customer table.
PREDICATE is simply a condition. For example, the condition last_name = 'Smith' could be
a predicate in a WHERE clause. All rows returned by the server are predicated on having a last
name of Smith.
FIXPAK is an intermediate revision of DB2 UDB that fixes small bugs or adds minor new
features. These revisions occur between scheduled releases of major versions. In some cases a
FIXPAK modifies the code sufficiently to increment the decimal part of the version number. For
example, there is a fixpak that changes DB2 UDB version 7.1 to version 7.2.



Summary

You should now be able to:


Explain the DB2 UDB terminology and architecture
Contrast the DB2 UDB terminology and architecture with that of
an Oracle server
Describe the purpose of the DB2 UDB Administration Server
Describe the purpose of the runtime client
Describe the purpose of the administration client
Describe the differences between DB2 UDB security and Oracle
security
Describe the differences between DB2 UDB concurrency
control and Oracle concurrency control



Exercises



Exercise 1
This is a multiple choice exercise testing your knowledge of DB2 UDB terminology.
1.1 The term instance, when referring to DB2 UDB, means:
A Only one database.
B Only the Database Administrator Server (DAS).
C One instantiation of the DB2 UDB code. There can be multiple instances
on one UNIX host.

1.2 There can be only one database assigned to one DB2 UDB instance:
A True.
B False.

1.3 In the DB2 UDB architecture, the term table space refers to:
A One table.
B A logical space allocated for storing table data.
C A physical file on disk for storing data.

1.4 A container is:


A A disk storage device used exclusively for storing indexes.
B A disk storage device used exclusively for storing system catalog data.
C A physical storage location, such as a directory, file, or device used for
storing table data.

1.5 Each database has its own exclusive buffer pool.


A True.
B False.

1.6 Extent size is initially defined when:


A The table space is created.
B The container is defined.
C Never. It has no meaning in DB2 UDB architecture.



1.7 The DBM configuration file is used to:
A Set registry variable values, such as the name of the DB2 UDB server.
B Set environment variable values, such as PATH.
C Set instance level parameters, such as the group name that has SYSADM
authority.

1.8 The DB configuration file is used to:


A Set database level parameters, such as the default size of the buffer pool.
B Set environment variables such as PATH.
C Set registry variable values, such as the name of the DB2 UDB instance.

1.9 In DB2 UDB terminology, a package is:


A The complete installation of DB2 UDB.
B A set of optimized query plans for an application.
C An instance and its associated databases.

1.10 The Database Administration Server (DAS) instance allows an administrator to manage
local instances from a remote location.
A True.
B False.

1.11 The DB2 UDB Run-Time Client is installed on the client machine and provides:
A A bundle of IBM supplied applications for common business functions,
such as finance, human relations, and inventory control.
B A bundle of connectivity products such as ODBC, JDBC, and the
Command Line Processor.
C A bundle of administration tools that allows for remote management of
DB2 UDB instances.

1.12 The DB2 UDB Administration client provides:


A A bundle of GUI tools for administration of local and remote instances.
B A bundle of IBM supplied applications for common business functions
such as finance, human relations, and inventory control.
C A bundle of tools for developing new applications.



1.13 Security for the DB2 UDB instance starts at the instance level, as opposed to security for
the Oracle instance, which starts at the database level.
A True.
B False.



Solutions



Solution 1
This is a multiple choice exercise testing your knowledge of DB2 UDB terminology.
1.1 The term instance, when referring to DB2 UDB, means:
A Only one database.
B Only the Database Administrator Server (DAS).
C One instantiation of the DB2 UDB code. There can be multiple
instances on one UNIX host.
Each instance in DB2 UDB refers to one set of processes that have links back to the
installed binary files in the DB2 UDB directory.

1.2 There can be only one database assigned to one DB2 UDB instance:
A True.
B False.
There can be multiple databases assigned to one DB2 UDB instance.

1.3 In the DB2 UDB architecture, the term table space refers to:
A One table.
B A logical space allocated for storing table data.
C A physical file on disk for storing data.
A logical space is comprised of one or more physical containers.

1.4 A container is:


A A disk storage device used exclusively for storing indexes.
B A disk storage device used exclusively for storing system catalog data.
C A physical storage location, such as a directory, file, or device used
for storing table data.
This location can be a directory if assigned to an SMS table space, or a file or device
if assigned to a DMS table space.



1.5 Each database has its own exclusive buffer pool.
A True.
B False.
Each database has its own exclusive buffer pool, and may have several buffer pools
depending on the page size parameters used in the table spaces.

1.6 Extent size is initially defined when:


A The table space is created.
B The container is defined.
C Never. It has no meaning in DB2 UDB architecture.

1.7 The DBM configuration file is used to:


A Set registry variable values, such as the name of the DB2 UDB server.
B Set environment variable values, such as PATH.
C Set instance level parameters, such as the group name that has
SYSADM authority.

1.8 The DB configuration file is used to:


A Set database level parameters, such as the default size of the buffer
pool.
B Set environment variables, such as PATH.
C Set registry variable values, such as the name of the DB2 UDB instance.

1.9 In DB2 UDB terminology, a package is:


A The complete installation of DB2 UDB.
B A set of optimized query plans for an application.
C An instance and its associated databases.

1.10 The Database Administration Server (DAS) instance allows an administrator to manage
local instances from a remote location.
A True.
B False.



1.11 The DB2 UDB Run-Time Client is installed on the client machine and provides:
A A bundle of IBM supplied applications for common business functions,
such as finance, human relations, and inventory control.
B A bundle of connectivity products such as ODBC, JDBC and the
Command Line Processor.
C A bundle of administration tools that allows for remote management of
DB2 UDB instances.

1.12 The DB2 UDB Administration client provides:


A A bundle of GUI tools for administration of local and remote
instances.
B A bundle of IBM supplied applications for common business functions
such as finance, human relations, and inventory control.
C A bundle of tools for developing new applications.

1.13 Security for the DB2 UDB instance starts at the instance level, as opposed to security for
the Oracle instance, which starts at the database level.
A True.
B False.



Module 2

Client Connectivity

Client Connectivity 02-2003 2-1


© 2002, 2003 International Business Machines Corporation
Objectives

At the end of this module, you will be able to:


Understand what is included in the Run-Time Client
Use the Client Configuration Assistant
Understand the Command Line Processor Design
Understand what is included in the Administration Client
Note the version 8 client differences



Run-Time Client

The Run-Time Client includes:


ODBC support
JDBC support
Command Line Processor (CLP)


DB2 UDB Run-Time Client


The DB2 UDB Run-Time Client must be installed on every client workstation used to access the
database server. It contains the support necessary to connect to the server using ODBC, JDBC
and the Command Line Processor (CLP). The supported communication protocols are APPC,
IPX/SPX, named pipes, NetBIOS, and TCP/IP.



Client Configuration Assistant

From the Client Configuration Assistant you can perform the following tasks:
Add, modify, delete database connection entries
Test the connection to a selected database
Configure database manager configuration parameters
Configure CLI/ODBC settings
Bind DB2 utilities and other applications to a selected database
Import/export configuration information
Change the password for the user ID that you use to connect to
a selected database


The Client Configuration Assistant (CCA) is a tool that contains wizards to help set up clients to
local or remote DB2 servers. The tool can also be used to easily help configure DB2 Connect
servers.
The Client Configuration Assistant lets you maintain a list of databases to which your
applications can connect, cataloging nodes and databases while shielding you from the inherent
complexities of these tasks.
The Client Configuration Assistant provides the following methods to assist in adding new
database connection entries:
Use a profile. A profile can be exported from a previously configured machine and used
to configure new machines.
Search the network. The CCA can search the network for DB2 systems which have an
administration server running.
Manually configure a connection to a database. All information must be provided, but a
wizard is started to help make the task simpler.



V8
The Client Configuration Assistant has been renamed the Configuration Assistant in Version 8, with significant enhancements and many new features, such as:

The ability to invoke the Control Center from the Configuration Assistant.
The option to configure both local and remote servers, including DB2 Connect servers.
The ability to create configuration templates without affecting the local configuration.
Import and export capabilities for exchanging configuration templates with other
systems.
Improved response time for discovery requests along with the option to refresh the list
of discovered objects at any time.
The ability to view and update applicable database manager configuration parameters
and DB2 registry variables.
The following Version 8 tools support Version 7 servers (with some restrictions) and Version 8
servers:
Configuration Assistant (This tool has different components, of which only the import/
export configuration file can be used with Version 7 servers; all of the components
work with Version 8)
Data Warehouse Center
Replication Center
Command Center (including the Web-version of this center)
SQL Assist
Development Center
Visual Explain
In general, any Version 8 tool that is only launched from within the navigation tree of the
Control Center, or any details view based on these tools, will not be available or accessible to
Version 7 and earlier servers. You should consider using the Version 7 tools when working with
Version 7 or earlier servers.
You can access the Configuration Assistant by navigating:
Start=> Programs=> IBM DB2=> Set-up Tools=>
Configuration Assistant



Command Line Processor Design

The command line processor consists of two processes:


The front-end process (the db2 command), which acts as the
user interface
The back-end process (db2bp), which maintains a database
connection
All front-end processes with the same parent are serviced by a single
back-end process, and therefore share a single database connection
These processes communicate through three message queues:
Request queue
Input queue
Output queue


Maintaining Database Connections


Each time that db2 is invoked, a new front-end process is started. The back-end process is
started by the first db2 invocation, and can be explicitly terminated with TERMINATE. All
front-end processes with the same parent are serviced by a single back-end process, and
therefore share a single database connection.
For example, the following db2 calls from the same operating system command prompt result in
separate front-end processes sharing a single back-end process, which holds a database
connection throughout:
db2 'connect to sample'
db2 'select * from org'
. foo (where foo is a shell script containing DB2 commands)
db2 -tf myfile.clp

Communication between Front-end and Back-end Processes


The front-end process and back-end processes communicate through three message queues: a
request queue, an input queue, and an output queue.



CLP Autocommit Option
The autocommit option can be specified as ON or OFF. The default value is ON.
Autocommit Option (-c):
This option specifies whether each command or statement is to be treated independently. If set
ON (-c), each command or statement is automatically committed or rolled back. If the command
or statement is successful, it and all successful commands and statements that were issued before
it with autocommit OFF (+c or -c-) are committed. If, however, the command or statement fails,
it and all successful commands and statements that were issued before it with autocommit OFF
are rolled back. If set OFF (+c or -c-), COMMIT or ROLLBACK must be issued explicitly, or
one of these actions will occur when the next command with autocommit ON (-c) is issued.
The auto-commit option does not affect any other command line processor option.
Example: Consider the following scenario:
1. db2 create database test
2. db2 connect to test
3. db2 +c "create table a (c1 int)"
4. db2 select c2 from a

The SQL statement in step 4 fails because there is no column named C2 in table A. Since that
statement was issued with auto-commit ON (default), it rolls back not only the statement in step
4, but also the one in step 3, because the latter was issued with auto-commit OFF.
The command:
db2 list tables
then returns an empty list.
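By contrast, the table created in step 3 can be preserved by committing it explicitly before the failing statement is issued. The sketch below reuses the hypothetical test database and table A from the scenario above; it is illustrative only and assumes a running DB2 server:

```shell
# Same session as above, but with an explicit COMMIT after the +c statement.
db2 connect to test
db2 +c "create table a (c1 int)"
db2 commit                  # makes the CREATE TABLE permanent
db2 "select c2 from a"      # still fails: no column C2 in table A...
db2 list tables             # ...but table A is no longer rolled back
db2 connect reset
```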

Tip The various commands and options for the Command Line Processor are explained
in Appendix LE: Lab Exercises Environment of this document.



Administration Client

The Administration Client contains the following GUI tools:


Control Center
Data Warehouse Center
Script Center
Alert Center
Journal
License Center
Stored Procedure Builder


DB2 UDB Administration Client


A DB2 Administration Client provides the ability for workstations from a variety of platforms to
access and administer DB2 databases. The DB2 Administration Client has all the features of the
DB2 Run-Time Client and also includes all the DB2 administration tools and support for Thin
Clients.

Note DB2 Administration Clients are available for the following platforms: AIX, HP-UX,
Linux, the Solaris Operating Environment, and Windows operating systems.

The Administration Client is installed on a client workstation and consists of a suite of GUI
tools that provide remote administration of databases and instances. The administration tools
consist of the following:
Control Center: This is the primary screen from which the tools are accessed.
Data Warehouse Center: This provides a central control point from which to manage the extraction and transformation of data for your data warehouse.



Command Center: This is a GUI command and statement processor screen.
Script Center: This tool allows you to create, save, schedule, and execute scripts for server administration.
Alert Center: This is the central point for alert notification.
Journal: This tool allows you to monitor scripted jobs, server history, and DB2 UDB messages and alerts.
License Center: This screen allows you to display license status and usage history for DB2 UDB products installed on your system.
Stored Procedure Builder: This is a GUI tool designed to simplify the process of building stored procedures.
The Administration Client connects to the DAS instance on the remote server in order to connect
to the instances and databases.
The Administration Client will be used in the modules covering monitoring and performance
tuning.

V8
DB2 V7 tools                     Equivalent DB2 V8 tools
Control Center                   Control Center
Data Warehouse Center            Data Warehouse Center
Command Center                   Command Center
Script Center                    Task Center
Alert Center                     Health Center
Journal                          Journal
License Center                   License Center
Stored Procedure Builder         Development Center
Satellite Administration Center  Satellite Administration Center
Information Center               Information Catalog Center
(none)                           Replication Center



Summary

You should now be able to:


Understand what is included in the Run-Time Client
Use the Client Configuration Assistant
Understand the Command Line Processor Design
Understand what is included in the Administration Client
Note the version 8 client differences



Exercises



Exercise 1
In this exercise you will set up a connection to an existing database, and use the CLP to query
the tables in the database.
1.1 Open a DB2 Command Window, and connect to the sample database.

1.2 Once connected to the sample database, query all columns and all rows in the sales table.



Solutions



Solution 1
In this exercise you will set up a connection to an existing database, and use the CLP to query
the tables in the database.
1.1 Open a DB2 Command Window, and connect to the sample database.
db2 "CONNECT TO sample"

Database Connection Information

Database server = DB2/LINUX 7.2.6


SQL authorization ID = INST101
Local database alias = SAMPLE

1.2 Once connected to the sample database, query all columns and all rows in the sales table.
db2 "SELECT * FROM sales"

SALES_DATE SALES_PERSON REGION SALES


---------- --------------- --------------- -----------
12/31/1995 LUCCHESSI Ontario-South 1
12/31/1995 LEE Ontario-South 3
12/31/1995 LEE Quebec 1
12/31/1995 LEE Manitoba 2
12/31/1995 GOUNOT Quebec 1
03/29/1996 LUCCHESSI Ontario-South 3
03/29/1996 LUCCHESSI Quebec 1
03/29/1996 LEE Ontario-South 2
03/29/1996 LEE Ontario-North 2
03/29/1996 LEE Quebec 3
03/29/1996 LEE Manitoba 5
03/29/1996 GOUNOT Ontario-South 3
.
.
.
04/01/1996 LEE Manitoba 9
04/01/1996 GOUNOT Ontario-South 3
04/01/1996 GOUNOT Ontario-North 1
04/01/1996 GOUNOT Quebec 3
04/01/1996 GOUNOT Manitoba 7

41 record(s) selected.



Module 3

Creating a DB2 UDB Instance

Creating a DB2 UDB Instance 02-2003 3-1


© 2002, 2003 International Business Machines Corporation
Objectives

At the end of this module, you will be able to:


List the requirements to create an instance
Specify the authorization level needed to create an instance
Specify user authority levels for an instance
Create a DB2 UDB instance



Requirements to Create an Instance

Prior to creating an instance, the following users must exist:


SYSADM user
Fenced user
DAS user


Each of these will be discussed in detail in the following pages.



The SYSADM User and Group

A DB2 UDB systems administrator user and systems administrator group (at the operating system level) must exist before the instance can be created
You will need to create:
A systems administrator user (SYSADM)
A systems administrator group (SYSADM_GROUP)


Before a database manager instance can be created, a user must exist to function as the systems
administrator (SYSADM) for the instance. Some thought should be given to the name chosen
for this user, because the name of the database manager instance is the same as the name for this
user. This user also becomes the owner of the instance. When the instance is created, this
user's primary group name is used to set the value of the database manager configuration
parameter SYSADM_GROUP. Any additional users that wish to have SYSADM authority on the
instance must also belong to this group. SYSADM authority has total authority over all
functions for the instance in a similar way that root has total authority on a UNIX system.
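On Linux, for example, the user and group could be created with the standard shadow-utils commands shown below. The names db2grp1 and inst101 are placeholders chosen for illustration, not names required by DB2 UDB, and the commands must be run as root:

```shell
# Create the group that will become SYSADM_GROUP, then the instance owner.
groupadd db2grp1                                  # primary group becomes SYSADM_GROUP
useradd -g db2grp1 -m -d /home/inst101 inst101    # instance owner; home will hold sqllib
passwd inst101                                    # set the owner's password interactively
```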



Fenced User

Create a fenced user:


Allows user-defined functions to run in fenced mode
Prevents a poorly written UDF from corrupting the DB2 UDB
memory structures


Before a database manager instance can be created, a user must exist that can run any user-
defined functions (UDFs) and stored procedures in a fenced mode. This user is necessary since
UDFs are created using the C programming language, which can use pointers to reference
memory addresses outside of their defined memory space. To prevent a poorly written UDF
from corrupting the DB2 UDB memory, UDFs are commonly run in a fenced section of
memory, which prohibits references to memory addresses outside of the fence.



Creating the DAS Instance

During DB2 installation, a DAS is created, which requires a SYSADM user for the DAS:
To remove the DAS, use the dasidrop command (UNIX)
To create the DAS, use the dasicrt command (UNIX)


If this installation of DB2 UDB is a new installation, then a Database Administration Server
(DAS) is created along with the database manager instance. During the installation process, you
are asked to provide a name for the DAS. The user name that you provide becomes the name of
the DAS and the installing user has SYSADM authority on the DAS. In addition, the registry
variable DB2ADMINSERVER is set to the name of the DAS. If you do not plan on using the GUI
administration tools, the DAS is not needed and can be dropped after the database manager
instance has been created.
Drop the DAS using the dasidrop command (UNIX) or the db2admin drop command (INTEL).
If, at a later date, you decide to use the GUI Administration tools and you need to have a DAS,
you can create one using the dasicrt command (UNIX), or the db2admin create command
(INTEL).
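As a sketch, dropping and later re-creating the DAS on UNIX might look like the following; dasusr1 is a hypothetical DAS owner name, and both commands are normally run as root:

```shell
dasidrop dasusr1    # remove the DAS if the GUI tools are not needed
dasicrt dasusr1     # later, re-create a DAS owned by user dasusr1
```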

V8
In DB2 UDB version 8, the DAS is not an instance, but a separate process that manages instances.



The db2icrt Command

Use the db2icrt command to create a database manager instance:


Normally, only root can execute this command

Syntax:
db2icrt -u <fenced_user> <sysadm_user>

Example:
db2icrt -u fence101 inst101


The DB2 command used to create the database manager instance is db2icrt. In the above
example a database manager instance is created with the name inst101, and a fenced user is
created named fence101. The user inst101 is the owner of the instance and is assigned SYSADM
authority over the instance.
In addition, all files associated with the instance, plus any default SMS table spaces, are created
in the $HOME directory for user inst101.

V8
Version 8 of DB2, known as Enterprise Server Edition (ESE), is a combination of the Enterprise Edition (EE) and Enterprise - Extended Edition (EEE) version 7 DB2 products.
It is important to note that by default, V8 will create a multinode instance, used for partitioned databases. To create a single-node instance, use this command:
db2icrt -s ese -u <fenceduser> <instancename>



The db2icrt Command in Detail

The db2icrt command:


Creates the database manager instance
Sets the environment variables DB2INSTANCE and PATH
Creates the /sqllib subdirectory in the $HOME directory of the
SYSADM
Creates the DAS if it is a new installation
Configures communications based on the server's available
protocols
Creates the db2profile and userprofile files


The db2icrt command installs and configures the database manager instance on the server.
Normally only the user root has authority to run this command, but in our classroom
environment, the student logins have been given authority to run this command.
The environment variable DB2INSTANCE is set to the name of the database manager instance and
PATH is set to include the path to the DB2 UDB binary files. A new directory, sqllib, is created
in the $HOME directory of the user specified as the SYSADM.
If it is a new installation, a DAS is created.
The communications protocols that are supported on the server are examined and entries are
made in the operating system services file to allow communications with the DAS and the
database manager instance.
Finally, the files necessary to set environment variables are created. The first of these two files is
db2profile (or db2bashrc or db2cshrc, depending on your shell), which sets the default
environment variables. This file is often overwritten by new versions of DB2 UDB or by
fixpacks, and you should not make any changes to it. The second file is called userprofile and is
provided for your use to set environment variables unique to your installation. It will not be
overwritten by new versions of DB2 UDB or by fixpacks.
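In an interactive shell, these settings are picked up by sourcing the profile with the shell's dot command. The snippet below demonstrates the mechanics with a stand-in file; on a real server you would instead source /home/<instance>/sqllib/db2profile, which db2icrt generates for you:

```shell
# Create a stand-in profile (a real install generates this file for you;
# the instance name inst101 and path are placeholders).
cat > /tmp/db2profile.demo <<'EOF'
DB2INSTANCE=inst101
export DB2INSTANCE
PATH="$PATH:/home/inst101/sqllib/bin"
export PATH
EOF

# Source it with the dot command so the variables land in the current shell.
. /tmp/db2profile.demo
echo "$DB2INSTANCE"    # prints: inst101
```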



The Instance Directory Structure
path / drive
| instance name
| sqllib

| adm
| backup
| cfg
| ctrl
| db2dump
| function
| log
| .netls
| security
| sqldbdir
| tmp


The db2diag.log file resides in the DIAGPATH directory. In the directory structure shown
above, the default location is /instname/sqllib/db2dump.
The amount of detail captured in the db2diag.log file is controlled by the DIAGLEVEL
configuration parameter. This parameter can be set to 0, 1, 2, 3 or 4.
The DBM configuration file (db2systm) resides in /instname/sqllib, but this file is not human-readable, so you cannot edit it directly. You must use the db2 update dbm cfg command.
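For example, the diagnostic level might be raised and then verified as below; this is a sketch that assumes a running instance, and DIAGLEVEL 4 is simply the most verbose setting:

```shell
db2 update dbm cfg using DIAGLEVEL 4    # capture maximum detail in db2diag.log
db2 get dbm cfg | grep -i diaglevel     # confirm the new value
```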

V8
In v8, the db2diag.log file is split into db2diag.log and <instname>.nfy. The admin log file (.nfy) is intended for administrators, while the diagnostic log file is for troubleshooting personnel. Both files' default location is /instname/sqllib/db2dump. A new DBM parameter, NOTIFYLEVEL, controls the level of information in the <instname>.nfy file. This parameter can be set to 0, 1, 2, 3 or 4.

Note Appendix D in this document contains examples of the DBM configuration file
parameters.



Initializing the Instance

Use the db2start command to initialize the instance

The DB2 command attach to <instance_name> establishes an attachment to the instance

Example:
db2 attach to inst101


Now that an instance is installed, it can be started. When the db2start command is executed, the
system reads the value of DB2INSTANCE and starts the specified database manager instance.
The process of starting consists of reading the DBM configuration file and setting up UNIX
processes (called agents) and memory control structures to allow communication with the
instance.
The DB2 attach command assigns agents to an application so that instance-level utilities and
commands, such as CREATE DATABASE, can work. Use the DB2 attach to <instance_name>
command to establish this attachment.
At this point there is still no connection to any of the databases on the instance. The CONNECT
command creates a connection to a database and is discussed in the next module.
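A typical start-up and attachment sequence might therefore look like the following sketch, where inst101 is a placeholder instance name and a DB2 installation is assumed:

```shell
db2start                   # read the DBM cfg, start agents and memory structures
db2 attach to inst101      # assign agents for instance-level commands
db2 get dbm cfg | more     # example of an instance-level command
db2 detach                 # release the attachment when finished
```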



User Authentication and Instance Authorities

DB2 UDB authentication:


Performed at the O/S level
No passwords are stored in the instance

Three levels of instance authority:

SYSADM: Complete authority over the instance, including access to data
SYSCTRL: No access to data and less authority than SYSADM
SYSMAINT: No access to data and less authority than SYSCTRL


DB2 UDB has three levels of instance authority.


SYSADM authority is the highest level of database manager instance authority. All operations are
available to users who have SYSADM authority, including access to the data in the databases.
Users who belong to the group named in the SYSADM_GROUP DBM configuration parameter
have SYSADM authority on the instance.
SYSCTRL authority is the second-highest level of database manager instance authority. Users who have SYSCTRL authority cannot access data in the databases, migrate databases, change the DBM configuration parameters, or read log files. Users who belong to the group named in the SYSCTRL_GROUP DBM configuration parameter have SYSCTRL authority on the instance.

SYSMAINT authority is the third highest level of database manager instance authority. Users who
have SYSMAINT authority have the same restrictions as users with SYSCTRL authority, but in
addition they cannot terminate user sessions, create/drop databases, or create or drop table
spaces. Users who belong to the group named in the SYSMAINT_GROUP DBM configuration
parameter have SYSMAINT authority on the instance.
Unlike an Oracle configuration file, the DBM configuration file used by DB2 UDB cannot be
updated directly, but is updated using the db2 update dbm cfg command.
db2 update dbm cfg using <parameter_name> <value>



For example, the following command assigns SYSMAINT authority to the smaint101 group:
db2 update dbm cfg using sysmaint_group smaint101

System and Database Authority Summary


Function SYSADM SYSCTRL SYSMAINT DBADM
Migrate database X
Update DBM configuration X
Grant/revoke DBADM X
Update db/node/dcs directories X X
Force users off system X X
Create/drop databases X X
Create/drop/alter table spaces X X
Restore to new database X X
Update DB configuration X X X
Backup database or table space X X X
Restore to existing database X X X
Perform rollforward recovery X X X
Start/stop database instance X X X
Restore to table space X X X
Run trace X X X
Get snapshots X X X
Query table space state X X X X
Update log history files X X X X
Quiesce table space X X X X
REORG table X X X X
Run RUNSTATS utility X X X X
Read log files X X
Create/activate/drop event monitors X X



Summary

You should now be able to:


List the requirements to create an instance
Specify the authorization level needed to create an instance
Specify user authority levels for an instance
Create a DB2 UDB instance



Exercises



Exercise 1
In this exercise, you will create your instance.
NOTE: Your instructor will provide you with a student number between 101 and 416. This
student number will be used to name your training server logins and your instance. For example,
if your instance administrator login is inst### and your student number is 101, then your
instance administrator login becomes inst101.
For simplicity, the password for all logins is the same as the user name. For example, the
password for the login inst101 is inst101. If for some reason the passwords are different, your
instructor will give you the new passwords. There are blanks below to write in these new
passwords.
Your instructor will provide the following information:
Windows workstation login: _________
Windows workstation password: _________
Student number: _________
Training server: _____________________________
Instance administrator (inst###) password: _______________________
Fenced user (fence###) password: ______________________________
If you are working on a stand-alone Linux laptop, log in as root (your instructor will give you
the password), skip exercises 1.1 through 1.3, and proceed to exercise 1.4.
1.1 Log in to the Windows workstation.

1.2 Double-click the telnet icon and select the name of the training server supplied by your
instructor.

1.3 Log in to the training server as the instance administrator.



1.4 Before an instance can be created, two users need to exist. One is the instance
administrator and the other is the fenced user. Since you were able to log in as the
instance administrator, it exists. However you need to get the group name, which will be
used to define the system administrators for your instance. To obtain the group name,
type:
id inst###
group name: ______________________
To verify that the fenced user exists, type:
id fence###

Note The db2icrt command can only be run as root, so when you run db2icrt in the IBM
classroom, you will automatically run it with root permissions.

1.5 Now, create the instance. The command to create the instance is:
db2icrt -u fence### inst###
Where fence### is your fenced user, and inst### is your instance administrator. It may
be necessary to type the entire path to the db2icrt command, which would be:
For Linux:
/usr/IBMdb2/V7.1/instance/db2icrt -u fence### inst###
For Solaris:
/opt/IBMdb2/V7.1/instance/db2icrt -u fence### inst###
This could take a few minutes to execute. When it completes, you will get the message:
Program db2icrt completed successfully



Exercise 2
In this exercise, you will explore the DB2 UDB UNIX environment.
If you are working on a stand-alone Linux laptop, switch from being the root user to the instance
administrator by typing the following command. Be sure to include the dash (-) so that your
profile is set up, and replace the ### with your student number.
su - inst###

2.1 When db2icrt ran, it created a directory called sqllib in the home directory of inst###.
To examine this directory type:
cd /home/inst###/sqllib
ls -l |more

2.2 There are some environment variables that need to be set in order to use DB2 UDB, such
as DB2INSTANCE and PATH. When db2icrt ran, it created a file called db2profile (or
db2bashrc), which is used to set default DB2 UDB environment variables. Examine this
file to determine how these variables are being set:
more db2profile

2.3 The db2profile file may be overwritten with subsequent releases of DB2 UDB or with
fixpacks, so if any other environment variables need to be set, they can be established in
a file called userprofile. Examine userprofile:
more userprofile

2.4 When db2icrt ran it modified your .profile (or .bashrc) file to execute the db2profile
command file. Examine your .profile:
cd
more .profile
or:
more .bashrc



Exercise 3
Explore the DB2 UDB instance configuration (DBM Configuration file).
3.1 Find the options available for the DB2 command by typing:
db2 ? | more

3.2 Find the options available for the db2 get command by typing:
db2 ? get | more

3.3 Obtain the instance name:


db2 get instance

3.4 Obtain the values for the database manager configuration file:
db2 get dbm cfg | more

Tip Consult the Appendix for example DBM and DB configuration file values.



Exercise 4
Explore the DB2 registry variables.
4.1 Examine the registry variables by typing:
db2set -all

4.2 What is the value of DB2SYSTEM?

4.3 What is the value of DB2ADMINSERVER?

4.4 What is the value of DB2COMM?

Note If DB2COMM is not set, see the solutions of this exercise for instructions to set it.



Exercise 5
Initialize your instance.
5.1 Initialize your instance using the db2start command. If you receive an error about
communications, contact your instructor. If you receive an error about evaluation
periods, ignore it.

5.2 In a DB2 Command Window, change to your student HOME directory and list it.
Change to your instance directory and list it. Finally, change to the db2dump directory,
and list it.

5.3 Use the view command to see the contents of the db2diag.log file. Find the following
information:
Timestamp of the last time the Database manager was started ____________.
PID of the last db2star2 (startdbm) process _________________.



Solutions



Solution 1
1.1 Log in to the Windows workstation.

1.2 Double-click the telnet icon and select the name of the training server supplied by your
instructor.

1.3 Log in to the training server as the instance administrator.

1.4 Before an instance can be created, two users need to exist. One is the instance
administrator and one is the fenced user, whose purpose will be discussed later. Since
you were able to log in as the instance administrator, it exists. However you need to get
the group name, which will be used to define the system administrators for your instance.
To obtain the group name type:
id inst###
group name: ______________________
Write the NAME (not the number) of the GID into the blank.
To verify that the fenced user exists, type:
id fence###

1.5 Now, create the instance. The command to create the instance is:
db2icrt -u fence### inst###
Where fence### is your fenced user, and inst### is your instance administrator. It may
be necessary to type the entire path to the db2icrt command, which would be:
For Linux:
/usr/IBMdb2/V7.1/instance/db2icrt -u fence### inst###
For Solaris:
/opt/IBMdb2/V7.1/instance/db2icrt -u fence### inst###
This could take a few minutes to execute. When it completes, you will get the message:
Program db2icrt completed successfully
Several minutes can go by while the instance is being created. The program should
end by giving you a message that the program completed successfully. If not,
contact your instructor.



Solution 2
You will explore the DB2 UDB UNIX environment in this exercise.
If you are working on a stand-alone Linux laptop, switch from being the root user to being the
instance administrator by typing the following command. Be sure to include the dash (-) so that
your profile is set up, and replace the ### with your student number.
su - inst###

2.1 When db2icrt ran, it created a directory called sqllib in the home directory of inst###.
To examine this directory type:
cd /home/inst###/sqllib
ls -l |more

2.2 There are some environment variables that need to be set in order to use DB2 UDB, such
as DB2INSTANCE and PATH. When db2icrt ran, it created a file called db2profile (or
db2bashrc), which is used to set default DB2 UDB environment variables. Examine this
file to determine how these variables are being set:
more db2profile
You should see the shell scripts used to set up the values for DB2INSTANCE and
PATH

2.3 The db2profile file may be overwritten with subsequent releases of DB2 UDB or with
fixpacks, so if any other environment variables need to be set, they can be established in
a file called userprofile. Examine userprofile:
more userprofile
The userprofile file will most likely be blank, since no other environment variables
are being set on the server.

2.4 When db2icrt ran it modified your .profile (or .bashrc) file to execute the db2profile
command file. Examine your .profile:
cd
more .profile
or:
more .bashrc
You should see some lines added that execute the db2profile file.



Solution 3
Explore the DB2 UDB instance configuration (DBM Configuration file).
3.1 Find the options available for the DB2 command by typing:
db2 ? | more

3.2 Find the options available for the db2 get command by typing:
db2 ? get | more

3.3 Obtain the instance name:


db2 get instance
The command should return the name of the database manager instance.

3.4 Obtain the values for the database manager configuration file:
db2 get dbm cfg | more
The database manager (DBM) configuration parameters should be returned. Note
that SYSADM_GROUP has been set to the group name that you obtained earlier.
Also note that DFTDBPATH has been set to the home directory of the instance
owner (the SYSADM user that created the instance). This is where databases will be created by
default.

Tip Consult the Appendix for example DBM and DB configuration file values.

3-26 Creating a DB2 UDB Instance


Solution 4
Explore the DB2 registry variables.
4.1 Examine the registry variables by typing:
db2set -all

4.2 What is the value of DB2SYSTEM?


DB2SYSTEM should be the name of your UNIX host. The [g] indicates it is a global
level registry variable.

4.3 What is the value of DB2ADMINSERVER?


DB2ADMINSERVER should be the name of your database administration server
(DAS) instance which controls remote administration of database manager
instances.

4.4 What is the value of DB2COMM?


DB2COMM should be TCPIP, and the [i] indicates it is an instance level registry
variable. If DB2COMM is not set, then execute the following commands.
To set the SVCENAME database manager configuration parameter type:
db2 update dbm cfg using svcename inst###_tcp
where ### is your student number. For example, for student inst101:
db2 update dbm cfg using svcename inst101_tcp
To verify that the configuration parameter was set type:
db2 get dbm cfg |more
To set the DB2COMM instance registry variable type:
db2set DB2COMM=TCPIP -i inst###
To verify that the registry variable was set type:
db2set -all
To verify that the service name inst###_tcp is in the /etc/services file type:
grep inst###_tcp /etc/services
If it is not, contact your instructor.

Creating a DB2 UDB Instance 3-27


Solution 5
Initialize your instance.
5.1 Initialize your instance using the db2start command. If you receive an error about
communications, contact your instructor. If you receive an error about evaluation
periods, ignore it.
The db2start command may take a few minutes to execute while it allocates
memory. It should return a message that says the program completed processing
successfully. If this does not happen, contact your instructor.

5.2 In a DB2 Command Window, change to your student HOME directory and list it.
Change to your instance directory and list it. Finally, change to the db2dump directory,
and list it.
cd
ls -l
cd sqllib
ls
cd db2dump
ls -l
-rw-rw-rw- 1 insta8 insta8 361 Jan 22 09:48 db2diag.log
-rw-r----- 1 insta8 insta8 5242044 Jan 22 09:47 db2eventlog.000
-rw-rw-rw- 1 insta8 insta8 363 Jan 22 09:48 insta8.nfy

5.3 Use the view command to see the contents of the db2diag.log file. Find the following
information:
Timestamp of the last time the Database manager was started ____________.
PID of the last db2star2 (startdbm) process _________________.

3-28 Creating a DB2 UDB Instance


Module 4

Creating a Database

Creating a Database 02-2003 4-1


© 2002, 2003 International Business Machines Corporation
Objectives

At the end of this module, you will be able to:


Describe how buffer pools are allocated
Explain the attributes of a DB2 UDB database
Understand the purpose of schemas
Specify the authorization levels needed to create a database
Create a DB2 UDB database using the command line processor

4-2

4-2 Creating a Database


Buffer Pool Allocation

Each database requires at least one buffer pool


A buffer pool may be assigned to a specific table space
The buffer pool page size must match the table space page size

4-3

Buffer pools are allocated at the database level in DB2 UDB, and every database must have at
least one buffer pool. One default buffer pool, IBMDEFAULTBP, is created when the CREATE
DATABASE command is processed. Additional buffer pools can be created using the CREATE
BUFFERPOOL SQL statement. For example, the syntax to create a buffer pool named bp_four_k
with 10,000 pages and a page size of 4096 bytes is shown below:
Syntax:
CREATE BUFFERPOOL bufferpool_name
SIZE number_of_pages
PAGESIZE integer K
Example:
CREATE BUFFERPOOL bp_four_k
SIZE 10000
PAGESIZE 4K
Once a buffer pool has been created, it can be associated with one or more table spaces. Use the
following syntax to create (or alter) a DMS table space to associate it with this buffer pool:

Creating a Database 4-3


Syntax:
CREATE TABLESPACE tablespace_name
MANAGED BY DATABASE
USING (DEVICE 'device_string' number_of_pages)
EXTENTSIZE number_of_pages
PREFETCHSIZE number_of_pages
BUFFERPOOL bufferpool_name

Example:
CREATE TABLESPACE customer_tablespace
MANAGED BY DATABASE
USING (DEVICE '/dev/rdsk/device101' 16000)
EXTENTSIZE 16
PREFETCHSIZE 32
BUFFERPOOL bp_four_k

To alter an existing DMS table space:


Syntax:
ALTER TABLESPACE tablespace_name
BUFFERPOOL buffer_pool_name

Example:
ALTER TABLESPACE customer_tablespace
BUFFERPOOL bp_four_k

There are several reasons you might want to have multiple buffer pools. If your database has a
relatively large table that is randomly accessed (in which case data caching would be of limited
value), creating a small buffer pool for this table space would prevent the pages from being
cached in the main buffer pool. If your database has a table space with an 8K, 16K, or 32K page
size, then you need a buffer pool with a page size to match. At least one buffer pool must exist
for each page size used in the database.
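For example, a table space that uses an 8K page size would need a buffer pool created with a matching PAGESIZE. A sketch is shown below (the names and sizes are illustrative):
CREATE BUFFERPOOL bp_eight_k
SIZE 5000
PAGESIZE 8K
CREATE TABLESPACE sales_tablespace
PAGESIZE 8K
MANAGED BY DATABASE
USING (DEVICE '/dev/rdsk/device102' 16000)
BUFFERPOOL bp_eight_k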

Oracle Buffer Pools


Oracle 8 introduced the idea of three separate buffer pools which, in effect, offer three separate LRU
chains so that objects with different usage characteristics can age out of the buffer in different ways.
These buffer pools are named the default pool, the keep pool, and the recycle pool.

4-4 Creating a Database


Attributes of DB2 UDB Databases

Some attributes equivalent to ANSI-compliant databases:


Log buffer is flushed on COMMIT (unbuffered logging)
Fully qualified table names are required

Some attributes NOT equivalent to ANSI-compliant databases:


Logging can be selectively turned off
Create table privilege is automatically granted to PUBLIC

4-5

There is only one type of database that is available with DB2 UDB, and it has several of the
attributes of an ANSI-compliant database.
There is only one log buffer per database. It is flushed when a COMMIT or ROLLBACK statement is
executed or it becomes full.
Table names must include the owner and are considered fully-qualified. For example, the
following SELECT statement will not run in DB2 UDB (unless martha is the user):
SELECT * FROM customer;

The owner needs to be included in the table name:


SELECT * FROM martha.customer;

ANSI-compliant databases, by definition, cannot have logging turned off. In DB2 UDB,
however, logging can be turned off selectively for specific tables (for example during a load),
and DB2 UDB does not follow the strict ANSI logging standard.
ANSI-compliant databases do not automatically grant permission to the user PUBLIC. However,
in DB2 UDB, the user PUBLIC is granted a generous set of default permissions when a database
object is created.

Creating a Database 4-5


Database Schemas

DB2 UDB uses schemas:


The schema name is the owner of the object
Schema names are assigned the login name of the user that
creates the object
Identical table names can exist between different schemas

4-6

One of the attributes of an ANSI database is the fully qualified table name. DB2 UDB
implements this attribute using schemas. For example, suppose that the user bobjones creates a
table called customer. In the database, this table would have the name bobjones.customer.
Next, suppose that the user martha creates a table called customer. Her table would be named
martha.customer in the database. When bobjones accessed the customer table he would access
the bobjones.customer table. If bobjones attempted to access the martha.customer table, he
could receive an error, depending on permissions granted to him on that table.
It is conceivable that one database could contain multiple schemas; however, for simplicity, most
DB2 UDB databases have only one user-defined schema.
There are several system-defined schemas created for a database. They are:
NULLID
SYSCAT
SYSFUN
SYSIBM
SYSSTAT
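The bobjones/martha scenario described above can be sketched with the following statements (the column definitions are illustrative):
-- connected as bobjones
CREATE TABLE customer (cust_id INTEGER, name VARCHAR(30))
-- connected as martha
CREATE TABLE customer (cust_id INTEGER, name VARCHAR(30))
-- either user can then reference a specific table by its qualified name
SELECT * FROM martha.customer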

4-6 Creating a Database


Authority to Create Databases

Only the SYSADM and SYSCTRL authorities can create databases.

4-7

In DB2 UDB only users in the SYSADM or the SYSCTRL user groups can create databases. When
a user creates a database, they are given database administrator (DBADM) authority on that
database.
However, only users with SYSADM authority on the instance can grant DBADM authority on
databases. Therefore, it is possible for a user with SYSCTRL authority to create a database and be
granted DBADM authority on the database, but not be able to grant DBADM to other users. In
addition, when SYSADM authority is revoked at the instance level, DBADM authority is not
revoked at the database level. Therefore, it is again possible for a user to have DBADM authority
on a database but not be able to grant DBADM authority to other users.
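As a sketch of the behavior described above, a user holding SYSADM authority could grant DBADM on the course database as follows (the user name is illustrative):
db2 connect to storesdb
db2 grant dbadm on database to user bobjones
A user with only SYSCTRL authority issuing the same GRANT would receive an authorization error, since only SYSADM can grant DBADM.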

Oracle restrictions on CREATE DATABASE


With Oracle, the Oracle software owner account (usually named oracle) is the account used
to install the Oracle software. Different accounts can be used for totally separate installations of
the software, but you must use the same account that installed the software for all subsequent
maintenance tasks on that installation. They generally recommend that the software owner has
the ORAINVENTORY group as its primary group and the OSDBA group as its secondary group.

Creating a Database 4-7


Authorities versus Privileges

Authorities:
Exist at the instance level
Exist at the database level

Privileges:
Exist for actions on database objects

4-8

Once a user is connected to a database, any actions that they can execute are controlled by
privileges granted to them on the database objects. The following is a partial list of privileges
available for database objects. Due to time limitations, it is beyond the scope of this class to
describe all of the privileges available on all of the database objects. However, selected
privileges will be discussed throughout the rest of this course in conjunction with other topics.
A point to be made here is that privileges are more granular and give you finer control over a
DB2 UDB database than you have with an Oracle database.
For a complete discussion of authorities and privileges, you can attend the DB2 Universal
Database Administration Workshop, which is available for the following operating systems:
Linux (CF20)
UNIX (CF21)
Windows NT (CF23)
Solaris (CF27)
There is also a CBT self-study course, Fast Path to DB2 UDB for Experienced Relational DBAs
(CT28), which contains a superb explanation of privileges and is available for download free of
charge at:
www.ibm.com/software/data/db2/selfstudy

4-8 Creating a Database


Privileges that exist for database objects include:
CREATE NOT FENCED (database)
BINDADD (database)
CONNECT (database)
CREATETAB (database)
CONTROL (indexes)
IMPLICIT_SCHEMA (database)
CONTROL (packages)
BIND, EXECUTE
CONTROL (tables)
ALL, ALTER, DELETE, INDEX, INSERT, REFERENCES, SELECT, UPDATE
CONTROL (views)
ALL, DELETE, INSERT, SELECT, UPDATE
ALTERIN, CREATIN, DROPIN (schema owners)

Two new privileges added in version 8 can be used when creating and executing
external programs containing user-defined functions. They are:
CREATE_EXTERNAL_ROUTINE
EXECUTE (UDF privileges)

Creating a Database 4-9


Create a Database

Execute either of these two command line statements:


db2 create db database_name
db2 create database database_name

4-10

In DB2 UDB the CREATE DATABASE statement is a command line statement and not an SQL
statement. Either of the CREATE DATABASE commands shown above creates a database using the
default settings.

4-10 Creating a Database


CREATE DATABASE Actions

When the CREATE DATABASE command is executed, the instance:


Creates a subdirectory to hold the database information
Creates a database configuration file with default settings
Creates SYSCATSPACE, TEMPSPACE1, and USERSPACE1
table spaces
Creates system catalog tables in the SYSCATSPACE table
space
Creates SYSCAT, SYSFUN, and SYSSTAT schemas
Grants DBADM authority to the database creator
Grants selected database privileges to PUBLIC

4-11

When the CREATE DATABASE statement is executed, the database manager (instance) executes
the actions listed above. Each of these actions will be described in the following pages.

Creating a Database 4-11


Database Subdirectory Structure

The database manager creates a subdirectory:


path / drive
| instance name
| NODE0000
| SQL00001
| Contains database files and table spaces

4-12

The instance creates a subdirectory to store the database files and table spaces. This subdirectory
is named /NODE0000/SQL00001 for the first database created on the instance. The starting point
for this subdirectory can be an option specified in the CREATE DATABASE statement, or the
default is the value of the DFTDBPATH DBM configuration parameter.
The next few pages show the subdirectories created under the database subdirectory.

4-12 Creating a Database


Database Configuration File

Database configuration file:


Initially set to default settings
Cannot be directly accessed

4-13

The database configuration file that is created by the database manager is initially populated
with default settings. Once the database has been created, the configuration file can be modified
with the db2 update db cfg command.

Important!
The database configuration file cannot be accessed directly using an editor. The db2
update db cfg using <parameter> <parameter_value> command must be used
instead.

Note Appendix D in this document contains examples of the DB configuration file parameters.
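For example, to change the number of primary log files for the storesdb database created in the exercises and then verify the change (the value 5 is illustrative):
db2 update db cfg for storesdb using LOGPRIMARY 5
db2 get db cfg for storesdb | more
As noted above, the change does not take effect until all applications have disconnected and the database is restarted.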

Creating a Database 4-13


Creating Default Table Spaces
path / drive
| instance name
| NODE0000
| SQL0001
| SQLT0000.0 (container) SYSCATSPACE (table space)
| SQLT0001.0 (container) TEMPSPACE1 (table space)
| SQLT0002.0 (container) USERSPACE1 (table space)
| db2event stores abnormal events, such as deadlock info
| SQLOGDIR primary transaction logs

4-14

When a database is created, three SMS table spaces are created by default in the subdirectory
defined for the database. There are options available in the CREATE DATABASE command to
create the default table spaces as DMS table spaces or to create them in different locations.
The database configuration file, SQLDBCON, resides in the /NODE0000/SQL00001
subdirectory. The default location of the database log files is /NODE0000/SQL00001/
SQLOGDIR for the first database created.

4-14 Creating a Database


Creating System Catalog Tables

System catalog tables:


Define the database
Are created in SYSCATSPACE table space
Consist of SYSCAT, SYSSTAT, and SYSFUN schemas
Are actually views into SYSIBM schema tables

4-15

The system catalog tables contain all the information necessary to define the database and are
created in the SYSCATSPACE table space. The SYSCAT, SYSFUN, and SYSSTAT tables are
actually views into underlying tables, which belong to the SYSIBM schema. The SYSCAT
views contain all of the data necessary to define the database objects. The SYSFUN views
contain all the data for functions, and the SYSSTAT views contain all of the statistical
information used by the optimizer to determine query plans.
The system catalog views and tables cannot be explicitly created or dropped and have select
privilege granted to PUBLIC by default.
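Because the catalog is exposed as views, it can be queried with ordinary SQL. For example, to list the non-system tables in a database by schema (this relies only on standard SYSCAT.TABLES columns):
db2 "SELECT tabschema, tabname, type FROM syscat.tables WHERE tabschema NOT LIKE 'SYS%'"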

Creating a Database 4-15


Granting Database Administrator Authority

Database administrator authority (DBADM):


Is initially granted to the database creator
Can only be granted by users with SYSADM authority

4-16

DBADM authority is initially granted to the creator of the database. In addition, the CONNECT,
CREATETAB, BINDADD, IMPLICIT_SCHEMA, CREATE_NOT_FENCED, and LOAD privileges are
granted to the database creator.
If the DBADM authority is revoked from the database creator, the other privileges still remain
and must also be explicitly revoked.

4-16 Creating a Database


Privileges Granted to PUBLIC

PUBLIC is granted:
SELECT privilege on system catalog tables and views
CONNECT, IMPLICIT_SCHEMA, CREATETAB, and BINDADD
BIND and EXECUTE privilege on each utility
USE privilege on USERSPACE1 table space

4-17

When a database is created, the user PUBLIC is granted several privileges by default.
SELECT privilege is granted on all of the system catalog tables.
The CONNECT privilege allows access to the database.
The IMPLICIT_SCHEMA privilege allows a user to create schemas.
The CREATETAB privilege allows a user to create tables within the database.
The BINDADD privilege allows a user to generate packages.
The BIND privilege is really a rebind privilege, since the package must first be created
using the BINDADD privilege.
The EXECUTE privilege allows a user to execute an existing package.
The USE privilege allows a user to specify (or default to) a table space when creating a
table.
Privileges can be granted to individual users or to groups of users (defined at the operating
system level). This functionality is roughly equivalent to an Oracle ROLE.
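Sites that want tighter control often revoke some of these default privileges from PUBLIC after creating a database. A sketch, using the course database:
db2 connect to storesdb
db2 revoke createtab, bindadd on database from public
Privileges such as CONNECT can then be granted back to individual users or groups as needed.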

Creating a Database 4-17


Database Startup and Shutdown

Database startup              Database shutdown

db2 connect to <dbname>       After last connection disconnects

db2 activate db <dbname>      db2 deactivate db <dbname>
                              (with no connections)

db2                           db2 terminate
                              (with no connections)

4-18

The commands and conditions listed above show the three methods for starting up a database
and the associated command or condition required to shut down the database.

Connect command
If a database is not yet running, the db2 connect command will start up the database and then
establish a connection for the application issuing the command. After the database is running,
additional applications can connect to and subsequently disconnect from the database,
including the application that made the initial connection. The database will remain running as
long as there is at least one connection attached to it. When all the applications have
disconnected from the database, the database will automatically shut down.
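The connect-driven life cycle can be observed from the CLP. A sketch, using the course database:
db2 connect to storesdb
db2 terminate
The first command starts the database (if it is not already running) and connects to it; after db2 terminate ends the CLP back-end connection, the database shuts down once no other connections remain.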

4-18 Creating a Database


Activate command
If an application executes the db2 activate command, the database will start up. After the
database is running, additional applications can connect and subsequently disconnect from the
database including the initial connection. But in this case, the database will continue to run even
after all applications have disconnected. The database will continue to run until the db2
deactivate command is executed and all applications have disconnected from the database. If
there are connections to the database when the db2 deactivate command is issued, the database
will wait until all applications have disconnected and then shut down.
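For example, to keep the course database running independently of connections and then shut it down explicitly:
db2 activate db storesdb
db2 connect to storesdb
db2 terminate
db2 deactivate db storesdb
Here the database stays up after db2 terminate because it was explicitly activated; it shuts down only once the deactivate command has been issued and no connections remain.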

DB2 Command Line Processor (CLP)


When the Command Line Processor (CLP) is used to operate the database, the database is
started and back-end processes are started on the database to handle the requests from the CLP.
The database will remain running until the CLP back-end processes are specifically stopped with
the db2 terminate command. If there are other connections to the database when the terminate
command is executed, the database will continue running until all applications have
disconnected, and then it will shut down.

Creating a Database 4-19


QUIT vs. TERMINATE vs. CONNECT RESET

CLP COMMAND      Terminates CLP Back-end Process   Disconnects Database Connection

quit             No                                No

terminate        Yes                               Yes

connect reset    No                                Yes, if CONNECT=1 (RUOW)

4-20

There are several ways to finish your DB2 session.


As shown above, simply issuing a quit command while in the CLP does not terminate your
resource use of the server.
For a clean separation, like when you want new database parameter values to take effect, you
must terminate your database connection.
You may also need to force other applications off the server.
Example commands:
db2=> quit
$
db2 terminate
$
db2 force applications all
$

4-20 Creating a Database


Summary

You should now be able to:


Describe how buffer pools are allocated
Explain the attributes of a DB2 UDB database
Understand the purpose of schemas
Specify the authorization levels needed to create a database
Create a DB2 UDB database using the command line processor

4-21

Creating a Database 4-21


4-22 Creating a Database
Exercises

Creating a Database 4-23


Exercise 1
In this exercise, you will create a database that will be used for the rest of the course.
1.1 Creating a database can be done with the DB2 command create db. Do not create the
database until instructed to do so. Use the online help utility to find the syntax and
options for the DB2 create db command.
db2 ? create db

1.2 What information can you specify for the create db command?
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________

1.3 What three default table spaces will be created?


_______________________________________________________________
_______________________________________________________________
_______________________________________________________________

1.4 What type of table space (SMS or DMS) will they be by default?
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________

1.5 What is the default path on which the database is created?


_______________________________________________________________

1.6 Create your database with a name of storesdb using the default settings. It may take a
few minutes to create the database.
cd $HOME
db2 create db storesdb

4-24 Creating a Database


1.7 The System Database Directory maintains a list of databases that are known to the
instance. Verify that the database was created by using the DB2 command list db
directory:
db2 list db directory

1.8 After the database is created you need to connect to it. To find the current connection
state for the database use the DB2 command get connection state:
db2 get connection state

1.9 What is the current connection state?


_______________________________________________________________

1.10 Connect to the storesdb database using the DB2 connect command:
db2 connect to storesdb

1.11 What is the connection state now?


_______________________________________________________________

Creating a Database 4-25


Exercise 2
Examine the configuration file and physical layout of the database.
2.1 Every database has its own configuration file. Use the DB2 command get db cfg to
examine the file for the storesdb database:
db2 get db cfg for storesdb | more

2.2 Some of the configuration parameters can be changed. Update the MAXAPPLS parameter
to allow for a maximum of 50 connections.
db2 update db cfg for storesdb using maxappls 50

2.3 Changes to the configuration file do not take effect until the database is deactivated and
activated again. Deactivate and activate the database by using the following commands:
db2 terminate
db2 force application all
db2 connect to storesdb

2.4 List the names and ID numbers for the three table spaces that were created during
creation of the database:
db2 list tablespaces |more
Table space name ID number

2.5 List the container information for table space ID 0 using the DB2 command list
tablespace containers. What is the physical location of the container?
db2 list tablespace containers for 0

_______________________________________________________________

2.6 To look at the files used to store tables, change to the SQLT0000.0 directory and list it:
cd /home/inst###/inst###/NODE0000/SQL00001/SQLT0000.0
ls -l |more

4-26 Creating a Database


2.7 By selecting tableid from the system table syscat.tables, you can map tables to files.
The file names are derived from the tableid in the format SQLXXXXX.DAT for data
files, .INX for index files, and .LB for LOB files. Run the following SELECT statement
to find the table ID for SYSIBM.SYSTABLES:
db2 "SELECT SUBSTR(tabname,1,18)
AS table_name, tableid
FROM syscat.tables
WHERE tabschema = 'SYSIBM' ORDER BY 2" | more

SYSIBM.SYSTABLES.tableid = ________

Note Type the SELECT as one line. When the ENTER key is pressed, DB2 considers the
command as complete and starts processing it.

2.8 Does the SYSIBM.SYSTABLES table have any index or LOB data?

2.9 What are the default path containers for the TEMPSPACE1 (table space 1) table space
and the USERSPACE1 (table space 2) table space?
db2 list tablespace containers for 1
db2 list tablespace containers for 2

2.10 In a DB2 Command Window, change to your student HOME directory, then change to
your database directory, then change to the NODE0000 directory and list it. Change to
your database top directory SQL00001 (the first one you created) and list it.

2.11 Identify the table space storage containers.


SYSCATSPACE container ____________
TEMPSPACE1 container ____________
USERSPACE1 container ____________

2.12 How many transaction log files are there?


Which directory are they in?

Creating a Database 4-27


4-28 Creating a Database
Solutions

Creating a Database 4-29


Solution 1
In this exercise, you will create a database that will be used for the rest of the course.
1.1 Creating a database can be done with the DB2 command create db. Do not create the
database until instructed to do so. Use the online help utility to find the syntax and
options for the DB2 create db command.
db2 ? create db

1.2 What information can you specify for the create db command?
The name and location of the database, an alias name, the codeset and territory for
storing the data, a collating sequence, and table space information such as extent
size and location can all be specified using the create db command.

1.3 What three default table spaces will be created?


SYSCATSPACE for storing system catalog tables, USERSPACE1 for storing user-defined
tables, and TEMPSPACE1 for storing temporary tables.

1.4 What type of table space (SMS or DMS) will they be by default?
SMS.

1.5 What is the default path on which the database is created?
By default, the database will be created in the path specified in the DFTDBPATH
database manager configuration parameter. This should be the /home/inst###
directory.

1.6 Create your database with a name of storesdb using the default settings. It may take a
few minutes to create the database.
cd $HOME
db2 create db storesdb
This can take several minutes to execute since the disk space needs to be initialized
for all of the system catalog tables, the user space, and the temp space. The
command should return a message saying that the command completed
successfully. If not, contact your instructor.

4-30 Creating a Database


1.7 The System Database Directory maintains a list of databases that are known to the
instance. Verify that the database was created by using the DB2 command list db
directory:
db2 list db directory
You should see a listing of the database name, alias, and other information. A
directory entry of indirect means this database is local to this system. A directory
entry of remote means that this database is located on a remote system.

1.8 After the database is created you need to connect to it. To find the current connection
state for the database use the DB2 command get connection state:
db2 get connection state

1.9 What is the current connection state?


The connection state is Connectable and Unconnected.

1.10 Connect to the storesdb database using the DB2 connect command:
db2 connect to storesdb

1.11 What is the connection state now?


The connection state is Connectable and Connected.

Creating a Database 4-31


Solution 2
Examine the configuration file and physical layout of the database.
2.1 Every database has its own configuration file. Use the DB2 command get db cfg to
examine the file for the storesdb database:
db2 get db cfg for storesdb |more
The command returns a listing of the database configuration file parameters.

2.2 Some of the configuration parameters can be changed. Update the MAXAPPLS parameter
to allow for a maximum of 50 connections.
db2 update db cfg for storesdb using maxappls 50
The command should return a message indicating that it completed successfully.

2.3 Changes to the configuration file do not take effect until the database is deactivated and
activated again. Deactivate and activate the database by using the following commands:
db2 terminate
db2 force application all
db2 connect to storesdb

2.4 List the names and ID numbers for the three table spaces that were created during
creation of the database:
db2 list tablespaces |more
Table space name ID number
SYSCATSPACE 0
TEMPSPACE1 1
USERSPACE1 2

2.5 List the container information for table space ID 0 using the DB2 command list
tablespace containers. What is the physical location of the container?
db2 list tablespace containers for 0
This is an SMS table space so the container is a directory which is /home/inst###/
inst###/NODE0000/SQL00001/SQLT0000.0. The files that make up the tables will
be located in this directory.

4-32 Creating a Database


2.6 To look at the files used to store tables, change to the SQLT0000.0 directory and list it:
cd /home/inst###/inst###/NODE0000/SQL00001/SQLT0000.0
ls -l |more
You should see a listing of files that have names like SQL00001.DAT,
SQL00001.INX, and SQL00001.LB. The .DAT files contain table row data, the .INX
files contain index data for one or more indexes per table, and the .LB or .LBA files
contain LOB data.

2.7 By selecting tableid from the system table syscat.tables, we can map tables to files. The
file names are derived from the tableid in the format SQLXXXXX.DAT for data files,
.INX for index files, and .LB for LOB files. Run the following SELECT statement to
find the table ID for SYSIBM.SYSTABLES:
db2 "SELECT SUBSTR(tabname,1,18)
AS table_name, tableid
FROM syscat.tables
WHERE tabschema ='SYSIBM' ORDER BY 2" | more

SYSIBM.SYSTABLES.tableid = 2
Table ID = 2 for SYSIBM.SYSTABLES so the file SQL00002.DAT would contain
the row data. Any index data would be stored in a file named SQL00002.INX, and
any LOB data would be in SQL00002.LB or SQL00002.LBA.

Note Type the SELECT as one line. When the ENTER key is pressed, DB2 considers the
command as complete and starts processing it.

2.8 Does the SYSIBM.SYSTABLES table have any index or LOB data?
Yes. There is an SQL00002.INX file and an SQL00002.LB file.

2.9 What are the default path containers for the TEMPSPACE1 (table space 1) table space
and the USERSPACE1 (table space 2) table space?
db2 list tablespace containers for 1
db2 list tablespace containers for 2
These are both SMS table spaces so the containers will be directories. The container
for TEMPSPACE1 is /home/inst###/inst###/NODE0000/SQL00001/SQLT0001.0,
and the container for USERSPACE1 is /home/inst###/inst###/NODE0000/
SQL00001/SQLT0002.0.

Creating a Database 4-33


2.10 In a DB2 Command Window, change to your student HOME directory, then change to
your database directory, then change to the NODE0000 directory and list it. Change to
your database top directory SQL00001 (the first one you created) and list it.
cd
cd <instname>/NODE0000
ls -l
drwxr-x--- 9 insta8 insta8 4096 Jan 21 17:37 SQL00001
drwxr-x--- 7 insta8 insta8 4096 Jan 22 08:19 SQL00002
drwxrwxr-x 2 insta8 insta8 4096 Jan 21 17:35 sqldbdir
cd SQL00001
ls -l
-rw------- 1 inst101 inst101 512 Jan 20 11:35 SQLBP.1
-rw------- 1 inst101 inst101 512 Jan 20 11:35 SQLBP.2
-rw------- 1 inst101 inst101 4096 Jan 20 11:35 SQLDBCON
-rw-r----- 1 inst101 inst101 9 Jan 20 11:36 SQLINSLK
-rw------- 1 inst101 inst101 24576 Jan 20 11:36 SQLOGCTL.LFH
drwxr-x--- 2 inst101 inst101 4096 Jan 20 11:35 SQLOGDIR
-rw------- 1 inst101 inst101 196608 Jan 20 11:36 SQLSPCS.1
-rw------- 1 inst101 inst101 196608 Jan 20 11:36 SQLSPCS.2
drwxr-x--- 2 inst101 inst101 4096 Jan 20 11:35 SQLT0000.0
drwxr-x--- 2 inst101 inst101 4096 Jan 20 11:35 SQLT0001.0
drwxr-x--- 2 inst101 inst101 4096 Jan 20 11:35 SQLT0002.0
-rw-r----- 1 inst101 inst101 0 Jan 20 11:35 SQLTMPLK
drwxr-x--- 2 inst101 inst101 4096 Jan 20 11:35 db2event
-rw-r----- 1 inst101 inst101 1024 Jan 20 11:35 db2rhist.asc
-rw-r----- 1 inst101 inst101 1024 Jan 20 11:35 db2rhist.bak

2.11 Identify the table space storage containers.


SYSCATSPACE container: SQLT0000.0
TEMPSPACE1 container: SQLT0001.0
USERSPACE1 container: SQLT0002.0

2.12 How many transaction log files are there?


3
Which directory are they in?
/NODE0000/SQL00001/SQLOGDIR



Module 5

Planning Disk Usage

Planning Disk Usage 02-2003 5-1


© 2002, 2003 International Business Machines Corporation
Objectives

At the end of this module, you will be able to:


Identify the privileges and authorizations needed to allocate disk
Understand the concepts of table spaces and containers
Explain the differences between SMS and DMS storage
approaches
Describe how extents are defined and allocated
Create table spaces with basic parameters
Compute basic disk requirements for storage and understand
default allocations



Privileges and Authorizations

You need one of the following authorities:


SYSADM
SYSCTRL


An operating system user with sufficient permissions (usually root) is required to create the
storage areas in the form of files, directories, or disk devices.
Operating system authentication is needed to allow a user to work within the DB2 UDB
instance. This user must belong to the DB2 UDB administration group, however it is
named. The authorities that allow that user to allocate containers to the instance are
SYSADM and SYSCTRL.
There are no database privileges needed to create new table spaces or add containers to table
spaces. This activity is controlled through authorities as discussed above.



Typical DB2 UDB Storage Diagram

DB2 instance 1

Database 1
table space 0 table space 1 table space 2
SYSCATSPACE TEMPSPACE1 USERSPACE1
table table table

table space 3 table space 4


USERTEMP1 USERSPACE2
table table table table

Database 2
table space 0 table space 1 table space 2
SYSCATSPACE TEMPSPACE1 USERSPACE1
table table table table


Above is a diagram of a typical DB2 UDB instance, including:


Database 1, with three default table spaces and two extra table spaces
Database 2, with the three default table spaces
There is no indication of table-space type.



Table Spaces

Table spaces can be of two types:


SMS system managed space
Created using directory containers
Space is not allocated until it is required
Creating a table space requires less initial work
DMS database managed space
Created using either file containers or device containers
The size of a table space can be increased by adding or extending
containers
A table can be split across multiple table spaces

A well-tuned set of DMS table spaces will outperform SMS table spaces

Table spaces can be of two types: SMS or DMS. An SMS table space is created using directory
containers. The DMS table space is created using either file containers or device containers.
SMS is system managed space (managed by the operating system) and DMS is database
managed space (managed by DB2 UDB).
The selection of which type of table space to use impacts the types of data that can be stored in
each, as well as performance and ease-of-use considerations. These topics will be addressed in a
later module.

Advantages of an SMS Table Space:


Space is not allocated by the system until it is required.
Creating a table space requires less initial work, because you do not have to predefine
the containers.



Advantages of a DMS Table Space:
The size of a table space can be increased by adding or extending containers, using the
ALTER TABLESPACE statement. Existing data can be automatically rebalanced
across the new set of containers to retain optimal I/O efficiency.
A table can be split across multiple table spaces, based on the type of data being stored:
Long field and LOB data
Indexes
Regular table data
You might want to separate your table data for performance reasons, or to increase the amount of
data stored for a table.
For example, you could have a table with 64 GB of regular table data, 64 GB of index data and 2
TB of long data. If you are using 8 KB pages, the table data and the index data can be as much
as 128 GB. If you are using 16 KB pages, it can be as much as 256 GB. If you are using 32 KB
pages, the table data and the index data can be as much as 512 GB.
The location of the data on the disk can be controlled, if this is allowed by the operating
system.
If all table data is in a single table space, a table space can be dropped and redefined
with less overhead than dropping and redefining a table.
In general, a well-tuned set of DMS table spaces will outperform SMS table spaces.
In general, small personal databases are easiest to manage with SMS table spaces. On the other
hand, for large, growing databases you will probably only want to use SMS table spaces for the
temporary table spaces and catalog table space, and separate DMS table spaces, with multiple
containers, for each table. In addition, you will probably want to store long field data and
indexes on their own table spaces.
If you choose to use DMS table spaces with device containers, you must be willing to tune and
administer your environment.



SMS Disk Architecture

In an SMS table space:


Operating system's file system manager allocates and manages
the space where the table is stored
Storage model typically consists of many files, representing
table objects, stored in the file system space
User decides on the location of the files and the file system is
responsible for managing them
Each table has at least one SMS physical file associated with it
A file is extended one page at a time as the object grows
You must specify the number of containers
Extent size can only be specified during table space creation


In an SMS (System Managed Space) table space, the operating system's file system manager
allocates and manages the space where the table is stored. The storage model typically consists
of many files, representing table objects, stored in the file system space. The user decides on the
location of the files (DB2 controls their names) and the file system is responsible for managing
them. By controlling the amount of data written to each file, the database manager distributes the
data evenly across the table space containers. By default, the initial table spaces created at
database creation time are SMS.
Each table has at least one SMS physical file associated with it.
In an SMS table space, a file is extended one page at a time as the object grows. If you need
improved insert performance, you can consider enabling multipage file allocation. This allows
the system to allocate or extend the file by more than one page at a time. For performance
reasons, if you will be storing multidimensional (MDC) tables in your SMS table space, you
should enable multipage file allocation. Run db2empfa to enable multipage file allocation. In a
partitioned database environment, this utility must be run on each database partition. Once
multipage file allocation is enabled, it cannot be disabled.
SMS table spaces are defined using the MANAGED BY SYSTEM option on the CREATE
DATABASE command, or on the CREATE TABLESPACE statement. You must consider two
key factors when you design your SMS table spaces:



Containers for the table space.
You must specify the number of containers that you want to use for your table space. It
is very important to identify all the containers you want to use, because you cannot add
or delete containers after an SMS table space is created. In a partitioned database
environment, when a new partition is added to the database partition group for an SMS
table space, the ALTER TABLESPACE statement can be used to add containers for the
new partition.
Each container used for an SMS table space identifies an absolute or relative directory name.
Each of these directories can be located on a different file system (or physical disk). The
maximum size of the table space can be estimated by:
number of containers * (maximum file system size supported by the operating
system)
This formula assumes that there is a distinct file system mapped to each container, and that each
file system has the maximum amount of space available. In practice, this may not be the case,
and the maximum table space size may be much smaller. There are also SQL limits on the size
of database objects, which may affect the maximum size of a table space.
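The maximum-size estimate above lends itself to a one-line calculation. A minimal sketch (the function name is mine, not a DB2 API):

```python
def sms_max_tablespace_bytes(num_containers: int, max_fs_bytes: int) -> int:
    """Best-case SMS table space size: one distinct file system per
    container, each at the operating system's maximum file system size."""
    return num_containers * max_fs_bytes

# On a system with a 2 GB file-system limit, 32 containers give a
# 64 GB upper bound (the same figures used in the example that follows).
two_gb = 2 * 1024**3
print(sms_max_tablespace_bytes(32, two_gb) // 1024**3)  # 64
```

As the text notes, this is an upper bound; shared file systems or SQL object-size limits can make the real maximum much smaller.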

Note Care must be taken when defining the containers. If there are existing files or
directories on the containers, an error (SQL0298N) is returned.

Extent size for the table space.


The extent size can only be specified when the table space is created. Because it cannot
be changed later, it is important to select an appropriate value for the extent size.
If you do not specify the extent size when creating a table space, the database manager will
create the table space using the default extent size, defined by the DFT_EXTENT_SZ database
configuration parameter. This configuration parameter is initially set based on information
provided when the database is created. If the DFT_EXTENT_SZ parameter is not specified on
the CREATE DATABASE command, the default extent size will be set to 32.
To choose appropriate values for the number of containers and the extent size for the table space,
you must understand:
The limitation that your operating system imposes on the size of a logical file system.
For example, some operating systems have a 2 GB limit. Therefore, if you want a 64 GB
table object, you will need at least 32 containers on this type of system.
When you create the table space, you can specify containers that reside on different file
systems and as a result, increase the amount of data that can be stored in the database.



How the database manager manages the data files and containers associated with a table
space.
The first table data file (SQL00001.DAT) is created in the first container specified for
the table space, and this file is allowed to grow to the extent size. After it reaches this
size, the database manager writes data to SQL00001.DAT in the next container. This
process continues until all of the containers contain SQL00001.DAT files, at which time
the database manager returns to the first container. This process (known as striping)
continues through the container directories until a container becomes full (SQL0289N),
or no more space can be allocated from the operating system (disk full error). Striping is
also used for index (SQLnnnnn.INX), long field (SQLnnnnn.LF), and LOB
(SQLnnnnn.LB and SQLnnnnn.LBA) files.

Note The SMS table space is full as soon as any one of its containers is full. Thus, it is
important to have the same amount of space available to each container.

To help distribute data across the containers more evenly, the database manager determines
which container to use first by taking the table identifier (1 in the above example) modulo the
number of containers. Containers are numbered sequentially, starting at 0.
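The start-container rule can be sketched in Python (an illustration; the function name is invented):

```python
def first_container(table_id: int, num_containers: int) -> int:
    """Container index (0-based) where DB2 places the first file of a
    table in an SMS table space: table identifier modulo the number
    of containers, as described above."""
    return table_id % num_containers

# Table 1 in a 3-container SMS table space starts in container 1;
# table 3 wraps around to container 0.
print(first_container(1, 3))  # 1
print(first_container(3, 3))  # 0
```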



DMS Disk Architecture

In a DMS table space:


The database manager controls the storage space
The storage model consists of a limited number of devices or files whose
space is managed by DB2
The database administrator decides which devices and files to use, and
DB2 manages the space on those devices and files
A DMS table space can be defined as:
A regular table space to store any table data and indexes
A large table space to store long field or LOB data or indexes

The minimum size of a DMS table space is five extents


Three extents are reserved for overhead
At least two extents are used to store any user table data


In a DMS (Database Managed Space) table space, the database manager controls the storage
space. The storage model consists of a limited number of devices or files whose space is
managed by DB2. The database administrator decides which devices and files to use, and DB2
manages the space on those devices and files. The table space is essentially an implementation
of a special purpose file system designed to best meet the needs of the database manager.
A DMS table space containing user defined tables and data can be defined as:
A regular table space to store any table data and optionally index data
A large table space to store long field or LOB data or index data.
When designing your DMS table spaces and containers, you should consider the following:
The database manager uses striping to ensure an even distribution of data across all
containers.
The maximum size of regular table spaces is 64 GB for 4 KB pages; 128 GB for 8 KB
pages; 256 GB for 16 KB pages; and 512 GB for 32 KB pages. The maximum size of
large table spaces is 2 TB.
Unlike SMS table spaces, the containers that make up a DMS table space do not need to
be the same size; however, this is not normally recommended, because it results in
uneven striping across the containers, and sub-optimal performance. If any container is
full, DMS table spaces use available free space from other containers.



Because space is pre-allocated, it must be available before the table space can be
created. When using device containers, the device must also exist with enough space
for the definition of the container. Each device can have only one container defined on
it. To avoid wasted space, the size of the device and the size of the container should be
equivalent. If, for example, the device is allocated with 5,000 pages, and the device
container is defined to allocate 3,000 pages, 2,000 pages on the device will not be
usable.
By default, one extent in every container is reserved for overhead. Only full extents are
used, so for optimal space management, you can use the following formula to
determine an appropriate size to use when allocating a container:
extent_size * (n + 1)
where extent_size is the size of each extent in the table space, and n is the number of
extents that you want to store in the container.
The minimum size of a DMS table space is five extents. Attempting to create a table
space smaller than five extents will result in an error (SQL1422N).
Three extents in the table space are reserved for overhead.
At least two extents are required to store any user table data. (These extents are
required for the regular data for one table, and not for any index, long field or large
object data, which require their own extents.)
Device containers must use logical volumes with a "character special interface", not
physical volumes.
You can use files instead of devices with DMS table spaces. No operational difference
exists between a file and a device; however, a file can be less efficient because of the
run-time overhead associated with the file system.
Files are useful when:
Devices are not directly supported
A device is not available
Maximum performance is not required
You do not want to set up devices.
If your workload involves LOBs or LONG VARCHAR data, you may derive
performance benefits from file system caching. Note that LOBs and LONG
VARCHARs are not buffered by DB2's buffer pool.
Some operating systems allow you to have physical devices greater than 2 GB in size.
You should consider partitioning the physical device into multiple logical devices, so
that no container is larger than the size allowed by the operating system.
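The extent_size * (n + 1) sizing rule above can be wrapped in a small planning helper (a sketch with invented names):

```python
def dms_container_pages(extent_size: int, usable_extents: int) -> int:
    """Pages to allocate for a DMS container so that, after the one
    extent reserved per container for overhead, exactly
    `usable_extents` full extents remain: extent_size * (n + 1)."""
    return extent_size * (usable_extents + 1)

# With the default 32-page extents, a container meant to hold
# 99 extents of data should be allocated 3,200 pages.
print(dms_container_pages(32, 99))  # 3200
```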



Adding and Extending Containers in a DMS Table Space

The ALTER TABLESPACE statement lets you add or extend a


container to an existing table space
Adding a container which is smaller than existing containers results
in an uneven distribution of data, which can cause parallel I/O
operations to perform less efficiently
When new containers are added or existing containers are extended,
a rebalance of the table space data may occur:
Access to the table space is not restricted during rebalancing
The rebalancing operation can have a significant impact on performance
If you need to add more than one container, you should add them at the
same time within a single ALTER TABLESPACE


When a table space is created, its table space map is created and all of the initial containers are
lined up such that they all start in stripe 0. This means that data will be striped evenly across all
of the table space containers until the individual containers fill up.
The ALTER TABLESPACE statement lets you add a container to an existing table space or
extend a container to increase its storage capacity.
Adding a container which is smaller than existing containers results in an uneven distribution of
data. This can cause parallel I/O operations (such as prefetching data) to perform less efficiently
than they otherwise could on containers of equal size.
When new containers are added to a table space or existing containers are extended, a rebalance
of the table space data may occur.

Rebalancing
The process of rebalancing when adding or extending containers involves moving table space
extents from one location to another, and it is done in an attempt to keep data striped within the
table space.



Access to the table space is not restricted during rebalancing; objects can be dropped, created,
populated, and queried as usual. However, the rebalancing operation can have a significant
impact on performance. If you need to add more than one container, and you plan on rebalancing
the containers, you should add them at the same time within a single ALTER TABLESPACE
statement to prevent the database manager from having to rebalance the data more than once.
The table space high-water mark plays a key part in the rebalancing process. The high-water
mark is the page number of the highest allocated page in the table space. For example, a table
space has 1000 pages and an extent size of 10, resulting in 100 extents. If the 42nd extent is the
highest allocated extent in the table space that means that the high-water mark is 42 * 10 = 420
pages. This is not the same as used pages because some of the extents below the high-water
mark may have been freed up such that they are available for reuse.
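The high-water-mark arithmetic in the example works out as follows (illustrative only; the function name is mine):

```python
def high_water_mark_pages(highest_extent: int, extent_size: int) -> int:
    """High-water mark in pages when the Nth extent (counted as in
    the text's example) is the highest allocated extent."""
    return highest_extent * extent_size

# The example above: 42nd extent highest, extent size 10 -> 420 pages,
# out of a 1,000-page table space with 100 extents.
print(high_water_mark_pages(42, 10))  # 420
```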
Before the rebalance starts, a new table space map is built based on the container changes made.
The rebalancer will move extents from their location determined by the current map into the
location determined by the new map. The rebalancer starts at extent 0, moving one extent at a
time until the extent holding the high-water mark has been moved. As each extent is moved, the
current map is altered one piece at a time to look like the new map. At the point that the
rebalance is complete, the current map and new map should look identical up to the stripe
holding the high-water mark. The current map is then made to look completely like the new map
and the rebalancing process is complete. If the location of an extent in the current map is the
same as its location in the new map, then the extent is not moved and no I/O takes place.
When adding a new container, the placement of that container within the new map depends on
its size and the size of the other containers in its stripe set. If the container is large enough such
that it can start at the first stripe in the stripe set and end at (or beyond) the last stripe in the
stripe set, then it will be placed that way. If the container is not large enough to do this, it will be
positioned in the map such that it ends in the last stripe of the stripe set. This is done to minimize
the amount of data that needs to be rebalanced.



Characteristics of SMS and DMS Table Spaces

Characteristics SMS DMS


Can dynamically increase the number of containers in the table space X
Can store index data for a table in a separate table space X
Can store LOB data for a table in a separate table space X
Can store long data for a table in a separate table space X
One table can span several table spaces X*
Space allocated only when needed X
Table space can be placed on different disks X X
Extent size can be changed after creation


Note * Tables cannot be partitioned across table spaces in DB2 UDB Enterprise Edition
server (but can be in Enterprise - Extended Edition server). However, an index on a
table can be placed in a different table space than the table data.

V8: The Enterprise Server Edition (ESE) product is the combination of the Enterprise
Edition server and the Enterprise - Extended Edition server products.



Table-Space Usage

Three table spaces are created by default

Table Space Name Usage Preferred Table Space Type


SYSCATSPACE Used to store the system catalog tables for SMS
the database. These tables contain data
about the database structures: where they
are and how they are used (very similar to
the Oracle system catalog for the database)
TEMPSPACE1 Used by the instance when system SMS
temporary space is needed in the database
(internal sorts and index builds)
USERSPACE1 Used to store user data tables and indexes Must be DMS if you are
partitioning data separate from the
indexes


Three table spaces are created by default when the database is created.
Other table spaces need to be created to hold other structures and data, such as a user temporary
space (for user temporary tables).
Although the three default table spaces are created as SMS type by default, it is better to specify
the table-space type during database creation, allowing you more latitude in data placement and
better performance.



Containers

SMS table space uses only directory containers:


Can use one or more directory containers
Must be specified at table-space create time
Cannot be altered after being created
DMS table spaces can use device containers:
Only on AIX, Windows NT, and Solaris
Can alter a DMS table space to add containers
Similar to raw disk
DMS table spaces can use file containers:
Files are preallocated by DB2 UDB
Can alter a DMS table space to add containers


As stated earlier, a container is a physical storage location, which could be a directory, device,
or file.

Directory Containers
An SMS table space uses only directory containers. Each SMS table space uses one or more
directory containers, but they must be specified at table-space create time. You cannot alter the
SMS table space to include other directories after it has been created.
You may add a directory container to an SMS table space only during a redirected restore.

Device Containers
A DMS table space can use device containers, but only on AIX, Windows NT, and Solaris
operating systems. These are considered raw disk space in Oracle. Using a logical volume
manager allows a physical disk to be partitioned into multiple devices for the DB2 UDB
instance. You can alter a DMS table space to add device containers after creation.



File Containers
DMS table spaces can also use file containers. These files are preallocated by DB2 UDB, an
extent at a time. File containers are used by the DMS table space the same way that device
containers are used. You can alter a DMS table space to add file containers after creation.



Extent Allocation

An extent is a unit of contiguous space within a container:


Sized to be divisible by pages
Page size can be specified in DB2 UDB, just as data block size
can be specified in Oracle starting with Oracle 9i
For 4 KB, 8 KB, 16 KB, and 32 KB


A page is the smallest unit of storage space and I/O for a table space in the instance. Rows of
DB2 UDB data are organized in blocks of data called pages. The page size can be 4 KB, 8 KB,
16 KB, or 32 KB.
An extent is a contiguous unit of space within a container of a table space. In this regard, both
page (or Oracle data block) and extent definitions are similar for both DB2 UDB and Oracle.
Page size can be specified in DB2 UDB, but block size in Oracle (determined by
DB_BLOCK_SIZE) was fixed for the database prior to Oracle 9i. With Oracle 9i up to four
additional nonstandard block sizes are allowed per database, and supported block sizes are 2KB,
4KB, 8KB, 16KB, and 32KB.
In DB2, when creating a table space, the default extent size is used (DB configuration parameter
DFT_EXTENT_SZ), or it can be overridden in the CREATE TABLESPACE statement.



Creating Table Spaces

CREATE TABLESPACE basic parameters:


Table space name
Type of management


The basic parameters needed to explicitly create a table space are the table-space name, how it is
managed, and the container(s). Optionally you can specify the use of the table space, such as
regular or user-temporary. You can also optionally specify page size, extent size, prefetch size,
and buffer pool name. In DB2 UDB sizes can be entered in four different units:
integer      (pages)
integer K    (kilobytes)
integer M    (megabytes)
integer G    (gigabytes)
Example of creating an SMS table space:
CREATE TABLESPACE retail_sales
MANAGED BY SYSTEM USING
('/dbdata/database/container1','/dbdata/database/container2')
PREFETCHSIZE 32

Example of creating a DMS table space:


CREATE TABLESPACE housing
MANAGED BY DATABASE USING
(DEVICE '/dev/rdisk21' 1024, DEVICE '/dev/rdisk38' 1024)
EXTENTSIZE 32 PREFETCHSIZE 32



Example of creating a DMS table space for long objects:
CREATE LONG TABLESPACE pictures
MANAGED BY DATABASE USING
(FILE '/dbdata/database/longobj/pictures21.tbs' 8000)
PREFETCHSIZE 32



Basic Disk Storage Requirements

Compute basic disk requirements for storage, considering:


Type of table space (SMS or DMS)
System catalog tables
Data tables


Basic disk storage requirements must be computed for the table spaces, and it must be
determined how they are allocated and of which type (SMS or DMS).
You must consider storage size for the system catalog tables, data tables, indexes, long and LOB
data types, and log space.

System catalog tables


DB2 system catalog tables (in an SMS table space) initially use about 3.5 megabytes of storage
space. Because the system catalog tracks database objects, the catalog tables can grow
depending on the number and complexity of database objects that are created in the database. It
is recommended that the system catalog be put in an SMS table space because SMS table space
is allocated one page at a time. If the system catalog table space is DMS with an extent size of
32, the table space is initially allocated 20 megabytes of space. This is because the minimum
data table size in DMS is two extents.



Data tables
In a table space for data tables, each data page incurs 76 bytes of overhead for the database
manager. On a 4K-page table space (the default), this leaves 4020 bytes of useful storage
space. A row on a 4K page cannot exceed 4005 bytes and can contain up to 500 columns.
There can be a maximum of 255 rows per page.
Other page sizes available are 8K, 16K, or 32K. A single table or index object can be as large as
512 gigabytes, if using a 32K page size.
Page size   Max row length (bytes)*   Max columns   Max size for regular DMS
4K          4,005                     500           64 gigabytes
8K          8,101                     1012          128 gigabytes
16K         16,293                    1012          256 gigabytes
32K         32,677                    1012          512 gigabytes

* Page size less 91 bytes overhead (including 15 bytes for first slot)
The number of pages for a table can be estimated by:
4K page size:
(number_of_rows / TRUNCATE(4020 / (average_row_size + 10))) * 1.1
8K page size:
(number_of_rows / TRUNCATE(8116 / (average_row_size + 10))) * 1.1
16K page size:
(number_of_rows / TRUNCATE(16308 / (average_row_size + 10))) * 1.1
32K page size:
(number_of_rows / TRUNCATE(32692 / (average_row_size + 10))) * 1.1

In DB2, rows must fit entirely on a single page; rows are not broken across pages using
row chaining (as they can be in Oracle).

For More Information


DB2 Universal Database Administration Guide: Planning Chapter 8: Physical
Database Design



V8: In a table space for data tables, each data page incurs 68 bytes of overhead for the
database manager. On a 4K-page table space (the default), this leaves 4028 bytes of
useful storage space. A row on a 4K page still cannot exceed 4005 bytes and can
contain up to 500 columns. There can be a maximum of 255 rows per page.

Other page sizes available are 8K, 16K, or 32K. A single table or index object can be as large as
512 gigabytes, if using a 32K page size.
Page size   Max row length (bytes)*   Max columns   Max size for regular DMS
4K          still 4,005               500           64 gigabytes
8K          still 8,101               1012          128 gigabytes
16K         still 16,293              1012          256 gigabytes
32K         still 32,677              1012          512 gigabytes

* Page size less 83 bytes overhead (including 15 bytes for first slot); the maximum size
of a row on a 4K page is still 4,005 bytes, even though the formula yields 4,013 bytes.

The number of pages for a table can be estimated by:


4K page size:
(number_of_rows / TRUNCATE(4028 / (average_row_size + 10))) * 1.1
8K page size:
(number_of_rows / TRUNCATE(8124 / (average_row_size + 10))) * 1.1
16K page size:
(number_of_rows / TRUNCATE(16316 / (average_row_size + 10))) * 1.1
32K page size:
(number_of_rows / TRUNCATE(32700 / (average_row_size + 10))) * 1.1
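The V8 formulas above can be computed directly. A minimal sketch, defaulting to the 4 KB figure (4028 usable bytes); the function name is mine:

```python
import math

def estimated_pages(number_of_rows: int, average_row_size: int,
                    usable_page_bytes: int = 4028) -> float:
    """V8 page-count estimate from the formulas above: rows divided
    by rows-per-page (truncated), with a 10% allowance."""
    rows_per_page = math.trunc(usable_page_bytes / (average_row_size + 10))
    return (number_of_rows / rows_per_page) * 1.1

# 100,000 rows averaging 190 bytes on 4 KB pages:
# trunc(4028 / 200) = 20 rows per page -> about 5,500 pages.
print(round(estimated_pages(100_000, 190)))  # 5500
```

For 8K, 16K, or 32K pages, pass 8124, 16316, or 32700 as the third argument.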



Basic Disk Storage Requirements

More basic disk requirements for storage:


Indexes
Long data types
LOB data types
Log space


Indexes
For each unique index, the space needed can be estimated as:
(average_index_key_size + 8) * number_of_rows * 2
average_index_key_size: the byte count of each column in the index key. For VARCHAR
and VARGRAPHIC columns, use an average of the current data size, plus one byte.
Factor of 2: overhead, such as non-leaf pages and free space.
Add one extra byte for the null indicator for every column that allows NULLs.
The maximum amount of temporary space required during index creation can be estimated as:
(average_index_key_size + 8) * number_of_rows * 3.2
Factor of 3.2: index overhead and space required for sorting during index creation.
For nonunique indexes, four bytes are required to store duplicate key entries. The estimates
shown above assume no duplicates.
For SMS, the minimum required space is 12 KB. For DMS, the minimum is the size of an
extent.
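The estimates above translate into two small helpers (names are mine; the factors 2 and 3.2 come from the text):

```python
def index_space_bytes(average_key_size: int, number_of_rows: int) -> int:
    """Estimated space for a unique index:
    (average_index_key_size + 8) * number_of_rows * 2."""
    return (average_key_size + 8) * number_of_rows * 2

def index_build_temp_bytes(average_key_size: int, number_of_rows: int) -> int:
    """Peak temporary space during index creation (factor 3.2)."""
    return int((average_key_size + 8) * number_of_rows * 3.2)

# A unique index with 12-byte keys over 1,000,000 rows:
print(index_space_bytes(12, 1_000_000))       # 40000000
print(index_build_temp_bytes(12, 1_000_000))  # 64000000
```

Remember the adjustments in the text: add a byte per nullable column, and four bytes per duplicate key entry for nonunique indexes.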



Long data types
Long field data types (LONG VARCHAR or LONG VARGRAPHIC) require an allocation map,
stored in 4K pages within the long table space's extents. Long data is stored in segments
whose size is a multiple of 512 bytes (such as 1024 bytes or 2048 bytes) in 32K areas. This
can result in a fair amount of unused space, depending on the size of the long field data.

LOB
Large object (LOB) data is stored in two separate objects (structured differently from other data
types):
LOB data objects
Data is stored in 64-megabyte areas that are broken up into segments whose sizes
are power-of-two multiples of 1024 bytes (1024 bytes, 2048 bytes, 4096 bytes, and
so on), up to 64 megabytes.
You can specify the COMPACT option of the CREATE TABLE and the ALTER TABLE
statements to reduce the amount of disk space used by LOB data. This allows the
LOB data to be split into smaller segments. This uses the minimum amount of
space to the nearest 1K boundary.
LOB allocation objects
Allocation and free space information is stored in 4K allocation pages separate from the actual
data. The overhead for these pages is calculated as one 4K page for every 64 gigabytes, plus one
4K page for every 8 megabytes.
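The allocation-page overhead can be estimated as below. Rounding each partial unit up is my assumption; the text gives only the per-unit rates:

```python
import math

def lob_allocation_pages(lob_bytes: int) -> int:
    """4K allocation pages for LOB data: one page per 64 gigabytes
    plus one page per 8 megabytes (partial units rounded up here,
    which is an assumption, not stated in the text)."""
    per_64gb = math.ceil(lob_bytes / (64 * 1024**3))
    per_8mb = math.ceil(lob_bytes / (8 * 1024**2))
    return per_64gb + per_8mb

# 2 GB of LOB data: 1 (64 GB unit) + 256 (8 MB units) = 257 pages.
print(lob_allocation_pages(2 * 1024**3))  # 257
```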

Log file space


The amount of space (in bytes) required for primary log files is:
(LOGPRIMARY * (LOGFILSIZ + 2) * 4096) + 8192

The amount of space (in bytes) required for primary and secondary log files is:
((LOGPRIMARY + LOGSECOND) * (LOGFILSIZ + 2) * 4096) + 8192

The total active log space cannot exceed 32 gigabytes.
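The log-space formula can be checked with a quick calculation (a sketch; the parameter values in the example are illustrative, not recommended settings):

```python
def log_space_bytes(logprimary: int, logsecond: int, logfilsiz: int) -> int:
    """Disk space for primary plus secondary log files, per the text:
    ((LOGPRIMARY + LOGSECOND) * (LOGFILSIZ + 2) * 4096) + 8192."""
    return ((logprimary + logsecond) * (logfilsiz + 2) * 4096) + 8192

# For example, 3 primary and 2 secondary log files of 1,000 pages each:
print(log_space_bytes(3, 2, 1000))  # 20529152 bytes (about 19.6 MB)
```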



Monitoring Disk Storage Usage

The DB2 list command is used to view database storage information, such as:
Table space type
Total pages
Total pages used
Page size
Extent size

Use system commands to view:


Container information
Log space usage

5-26

You need to use several tools to understand how much disk space you are using for the database
objects. Within DB2 UDB, you can use the DB2 command list tablespaces show detail to view
the type of table space it is, how many pages are being used, how many total pages are allocated,
how many free pages, extent size, page size, and number of containers. However, if you are
using an SMS table space, all of the allocated pages are shown as used, and the free page value
is not applicable. Because this table space is system-managed, the information reported is not as
detailed as it is for the DMS type of table space.
To find more information, you can use the DB2 command list tablespace containers for
<tablespace_number> show detail. This provides the path to the storage container. With this
information, you can view the container using system tools, such as ls -al or du -k -s
<directory_name>.
You need to monitor the log-storage directory to ensure you have enough file system space to
hold the logs.

5-26 Planning Disk Usage


Summary

You should now be able to:


Identify the privileges and authorizations needed to allocate disk
Understand the concepts of table spaces and containers
Explain the differences between SMS and DMS storage
approaches
Describe how extents are defined and allocated
Create table spaces with basic parameters
Compute basic disk requirements for storage and understand
default allocations

5-27

Planning Disk Usage 5-27


5-28 Planning Disk Usage
Exercises

Planning Disk Usage 5-29


Exercise 1
In this exercise you will be quizzed on your table space knowledge:
1.1 A DB2 UDB table space is a logical space allocated for storing data and indexes. What
is its counterpart in Oracle?

1.2 Is a DB2 UDB table space defined and controlled as part of the database, or as part of
the instance?

1.3 How does this differ from how a tablespace is created and controlled in Oracle?

1.4 If an SMS table space is created and it uses a directory as a container for physical data
storage, what controls the structure of the container and I/O to that container?

1.5 If a DMS table space is created and it uses a file as a container for physical data storage,
what controls the structure of the container and I/O to that container?

1.6 If a DMS table space is created and it uses a device as a container for physical data
storage, what controls the structure of the container and I/O to that container?

1.7 When would you use an SMS type of table space?

1.8 If you are using an SMS type of table space, how would you increase the physical storage
space, if needed?

1.9 What would need to be done to manually increase the storage size of an SMS type of
container?

1.10 Is there a performance consideration when comparing SMS and DMS types of table
spaces?

5-30 Planning Disk Usage


1.11 Which type of table space allows you to separate data and indexes?

1.12 What is the difference between DB2 UDB and Oracle when an extent is created? How
is it sized?

Planning Disk Usage 5-31


Exercise 2
In this exercise, you will compute estimated storage requirements based on some simple
arithmetic. Be aware that there are different overhead values for indexes between DB2 UDB and
Oracle. (For further detail on these calculations, see Administration Guide: Planning). You will
also determine which type of table spaces to use, and where to place the objects.
In the following scenario, Table 1 is medium sized and somewhat volatile. Table 2 is very small
and static. Table 3 is very volatile and contains LOB data.
2.1 Given that you would like to migrate an Oracle database with the following sizes, what
equivalent sizes would you use for DB2 UDB (assume 25% growth in a year)?
Object DB2 UDB size
Table 1 data
number of columns: 10
row size: 200 bytes
number of rows: 10000
total raw data: 2MB
Table 1 indexes
number of indexes: 5
total key bytes/row: 51 (avg 10.2 bytes/key)

The estimates made above do not allow for nullable overhead (1 byte). Indexes include
overhead. For DB2 UDB, index size (bytes) = (key + 8) * #_of_rows * 2.

2.2 In the migration scenario below, what types of table spaces would you use for the
objects, and why?
Object Object usage Table space type
Table 1 medium sized volatile table
Table 1 indexes volatile indexes
Table 2 small static table
Table 2 indexes static index
Table 3 large volatile table with LOB
Table 3 indexes volatile indexes

5-32 Planning Disk Usage


Optional Exercise Using the Estimate Size wizard
You can attempt this exercise if you have the Administration Client GUIs available
for use with your student server, and you are using Version 8 of DB2.

In this exercise you will use the Estimate Size wizard for a table, and compare its results to your
calculated values.
2.3 Use a DB2 Command Window to find the number of rows and average row size of the
employee table in the sample database.
Number of rows in the employee table 32_______
Average row size of employee table 77_____

2.4 Now calculate the size (in pages) required for 1000 rows in the employee table.
Use the following formula to calculate the storage size required for the employee table,
assuming a 4K page size.
Number of data pages =
(number_of_rows / TRUNCATE (4028 /(avge_row_size + 10))) * 1.1
Insert your calculated value in the table shown below.

2.5 Open the Control Center, and drill down in the left pane until you select the sample
database. Click on the Tables icon, then right click on the employee table. Select the
Estimate Size wizard.

2.6 What information do you see for Number of rows and Average row length? Why are
you shown that number of rows?

2.7 Currently, the Display size units is MB; change it to Pages. Click on the right-left arrow
combination on the right side of the display, near the column titles. Select the following
columns to display:
Name
Tablespace
Current size
Estimated size
Maximum size

Planning Disk Usage 5-33


2.8 For the wizard to help you, you must update its view of the data, using the Run statistics
button. Follow the steps below:
a. Click on the Run statistics button.
b. On the Columns tab, click the Collect basic statistics on all columns radio
button.
c. Click the Index tab. Click on the Collect statistics for all indexes radio button.
d. Click on the Schedule tab, and click the Run now without saving task history
radio button.
e. Finally, click on the OK button at the bottom of the panel to get the statistics.
f. You will see a message indicating RUNSTATS worked correctly. Click Close.
g. Now click Refresh several times on the wizard panel - eventually the correct
current row count will be displayed.
h. Enter 1000 in the New total number of rows field, and click Refresh. Enter
your new estimated size in the table below.

2.9 Fill in the table below with your results, and be prepared to explain your conclusions.
Your Calculation Estimate size wizard
Avg. row size
Number of rows
Pages used

Number of rows
Pages used

5-34 Planning Disk Usage


Exercise 3
In this exercise, you will draw a graphical representation of an instance and its databases.
3.1 Given the following instance and database descriptions, fill in the diagram on the
following page with the appropriate table space types.
Instance name: Casper Electronics-production
Inventory database (international, high-volume electronics parts supplier)
parts table (columns to track part number, part description, manufacturer code, re-
order point, and quantity in stock)
location table (columns to track warehouse location, area within the warehouse,
and rack and bin locations)
manufacturer table (columns to track ID, name, address, phone, sales rep,
leadtime, and type of products)
Default table spaces
3 DMS user table spaces
1 SMS user temporary table space

Sales database (international sales of electronics parts, associated with the inventory
database)
customer table (columns to track ID, name, address, phone, and credit info)
orders (columns to track ID, items ordered, item quantity, item backorder, PO
number, ship instructions, and how paid)
items table (view that links to the parts table in the inventory database)
Use the default table space for the system catalog
Four other table spaces must be DMS type.
Containers for all DMS table spaces are devices.

Planning Disk Usage 5-35


Casper Electronics-production

Inventory DB Sales DB

table space 0 table space 1 table space 0 table space 1


(SMS) (SMS) (SMS) (___)
SYSCATSPACE TEMPSPACE1 SYSCATSPACE TEMPSPACE1
system catalog system temporary system catalog system temporary
tables area tables area
/opt/inventory /opt/inventory /opt/sales /opt/sales

table space 2 table space 3 table space 2 table space 3


(SMS) (DMS) (___) (___)
USERSPACE1 TBLSPACE3 USERSPACE1
user data and parts table user data and orders table
indexes indexes
/opt/inventory rdisk45 rdisk22 rdisk23

table space 4 table space 5 table space 4 table space 5


(DMS) (DMS) (___) (___)
TBLSPACE4 TBLSPACE5
location table all indexes and items table all indexes and
manufacturer table customer table
rdisk46 rdisk47 rdisk24 rdisk25

table space 6
(SMS)
USERTEMP1
user temporary
area
/opt/inventory

5-36 Planning Disk Usage


Exercise 4
In this exercise, you will create two new bufferpools of different page sizes. You will use these
bufferpools in the next exercise.
4.1 Open a DB2 Command Window. Use the proper DB2 command-line commands to
create two new bufferpools in the sample database, specified as:
Bufferpool 1 name: 8K-BP
Page size: 8K
Number of pages: 80

Bufferpool 2 name: 16K-BP


Page size: 16K
Number of pages: 40

Warning!
Do not use extended storage.

Planning Disk Usage 5-37


Exercise 5
You will experiment with types of table spaces, bufferpool usage, and disk space usage in this
exercise. The steps include:
1. You will create a new table space of type SMS.
2. You will create a new table space of type DMS.
3. You will associate the previously created bufferpools to these table spaces.
4. You will create a table in each new table space, and check its disk usage as you add new
rows to each table.

5.1 Open a DB2 Command Window, and change to your HOME directory. Create two
directories here, named SMS-space and DMS-space.

5.2 Change directory to your DMS-space directory, and create a file named tbspc1. This file
will be used as a container for a table.

5.3 Using the DB2 Command Window, create a table space named SMS-TBSPC, using the
following criteria:
regular tablespace
pagesize 8K
SMS, using your SMS-space directory as a container
bufferpool 8K-BP

5.4 Now create a table space named DMS-TBSPC, using the following criteria:
regular tablespace
pagesize 16K
DMS, using your DMS-space/tbspc1 as a container
size of 20 MB (1280 16 K pages)
bufferpool 16K-BP

5-38 Planning Disk Usage


5.5 In the DB2 Command Window, get the detailed information about the table spaces you
just created. Fill in the table below with the data you received. Also find the size of the
containers using the ls -l command.
SMS-space DMS-space
Total pages
Extent size
Used pages
Page size
Number of containers
Number of extents
Size of container (ls -l) 0 bytes (the DAT file) 20,971,520 bytes

5.6 Create a new table in each table space with these specifics:
SMS-space DMS-space
Table name sms1 dms1
Column name col1 col1
Column data type varchar (800) varchar (800)

5.7 Get a new listing of the table space information and fill in the following table with the
new dimensions:
SMS-space DMS-space
Total pages
Used pages
Number of containers
Number of extents used
Size of container (ls -l) 8192 bytes (the DAT file) 20,971,520 bytes

Planning Disk Usage 5-39


5.8 Create a file named load.data in your home directory, and make it executable.
cd
touch load.data
chmod 770 load.data
Edit the file to include the following:
db2 connect to sample
x=1
until [ $x = 501 ]
do
db2 "insert into sms1 values ('ABCDEFGHIJKLMNOPQRSTUVWXYZ')"
x=`expr $x + 1`
done
Note: the marks around expr $x + 1 are back-quote (`) marks, and there must be no
spaces around the = signs in the shell assignments.
Execute the file (this will take some time to run).
./load.data
Edit the file, and change the table name to dms1, then execute it again.
This will put data into the two new tables.

5.9 Insert data into both tables, get a new table space list, and enter the storage size in the
following table:
SMS-space DMS-space
Total pages
Used pages
Number of containers
Number of extents used
Size of container (ls -l) 81920 bytes (the DAT file) 20,971,520 bytes

5.10 Explain your conclusions.

5-40 Planning Disk Usage


Solutions

Planning Disk Usage 5-41


Solution 1
In this exercise you will be quizzed on your table space knowledge:
1.1 A DB2 UDB table space is a logical space allocated for storing data and indexes. What
is its counterpart in Oracle?
A logical space allocated for storing data and indexes in Oracle is also known as a
tablespace (generally written as a single word). Though named the same, the
implementation is totally different.

1.2 Is a DB2 UDB table space defined and controlled as part of the database, or as part of
the instance?
In DB2 UDB the table space is created for a database, and its usage is controlled by
that database.

1.3 How does this differ from how a tablespace is created and controlled in Oracle?
A tablespace in Oracle is created using the CREATE TABLESPACE statement in
SQL*Plus, or by using the GUI interface of the Oracle Enterprise Manager (OEM)
of Oracle 9i. In Oracle a tablespace is a logical storage container. The hierarchy is:
database is made up of tablespaces; tablespaces are made up of one or more data
files; tablespaces contain segments; segments are made up of one or more extents;
extents are contiguous sets of blocks.

1.4 If an SMS table space is created and it uses a directory as a container for physical data
storage, what controls the structure of the container and I/O to that container?
The operating system controls the structure and use of a directory-type container
and the scheduling and buffering of data I/O as it is passed to and from the
container.

1.5 If a DMS table space is created and it uses a file as a container for physical data storage,
what controls the structure of the container and I/O to that container?
The database manager controls the structure and use of a file-type container, while
the operating system performs I/O of data to and from the container.

5-42 Planning Disk Usage


1.6 If a DMS table space is created and it uses a device as a container for physical data
storage, what controls the structure of the container and I/O to that container?
The database manager controls the structure and use of a device-type container,
while the operating system performs I/O of data to and from the container.

1.7 When would you use an SMS type of table space?


The default type of table space created for a database is SMS. This type of table
space requires minimal administration and can be used where there is minimal
data and index growth over time.

1.8 If you are using an SMS type of table space, how would you increase the physical storage
space, if needed?
Generally, the operating system handles the allocated storage space for an SMS type
of container. The controlling criterion is whether there is enough space to
grow in the file system where the container resides. Adding more file space to that
file system would allow the container to grow.

1.9 What would need to be done to manually increase the storage size of an SMS type of
container?
You need to back up the database in question and use a redirected restore to a new
file system with more space. You cannot simply add more containers to an SMS table
space.

1.10 Is there a performance consideration when comparing SMS and DMS types of table
spaces?
Generally, an SMS table space using a directory container performs more slowly than a
DMS table space because of the operating-system overhead involved in managing the
containers.

1.11 Which type of table space allows you to separate data and indexes?
You can store the same kind of data in both types of table spaces. However, use
DMS table spaces to separate the index data, table data, and LOB data.

Planning Disk Usage 5-43


1.12 What is the difference between DB2 UDB and Oracle when an extent is created? How
is it sized?
In DB2 UDB, an extent is created in the table space when the table space is defined
for a database. Its size and its page size are defined at that time.
In Oracle, an extent is a contiguous set of blocks on disk. Prior to Oracle 8.1.5, there
was only one method to manage the allocation of extents within a tablespace. This
method was called a dictionary-managed tablespace. In Oracle 8.1.5 and later, the
concept of a locally managed tablespace was introduced, removing the need to use
the data dictionary to manage space within the tablespace; this approach
removes the serialization of space allocation and thus is considerably faster.

5-44 Planning Disk Usage


Solution 2
In this exercise, you will compute estimated storage requirements based on some simple
arithmetic. Be aware that there are different overhead values for indexes between DB2 UDB and
Oracle. (For further detail on these calculations, see Administration Guide: Planning). You will
also determine which type of table spaces to use, and where to place the objects.
In the following scenario, Table 1 is medium sized and somewhat volatile. Table 2 is very small
and static. Table 3 is very volatile and contains LOB data.
2.1 Given that you would like to migrate an Oracle database with the following sizes, what
equivalent sizes would you use for DB2 UDB (assume 25% growth in a year)?
Calculation for Table 1, using a 4K page size: Number of data pages =
(number_of_rows / TRUNCATE (4020 /(avge_row_size + 10))) * 1.1
a. Average_row_size + 10 = 210 bytes
b. Round down (4,020 / 210) = 19 rows/page
c. 10,000 / 19 = (526.32 * 1.1) + 25% = 724 pages
d. 4,096 * 724 = 2,965,504 bytes
Number of index pages = ((key + 8) * #_of_rows * 2) / 4,020
a. key_size + (8 * 5 indexes) = 91 bytes
b. 91 * 10000 = 910,000 * 2 = 1,820,000 + 25% = 2,275,000 bytes
c. 2,275,000 / 4020 = 566 pages = 2,318,335 bytes
The 8 bytes is a per-row overhead. This includes the record identifier (RID) which
is a three-byte page number followed by a one-byte slot number. There are 5
indexes, so there are 40 bytes being used per row for row overhead.
Object DB2 UDB size
Table 1 data 2.97MB (including overhead)
number of columns: 10
row size: 200 bytes
number of rows: 10000
total raw data: 2MB
Table 1 indexes 2.31MB (including overhead)
number of indexes: 5
total key bytes/row: 51 (avg 10.2 bytes/key)

The estimates made above do not allow for nullable overhead (1 byte). Indexes include
overhead. For DB2 UDB, index size (bytes) = (key + 8) * #_of_rows * 2.
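The arithmetic in the solution above can be scripted as a check (a sketch; the constants follow the worked numbers shown):

```shell
# Estimate for Table 1: 10,000 rows of 200 bytes, 5 indexes
# totaling 51 key bytes per row; +10% page overhead, +25% growth
awk 'BEGIN {
  rows = 10000; row_size = 200; key_bytes = 51; n_idx = 5

  rows_per_page = int(4020 / (row_size + 10))          # 19 rows per 4K page
  data_pages = (rows / rows_per_page) * 1.1 * 1.25
  printf "data pages:  %.0f\n", data_pages

  idx_bytes = (key_bytes + 8 * n_idx) * rows * 2 * 1.25
  printf "index pages: %.0f\n", idx_bytes / 4020
}'
```

This reproduces the 724 data pages and 566 index pages computed by hand above.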

Planning Disk Usage 5-45


2.2 In the migration scenario below, what types of table spaces would you use for the
objects, and why?
Object Object usage Table space type
Table 1 medium sized volatile table DMS
Table 1 indexes volatile indexes DMS
Table 2 small static table SMS
Table 2 indexes static index SMS
Table 3 large volatile table with LOB DMS
Table 3 indexes volatile indexes DMS
Table 1 should be placed in a DMS table space and its indexes in a different table
space.
Table 2 can be placed in a SMS table space, along with its index, because this table
appears to be rather static in nature.
Table 3 should be placed in a DMS table space because:
It is not a small table, and it will grow (not static)
The indexes should also be placed in a DMS tablespace, preferably different
than the one containing the data

5-46 Planning Disk Usage


Optional Exercise Using the Estimate Size wizard
You can attempt this exercise if you have the Administration Client GUIs available
for use with your student server, and you are using Version 8 of DB2.

In this exercise you will use the Estimate Size wizard for a table, and compare its results to your
calculated values.
2.3 Use a DB2 Command Window to find the number of rows and average row size of the
employee table in the sample database.
CONNECT TO sample
SELECT COUNT(*) FROM employee
DESCRIBE TABLE employee
TERMINATE
Number of rows in the employee table 32_______
Average row size of employee table 77_____

2.4 Now calculate the size (in pages) required for 1000 rows in the employee table.
Use the following formula to calculate the storage size required for the employee table,
assuming a 4K page size.
Number of data pages =
(number_of_rows / TRUNCATE (4028 /(avge_row_size + 10))) * 1.1
Number of data pages = (number_of_rows / TRUNCATE (4028 /( 77 + 10))) *1.1
Number of data pages = (number_of_rows / 46) *1.1
Number of data pages = (1000 / 46) *1.1
Number of data pages = 23.9
Insert your calculated value in the table shown below.
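As a sketch, the same calculation can be scripted with awk (values taken from the worked steps above):

```shell
# 1000 rows of 77 bytes average, 4K page size
awk 'BEGIN {
  rows = 1000; avg_row_size = 77
  rows_per_page = int(4028 / (avg_row_size + 10))   # 46 rows per 4K page
  printf "data pages: %.1f\n", (rows / rows_per_page) * 1.1
}'
```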

2.5 Open the Control Center, and drill down in the left pane until you select the sample
database. Click on the Tables icon, then right click on the employee table. Select the
Estimate Size wizard.

Planning Disk Usage 5-47


2.6 What information do you see for Number of rows and Average row length? Why are
you shown that number of rows?
Number of rows is Not available, and Average row length is 83 bytes. The statistics
for this table are not up to date, therefore the wizard cannot provide accurate
numbers.

2.7 Currently, the Display size units is MB; change it to Pages. Click on the right-left arrow
combination on the right side of the display, near the column titles. Select the following
columns to display:
Name
Tablespace
Current size
Estimated size
Maximum size
Note that 1 page is shown, even though 0 rows are displayed.

2.8 For the wizard to help you, you must update its view of the data, using the Run statistics
button. Follow the steps below:
a. Click on the Run statistics button.
b. On the Columns tab, click the Collect basic statistics on all columns radio
button.
c. Click the Index tab. Click on the Collect statistics for all indexes radio button.
d. Click on the Schedule tab, and click the Run now without saving task history
radio button.
e. Finally, click on the OK button at the bottom of the panel to get the statistics.
f. You will see a message indicating RUNSTATS worked correctly. Click Close.
g. Now click Refresh several times on the wizard panel - eventually the correct
current row count will be displayed.
h. Enter 1000 in the New total number of rows field, and click Refresh. Enter
your new estimated size in the table below.

5-48 Planning Disk Usage


2.9 Fill in the table below with your results, and be prepared to explain your conclusions.
Your Calculation Estimate size wizard
Avg. row size 77 77
Number of rows 32 32
Pages used N/A 2

Number of rows 1000 1000


Pages used 23.9 23

Planning Disk Usage 5-49


Solution 3
In this exercise, you will draw a graphical representation of an instance and its databases.
3.1 Given the following instance and database descriptions, fill in the diagram on the
following page with the appropriate table-space types.
Instance name: Casper Electronics-production
Inventory database (international, high volume electronics parts supplier)
parts table (columns to track part number, part description, manufacturer code, re-
order point, and quantity in stock)
location table (columns to track warehouse location, area within the warehouse,
and rack and bin locations)
manufacturer table (columns to track ID, name, address, phone, sales rep, lead
time, and type of products)
Default table spaces
3 DMS user table spaces
1 SMS user temporary table space

Sales database (international sales of electronics parts, associated with the inventory
database)
customer table (columns to track ID, name, address, phone, and credit info)
orders (columns to track ID, items ordered, item quantity, item backorder, PO
number, ship instructions, and how paid)
items table (view that links to the parts table in the inventory database)
Use the default table space for the system catalog
Four other table spaces must be DMS type.
Containers for all DMS table spaces are devices.

5-50 Planning Disk Usage


Casper Electronics-production

Inventory DB Sales DB

table space 0 table space 1 table space 0 table space 1


(SMS) (SMS) (SMS) (SMS)
SYSCATSPACE TEMPSPACE1 SYSCATSPACE TEMPSPACE1
system catalog system temporary system catalog system temporary
tables area tables area
/opt/inventory /opt/inventory /opt/sales /opt/sales

table space 2 table space 3 table space 2 table space 3


(SMS) (DMS) (DMS) (DMS)
USERSPACE1 TBLSPACE3 USERSPACE1 TBLSPACE3
user data and parts table user data and orders table
indexes indexes
/opt/inventory rdisk45 rdisk22 rdisk23

table space 4 table space 5 table space 4 table space 5


(DMS) (DMS) (DMS) (DMS)
TBLSPACE4 TBLSPACE5 TBLSPACE4 TBLSPACE5
location table all indexes and items table all indexes and
manufacturer table customer table
rdisk46 rdisk47 rdisk24 rdisk25

table space 6
(SMS)
USERTEMP1
user temporary
area
/opt/inventory

Planning Disk Usage 5-51


Solution 4
In this exercise, you will create two new bufferpools of different page sizes. You will use these
bufferpools in the next exercise.
4.1 Open a DB2 Command Window. Use the proper DB2 command-line commands to
create two new bufferpools in the sample database, specified as:
Bufferpool 1 name: 8K-BP
Page size: 8K
Number of pages: 80

Bufferpool 2 name: 16K-BP


Page size: 16K
Number of pages: 40

Warning!
Do not use extended storage.

CONNECT TO sample
CREATE BUFFERPOOL "8K-BP" IMMEDIATE SIZE 80 PAGESIZE 8K
CREATE BUFFERPOOL "16K-BP" IMMEDIATE SIZE 40 PAGESIZE 16K
These two bufferpools will be used in the next exercise.

5-52 Planning Disk Usage


Solution 5
You will experiment with types of table spaces, bufferpool usage, and disk space usage in this
exercise. The steps include:
1. You will create a new table space of type SMS.
2. You will create a new table space of type DMS.
3. You will associate the previously created bufferpools to these table spaces.
4. You will create a table in each new table space, and check its disk usage as you add new
rows to each table.

5.1 Open a DB2 Command Window, and change to your HOME directory. Create two
directories here, named SMS-space and DMS-space.
cd
mkdir SMS-space
mkdir DMS-space

5.2 Change directory to your DMS-space directory, and create a file named tbspc1. This file
will be used as a container for a table.
cd DMS-space
touch tbspc1

5.3 Using the DB2 Command Window, create a table space named SMS-TBSPC, using the
following criteria:
regular tablespace
pagesize 8K
SMS, using your SMS-space directory as a container
bufferpool 8K-BP
CONNECT TO sample
CREATE REGULAR TABLESPACE "SMS-TBSPC" PAGESIZE 8 K
MANAGED BY SYSTEM USING ('$HOME/SMS-space')
BUFFERPOOL "8K-BP"

Planning Disk Usage 5-53


5.4 Now create a table space named DMS-TBSPC, using the following criteria:
regular tablespace
pagesize 16K
DMS, using your DMS-space/tbspc1 as a container
size of 20 MB (1280 16 K pages)
bufferpool 16K-BP
CONNECT TO sample
CREATE REGULAR TABLESPACE "DMS-TBSPC" PAGESIZE 16 K
MANAGED BY DATABASE USING
(FILE '$HOME/DMS-space/tbspc1' 1280)
BUFFERPOOL "16K-BP"

5.5 In the DB2 Command Window, get the detailed information about the table spaces you
just created. Fill in the table with the data you received. Also find the size of the
containers using the ls -l command.
LIST TABLESPACES SHOW DETAIL
SMS-space DMS-space
Total pages 1 1280
Extent size 32 pages 32 pages
Used pages 1 96
Page size 8192 bytes 16384 bytes
Number of containers 1 1
Number of extents 1 3
Size of container (ls -l) 0 bytes (the DAT file) 20,971,520 bytes

5.6 Create a new table in each table space with these specifics:
SMS-space DMS-space
Table name sms1 dms1
Column name col1 col1
Column data type varchar (800) varchar (800)
CREATE TABLE sms1 (col1 VARCHAR (800)) IN "SMS-TBSPC"
CREATE TABLE dms1 (col1 VARCHAR (800)) IN "DMS-TBSPC"

5-54 Planning Disk Usage


5.7 Get a new listing of the table space information and fill in the following table with the
new dimensions:
LIST TABLESPACES SHOW DETAIL
SMS-space DMS-space
Total pages 2 1280
Used pages 2 160
Number of containers 1 1
Number of extents used 1 5
Size of container (ls -l) 8192 bytes (the DAT file) 20,971,520 bytes

5.8 Create a file named load.data in your home directory, and make it executable.
cd
touch load.data
chmod 770 load.data
Edit the file to include the following:
db2 connect to sample
x=1
until [ $x = 501 ]
do
db2 "insert into sms1 values ('ABCDEFGHIJKLMNOPQRSTUVWXYZ')"
x=`expr $x + 1`
done
Note: the marks around expr $x + 1 are back-quote (`) marks, and there must be no
spaces around the = signs in the shell assignments.
Execute the file (this will take some time to run).
./load.data
Edit the file, and change the table name to dms1, then execute it again.
This will put data into the two new tables.

Planning Disk Usage 5-55


5.9 Insert data into both tables, and enter the storage size in the following table:
LIST TABLESPACES SHOW DETAIL
SMS-space DMS-space
Total pages 11 1280
Used pages 11 160
Number of containers 1 1
Number of extents used 1 5
Size of container (ls -l) 81920 bytes (the DAT file) 20,971,520 bytes

5.10 Explain your conclusions.


The SMS container file grew as needed to contain the data, still in one extent. The
operating system's file system is responsible for its size and growth.
The DMS container file stayed the same size, because it was pre-allocated to a
specific size, and you have not inserted enough data to fill it. All of your data so far
has fit into 5 extents, managed by DB2.

5-56 Planning Disk Usage


Module 6

Data Type Mapping

Data Type Mapping 02-2003 6-1


© 2002, 2003 International Business Machines Corporation
Objectives

At the end of this module, you will be able to:


Determine the differences between Oracle and DB2 UDB data
types
Use the proper DB2 UDB data types
Explore other data type mapping possibilities
Understand the use of NULL data

6-2

6-2 Data Type Mapping


DB2 UDB Data Types

DB2 UDB data types can be placed into these three categories:
Numeric
String (including LOB)
Date-time

6-3

Numeric data types are used to store various numerical values, such as decimal data, scientific
notation data, and integer data. The size of the value as well as the precision required help
determine which numeric data type to use.
String data types are used to store alphanumeric, character-type data. The particular data type to
choose for a column depends on the size of the character-type data and if that size is expected to
remain the same for all rows in the table.
There is a subcategory of string data types for storing large object (LOB) data, such as whole
documents, pictures, or sound information.
Date-time data types provide a means of storing date, time, and complete timestamp
information. These data types allow for various types of formatting.

Data Type Mapping 6-3


DB2 Numeric Data Types

DB2 UDB numeric data types include:


Integer
SMALLINT
INTEGER
BIGINT
Floating Point
REAL
FLOAT(n)
DOUBLE / DOUBLE PRECISION
Decimal / Numeric
DECIMAL(n)

6-4

Integer
Smallint is used for values ranging from -32,768 to 32,767 and provides precision of up to 5
digits (left of decimal). Smallint requires 2 bytes for storage.
Integer is used for values ranging from -2,147,483,648 to 2,147,483,647. A precision of 10
digits (left of decimal) is possible. Integer requires 4 bytes for storage.
Bigint is used for 64-bit integers, with values ranging from -9,223,372,036,854,775,808 to
9,223,372,036,854,775,807. Bigint requires 8 bytes for storage.

Floating Point
Real data type is used when floating point data (scientific notation) is required. It is single-
precision with a length between 1 and 24. This data type requires 4 bytes for storage.
Double / Double Precision data type is used when floating point data (scientific notation) is
required. It is double precision and has a length between 25 and 53. This data type requires 8
bytes for storage.

6-4 Data Type Mapping


FLOAT(n) is effectively a synonym for REAL, if 0 < n < 25. FLOAT(n) is a synonym for DOUBLE,
if 24 < n < 54.

Note Float can be used for both real and double data types, depending on the size of the
value. See Appendix B for further details.

Decimal
Decimal/Numeric data type is used when you need from 1 to 31 (default 5) digits of precision. This
data type requires (p/2) + 1 bytes for storage (where p is the number of digits of
precision).
With DECIMAL(p,s) and NUMERIC(p,s), the two components are precision and scale. If the
scale is zero, it can be omitted, e.g., DECIMAL(p) but one of the INTEGER values may be
preferred, because of their speed and lower storage requirements.
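For example (the table and column names are illustrative), a DECIMAL(10,2) column stores values with ten digits of precision and a scale of two, using (10/2) + 1 = 6 bytes per value:

```sql
CREATE TABLE price_list (
    item_num INT NOT NULL,
    price    DECIMAL(10,2)  -- (10/2) + 1 = 6 bytes of storage
);
```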



Mapping the Oracle Number Data Type

The Oracle number data type should be mapped, according to the
business content and actual values held, to:
One of the three integer types, by preference
Decimal
Floating Point


Different Mappings
The Oracle data type NUMBER can be mapped to many DB2 types.
The type of mapping depends on whether the NUMBER is used to store:
an integer (NUMBER(p), or NUMBER(p,0))
a number with a fixed decimal point (NUMBER(p,s), s > 0)
a floating-point number (NUMBER).

An Oracle INTEGER is a synonym for NUMBER(38). The DB2 UDB INTEGER is a true operating
system integer.
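As a sketch of these three NUMBER mappings (the table and column names are illustrative):

```sql
-- Oracle source (illustrative):
--   CREATE TABLE account (
--     acct_id NUMBER(9),     -- integer usage
--     balance NUMBER(11,2),  -- fixed decimal point
--     rate    NUMBER);       -- floating point
-- A possible DB2 UDB equivalent:
CREATE TABLE account (
    acct_id INTEGER,        -- NUMBER(p), 5 <= p <= 9
    balance DECIMAL(11,2),  -- NUMBER(p,s), s > 0
    rate    DOUBLE          -- NUMBER with no precision
);
```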



Storage Considerations
Each DB2 type requires a different amount of space: SMALLINT uses 2 bytes, INTEGER uses 4
bytes, and BIGINT uses 8 bytes.
The space usage for Oracle type NUMBER depends on the parameter used in the declaration.
NUMBER, with the default precision of 38 significant digits, uses 20 bytes of storage. Mapping
NUMBER to SMALLINT, for example, can save 18 bytes per column.

Note that in DB2, unless you specify NOT NULL, another byte is required for the null indicator.
See "Handling Nulls" on page 6-17.



DB2 String Data Types

DB2 UDB string data types include:


CHAR
VARCHAR
LONG VARCHAR


Char data type is used when a fixed-length character string is required. It can store a length of
from 1 to 254 characters and requires the specified number of bytes (one byte for each character)
for storage.
Varchar data type is used when a variable-length character string is required. It can store a string
with a maximum length of 32,672 bytes. There is no optional minimum-length capability. You
must set the maximum size when creating a column with this data type. This is the only data
type that allows altering (using the ALTER TABLE command) after creation. You can alter the
maximum-length value.
Long varchar data type is used when a variable-length character string is required but the
varchar data type is not long enough. The maximum length of a long varchar column is 32,700
bytes.



Oracle Character Data Types

The Oracle character data types should be mapped according to
the business content, actual values, and need to:
CHAR for fixed length and small amounts of data
VARCHAR for truly variable length and up to 32,672
characters


Oracle CHAR has a maximum length of 255 bytes in Oracle 7, and 2000 bytes in Oracle 8 & 9.
Oracle provides VARCHAR(n) to store variable-length strings up to n characters, as well as
VARCHAR2(n) for the same purpose (the maximum is 2000 characters in Oracle 7, and 4000
characters in Oracle 8 & 9).
The storage for VARCHAR2 is truly varying-length, whereas VARCHAR uses a fixed array of
characters with an end marker. VARCHAR in Oracle is deprecated (subject to discontinuance in
the future), and Oracle DBAs generally use VARCHAR2.
Oracle applications often use VARCHAR2 for very small character strings. Generally, it is better
to port these fields to the fixed-length DB2 data type CHAR(n), as it is more efficient and takes
less storage than VARCHAR. In DB2 UDB on Unix, VARCHAR(n) uses n+4 bytes of storage and
CHAR(n) uses only n bytes of storage. (Note: in DB2 for OS/390, VARCHAR(n) uses n+2 bytes.)

CHAR should always be used for columns of 10 bytes or fewer, and probably should be used for
longer columns that are relatively full of non-blank data.
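A sketch of this guidance (the table, column names, and sizes are illustrative):

```sql
-- Oracle: state CHAR(2), lname VARCHAR2(15), comments VARCHAR2(500)
-- DB2 UDB: keep small, well-filled columns fixed length
CREATE TABLE customer_info (
    state    CHAR(2),      -- 10 bytes or fewer: use CHAR
    lname    CHAR(15),     -- short and mostly full: CHAR avoids the
                           -- 4-byte VARCHAR length overhead
    comments VARCHAR(500)  -- truly variable length: use VARCHAR
);
```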



Large Object String Data Types

DB2 UDB large object (LOB) string data types include:


CLOB
GRAPHIC
VARGRAPHIC
LONG VARGRAPHIC
DBCLOB
BLOB


Clob (character large object) data type handles varying-length character data and can be SBCS
(single-byte character set) or MBCS (multibyte character set). This data type can store up to 2
gigabytes of character data.
Graphic stores double-byte character strings (characters that use two bytes of storage). This data
type uses 2 bytes for each character, is fixed length, and has a maximum length of 127
characters.
Vargraphic stores double-byte character strings (characters that use two bytes of storage). This
data type uses 2 bytes for each character, is variable length, and has a maximum size of 16,336
characters.
Long vargraphic stores double-byte character strings (characters that use two bytes of storage).
This data type uses 2 bytes for each character, is variable length, and has a maximum size of
16,350 characters.
Dbclob data type is a double-byte character large object used for columns with large amounts of
double-byte data (>32K) of varying length. This data type uses 2 bytes of storage for each
character.
Blob data type stores binary large object data strings. The maximum size of a blob column is two
gigabytes.



DB2 UDB Date-time Data Types

DB2 UDB date-time data types include:


Date
Time
Timestamp


Date data type is used to store the date in a variety of formats. This data type requires four bytes
for storage (packed) with a string length of ten bytes. The default format is MM/DD/YYYY, but
this can vary depending on the country code. The Oracle date data type maps directly to this.
Time data type is used to store the time in a variety of formats. This data type requires three
bytes for storage (packed) with a string length of 8 bytes. The default format is HH.MM.SS, but
this can vary depending on the country code.
Timestamp data type is used to store the date-time combination. This data type requires 10 bytes
for storage (packed) with a string length of 26 bytes. The only format for timestamp data is
YYYY-MM-DD-HH.MM.SS.NNNNNN.

Note DB2 also has a noncategorized data type of datalink that is used for external file
linking.

DB2 UDB does not support the interval data type.
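The three date-time types can be sketched together (the table and column names are illustrative):

```sql
CREATE TABLE call_log (
    call_date  DATE,      -- 4 bytes packed, string length 10
    call_time  TIME,      -- 3 bytes packed, string length 8
    call_dtime TIMESTAMP  -- 10 bytes packed, string length 26
);
```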



Other Oracle Data Types

Oracle has other data types that need special consideration and
treatment:
LONG
RAW(n)
NCLOB
BFILE
TIMESTAMP(f) / TIMESTAMP(f) WITH {LOCAL} TIMEZONE
INTERVAL YEAR(y) TO MONTH(m) / INTERVAL DAY(d) TO SECOND(f)
User Defined Types (UDT)
ROWID / UROWID


Oracle LONG Data Type


The Oracle LONG data type is used to store variable-length character data.
Columns defined as LONG can store variable-length character data containing up to 2GB of
information. An Oracle table can have multiple LOB (BLOB, CLOB, NCLOB, BFILE) columns but
only one LONG column.
Oracle itself recommends that one use LOB data types rather than LONG data types.
LONG RAW and LONG VARCHAR are also considered to be obsolete. All the long data types are
stored in-line in the data block.



Oracle RAW Data Type
The Oracle RAW(n) data type is a variable-length data type like the VARCHAR2 character data
type. It is intended for binary data or byte strings, such as graphics, sound, documents, or arrays
of binary data. RAW has a maximum size of 255 bytes in Oracle 7, and 2000 bytes in Oracle 8
& 9. LONG RAW refers to a maximum size of 2GB, but is deprecated as of Oracle 9.
The corresponding IBM DB2 data type is CHAR(n) FOR BIT DATA, VARCHAR(n) FOR BIT
DATA, or BLOB(n).
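A sketch of the RAW mapping choices (the table and column names are illustrative):

```sql
-- Oracle: CREATE TABLE badge (badge_id NUMBER(9), photo_sig RAW(200));
-- A possible DB2 UDB equivalent, chosen by the size of the binary data:
CREATE TABLE badge (
    badge_id  INTEGER,
    photo_sig VARCHAR(200) FOR BIT DATA  -- n <= 32672; use CHAR(n) FOR BIT
                                         -- DATA if n <= 254, or BLOB(n)
                                         -- for binary data up to 2 GB
);
```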

Oracle NCLOB Data Type


The Oracle NCLOB data type is used to store up to 4GB of double-byte character data.
The corresponding IBM DB2 data type is DBCLOB(n/2).

Oracle BFILE Data Type


The BFILE data type is used as a pointer to an external (operating system) file.
IBM DB2 DATALINK data type provides similar functionality to the Oracle BFILE data type.

Oracle DATE Data Type


The Oracle DATE data type maps directly to the IBM DB2 date data type.

Oracle TIMESTAMP and INTERVAL Data Types


These data types are new to Oracle 9i.
Acceptable values for fractions-of-a-second precision (f) are 0 through 9 (with default 6).
Acceptable values for years precision (y) are 0 through 9 (with default 2). Acceptable values for
days precision (d) are 0 through 9 (with default 2).
The DB2 UDB TIMESTAMP has only one precision. There is no equivalent of Oracle INTERVAL.

Oracle User Defined Types (UDT)


The detailed definition and use of user defined types (UDTs) for both Oracle and IBM DB2 are
beyond the scope of this course.



Note that there are similarities in syntax between the Oracle use of object relational features,
e.g.,
CREATE OR REPLACE TYPE address
AS OBJECT
(
street VARCHAR2(25),
city VARCHAR2(25),
state VARCHAR2(2),
zip VARCHAR2(10)
);

with a similar statement in IBM DB2:


CREATE TYPE address
AS
(
street VARCHAR(25),
city VARCHAR(25),
state CHAR(2),
zip CHAR(10)
)
MODE DB2SQL;

Oracle ROWID & UROWID


Oracle ROWID and UROWID (universal rowid) are hexadecimal string representations of the
address of a row, whether physical, logical (as in an index-organized table), or foreign (non-
Oracle). They are primarily used for values returned by the ROWID pseudocolumn.
DB2 UDB does not support the concept of ROWID or this data type.



Oracle Sequences

Oracle sequences can be mapped into a DB2 UDB database using
any of the following methods:
Implement a trigger to generate a sequential number (older
method)
Use a DB2 UDB sequence
Define an identity column for the table


Using a Trigger to Implement Oracle Sequences


You can use a trigger to generate a sequential number. For example:
CREATE TRIGGER AutoIncrement
NO CASCADE BEFORE INSERT
ON tablename
REFERENCING NEW AS n
FOR EACH ROW MODE DB2SQL
SET (n.key) = (
SELECT value(MAX(key),0) + 1
FROM tablename);

where key is the primary key column and tablename is the table you wish to update.
The VALUE(MAX(key),0) expression returns its first non-null argument. Thus, if the table is
empty, it returns 0, and the key is incremented to 1.

DB2 UDB Sequences


DB2 version 7.2 introduced the sequence as a new type of database object to allow the
generation of values such as unique keys for database tables. Applications can use sequences to
avoid possible concurrency and performance issues resulting from generating unique counters
outside the database. A sequence is not tied to a particular table column (see identity columns
below), nor is it bound to a unique table column and only accessible through that table column.
A sequence object can be created, or altered, so that it generates values by incrementing or
decrementing values either without a limit; up to a user-defined limit, and then stopping; or up to
a user-defined limit, then cycling back to the beginning and starting again. Sequences are only
supported in single partition databases.
CREATE SEQUENCE orderseq
START WITH 1
INCREMENT BY 1
NOMAXVALUE
NOCYCLE
CACHE 25

The CACHE parameter specifies the maximum number of sequence values that the database
manager preallocates and keeps in memory. If the server is brought down, cached sequence
numbers are lost.
Sequences are used in the following manner (incomplete code):
INSERT INTO orders (order_num, customer_num, ...)
VALUES (orderseq.NEXTVAL, 101, ...);

INSERT INTO items(order_num, ...)


VALUES (orderseq.PREVVAL, ...);

Note that the same order number is needed for the individual items. NEXTVAL causes the
sequence to be incremented, and PREVVAL returns the most recently generated value for the
specified sequence within the current session (Oracle uses CURRVAL to do this).

Defining an Identity Column in a Database Table


An identity column allows the database manager to automatically generate a unique numeric
value for each row that is added to a table. If you are creating a table and you know that you will
need to uniquely identify each row that is added to the table, you can add an identity column to
the table definition.
The syntax to create an identity column as part of the CREATE TABLE statement is:
CREATE TABLE customer
(
customer_num INT NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 100,
INCREMENT BY 1),
...
)

Once a table has been created, you cannot alter the table description to include an identity
column at a later point in time. You can alter an existing table to later include a generated
column using the db2gncol utility, but not an identity column.
Handling Nulls

NULL is an unknown data value:


The result of a calculation with a null value is unknown
Nullable columns require an extra byte of storage
Can specify a default value for a nullable column


NULL is not a data type, but a data value. Specifically, a null is an unknown value. As such, nulls
are handled differently than non-null values. When columns containing null values are used in
calculations, the result is unknown. Special syntax is used in SQL statements to work with null
value data.
If null values are allowed in a column, that column requires an extra byte of storage to flag it as
nullable. This extra byte must be considered during space allocation for the table. This is the
only difference in NULL handling between Oracle and DB2 UDB.
You can specify a default value for a nullable column when creating the table. An insert on a
table with a NOT NULL column causes an error if no data is supplied for that column and no
default value was specified.
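A sketch of using an explicit default to avoid such insert errors (the table and column names are illustrative):

```sql
CREATE TABLE orders2 (
    order_num  INT NOT NULL,
    ship_instr VARCHAR(40) NOT NULL WITH DEFAULT 'NONE'
);

-- No value supplied for ship_instr: the default 'NONE' is used
INSERT INTO orders2 (order_num) VALUES (1001);
```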



Summary

You should now be able to:


Determine the differences between Oracle and DB2 UDB data
types
Use the proper DB2 UDB data types
Explore other data type mapping possibilities
Understand the use of NULL data




Summary: Mapping Oracle Data Types to DB2 UDB Data
Types
The following table summarizes the mapping from the Oracle data types to corresponding DB2
data types. The mapping is one to many and depends on the actual usage of the data. (This table
is also in Appendix B for your convenience.)

Oracle DATE (note: the Oracle default DATE format is DD-MON-YY; use the
Oracle TO_CHAR() function to format a DATE for subsequent DB2 load):
    If only MM/DD/YYYY is required, use DATE
    If only HH:MM:SS is required, use TIME
    If both date and time are required (MM/DD/YYYY-HH:MM:SS.000000),
    use TIMESTAMP

Oracle VARCHAR2(n), n <= 4000:
    Use VARCHAR(n), n <= 32672

Oracle LONG, n <= 2 GB:
    If n <= 32672 bytes, use VARCHAR(n)
    If 32672 < n <= 32700 bytes, use LONG VARCHAR or CLOB
    If 32672 < n <= 2 GB, use CLOB(n)

Oracle RAW(n), n <= 255:
    If n <= 254, use CHAR(n) FOR BIT DATA
    If n <= 32672, use VARCHAR(n) FOR BIT DATA
    If n <= 2 GB, use BLOB(n)

Oracle LONG RAW, n <= 2 GB:
    If n <= 32672 bytes, use VARCHAR(n) FOR BIT DATA
    If 32672 < n <= 32700 bytes, use LONG VARCHAR FOR BIT DATA
    If n <= 2 GB, use BLOB(n)

Oracle BLOB, n <= 4 GB: if n <= 2 GB, use BLOB(n)

Oracle CLOB, n <= 4 GB: if n <= 2 GB, use CLOB(n)

Oracle NCLOB, n <= 4 GB: if n <= 2 GB, use DBCLOB(n/2)

Oracle NUMBER:
    If the Oracle declaration is NUMBER(p) or NUMBER(p,0), use SMALLINT
    if 1 <= p <= 4; INTEGER if 5 <= p <= 9; BIGINT if 10 <= p <= 18
    If the Oracle declaration is NUMBER(p,s), s > 0, use DECIMAL(p,s)
    If the Oracle declaration is NUMBER, use DOUBLE / FLOAT(n) / REAL



Exercises



Exercise 1
In this exercise, you will test your knowledge on Data Type Mapping between Oracle and DB2
UDB.
1.1 In the following chart, which of the following Oracle and DB2 UDB data types would
you consider map very closely? Which data types would map conditionally on the data
involved? Which of these data types do not map at all?

Oracle data types DB2 UDB data types Type of match (exact, conditional, none)
Varchar / Varchar2 Varchar

Date Timestamp

Date Date

Number Decimal

Number Real

Char Char

Blob Blob

Clob Clob

Raw Blob

1.2 Can you automatically assume that Oracle CHAR = DB2 UDB CHAR? What about Oracle
VARCHAR / VARCHAR2 and DB2 UDB VARCHAR?



Exercise 2
In this exercise, we will explore methods for handling certain data types in converting from
Oracle to IBM DB2. The exercise will be handled as a class discussion and we will also
consider special problems that you might have with your data.
2.1 How would you unload the following data from Oracle? and load it to IBM DB2?
Date

Varchar2

Clob / Blob

2.2 What kind of data type conversion tools might you use to accomplish the mapping?



Solutions



Solution 1
In this exercise, you will test your knowledge on Data Type Mapping between Oracle and DB2
UDB.
1.1 In the following chart, which of the following Oracle and DB2 UDB data types would
you consider map very closely? Which data types would map conditionally on the data
involved? Which of these data types do not map at all?

Varchar / Varchar2 -> Varchar: exact (a subset), but other DB2 data
types may be more suitable
Date -> Timestamp: exact, but may have too much precision
Date -> Date: conditional
Number -> Decimal: exact, but other DB2 data types may be more suitable
Number -> Real: conditional; NUMBER can hold larger numbers, and with
more exact precision
Char -> Char: almost exact, but differences in max length (254 vs. 255)
Blob -> Blob: exact, but the DB2 data type is limited to 2 GB
Clob -> Clob: exact, but the DB2 data type is limited to 2 GB
Raw -> Blob: exact, but the DB2 data type is limited to 2 GB, and other
data types may be more suitable

1.2 Can you automatically assume that Oracle CHAR = DB2 UDB CHAR? What about Oracle
VARCHAR / VARCHAR2 and DB2 UDB VARCHAR?
No. DB2 UDB CHAR allows from 1 to 254 bytes, while Oracle allows from 1 to 255
bytes.
DB2 UDB VARCHAR allows up to 32,672 bytes, while Oracle allows from 0 to 255
bytes for VARCHAR and 2000 bytes for VARCHAR2.



Module 7

Creating Tables and Views

Creating Tables and Views 02-2003 7-1


© 2002, 2003 International Business Machines Corporation
Objectives

At the end of this module, you will be able to:


Use proper SQL syntax to create tables
Use proper SQL syntax to create views
Create example tables in the DB2 UDB database
Create example views in the DB2 UDB database
Monitor disk usage


The example tables and views that will be used in this course represent a small sporting goods
wholesaler. This sales database (storesdb) has the following tables:
Customer (PK: customer_num*)
Orders (PK: order_num*)
Items (composite PK: order_num, item_num)
Cust_Calls (PK: customer_num, call_dtime)
Manufact (PK: manu_code)
State (PK: code)
Call_type (PK: call_code)
Stock (composite PK: stock_num, manu_code)
Catalog (PK: catalog_num*)
This database is documented in Appendix F, "The StoresDB Database." The origin of the data
for this database is IBM Informix. Columns marked with * are intended to be auto-numbered
(serial, generated, sequence). Appendix C, "Example import and load Utilities Results," shows
the loading of this data into a student database for this course.



The equivalent sample database in Oracle is the scott/tiger database (named after Bruce Scott
and his daughter's cat, Tiger) with the following tables:
Dept (PK: deptno)
Emp (PK: empno)
Bonus (PK: ename)
Salgrade (PK: grade)
You will find that the storesdb database has a larger variety of tables and structures and thus is
more suitable for the classroom/learning environment.
In other courses on DB2 you will probably be introduced to the musicdb.



Authorizations and Privileges

As a minimum, you will need one of the following authorizations or
privileges to create tables and views in the database:
DBADM authority for the database
CREATETAB privilege on that database

All tables and views are created within a SCHEMA. If no schema is
provided, the user's authorization ID is used.


You need DBADM authority or the CREATETAB privilege on an individual database to be
able to create tables or views.

Creating Database Objects


In DB2 UDB, all tables and views are created within a SCHEMA, and this is used as part of the
object name. If no explicit schema is provided, the user's authorization ID is used. Thus, for
example, if you are creating a table and your connection ID is projmgr and no explicit schema is
specified, the table tab1 is created as projmgr.tab1, and others must fully qualify the table with
the schema to use that table in their SQL statements.
The user's authorization ID (or login ID) is established at the operating system level, but schemas
are local to the database.
In Oracle, to create a table in your own schema, you must have the CREATE TABLE system
privilege. To create a table in another user's schema, you must have the CREATE ANY TABLE
system privilege. In addition, the owner of the schema to contain the table must have either
sufficient space quota on the tablespace that will hold the table or the UNLIMITED TABLESPACE
system privilege. Normally you specify the schema for the table, as with DB2 UDB. Similarly,
if you omit the schema, Oracle creates the table in your own schema, again as with DB2 UDB.
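A sketch of the DB2 UDB behavior described above, for a user connected as projmgr:

```sql
-- No explicit schema: the table is created as projmgr.tab1
CREATE TABLE tab1 (c1 INT);

-- Other users must qualify the name with the schema:
SELECT c1 FROM projmgr.tab1;
```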



Creating tables in DB2 UDB is very similar to creating them in Oracle.
The usual rules apply. For example, the primary key cannot be null and must be unique.
The column data types cannot be changed after creation; to change them, you must drop and
re-create the table.
In DB2 UDB Enterprise Edition (EE), version 7.2, you cannot explicitly partition tables across
different table spaces. Thus there are no PARTITION clauses in the DB2 UDB CREATE statement.
But, when more than one container is assigned to a table space, the server load balances rows
across those containers, providing effective round-robin partitioning. Load balancing is
performed on an extent-by-extent, rather than a row-by-row, basis.
Oracle 9 offers range-based, list, and hash partitioning for tables, and local and global
partitioning for indexes.
Partitioning is available in DB2 UDB EEE.



Create Table Example
CREATE TABLE employee
(
ID SMALLINT NOT NULL,
Name VARCHAR(9) NOT NULL,
Dept SMALLINT,
Job CHAR(5)
CHECK (job IN ('Sales','Mgr','Clerk')),
Hiredate DATE WITH DEFAULT CURRENT DATE,
Salary DECIMAL(7,2),
Comm DECIMAL(7,2)
)
IN tablespace1;


During table creation, it is good practice to defer the primary key and foreign key declarations
until after the table is created. This allows you to explicitly create indexes on the key columns
with your own naming convention, which the keys will use. Also, if you drop the primary key or
foreign key, the index remains for later use.
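A sketch of this practice, for an employee table whose primary key column is id (the index name is illustrative):

```sql
-- Create the index first, under your own naming convention...
CREATE UNIQUE INDEX employee_pk_ix ON employee (id);

-- ...then add the key; the existing index is used to enforce it,
-- and it remains if the primary key is later dropped
ALTER TABLE employee ADD PRIMARY KEY (id);
```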

Logging of Table Activity


Normally, data manipulation language (DML) activity against a table (insert, update, delete) is
logged, as is index creation.
Among the additional syntax for the CREATE TABLE statement is the NOT LOGGED INITIALLY
clause, which allows you to defer logging in the same unit of work (UOW) until the next commit.
This can reduce logging and potentially improve performance for the single transaction
underway. The same results can be achieved on an existing table by using the ALTER TABLE
statement with the NOT LOGGED INITIALLY parameter.
While LOAD does not log inserts, this cannot be controlled by the DBA. Data loading is covered
in Module 7.
Temporary tables can be logged or not logged, and can have indexes and statistics (V8).
Altering Tables

After creation, tables can be altered by:


Adding one or more columns to a table
Adding or dropping a primary key
Adding or dropping unique or referential constraints
Adding or dropping check constraints
Altering a VARCHAR column length
Altering a reference type column
Changing table attributes
Activating the NOT LOGGED INITIALLY attribute


After creating a table, various object elements can be changed, such as adding a column,
dropping a constraint, or adding a key.
Indexes can be created in their own table space if DMS table spaces are used, and this separation
of the index table space from the data can be changed later, if desired.

Note You cannot alter column data types (except the size of a VARCHAR column).

There are differences between Oracle and DB2 UDB in the way you alter a table. You are very
limited in the elements that can be altered after creation. For example, you cannot drop a
column.
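A few of these alterations can be sketched against the employee table from the earlier CREATE TABLE example (the phone column is illustrative):

```sql
-- Add a column
ALTER TABLE employee ADD COLUMN phone CHAR(12);

-- Lengthen a VARCHAR column (the only data type change allowed)
ALTER TABLE employee ALTER COLUMN name SET DATA TYPE VARCHAR(20);

-- Activate the NOT LOGGED INITIALLY attribute for this unit of work
ALTER TABLE employee ACTIVATE NOT LOGGED INITIALLY;
```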



Temporary Tables

The system tempspace1 table space is used for system-generated
temporary tables.
Explicit temporary tables created by the user must be placed in a
user temporary table space. For example:
DECLARE GLOBAL TEMPORARY TABLE temp1
LIKE table1
ON COMMIT DELETE ROWS
NOT LOGGED
IN usertemp1;
SELECT * FROM temp1;

where usertemp1 is a user temporary table space


System temporary tables (not user temporary tables) are created implicitly in the system
tempspace1 table space.
Explicit (declared) temporary user tables, such as summary tables for report purposes, must be
created in a user temporary table space.
User temporary tables that are created explicitly (declared) are placed in previously-defined user
temporary table spaces (such as usertemp1).

Note User temporary tables must be placed in a table space defined as user temporary and
cannot be placed in TEMPSPACE1 which is the system temporary table space.



Temporary Tables in Oracle
In Oracle, temporary tables are statically defined: you create them once per database. They
always exist and are entered in the data dictionary as objects, but always appear to be empty.
Oracle temporary tables may be session-based (data survives across commits, but not across a
disconnect/reconnect) or transaction-based (data disappears after a commit). For example:
CREATE GLOBAL TEMPORARY TABLE temp_session
ON COMMIT PRESERVE ROWS
AS
SELECT * FROM scott.emp;



Creating Views

The view object provides a logical, or virtual, view of the data in
underlying table(s). A view is used to:
Restrict access to sensitive data, such as salary
Simplify the use of SQL statements, such as joins or unions

Views in DB2 UDB are similar to views in Oracle and the DB2
summary table has similarities to the Oracle materialized view
Example:
CREATE VIEW ca_cust_names AS
SELECT lname, fname
FROM customer
WHERE state = 'CA';


A view is a logical selection of data from the underlying table(s). A view is used to restrict
access to data or to simplify data retrieval in SQL statements (hide the detail of the SQL). A
view in DB2 UDB is created the same way as a view in Oracle. As with tables, the view is
created with the current schema as part of the object name.

DB2 Summary Tables


DB2 supports the concept of a summary table that is based on the result set of a query. Some
examples of creating a summary table are:
CREATE SUMMARY TABLE s1
AS (SELECT order_num, order_date FROM orders WHERE ship_date IS NOT NULL)
DEFINITION ONLY;

CREATE SUMMARY TABLE s2


AS (SELECT ...)
DATA INITIALLY DEFERRED REFRESH DEFERRED;

CREATE SUMMARY TABLE s2


AS (SELECT ...)
REFRESH IMMEDIATE;



When DEFINITION ONLY is used, the query is only used to define the table; the table s1 is not
actually populated as a result of this query.
When DATA INITIALLY DEFERRED is specified, the data in the table can be refreshed at any time
by a separate REFRESH TABLE statement:
REFRESH TABLE s2;

If the REFRESH IMMEDIATE clause is used instead of REFRESH DEFERRED, then changes to the
underlying table(s) made by INSERT, UPDATE, or DELETE statements are cascaded to the
summary table. The content of the summary table, in this case, is thus the same as if the specified
full-select statement were processed. Summary tables do not themselves allow INSERT, UPDATE,
or DELETE statements.
Thus, summary tables with REFRESH IMMEDIATE behave like Oracle materialized views.



Monitoring Disk Usage

Use the DB2 list function to view storage information:


db2 connect to storesdb
db2 list tablespaces show detail

You can obtain the following information:


Table space name
Type
Total pages used
Page size


To determine disk space usage for database data tables or indexes, use the DB2 list command.
You can see storage particulars for all table spaces for that database, but only the userspace1
and other user table spaces contain the data and indexes.
Look for the information for these items:
Item
Tablespace ID
Name
Type (SMS or DMS)
Total pages
Usable Pages
Used Pages
Page Size
Extent Size (pages)
Prefetch Size (pages)
# of Containers



Summary

You should now be able to:


Use proper SQL syntax to create tables
Use proper SQL syntax to create views
Create example tables in the DB2 UDB database
Create example views in the DB2 UDB database
Monitor disk usage




Exercises



Exercise 1
In this exercise, you will create two tables in your database and insert data into them. You will
also create a view on those two tables and execute some SELECT SQL statements on the view.
1.1 Using a DB2 command window or a telnet session running the DB2 utility, create the
following tables in your storesdb database (these tables will be used in exercises in a
later module):
The first table, named parent, should have three columns. The first column (named p1)
is an integer data type and has a not null constraint, column two (named p2) is a three-
character column, and column three (named p3) is a decimal with a precision of ten and
a scale of two.
The second table, named child, should also have three columns. The first column
(named c1) is an integer data type and has a not null constraint, the second column
(named c2) is a three-character column, and the third column (named c3) is also an
integer with a NOT NULL constraint.

Tip You will find it easier to create a script file with your SQL in it and run that script
file from the DB2 command line. For example, you can put the CREATE TABLE
statement for the parent table in a file called parent.sql, and then execute it using the
DB2 command line. Execute it using:
db2 -tvf parent.sql
This way, you can make changes more easily without having to enter it all again.

1.2 Insert the following data into these tables:


Parent table:
column 1 = 1
column 2 = E
column 3 = 35.22
Child table:
column 1 = 100
column 2 = ABC
column 3 = 1



1.3 Create a view in your DB2 UDB storesdb database that selects all columns and all rows
from both tables; the tables are related by the first column in the parent table to the third
column in the child table.

1.4 Execute a few SELECT statements on all of these structures and examine the data.



Exercise 2
This exercise will help you learn to create tables in the storesdb database using some script files
provided for you. The differences will be addressed by viewing and comparing the two script
files.
You will add more structure to these tables in a later module, so make sure they are created
properly.
2.1 Create the tables in your DB2 UDB storesdb database using the storesdb.sql script. In
a system command window, change to your storesdb directory and execute the
storesdb.sql script file as shown below:
cd
cd storesdb
./storesdb.sql
db2 list tables
To get help for the DB2 utility, execute:
db2 ?

2.2 Were your tables created correctly in your storesdb database?


db2 list tables
You should find that you have created the storesdb tables in your storesdb database.
You will load data into these tables in a later module.

2.3 Determine how much space the table structures for your storesdb database took. Look
for the following:
Item Without Data
Table space ID 2
Name USERSPACE1
Type (SMS or DMS) System managed space
Total pages
Usable Pages
Used Pages
Page Size
Extent Size (pages)
Prefetch Size (pages)
# of Containers



Solutions



Solution 1
In this exercise, you will create two tables in your database and insert data into them. You will
also create a view on those two tables and execute some SELECT SQL statements on the view.
1.1 Using a DB2 command window or a telnet session running the DB2 utility, create the
following tables in your storesdb database (these tables will be used in exercises in a
later module):
The first table, named parent, should have three columns. The first column (named p1)
is an integer data type and has a not null constraint, column two (named p2) is a three-
character column, and column three (named p3) is a decimal with a precision of ten and
a scale of two.
The second table, named child, should also have three columns. The first column
(named c1) is an integer data type and has a not null constraint, the second column
(named c2) is a three-character column, and the third column (named c3) is also an
integer with a NOT NULL constraint.
CREATE TABLE parent (
p1 INT NOT NULL,
p2 CHAR(3),
p3 DECIMAL(10,2)
);
CREATE TABLE child (
c1 INT NOT NULL,
c2 CHAR(3),
c3 INT NOT NULL
);

1.2 Insert the following data into these tables:


Parent table:
column 1 = 1
column 2 = E
column 3 = 35.22
Child table:
column 1 = 100
column 2 = ABC
column 3 = 1
INSERT INTO parent VALUES (1, 'E', 35.22);
INSERT INTO child VALUES (100, 'ABC', 1);



1.3 Create a view in your DB2 UDB storesdb database that selects all columns and all rows
from both tables; the tables are related by the first column in the parent table to the third
column in the child table.
CREATE VIEW v1 AS
SELECT * FROM parent, child
WHERE parent.p1 = child.c3;

1.4 Execute a few SELECT statements on all of these structures and examine the data.
SELECT * FROM parent;
SELECT * FROM child;
SELECT * FROM parent, child
WHERE parent.p1 = child.c3;

SELECT * FROM v1;



Solution 2

This exercise will help you learn to create tables in the storesdb database using some script files
provided for you. The differences will be addressed by viewing and comparing the two script
files.
You will add more structure to these tables in a later module, so make sure they are created
properly.
2.1 Create the storesdb tables in your DB2 UDB storesdb database using the storesdb.sql
script. In a system command window, change to your storesdb directory and execute the
storesdb.sql script file as shown below:
cd
cd storesdb
./storesdb.sql
To get help for the DB2 utility, execute:
db2 ?

2.2 Were your tables created correctly in your storesdb database?


db2 list tables

Table/View Schema Type Creation time


------------------------------- --------------- ----- --------------------------
CALL_TYPE INST001 T 2002-02-20-10.55.34.640405
CATALOG INST001 T 2002-02-20-10.55.34.779436
CUST_CALLS INST001 T 2002-02-20-10.55.34.669466
CUSTOMER INST001 T 2002-02-20-10.55.34.382415
CUSTVIEW INST001 V 2002-02-20-10.55.34.902069
ITEMS INST001 T 2002-02-20-10.55.34.530308
LOG_RECORD INST001 T 2002-02-20-10.55.34.740430
MANUFACT INST001 T 2002-02-20-10.55.34.480467
ORDERS INST001 T 2002-02-20-10.55.34.449543
SOMEORDERS INST001 V 2002-02-20-10.55.34.921538
STATE INST001 T 2002-02-20-10.55.34.608451
STOCK INST001 T 2002-02-20-10.55.34.510496

12 record(s) selected.
You will also see the parent and child tables and the v1 view.
You should find that you have created the storesdb tables in your storesdb database.
You will load data into these tables in a later module.



2.3 Determine how much space the table structures for your storesdb database took. Look
for the following approximate values:
Item Without Data
Table space ID 2
Name USERSPACE1
Type (SMS or DMS) System managed space
Total pages 16
Usable Pages 16
Used Pages 16
Page Size 4096
Extent Size (pages) 32
Prefetch Size (pages) 32
# of Containers 1



Module 8

Data Migration Methods Loading Tables

Data Migration Methods Loading Tables 02-2003 8-1


© 2002, 2003 International Business Machines Corporation
Objectives

At the end of this module, you will be able to:


Explore various methods of data migration
Understand the various file types supported
Perform data migration of the storesdb database data using the
DB2 import utility
Perform data migration of the storesdb database data using the
load utility
Monitor disk usage after loading tables



Data Copy Methods

The two data movement utilities used for data migration are import
and load:
Input data from files and insert it into the target tables
Use data files produced from another database in, for example,
delimited format
One file for each table
Comma delimited (other delimiters available)


There are a variety of data movement utilities provided in DB2 UDB, but the ones you can use
to migrate data from an Oracle database to a DB2 UDB database are either import or load.
These two utilities take input data from files and insert it into the target tables.
Other data movement utilities in DB2 UDB work with other database environments, such as
moving data to and from a DRDA (distributed relational database architecture) database system
(mainframe).
In this course, you will learn to use both the import and load utilities and determine the best one
to use for your data migration needs.
We will be using data from another database (storesdb) in delimited format (using the character
| as the field delimiter), since this is an efficient format for Unix/Linux systems and even for
NT/2000 systems. The same data has also been made available in the laboratory files as
comma-separated values (CSV) for those who would prefer that format, if it better represents
typical data on your system. There is one file for each table.
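To make the two delimited formats concrete, the following sketch builds a small pipe-delimited file (the rows are made-up samples, not course data) and shows that a comma-delimited export can be converted to pipe-delimited with a one-line sed command, provided no field itself contains a comma:

```shell
# two hypothetical pipe-delimited records, one per line
printf '100|ABC|1\n101|DEF|1\n' > child.unl

# convert a comma-delimited (CSV) record to pipe-delimited;
# safe only when no field itself contains a comma
printf '1,E,35.22\n' > parent.csv
sed 's/,/|/g' parent.csv > parent.unl

# show both results
cat child.unl parent.unl
```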



Privileges and Authorities Needed

You need one of these authorities:


SYSADM
DBADM
LOAD (if using the load utility)

And one of these privileges:


CONTROL
SELECT and INSERT


The import utility can create and load data into a target table, provided you use the correct file
type for import and you have the proper authorities and privileges. In this course, assume that
the target table structure has already been created by the time you are ready to import data into
it.
The load utility requires that the target table structure is created before data can be loaded into it.
If you want to import or load data into a database, you must be connected to it and have the
proper privileges to insert data into the table in question. These include the SYSADM or
DBADM authorities, and the CONTROL or SELECT and INSERT privileges.
A special authority, LOAD, can be used only for the load utility. This can be used if the user
running the load utility does not have SYSADM or DBADM authorities. The user must still
have INSERT privilege in order to use the load utility.
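As a sketch, granting a non-administrative user just enough authority to run the load utility might look like the following (the user name inst002 is a hypothetical example):

```sql
-- LOAD authority on the database, plus INSERT on the target table,
-- lets a user run LOAD...INSERT without SYSADM or DBADM authority
GRANT LOAD ON DATABASE TO USER inst002;
GRANT INSERT ON TABLE customer TO USER inst002;
```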



Import Data File Types

File types for the import utility include:


Nondelimited ASCII format (ASC): fixed-length format
Delimited ASCII format (DEL)
Worksheet format (WSF)
Integrated exchange format (IXF): a special internal format for
IBM DB2 data exchange (not produced by Oracle)


The import utility requires data in the following file types:


Nondelimited ASCII format (ASC): data is aligned in columns, with no delimiter.
Delimited ASCII format (DEL): data is in fields separated by a delimiter character,
such as a comma.
Worksheet format (WSF): spreadsheet data.
Integrated exchange format (IXF): data is in the format of a table or view. The table
can be created during the import with this format.
To migrate data from an Oracle database to a DB2 UDB database, use the DEL (delimited
ASCII) file type or comma-separated values (CSV).



Import Parameters

You must supply the following information when importing data:


Path and name of the input file
Name or alias of the target table or view
Format of the data in the input file

Optionally, you can also specify:


Method of mapping field positions in the file to table columns
Number of rows to INSERT before committing
Number of file records to skip before beginning the import
operation


The import utility uses the SQL INSERT statement to write data from an input file into a table or
view. If the target table or view already contains data, you can either replace or append to the
existing data.
Performing periodic COMMITs reduces the number of rows that are lost if a failure occurs during
the import operation. It also prevents the DB2 UDB logs from getting full when processing a
large input file.



Using Import

You must be connected to the database that contains the target table
for the import:
Complete all transactions before executing an import

You can use the import utility by:


The command line processor (CLP)
The Import notebook in the Control Center


You must be connected to the database that contains the target table for the import. The import
utility issues a COMMIT or ROLLBACK statement, so you should complete all transactions before
executing the import utility.
The import utility can be started by:
The command line processor (CLP).
db2 "import from customer.unl of
del insert into inst001.customer"
The Import notebook in the Control Center.
From the Control Center, open the Tables folder for the proper database.
Select the table you want by clicking the right mouse button, then select Import
from the pop-up menu.



Import Syntax

Basic import syntax is:


db2 "import from <filename>
of <file_type>
insert into <tablename>"
For example:
db2 connect to storesdb
db2 "import from /home/inst001/customer.unl
of del modified by coldel|
insert into customer"


In this course, we will concentrate on using the command line version of the import and load
utilities. The DB2 UDB syntax that we will use for the import utility is:
db2 "import from <filename>
of <file_type> insert into <tablename>"

You can also specify the use of a message log, apply modifiers, specify column mapping,
specify commit count and restart count, and specify table-space usage.
For example:
db2 connect to storesdb
db2 "import from /home/inst001/customer.unl
of del modified by coldel|
commitcount 100
messages /tmp/cust.msg
insert into customer"

This shows you the use of the pipe character as the column delimiter (default value is a comma).
You can also specify a character string delimiter (default is double quotation mark).



Load Input Data Method

The load utility can move data from files, named pipes, or devices.
The following file types are supported:
Non-delimited ASCII format (ASC)
Delimited ASCII format (DEL)
Integrated exchange format (IXF) not produced by Oracle


The DB2 UDB load utility can move data from files, named pipes, or devices into a DB2 UDB
table. The data sources can reside on the same node as the database or on a remotely connected
client.
The target table must exist. If the target table already contains data, you can replace or append to
the existing data.



Load Basics

The load utility is faster than the import utility because it writes
formatted pages directly into the database.
There are three phases to the load process:
Load: data is written to the table
Build: indexes are created
Delete: rows that caused a unique key violation are removed
from the table and placed into the exception table, if specified


The load utility can quickly move large quantities of data into newly-created tables or into tables
that already contain data. The utility can handle all data types including large objects (LOBs)
and user-defined types (UDTs). The load utility is faster than the import utility because it writes
formatted pages directly into the database. The import utility performs SQL INSERTs. The load
utility does not fire triggers and does not perform referential or table constraint checking (other
than validating the uniqueness of the indexes).

Note Each deletion event is logged. If you have a large number of records that violate the
uniqueness condition, the log could fill up during the delete phase.



The following table summarizes the important differences between the DB2 load and import
utilities.
Import: Slow when moving large amounts of data.
Load:   Faster than the import utility when moving large amounts of data, because the
        load utility writes formatted pages directly into the database.

Import: Limited exploitation of intrapartition parallelism.
Load:   Exploitation of intrapartition parallelism. Typically, this requires symmetric
        multiprocessor (SMP) machines.

Import: Supports hierarchical data.
Load:   Does not support hierarchical data.

Import: Creation of tables, hierarchies, and indexes supported with PC/IXF format.
Load:   Tables and indexes must exist.

Import: No support for importing into summary tables.
Load:   Support for loading into summary tables.

Import: WSF format is supported.
Load:   WSF format is not supported.

Import: No BINARYNUMERICS support.
Load:   BINARYNUMERICS support.

Import: No PACKEDDECIMAL support.
Load:   PACKEDDECIMAL support.

Import: No ZONEDDECIMAL support.
Load:   ZONEDDECIMAL support.

Import: Cannot override columns defined as GENERATED ALWAYS.
Load:   Can override GENERATED ALWAYS columns, by using the GENERATEDIGNORE and
        IDENTITYIGNORE file type modifiers.

Import: Supports import into tables and views.
Load:   Supports loading into tables only.

Import: The table spaces in which the table and its indexes reside are online for the
        duration of the import operation.
Load:   The table spaces in which the table and its indexes reside are offline for the
        duration of the load operation. In V8, table spaces are not locked during load.

Import: All rows are logged.
Load:   Minimal logging is performed.

Import: Trigger support.
Load:   No trigger support.

Import: If an import operation is interrupted, and a commitcount was specified, the
        table is usable and will contain the rows that were loaded up to the last
        COMMIT. The user can restart the import operation, or accept the table as is.
Load:   If a load operation is interrupted and a savecount was specified, the table
        remains in load pending state and cannot be used until the load operation is
        restarted, a load terminate operation is invoked, or the table space is
        restored from a backup image created some time before the attempted load
        operation.

Import: Space required is approximately equivalent to the size of the largest index
        plus 10%. This space is obtained from the temporary table spaces within the
        database.
Load:   Space required is approximately equivalent to the sum of the size of all
        indexes defined on the table, and can be as much as twice this size. This
        space is obtained from temporary space within the database.

Import: All constraints are validated during an import operation.
Load:   Uniqueness is verified during a load operation, but all other constraints must
        be checked using the SET INTEGRITY statement.



Import: The key values are inserted into the index one at a time during an import
        operation.
Load:   The key values are sorted and the index is built after the data has been
        loaded.

Import: If updated statistics are required, the runstats utility must be run after an
        import operation.
Load:   Statistics can be gathered during the load operation if all the data in the
        table is being replaced.

Import: You can import into a host database through DB2 Connect.
Load:   You cannot load into a host database.

Import: Import files must reside on the node from which the import utility is invoked.
Load:   In a partitioned database environment, load files or pipes must reside on the
        node that contains the database. In a nonpartitioned database environment,
        load files or pipes can reside on the node that contains the database or on
        the remotely connected client from which the load utility is invoked.

Import: A backup image is not required. Because the import utility uses SQL INSERTs,
        DB2 UDB logs the activity and no backups are required to recover these
        operations in case of failure.
Load:   A backup image can be created during the load operation.



Load Parameters

You must supply the following information when loading data:


Path and name of the input file, device, or named pipe
Name or alias of the target table
Format of the data in the input file (DEL, ASC, or PC/IXF)

To load data into a table, you must have one of the following:
SYSADM authority
DBADM authority
LOAD authority on the database and INSERT, or INSERT and
DELETE privilege


To load data into a table, you must have one of the following:
SYSADM authority
DBADM authority
LOAD authority on the database and
INSERT privilege on the table when the load utility is invoked in INSERT mode,
TERMINATE mode (to terminate a previous load insert operation), or RESTART
mode (to restart a previous load insert operation)
INSERT and DELETE privilege on the table when the load utility is invoked in
REPLACE mode, TERMINATE mode (to terminate a previous load replace
operation), or RESTART mode (to restart a previous load replace operation)
INSERT privilege on the exception table, if such a table is used as part of the load
operation.



Using Load

You must be connected to the database that contains the target table
for the load:
Complete all transactions before a load

You can use the load utility by:


The command line processor (CLP)
The Load notebook in the Control Center

Exception tables can be used to track rows with errors.


V8: You can now do an online load, because version 8 removes
the problem of table space locks during a LOAD.


You must be connected to the database that contains the target table for the load. The load utility
issues a COMMIT or ROLLBACK statement, so you should complete all transactions before
executing the load utility.
You can sort the data in the input file to reflect the desired load sequence. However, if clustering
is required, the data should be sorted on the clustering index before you attempt the load.
The load utility can be started by:
The command line processor (CLP).
db2 "load from customer.unl of del
modified by coldel| insert into customer"
The Load notebook in the Control Center.
From the Control Center, open the Tables folder for the proper database.
Select the table you want by clicking the right mouse button, then select Load
from the pop-up menu.
The load API
The DB2 load operation provides the ability to capture error information (and data) in
EXCEPTION tables.



Exception Table
The exception table is a user-created table that reflects the definition of the table being loaded
and includes some additional columns. It is specified by the FOR EXCEPTION clause on the LOAD
command. An exception table cannot contain an identity column or any other type of generated
column. If an identity column is present in the primary table, the corresponding column in the
exception table should only contain the column type, length, and nullability attributes. The
exception table is used to store copies of rows that violate unique index rules; the utility does not
check for constraints or foreign key violations other than violations of uniqueness.

Note Any rows rejected before the building of an index because of invalid data are not
inserted into the exception table.

Rows are appended to existing information in the exception table; this can include invalid rows
from previous load operations. If you want only the invalid rows from the current load
operation, you must remove the existing rows before invoking the utility.
Two methods of creating the exception table are shown here:
CREATE TABLE customerexc
LIKE customer;
ALTER TABLE customerexc
ADD COLUMN ts TIMESTAMP
ADD COLUMN msg CLOB(32K);

CREATE TABLE customerexc AS
(SELECT customer.*, CURRENT TIMESTAMP AS ts,
CLOB(' ', 32767) AS msg
FROM customer)
DEFINITION ONLY;
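Once a load has named customerexc in its FOR EXCEPTION clause, the rejected rows can be examined and then cleared out, since rows accumulate across load operations (a sketch using the definitions above):

```sql
-- inspect rows rejected for unique-key violations
SELECT * FROM customerexc;

-- remove old rows so the next load's violations can be seen in isolation
DELETE FROM customerexc;
```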



Use the following query to determine if the LOAD has left any table in a CHECK PENDING
state:
SELECT tabname, status, const_checked
FROM syscat.tables;

Example output from the query:


TABNAME STATUS CONST_CHECKED
------------------ ------ --------------------------------
ORG N YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
STAFF N YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
DEPARTMENT N YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
EMPLOYEE N YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
EMP_ACT N YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
PROJECT N YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
EMP_PHOTO N YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
EMP_RESUME N YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
SALES N YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
CL_SCHED N YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
IN_TRAY N YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY

(Callouts in the original figure mapped positions in the CONST_CHECKED string to
constraint types: FOREIGN KEY, SUMMARY TABLE, and CHECK CONSTRAINT.)

Meaning of the values in the CONST_CHECKED column:


Value Description
Y     Checked by SYSTEM
N     Not checked (in CHECK PENDING)
U     Checked by USER (as a result of SET INTEGRITY...IMMEDIATE UNCHECKED)
W     Previously checked by USER, and some data needs to be verified by SYSTEM
      (in CHECK PENDING)

The Y's in the CONST_CHECKED column indicate that various constraints have been checked
by the system in our example.
The N in the STATUS column means the table is in a normal state.
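To list only the tables still left in check-pending state, you can filter on the STATUS column directly ('C' indicates check pending, 'N' normal; a sketch):

```sql
SELECT tabname
FROM syscat.tables
WHERE status = 'C';
```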



Load Syntax

Basic load syntax is:


db2 "load from <filename>
of <file_type> insert into <tablename>"

For example:
db2 connect to storesdb
db2 "load from /home/inst001/customer.unl
of del modified by coldel|
insert into customer"


You can also: specify the use of a message log, apply modifiers, specify column mapping,
specify commit count and restart count, and specify table-space usage, such as:
db2 "load from <filename>
of <file_type> modified by coldel<value>
savecount <value>
messages <filename>
insert into <tablename>
for exception <exception_table>"

For example:
db2 connect to storesdb
db2 "load from /home/inst001/customer.unl
of del modified by coldel|
savecount 100
messages /tmp/cust.msg
insert into customer
for exception customerexc"

This shows you how to use the pipe character (|) as the field delimiter.



Checking for Constraints Violations

After a load, the table may be in check-pending state:


Use the SET INTEGRITY statement to remove the check-
pending state
By default, this checks only the appended portion of the table
for constraint violations


After a load operation, the loaded table may be in check-pending state if it has table check
constraints or referential integrity constraints defined on it.
The STATUS flag of the SYSCAT.TABLES entry for that loaded table indicates the check-
pending state of the table. For the loaded table to be usable, the STATUS must have a value of
N, indicating a normal state.
Use the SET INTEGRITY statement to remove the check-pending state. The SET INTEGRITY
statement checks a table for constraints violations, then takes the table out of check-pending
state.
By default, the SET INTEGRITY statement checks only the appended portion of the table for
constraints violations if all the load operations are performed in INSERT mode.
For example:
db2 "load from infile1.unl of del insert into table1"
db2 set integrity for table1 immediate checked
Only the appended portion of TABLE1 is checked for constraint violations, which is faster than
checking the entire table.



LOAD QUERY Command

Used to check the status of a load operation


Can be used either by local or remote users
Example:
db2 connect to storesdb
db2 "load query table stock
to /home/inst001/stock.tempmsg"


Use the LOAD QUERY command to check the status of a load operation during processing. You
must be connected to the same database with a separate CLP session. It can be used either by
local or remote users.
A user loading a large amount of data into the STOCK table should check the status of the load
operation.
The output of file /home/inst001/stock.tempmsg might look like the following:
SQL3500W The utility is beginning the "LOAD" phase at time
"02-13-2002 17:45:28.562345".
SQL3519W Begin Load Consistency Point. Input record count = "0".
SQL3520W Load Consistency Point was successful.
SQL3109N The utility is beginning to load data from file
"/home/inst101/stock.unl".
SQL3519W Begin Load Consistency Point. Input record count = "100"
SQL3520W Load Consistency Point was successful.
SQL3519W Begin Load Consistency Point. Input record count = "200"
SQL3520W Load Consistency Point was successful.

V8: LOAD QUERY checks the status of a load operation during processing and returns
the table state. If a load is not processing, then the table state alone is returned. A
connection to the same database and a separate CLP session are also required to
successfully invoke this command. It can be used either by local or remote users.



Monitoring Disk Usage After Load

Use the DB2 list function to view storage information:


db2 connect to storesdb
db2 list tablespaces show detail
You can obtain the following information:
Table space name
Type
Total pages used
Page size


To determine disk space usage for a database's data tables or indexes, use the DB2 list function.
You can see storage particulars for all table spaces for that database, but only the USERSPACE1
and other user table spaces contain the data and indexes.
Look for the information for these items:
Item
table space ID
Name
Type (SMS or DMS)
Total pages
Usable Pages
Used Pages
Page Size
Extent Size (pages)
Prefetch Size (pages)
# of Containers
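The detailed output contains one stanza per table space; for USERSPACE1 it resembles the following abridged sketch (the values shown are illustrative, not taken from a live system):

```text
Tablespace ID                        = 2
Name                                 = USERSPACE1
Type                                 = System managed space
Total pages                          = 16
Useable pages                        = 16
Used pages                           = 16
Page size (bytes)                    = 4096
Extent size (pages)                  = 32
Prefetch size (pages)                = 32
Number of containers                 = 1
```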



Summary

You should now be able to:


Explore various methods of data migration
Understand the various file types supported
Perform data migration of the storesdb database data using the
DB2 import utility
Perform data migration of the storesdb database data using the
load utility
Monitor disk usage after loading tables



Exercises



Exercise 1
In this exercise, you will explore different methods of data migration.
1.1 Given that you have exported delimited data files from an Oracle or other database, what
possible ways might you use to load that data into a DB2 UDB database?

1.2 From your last exercise, which method would you use to load a small database, such as
storesdb? Explain your reasoning.

1.3 Which method would you use to load a larger database? Why?



Exercise 2
In this exercise, you will practice using both the DB2 import and load utilities to insert data into
a database.
Previously, you have created a database (storesdb), with tables, views, and triggers in it.
2.1 Using a command window local to your server, ensure you are in the storesdb directory
containing the course script files. Execute the import.sql script to perform an import.
cd $HOME/storesdb
./import.sql > import.out

2.2 What kind of information was provided during the import?

2.3 Did you check to see that all rows were inserted? How many rows were inserted for each
table?

2.4 Were there any records rejected?

2.5 Perform a few SELECT statements.

2.6 In order for you to practice with the load utility, you need to remove the data in the
tables. To do this, execute the delete.sql script.

2.7 Perform a load by executing the load.sql script file.


This inserts data into the tables in your storesdb database with the load utility. Ignore
the set integrity errors. DB2 generates this error when it does not find any constraints,
and there are no constraints on the tables yet. Load commands are commonly run as
batch processes with set integrity statements. This ensures that the check-pending state
is cleared and the database is ready to be used.

2.8 What kind of information was provided during the load? You should see much more
information during a load than you did during an import.



2.9 Did you check to see that all rows were inserted? How many rows were inserted for each
table?

2.10 Were there any records rejected?

2.11 Perform a few SELECT statements.

2.12 Determine how much space the data load took by comparing the size of the
USERSPACE1 table space without data (from an exercise in the last module) to the table
space as it is now with data in it. Look for the following:
Item Without Data With Data
table space ID 2
Name USERSPACE1
Type (SMS or DMS) System managed space
Total pages 16
Usable Pages 16
Used Pages 16
Page Size 4096
Extent Size (pages) 32
Prefetch Size (pages) 32
# of Containers 1

2.13 How much space did the data and indexes for storesdb take?
Item Difference
Total pages
Usable Pages
Used Pages



Solutions



Solution 1
1.1 Given that you have exported data files from an Oracle or other database, what possible
ways might you use to load that data into a DB2 UDB database?
The DB2 import and load utilities would be appropriate for inserting data from files.
Both utilities allow you to specify the delimiter character, and you would signify the
file type to be DEL for DB2 UDB.

1.2 From your last exercise, which method would you use to load a small database, such as
storesdb? Explain your reasoning.
The storesdb database is very small, and would load quickly no matter which
method you used. For small databases such as this, it would be better to use the
import utility, because constraint checking is done as each row is inserted.
This means that after an import, the tables would not be left in a check-pending
state.

1.3 Which method would you use to load a larger database? Why?
To load large databases, the load utility would be best to use. The load utility is faster
than the import utility, because it writes formatted pages directly into the database.
The load utility does not fire triggers and does not perform referential or table
constraint checking (other than validating the uniqueness of the indexes).



Solution 2
2.1 Using a command window local to your server, ensure you are in the storesdb directory
containing the course script files. Execute the import.sql script to perform an import.
cd $HOME/storesdb
./import.sql > import.out
This will insert data into the tables in your storesdb database using the import utility.
See the Appendix for an example of the results of an import.

2.2 What kind of information was provided during the import?


Information included in the output screen of the import utility includes the
following:
Actual import command, including the input file and target table
Number of rows read
Number of rows skipped
Number of rows inserted
Number of rows updated
Number of rows rejected
Number of rows committed

2.3 Did you check to see that all rows were inserted? How many rows were inserted for each
table?
table rows
state 52
manufact 9
call_type 5
stock 74
customer 28
orders 23
items 67
catalog 74
cust_calls 7
log_record 0



2.4 Were there any records rejected?
No.

2.5 Perform a few SELECT statements.


db2 "SELECT * FROM customer"
db2 "SELECT * FROM orders"
db2 "SELECT order_num, customer.*
FROM customer, orders
WHERE customer.customer_num = orders.customer_num"

2.6 In order for you to practice with the load utility, you need to remove the data in the
tables. To do this, execute the delete.sql script.
./delete.sql

2.7 Perform a load by executing the load.sql script file.


This inserts data into the tables in your storesdb database with the load utility. Ignore
the set integrity errors. DB2 generates this error when it does not find any constraints,
and there are no constraints on the tables yet. Load commands are commonly run as
batch processes with set integrity statements. This ensures that the check-pending state
is cleared and the database is ready to be used.
./load.sql > load.out
This inserts data into the tables in your storesdb database with the DB2 load utility.
See the Appendix for an example load result.
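A typical set integrity statement in such a batch, sketched here for the customer table (the actual script may name different tables), looks like:

```sql
-- Clear the check-pending state after a load by validating constraints
SET INTEGRITY FOR customer IMMEDIATE CHECKED;
```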

2.8 What kind of information was provided during the load? You should see much more
information during a load than you did during the import.
Information included in the output screen of the load utility includes the following:
Actual load command, including the input file and target table
Information about backup pending state
Indicates the start and end of the load phase (timestamped)
Information about the Load Consistency Point
Indicates the start and end of the build phase (timestamped)
Number of rows read
Number of rows skipped
Number of rows inserted



Number of rows updated
Number of rows rejected
Number of rows committed
You may see other messages regarding check pending state.

2.9 Did you check to see that all rows were inserted? How many rows were inserted for each
table?
table rows
state 52
manufact 9
call_type 5
stock 74
customer 28
orders 23
items 67
catalog 74
cust_calls 7
log_record 0

2.10 Were there any records rejected?


No.

2.11 Perform a few SELECT statements.


db2 "SELECT * FROM customer"
db2 "SELECT * FROM orders"
db2 "SELECT order_num, customer.*
     FROM customer, orders
     WHERE customer.customer_num = orders.customer_num"



2.12 Determine how much space the data load took by comparing the size of the database
without data (from an exercise in the last module) to the database as it is now with data
in it. Look for the following:
Item Without Data With Data
table space ID 2 2
Name USERSPACE1 USERSPACE1
Type (SMS or DMS) System managed space System managed space
Total pages 16 41
Usable Pages 16 41
Used Pages 16 41
Page Size 4096 4096
Extent Size (pages) 32 32
Prefetch Size (pages) 32 32
# of Containers 1 1

2.13 How much space did the data and indexes for storesdb take?
Item Difference
Total pages 25
Usable Pages 25
Used Pages 25
The data and indexes took 25 pages of 4096 bytes each for a total of 100 kilobytes.



Module 9

Accessing Data Through Indexes

Accessing Data Through Indexes 02-2003 9-1


© 2002, 2003 International Business Machines Corporation
Objectives

At the end of this module, you will be able to:


Describe the benefits and costs of using indexes in database
implementations
List and describe the DB2 UDB index types
Explain SMS and DMS storage implementations of indexes
Compare and contrast Oracle and DB2 UDB index functionality
Create DB2 UDB indexes
Use DB2 UDB Visual Explain



Benefits of Using Indexes

Indexes can help data retrieval by:


Facilitating a key-only search: the key values are already in
sorted order. The index is built and stored in sorted order and
scanned in either ascending order, descending order, or both
Using Index Include to get frequently used data
Facilitating table joins
Ensuring uniqueness


When querying the database, you want the data to be returned to you as soon as possible.
Indexes are used to help speed up queries in several different ways.
Indexes provide the optimizer another way of retrieving data other than with a sequential scan.
You may only want 10 or so rows from a table that has 2 million rows. Without an index to find
the specific rows you want, the optimizer must scan all the pages of that table to find your rows.
With an index, only the requested rows are read and selected; it is not necessary to scan all rows.
Some queries return just the index key values. In this case the optimizer goes to the index nodes
and scans them in their already-sorted order (key-only search). There is no need to search
through the table for the values.
Indexes facilitate the process of joining multiple tables. When a column(s) is defined with a
primary key constraint, a unique index is created on that column(s) if one does not already exist.
When a column(s) is defined with a foreign key constraint, a duplicate index is created on that
column(s) if one does not already exist.




Costs of Using Indexes

Costs of using indexes can include:


Storage space
Temporary space
Processing costs
Potential for incorrect access strategy


Storage Space
Storage space (in a user table space) required for each unique index can be estimated as:
(average_index_key_size + 8) * number_of_rows * 2
where:
The average_index_key_size is the byte count of each column in the index key.
The 2 is for overhead, such as nonleaf pages and free space.

Note For every column that allows NULLs, add one extra byte for the null indicator.



Temporary Space
The maximum amount of temporary space required for each index during index creation can be
estimated as:
(average_index_key_size + 8) * number_of_rows * 3.2
where 3.2 is for index overhead, and space required for sorting during index creation.
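As a rough worked example of the two formulas above (the key size and row count below are hypothetical, not taken from the course database):

```shell
avg_key=13      # average index key size in bytes (hypothetical)
rows=2000000    # number of rows in the table (hypothetical)

# Storage estimate for a unique index: (avg_key + 8) * rows * 2
echo $(( (avg_key + 8) * rows * 2 ))    # prints 84000000 (bytes)

# Temporary space during index creation: (avg_key + 8) * rows * 3.2
awk -v k="$avg_key" -v n="$rows" 'BEGIN { printf "%d\n", (k + 8) * n * 3.2 }'    # prints 134400000
```

At roughly 80 MB of index storage and 134 MB of temporary sort space, an index of this size is worth budgeting for before creation.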

Leaf Pages
The following formula can be used to estimate the number of leaf pages. The accuracy of this
estimate depends largely on how well the averages reflect the actual data.
An estimate of the number of leaf pages is:
L = number_of_leaf_pages = X / (avg_number_of_keys_on_leaf_page)
where X is the total number of rows in the table.
You can estimate the original size of an index as:
(L + 2L / (average_number_of_keys_on_leaf_page)) * pagesize

Note For SMS, the minimum required space is 12K. For DMS, the minimum is one
extent.

For DMS table spaces, add all the sizes of the indexes for a table and round up to a multiple of
the extent size for the table space where the index is stored.
Remember to provide additional space for index growth due to INSERT/UPDATE activity, which
can result in page splits.

Processing costs
Using an index to search for data requires extra processing by the instance. The optimizer
calculates how much time and processing is required to use an index, factoring in the size of the
index. Then the optimizer decides whether or not to use the index. The optimizer generates a
query plan and, if that plan includes use of the index, the index is accessed, scanned, and the
data rows requested are retrieved.



The processing cost is larger for indexes with more levels because of the need to link down
through the various branch nodes to the leaf level nodes.

Incorrect access strategy


There is always the potential for an incorrect access strategy, usually because the database and
index storage statistics are not up-to-date. This can be remedied by executing the runstats utility
to update the system catalog tables with the current data and index distribution statistics.
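For example, table and index statistics can be refreshed with a statement like the following (the schema name is illustrative):

```sql
-- Refresh statistics so the optimizer works from current data
RUNSTATS ON TABLE db2inst1.customer AND INDEXES ALL;
```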



Types of DB2 UDB Indexes

Indexes can be either unique or nonunique, with these


characteristics:
Single column or multiple columns
Ascending or descending
Forward scan only, or ALLOW REVERSE SCANS
Include (include non-key information in the index)

Oracle index-clustered tables perform a totally different function:
they place rows from different tables having the same key in the
same data block, but do not physically order the blocks by key value.


Indexes can be defined as unique or nonunique, and can be defined on a single column or on
multiple columns in a table. Indexes can be defined as ascending (default) or descending. Also
by default, DB2 UDB indexes are created for forward scanning only. You must specify that you
want to allow reverse scanning, if so desired.
Indexes can also be created on calculated columns of a table; the calculated results are
maintained automatically as the data changes.
Additional information beyond the key data can be stored in an index (INCLUDE). This extra
data is not considered in the search strategy, but if present, it must be considered for storage space.
Although indexes generally help data retrieval in the SELECT statements and facilitate join
conditions, there is maintenance overhead involved with each index during INSERT, UPDATE, or
DELETE statements.

By clustering a table by an index, the table data is sorted and reorganized into index-sorted
order.
In Oracle, the concept of index clusters is completely different. In the cluster database object,
tables can be stored in a prejoined manner: the rows of several different tables can be stored
together in the same data block. There is, however, no necessary physical relationship between a
particular block with one key value (e.g., 100) and those with the adjacent key values (99, 101).



Type-2 Indexes

The primary advantages of type-2 indexes are:


V8
They improve concurrency because the use of next-key locking
is reduced to a minimum
An index can be created on columns that have a length greater
than 255 bytes
A table must have only type-2 indexes before online table reorg
and online load can be used
They are required for the new multidimensional clustering
facility


Version 8 adds support for type-2 indexes. The primary advantages of type-2 indexes are:
They improve concurrency because the use of next-key locking is reduced to a
minimum. Most next-key locking is eliminated because a key is marked deleted instead
of being physically removed from the index page. For information about key locking,
refer to topics that discuss the performance implications of locks.
An index can be created on columns that have a length greater than 255 bytes.
A table must have only type-2 indexes before online table reorg and online table load
can be used against the table.
They are required for the new multidimensional clustering facility.
All new indexes are created as type-2 indexes, except when you add an index on a table that
already has type-1 indexes. In this case the new index will also be a type-1 index because you
cannot mix type-1 and type-2 indexes on a table.
All indexes created before Version 8 were type-1 indexes. To convert type-1 indexes to type-2
indexes, use the REORG INDEXES command. To find out what type of index exists for a
table, use the INSPECT command.



REORG INDEXES/TABLE
Reorganizes all indexes defined on a table by rebuilding the index data into unfragmented,
physically contiguous pages. If you specify the CLEANUP ONLY option of the index option,
cleanup is performed without rebuilding the indexes. This command cannot be used against
indexes on declared temporary tables (SQLSTATE 42995).
The table option reorganizes a table by reconstructing the rows to eliminate fragmented data,
and by compacting information.
This command affects all database partitions in the database partition group.
Using the INSPECT command to determine the index type can be slow. The CONVERT option
of REORG INDEXES lets you ensure that an index becomes type-2 without your needing to
determine its original type.
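For example (the table name is illustrative):

```sql
-- Rebuild all indexes on the table, converting any type-1 indexes to type-2
REORG INDEXES ALL FOR TABLE employee ALLOW WRITE ACCESS CONVERT;
```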



Multi-dimensional Clustering

V8

Multidimensional clustering (MDC) provides a method for
flexible, continuous, and automatic clustering of data along
multiple dimensions.

This can result in:


Significant improvements in the performance of queries
Significant reduction in the overhead of data maintenance
operations
Multidimensional clustering enables a table to be physically clustered
on more than one key, or dimension, simultaneously.


Multidimensional clustering (MDC) provides a method for flexible, continuous, and automatic
clustering of data along multiple dimensions. This can result in significant improvements in the
performance of queries, as well as significant reduction in the overhead of data maintenance
operations such as reorganization, and index maintenance operations during insert, update and
delete operations. Multidimensional clustering is primarily intended for data warehousing and
large database environments, and it can also be used in online transaction processing (OLTP)
environments.
Multidimensional clustering enables a table to be physically clustered on more than one key, or
dimension, simultaneously.
Using a clustering index, DB2 attempts to maintain the physical order of data on pages in the
key order of the index, as records are inserted and updated in the table.
MDC benefits:
Clustering is extended to more than one dimension, or clustering key.
Range queries involving any, or any combination of, specified dimensions of the table
will benefit from clustering.
Not only will these queries access only those pages having records with the correct
dimension values, these qualifying pages will be grouped by extents.



An MDC table is able to maintain its clustering over all dimensions automatically and
continuously, thus eliminating the need to reorganize the table in order to restore the
physical order of the data.



Multi-dimensional Clustering (cont.)

You can specify one or more keys as dimensions along which to


cluster the data:
A dimension block index will be automatically created for each
of the dimensions specified
Used by the optimizer to quickly and efficiently access data
along each dimension
A composite block index will also automatically be created
Used to maintain the clustering of data over insert and update
activity
Can also be used by the optimizer to efficiently access data
having particular dimension values


When you create a table, you can specify one or more keys as dimensions along which to cluster
the data. Each of these dimensions can consist of one or more columns, as index keys do. A
dimension block index will be automatically created for each of the dimensions specified, and it
will be used by the optimizer to quickly and efficiently access data along each dimension. A
composite block index will also automatically be created, containing all dimension key columns,
and will be used to maintain the clustering of data over insert and update activity.
In an MDC table, every unique combination of dimension values forms a logical cell, which is
physically made up of blocks of pages, where a block is a set of consecutive pages on disk.
The set of blocks that contain pages with data having a certain key value of one of the dimension
block indexes is called a slice. Every page of the table is part of exactly one block, and all blocks
of the table consist of the same number of pages: the blocking factor. The blocking factor is
equal to the extent size, so that block boundaries line up with extent boundaries.
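The DDL for such a table can be sketched as follows (the table and column definitions are illustrative; the Version 8 clause that declares the clustering dimensions is ORGANIZE BY DIMENSIONS):

```sql
-- A sales table physically clustered along two dimensions
CREATE TABLE sales (
    customer_num   INTEGER,
    region         CHAR(15),
    yearandmonth   INTEGER,
    amount         DECIMAL(10,2)
)
ORGANIZE BY DIMENSIONS (yearandmonth, region);
```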



Multi-dimensional Clustering (cont.)

[Slide graphic: grid of table blocks partitioned along the YearAndMonth and Region dimensions]

Consider an MDC table that records sales data for a national retailer. The table is clustered along
the dimensions YearAndMonth and Region. Records in the table are stored in blocks, which
contain an extent's worth of consecutive pages on disk. In the figure above, a block is
represented by a rectangle, and is numbered according to the logical order of allocated extents
in the table. The grid in the diagram represents the logical partitioning of these blocks, and each
square represents a logical cell. A column or row in the grid represents a slice for a particular
dimension. For example, all records containing the value 'South-central' in the region column
are found in the blocks contained in the slice defined by the 'South-central' column in the grid.
In fact, each block in this slice also only contains records having 'South-central' in the region
field. Thus, a block is contained in this slice or column of the grid if and only if it contains
records having 'South-central' in the region field.
A dimension block index is created on the YearAndMonth dimension, and another on the
Region dimension. Each dimension block index is structured in the same manner as a traditional
RID index, except that at the leaf level the keys point to a block identifier (BID) instead of a
record identifier (RID). Since each block contains potentially many pages of records, these block
indexes are much smaller than RID indexes and need only be updated as new blocks are needed
and therefore added to a cell, or existing blocks are emptied and therefore removed from a cell.



A slice, or the set of blocks containing pages with all records having a particular key value in a
dimension, will be represented in the associated dimension block index by a BID list for that key
value.
When a record is inserted into the Sales table, DB2 will determine if a cell exists for its
dimension values. If one does, DB2 will insert the record into an existing block of that cell if it
can, or add another block to that cell if the current blocks are full. If the cell does not yet exist,
DB2 will create a new cell and add a block to it. This automatic maintenance is implemented
with an additional block index, also created when the MDC table is created. This block index
will be on all the dimension columns of the table, so that each key value corresponds to a
particular cell in the table, and its BID list corresponds to the list of blocks comprising that cell
(see figure below). This type of index is a composite block index.
A key is found in this composite block index only for each cell of the table containing records.
This block index assists in quickly and efficiently finding those blocks with records having a
particular set of values for their dimensions. The composite block index is used for query
processing, to access data in the table having particular dimension values. It is also used to
dynamically manage and maintain the physical clustering of data along the dimensions of the
table over the course of insert activity.



Indexes: SMS or DMS?

SMS table spaces:


Grow as needed up to the size allowed by the operating system
Cannot store table data separate from indexes

DMS table spaces:


Can store table data separately from indexes
Should be separate physical devices to minimize disk
contention


SMS table spaces are used by default during the database create operation. An SMS-type table
space requires that there is enough disk space in the file system where the SMS table space is
created. The table space grows as needed to accommodate the data stored in it, up to the size
allowed by the operating system. An SMS table space cannot contain long types of data.
DMS table spaces can be used to store the table data separately from the indexes on that table.
All indexes on a table are placed in the same table space. Placing indexes in their own table
space, away from the table data, helps to improve performance. Ideally, these table spaces are
separate physical devices, thereby spreading the I/O out among several disk drives. This
minimizes disk contention when accessing indexes and table data at the same time.



DB2 UDB and Oracle Indexing Differences

Differences between Oracle and DB2 UDB indexes:


Include columns: Oracle has no equivalent (but does have INDEX ONLY tables); DB2 UDB has Index Include
Free space: Oracle uses PCTFREE only (INITRANS and MAXTRANS reserve transaction slots in each block); DB2 UDB uses the index free space settings PCTFREE and MINPCTUSED
Clustering: Oracle clustering of indexes/data is a different concept; DB2 UDB maintains clustering
Reverse scans: Oracle provides bidirectional scanning; DB2 UDB is set by default to scan in the forward direction only
Placement: Oracle GLOBAL indexes can be placed in separate tablespace(s) from data; DB2 UDB indexes can be placed in a separate table space from data only during table create or alter


Index Include
Indexes in DB2 UDB allow nonkey values to be stored along with the index key values. This
included data is not part of the key itself, but it allows a key-only scan so that data pages do not
need to be read. Oracle does not have an equivalent INCLUDE mechanism.
The INCLUDE attribute requires extra space in the index nodes to accommodate the included
information.

Important!
The INCLUDE attribute can only be used on unique indexes.



Oracle Index-Only Tables
Oracle has an index-only table that is not available in IBM DB2.
An index-only table is created with the ORGANIZATION INDEX clause in the CREATE TABLE
statement:
CREATE TABLE "hr".emp
(
empno NUMBER(4) NOT NULL,
...
PRIMARY KEY (empno)
)
ORGANIZATION INDEX
TABLESPACE "employees";

All of the data is included in the index node pages. This type of table is generally suitable only
for tables with relatively small rows. A primary key is required and the index is created on the
primary key.

Index Free Space


There are two parameters that can be specified when creating an index that governs the amount
of free space in the index nodes.
The PCTFREE value specifies what percentage of each index page to leave as free space when
building the index. However, if a value greater than 10 is specified, only 10 percent free space is
left in nonleaf pages. The default is 10 percent.
The MINPCTUSED value enables online reorganization of index leaf pages; it sets the
threshold for the minimum percentage of space used on an index leaf page.
After a key is deleted from an index leaf page, the percentage of space used on the page is
checked. If it is at or below MINPCTUSED, an attempt is made to merge the remaining keys on
this page with those of a neighboring page. A MINPCTUSED value of 50 or below is
recommended for performance reasons.
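Both parameters are specified on the CREATE INDEX statement; a sketch (the index and column names are illustrative):

```sql
-- Leave 20% free space when building; merge leaf pages online below 40% used
CREATE INDEX idx_cust_zip
    ON customer (zipcode)
    PCTFREE 20
    MINPCTUSED 40;
```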

FYI DB2 supports PCTFREE for data pages as well as index pages; this feature is important for use
with clustered indexes so that inserts of new rows can be placed on the appropriate data page in
clustering order. Most DB2 datatypes are fixed length (thus differing from Oracle), and hence
PCTFREE for data pages is used mostly for new inserts.



Oracle allows you to set the number of transaction slots reserved in each block (INITRANS,
default is 2) and set the maximum number of transaction slots (MAXTRANS) that can be created
in a block (default 255). These parameters are necessary for the way that Oracle handles indexes
and thus have no IBM DB2 equivalent.

Clustering
The DB2 command REORG can be used to cluster or recluster data into indexed order. An index
that is defined as a clustering index on a table helps DB2 UDB keep the data in more of an
indexed order during inserts, updates, and deletes of that data. The form of clustering is
persistent in DB2 UDB (but differs considerably from the Oracle notion of clustering). The
degree of clustering can be determined by viewing the clusterratio or clusterfactor columns of
the syscat.indexes system catalog table.
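For example, the degree of clustering for the indexes on a table can be checked with a query like the following (note that identifiers are stored in the catalog in uppercase):

```sql
-- Check how well each index's order matches the physical data order
SELECT indname, clusterratio, clusterfactor
  FROM syscat.indexes
 WHERE tabname = 'CUSTOMER';
```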

SQL Syntax
The SQL syntax of the DB2 UDB index creation is similar to the Oracle index creation, with the
following differences:
By default, index creation in DB2 UDB is set to scan in the forward direction only. If
you want dual-direction capability, you need to specify that you want to allow reverse
scans while creating the index.
The index can only be placed in a different table space from the data if it is created in
the CREATE TABLE statement.
Various other options and parameters of the CREATE INDEX SQL statement have
differences that make for subtle differences.

Oracle BitMap Indexes


Oracle has a bitmap index that is not available in IBM DB2. This type of index is aimed at data
warehousing and is suitable for an index where there are very low number of values (low
cardinality) for the key (e.g., gender, state).
Although there is no specific syntax for creating a permanent bitmap index in DB2 UDB, the
optimizer in both EE and EEE may create dynamic bitmap indexes during the execution of
certain types of queries.



Other Index Characteristics
Oracle b-tree indexes are not inherently self-balancing. They can become fragmented after a
large number of INSERTs and DELETEs, which may lead to significant performance
degradation. Typically an Oracle DBA runs a program from time to time to detect out-of-balance
indexes and rebuild them, utilizing index statistics from the INDEX_STATS view to restore
those indexes to good shape.

Implied Indexes Created with Constraints


Implied indexes are created when one of two constraints is added to a table: (1) a primary key,
(2) a unique constraint.



Creating the Index First

Create the unique index before defining the primary key constraint:
The primary key uses an existing index
The index uses your naming convention
The index remains in place after dropping a primary key
constraint
Create the duplicate index before defining the foreign key constraint:
The foreign key uses the index
The index uses your naming convention
The index remains in place after dropping a foreign key
constraint


When creating a table with a column (or columns) that you use as a primary key, it is good
practice to create a unique index on that column(s) before declaring the column(s) as the primary
key. This means the table structure and index are created before altering the table to include the
primary key.
This practice:
Ensures uniqueness on the key column(s)
Provides an index that uses your naming convention
Retains the index if the primary key constraint were to be removed.



Primary Key Example
Below is an example of creating the primary key constraint on a column that already has an
index on it.
CREATE TABLE employee (
empno INTEGER NOT NULL,
f_name CHAR(20),
l_name CHAR(20),
address CHAR(30),
city CHAR(15),
state CHAR(2),
zip CHAR(10),
phone CHAR(14)
);

CREATE UNIQUE INDEX idx_emp_id
ON employee(empno)
ALLOW REVERSE SCANS;

ALTER TABLE employee
ADD CONSTRAINT emp_pk
PRIMARY KEY(empno);



Example Index Placement

An index can be placed in a table space that is different from the


table data:
Only when the index is created as part of the table or added to a
previously partitioned table
Cannot be placed in a separate table space during a CREATE
INDEX statement
Index Advisor is a utility that is used to determine which indexes
need to be created.


Suppose we have created other table spaces to use for our data and indexes, named:
dept1_data
dept1_idx
If that is the case, we could have created our employee table this way:
CREATE TABLE employee (
empno INTEGER NOT NULL,
f_name CHAR(20),
l_name CHAR(20),
address CHAR(30),
city CHAR(15),
state CHAR(2),
zip CHAR(10),
phone CHAR(14)
)
IN dept1_data
INDEX IN dept1_idx
NOT LOGGED INITIALLY;



Note The index can be placed in its own table space only when the table is created. It
cannot be placed in a separate table space during a CREATE INDEX statement.

Index Advisor
The DB2 UDB Index Advisor is a utility that can be used to determine which indexes need to be
created. The Advisor can indicate what indexes would be best for a query(s), and can also test an
index without actually creating it.



Create Index Syntax

The CREATE INDEX statement creates an index on a table and


informs the optimizer about it.
Here are some examples of CREATE INDEX statements:
CREATE UNIQUE INDEX idx_order_num
ON orders (order_num)
ALLOW REVERSE SCANS;
CREATE INDEX idx_state_code
ON state (scode) CLUSTER;
CREATE UNIQUE INDEX idx_cust_num
ON customer (customer_num)
INCLUDE (lname, fname);


The privileges required to create an index include at least one of the following:
SYSADM or DBADM authority.
One of:
CONTROL privilege on the table
INDEX privilege on the table
and one of:
IMPLICIT_SCHEMA authority on the database, if the implicit or explicit schema
name of the index does not exist
CREATEIN privilege on the schema, if the schema name of the index refers to an
existing schema.



About Explain

Optimizer information about an access plan is kept in explain tables


and can be accessed from one of these tables:
EXPLAIN_ARGUMENT EXPLAIN_INSTANCE
EXPLAIN_OBJECT EXPLAIN_OPERATOR
EXPLAIN_PREDICATE EXPLAIN_STATEMENT
EXPLAIN_STREAM ADVISE_INDEX
Use Visual Explain:
A graphical tool used to view explain snapshot information
about a query
This tool is usually launched from the Control Center GUI panel
on a Windows GUI


Detailed optimizer information is kept in explain tables separate from the actual access plan
itself. This information can be accessed from the explain tables by:
Writing queries against the explain tables
Using the db2exfmt tool
Using Visual Explain (to view explain snapshot information)

For More Information


Reference the SQL Explain Facility chapter in IBM DB2 Universal Database
Administration Guide: Performance

Visual Explain is a graphical tool that accesses the explain tables and provides information on
the optimizer access plans. Static and dynamic SQL statements can be analyzed with Visual
Explain.



This tool is usually launched from the Control Center GUI panel, but it can also be started from
the command line by using the db2vexp command.
Typically, Visual Explain is used on a Windows client to analyze SQL statements executed on
local or remote instances.

Related Classes
DB2 Universal Database Administration Workshop for UNIX (CF211): Unit 7.6
(Explain)



Using Visual Explain

Visual Explain is launched from the IBM DB2 UDB Control Center in
Windows by selecting: Start > Programs > IBM DB2 UDB
On Control Center:
Double-click your system
Double-click your instance
Double-click Databases
Right-click on your database
Select Explain SQL...
Enter your query
Check the box labeled Populate all columns in Explain tables
Click OK


In this course, you will learn to use Visual Explain from a Windows client, analyzing various
access plans in your storesdb database.
Visual Explain is launched from the IBM DB2 UDB Control Center, which is found on the IBM
DB2 UDB menu on the Windows Programs list by selecting: Start > Programs > IBM DB2
UDB menu.
Once Control Center is started, double-click on the system your instance is on, double-click on
the instance of choice, double-click on Databases, right-click on the database you want, and
select Explain SQL...
Enter your query in the box provided, check the Populate all columns in Explain tables box,
and click OK. This produces a graphical output of the access plan for your query.

Tip By double-clicking on the various blocks in the graphical output, you can get more
detailed information about the query. You can also review the access plans for
previously executed SQL statements.



Visual Explain Output Example

[Slide graphic: Visual Explain access plan for the query below]

The Visual Explain output graphic shown above was produced by the following SQL statement:
SELECT *
FROM customer, orders
WHERE customer.customer_num = orders.customer_num
AND customer.customer_num > 103
ORDER BY order_num;



Visual Explain Details Report

By double-clicking on one of the graphical blocks, you can get more


detailed information about that step in the access plan.
You can get an overview of the detailed information or the full report
of the information.
The graphic shown below is the detailed overview of the MSJOIN(4)
block shown on the previous page.

[Slide graphic: detail overview of the MSJOIN(4) operator]



Summary

You should now be able to:


Describe the benefits and costs of using indexes in database
implementations
List and describe the DB2 UDB index types
Explain SMS and DMS storage implementations of indexes
Compare & contrast Oracle and DB2 UDB index functionality
Create DB2 UDB indexes
Use DB2 UDB Visual Explain


9-30 Accessing Data Through Indexes


Exercises

Accessing Data Through Indexes 9-31


Exercise 1
You will learn to create indexes in this exercise.
1.1 Create indexes on the parent and child tables that were used in the Creating Tables and
Views module:
CREATE UNIQUE INDEX idx_p1
ON parent(p1)
ALLOW REVERSE SCANS;
CREATE UNIQUE INDEX idx_c1
ON child(c1)
ALLOW REVERSE SCANS;
CREATE INDEX idx_c3
ON child(c3)
ALLOW REVERSE SCANS;

1.2 Create sample indexes on various storesdb tables:


Duplicate index
Unique index
Clustered index
At least one table should have multiple indexes created on it. Suggested indexes are:
Unique index on the customer number for the customer table
Duplicate index on the zip code for the customer table
Clustered index on the order number and item number (composite) for the items
table

1.3 Create an index with extra data included, but not indexed.
CREATE UNIQUE INDEX o_order_num
ON orders (order_num)
INCLUDE (customer_num) ALLOW REVERSE SCANS;

1.4 Explore how and where these indexes are stored in your SMS- and/or DMS-based
environment. Explain what other possibilities are open to a DB2 UDB administrator for
placement and allocation of index storage space.

1.5 Describe where and when various optional clauses are best used. Illustrate with
examples from the storesdb database. Be prepared to propose one or several situations
not yet discussed in class and defend your ideas.

9-32 Accessing Data Through Indexes


Exercise 2
This Visual Explain exercise has an overall structure followed by individually numbered
exercises that apply the Visual Explain techniques to individual queries.
The first few numbered exercises are done synchronously by the whole class, with discussion
after each one; this is intended to build basic Visual Explain skills and interpretation. Once
you have mastered the skills of Visual Explain, the remaining exercises can be done at your
own pace, with the instructor providing a tally of new features found in each numbered
exercise.

Overall structure:
a. Learn to run Visual Explain and perform queries that explain how the query is going
to be performed.

To access Explain SQL statement (Visual Explain) GUI:


Run Control Center (on Windows menu: Start > Programs > IBM DB2 UDB)
Select storesdb (right mouse button)
Select Explain SQL (left mouse button)
Enter specific query in the large box
Check Populate all columns in Explain tables box
Click OK to obtain Visual Explain display
To review previously executed queries:
Run Control Center (on Windows menu: Start > Programs > IBM DB2 UDB)
Select storesdb (right mouse button)
Select Show Explained Statements History (left mouse button)
Select an individual query, perhaps choosing based on Explain date and Explain time.
Click to see graphical display.
b. Do progressive exercises to understand Visual Explain applied to:
Queries on a single table
Queries on a single table with GROUP BY, HAVING, and ORDER BY clauses
Queries on joined tables
c. Using Visual Explain, compare different access strategies without and with a
particular index structure.

Accessing Data Through Indexes 9-33


Numbered Exercises:
2.1 Query a table without the benefit of an index, and with no GROUP BY or ORDER BY
clause:
SELECT fname, lname
FROM customer;

2.2 Query with the benefit of an index:


SELECT fname, lname
FROM customer
WHERE customer_num = 117;

2.3 Query with a GROUP BY and an ORDER BY clause:


SELECT city, count(*)
FROM customer
GROUP BY city
ORDER BY 2 DESC;

2.4 Query with a join:


SELECT *
FROM storesdb.customer RIGHT JOIN storesdb.orders
ON storesdb.customer.customer_num =
storesdb.orders.customer_num
ORDER BY order_date;

9-34 Accessing Data Through Indexes


Solutions

Accessing Data Through Indexes 9-35


Solution 1
1.1 Create indexes on the parent and child tables that were used in the Creating Tables and
Views module:
CREATE UNIQUE INDEX idx_p1
ON parent(p1)
ALLOW REVERSE SCANS;
CREATE UNIQUE INDEX idx_c1
ON child(c1)
ALLOW REVERSE SCANS;
CREATE INDEX idx_c3
ON child(c3)
ALLOW REVERSE SCANS;

1.2 Create sample indexes on various storesdb tables:


Duplicate index
Unique index
Clustered index
At least one table should have multiple indexes created on it. Suggested indexes are:
Unique index on the customer number for the customer table
Duplicate index on the zip code for the customer table
Clustered index on the order number and item number (composite) for the items
table
CREATE UNIQUE index c_cnum
ON customer (customer_num) ALLOW REVERSE SCANS;
CREATE INDEX c_zip
ON customer (zipcode) ALLOW REVERSE SCANS;
CREATE UNIQUE INDEX i_ci
ON items (customer_num, item_num) CLUSTER;

1.3 Create an index with extra data included, but not indexed:
CREATE UNIQUE INDEX o_order_num
ON orders (order_num)
INCLUDE (customer_num) ALLOW REVERSE SCANS;

9-36 Accessing Data Through Indexes


1.4 Explore how and where these indexes are stored in your SMS- and/or DMS-based
environment. Explain what other possibilities are open to a DB2 UDB administrator for
placement and allocation of index storage space.
Generally in this course, we use SMS storage for all table spaces. With SMS,
indexes are stored in the same table space containers as the data; no choices are
available. One file is used to hold all indexes. The name of this file is:
/home/inst01/inst01/NODE0000/SQL00005/SQLT0002.0/SQL00011.INX
With DMS storage, the DB2 UDB administrator can choose a separate table space
for [all] indexes of a table (and other separate table spaces for large objects).
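With DMS table spaces, this placement is declared when the table is created. A minimal sketch, assuming DMS table spaces named data_ts and index_ts already exist (both names are hypothetical):

```sql
-- Data pages go to data_ts; all indexes on the table go to index_ts.
-- A LONG IN clause could likewise direct large objects to a third table space.
CREATE TABLE orders2 (
    order_num    INTEGER NOT NULL,
    customer_num INTEGER
)
IN data_ts
INDEX IN index_ts;
```

The INDEX IN choice is made once, at table creation; it applies to every index subsequently created on the table.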

1.5 Describe where and when various optional clauses are best utilized. Illustrate with
examples from the storesdb database. Be prepared to propose one or several situations
not yet discussed in class and defend your ideas.
Class discussion.

Accessing Data Through Indexes 9-37


Solution 2
2.1 Query a table without benefit of an index and with no GROUP BY or ORDER BY
clause:
SELECT fname, lname
FROM customer;

2.2 Query with benefit of an index:


SELECT fname, lname
FROM customer
WHERE customer_num = 117;

9-38 Accessing Data Through Indexes


2.3 Query with a GROUP BY and an ORDER BY clause:
SELECT city, count(*)
FROM customer
GROUP BY city
ORDER BY 2 DESC;

Accessing Data Through Indexes 9-39


2.4 Select with a join:
SELECT *
FROM customer RIGHT JOIN orders
ON customer.customer_num =
orders.customer_num
ORDER BY order_date;

Consult the IBM DB2 Universal Database Administration Guide: Performance for
additional details.

9-40 Accessing Data Through Indexes


Module 10

Using Constraints to Manage Business


Requirements

Using Constraints to Manage Business Requirements 02-2003 10-1


© 2002, 2003 International Business Machines Corporation
Objectives

At the end of this module, you will be able to:


Describe the types of constraints available in DB2 UDB
Identify the differences between DB2 UDB constraints and
corresponding Oracle constraints
Explain how constraints are implemented in DB2 UDB


10-2 Using Constraints to Manage Business Requirements


Types of Constraints

DB2 UDB constraints that can be added to a table are:


NOT NULL
UNIQUE
Referential
Primary key
Foreign key
CHECK

All of these constraints are available in Oracle.


Informational constraints (new in v8)
Rules that can be used in query rewrite


There are several different kinds of constraints that can be added to tables in a DB2 UDB
database:
NOT NULL constraint
Data constraint on a column that requires known data to be inserted in the row
Ensures data is present in the column/row
Implemented using a one-byte nullable flag for every row
Example:
CREATE TABLE state (
state_code CHAR(2) NOT NULL,
state_descr CHAR(20) NOT NULL, ...

Note A NOT NULL constraint can only be specified on a column when a table is created
and cannot be added with the ALTER TABLE statement.

Using Constraints to Manage Business Requirements 10-3


UNIQUE constraint
Data constraint on a column that does not allow duplicate data to exist in the
column, thus ensuring uniqueness of the data
Defined on a column or columns of a table
Requires the column or columns to be NOT NULL
Example:
CREATE UNIQUE INDEX idx_cust_num
ON customer(cust_num) ALLOW REVERSE SCANS;

Note A unique constraint can only be added to column(s) that were originally created
with the NOT NULL constraint. Therefore, it may not be possible to add a unique
constraint with the ALTER TABLE statement.
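Note that the example above actually creates a unique index rather than a unique constraint. Assuming the column was declared NOT NULL, a unique constraint proper can be added with ALTER TABLE; a sketch with a hypothetical constraint name:

```sql
-- Rejected by DB2 if cust_num is nullable, per the note above.
ALTER TABLE customer
    ADD CONSTRAINT uq_cust_num UNIQUE (cust_num);
```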

Referential constraints
Referential constraints are used to impose referential integrity of the data between tables
in a parent-child relationship. These constraints are defined on a column or set of
columns as keys.
Primary key constraint
Referential constraint used on a parent table to enforce a parent-child
relationship
One primary key per table
May require multiple columns (composite) to ensure uniqueness
Requires the column(s) to be NOT NULL, imposes uniqueness
Causes an index to be created if one is not already present
Example:
ALTER TABLE customer
ADD CONSTRAINT cust_num_pk
PRIMARY KEY (cust_num);

Note A primary key constraint can only be added to column(s) that were originally
created with the NOT NULL constraint. Therefore, it may not be possible to add a
primary key constraint with the ALTER TABLE statement.

10-4 Using Constraints to Manage Business Requirements


Foreign key constraints
Referential constraint used on a child table to enforce a parent-child
relationship
Column or set of columns referring to a primary key or unique key
Can allow duplicates
Can have more than one foreign key on a table
Causes an index to be created if not already present
Will not allow child rows to exist without parent rows (default)
Example:
ALTER TABLE orders
ADD CONSTRAINT cust_num_fk
FOREIGN KEY(cust_num)
REFERENCES customer;
CHECK constraint
Data constraint on a column that allows only the proper values to be entered
Used to ensure data integrity in the table
Data being inserted or changed must meet the check constraint rule set on the table
Example:
ALTER TABLE customer
ADD CONSTRAINT state_check
CHECK (state IN ('FL','AZ','CA'));

Using Constraints to Manage Business Requirements 10-5


Constraint Terminology

DB2 UDB constraint terminology:


Dependent table: A table that is a dependent in at least one
referential constraint
Descendent table: A table that is a dependent of another table
or a descendent of a dependent table
Parent row: A row that has at least one dependent row
Dependent row: A row that contains a foreign key that
matches the value of a parent key in the parent row. The foreign
key value represents a reference from the dependent row to the
parent row


10-6 Using Constraints to Manage Business Requirements


Referential Constraint Delete Rules

For deletes with parent-child relationships, the following delete rules
can apply in DB2 UDB:
NO ACTION or RESTRICT: Same as the Oracle default
CASCADE: Same as Oracle cascade delete
SET NULL: No equivalent in Oracle

Example:
CREATE TABLE employee (
empnum INT NOT NULL,
lname CHAR(15),
mgrnum INT NOT NULL,
FOREIGN KEY (mgrnum)
REFERENCES person ON DELETE CASCADE);


You can specify the delete rule of a referential constraint when the referential constraint is
defined. You can specify NO ACTION, RESTRICT, CASCADE, or SET NULL (on NULL value
columns).
The delete rule is applied when a row of the parent table is deleted and that row has dependents
in the dependent table of the referential constraint. If the delete rule is:
RESTRICT or NO ACTION: an error occurs and no rows are deleted from the parent
CASCADE: the delete operation is propagated to the dependents of the parent table and
to the parent table itself
SET NULL: each nullable column of the foreign key of each dependent of the parent
table is set to null
In Oracle, the delete rule of RESTRICT is the default. Oracle allows you to include the CASCADE
DELETE rule.

FYI DB2 also has an optional ON UPDATE clause that follows the ON DELETE clause and has two
alternate rules, NO ACTION and RESTRICT, but no CASCADE rule. This is similar to Oracle 9i.
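Combining the two clauses, a referential constraint with explicit delete and update rules might be written as follows (a sketch based on the employee example above):

```sql
CREATE TABLE employee (
    empnum INT NOT NULL,
    lname  CHAR(15),
    mgrnum INT NOT NULL,
    FOREIGN KEY (mgrnum)
        REFERENCES person
        ON DELETE CASCADE
        ON UPDATE NO ACTION);  -- RESTRICT is the only other update rule
```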

Using Constraints to Manage Business Requirements 10-7


Check Constraints

Check constraints are used to check the validity of the data being
inserted or updated in a column of a table:
A check constraint defined on a table automatically applies to all
subtables of that table
A constraint name must be unique within the same table (but
not the database)
Check constraints are not checked for inconsistencies, duplicate
conditions, or equivalent conditions:
Can result in possible errors at execution time


A check constraint is used to check the validity of the data being inserted or updated in a column
of a table by evaluating the test condition of the column (must evaluate to not false, can be true,
or unknown). A check constraint defined on a table automatically applies to all subtables of that
table. The constraint is defined in the form of:
CONSTRAINT constraint-name <evaluation>

A constraint-name must be unique within the same table. However, the same constraint name
can be used on more than one table in the database. If the constraint name is omitted, an
18-character identifier that is unique among those defined on the table is generated by the system.
When used with a PRIMARY KEY or UNIQUE constraint, the constraint-name may be used as the
name of an index that is created to support the constraint.

Note Defining triggers would be another way of enforcing business rules (not covered in
this module).

Check constraints are not checked for inconsistencies, duplicate conditions, or equivalent
conditions, so contradictory or redundant check constraints can be defined that result in possible
errors at execution time.
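For example, both of the following statements succeed against an empty items table, even though no row can ever satisfy both; the contradiction only surfaces when an insert is attempted (the constraint names are hypothetical):

```sql
ALTER TABLE items ADD CONSTRAINT qty_min CHECK (quantity > 10);
ALTER TABLE items ADD CONSTRAINT qty_max CHECK (quantity < 5);
-- Every subsequent INSERT into items now violates one of the two constraints.
```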

10-8 Using Constraints to Manage Business Requirements


Constraint Syntax Similarities & Differences

Constraint definitions use the same syntax for both Oracle and DB2
UDB.
Example:
ALTER TABLE orders
ADD CONSTRAINT ck_items_qty
CHECK (quantity >= 1 AND quantity <= 10);


Constraint definitions use the same syntax for both Oracle and DB2 UDB. Examples include:
ALTER TABLE items
ADD CONSTRAINT ck_items_qty
CHECK (quantity >=1 AND quantity <=10);
ALTER TABLE orders
ADD CONSTRAINT orders_fk1
FOREIGN KEY(customer_num)
REFERENCES customer
ON DELETE CASCADE;
ALTER TABLE customer
ADD CONSTRAINT pk_num
PRIMARY KEY(customer_num);

DB2 has an almost complete implementation of integrity constraints. A table can have named or
unnamed primary key, unique, referential, and user-defined check constraints. A referential
constraint can have no action, restrict, cascade, or set null options for deletes, but it can only
have the restrict options for updates.
Oracle implements referential constraints with restrict (default, and not expressed) and cascade
(CASCADE DELETE). Thus, for the most part, all Oracle syntax and functionality is available in
DB2 UDB without change to code; some UDB features are not available in Oracle.

Using Constraints to Manage Business Requirements 10-9


Informational Constraints

Informational constraints (new in V8) are rules that can be used in query
rewrite to improve performance but are not enforced by the
database manager.

Referential or check constraints can be altered:


ENFORCEDThe constraint is enforced by the database
manager during normal operations
NOT ENFORCEDThe constraint is not enforced by the
database manager during normal operations


Version 8 introduces a new type of constraint called informational constraints.


Informational constraints are rules that can be used in query rewrite to improve
performance but are not enforced by the database manager.

Often, constraints are enforced by the logic in business applications and it is not desirable to use
system enforced constraints since re-verification of the constraints on insert, update and delete
operations can be costly. In this case, informational constraints are a better alternative.

Constraint Alteration
Options for changing attributes associated with referential or check constraints can be altered to:
ENFORCED: Change the constraint to ENFORCED. The constraint is enforced by the
database manager during normal operations, such as insert, update, or delete.
NOT ENFORCED: Change the constraint to NOT ENFORCED. The constraint is not
enforced by the database manager during normal operations, such as insert, update, or
delete. This should only be specified if the table data is independently known to
conform to the constraint.
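As a sketch of the V8 alteration syntax (the constraint name fk_cust_num is assumed to exist; ENABLE QUERY OPTIMIZATION additionally tells the optimizer it may use the unenforced rule in query rewrite):

```sql
-- Stop enforcing the constraint during inserts, updates, and deletes,
-- but keep it available to the optimizer.
ALTER TABLE orders
    ALTER FOREIGN KEY fk_cust_num
    NOT ENFORCED
    ENABLE QUERY OPTIMIZATION;
```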

10-10 Using Constraints to Manage Business Requirements


Summary

You should now be able to:


Describe the types of constraints available in DB2 UDB
Identify the differences between DB2 UDB constraints and
corresponding Oracle constraints
Explain how constraints are implemented in DB2 UDB


Using Constraints to Manage Business Requirements 10-11


10-12 Using Constraints to Manage Business Requirements
Exercises

Using Constraints to Manage Business Requirements 10-13


Exercise 1
In this exercise, you will create several constraints on previously-created tables in your storesdb
database.
1.1 Alter the parent and child tables that were used in previous modules to add the primary
and foreign keys:
The parent table should have the parent_pk primary key constraint on column p1.
The child table should have the child_pk primary key constraint on column c1.
The child table should have the child_fk foreign key constraint on column c3,
referencing the parent table with the ON DELETE NO ACTION constraint.

1.2 What is the function of the ON DELETE NO ACTION clause in 1.1? What other possibilities are
available and how do they work?

1.3 What types of constraints can be added to an existing table without rebuilding the table?
What types of constraints can be applied only at the time of table creation?

1.4 Attempt to add the following constraint. Explain why it works or why it does not work:
Alter the customer table to add the check constraint valid_state so that only customers
in CA can be inserted.

10-14 Using Constraints to Manage Business Requirements


Exercise 2
In this exercise you will view the results of using constraints.
2.1 Add a primary key constraint to the customer table customer_num column and a
foreign key constraint to the orders table customer_num column that references the
customer table.

2.2 Validate the performance of primary-key/foreign-key relationship by attempting the


following:
DELETE FROM customer
WHERE customer_num = 103;
Explain your results.

2.3 Now try this:


DELETE FROM customer
WHERE customer_num = 117;
Explain your results.

Using Constraints to Manage Business Requirements 10-15


10-16 Using Constraints to Manage Business Requirements
Solutions

Using Constraints to Manage Business Requirements 10-17


Solution 1
In this exercise, you will create several constraints on previously created tables in your storesdb
database.
1.1 Alter the parent and child tables that were used in previous modules to add the primary
and foreign keys:
The parent table should have the parent_pk primary key constraint on column p1.
The child table should have the child_pk primary key constraint on column c1.
The child table should have the child_fk foreign key constraint on column c3,
referencing the parent table with the ON DELETE NO ACTION constraint.
ALTER TABLE parent ADD
CONSTRAINT parent_pk PRIMARY KEY (p1);
ALTER TABLE child ADD
CONSTRAINT child_pk PRIMARY KEY (c1);
ALTER TABLE child ADD
CONSTRAINT child_fk FOREIGN KEY (c3)
REFERENCES parent ON DELETE NO ACTION;
The exercise should work as shown. Potential problems can be:
Column p1 must be NOT NULL
Values stored in column p1 must be unique, since a unique index is required.

1.2 What is the function of the ON DELETE NO ACTION clause in 1.1? What other
possibilities are available and how do they work?
The ON DELETE NO ACTION clause enforces the primary key constraint. A row
in the parent table cannot be deleted if there are corresponding rows in the child
table. This clause can be replaced by ON DELETE RESTRICT (with the same
meaning). The other possibilities are:
ON DELETE CASCADE
ON DELETE SET NULL

10-18 Using Constraints to Manage Business Requirements


1.3 What types of constraints can be added to an existing table without rebuilding the table?
What types of constraints can be applied only at the time of table creation?
A NOT NULL constraint must be specified on a column when a table is initially
created and cannot be added later. In order to add a NOT NULL constraint the
table needs to be unloaded, dropped, recreated and loaded.
Primary key, foreign key, unique and check constraints can all be added to a table
with the ALTER TABLE command. However, primary key and unique constraints
can only be added to column(s) that are specified as NOT NULL and therefore it
may not be possible to add primary key and unique constraints to tables with the
ALTER TABLE statement.

1.4 Attempt to add the following constraint. Explain why it works or why it does not work:
Alter the customer table to add the check constraint valid_state so that only customers
in CA can be inserted.
ALTER TABLE customer
ADD CONSTRAINT valid_state
CHECK (state = 'CA');

SQL error: SQL0544N. The check constraint "VALID_STATE" cannot be added


because the table contains a row that violates the constraint. SQLSTATE=23512

Using Constraints to Manage Business Requirements 10-19


Solution 2
In this exercise you will view the results of using constraints.
2.1 Add a primary key constraint to the customer table customer_num column, and a
foreign key constraint to the orders table customer_num column that references the
customer table.
ALTER TABLE customer
ADD CONSTRAINT pk_cust_num PRIMARY KEY (customer_num);
ALTER TABLE orders
ADD CONSTRAINT fk_cust_num
FOREIGN KEY (customer_num) REFERENCES customer;

2.2 Validate the performance of primary-key/foreign-key relationship by attempting:


DELETE FROM customer
WHERE customer_num = 103;
Explain your results.
The SQL statement fails. Customer 103 has an order, and the delete rule of the
foreign key is ON DELETE NO ACTION or ON DELETE RESTRICT (essentially the same).

2.3 Now try this:


DELETE FROM customer
WHERE customer_num = 117;
Explain your results.
SQL succeeds. Customer 117 does not have any orders and thus the constraint does
not restrict the deletion.

10-20 Using Constraints to Manage Business Requirements


Module 11

Using DB2 Tools and Utilities

Using DB2 Tools and Utilities 02-2003 11-1


© 2002, 2003 International Business Machines Corporation
Objectives

At the end of this module, you will be able to:


List the new DB2 Tools available to DB2 UDB administrators
Find further information about SQL errors using standard
utilities
Obtain the DDL needed to understand and recreate table
structures
Understand how to obtain and evaluate the IBM DB2 Table
Editor


11-2 Using DB2 Tools and Utilities


Overview of DB2 UDB Tools

Administration tools help you to administer DB2


Performance management tools help you to optimize the
performance of your DB2 databases
Recovery and replication tools enable you to recover and
replicate your DB2 databases
Application management tools help you to manage your DB2
applications


The IBM Data Management Tools are specifically designed to enhance the performance of IBM
DB2 UDB databases, and the tools support the newest versions of the databases as they become
available, making it easy to migrate from version to version and still benefit from tools support.
The Data Management Tools can be launched from centralized control points and can share data
and functions across tools. This adds up to increased ease of use, less training time, and higher
productivity for DBAs.
These tools have been developed under IBM's Autonomic Computing initiative to help reduce
complexity and improve quality of service through the advancement of self-managing
capabilities in computing environments, and are part of the company's continuing effort to
expand SMART (Self Managing and Resource Tuning) technology into the DB2 database arena.
The full list of DB2 tools available as of Spring 2003 (from
www-3.ibm.com/software/data/db2imstools) is:
DB2 Application Recovery Tool, v1.2
DB2 Administration Tool, v4.1
DB2 Archive Log Compression Tool, v1.1
DB2 Automation Tool, v1.3
DB2 Bind Manager, v2.1

Using DB2 Tools and Utilities 11-3


DB2 Buffer Pool Analyzer, v1.2
DB2 Change Accumulation Tool, v1.2
DB2 Data Export Facility, v1.1
DB2 DataPropagator, v8.1
DB2 High Performance Unload, v2.1 *
DB2 Log Analysis Tool, v1.3
DB2 Object Comparison Tool, v2.1
DB2 Object Restore, v1.3
DB2 Path Checker, v1.2
DB2 Performance Expert, v1.1 *
DB2 Performance Monitor, v7.2
DB2 Query Monitor, v1.1
DB2 Recovery Expert, v1.1 *
DB2 Row Archive Manager, v1.1
DB2 SQL Performance Analyzer, v2.1
DB2 Table Editor, v4.3 *
DB2 Utilities Migration Toolkit
DB2 Utilities Suite, v8.1
DB2 Web Query Tool, v1.3 *
* Currently available for multiplatforms, including Microsoft Windows, HP-UX, Sun's Solaris
Operating Environment, IBM AIX, and Linux.

Application Development Tools


IBM Automated Tape Allocation Manager
IBM Debug Tool
IBM Fault Analyzer
IBM File Manager
IBM Storage Administration Workbench

11-4 Using DB2 Tools and Utilities


DB2 & SQL Errors

The db2 command allows you to obtain an explanation of errors:


db2 "SELECT * FROM glen.customer"
SQL0204N GLEN.CUSTOMER is an undefined name.
SQLSTATE=42704

To obtain an explanation of this error:


db2 ? SQL0204N

Note:
The SQL error code (from sqlca.sqlcode) is -204
The format, including N (negative), is used for sqlcode errors
SQLSTATE (the alternate ANSI-standard error code) is 42704
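The same help facility also accepts an SQLSTATE, so either identifier reported with the error can be looked up; a sketch (quotes keep the shell from treating ? as a wildcard):

```shell
db2 "? SQL0204N"   # look up by message identifier
db2 "? 42704"      # look up by SQLSTATE
```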


Sample output (using SQL0204N as example error)


SQL0204N "<name>" is an undefined name.

Explanation: This error is caused by one of the following:

o The object identified by "<name>" is not defined in the database.

o A data type is being used. This error can occur for the following reasons:

- If "<name>" is qualified, then a data type with this name does not
exist in the database.

- If "<name>" is unqualified, then the user's function path does not


contain the schema to which the desired data type belongs.

- The data type does not exist in the database with a create timestamp
earlier than the time the package was bound (applies to static
statements).

- If the data type is in the UNDER clause of a CREATE TYPE statement, the
type name may be the same as the type being defined, which is not valid.

Using DB2 Tools and Utilities 11-5


o A function is being referenced in one of:
- a DROP FUNCTION statement
- a COMMENT ON FUNCTION statement
- the SOURCE clause of a CREATE FUNCTION statement

If "<name>" is qualified, then the function does not


exist. If "<name>" is unqualified, then a function of
this name does not exist in any schema of the current function
path. Note that a function cannot be sourced on the COALESCE,
NULLIF, or VALUE built-in functions.

This return code can be generated for any type of database object.

Federated system users: the object identified by "<name>" is not


defined in the database or "<name>" is not a nickname in a DROP
NICKNAME statement.

Some data sources do not provide the appropriate values for


"<name>". In these cases, the message token will have the
following format: "OBJECT:<data source> TABLE/VIEW", indicating
that the actual value for the specified data source is unknown.

The statement cannot be processed.

User Response: Ensure that the object name (including any


required qualifiers) is correctly specified in the SQL statement
and it exists. For missing data type or function in SOURCE
clause, it may be that the object does not exist, OR it may be
that the object does exist in some schema, but the schema is not
present in your function path.

Federated system users: if the statement is DROP NICKNAME, make


sure the object is actually a nickname. The object might not
exist in the federated database or at the data source. Verify
the existence of the federated database objects (if any) and the
data source objects (if any).

sqlcode: -204

sqlstate: 42704

11-6 Using DB2 Tools and Utilities


db2look & Obtaining DDL Schemas

To obtain the schema for a database:


db2look -d dbname -e

Other formats are possible (e.g., options -e, -p, ...)

If you would prefer to see the results in lowercase (for improved


readability):
db2look -d dbname -e | tr '[A-Z]' '[a-z]'


The following assumptions are made:


The database has been started (db2start)
You have connected to the database (db2 connect to ...)
The variable USER has been appropriately set
Various formats and options are available see syntax below, or obtain syntax by running
db2look without any parameters at all.
The user and password can be supplied as parameters to db2look.

db2look -d dbname -p
% No userid was specified, db2look tries to use Environment variable USER
% USER is: GLEN
% Creating DDL for table(s)
-- This CLP file was created using DB2LOOK Version 7.1
-- Timestamp: 06-Sep-2002 02:27:50 PM
-- Database Name: MYDATABASE
-- Database Manager Version: DB2/NT Version 7.1.0
-- Database Codepage: 1252

CONNECT TO MYDATABASE;

Using DB2 Tools and Utilities 11-7


------------------------------------------------
-- DDL Statements for table "GLEN"."TEST"
------------------------------------------------

CREATE TABLE "GLEN"."TEST" (


"I" INTEGER )
IN "USERSPACE1"

COMMIT WORK;

CONNECT RESET;

TERMINATE;

db2look -d dbname -e
% No userid was specified, db2look tries to use Environment variable USER
% USER is: GLEN
% Use plain text format
%
% Using database MYDATABASE
% Using userid GLEN
% Database Manager Version DB2/NT Version 7.1.0
% Database Codepage 1252
%
%********************************************
TABLE TEST
%********************************************
%
CREATOR GLEN
CARD -1
NPAGES -1
%
COLUMNS
%
NAME I
COLNO 0
TYPE INTEGER
LENGTH 4
NULLS Y
COLCARD -1
NUMNULLS -1
NFRQ -1
NQUN -1
LOW2KEY
HIGH2KEY
AVGCOLLEN -1
%
COLUMN DISTRIBUTION
%

%
INDICES

11-8 Using DB2 Tools and Utilities


Syntax for db2look
db2look Version 7.1

Syntax: db2look -d DBname [-u Creator] [-s] [-g] [-a] [-t Tname1 Tname2...TnameN]
[-p] [-o Fname] [-i userID] [-w password]
db2look -d DBname [-u Creator] [-a] [-e] [-t Tname1 Tname2...TnameN]
[-m] [-c] [-r] [-x] [-l] [-f] [-o Fname] [-i userID]
[-w password]
db2look [-h]

-d: Database Name: This must be specified

-a: Generate statistics for all creators


-c: Do not generate COMMIT statements for mimic
-e: Extract DDL file needed to duplicate database
-g: Use graph to show page fetch pairs for indices
-h: More detailed help message
-m: Run the db2look utility in mimic mode
-o: Redirects the output to the given file name
-p: Use plain text format
-r: Do not generate RUNSTATS statements for mimic
-s: Generate a postscript file
-t: Generate statistics for the specified tables
-x: Generate Authorization statements DDL
-l: Generate Database Layout: Nodegroups, Bufferpools and Tablespaces
-f: Extract configuration parameters and environment variables
-u: Creator ID: If -u and -a are both not specified then $USER will be used
-i: User ID to log on to the server where the database resides
-w: Password to log on to the server where the database resides
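Putting several of these options together, one common invocation for capturing everything needed to rebuild a database elsewhere is sketched below (the output file name is arbitrary):

```shell
# DDL (-e), authorization statements (-x), database layout (-l),
# and configuration parameters (-f), redirected to one script file (-o).
db2look -d storesdb -e -x -l -f -o storesdb_ddl.sql
```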

About DB2 Table Editor


The DB2 Table Editor is a complete environment for building, deploying, and centrally
administering table editing front-ends and applications that integrate directly with DB2
databases. It is available for a 60-day evaluation from the IBM website.

DB2 Table Editor Features


Rapidly build Java, Windows-based or ISPF table editing front-ends.
Build e-business applications that connect directly to DB2 databases over the Internet.
Build applications that operate with any DB2 data warehouse or DB2 operational data.
Build applications using advanced database techniques and commands without
programming or expert SQL knowledge.
Design controls, data validation rules, and application behavior within a drag and drop
environment.



Run completed applications using the Java user applet, or with Windows 32-bit user
client.
Satisfy universal requirements such as transaction management, table editing, and data
entry.
No user setup is required to run the Java user applet, and the Windows user client can be
set up in minutes without database gateways, middleware, or ODBC drivers.
Restrict user/application permissions with centralized governing.
Use IBM's DB2 DataJoiner to include multi-vendor data sources, such as IMS, VSAM,
Oracle, Informix, Sybase, Microsoft SQL Server, and more.

What's New in Version 4.3


Edit with formulas: Users can apply a formula to change a column value in one or more
rows.
Informix support: Read/write support for Informix Dynamic Server 9.x.
Enhanced support for large objects (LOBs): A new button, Launch LOB, supports the
association of a file extension/program for a LOB column. In addition, a new enhanced
internal LOB control can display additional LOB data types.
Previous row support: Java player now supports the Previous Row button.
Support for new types of forms: Allows you to create a form without having to
associate it with a primary table. This new feature allows you to build new types of
forms. For example, you can create a "menu form" that simply controls the launching of
other forms.
Usability enhancements:
Input text size limited by the column datatype and definition when editing data
Thousands separator now displayed in numeric columns
Ability to migrate forms from test to production environment (list and change
tables used)
Position on last row of result set after reaching the end for form layout forms
Ability to lock columns for list control
Ability to work with tables with ROWID columns
Object list window that can stay open (Windows only)
Support for refer back in validation rules



What's New in Version 4.2a
ISPF interface for the full-screen table editor.
Support for viewing and editing LOB columns.
Stored procedure support for populating lists.
New button actions
Run an SQL action to allow the running of user defined SQL statements or stored
procedures.
Open Form action to allow the creation of linked forms.
New control features for forms
Lock columns in position in list control on a form. A locked column will not scroll
off the screen when you scroll through a list.
Specify default results for controls that derive their content from SQL queries
using the Default Result page of the Control Attribute notebook.
A robust full-screen table editor
Find and replace on selected rows/columns or the entire grid.
Columns in the grid can be sorted, the order changed, locked, set to force
uppercase.
Rows can be inserted, deleted or duplicated.
Cells can be edited, copied, cut, pasted, launched or zoomed.
Visual indicator for primary keys.
Save at end option that enables editing without committing changes until the end
of the edit session.
Referential integrity support
Update primary keys
Edit related tables feature
An enhanced Table Editing Wizard.
Selection of columns to include
Specification of row and sort conditions
Specification of row limit
Specification of locking mode for table being edited
Read only option
Table editing specifications can be saved and reused



DB2 Control Center plug-in extension
Edit tables directly from the control center
Ability to launch DB2 Table Editor from the control center toolbar

DB2 Table Editor Components


DB2 Table Editor Console is used by the administrator to set up DB2 servers to serve
DB2 Table Editor applications to users.
DB2 Table Editor Developer is a rapid development environment for creating custom
forms for distribution to end users.
DB2 Table Editor User provides users with access to finished DB2 Table Editor
applications, which are stored centrally at the database server.
DB2 Table Editor User for Java provides users with access to finished DB2 Table
Editor applications using a Java enabled browser or a Java runtime environment.
DB2 Table Editor ISPF provides an interface for table editing in the ISPF environment.

Supported Operating Systems


DB2 Table Editor applications can be developed and run on any of these Windows platforms:
Microsoft Windows 95; Microsoft Windows 98; Microsoft Windows NT 3.51; and Microsoft
Windows NT 4.0. They can also be run on any platform that supports a Java enabled browser.

Updates to Shared Dynamic Link Libraries


DB2 Table Editor installs the Visual C Runtime library (MSVCRT.DLL v4.21.7303) and
Microsoft Foundation Classes library (MFC42.DLL v5.00.7303) to the Windows System
directory. If you are running on Windows NT 3.51, DB2 Table Editor will additionally install the
Microsoft 3D Windows Control Library (CTL3D32.DLL v2.31.000).

Installation Types
The setup program provides three setup options:
Custom: you may choose any one or more of the three components to install.
Typical: copies all files required during normal use of DB2 Table Editor to the
specified installation directory and system directories, as required.
Compact: installs the minimum set of files required to run the user application.



Configuration for unattended installation
An unattended installation allows you to select the installation options for your DB2 Table
Editor users before beginning the installation process. Using this method you can designate all
the options of an installation rather than having to select the same options repeatedly for each
installation.
The following steps enable unattended installation.

Step 1 Edit setup.ini


Using a text editor such as NotePad, edit setup.ini. This file, on Disk 1 of the installation
diskettes, controls the installation process and determines the settings used for the installation.
The settings you can control are described below.
[Options] Setting Effect
AutoInstall = 0,1 Specify 1 to perform an unattended installation. All other
settings in setup.ini are ignored if this setting is not 1.
FileServerInstall= 0,1 Specifies whether DB2 Table Editor is already installed on a
file server. If the setting is 0, all DB2 Table Editor files are
installed to the directory specified in the InstallPath variable. If
the setting is 1, DB2 Table Editor must be previously installed
into the directory specified in the InstallPath variable. To
enable a file server installation, you must also set
SetupType=2.
SetupType= 0,1,2 Specifies the type of installation to perform. 0 indicates a
typical installation, 1 indicates a compact installation, 2
indicates a custom installation. If you select 2, you must
indicate which components to install. See the [Components]
table.
Note: Option 2, the custom installation, is strongly
recommended. Most users do not require the Console or
Developer.
InstallPath= <directory> Specifies the directory to receive the DB2 Table Editor
installation (if FileServerInstall=0) or the file server directory
that already contains the installation (if FileServerInstall=1).
ProgramGroup= <program group name> Specifies the name of the program group to create
(under Program Manager) or the program folder to create (for
the Windows 95, Windows 98 and Windows NT 4.0 Start Menu).



[Components] Setting Effect
Admin 0,1 Specifies whether to install DB2 Table Editor Console. If
FileServerInstall=1, files are not copied to the local machine,
but program group icons are still created. Always set this
option to 0 for user installations.
Developer 0,1 Specifies whether to install DB2 Table Editor Developer. If
FileServerInstall=1, files are not copied to the local machine,
but program group icons are still created. Always set this
option to 0 for user installations.

Setup.ini example
[Options]
AutoInstall=1
FileServerInstall=0
SetupType=0
InstallPath=C:\Programs\DB2 Table Editor
ProgramGroup=DB2 Table Editor

This setup.ini file specifies an unattended installation. A typical installation is performed,
copying files to the C:\Programs\DB2 Table Editor directory, and creating a program group or
program folder named DB2 Table Editor.

Step 2 Save setup.ini


After you edit and save setup.ini, copy it to Disk 1 of the DB2 Table Editor installation diskettes.

Step 3 Run DB2 Table Editor Setup


Run the installation from the source diskettes or server. The installation proceeds automatically.

Installing the DB2 Table Editor Control Center Plug-in


The DB2 Table Editor Control Center Plug-in is an extension to the DB2 Control Center. For
more information about the Control Center, refer to the IBM DB2 Administration Guide.
The DB2 Table Editor Control Center plug-in adds DB2 Table Editor menu items to the table
popup menu and adds a toolbar button to start DB2 Table Editor as an add-in tool.



To install the DB2 Table Editor Control Center Plug-in application:
1. Copy db2forms.jar into the DB2 SQLLIB\cc directory.
2. Copy the version of db2plug.zip that corresponds to the version of DB2 that you are
using into the DB2 SQLLIB\cc directory. There are two versions of db2plug.zip:
db2plug.v6zip and db2plug.v7zip. If you are using DB2 Version 6, copy
db2plug.v6zip; if you are using DB2 V7, copy db2plug.v7zip.
3. Rename the release-specific version that you copied to "db2plug.zip".
4. Locate the db2cc file in the SQLLIB\bin directory. If you are using Windows, the file is named
db2cc.bat. If you are using Unix or Linux, the file is named db2cc.
5. Update db2cc to include both db2plug.zip and db2forms.jar. The file names must
follow a -c option. With newer releases of DB2, you must add the -c option. If you are
using an older version of DB2 which has the -c option specified in db2cc, you can
append the values to the end of the existing -c option.
Here is an example of db2cc after you have added the files if you are working with DB2 V6:
IF "%1" == "wait" GOTO WAIT

db2javit -j:"CC" -d:"CC" -c:"db2plug.zip;db2forms.jar" -o:"-mx128m -ms32m" -a:"%1 %2 %3 %4 %5 %6 %7 %8"
GOTO END

:WAIT
db2javit -j:"CC" -d:"CC" -c:"db2plug.zip;db2forms.jar" -w: -o:"-mx128m -ms32m"
-a:"%2 %3 %4 %5 %6 %7 %8 %9"
GOTO END
:END

Here is an example of db2cc after you have added the files if you are working with DB2 V7:
IF "%1" == "wait" GOTO WAIT

db2javit -j:"CC" -d:"CC" -c:"db2forms.jar" -o:"-mx128m -ms32m" -a:"%1 %2 %3 %4 %5 %6 %7 %8"
GOTO END

:WAIT
db2javit -j:"CC" -d:"CC" -c:"db2forms.jar" -w: -o:"-mx128m -ms32m" -a:"%2 %3
%4 %5 %6 %7 %8 %9"
GOTO END
:END

If you are running the Control Center as a Java applet, complete the following steps:
1. Copy the db2forms.jar file to the location where the <codebase> tag in db2cc.htm points.
2. Update db2cc.htm to include db2plug.zip and db2forms.jar in the archive list.



Summary

You should now be able to:


List the new DB2 Tools available to DB2 UDB administrators
Find further information about SQL errors using standard
utilities
Obtain the DDL needed to understand and recreate table
structures
Understand how to obtain and evaluate the IBM DB2 Table
Editor



Exercises



Exercise 1
Use db2 ? sqlerrorcode to investigate errors that might arise in the exercises of this module.
Thus, if you receive a message such as
SQL1024N A database connection does not exist. SQLSTATE=08003

you should run the query


db2 ? SQL1024N

to obtain the meaning of the error in detail.


1.1 Connect to your storesdb database created and loaded in Modules 6 and 7. This may
require restarting your DB2 instance.

1.2 Perform some valid and invalid queries (i.e., SELECT from tables and columns that may
or may not exist).

1.3 Obtain the DDL that was used to create the orders table using db2look.

1.4 Insert a new row into the orders table.



Exercise 2
Optional exercise, depending on the availability of the IBM DB2 Table Editor for the classroom
where the course is being taught. If the students have administrator access to the student NT/
2000 workstations, the exercise can include the installation of the IBM DB2 Table Editor
software on the workstation.
The exercises require Internet access by the instructor (direct connection or dial out) for
demonstration, or by the students (direct connection from classroom) for student participation.
2.1 (Optional) Install the IBM DB2 Table Editor in the same directory as the DB2 software,
but in a tools subdirectory, e.g.,
c:\DB2\Tools\Table Editor

2.2 Explore the installed programs that are available:


DB2 Table Editor Console
DB2 Table Editor Developer
DB2 Table Editor Resources on the Web
DB2 Table Editor itself

2.3 (If Internet access available) Use the DB2 Table Editor itself to select one of the
demonstration forms from the rocketsoftware.com website using File > Open from
Server > List ... Good, illustrative forms include:
DB2FORMS.Customer
DB2FORMS.Suppliers
DBE.Org
DBE.Movies
DBE.GetColleagues

2.4 Take note of how NULL values are currently displayed. Change the option as to how
NULL values are displayed (View > Options).

2.5 Use the DB2 Table Editor Developer to open one of the forms and locally edit this form
to rearrange the fields or add new features. Do not save the form to the server.



Solutions



Solution 1
Use db2 ? sqlerrorcode to investigate errors that might arise in the exercises of this module.
Thus, if you receive a message such as
SQL1024N A database connection does not exist. SQLSTATE=08003

you should run the query


db2 ? SQL1024N

to obtain the meaning of the error in detail.


1.1 Connect to your storesdb database created and loaded in Modules 6 and 7. This may
require restarting your DB2 instance.
db2start
db2 connect to storesdb

1.2 Perform some valid and invalid queries (i.e., SELECT from tables and columns that may
or may not exist).
db2 select * from customerrrrrrrrr
db2 ? SQL0204N

1.3 Obtain the DDL that was used to create the customer and the orders tables using db2look
(note that from the syntax chart you can ask for a list of tables).
db2look -d storesdb -t customer orders -e

1.4 Insert a new row into the orders table.


The constraints on the table are likely to cause errors, unless care is taken with the
values of the columns. This exercise and solution are intended to be used as a review
of constraints, and as an opportunity to generate related errors. The instructor
will discuss with the class the errors that were generated by student interaction.



Module 12

Managing Backup and Recovery

Managing Backup and Recovery 02-2003 12-1


2002,2003 International Business Machines Corporation
Objectives
At the end of this module, you will be able to:
Identify the privileges and authorizations needed for backup and
recovery
Explain the fundamental backup and recovery mechanisms in
DB2 UDB
Explain logging alternatives
Identify all the requirements for disaster recovery
Perform a simple backup to and restore from disk



Topics Covered
Although not exhaustive, the topics covered in this module include:
Logging alternatives
Recovery of lost tables
Point-in-time recovery
General backup and recovery techniques


The objective of this module is to teach you what you need to know to perform a successful
backup and recovery. Further information can be found in the DB2 UDB Administration
Workshop course.
Logging
Recovery
Recovery of lost tables
Point-in-time recovery
General backup and recovery techniques



Recovery
Several levels of recovery are:
Crash recovery
Version (or restore) recovery
Roll forward recovery


Crash Recovery
The instance can simply stop and require restarting. No data is lost, except for transactions
that had not been committed at the time of the system failure. Recovery from this type of
failure can be automated using the DB configuration parameter AUTORESTART, which restarts
the database automatically at the next connection.

Version (or Restore) Recovery


For loss of media, hardware, or software and where the data is no longer intact, you restore that
data from a recent backup. This returns the data to its state at the point in time of that backup.

Roll Forward Recovery


For loss of media, hardware, or software and where the data is no longer intact, you would
restore that data from a recent backup. This is similar to the version/restore recovery, but by
using the roll forward recovery, the data can be recovered up to the point in time of failure
instead of just the point in time of the last backup.



Types of Logging
In DB2 UDB, there are two types of logging:
Circular logging:
Provides the ability to recover a database from a crash
condition
Logs are overwritten and not backed up
Cannot be used for a complete restore

Log-retention logging:
Provides the capability of complete backup and recovery
Logs are archived as they are used


In DB2 UDB, there are two types of logging: circular and log retention logging.

Important!
All logging is done at the database level, not the instance level!

Circular Logging
Circular logging provides the ability to recover a database from a crash condition: it logs
transactions as they occur but does not back up the logs as they are used. Therefore, when a log
is reused (circular) the old transaction data in it is no longer available for recovery. A log is not
freed for reuse until all transactions in it are committed or rolled back. Primary logs are used for
the normal transaction logging, but when the primary logs are full and none can be freed for
reuse, a secondary log is created to continue logging operations. This is useful when large units
of work (transactions) occur and cause all of the primary log files to be used. The secondary logs
prevent the database server from hanging.
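To see how logging is currently configured for a database, you can filter the database configuration output. This sketch assumes a Unix shell (on Windows, substitute findstr for grep) and uses the course database name:

```
# Show only the log-related database configuration parameters
db2 get db cfg for storesdb | grep -i log
```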



Log-Retention Logging
Log-retention logging provides the capability of complete backup and recovery in case of
database data failure. In this case, the logs are archived (or backed up) as they are used so that
the log file can be reused as needed. These logs can be in one of three conditions:
Active: these log files contain transaction information that has not yet been
committed or rolled back (active transactions). Information about committed
transactions that have yet to be written to the database files (changed buffer pool pages,
for example) are also retained in the active logs. This supports crash recovery.
Online archive: these log files contain transaction information that has been
committed or rolled back (committed transactions), and are candidates for log backup.
This information is no longer needed for crash recovery, but is used for a roll forward
recovery.
Offline archive: these log files are no longer needed for active transactions, and can
be backed up for protection. These are log files that have been moved to another
subdirectory and are waiting for backup. This process can be manual or automatic
(using the LOGRETAIN or USEREXIT DB configuration parameters). The backed up
offline archive log files can be used to recover the database to logical consistency up to
the point of time in these logs.
Tape usage is invoked using the USEREXIT facilities. Sample programs are provided:
db2uext2.cadsm for Tivoli Storage Manager (TSM) [formerly ADSM]
db2uext2.cxbsa to interface to Legato Networker 4.2.5 on AIX using the XBSA API

Dual logging was introduced in FixPak 3 of V7.2, but only for Unix. In V8, it is
available for all environments.

Log Mirroring (also known as Dual Logging)


DB2 supports log mirroring at the database level. Mirroring log files helps protect a database
from:
Accidental deletion of an active log
Data corruption caused by hardware failure



The MIRRORLOGPATH configuration parameter allows the database to write an identical
second copy of log files to a different path. It is recommended that you place the secondary log
path on a physically separate disk (preferably one that is also on a different disk controller). That
way, the disk controller cannot be a single point of failure.

Note When MIRRORLOGPATH is first enabled, it will not actually be used until the next
database startup. This is similar to the NEWLOGPATH configuration parameter.

If there is an error writing to either the active log path or the mirror log path, the database will
mark the failing path as "bad", write a message to the administration notification log, and write
subsequent log records to the remaining "good" log path only. DB2 will not attempt to use the
"bad" path again until the current log file is completed. When DB2 needs to open the next log
file, it will verify that this path is valid, and if so, will begin to use it.
If this path is not valid, DB2 will not attempt to use the path again until the next log file is
accessed for the first time. There is no attempt to synchronize the log paths, but DB2 keeps
information about access errors that occur, so that the correct paths are used when log files are
archived. If a failure occurs while writing to the remaining "good" path, the database shuts
down.
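Enabling log mirroring is a single configuration change. The mirror path below is purely illustrative; it should reside on a physically separate disk, ideally behind a different disk controller:

```
# Write an identical second copy of the log files to a separate path
db2 update db cfg for storesdb using MIRRORLOGPATH /mirrorlogs/storesdb
```

Remember that, as noted above, the new path is not used until the next database startup.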

Infinite Active Logging


Infinite active logging is also new in Version 8. It allows an active unit of work to span the
primary logs and archive logs, effectively allowing a transaction to use an infinite number of log
files. Without infinite active log enabled, the log records for a unit of work must fit in the
primary log space.
Infinite active log is enabled by setting LOGSECOND to -1. Infinite active log can be used to
support environments with large jobs that require more log space than you would normally
allocate to the primary logs.
The block on log disk full function that was introduced in Version 7 is now set using the
database configuration parameter BLK_LOG_DSK_FUL in Version 8.
Block on log disk full allows you to specify that DB2 should not fail when running applications
on disk full condition from the active log path. When you enable this option, DB2 will retry
every five minutes allowing you to resolve the full disk situation and allowing the applications
to complete.
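A sketch of enabling these Version 8 features for the course database (the database name and settings are illustrative):

```
# LOGSECOND = -1 enables infinite active logging
db2 update db cfg for storesdb using LOGSECOND -1

# Retry on a full active log disk instead of failing applications
db2 update db cfg for storesdb using BLK_LOG_DSK_FUL YES
```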



Planning the Backup
Planning the backup and recovery should include:
Assigning SYSADM, SYSCTRL, SYSMAINT authority
Determining whether or not to use a storage manager to handle
the storage devices
Databases can be backed up to disk or tape, or to whatever device a
storage manager might use.


Planning the backup includes selecting a backup device, ensuring the user has the proper
authority to perform the backup, ensuring both the database and the instance have the correct
configuration parameters, and ensuring that the scheduling of the backup meets your backup and
recovery strategy.
You must have SYSADM, SYSCTRL, or SYSMAINT authority to use the BACKUP DATABASE
command.
The database can be local or remote, but the backup remains on the database server
unless a storage manager is used.
You can back up a database to a fixed disk, a tape, or a location managed by a storage
manager.

Note In this class, we will backup to disk and recover from disk.




Configuration Parameters
Some DBM configuration parameters and DB configuration
parameters need to be addressed with respect to backup and
recovery.
To view the instance parameters:
db2 get dbm cfg
To view parameters for a specific database:
db2 get db cfg for <database-name>
To change configuration parameters:
db2 update dbm cfg using backbufsiz 1000


There are some configuration parameters that need to be addressed with respect to backup and
recovery. These parameters are both instance-wide and database specific.
To determine the current setting for both the DBM and the DB configuration parameters, use the
DB2 command line processor utility to view and change them.
To view the instance parameters, use:
db2 get dbm cfg
To view parameters for a specific database, use:
db2 get db cfg for <database-name>

For More Information


Examples of both the instance and database configuration parameters are in the
Appendix.



Configuration Parameters (cont.)
Instance parameters:
BACKBUFSZ RESTBUFSZ

Database parameters:
LOGFILSIZ LOGPRIMARY
LOGSECOND LOGRETAIN
USEREXIT AUTORESTART
NUM_DB_BACKUPS REC_HIS_RETENTN
Database configuration switches:
Backup pending Database is consistent
Rollforward pending Restore pending


The configuration parameters for the instance (DBM) that must be changed are:
BACKBUFSZ Backup buffer default size (4K)
RESTBUFSZ Restore buffer default size (4K)

The parameters for the database (DB) that must be changed are:
LOGFILSIZ Log file size (4K)
LOGPRIMARY Number of primary log files
LOGSECOND Number of secondary log files
LOGRETAIN Log retain for recovery enabled (ON)
USEREXIT User exit for logging enabled (ON)
AUTORESTART Auto restart enabled (ON)
NUM_DB_BACKUPS Number of database backups to retain
REC_HIS_RETENTN Recovery history retention (days)
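For example, two of these parameters can be changed as follows. The values shown are illustrative; the LOGRETAIN setting matches the one used in the exercises for this module:

```
# Instance level: enlarge the default backup buffer (in 4K pages)
db2 update dbm cfg using BACKBUFSZ 1000

# Database level: retain logs so roll forward recovery is possible
db2 update db cfg for storesdb using LOGRETAIN RECOVERY
```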



Backing Up a Database
Use either the backup command at the command line, or the Control
Center GUI:
Use the command line version, backup, to perform the backup
in class:
db2 backup db storesdb to <location>
Page size for BACKUP is always 4 kilobytes
Buffer size specifies the internal buffer space used for the
backup. The default is 1024 pages (see note for Linux below)
If BUFFER is set to 0, the DBM configuration parameter
BACKBUFSZ is used.
Use the db2ckbkp utility to check the backup images


You can use the backup command on the command line or the Control Center to perform a
database backup.
The page size for backup is always 4 kilobytes and should not be confused with the multiple
page sizes allowed in table spaces for database data.
When creating a backup image (or restoring a backup image) the buffer size is 1024
pages (of 4K size). This is important if you are using tape as your backup device. If you
are using variable block sizes, you must lower your buffer size to a range that your tape
drive uses.
Under supported Windows operating systems, you can back up to diskette.
For most versions of Linux, using the DB2 default buffer sizes for backup and restore
to a SCSI tape device results in the error message: SQL2025N, reason code 75. To
prevent the overflow of Linux internal SCSI buffers, use this formula:

bufferpages <= ST_MAX_BUFFERS * ST_BUFFER_BLOCKS / 4


where bufferpages is the value of either BACKBUFSZ or RESTBUFSZ.
ST_MAX_BUFFERS and ST_BUFFER_BLOCKS are defined in the Linux kernel under the drivers/
scsi directory.
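Putting these options together, here is a sketch of a backup that sizes its buffers explicitly, followed by an integrity check of the image. The path is illustrative, and <backup-image-file> stands for the generated image file name, which includes the database name and a timestamp:

```
# Offline backup to disk with two internal buffers of 1024 4K pages each
db2 backup db storesdb to /home/inst101/backup with 2 buffers buffer 1024

# Verify that the backup image is usable before relying on it
db2ckbkp <backup-image-file>
```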



Restore
To use the restore utility to recover your database, you need one of
the following authorities:
SYSADM, SYSCTRL, or SYSMAINT

You can restore:


The recovery history file (backup image file information)
A dropped table
One or more table spaces
The whole database

Point-in-time recovery is possible; use roll forward to ensure data consistency.


To restore your database data to a new database, you must have either SYSADM or SYSCTRL
authority. If you are restoring to a current database, you can use the SYSMAINT authority.
You can choose to restore the recovery history file (backup image file information), one or more
table spaces, or the whole database, either local or remote.
A database restore requires an exclusive connection. No other applications can be connected to
that database. A table-space restore can take place with applications connected to the database,
but the table space being restored requires an exclusive connection by the restore process. To use
table-space-level backup and restore, log-retention logging (archive) must be used.
Point-in-time recovery during a restore is supported via the roll forward part of the recovery.
If the table spaces for a database do not exist when doing a restore, use the REDIRECT option on
the restore command (redirected restore). You can modify the table-space container definitions
during the restore (to add more storage space, for example).
After a restore, the database (or parts of it) may be left in a roll forward pending state. This is
used to ensure data consistency in the database and prevents any activity against the structures in
that state (see the roll forward discussion on the next page).
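A redirected restore is performed in stages: start the restore with the REDIRECT option, redefine the containers, then continue. The sketch below assumes an SMS table space with ID 2; the container path and <timestamp> placeholder are illustrative:

```
# Stage 1: begin the restore, pausing so containers can be redefined
db2 restore db storesdb from /home/inst101/backup taken at <timestamp> redirect

# Stage 2: supply the new container definition for table space ID 2
db2 "set tablespace containers for 2 using (path '/newdata/ts2')"

# Stage 3: resume and complete the restore
db2 restore db storesdb continue
```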



Restoring a Table Space
Syntax:
RESTORE DB <database_name>
TABLESPACE (tablespace_name, tablespace_name)
ONLINE
FROM <device or path>
Example:
RESTORE DB storesdb
TABLESPACE (userspace1, userspace2)
ONLINE
FROM /home/inst101/backup


DB2 UDB has the capability to restore one or more individual table spaces specified in a
comma-separated list. In the example above, the ONLINE keyword is optional and allows users to
connect to the database and access data in any table spaces other than the ones that are being
restored.

Note DB2 UDB also has the capability to restore a dropped table, as long as the DROPPED
TABLE RECOVERY option for a table space is set to ON. There are some restrictions;
for more detail on this feature see the DB2 UDB Administration Guide:
Implementation.

In V8, you can do a point-in-time restore using the local time instead of GMT.



Roll Forward
You can roll the logs forward to:
The end of the current log path
Point in time (use GMT for command line) format:
yyyymmddhhmmssnnnnnn
Past the end of an online backup (during recovery)

Example:
db2 rollforward db storesdb
to end of logs and stop


The rollforward utility rolls the log files forward, applying the transaction in those logs to the
database. Use the rollforward utility after a table space has been restored from a backup or if
any of the table spaces have been taken offline.
If a database has been restored and was set to use the rollforward recovery, the database remains
in the rollforward-pending state until the rollforward utility completes successfully.

Tip It is a good idea to perform a complete backup after a recovery and rollforward.

Note If you restore from a full offline database backup image, you can bypass the
rollforward-pending state during the recovery process.
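For example, a point-in-time roll forward using the GMT timestamp format shown on the slide might look like this (the timestamp value and database name are illustrative):

```
# Apply log records up to the given point in time, then complete recovery
db2 rollforward db storesdb to 20031107120000 and stop
```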



Summary
You should now be able to:
Identify the privileges and authorizations needed for backup and
recovery
Explain the fundamental backup and recovery mechanisms in
DB2 UDB
Explain logging alternatives
Identify all the requirements for disaster recovery
Perform a simple backup to and restore from disk


Related Classes
DB2 UDB Administration Workshop
DB2 UDB Advanced Recovery and High Availability Workshop



Exercises



Exercise 1
In this exercise, you will be setting configuration options to facilitate the backup and restore
operations.
All exercises here are done in command line mode. Most operations can be done using a GUI
interface. The approach taken is cookbook-style; this provides an opportunity to understand the
basic concepts and steps, but not enough skills training for industrial-strength backup and
restore.
Note that when run at the command line level, these commands will be run with the DB2
command line processor program. Thus when you see get db cfg for storesdb this is run in
command line mode as:
db2 get db cfg for storesdb

1.1 Find out the current settings for backing up the storesdb database (archiving is done
at the database level and not at the instance level). Use DB2 to get the database manager
configuration parameters and the storesdb database configuration parameters.
Make the following changes to the current settings. (Other changes may be necessary for a
production database. These settings are only illustrative and apply to disk backup. Database
settings need only be made once or when changes are necessary and, thus, not every time that
backup is required.)

1.2 Some settings are at the instance level (DBM) and apply to all databases controlled by
the instance. Change the DBM BACKBUFSZ configuration parameter to 1000.

1.3 Some settings are at the database level (DB) and apply just to that database. Change the
storesdb LOGRETAIN configuration parameter to RECOVERY.

1.4 Changes to the DB configuration file do not take effect until the database is deactivated,
so terminate the storesdb database activity.

1.5 Verify that the changes were made by getting the DB configuration parameters for
storesdb.

12-18 Managing Backup and Recovery


Exercise 2
You will perform a basic Backup and Restore to disk in this exercise.
2.1 Perform a backup of the storesdb database data to a disk directory after you have
disconnected all applications. You will need to change to your home directory, create a
new directory named backup, then force all applications from the database before you
perform the backup.
cd
mkdir backup
db2 force application all
db2 backup db storesdb to $HOME/backup
This will take some time, so be patient! Note the timestamp of the backup. You will use
this timestamp shortly.

2.2 Look at the backup history for the storesdb database using the DB2 list command
db2 list history backup all for storesdb | more

2.3 Perform a simple restore from a backup using the DB2 restore command.
db2 restore db storesdb from /home/inst###/backup

2.4 You will need to complete the rollforward of the database in order to use it.
db2 rollforward db storesdb to end of logs and stop

Managing Backup and Recovery 12-19


Exercise 3
You will practice with multiple backup and restores in this exercise.
3.1 (Optional.) Repeat this process with the following steps:
Query the database for information in one row, such as:
db2 connect to storesdb
db2 "SELECT * FROM customer
WHERE customer_num = 103"
Note the phone number in this row.

3.2 Update data in this row:


db2 "UPDATE customer
SET phone = '321-555-1212'
WHERE customer_num = 103"

3.3 Perform another backup of the storesdb database.


db2 force application all
db2 backup db storesdb to $HOME/backup

3.4 List your backup directory. You should see two backup files.
ls -l backup

3.5 Select the same row from storesdb and check the phone number value.
db2 connect to storesdb
db2 "SELECT * FROM customer
WHERE customer_num = 103"
You should see the 321-555-1212 phone number.

3.6 Stop the database. Restore the data from the backup performed in exercise 2.1.
db2 force application all
db2 restore db storesdb from /home/inst###/backup
taken at <timestamp>
without rolling forward
where the <timestamp> is the timestamp from your backup, from exercise 2.1.

12-20 Managing Backup and Recovery


3.7 Query the row viewed in exercise 3.1. Determine whether the update performed in
exercise 3.2 is still in the row.
db2 connect to storesdb
db2 "SELECT * FROM customer
WHERE customer_num = 103"

Managing Backup and Recovery 12-21


12-22 Managing Backup and Recovery
Solutions

Managing Backup and Recovery 12-23


Solution 1
1.1 Find out the current settings for backing up the storesdb database (archiving is done
at the database level, and not at the instance level). Use DB2 to get the database manager
configuration parameters, and the storesdb database configuration parameters.
db2 get dbm cfg | more
db2 get db cfg for storesdb | more

1.2 Some settings are at the instance level (DBM) and apply to all databases controlled by
the instance. Change the DBM BACKBUFSZ configuration parameter to 1000.
db2 update dbm cfg using backbufsz 1000

1.3 Some settings are at the database level (DB) and apply just to that database. Change the
storesdb LOGRETAIN configuration parameter to RECOVERY.
db2 update db cfg for storesdb using logretain recovery

1.4 Changes to the DB configuration file do not take effect until the database is deactivated,
so terminate the storesdb database activity.
db2 terminate

1.5 Verify that the changes were made by getting the DB configuration parameters for
storesdb.

db2 get db cfg for storesdb | more


Other

Note: Exercises 2 & 3 have their solutions built into them, and thus those solutions are not
repeated here.

12-24 Managing Backup and Recovery


Module 13

Performance Monitoring and Tuning

Performance Monitoring and Tuning 02-2003 13-1


© 2002, 2003 International Business Machines Corporation
Objectives

At the end of this module, you will be able to:


Describe the basic performance tuning techniques for DB2 UDB and
compare with Oracle
Determine and set database buffer pool sizing
Set database buffer pool cleaning frequency
Set the proper page size for extents
Describe the differences between DMS and SMS table space
performance
List and describe the DB2 UDB self-tuning capabilities
Use the AUTOCONFIGURE command to set optimum database and/or
database server configuration parameters (V8)

13-2

Tuning a server instance and a database must begin as part of installation and development
before the data is laid out on disk and before the very first lines of code are written. This part of
design is too often omitted, unfortunately, or left until after the system is operational and
problems have already crept in.
Our discussion of performance and tuning here is intended to get you ahead of the curve. During
design, implementation, and testing you have a unique opportunity to experiment with your
settings before the new database and server are put into production.
The emphasis in this course is on DB2 UDB features that you will use throughout the life of
your server and need to plan for rather than comparisons with Oracle. The comparisons, while
interesting, do not provide a basis for making decisions: each database server has its own
underlying architecture that is more critical to performance than individual features.
The key factors that you do have some control over are:
Application design: moving an existing design from one server to another without
adapting it, as if understanding the server were not important, is generally a big mistake
Database design: using the features of the database software to implement a logical
design (tables, columns) and physical design (and thus placement of tables and indices)
Server architecture: using the features and parameters of your server to best effect

13-2 Performance Monitoring and Tuning


General Performance Issues

In both Oracle and DB2 UDB, performance is tuned at the instance


and the database level:
The instance is tuned from the operating system point of view
In DB2 UDB, each database is directly tuned using a
configuration file
In Oracle, databases are indirectly tuned using proper
placement of database objects (partitioned tables)

13-3

This module demonstrates basic database performance tuning of DB2 UDB. The goal of this
module is to highlight DB2 UDB tuning parameters that have significant impact on the DB2
UDB instance and a given database. For example, determine whether to use an SMS- or
DMS-type table space for your data and indexes.
As in Oracle, there are two basic places in DB2 UDB for performance tuning: the instance and
the database. However, they differ in their approach to performance tuning.
In Oracle, you tune the instance from the operating system point of view, controlling the
resources allocated overall to the instance. The database is then tuned based on layout of data,
table schema, indexes, and the like.
In DB2 UDB, performance tuning for the instance is still based on operating system resource
usage, and the database tuning is somewhat based on data layout, table schema, and indexes.
Beyond that, each database can be tuned to use a specific part of the resources allocated to the
instance.
In DB2 UDB, each database has its own buffer pools and its own set of logs. These can be tuned
differently for each database.

Performance Monitoring and Tuning 13-3


Tuning Oracle versus DB2 UDB

Approximate Oracle tuning capability:


Instance configuration parameters: 250 (Oracle 9i Release 2)
Environment variables: 10 (none for tuning)

Approximate DB2 UDB tuning capability:


DAS configuration parameters: 18
Instance configuration parameters: 80
Database configuration parameters: 70 (per database)
Environment/registry variables: 120

13-4

The DB2 UDB database has a significantly greater capacity (and complexity) for tuning than
Oracle, which results in a greater degree of control over the use of system resources. In addition
to the increased number of tuning parameters in DB2 UDB, there is a significant amount of
interdependence between several of the instance configuration parameters and the database
configuration parameters, adding to the performance-tuning challenge.

13-4 Performance Monitoring and Tuning


DB2 UDB Memory Elements

[Diagram: DB2 UDB memory elements. Within the global control memory, the Database
Global Memory contains the utility heap (with its backup and restore buffers), extended
memory storage, the database heap (with its log buffer and catalog cache), the global
package cache, the sort heap, the buffer pools, and the lock list.]
13-5

As shown in the above diagram, the Database Global Memory consists of:
Utility heap, backup buffer, and restore buffer
Extended memory storage
Database heap, log buffer, and catalog cache
Global package cache
Sort heap
Buffer pools
Lock list
It is beyond the scope of this module to define and explain each of these sections of memory.
However, the main point of the diagram is to show that DB2 UDB memory is complex and
adjustable. As part of the basic performance tuning of this module, you will be configuring
buffer pools, which can result in a significant gain in performance.

Performance Monitoring and Tuning 13-5


Tuning is Tuning

Oracle and DB2 UDB tuning deal with the same three components:
Memory
Disk
Processes

13-6

Even though the configuration names and settings may be different, a database engine is a
database engine, and both Oracle and DB2 UDB are tuned by manipulating the same three
components: memory, disk, and processes. The next few slides will concentrate on these three
areas.

13-6 Performance Monitoring and Tuning


RUNSTATS

Used to update statistics about the physical characteristics of a table


and the associated indexes.
It is recommended to run the RUNSTATS command:
On tables that have been highly modified
On tables that have been reorganized
When a new index has been created
Before binding applications whose performance is critical
When the prefetch quantity is changed

After the command is run, note the following:


A COMMIT should be issued to release the locks

13-7

RUNSTATS is used to update statistics about the physical characteristics of a table and the
associated indexes. These characteristics include number of records, number of pages, and
average record length. The optimizer uses these statistics when determining access paths to the
data.
This utility should be called when a table has had many updates, or after reorganizing a table.
It is recommended to run the RUNSTATS command:
On tables that have been modified considerably.
On tables that have been reorganized (using REORG, REDISTRIBUTE DATABASE
PARTITION GROUP).
When a new index has been created.
Before binding applications whose performance is critical.
When the prefetch quantity is changed.
The options chosen must depend on the specific table and the application. In general:
If the table is a very critical table in critical queries, is relatively small, or does not
change too much and there is not too much activity on the system itself, it may be worth
spending the effort on collecting statistics in as much detail as possible.

Performance Monitoring and Tuning 13-7


If the time to collect statistics is limited, the table is relatively large and/or changes a
lot, it might be beneficial to execute RUNSTATS limited to the set of columns that are
used in predicates.
If time to collect statistics is very limited and the effort to tailor the RUNSTATS
command on a table-by-table basis is a major issue, consider collecting statistics for the
KEY columns only.
If there are many indexes on the table and DETAILED (extended) information on the
indexes might improve access plans, consider the SAMPLED option to reduce the time
it takes to collect statistics.
If there is skew in certain columns and predicates of the type "column = constant", it
may be beneficial to specify a larger NUM_FREQVALUES value for that column.
Collect distribution statistics for all columns that are used in equality predicates and for
which the distribution of values might be skewed.
For columns that have range predicates (for example "column >= constant", "column
BETWEEN constant1 AND constant2") or of the type "column LIKE '%xyz'", it may be
beneficial to specify a larger NUM_QUANTILES value.
If storage space is a concern and one cannot afford too much time on collecting
statistics, do not specify high NUM_FREQVALUES or NUM_QUANTILES values for
columns that are not used in predicates.
Note that if index statistics are requested, and statistics have never been run on the table
containing the index, statistics on both the table and indexes are calculated.
After the command is run, note the following:
A COMMIT should be issued to release the locks.
To allow new access plans to be generated, the packages that reference the target table
must be rebound.
Executing the command on portions of the table could result in inconsistencies as a
result of activity on the table since the command was last issued.
In the "On Dist Cols" clause of the command syntax, the "Frequency Option" and "Quantile
Option" parameters are currently not supported for column GROUPS. These options are
supported for single columns.

13-8 Performance Monitoring and Tuning


RUNSTATS Examples

Collect statistics on the table only, on all columns without distribution


statistics:
RUNSTATS ON TABLE db2user.employee
Collect statistics on the table only, on columns empid and empname
with distribution statistics:
RUNSTATS ON TABLE db2user.employee WITH
DISTRIBUTION ON COLUMNS (empid, empname)
Collect statistics on a set of indexes:
RUNSTATS ON TABLE db2user.employee for indexes
db2user.empl1, db2user.empl2
Collect basic statistics on all indexes only:
RUNSTATS ON TABLE db2user.employee FOR INDEXES ALL

13-9

Example RUNSTATS to:


Collect statistics on the table only, on all columns without distribution statistics:
RUNSTATS ON TABLE db2user.employee
Collect statistics on the table only, on columns empid and empname with distribution
statistics:
RUNSTATS ON TABLE db2user.employee WITH DISTRIBUTION
ON COLUMNS (empid, empname)
Collect statistics on the table only, on all columns with distribution statistics, using a
specified frequency-value limit for the table while picking the
NUM_QUANTILES value from the configuration setting:
RUNSTATS ON TABLE db2user.employee WITH DISTRIBUTION
DEFAULT NUM_FREQVALUES 50
Collect statistics on a set of indexes:
RUNSTATS ON TABLE db2user.employee for indexes
db2user.empl1, db2user.empl2
Collect basic statistics on all indexes only:
RUNSTATS ON TABLE db2user.employee FOR INDEXES ALL

Performance Monitoring and Tuning 13-9


REORG INDEXES/TABLE

Used to reorganize an index or a table


The index option reorganizes all indexes defined on a table by
rebuilding the index data into unfragmented, physically contiguous
pages:
If you specify CLEANUP ONLY, no index rebuild is done
This command cannot be used against indexes on declared
temporary tables
The table option reorganizes a table by reconstructing the rows to
eliminate fragmented data while compacting information.

13-10

REORG INDEXES/TABLE is used to reorganize an index or a table.


The index option reorganizes all indexes defined on a table by rebuilding the index data into
unfragmented, physically contiguous pages. If you specify the CLEANUP ONLY option of the
index option, cleanup is performed without rebuilding the indexes. This command cannot be
used against indexes on declared temporary tables (SQLSTATE 42995).
The table option reorganizes a table by reconstructing the rows to eliminate fragmented data,
and by compacting information.

13-10 Performance Monitoring and Tuning


REORG Examples

Reorganize a table to reclaim space and use the temporary table


space mytemp1:
db2 REORG TABLE homer.employee USE mytemp1
Clean up the deleted keys and empty pages in the indexes on the
EMPLOYEE table while other transactions occur:
db2 REORG INDEXES ALL FOR TABLE homer.employee
ALLOW WRITE ACCESS CLEANUP ONLY
Clean up the empty pages in all the indexes on the EMPLOYEE table
while other transactions occur:
db2 REORG INDEXES ALL FOR TABLE homer.employee
ALLOW WRITE ACCESS CLEANUP ONLY PAGES

13-11

REORG Examples:
For a classic REORG TABLE like the default in DB2 Version 7, enter the following command:
db2 REORG TABLE employee INDEX empid ALLOW NO ACCESS
INDEXSCAN LONGLOBDATA

Defaults are different in DB2 Version 8.


To reorganize a table to reclaim space and use the temporary table space mytemp1, enter the
following command:
db2 REORG TABLE homer.employee USE mytemp1
To reorganize tables in a partition group consisting of nodes 1, 2, 3, and 4 of a four-node system,
you can enter either of the following commands:
db2 REORG TABLE employee INDEX empid ON dbpartitionnum (1,3,4)
or
db2 REORG TABLE homer.employee INDEX homer.empid ON
ALL dbpartitionnums EXCEPT dbpartitionnum (2)

Performance Monitoring and Tuning 13-11


To clean up the pseudo deleted keys and pseudo empty pages in all the indexes on the
EMPLOYEE table while allowing other transactions to read and update the table, enter:
db2 REORG INDEXES ALL FOR TABLE homer.employee
ALLOW WRITE ACCESS CLEANUP ONLY
To clean up the pseudo empty pages in all the indexes on the EMPLOYEE table while allowing
other transactions to read and update the table, enter:
db2 REORG INDEXES ALL FOR TABLE homer.employee
ALLOW WRITE ACCESS CLEANUP ONLY PAGES
To reorganize the EMPLOYEE table using the system temporary table space TEMPSPACE1 as
a work area, enter:
db2 REORG TABLE homer.employee USING tempspace1
Information about the current progress of table reorganization is written to the history file for
database activity. The history file contains a record for each reorganization event. To view this
file, execute the db2 list history command for the database that contains the table you are
reorganizing.
You can also use table snapshots to monitor the progress of table reorganization. Table
reorganization monitoring data is recorded regardless of the Database Monitor Table Switch
setting.

13-12 Performance Monitoring and Tuning


REORGCHK

REORGCHK calculates statistics on the database to determine if


tables, indexes, or both need to be reorganized or cleaned up.
Unless you specify the CURRENT STATISTICS option, REORGCHK
gathers statistics on all columns using the default options only.
The statistics gathered depend on the kind of statistics currently
stored in the catalog tables.
REORGCHK calculates statistics obtained from eight different
formulas to determine if performance:
has deteriorated, or
can be improved by reorganizing a table or its indexes

13-13

REORGCHK calculates statistics on the database to determine if tables or indexes, or both, need
to be reorganized or cleaned up.
This command does not display declared temporary table statistical information.
This utility does not support the use of nicknames.
Unless you specify the CURRENT STATISTICS option, REORGCHK gathers statistics on all
columns using the default options only. Specifically, column group statistics are not gathered,
and if LIKE statistics were previously gathered, they are not gathered by REORGCHK.
The statistics gathered depend on the kind of statistics currently stored in the catalog tables:
If detailed index statistics are present in the catalog for any index, table statistics and
detailed index statistics (without sampling) for all indexes are collected.
If detailed index statistics are not detected, table statistics as well as regular index
statistics are collected for every index.
If distribution statistics are detected, distribution statistics are gathered on the table. If
distribution statistics are gathered, the number of frequent values and quantiles are
based on the database configuration parameter settings.
REORGCHK calculates statistics obtained from eight different formulas to determine if
performance has deteriorated or can be improved by reorganizing a table or its indexes.

Performance Monitoring and Tuning 13-13


An example of using REORGCHK:
db2 "CONNECT TO sample"
db2 "REORGCHK UPDATE STATISTICS ON TABLE system"

Doing RUNSTATS ....

Table statistics:

F1: 100 * OVERFLOW / CARD < 5


F2: 100 * TSIZE / ((FPAGES-1) * (TABLEPAGESIZE-76)) > 70
F3: 100 * NPAGES / FPAGES > 80

CREATOR NAME CARD OV NP FP TSIZE F1 F2 F3 REORG


--------------------------------------------------------------------------------
SYSIBM SYSATTRIBUTES - - - - - - - - ---
SYSIBM SYSBUFFERPOOLNODES - - - - - - - - ---
SYSIBM SYSBUFFERPOOLS 1 0 1 1 44 0 - 100 ---
SYSIBM SYSCHECKS - - - - - - - - ---
SYSIBM SYSCOLAUTH - - - - - - - - ---
SYSIBM SYSCOLCHECKS - - - - - - - - ---
SYSIBM SYSCOLDIST - - - - - - - - ---
SYSIBM SYSCOLOPTIONS - - - - - - - - ---
SYSIBM SYSCOLPROPERTIES - - - - - - - - ---
SYSIBM SYSCOLUMNS 1766 2 79 79 309050 0 98 100 ---
SYSIBM SYSCONSTDEP 2 0 1 1 186 0 - 100 ---
SYSIBM SYSDATATYPES 17 0 1 1 7327 0 - 100 ---
SYSIBM SYSDBAUTH 3 0 1 1 126 0 - 100 ---
SYSIBM SYSDEPENDENCIES - - - - - - - - ---

The terms for the table statistics (formulas 1-3) mean:


CARD Number of rows in base table.
OV (OVERFLOW) Number of overflow rows.
NP (NPAGES) Number of pages that contain data.
FP (FPAGES) Total number of pages.
TSIZE Table size in bytes. Calculated as the product of the number of rows in the
table (CARD) and the average row length. The average row length is computed as the
sum of the average column lengths (AVGCOLLEN in SYSCOLUMNS) plus 10 bytes
of row overhead. For long fields and LOBs only the approximate length of the
descriptor is used. The actual long field or LOB data is not counted in TSIZE.
TABLEPAGESIZE Page size of the table space in which the table data resides.
F1 Results of Formula 1.
F2 Results of Formula 2.
F3 Results of Formula 3.
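The TSIZE arithmetic above can be checked directly. The helper below is our own Python illustration (not a DB2 API), applied to a hypothetical three-column table with made-up average column lengths:

```python
def tsize(card, avg_col_lens, row_overhead=10):
    """TSIZE = CARD * average row length, where the average row length is
    the sum of the average column lengths (AVGCOLLEN) plus 10 bytes of
    row overhead."""
    return card * (sum(avg_col_lens) + row_overhead)

# hypothetical 3-column table: average column lengths 4, 20, and 8 bytes
print(tsize(1000, [4, 20, 8]))   # 42000
```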

13-14 Performance Monitoring and Tuning


REORG Each hyphen (-) displayed in this column indicates that the calculated
results were within the set bounds of the corresponding formula, and each asterisk (*)
indicates that the calculated results exceeded the set bounds of its corresponding
formula.
- or * on the left side of the column corresponds to F1 (Formula 1).
- or * in the middle of the column corresponds to F2 (Formula 2).
- or * on the right side of the column corresponds to F3 (Formula 3).
Table reorganization is suggested when the results of the calculations exceed the bounds set by
the formula.
The discussion above shows REORGCHK for tables; a similar output is provided for indexes.
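To make the table formulas concrete, the sketch below recomputes F1 through F3 and the REORG flag string in Python. The function is our own illustration, not DB2 code; it is applied to the SYSCOLUMNS row from the sample output, assuming a 4 KB catalog table space page size:

```python
def reorgchk_table_flags(card, overflow, npages, fpages, tsize, pagesize):
    """Recompute REORGCHK table formulas F1-F3 and the REORG flag string."""
    # F1: 100 * OVERFLOW / CARD should be < 5
    f1 = 100 * overflow // card if card else 0
    # F2: 100 * TSIZE / ((FPAGES-1) * (TABLEPAGESIZE-76)) should be > 70
    # (undefined for single-page tables, reported as '-' in the output)
    f2 = 100 * tsize // ((fpages - 1) * (pagesize - 76)) if fpages > 1 else None
    # F3: 100 * NPAGES / FPAGES should be > 80
    f3 = 100 * npages // fpages if fpages else 0
    flags = ("-" if f1 < 5 else "*") \
          + ("-" if f2 is None or f2 > 70 else "*") \
          + ("-" if f3 > 80 else "*")
    return f1, f2, f3, flags

# SYSIBM.SYSCOLUMNS row from the sample output above
f1, f2, f3, flags = reorgchk_table_flags(1766, 2, 79, 79, 309050, 4096)
print(f1, f2, f3, flags)   # 0 98 100 ---
```

An asterisk in any position of the flag string is the signal that reorganization is suggested for the corresponding formula.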

Performance Monitoring and Tuning 13-15


Buffer Pool Parameters

The following database configuration parameters are used for buffer


pool tuning:
BUFFPAGE
Default buffer pool size (pages) used by the CREATE BUFFERPOOL
and ALTER BUFFERPOOL statements when SIZE is set to -1
CHNGPGS_THRESH
Threshold percentage for the number of changed pages
NUM_IOCLEANERS
Number of asynchronous page cleaners (similar to db_writers in
Oracle 7, or db_writer_processes in Oracle 8 and later)

13-16

BUFFPAGE
In DB2 UDB, buffer pools are part of the database and not part of the instance. Each database
has one default buffer pool with a default size of 1000 pages for UNIX systems and 250 pages
for Windows systems. Additional buffer pools can be created using the CREATE BUFFERPOOL
statement, and if the SIZE value is set to -1 then the BUFFPAGE value is used to size the buffer
pool. For example, the following SQL statement creates a buffer pool with a size defined by the
BUFFPAGE value:

CREATE BUFFERPOOL bp_mypool SIZE -1


Once a buffer pool has been created, its size remains stable unless it is altered by the ALTER
BUFFERPOOL statement. For example, the following SQL statement changes the size of the
buffer pool to 20,000 pages:
ALTER BUFFERPOOL bp_mypool SIZE 20000
There are numerous theories about how many buffer pools a database ought to have. Only three
theories are listed here; however, buffer pool configuration is a study unto itself.
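Since buffer pools are allocated in pages, it is easy to translate a SIZE value into real memory. A quick back-of-the-envelope check (our own helper, assuming the default 4 KB page size):

```python
def bufferpool_bytes(npages, page_bytes=4096):
    """Memory footprint of a buffer pool: number of pages times page size."""
    return npages * page_bytes

# The ALTER BUFFERPOOL example above sets bp_mypool to 20,000 pages;
# with 4 KB pages that is roughly 78 MB of memory.
size = bufferpool_bytes(20000)
print(size, round(size / 2**20, 1))   # 81920000 78.1
```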

13-16 Performance Monitoring and Tuning


Theory One:
One theory holds that there should be at least three buffer pools:
A smaller one for randomly accessed tables
A larger one for the rest of the tables
A larger one for all the indexes
This prevents the tables that are randomly accessed from thrashing the main buffer pool and also
separates out data pages from index pages.

Theory Two:
A second theory holds that there should be at least two buffer pools:
One for all of the static tables
Another for the data and index pages
This configuration allows the static tables to be retained in memory, and the most active data and
index pages are retained as well.

Theory Three:
A third theory suggests that there should be only one buffer pool:
Some IBM case studies have demonstrated that one large buffer pool is just as efficient
as multiple buffer pools, and only the most active pages of a database ought to be
retained in memory regardless of their type.
Many IBM articles are devoted to the subject of buffer pool configuration, and there are many
opinions on what is best.

Important!
When tuning, adjust only one parameter at a time!

Performance Monitoring and Tuning 13-17


CHNGPGS_THRESH
This database configuration parameter is the percentage of modified pages that can exist before
page cleaner processes are triggered. Once this value is exceeded (in any one buffer pool), page
cleaning begins. Page cleaning continues until all of the modified buffer pool pages (in every
buffer pool) are synchronized with disk. The range for this parameter is 5 to 99, and the default
is 60.

NUM_IOCLEANERS
This database configuration parameter is equivalent to the Oracle parameter db_writers
(Oracle 7) or db_writer_processes (Oracle 8 and later) and is the number of page cleaning
processes that are allocated at database startup. These are the processes that are triggered when
the CHNGPGS_THRESH is exceeded. The rule of thumb is to set this parameter to one or two
more than the number of physical disks on which the database resides. The range for this
parameter is from 0 to 255, and the default is 1.
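The interaction of these two parameters can be sketched as follows. The function names and trigger logic are our own simplified illustration of the behavior described above, not DB2 internals:

```python
def cleaners_triggered(dirty_pages, pool_pages, chngpgs_thresh=60):
    """Asynchronous page cleaners start once the percentage of changed
    pages in a buffer pool exceeds CHNGPGS_THRESH (default 60)."""
    return 100 * dirty_pages / pool_pages > chngpgs_thresh

def num_iocleaners_rule_of_thumb(physical_disks):
    """Rule of thumb from the text: one or two more page cleaners than the
    number of physical disks holding the database (parameter range 0-255)."""
    return min(physical_disks + 2, 255)

print(cleaners_triggered(500, 1000))    # False: 50% is below the 60% threshold
print(cleaners_triggered(700, 1000))    # True: 70% exceeds the 60% threshold
print(num_iocleaners_rule_of_thumb(6))  # 8
```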

13-18 Performance Monitoring and Tuning


Page Cleaning

DB2 UDB has three page-cleaning mechanisms.

At a maximum interval:
SOFTMAX for DB2 UDB

Exceeding a modified pages threshold:


CHNGPGS_THRESH for DB2 UDB

Buffer is full of modified pages:


Victim or stolen page for DB2 UDB

13-19

The three page-cleaning mechanisms in DB2 UDB are similar to the mechanisms found in some
form in other vendor database servers. Thus, for instance, in Oracle, Database Writer processes
(DBWR) handle the flushing of dirty pages from the buffer cache. As of Oracle
8i, different block sizes can be used for different tablespaces.
We discuss here, however, only the DB2 UDB mechanism in detail.
1. There is a mechanism to clean the buffer pool pages back to disk when a maximum
interval is exceeded. For DB2 UDB, this maximum interval is set using the DB
configuration parameter SOFTMAX. This parameter is a percentage of one logical log file
that can be filled between soft checkpoints. For example, if SOFTMAX is set to 200, then
200% of a logical log file (or two logs) can be filled between soft
checkpoints. The concept of a checkpoint interval in DB2 UDB is measured in log
space used rather than in seconds of time, as one normally expects an interval to be.
2. There is a mechanism to clean buffer pool pages back to disk when a percentage of the
buffer pool pages have been modified. This mechanism is triggered by the DB
configuration parameter CHNGPGS_THRESH.
3. There is a mechanism to clean the buffer pool pages back to disk when the buffer pool
is full of modified pages. When this condition occurs in DB2 UDB, the least-recently-
used page is victimized, or stolen, and cleaned back to disk. In order for this page to be

Performance Monitoring and Tuning 13-19


cleaned, the agent process that requested a buffer pool page has to interrupt its
processing and clean the page back to disk as a single-page, synchronous I/O operation,
which is relatively costly.
The following table summarizes the three page-cleaning mechanisms in DB2 UDB:
Process activity * Log pages Asynchronous pages Victim pages
Triggering condition SOFTMAX CHNGPGS_THRESH Buffer pool full
Process type Page cleaner Page cleaner AGENT

Note * In DB2 UDB there is not a specific name for each type of page cleaning activity.
For example, asynchronous pages can include log pages depending on how the log
pages were cleaned to disk (either synchronously or asynchronously). The number of
pages written as a result of CHNGPGS_THRESH being exceeded is calculated by
taking asynchronous pages and subtracting log pages written asynchronously.
Victim pages are always called either victim pages or stolen pages.
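The SOFTMAX arithmetic can be illustrated numerically. The helper below is our own sketch, assuming 4 KB log pages, and expresses the checkpoint interval as bytes of log space rather than seconds:

```python
def softmax_interval_bytes(softmax_pct, logfilsiz_pages, page_bytes=4096):
    """Log space that may be written between soft checkpoints:
    SOFTMAX is a percentage of one logical log file."""
    return softmax_pct * logfilsiz_pages * page_bytes // 100

# SOFTMAX = 200 with 1000-page log files: two full logs between checkpoints
print(softmax_interval_bytes(200, 1000))   # 8192000
```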

13-20 Performance Monitoring and Tuning


Tuning Buffer Pool Parameters

Buffer pool size:


Bigger is better

CHNGPGS_THRESH
Set lower if victim/dirty page steals occur

NUM_IOCLEANERS
One or two more than the number of physical drives

13-21

The more active pages of the database that can be retained in memory, the better the query
performance. Therefore, the larger the buffer pool the better. However, a buffer pool that is too
large could cause memory paging at the OS level, so a good rule of thumb is to allocate the
largest buffer pool that is possible without causing paging.
If CHNGPGS_THRESH is set too high, the buffer pool fills up with modified pages faster than the
asynchronous page cleaning processes can clean them back to disk. Should the buffer pool
become full, the database agent processes that normally only handle SQL processing must take
over the process of page cleaning. When this occurs, the least-recently-used page in the pool is
declared a victim (is stolen) and is cleaned back to disk by the database agent as a synchronous
I/O event. If CHNGPGS_THRESH is set too low, then modifications to the pages are not allowed to
accumulate before they are written out to disk, and excessive I/O operations are performed.
You can monitor for victim pages by creating an event monitor using the CREATE EVENT
MONITOR BUFFERPOOLS statement and then analyzing the results by using the System Monitor
GUI tool or the db2evmon command line tool.

Performance Monitoring and Tuning 13-21


Disk Allocation

Disk is allocated as:


SMS table space
DMS table space

13-22

Disk space is either allocated as SMS table space or DMS table space. Generally SMS table
space access is slower, since the I/O is passed through the operating system I/O buffers.
However, SMS table space is easier to set up. DMS table spaces can be faster since the Database
Manager directly controls the I/O to these table spaces when the containers are raw devices.
However, DMS table spaces require some additional steps to set up, since the device-type
containers must be predefined.
In DB2 UDB, containers are either directories, files, or raw devices.
Therefore, since SMS table spaces use directories as containers, SMS table spaces are always
cooked.
DMS table spaces are cooked if they use files as containers, and raw if they use devices as
containers.
DMS table spaces that use devices for containers have the best performance.

13-22 Performance Monitoring and Tuning


Page Size Tuning

Page size is dependent on:


Maximum row size in a table
Maximum amount of data in a table
Maximum number of columns in a table
Minimizing wasted space on disk

13-23

The page size that should be used for a DB2 UDB table space is dependent on four factors.
The first factor is the maximum row size. The page size of a table space should always exceed
the maximum row size for any table in the table space. The chart below specifies the maximum
row size for a given page size. It is possible to create a table in a table space whose page size is
smaller than the row size; however, query performance is seriously degraded.
The second factor is the maximum amount of data stored in a table. The chart below specifies
the maximum table size for a given table space page size:
Page Size   Maximum Row Size   Maximum Table Size
4 KB        4005 bytes         64 GB
8 KB        8101 bytes         128 GB
16 KB       16293 bytes        256 GB
32 KB       32677 bytes        512 GB

The third factor is the number of columns in a table. For a 4K page size, a table is limited to 500
columns. Should a table exceed this limitation, then the table should be moved to a table space
that has a page size of 8K, 16K, or 32K, in which case the maximum number of columns
allowed is 1012.



The fourth factor is the row size relative to the page size. We have seen in a previous chapter
that the closer the page size is to an exact multiple of the row size, the less wasted space there
is on a page.
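The wasted-space argument can be checked with a small calculation. This sketch ignores DB2's per-page and per-row overheads, so the numbers are only illustrative:

```python
def wasted_bytes(page_size, row_size):
    """Rows cannot span pages, so each page holds floor(page_size / row_size)
    rows and the remainder of the page is wasted."""
    rows_per_page = page_size // row_size
    return page_size - rows_per_page * row_size

# A 2048-byte row packs a 4096-byte page exactly, wasting nothing,
# while a 2100-byte row leaves almost half of every page empty.
exact_fit = wasted_bytes(4096, 2048)   # 0 bytes wasted
poor_fit = wasted_bytes(4096, 2100)    # 1996 bytes wasted per page
```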



Extent Size Tuning

Extent size is dependent on:


Size of the table
Growth rate of the table


The extent size for a table space is the number of pages of data that are written to a
container before switching to the next container. For SMS table spaces, space is allocated only
one page at a time; when the number of pages in an extent has been allocated, the engine
switches to the next container. For DMS table spaces, space is allocated a full extent at a time,
which fills up as pages are used; when the extent is full, the engine switches to the next extent.
No matter which type of table space is used, extent size tuning is dependent on two factors.

Size of the table


The first factor is the size of the table. Since each table also has an extent allocated as an extent
map, the minimum number of extents for a table is two. For tables that are relatively small and
static, a small extent size is preferred in order to minimize unused space.

Growth rate
The second factor is growth rate. For tables that are actively growing, a large extent size is
preferred. This reduces the number of extent allocation operations that the Database Manager
must perform.



The range of values for extent size is 2 to 256 pages. The default value is either 32 pages or the
value of the DB configuration parameter DFT_EXTENT_SZ, if it is set. The default value can be
overridden by using the EXTENT SIZE parameter in the CREATE TABLESPACE statement.
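The size/growth trade-off can be illustrated with a quick calculation (a sketch that accounts only for the one extra extent-map extent mentioned above):

```python
def extents_needed(table_pages, extent_size):
    """Data extents needed for a table, plus one extent for the extent map.
    DB2 UDB allows extent sizes from 2 to 256 pages (default 32)."""
    assert 2 <= extent_size <= 256
    data_extents = -(-table_pages // extent_size)  # ceiling division
    return data_extents + 1  # + the extent-map extent

# Small static table: a small extent size wastes less space ...
small = extents_needed(table_pages=5, extent_size=2)      # 3 data extents + map
# ... while a fast-growing table needs far fewer allocation
# operations with a large extent size.
big = extents_needed(table_pages=100_000, extent_size=256)
```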



Prefetch Size Tuning

Prefetching is based on:


Extent size
Sequential reads of data or index pages


Prefetching is the process of reading pages into the buffer pool prior to the user needing the
pages for a query. It is triggered by a sequential read of the table and is similar to the concept of
read-ahead.
Tuning the prefetch size is a straightforward process. First, the prefetch size should be a multiple
of the extent size. Second, IBM case studies have shown that prefetching works best at one or
two multiples of the extent size. There is almost no increase in performance for prefetch
multiples of three or more.
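That guidance reduces to a one-line rule of thumb, shown here as an illustrative helper:

```python
def prefetch_size(extent_size, multiple=2):
    """Prefetch size should be a small multiple of the extent size; the IBM
    case studies cited above suggest multiples beyond 2 add almost nothing."""
    if multiple not in (1, 2):
        raise ValueError("multiples of 3 or more show almost no benefit")
    return extent_size * multiple

size = prefetch_size(32)  # a 32-page extent suggests a 64-page prefetch
```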



Process Tuning

Processes are controlled by:


Database Manager configuration parameters
Generally referred to as agents
Database configuration parameters
Generally referred to as appls


Since the DB2 UDB engine is composed of many single-threaded processes, there are numerous
configuration parameters that control the number of processes running.
At the Database Manager level, there are several configuration parameters that limit the number
of agents available to process the requests submitted by application connections. For example,
the parameter MAXAGENTS limits the maximum number of agents available to all the databases
on the instance.
At the Database level, there are also several configuration parameters that limit the number of
processes running. For example, the parameter MAXAPPLS limits the number of applications that
can be connected to the database at one time.

For More Information


There are many more DB2 UDB configuration parameters that affect processes
other than MAXAGENTS and MAXAPPLS. For a complete list, refer to DB2 UDB
Administration Guide: Performance (Chapter 13).



Tuning MAXAGENTS and MAXAPPLS

MAXAGENTS:
Maximum number of agents available to process requests
across all databases
MAXAPPLS:
Maximum number of applications that can connect to the
database


MAXAGENTS applies to the Database Manager level. It limits the maximum number of processes
that are available to perform all of the requests from all of the applications connected to all of
the databases on the instance. MAXAPPLS only applies to the database and acts to limit the
number of applications that connect to the database at one time. The two configuration
parameters are interrelated and resetting one may require resetting the other.
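One way to see the interrelation: because each connected application is served by an agent, the per-database connection limits should be sized against the instance-wide agent limit. The check below is a hypothetical sanity test, not an IBM-supplied formula, and it ignores agent pooling:

```python
def check_agent_config(maxagents, maxappls_per_db):
    """Flag a configuration in which every database reaching its MAXAPPLS
    limit could demand more agents than MAXAGENTS allows (assuming one
    agent per connected application)."""
    worst_case = sum(maxappls_per_db.values())
    return worst_case <= maxagents

# Two databases each allowing 200 applications, but only 300 agents:
ok = check_agent_config(300, {"SALES": 200, "HR": 200})  # over-committed
```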



DB2 UDB Self-tuning Capability

DB2 UDB has several built-in, self-tuning algorithms:


Memory sections can borrow memory from other sections
An instance can automatically allocate more logical logs


DB2 UDB has several mechanisms that monitor current conditions within the instance or
databases and can self-tune the instance. For example, package cache is a section of memory
used for storing query plans optimized at BIND time. The package cache monitors its memory
use, and, when memory runs low, it attempts to borrow unused memory from other sections.
Another example is the allocation of secondary logical logs. When the database runs out of
space in the primary logical logs, it automatically allocates secondary logical logs. The numbers
of both primary and secondary logical logs are tunable parameters. You will get a chance to see
secondary logical logs that are allocated in a lab exercise at the end of this module.



Log Files

Primary log files:


Set with the LOGPRIMARY database configuration parameter
Permanently allocated

Secondary log files:


Set with the LOGSECOND database configuration parameter
Allocated and deleted as needed by the database


In order to prevent an engine crash as a result of running out of logical logs, if the DB2 UDB
Database Manager determines that there are no more primary logs available, it automatically
begins allocating secondary log files in order to continue operation. As each secondary log
becomes full, the Database Manager again checks to see if there are any primary logs that are
not current and are available to be overwritten. If a primary log is now available, the engine uses
it. If a primary log is not available, the engine will allocate another secondary log. When a
secondary log is no longer needed, the engine deallocates it. This process continues until the
Database Manager has allocated all the secondary logs allowed by the LOGSECOND parameter.
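The allocation loop described above can be sketched as follows; this is an illustrative model of the decision, not the engine's code:

```python
def next_log(primary_available, secondaries_in_use, logsecond):
    """Decide where the next log records go: reuse a freed primary log if
    one exists; otherwise allocate another secondary log, up to LOGSECOND."""
    if primary_available:
        return "reuse primary"
    if secondaries_in_use < logsecond:
        return "allocate secondary"
    return "log full"  # all primaries current and LOGSECOND exhausted

assert next_log(True, 0, 4) == "reuse primary"
assert next_log(False, 3, 4) == "allocate secondary"
assert next_log(False, 4, 4) == "log full"
```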

In V8, there is another parameter that can be set for log management:
MIRRORLOGPATH. Dual logging was introduced in 7.2, but only for UNIX. In that
version, dual logging was enabled by setting the DB2NEWLOGPATH2 registry
variable to Yes. See module 12 (Managing Backup and Recovery) in this course for
more information.



The AUTOCONFIGURE Command

AUTOCONFIGURE (V8): calculates and displays the optimum values for
the buffer pool size, database configuration, and database manager
configuration parameters, with the option of applying these
recommended values.


Command parameters:
USING input-keyword param-value
Valid input keywords and parameter values:
Keyword         Valid values                 Default  Explanation
mem_percent     1-100                        80       Percentage of memory to dedicate. If other
                                                      applications (other than the operating
                                                      system) are running on this server, set
                                                      this to less than 100.
workload_type   simple, mixed, complex       mixed    Simple workloads tend to be I/O intensive
                                                      and mostly transactions, whereas complex
                                                      workloads tend to be CPU intensive and
                                                      mostly queries.
num_stmts       1-1,000,000                  10       Number of statements per unit of work.
tpm             1-50,000                     60       Transactions per minute.
admin_priority  performance, recovery, both  both     Optimize for better performance (more
                                                      transactions per minute) or better
                                                      recovery time.
is_populated    yes, no                      yes      Is the database populated with data?



num_local_apps   0-5,000         0    Number of connected local applications.
num_remote_apps  0-5,000         10   Number of connected remote applications.
isolation        RR, RS, CS, UR  RR   Isolation level of applications connecting to
                                      this database (Repeatable Read, Read Stability,
                                      Cursor Stability, Uncommitted Read).
bp_resizeable    yes, no         yes  Are buffer pools resizeable?
APPLY DB ONLY
    Displays all the recommended changes and applies the recommended changes only to
    the database configuration and the buffer pool settings. This is the default setting if the
    APPLY option is not specified.
APPLY DB AND DBM
    Displays and applies the recommended changes to the database manager configuration,
    the database configuration, and the buffer pool settings.
APPLY NONE
    Displays the recommended changes, but does not apply them.
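Putting the pieces together, a complete invocation can be assembled from keyword/value pairs. The helper function and the workload values below are hypothetical:

```python
def autoconfigure_cmd(apply="DB ONLY", **keywords):
    """Assemble a db2 AUTOCONFIGURE command string from keyword/value pairs."""
    using = " ".join(f"{k} {v}" for k, v in keywords.items())
    return f"AUTOCONFIGURE USING {using} APPLY {apply}"

# Recommend (but do not apply) settings for a mixed workload using 60% of memory:
cmd = autoconfigure_cmd(apply="NONE", mem_percent=60, workload_type="mixed", tpm=500)
# -> "AUTOCONFIGURE USING mem_percent 60 workload_type mixed tpm 500 APPLY NONE"
```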



Monitoring the Server/Database

There are two primary tools with which you can access system
monitor information, each serving a different purpose:
The snapshot monitor enables you to capture a picture of the
state of database activity at a particular point in time
Event monitors log data as specified database events occur

For both snapshot and event monitors you have the option of:
storing monitor information in files or SQL tables
viewing it on screen (directing it to standard-out)
processing it with a client application


Database System Monitor


To facilitate monitoring, DB2 collects information from the database manager, its databases, and
any connected applications. With this information you can do the following, and more:
Forecast hardware requirements based on database usage patterns.
Analyze the performance of individual applications or SQL queries.
Track the usage of indexes and tables.
Pinpoint the cause of poor system performance.
Assess the impact of optimization activities (for instance, altering database manager
configuration parameters, adding indexes, or modifying SQL queries).
There are two primary tools with which you can access system monitor information, each
serving a different purpose: the snapshot monitor and event monitors.
The snapshot monitor enables you to capture a picture of the state of database activity at
a particular point in time (the moment the snapshot is taken).
Event monitors log data as specified database events occur.



The system monitor provides multiple means of presenting monitor data to you. For both
snapshot and event monitors you have the option of storing monitor information in files or SQL
tables, viewing it on screen (directing it to standard-out), or processing it with a client
application.



Snapshot Monitoring

The snapshot monitor includes information such as:


Current status of the database
Information on the current or most recent unit of work
List of locks being held by an application
Status of an application
Current number of connections to a database
Most recent SQL statement performed by an application
Run-time values for configurable system parameters
Number of deadlocks that have occurred
Number of transactions performed on a database
Amount of time an application has waited on locks


The snapshot monitor provides two categories of information for each level being monitored:
State
This includes information such as:
Current status of the database.
Information on the current or most recent unit of work.
List of locks being held by an application.
Status of an application.
Current number of connections to a database.
Most recent SQL statement performed by an application.
Run-time values for configurable system parameters.
Counters
These accumulate counts for activities from the time monitoring started until the time a
snapshot is taken. Such as:
Number of deadlocks that have occurred.
Number of transactions performed on a database.
Amount of time an application has waited on locks.



Snapshot Switch Settings

Example monitor switch settings:

Monitor Recording Switches

Switch list for node 0


Buffer Pool Activity Information (BUFFERPOOL) = OFF
Lock Information (LOCK) = OFF
Sorting Information (SORT) = OFF
SQL Statement Information (STATEMENT) = ON 05-25-1997 10:44:34.82044
Table Activity Information (TABLE) = OFF
Unit of Work Information (UOW) = OFF


You can view the current state of your applications monitor switches:
db2 GET MONITOR SWITCHES

Monitor Recording Switches

Switch list for node 0


Buffer Pool Activity Information (BUFFERPOOL) = OFF
Lock Information (LOCK) = OFF
Sorting Information (SORT) = OFF
SQL Statement Information (STATEMENT) = OFF
Table Activity Information (TABLE) = OFF
Unit of Work Information (UOW) = OFF

In the V8 Snapshot Monitor, timestamp information is also captured, as shown below.



db2 GET DBM MONITOR SWITCHES

DBM System Monitor Information Collected

Switch list for node 0


Buffer Pool Activity Information (BUFFERPOOL) = OFF
Lock Information (LOCK) = ON 01-27-2003 05:48:20.002314
Sorting Information (SORT) = OFF
SQL Statement Information (STATEMENT) = OFF
Table Activity Information (TABLE) = OFF
Take Timestamp Information (TIMESTAMP) = ON 01-27-2003 05:48:08.568414
Unit of Work Information (UOW) = OFF



Snapshot Example Output


The example below shows how you can obtain a list of the locks held by applications connected
to a database, using a database lock snapshot.
First, turn on the LOCK switch (UPDATE MONITOR SWITCHES), so that the time spent
waiting for locks is collected.
db2 UPDATE MONITOR SWITCHES USING LOCK ON
Connect to the database, start a transaction, then take the snapshot:
db2 CONNECT TO sample
db2 +c LIST TABLES FOR ALL
(this command will require locks on the database catalogs)
db2 GET SNAPSHOT FOR LOCKS ON sample



Example of partial output:

From this snapshot, you can see that there is currently one application connected to the
SAMPLE database, and it is holding five locks.
Locks held = 5
Applications currently connected = 1
Note that the time (Status change time) when the Application status became UOW Waiting is
returned as Not Collected, because the UOW switch is OFF.



The lock snapshot also returns the total time spent waiting for locks (so far), by applications
connected to this database.
Total wait time (ms) = 0
This is an example of an accumulating counter.
An application taking snapshots can reset its view of the counters at any time by using the
RESET MONITOR command.
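The effect of RESET MONITOR on an application's view of an accumulating counter can be modeled like this (illustrative only):

```python
class CounterView:
    """RESET MONITOR does not clear the engine's counter; it records the
    current value so this application's view reads as a delta from the reset."""
    def __init__(self):
        self.baseline = 0

    def reset(self, raw_counter):
        self.baseline = raw_counter

    def value(self, raw_counter):
        return raw_counter - self.baseline

view = CounterView()
view.reset(raw_counter=120)             # application issues RESET MONITOR
snapshot = view.value(raw_counter=150)  # a later snapshot sees 30, not 150
```

Other applications taking snapshots keep their own baselines, so a reset by one application does not disturb the others.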



Event Monitors

Event monitors are used to collect information about the database


and any connected applications when specified events occur.
An event monitor writes out database system monitor data to either a
file or a named pipe, when one of the following events occurs:
end of a transaction
end of a statement
a deadlock
start of a connection
end of a connection
database activation
database deactivation
end of a statement's subsection
flush event monitor statement issued


Event monitors are used to collect information about the database and any connected
applications when specified events occur. An event monitor writes out database system monitor
data to either a file or a named pipe, when one of the following events occurs:
end of a transaction
end of a statement
a deadlock
start of a connection
end of a connection
database activation
database deactivation
end of a statement's subsection (when a database is partitioned)
flush event monitor statement issued.
An event monitor effectively provides the ability to obtain a trace of the activity on a database.
For example, a deadlock event monitor waits for a deadlock to occur; when one does, it collects
information about the applications involved and the locks in contention.
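Conceptually, the deadlock detector looks for a cycle in the wait-for graph of applications. A minimal sketch of that idea (not DB2's implementation):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph mapping each application to the
    application whose lock it is waiting on (None if it is not waiting)."""
    for start in wait_for:
        seen = set()
        app = start
        while app is not None:
            if app in seen:
                return True  # revisited an application: a cycle exists
            seen.add(app)
            app = wait_for.get(app)
    return False

# Application 1 waits on 2, and 2 waits on 1: a deadlock.
assert has_deadlock({1: 2, 2: 1})
assert not has_deadlock({1: 2, 2: None})
```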



Note: Whereas the snapshot monitor is typically used for preventive maintenance and
problem analysis, event monitors are used to alert administrators to immediate
problems or to track impending ones.

In V8, event monitor output can be directed to SQL tables, a file, or a named pipe.



Event Monitor Example

You want DB2 to log the occurrence of deadlocks between


connections to a database.
First, you must create and activate a DEADLOCK event monitor:
db2 CONNECT TO sample
db2 "CREATE EVENT MONITOR dlockmon
FOR DEADLOCKS WRITE TO FILE '/tmp/dlocks'"
mkdir /tmp/dlocks
db2 "SET EVENT MONITOR dlockmon STATE 1"
The event trace is written as a binary file which needs to be formatted
using the db2evmon tool:
db2evmon -path /tmp/dlocks
Reading /tmp/dlocks/00000000.evt . . .


To create an event monitor, use the CREATE EVENT MONITOR SQL statement. Event
monitors collect event data only when they are active. To activate or deactivate an event
monitor, use the SET EVENT MONITOR STATE SQL statement. The status of an event
monitor (whether it is active or inactive) can be determined by the SQL function
EVENT_MON_STATE.
When the CREATE EVENT MONITOR SQL statement is executed, the definition of the event
monitor it creates is stored in the following database system catalog tables:
SYSCAT.EVENTMONITORS: event monitors defined for the database.
SYSCAT.EVENTS: events monitored for the database.
SYSCAT.EVENTTABLES: target tables for table event monitors.
Each event monitor has its own private logical view of the instance's data in the monitor
elements. If a particular event monitor is deactivated and then reactivated, its view of these
counters is reset. Only the newly activated event monitor is affected; all other event monitors
continue to use their view of the counter values (plus any new additions).



Example Event Monitor Request:
For example, you can request that DB2 logs the occurrence of deadlocks between connections to
a database. First, you must create and activate a DEADLOCK event monitor:
db2 CONNECT TO sample
db2 "CREATE EVENT MONITOR dlockmon FOR DEADLOCKS WRITE TO FILE '/tmp/
dlocks'"
mkdir /tmp/dlocks
db2 "SET EVENT MONITOR dlockmon STATE 1"

Setting the Stage:


Now, two applications using the database enter a deadlock. That is, each one is holding a lock
that the other one needs in order to continue processing. The deadlock is eventually detected and
resolved by the DB2 deadlock detector component, which will roll back one of the transactions.
This can be set up with the following scenario:

Application 1:
db2 CONNECT TO sample
db2 +c "INSERT INTO staff VALUES (1, 'Ofer', 1, 'Mgr', 0, 0, 0)"
DB20000I The SQL command completed successfully.
The +c option turns autocommit off for CLP. Application 1 is now holding an exclusive lock on
a row of the staff table.

Application 2:
db2 CONNECT TO sample
db2 +c "INSERT INTO department VALUES
('1', 'System Monitor', '1', 'A00', NULL)"
DB20000I The SQL command completed successfully.
Application 2 now has an exclusive lock on a row of the department table.

Application 1:
db2 +c SELECT deptname FROM department
Assuming cursor stability, Application 1 needs a share lock on each row of the department table
as the rows are fetched, but a lock on the last row cannot be obtained because Application 2 has
an exclusive lock on it. Application 1 enters a LOCK WAIT state, while it waits for the lock to
be released.



Application 2:
db2 +c SELECT name FROM staff
Application 2 also enters a LOCK WAIT state, while waiting for Application 1 to release its
exclusive lock on the last row of the staff table.
These applications are now in a deadlock. The waiting will never be resolved because each
application is holding a resource that the other one needs to continue. Eventually, the deadlock
detector checks for deadlocks (at the interval set by the DLCHKTIME database configuration
parameter) and chooses a victim to roll back:

Application 2:
SQL0911N The current transaction has been rolled back
because of a deadlock or timeout.
Reason code "2". SQLSTATE=40001
At this point the event monitor logs a deadlock event to its target (in this case, file /tmp/dlocks).
Application 1 can now continue.
Because an event monitor buffers its output and this scenario did not generate enough event
records to fill a buffer, the event monitor values need to be flushed to the event monitor output
writer:

Monitor Session:
db2 "FLUSH EVENT MONITOR dlockmon BUFFER"
DB20000I The SQL command completed successfully.
The event trace is written as a binary file. It can now be formatted using the db2evmon tool:

Monitor Session:
db2evmon -path /tmp/dlocks
Reading /tmp/dlocks/00000000.evt . . .



This will format and print to stdout, part of which is shown here:



Performance Configuration Wizard

Used to tune the performance of a database by updating


configuration parameters
Enter required information for the wizard:
Select the proper Target Memory to use
Optimize for data warehousing, order entry, or both
Enter the estimated transactions-per-minute
Optimize for faster transactions, faster recovery, or both
Indicate if the database is populated
Enter the average number of connected local and remote
applications
Indicate RR, RS, CS, or UR isolation level


You can use the Performance Configuration wizard to tune the performance of a database by
updating configuration parameters to match your business requirements.
From the Control Center, right-click the database you want to tune and select Configure
Performance Using Wizard (in V8, select Configuration Advisor).
Enter the following information on the various wizard panels:
Server panel: Select the proper Target Memory you want to use.
Workload panel: Optimize for data warehousing, order entry, or both.
Transactions panel: Select short or long transactions and enter the estimated
transactions-per-minute.
Priority panel: Optimize for faster transactions, faster recovery, or both.
Populated panel: Select whether the database is populated or not.
Connections panel: Enter the average number of connected local applications and the
average number of connected remote applications.
Isolation panel: Indicate RR, RS, CS, or UR isolation level.
Schedule panel: Schedule the task to run later.
Results panel: Review your settings and change them if needed.
Click the Finish button to execute the changes or schedule the task.



Use the Workload Performance wizard (Design Advisor in V8) to guide you through tasks that
help tune queries.

The Performance Configuration wizard has been renamed to the Configuration
Advisor, and the Workload Performance wizard has been renamed to the Design
Advisor (V8).



Health Center

New in V8, the Health Center is a graphical interface used to
set Health Monitor parameters and view the overall health of
database systems.
Health Monitor alerts:
alarm
warning
attention

When an alert is raised two things can occur:


Alert notifications can be sent
Preconfigured actions can be taken


The Health Center (V8) is a graphical interface that is used to view the overall health of
database systems. Using the Health Center, you can view details and
recommendations for alerts on health indicators and take the recommended actions
to resolve the alerts.

The Health Center provides the graphical interface to the Health Monitor. You use it to configure
the Health Monitor, and to see the rolled-up alert state of your instances and database objects.
Using the Health Monitor's drill-down capability, you can access details about current alerts and
obtain a list of recommended actions that describe how to resolve the alert.

Health Monitor
Health Monitor information can be accessed through the Health Center, Web Health Center, the
CLP, or APIs. Health indicator configuration is available through these same tools.
The Health Monitor is a server-side tool that constantly monitors the health of the instance, even
without user interaction. If the Health Monitor finds that a defined threshold has been exceeded



(for example, the available log space is not sufficient), or if it detects an abnormal state for an
object (for example, an instance is down), the Health Monitor will raise an alert.
There are three types of alerts:
alarm
warning
attention
When an alert is raised two things can occur:
Alert notifications can be sent by e-mail or to a pager address, allowing you to contact
whoever is responsible for a system.
Preconfigured actions can be taken. For example, a script or a task (implemented from
the new Task Center) can be run.
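The threshold scheme can be sketched as a small classifier; the indicator name and threshold values below are hypothetical, not DB2's defaults:

```python
def classify(value, warning, alarm):
    """Map a health-indicator reading to an alert level, mirroring the
    warning/alarm threshold scheme described above."""
    if value >= alarm:
        return "alarm"
    if value >= warning:
        return "warning"
    return "normal"

# Hypothetical log-space-used indicator with 80%/95% thresholds:
level = classify(97, warning=80, alarm=95)  # exceeds the alarm threshold
```

In the real Health Monitor, the thresholds for each health indicator are predefined but customizable, and the resulting alert can trigger a notification or a preconfigured action.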

Health Indicators
A health indicator is a system characteristic that the Health Monitor checks. The Health Monitor
comes with a set of predefined thresholds for these health indicators. The Health Monitor checks
the state of your system against these health-indicator thresholds when determining whether to
issue an alert. Using the Health Center, commands, or APIs, you can customize the threshold
settings of the health indicators, and define who should be notified and what script or task
should be run if an alert is issued.
The following are the categories of health indicators:
Table space storage
Sorting
Database management system
Database
Logging
Application concurrency
Package and catalog caches, and workspaces
Memory
You can follow one of the recommended actions to address the alert. If the recommended action
is to make a database or database manager configuration change, a new value will be
recommended and you can implement the recommendation by clicking on a button. In other
cases, the recommendation will be to investigate the problem further by launching a tool, such as
the CLP or the new Memory Visualizer.



Memory Visualizer

The Memory Visualizer, also new in V8, provides a graphical
depiction of memory usage.

With the Memory Visualizer, you can:


View or hide data in various columns on the memory utilization
of selected components
Change settings for individual memory components
Load performance data from a file into a Memory Visualizer
window
Save the performance data


The Memory Visualizer helps you to uncover and fix memory-related problems on a DB2
instance. It uses visual displays and plotted graphs to help you understand memory components
and their relationships to one another. You can invoke it from a Health Center recommendation
or use it on its own as a monitoring tool.
There are several ways to start the Memory Visualizer:
Start > IBM DB2 > Monitoring Tools > Memory Visualizer
From the Control Center, right-click on the instance and select View memory usage
From the CLP, run the db2memvis program
With the Memory Visualizer, you can:
View or hide data in various columns on the memory utilization of selected components
for a DB2 instance and its databases.
Change settings for individual memory components by updating configuration
parameters.
Load performance data from a file into a Memory Visualizer window.
Save the performance data.



Memory Visualizer Panel


The Memory Visualizer window displays two views of data: a tree view (shown above) and a
historical view. A series of columns show percentage threshold values for upper and lower
alarms and warnings. The columns also display real time memory utilization.

Hierarchical Tree Organization


The Memory Visualizer tool uses a hierarchical tree organization to help you to display and
browse the memory components in DB2. The hierarchical tree allows you to expand and view
information on individual memory components through columns, graphical displays, and graphs.
The tree view comprises four major types of memory items:
DB2 Instance
The instance that is currently running on the system
Databases
The databases defined on the instance



High-level memory components
Logical groupings for leaf-level memory components. These groups are: Database
manager shared memory, Database global memory, Agent private memory, Agent /
Application shared memory
Leaf-level memory components
The memory components that display in the Memory Visualizer window such as buffer
pools, sort heaps, database heap, and lock list.
Icons in the tree view represent each memory tree item.
If the memory utilization for a tree item exceeds a threshold value, a colored indicator overlays
the icon. Yellow indicates a warning condition; red indicates an alarm condition.

Historical View
The historical view displays data for memory components selected in the tree view. The data
includes values for memory allocated and utilized, plotted graphs, as well as changes made to
the configuration parameters while the Memory Visualizer is running. The data is saved for a
specific period within the Memory Visualizer. You can save memory performance data to a
Memory Visualizer data file for tracking, comparing with other data, or troubleshooting.

Memory Graph
The memory graph displays plotted data for selected memory components in the Memory Usage
Plot. Each component in the graph is identified by a specific color, which also displays in the
Plot Legend column in the Memory Visualizer window. The graph also displays changes made
to the configuration parameters settings. The original value of the configuration parameter and
the new value setting appear in the graph, in addition to the time that the change was requested.
They become part of the history view that you can use in assessing memory performance.



Other Data Management Tools

Recovery Expert
Provides simplified, comprehensive, and automated recovery:
Course: Using the DB2 Recovery Expert Tool for Multiplatforms

Performance Expert
Performance Expert integrates performance monitoring, reporting, buffer
pool analysis, and a Performance Warehouse function into one tool,
providing a comprehensive view of DB2 performance-related information:
Course: Using the DB2 Performance Expert Tool

Buffer Pool Analyzer


IBM DB2 Buffer Pool Analyzer helps database administrators manage buffer
pools more efficiently:
Included in course: Using the DB2 Performance Expert Tool


Recovery Expert
IBM DB2 Recovery Expert provides simplified, comprehensive, and automated recovery with
extensive diagnostics and SMART (self managing and resource tuning) techniques to minimize
outage duration.
Provides targeted, flexible and automated recovery of database assets, even as systems
remain online.
Allows expert or novice DBAs to recover database objects safely, precisely and quickly
without having to resort to full disaster recovery processes.
Offers precision recovery options to support safe database development and
maintenance.
Has built-in self-managing and resource tuning (SMART) features that provide
intelligent analysis of altered, incorrect, or missing database assets (including tables,
indexes, or data) and automate the process of rebuilding those assets to a correct point
in time, all without disruption to normal database or business operations.
Supports DB2 Version 7 and later on Microsoft Windows, HP-UX, Sun's Solaris
Operating Environment, IBM AIX and Linux.

The IBM course available for this topic is:
Using the DB2 Recovery Expert Tool for Multiplatforms (Course L1-271)
This course provides installation and usage training for the DB2 Recovery Expert tool.
The course trains the student to use the various GUI panels of the Recovery Expert tool,
including the log analysis feature.
During the course the student will install and run the Recovery Expert against the DB2
SAMPLE database on the Windows platform.

Performance Expert
Performance Expert integrates performance monitoring, reporting, buffer pool analysis, and a
Performance Warehouse function into one tool. It optimizes the performance of IBM DB2
Universal Database by providing a comprehensive view of DB2 performance-related
information. It also provides you with reports, analysis, and recommendations.
In general, Performance Expert includes the following advanced capabilities:
Analyzes, controls and tunes the performance of DB2 and DB2 applications.
Provides expert analysis, a real-time online monitor, and a wide range of reports for
analyzing and optimizing DB2 applications and SQL statements.
Includes a Performance Warehouse for storing performance data and analysis tools.
Defines and applies analysis functions (rules of thumb, queries) to identify performance
bottlenecks.
Includes a starter set of smart features that provide recommendations for system
tuning to gain optimum throughput.
DB2 Buffer Pool Analyzer collects data and provides reports on related event activity,
to obtain information on current buffer pool behavior. It can provide these reports in the
form of tables, pie charts, and diagrams.
Monitors DB2 Connect Gateways including application and system related
information.
Supports DB2 on Microsoft Windows, HP-UX, Solaris Operating Environment, IBM
AIX, and Linux.
The IBM course available for this topic is:
Using the DB2 Performance Expert Tool (Course L1-274)
This course provides installation and usage training for Performance Expert V1.1 for
Multiplatform. The course trains the student to use the various GUI panels of the
Performance Expert tool on Windows.

During the course the student will install and run the Performance Expert against the
DB2 SAMPLE database (and other databases as needed) on the Windows platform.

V8 Note: In V8 of DB2, known as Enterprise Server Edition (ESE), a multinode instance
will be created by default. Performance Expert will not work within a multinode
environment. If installing Performance Expert on a version 8 DB2 instance, apply
the latest DB2PE Service Pack 6 for the server and the client. Also note the ESE
information in module three of this course.

Buffer Pool Analyzer


IBM DB2 Buffer Pool Analyzer (part of Performance Expert) helps database administrators
manage buffer pools more efficiently by providing information about current buffer pool
behavior and using simulation to anticipate future behavior.
Provides data collection of virtual buffer pool activity via DB2.
Offers comprehensive reporting of the buffer pool activity, including:
Ordering by various identifiers such as buffer pool, plan, object, and primary
authorization id
Sorting by getpage, sequential prefetch, and synchronous read
Filtering capability, and loading into tables
Allows you to simulate buffer pool usage for varying buffer pool sizes and different
object placement
Displays report and simulation results on workstations in form of spreadsheets, graphs
and diagrams
Support is provided for object placement, including support of inactive objects
(tablespaces and buffer pools).
Expert analysis is available through an easy-to-use wizard that runs on a workstation
and guides you through the process of object placement.
Batch trace collection is provided in addition to ISPF Collect Report Data.
Reporting improvements:
Several comprehensive reports are provided which you can browse or print.
Buffer Pool Analyzer (a part of Performance Expert) is covered in this course:
Using the DB2 Performance Expert Tool (Course L1-274)

Summary

You should now be able to:


Describe the basic performance tuning techniques for DB2 UDB and
compare with Oracle
Determine and set database buffer pool sizing
Set database buffer pool cleaning frequency
Set the proper page size for extents
Describe the differences between DMS and SMS table space
performance
List and describe the DB2 UDB self-tuning capabilities
V8: Use the AUTOCONFIGURE command to set optimum database and/or
database server configuration parameters

Related Classes
For more training on performance tuning, consider attending DB2 UDB
Performance Tuning and Monitoring Workshop (CF41). This is a four-day course
that covers:
Using the System Monitor GUI
Structure of database global memory
Snapshot and event monitors
RUNSTATS, REORG and REBIND
Optimizer strategies and the Visual Explain GUI

Exercises

Exercise 1

Performance Tuning Lab Preparation


There is a tar file called db2_perf_tun_lab_1.tar in your perf_tun_labs directory.
a. Untar this file using tar -xvf db2_perf_tun_lab_1.tar. This will create the
necessary subdirectories and scripts needed for the performance tuning lab 1.
b. Verify that the appropriate directories have been created using the ls lab*
command. You should see a single subdirectory: lab1.

You will be loading raw data into both an SMS-type of table space and a DMS-type of table
space. This will allow you to compare the usage of these two types of table spaces, from a
performance perspective.

Files/Scripts Needed for Performance Tuning Lab 1:


Scripts for the student home directory:
create_onektup_dms.sql - creates the onektup table in a DMS table space
create_onektup_sms.sql - creates the onektup table in USERSPACE1, the default
SMS table space
load_onektup.250K - load script with the UNIX time function to load 250,000 rows
into the onektup table
You are now ready to proceed with the lab.

Loading Raw Data: SMS versus DMS


In this lab, you will load a 250,000 row table into:
An SMS table space using all system defaults. In this case, DB2 UDB will allocate disk
space as needed.
A DMS table space with preallocated containers. The number of containers will be
increased to show the performance advantage of multiple containers.
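In a DMS table space, extents are written round-robin across the containers, which is where the multi-container performance advantage comes from. As a preview, a hedged sketch of what a multi-container definition looks like (the device names follow the lab's inst### naming convention and are illustrative; the lab scripts you actually run may differ):

```shell
# Illustrative only: a DMS table space striped over two raw-device containers,
# each sized at 25000 4K pages. Extents alternate between the two devices.
db2 "create tablespace onektbl
     managed by database using
     (device '/dev/rdsk/inst###A' 25000,
      device '/dev/rdsk/inst###B' 25000)"
```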

SMS Table-space Usage
1.1 Change directory to lab1.

1.2 Create a database called wisc_db using an SMS table space with
/export/home/inst###/sms_disk as the path for the disk.

1.3 Connect to the wisc_db database and look at the three default table spaces created for
wisc_db. Each one should be of type System managed space.

1.4 Examine disk allocation under the sms_disk subdirectory.

1.5 Using the create_onektup_sms.sql script, create the onektup table. This will be created
using all system defaults.

1.6 Look at the details for all of the table spaces again.

1.7 Note the following for USERSPACE1:


Table space ID
Name
Type
Total pages
Used Pages
Page Size
Extent Size (pages)
Prefetch Size (pages)
# of Containers

1.8 Examine disk allocation for the sms_disk subdirectory with the UNIX command
du -k -s sms_disk. The output is in kilobytes.
Kilobytes used = _______.

1.9 Now load 250,000 rows into the onektup table. Use the load_onektup.250K script in
your home directory. It will report the time for the load. The real time is the clock time.
Load time = _______ (Your time will depend on hardware and box load.)

1.10 Again, examine the disk allocation for the sms_disk subdirectory with
du -k -s sms_disk. The output is in kilobytes.
Kilobytes used = ______

1.11 Look at the table-space detail for USERSPACE1 and note the following:
Pages used = ________
Number of containers = ________
In the next lab, you will load the database using DMS containers and compare the load times.

DMS Table-Space Usage


1.12 Drop the onektup table.

1.13 Create a DMS table space named onektbl for the onektup table using one container. Use
the raw device /dev/rdsk/inst###F. This raw device is 100 megabytes in size and has
been preallocated for your use. Use the defaults for all other settings.

1.14 Look at the table-space details for the table space, onektbl. Note the following:
Table space ID
Name ONEKTBL
Type Database managed space
Total pages
Useable pages
Used pages
Page size
Extent size (pages)
Prefetch size (pages)
# of containers

1.15 Create the onektup table in this DMS table space using the create_onektup_dms.sql
script.

1.16 Now load 250,000 rows into the onektup table using the load_onektup.250K script in
your home directory. It will report the time for the load.
Load time = __________ (Your time will depend on hardware and box load.)

1.17 Look at the table-space details for the table space onektbl. Note the following:
Table space ID
Name ONEKTBL
Type Database managed space
Total pages
Useable pages
Used pages
Page size
Extent size (pages)
Prefetch size (pages)
# of containers

Exercise 2

Tuning the Buffer Pool


This lab is designed to allow the student to tune the buffer pool and its activity for the wisc_db
database. It uses an UPDATE script that updates all the rows in a 2.5-million-row table.
The steps and their order in this lab are critical. If a step is skipped or done out of order, it will
significantly impact the results. Also, if the class is large and all the students are running this
same lab at the same time, performance of each instance could be affected. The instructor may
want to stagger the running of this lab. Each iteration takes between 2 and 3 minutes, so the wait
for the other teams will not be excessive.
The wisc_db database parameters being tuned are:
CHNGPGS_THRESH - determines when io_cleaners are awakened to clean pages in a
given buffer pool for a database.
NUM_IOCLEANERS - the number of io_cleaners for a database responsible for
cleaning/flushing dirty pages to disk.
BUFFERPOOL - the size of the buffer pool for the wisc_db database. This is not
a database configuration parameter so much as it is the size of the buffer pool created
for the wisc_db database.
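The first two are database configuration parameters; the buffer pool size is changed with the ALTER BUFFERPOOL SQL statement instead. Since CHNGPGS_THRESH is a percentage of changed pages, the lab's 1000-page bp_tenktup1 buffer pool with CHNGPGS_THRESH at 60 triggers the cleaners once roughly 600 pages are dirty. A hedged sketch of one tuning iteration (the parameter values are illustrative, not recommendations):

```shell
# Change the two database configuration parameters (example values only).
db2 update db cfg for wisc_db using NUM_IOCLEANERS 4
db2 update db cfg for wisc_db using CHNGPGS_THRESH 40

# The buffer pool size is altered with SQL, not with UPDATE DB CFG.
db2 connect to wisc_db
db2 "alter bufferpool bp_tenktup1 size 2000"

# Deactivate and reactivate so the configuration changes take effect.
db2 force application all
db2 terminate
db2 deactivate database wisc_db
db2 activate database wisc_db
```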
Metrics being monitored:
Response time
The response time returned for the UPDATE by a UNIX time command.
Database Snapshot: dirty page threshold cleaner triggers
This indicates the number of times the amount of dirty pages exceeded the
threshold setting, triggering the IO_CLEANERS to clean pages. This number starts
out low in most cases, and response time / performance typically improves as the
number increases.
Database Snapshot: pool_data_writes and pool_async_data_writes
If these are equal, you might be able to decrease the NUM_IOCLEANERS parameter.
If pool_data_writes is much greater than pool_async_data_writes, then the
NUM_IOCLEANERS should be increased.
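The pool-writes comparison can be scripted against saved snapshot output. A minimal sketch, assuming the counter labels shown match your DB2 version's snapshot format (the numbers here are made up for illustration):

```shell
# Hypothetical excerpt of "db2 get snapshot" output; labels are assumptions,
# values are illustrative.
snapshot='Buffer pool data writes                    = 5000
Asynchronous pool data writes              = 3500'

# Pull each counter value out of the text.
pool_data_writes=$(printf '%s\n' "$snapshot" |
  awk -F'=' '/^Buffer pool data writes/ {gsub(/ /,"",$2); print $2}')
pool_async_data_writes=$(printf '%s\n' "$snapshot" |
  awk -F'=' '/^Asynchronous pool data writes/ {gsub(/ /,"",$2); print $2}')

# Apply the rule of thumb from the text above.
if [ "$pool_data_writes" -eq "$pool_async_data_writes" ]; then
  echo "all writes asynchronous: NUM_IOCLEANERS could possibly be decreased"
else
  echo "synchronous writes occurred: consider increasing NUM_IOCLEANERS"
fi
```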

Lab Preparation
There is a tar file in your perf_tun_labs subdirectory in your home directory called
db2_perf_tun_lab_2.tar. Untar it with the tar xvf db2_perf_tun_lab_2.tar command. It will
create the necessary lab2 subdirectory and the scripts for this lab. Verify that the lab2
subdirectory does exist.

Buffer pool/Page Cleaning Tuning


In this lab, you will:
Create the tenktup1 table.
Load the tenktup1 table with 2.5 million rows
Run an UPDATE that updates all 2.5 million rows to get a baseline worst case response
time.
Monitor (via a database snapshot) the page cleaning and related activity taking place
during each update.
Tune the database parameters that have the highest impact on the performance of the
UPDATE SQL.
Rerun the UPDATE SQL, and note performance changes.

Pre-Lab Engine Preparation


2.1 You will need to increase your log file space. Modify the database configuration
parameters LOGPRIMARY to be 75 and LOGFILSIZ to be 2000. These changes will not
take effect until all database connections have been completed/terminated and the
database is reactivated or connected by an application.

2.2 Change directory to the perf_tun_labs/lab2 directory.

2.3 Modify the create_tenktup1_lab2.db2 script. Locate the create tablespace command.
Change this to use the raw devices preallocated for your student login. For example, if
you are logged in as inst101, your raw devices would be /dev/rdsk/stu101A through
/dev/rdsk/stu101F.

2.4 Run the create_tenktup1_lab2.db2 script. This will do the following:
a. Drop the onektbl table space. This will drop the onektup table and its associated
containers. You will need to do this to reuse your student containers from a prior
lab.
b. Drop the bp_tenktup1 buffer pool. (The first time you run this script, this buffer
pool will not exist. That's okay; it will be effective the next time you run the
script).
c. Create a buffer pool called bp_tenktup1 with a size of 1000 pages.
d. Create a table space called tenktbl utilizing 6 containers:
/dev/rdsk/<student login>A through F
Each raw space is 100 MB in size and is preallocated for your use.
e. Create the tenktup1 table in table space tenktbl.

Warning!
Please be very careful with your device names so as not to steal someone else's
raw devices.

2.5 Check the size of the buffer pool created for this lab by using:
db2 "select * from syscat.bufferpools"
Two buffer pools should be shown:
IBMDEFAULTBP as bufferpoolid 1, size of 1000 NPAGES
BP_TENKTUP1 as bufferpoolid 2, size of 1000 NPAGES initially

2.6 Validate the creation and details for the tablespace tenktbl using:
db2 list tablespaces show detail

2.7 Run the load_tenktup1.sh script. This will load the tenktup1 table with 2.5 million
rows. It will also perform a row count for verification. (The load should only take around
a minute, depending on the resource load on the box.)

2.8 Validate that CHNGPGS_THRESH is set to 60 and NUM_IOCLEANERS is set to 1 for the
wisc_db. If not, set these values, then deactivate and reactivate the database.

Lab Steps: Benchmarks
From this point on, you will be modifying different database configuration parameters and
running the following steps below repeatedly. Use the chart below to make one change at a time,
recording the requested results.
2.9 Run an UPDATE against the tenktup1 table using:
db2 connect to wisc_db
time (db2 "update tenktup1 set stringu1='<some string>'")
You should see output similar to the following (the time could vary greatly):
real 1m58.91s
user 0m0.02s
sys 0m0.05s
Record the real time in the attached chart.

2.10 Take a snapshot of the database using the db2 get snapshot for all databases command.
Record the Dirty page threshold cleaner triggers from this output in the chart. Note: you
must have previously turned on the bufferpool snapshot switch before you can take
snapshots: db2 update dbm cfg using dft_mon_bufpool on.

2.11 Clean the buffer pool of all pages from the tenktup1 table.
(Note that with a small buffer pool this is not necessary, but a good practice for
benchmarking.)

2.12 Modify one parameter as asked for in the chart, and repeat at step 2.9 above. Note that
you need to use a different UPDATE string each time you rerun the UPDATE.
Run       UPDATE time   Snapshot: dirty page          NUM_IOCLEANERS   CHNGPGS_THRESH (%)
          in seconds    threshold cleaner triggers
Default                                               1                60
2                                                     2                60
3                                                     4                60
4                                                     4                40
5                                                     4                20
6                                                     4                10
7                                                     4                 5

Exercise 3

Logical Logs
This lab shows how to change the location of the logical logs as well as how secondary logs are
utilized in DB2 UDB.
You will use a script (watch_logs.sh) that repeatedly reports the number of logs used by the
database. This is helpful because it will show the secondary log files that are allocated when
needed, then released and deallocated when no longer needed.

Scripts Needed:
watch_logs.sh - a script to repeatedly monitor the number of logical logs in the log
directory.
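The supplied script is not reproduced in this manual; a minimal sketch of the idea, assuming the logs live under $HOME/logs (the real watch_logs.sh loops until interrupted and may differ in detail):

```shell
# Count the log files in a directory; watch_logs.sh would call something like
# this in a loop, so you can see secondary logs appear and disappear.
count_logs() {
  ls "$1" 2>/dev/null | wc -l | tr -d ' '
}

LOGDIR="${LOGDIR:-$HOME/logs}"
echo "log files in $LOGDIR: $(count_logs "$LOGDIR")"
```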

Lab Steps
This lab assumes that the onektup table is still loaded with around 250,000 rows. If not, reload
it as you did in the previous lab. Validate the row count in the onektup table. There should be
around 250,000 rows. If you need to reload the onektup table, do so with the scripts in your
$HOME/perf_tun_labs/lab1 directory.
You will need two terminal windows opened for this lab.
3.1 In window A:
a. Change to your home directory if not already there.
b. Create a subdirectory in your home directory called logs.
c. Change to the perf_tun_labs directory in your home directory.
d. Untar the db2_perf_tun_lab_3.tar using the tar xvf db2_perf_tun_lab_3.tar
command. This will create a directory called lab3 and will place the
watch_logs.sh script file there.
e. Change the location of the logical logs for the wisc_db database to
/export/home/inst<student number>/logs (the directory you just created).
f. Change the number of primary logs to 5.
g. Change the number of secondary logs to 100.
h. Change the size of the log files to 500.
i. Break any connections to the database.

j. Deactivate the wisc_db database.
k. Activate the wisc_db database. This will create the appropriate number of initial
logical logs.

3.2 In window B:
a. Change to the $HOME/perf_tun_labs/lab3 directory.
b. Run the watch_logs.sh using ./watch_logs.sh. This will watch the number of logs
existing in the $HOME/logs subdirectory. (Ignore any error messages as long as
the number of logs is reported accurately. You should see the number of logs equal
to the LOGPRIMARY parameter for the wisc_db.)

3.3 In window A:
a. Connect to the wisc_db database.
b. Run the following update, and watch the number of logs in window B:
time (db2 "update onektup set stringu1='<some string>'")
c. When the UPDATE is complete, the high number of logs should still be shown in
window B. These are not deallocated until the database is deactivated.
d. Break any connections to the database.
e. Deactivate the wisc_db database. The number of logs should go back down to the
LOGPRIMARY value, since the secondary logs have been deallocated.

Exercise 4

Optional Exercise - based on the availability of DB2 V8


This exercise requires the use of version 8 of DB2, which provides the AUTOCONFIGURE
capability.

Note: In this exercise, your values of the various configuration parameters may be
different from those shown.

4.1 Open a DB2 Command Window, and get the current DBM and DB configuration
parameters, saving this information to files (dbm_cfg.txt and db_cfg.txt, respectively)
for further study.

4.2 Using the view program, open the dbm_cfg.txt file, and search for SHEAPTHRES.
Note its value.
SHEAPTHRES = ______________

4.3 In the DB2 Command Window, use the DB2 AUTOCONFIGURE utility to specify that
mem_percent be set to 50, and only for DB. Redirect your output to the auto_cfg.txt
file for later study.

4.4 Again in the command window, use the view program to look at the contents of the
auto_cfg.txt file. Search for the SHEAPTHRES parameter values, and list the old and
new values.
SHEAPTHRES - old = ______________
SHEAPTHRES - new = ______________

4.5 Have these changes become effective yet?

4.6 List the other memory parameters that will be changed, as specified in the auto_cfg.txt
file.
Description                        Parameter          Old Value   New Value
Sort heap threshold                SHEAPTHRES
Max size of appl. group mem set    APPGROUP_MEM_SZ
Max storage for lock list          LOCKLIST
Log buffer size                    LOGBUFSZ
Package cache size                 PCKCACHESZ
Sort list heap                     SORTHEAP
16K-BP                             Bufferpool size
8K-BP                              Bufferpool size
IBMDEFAULTBP                       Bufferpool size

4.7 Are there other parameters, other than memory ones, that are effected by your change?
List them.
Description                                     Parameter          Old Value   New Value
Maximum query degree of parallelism             MAX_QUERYDEGREE
Agent pool size                                 NUM_POOLAGENTS
Catalog cache size                              CATALOGCACHE_SZ
Log file size                                   LOGFILSIZ
Number of secondary log files                   LOGSECOND
Percent. of lock lists per application          MAXLOCKS
Number of I/O servers                           NUM_IOSERVERS
Percent log file reclaimed before soft chckpt   SOFTMAX

4.8 In the open DB2 Command Window, stop and restart the instance.

4.9 Now check your parameter values and see if they changed. Redirect your output to
new_dbm_cfg.txt and new_db_cfg.txt.

4.10 Were you successful in changing SHEAPTHRES?

4.11 Were you successful in changing any DB configuration parameters, such as
LOCKLIST?

Solutions

Solution 1

SMS Table-space Usage


1.1 Change directory to lab1.
cd lab1

1.2 Create a database called wisc_db using an SMS table space with
/export/home/inst###/sms_disk as the path for the disk.
db2 create database wisc_db
on "/export/home/inst###/perf_tun_labs/lab1/sms_disk"

1.3 Connect to the wisc_db database and look at the three default table spaces created for
wisc_db. Each one should be of type System managed space.
db2 connect to wisc_db
db2 list tablespaces show detail

1.4 Examine disk allocation under the sms_disk subdirectory.


ls sms_disk

1.5 Using the create_onektup_sms.sql script, create the onektup table. This will be created
using all system defaults.
./create_onektup_sms.sql

1.6 Look at the details for all of the table spaces again.
db2 connect to wisc_db
db2 list tablespaces show detail

1.7 Note the following for USERSPACE1:
Tablespace ID 2
Name USERSPACE1
Type System managed space
Total pages 2
Used Pages 2
Page Size 4096
Extent Size (pages) 32
Prefetch Size (pages) 32
# of Containers 1

1.8 Examine disk allocation for the sms_disk subdirectory with the UNIX command
du -k -s sms_disk. The output is in kilobytes (this number may vary slightly).
du -k -s sms_disk
kilobytes used = 16894

1.9 Now load 250,000 rows into the onektup table. Use the load_onektup.250K script in
your home directory. It will report the time for the load. The real time is the clock time.
./load_onektup.250K
Load time = _______. (Your time will depend on hardware and box load.)

1.10 Again, examine the disk allocation for the sms_disk subdirectory with du -k -s
sms_disk. The output is in kilobytes.
du -k -s sms_disk
kilobytes used = 72522

1.11 Look at the table-space detail for USERSPACE1, and note the following:
db2 connect to wisc_db
db2 list tablespaces show detail
Pages used: 13899 (4K pages)
Number of containers: 1
In the next lab, you will load using DMS containers and compare the load times.

DMS Table-Space Usage
1.12 Drop the onektup table.
db2 connect to wisc_db
db2 drop table onektup

1.13 Create a DMS table space named onektbl for the onektup table using one container. Use
the raw device /dev/rdsk/inst###F. This raw device is 100 megabytes in size and has
been preallocated for your use. Use the defaults for all other settings.
db2 "create tablespace onektbl pagesize 4096
managed by database using
(device '/dev/rdsk/inst###F' 25000)"

1.14 Look at the table-space details for the table space, onektbl. Note the following:
db2 connect to wisc_db
db2 list tablespaces show detail
Table space ID 3
Name ONEKTBL
Type Database managed space
Total pages 25000
Useable pages 24992
Used pages 96
Page size 4096
Extent size (pages) 32
Prefetch size (pages) 32
# of containers 1

1.15 Create the onektup table in this DMS table space using the create_onektup_dms.sql
script.
./create_onektup_dms.sql

1.16 Now load 250,000 rows into the onektup table using the load_onektup.250K script in
your home directory. It will report the time for the load.
./load_onektup.250K
Load time = __________(Your time depends on hardware and box load.)

1.17 Look at the table-space details for the table space onektbl. Note the following:
db2 connect to wisc_db
db2 list tablespaces show detail
Table space ID 3
Name ONEKTBL
Type Database managed space
Total pages 25000
Useable pages 24992
Used pages 14048
Page size 4096
Extent size (pages) 32
Prefetch size (pages) 32
# of containers 1

Solution 2

Pre-Lab Engine Preparation


2.1 You will need to increase your log file space. Modify the database configuration
parameters LOGPRIMARY to be 75 and LOGFILSIZ to be 2000. These changes will not
take effect until all database connections have been completed/terminated, and the
database is reactivated or connected by an application.
db2 update db cfg for wisc_db using LOGPRIMARY 75
db2 update db cfg for wisc_db using LOGFILSIZ 2000
db2 force application all
db2 terminate
db2 activate database wisc_db

2.2 Change directory to the perf_tun_labs/lab2 directory.


cd perf_tun_labs/lab2

2.3 Modify the create_tenktup1_lab2.db2 script. Locate the "create tablespace"
command. Change this to use the raw devices preallocated for your student login. For
example, if you are logged in as inst101, your raw devices would be /dev/rdsk/stu101A
through /dev/rdsk/stu101F.
vi create_tenktup1_lab2.db2

2.4 Run the create_tenktup1_lab2.db2 script. This will do the following:


a. Drop the onektbl table space. This will drop the onektup table and its associated
containers. You will need to do this to reuse your student containers from a prior
lab.
b. Drop the bp_tenktup1 buffer pool. (The first time you run this script, this buffer
pool will not exist. That's okay; it will be effective the next time you run the
script).
c. Create a buffer pool called bp_tenktup1 with a size of 1000 pages.
d. Create a table space called tenktbl utilizing 6 containers:
/dev/rdsk/<student login>A through F
Each raw space is 100 MB in size and is preallocated for your use.
e. Create the tenktup1 table in table space tenktbl.
./create_tenktup1_lab2.db2

2.5 Check the size of the buffer pool created for this lab by using:
db2 "select * from syscat.bufferpools"
Two buffer pools should be shown:
IBMDEFAULTBP as bufferpoolid 1, size of 1000 NPAGES
BP_TENKTUP1 as bufferpoolid 2, size of 1000 NPAGES initially

2.6 Validate the creation and details for the tablespace tenktbl using:
db2 list tablespaces show detail

2.7 Run the load_tenktup1.sh script. This will load the tenktup1 table with 2.5 million
rows. It will also perform a row count for verification. (The load should only take around
a minute, depending on the resource load on the box.)
./load_tenktup1.sh

2.8 Validate that CHNGPGS_THRESH is set to 60 and NUM_IOCLEANERS is set to 1 for the
wisc_db. If not, set these values, then deactivate and reactivate the database.
db2 get db cfg for wisc_db

Lab Steps: Benchmarks


From this point on, you will be modifying different database configuration parameters and
running the following steps below repeatedly. Use the chart below to make one change at a time,
recording the requested results.
2.9 Run an UPDATE against the tenktup1 table using:
db2 connect to wisc_db
time (db2 "update tenktup1 set stringu1='<some string>'")
You should see output similar to the following (the time could vary greatly):
real 1m58.91s
user 0m0.02s
sys 0m0.05s
Record the real time in the attached chart.

2.10 Take a snapshot of the database using the db2 get snapshot for all databases command.
Record the Dirty page threshold cleaner triggers from this output in the chart. Note: you
must have previously turned on the bufferpool snapshot switch before you can take
snapshots: db2 update dbm cfg using dft_mon_bufpool on.
db2 update dbm cfg using dft_mon_bufpool on
db2 get snapshot for all databases

2.11 Clean the buffer pool of all pages from the tenktup1 table.
db2 force application all
db2 terminate
db2 deactivate database wisc_db
db2 activate database wisc_db
(Note that with a small buffer pool this is not necessary, but a good practice for
benchmarking.)

2.12 Modify one parameter as asked for in the chart, and repeat at step 2.9 above. Note that
you want to use a different UPDATE string each time you rerun the UPDATE.
db2 update db cfg for wisc_db using <parameter> <value>
To make the changes effective, do the following:
db2 force application all
db2 terminate
db2 deactivate database wisc_db
db2 activate database wisc_db
Run       UPDATE time   Snapshot: dirty page          NUM_IOCLEANERS   CHNGPGS_THRESH (%)
          in seconds    threshold cleaner triggers
Default                                               1                60
2                                                     2                60
3                                                     4                60
4                                                     4                40
5                                                     4                20
6                                                     4                10
7                                                     4                 5

Solution 3

Logical Logs
3.1 In window A:
a. Change to your home directory if not already there.
cd

b. Create a subdirectory in your home directory called logs.


mkdir logs

c. Change to the perf_tun_labs directory in your home directory.


cd $HOME/perf_tun_labs

d. Untar the db2_perf_tun_lab_3.tar using the tar xvf db2_perf_tun_lab_3.tar


command. This will create a directory called lab3 and will place the
watch_logs.sh script file there.
tar xvf db2_perf_tun_lab_3.tar

e. Change the location of the logical logs for the wisc_db database to
/export/home/inst<student number>/logs (the directory you just created).
db2 update database configuration \
for wisc_db using newlogpath \
/export/home/inst<student number>/logs

f. Change the number of primary logs to 5.


db2 update database configuration \
for wisc_db using logprimary 5

g. Change the number of secondary logs to 100.


db2 update database configuration \
for wisc_db using logsecond 100

h. Change the size of the log files to 500.


db2 update database configuration \
for wisc_db using logfilsiz 500

i. Break any connections to the database.
db2 force application all
db2 terminate

j. Deactivate the wisc_db database.


db2 deactivate database wisc_db

k. Activate the wisc_db database. This will create the appropriate number of initial
logical logs.
db2 activate database wisc_db

3.2 In window B:
a. Change to the $HOME/perf_tun_labs/lab3 directory.
cd $HOME/perf_tun_labs/lab3

b. Run the watch_logs.sh using ./watch_logs.sh. This will watch the number of logs
existing in the $HOME/logs subdirectory. (Ignore any error messages as long as
the number of logs is reported accurately. You should see the number of logs equal
to the LOGPRIMARY parameter for the wisc_db.)
./watch_logs.sh

3.3 In Window A:
a. Connect to the wisc_db database.
db2 connect to wisc_db

b. Run the following update, and watch the number of logs in window B:
time (db2 "update onektup set stringu1='<some string>'")

c. When the UPDATE is complete, the high number of logs should still be shown in
window B. These are not deallocated until the database is deactivated.

d. Break any connections to the database.


db2 force application all

e. Deactivate the wisc_db database. The number of logs should go back down to the
LOGPRIMARY value, since the secondary logs have been deallocated.
db2 deactivate database wisc_db

Solution 4

Optional Exercise - based on the availability of DB2 V8


This exercise requires the use of version 8 of DB2, which provides the AUTOCONFIGURE
capability.

Note: In this exercise, your values of the various configuration parameters may be
different from those shown.

4.1 Open a DB2 Command Window, and get the current DBM and DB configuration
parameters, saving this information to files (dbm_cfg.txt and db_cfg.txt, respectively)
for further study.
db2 "GET DBM CFG" > dbm_cfg.txt
db2 "CONNECT TO sample"
db2 "GET DB CFG FOR sample" > db_cfg.txt

4.2 Using the view program, open the dbm_cfg.txt file, and search for SHEAPTHRES.
Note its value.
view dbm_cfg.txt
SHEAPTHRES = 20000

4.3 In the DB2 Command Window, use the DB2 AUTOCONFIGURE utility to specify that
mem_percent be set to 50, and only for DB. Redirect your output to the auto_cfg.txt
file for later study.
db2 "CONNECT TO sample"
db2 "AUTOCONFIGURE USING MEM_PERCENT 50
APPLY DB ONLY" > auto_cfg.txt

4.4 Again in the command window, use the view program to look at the contents of the
auto_cfg.txt file. Search for the SHEAPTHRES parameter values, and list the old and
new values.
view auto_cfg.txt
SHEAPTHRES - old = 20000
SHEAPTHRES - new = 2464



4.5 Have these changes become effective yet?
The configuration parameters have been changed, but will not become effective
until the instance is restarted.

4.6 List the other memory parameters that will be changed, as specified in the auto_cfg.txt
file.
Description                        Parameter          Old Value    New Value
Sort heap threshold                SHEAPTHRES         20000        2464
Max size of appl. group mem set    APPGROUP_MEM_SZ    30000        9790
Max storage for lock list          LOCKLIST           100          374
Log buffer size                    LOGBUFSZ           8            19
Package cache size                 PCKCACHESZ         MAXAPPLS*8   650
Sort list heap                     SORTHEAP           256          192
16K-BP                             Bufferpool size    40           125
8K-BP                              Bufferpool size    80           250
IBMDEFAULTBP                       Bufferpool size    1000         500

4.7 Are there other parameters, besides the memory-related ones, that are affected by your
change? List them.
Yes.
Description                                     Parameter          Old Value    New Value
Maximum query degree of parallelism             MAX_QUERYDEGREE    ANY          1
Agent pool size                                 NUM_POOLAGENTS     200(calc.)   10
Catalog cache size                              CATALOGCACHE_SZ    MAXAPPLS*4   58
Log file size                                   LOGFILSIZ          1000         1024
Number of secondary log files                   LOGSECOND          2            1
Percent. of lock lists per application          MAXLOCKS           10           60
Number of I/O servers                           NUM_IOSERVERS      3            6
Percent log file reclaimed before soft chckpt   SOFTMAX            100          120



4.8 In the open DB2 Command Window, stop and restart the instance.
db2 "FORCE APPLICATION ALL"
db2 "TERMINATE"
db2stop
db2start

4.9 Now check your parameter values to see whether they changed. Redirect your output to
new_dbm_cfg.txt and new_db_cfg.txt.
db2 "GET DBM CFG" > new_dbm_cfg.txt
db2 "CONNECT TO sample"
db2 "GET DB CFG FOR sample" > new_db_cfg.txt

4.10 Were you successful in changing SHEAPTHRES?


No, because it is a DBM configuration parameter, and you specified DB parameters
using AUTOCONFIGURE.
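To change DBM-level parameters such as SHEAPTHRES as well, AUTOCONFIGURE can be run with the APPLY DB AND DBM option instead. A sketch, using the same mem_percent value as the exercise:

```
db2 "CONNECT TO sample"
db2 "AUTOCONFIGURE USING MEM_PERCENT 50 APPLY DB AND DBM"
```

As in the exercise, the changed DBM parameters do not take effect until the instance is stopped and restarted.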

4.11 Were you successful in changing any DB configuration parameters, such as
LOCKLIST?
Yes, the DB configuration parameters did get changed.

Module 14

Course Summary

Course Summary 02-2003 14-1


2002,2003 International Business Machines Corporation
Course Summary

You should now have a basic understanding of:


Differences between Oracle and DB2 UDB instances
Planning disk usage
Creating a DB2 UDB instance and database
Data type mapping
Creating tables and views
Data migration methods
Accessing data through indexes
Using constraints
Managing backup and recovery
Performance monitoring and tuning


This section provides you with a summary of knowledge gained in this course and resources you
can use to further your education on DB2 UDB subjects. Included are document sources and
other courses you can attend.



Where to Go From Here

Two possible choices for your next step:


Fast Path to DB2 UDB for Experienced Relational DBAs
DB2 Universal Database Administration Workshop


A CBT self-study course, Fast Path to DB2 UDB for Experienced Relational DBAs (CT28),
contains a superb explanation of privileges and is available for download, free of charge, at:
www.ibm.com/software/data/db2/selfstudy
You will be required to register for this free download copy.
A classroom course, DB2 Universal Database Administration Workshop, is available for the
following operating systems:
Linux (CF20)
UNIX (CF21)
Windows NT (CF23)
Solaris (CF27)
There are also a variety of advanced courses described in the next few pages.



Description of DB2 UDB Courses

Fast Path to DB2 UDB for Experienced Relational DBAs


Also available as CBT on CD-ROM
DB2 UDB Fundamentals
DB2 SQL Workshop
Also available as CBT on CD-ROM
DB2 UDB Administration Workshop (separate courses for Linux,
UNIX, Windows NT, Sun Solaris)
DB2 UDB for UNIX, Windows, and OS/2 Database Admin
Certification Preparation


The following courses are available to you. Most of these are classroom courses, but there are
several CBT courses included in the list.

Fast Path to DB2 UDB for Experienced Relational DBAs


What you are taught:
List and describe the components of DB2 UDB
Implement DB2 UDB security
Perform basic administration of a DB2 UDB database system using commands, or the
graphical user interface (GUI)
Perform the tasks necessary to support a basic recovery strategy



DB2 UDB Fundamentals
What you are taught:
List and describe the components of DB2 UDB
Create a DB2 database
Create objects

DB2 SQL Workshop


What you are taught:
Code simple and complex SELECT statements
Code INSERT, DELETE, and UPDATE statements

DB2 UDB Administration Workshop


What you are taught:
Administer a DB2 UDB database system using commands and Graphical User
Interface (GUI) tools
Implement DB2 UDB security
Manage System Managed Storage (SMS) and Database Managed Storage (DMS) table
spaces within a database and apply data placement principles
Define a DB2 UDB recovery strategy and perform the tasks necessary to support the
strategy
Physically implement a given logical database design using DB2 UDB, supporting
integrity and concurrency requirements
List and describe the components of DB2 UDB
Describe the application development process with respect to DB2 UDB considerations

DB2 UDB for UNIX, Windows, and OS/2 Database Admin Certification Preparation
What you are taught:
Gain the knowledge necessary to improve your chances of passing the certification
tests



Description of DB2 UDB Advanced Courses

DB2 Advanced SQL Workshop


Also available as CBT on CD-ROM
DB2 UDB Performance Tuning and Monitoring Workshop
DB2 UDB Advanced Administration Workshop
DB2 UDB Advanced Recovery and High Availability Workshop
DB2 Stored Procedures Programming Workshop
Also available as CBT on CD-ROM


These courses are considered advanced and should be taken only after mastering the basic
courses outlined on the previous pages.

DB2 Advanced SQL Workshop


What you are taught:
Use advanced SQL constructs, such as recursive SQL, case expressions, check
constraints, and triggers
Discuss basic relational database concepts, such as referential integrity, tables, and
indexes
Create tables and indexes
Use outer joins
Use complex subqueries
Use the major scalar functions
Use views



DB2 UDB Performance Tuning and Monitoring Workshop
What you are taught:
Define the impact of database design (tables, indexes, and data placement) on database
performance
Describe database application programming considerations and how they affect
performance
Identify and describe the parameters (database and non-database) that affect
performance
Tune parameters to achieve optimum performance
Identify and use the tools that assist in monitoring and tuning of a database

DB2 UDB Advanced Administration Workshop


What you are taught:
Effectively apply advanced techniques to administer a DB2 UDB using the control
center
Explore parallelism and SMP enablement
Explore multiple bufferpool and extended storage support
Explore client administration
Perform a command line interface (CLI) trace
Explore problem reporting and management
Explore the stored procedure builder
Configure the DB2 Governor to enforce time and central processing unit (CPU)
restrictions
Access data stored in a nonrelational format using table functions
Manage a distributed data environment
Administer DB2 UDB from a remote client
Explore federated databases



DB2 UDB Advanced Recovery and High Availability Workshop
What you are taught:
Explore the DB2 UDB recovery facilities and database configuration options
Plan the implementation of a user exit for archival of database logs
Recover a DB2 table following a DROP TABLE command issued in error
Plan and execute the recovery of tablespaces to a selected point in time
Effectively utilize incremental backup and restore to reduce the size and duration of
DB2 database backups
Gain a better understanding of DB2 UDB crash recovery facilities
Utilize the redirected restore option to recover DB2 data to alternate disk configurations
Execute recovery scenarios, including loss of DB2 log data or access to the DB2
catalog information
Utilize the information in the DB2 recovery history file to plan and execute various
DB2 utilities
Explore the options for operation of DB2 databases in high availability environments
including the use of split mirrors of the database
Utilize the DB2DART utility to examine a DB2 database for problem determination
Gain a better understanding of the unique recovery planning requirements for DB2
UDB Enterprise Extended-Edition (EEE) databases

DB2 Stored Procedures Programming Workshop


What you are taught:
Describe a stored procedure and justify its use in an application
Understand the Stored Procedure Builder and its capabilities
Describe the DB2 SQL Procedure Language (SQL PL) statements and how to use them
in an application
Describe the basic structure of a Java application using stored procedures
Create a DB2 stored procedure using the SQL DDL statement CREATE PROCEDURE
Describe troubleshooting approaches for stored procedures



To Enroll in Courses

There are two ways to enroll in courses:


Call phone number 1-800-IBM-TEACH (1-800-426-8322)
Go to the following Web site:

http://www-3.ibm.com/services/learning/



DB2 UDB Technical Documents

DB2 UDB Administration Guide: Planning


DB2 UDB Administration Guide: Implementation
DB2 UDB Administration Guide: Performance
DB2 UDB Command Reference
DB2 UDB SQL Reference


The technical documents listed above are the basic references needed to properly
maintain and administer a DB2 UDB database. These documents are provided on CD-ROM
with the product, but hardcopy can be ordered through your IBM sales representative.



Additional Technical Documents

For general overview:


DB2 UDB Quick Beginnings

For certification:
DB2 UDB v7.1 Database Administration Certification Guide

For advanced study:


DB2 UDB System Monitor Guide and Reference
DB2 UDB Data Movement Utilities Guide and Reference
DB2 UDB Troubleshooting Guide



Evaluation Sheet

Kindly provide us with your feedback


Please include written comments, which are better than checked
boxes

Thank You!



Appendixes
Appendix A

Oracle and DB2 UDB Comparisons

Oracle and DB2 UDB Comparisons 02-2003 A-1


Mapping of Terminology
The table provides a quick mapping of DB2 UDB terminology to Oracle terminology
(DB2 UDB term | Oracle term):

DB2 UDB EE | Oracle EE
    Enterprise product.
DB2 UDB EEE | Oracle Parallel
    Support node partitioning.
DB2 Connect | Oracle Gateway
    DRDA access to hosts.
SQL Control Statements | PL/SQL
    Programming language extension to SQL. DB2 UDB stored procedures can be
    programmed in SQL Control Statements (a subset of the PSM standard), Java, C,
    C++, COBOL, Fortran, OLE, and REXX. DB2 functions can be programmed in Java,
    C, C++, OLE, or SQL control statements.
DB2 CLP | SQL*Plus
    Command line interface to the server.
Instance | Instance
    Processes and shared memory. In DB2 it also includes a permanent directory
    structure: an instance is usually created at install time (or can be created
    later) and must exist before a database can be created. A DB2 instance is
    also known as the database manager (DBM).
Database | Database
    Physical structure containing data. In Oracle, multiple instances can use the
    same database, and an instance can connect to one and only one database. In
    DB2, multiple databases can be created and used concurrently in the same
    instance.
DBM and database configuration files, etc. | Control files and .ora files
    In Oracle, files that name the locations of the files making up the database
    and provide configuration values. In DB2, each instance (DBM) and database
    has its own set of configuration parameters stored in a binary file; there
    are also other internal files and directories: none is manually edited.
Federated System | Database Link
    In Oracle, an object that describes a path from one database to another. In
    DB2 a federated system is used. One database is chosen as the federated
    database and within it wrappers, servers, nicknames, and other optional
    objects are created to define how to access the other databases (including
    Oracle databases) and objects in them. Once an application is connected to
    the federated database it can access all authorized objects in the federated
    system.
Table spaces | Tablespaces
    Contain the actual database data.
Containers | Data files
    Entities inside the table spaces.
Objects | Segments
    Entities inside the containers/data files.
Extents | Extents
    Entities inside the objects/segments.



Pages | Data blocks
    Smallest storage entity in the storage model.
N/A | Clusters
    Data structure that allows related data to be stored together on disk; can be
    table or hash clusters. The closest facility to this in DB2 is a clustering
    index, which causes rows inserted into a table to be placed physically close
    to the rows for which the key values of this index are in the same range.
System catalog | Data dictionary
    Metadata of the database.
SMS | N/A
    System-managed table space.
DMS | Data files
    Database-managed table space.
Buffer pools | Data cache
    Buffers data in the table spaces to reduce disk I/O.
Package cache | Statement cache
    Cache prepared dynamic SQL statements.
Log files | Redo logs
    Recovery logs.
N/A | Rollback segments
    Store the old version of data for a mutating table. In DB2 the old version of
    an updated row is stored in the log file along with the new version.
Database manager and database shared memory | SGA
    Shared memory area(s) for the database server. In Oracle there is one, while
    in DB2 there is one at the database manager (instance) level and one for each
    active database.
Agent / application shared memory | UGA
    Shared memory area to store user-specific data passed between the application
    process and the database server.
Package | N/A
    A precompiled access plan for an embedded static SQL application, stored in
    the server.
N/A | Package
    A logical grouping of PL/SQL blocks that can be invoked by other PL/SQL
    applications.

Terminology Comparisons
Database Manager
Does not exist in Oracle
In DB2, this is the instance, which may comprise many databases
Can be tuned via the database manager configuration file



Instance
In both Oracle and DB2 UDB:
This term has the same meaning
Generally consists of memory allocation, processes, and disk but, for Oracle,
the disk storage of the database is not generally considered part of the instance
Many processes comprise an instance
Also known as an engine, database server, or server
DB2 instances manage many databases; Oracle instances manage one database,
but the same database can be managed by more than one instance
Also called the Database Manager
Has a single configuration file, separate from the databases of the instance
DB2 UDB has two types of instances:
DAS: the DB2 UDB administration server
Used for administration of DB2 UDB instances via tools
Does not contain any databases
DB2: a normal DB2 UDB instance
Typically consists of many databases
Must attach to the instance to access any databases
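The attach-versus-connect distinction can be illustrated with the CLP; the instance name below is illustrative:

```
db2 ATTACH TO db2inst1        -- instance-level (DBM) operations
db2 GET DBM CFG
db2 CONNECT TO sample         -- database-level operations
db2 GET DB CFG FOR sample
```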
Database
Generally the same concept for both Oracle and DB2 UDB
Major differences:
Oracle
Redo logs are grouped, with multiple members per group; each group is filled
before moving to the next group of files
Oracle 9i has introduced a number of new index types to improve performance, but,
like all new features, they may not be fully understood or utilized yet
DB2 UDB
Each database has its own allocation of logical log files
Can be tuned/configured via the database configuration file
Tablespace
Significantly different implementation in DB2 UDB and Oracle
The DB2 UDB table space is a similar concept to a tablespace in Oracle



Table space in DB2 UDB
A logical allocation of space for storing tables and indexes
One or more physical containers
3 default table spaces created for a database:
SYSCATSPACE, TEMPSPACE1, and USERSPACE1
Can be created explicitly by users to allow segregation of database objects
2 categories:
SMS (system-managed space): the default type
Uses the file system for disk allocation
DMS (database-managed space): can be created explicitly
Uses either raw space or file-system space for disk allocation
3 types:
Regular
User data and system catalogs reside here
USERSPACE1: default user data and index table space
SYSCATSPACE: default system catalog space
Long
Contains long field or large object data
System Temporary
Used for temporary operations
TEMPSPACE1: default temporary table space
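The two categories can be illustrated with CREATE TABLESPACE; the names, paths, and sizes below are illustrative:

```
-- SMS: the file system manages space in the named directory
CREATE TABLESPACE app_sms
  MANAGED BY SYSTEM
  USING ('/db2/data/app_sms');

-- DMS: DB2 manages space in a pre-allocated file (or raw device);
-- the size is given in pages
CREATE TABLESPACE app_dms
  MANAGED BY DATABASE
  USING (FILE '/db2/data/app_dms.dat' 5000);
```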
Tablespace in Oracle
When an Oracle 9i database is first created, only one tablespace is created: the
SYSTEM tablespace. Other tablespaces are created as necessary for the business
applications.
The SYSTEM tablespace is created with EXTENT MANAGEMENT DICTIONARY: all the
management concerning how space extents are allocated within the tablespace is
stored within the data dictionary.
Oracle 9i also allows the option of having space management done locally with the
EXTENT MANAGEMENT LOCAL AUTOALLOCATE clause in the CREATE TABLESPACE
statement
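A locally managed Oracle tablespace would be created along these lines (name, path, and size are illustrative):

```
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/app_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
```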
Disk Allocation for tables/indexes
DB2 UDB container
A physical storage location assigned to a table space
Identified by a directory name, device name, or file name
Can only belong to one table space



Buffer Pool / Data Cache: memory allocation used for caching pages
Oracle's Data Cache
An allocation of memory from the system global area (SGA) used by all database
objects to cache pages
Size is determined by a handful of INIT.ora parameters, but primarily in Oracle 9i
by db_cache_size (and in Oracle 8 by the product of db_block_size and
db_block_buffers)
Uses a least recently used (LRU) algorithm
DB2 UDB's Buffer Pool
Can be explicitly created and associated with a single database
There is a default buffer pool (IBMDEFAULTBP) that is used if none other is
created
Multiple buffer pools can be created
Can be sized to better enhance database performance
Can be assigned to a specific table space
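Creating an additional buffer pool and assigning it to a table space looks like this; names, sizes, and the path are illustrative:

```
-- SIZE is in pages of the given page size
CREATE BUFFERPOOL bp16k SIZE 1000 PAGESIZE 16K;

CREATE TABLESPACE ts16k PAGESIZE 16K
  MANAGED BY SYSTEM USING ('/db2/data/ts16k')
  BUFFERPOOL bp16k;
```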
Extents
Very much the same in Oracle and DB2 UDB
Oracle:
A contiguous collection of Oracle data blocks
Extent sizes are managed either with data dictionary tables or locally within
the individual tablespace
DB2 UDB:
A contiguous set of pages allocated from a container to a table space
Default size is 32 pages for most ports
Both can be explicitly set at creation time
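In DB2 UDB the extent size is fixed per table space when the table space is created; a sketch with illustrative names and sizes:

```
CREATE TABLESPACE ts_big
  MANAGED BY DATABASE
  USING (FILE '/db2/data/ts_big.dat' 10000)
  EXTENTSIZE 64;
```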
Packages: SQL statements and query plans that run against the engine
Only in DB2 UDB
Not available in Oracle (an Oracle package is a completely different kind of entity)



Appendix B

Data Types Comparison Chart

Data Types Comparison Chart 02-2003 B-1


DB2 Data Types
The following table provides a complete list of DB2 data types, their C, C++ and Java data type
mappings, their sqllen and sqltype values (from the SQLDA), and a quick description of each.
Note that DB2 UDB has multiple types corresponding to Oracle's DATE and to Oracle's NUMBER. For more
information, see the SQL Reference, the Application Development Guide, and the file that is
supplied on DB2 clients: sqllib\include\sql.h.

Integer
  SMALLINT
    C/C++: short age = 32; short int year;
    Java: short
    sqllen: 2; sqltype: 500/501
    16-bit signed integer. Range -32,768 to 32,767; precision of 5 digits.
  INTEGER / INT
    C/C++: long salary; int salary; long int deptno;
    Java: int
    sqllen: 4; sqltype: 496/497
    32-bit signed integer. Range -2,147,483,648 to 2,147,483,647; precision of
    10 digits.
  BIGINT
    C/C++: long long serial_num; __int64 serial; sqlint64 serial;
    Java: long
    sqllen: 8; sqltype: 492/493
    64-bit signed integer.

Floating point
  REAL / FLOAT(n)
    C/C++: float bonus;
    Java: float
    sqllen: 4; sqltype: 480/481
    Single-precision floating point; 32-bit approximation of a real number.
    FLOAT(n) is a synonym for REAL if 0 < n < 25.
  DOUBLE / DOUBLE PRECISION
    C/C++: double wage;
    Java: double
    sqllen: 8; sqltype: 480/481
    Double-precision floating point; 64-bit approximation of a real number.
    Range: 0, -1.79769E+308 to -2.225E-307, 2.225E-307 to 1.79769E+308.
    FLOAT(n) is a synonym for DOUBLE if 24 < n < 54.

Decimal
  DECIMAL(p,s) / DEC(p,s) / NUMERIC(p,s) / NUM(p,s)
    C/C++: double price;
    Java: java.math.BigDecimal
    sqllen: (p/2)+1; sqltype: 484/485
    Packed decimal. There is no exact C equivalent for the SQL decimal type -
    use the C double data type. If precision/scale are not specified, the
    default is (5,0). Max precision is 31 digits, and max range is between
    -10**31 + 1 and 10**31 - 1. Consider using char/decimal functions to
    manipulate packed decimal fields as character data.

Date/Time
  DATE
    C/C++: char dt[11]; or struct { short len; char data[10]; } dt;
    Java: java.sql.Date
    sqllen: 10; sqltype: 384/385
    Null-terminated character form (11 characters) or varchar struct form (10
    characters); the struct can be divided as desired to obtain the individual
    fields. Example: 11/02/2000. Stored internally as a packed string of 4 bytes.
  TIME
    C/C++: char tm[9]; or struct { short len; char data[8]; } tm;
    Java: java.sql.Time
    sqllen: 8; sqltype: 388/389
    Null-terminated character form (9 characters) or varchar struct form (8
    characters); the struct can be divided as desired to obtain the individual
    fields. Example: 19:21:39. Stored internally as a packed string of 3 bytes.
  TIMESTAMP
    C/C++: char ts[27]; or struct { short len; char data[26]; } ts;
    Java: java.sql.Timestamp
    sqllen: 26; sqltype: 392/393
    Null-terminated character form (27 characters) or varchar struct form (26
    characters); the struct can be divided as desired to obtain the individual
    fields. Example: 2000-12-25-01.02.03.000000. Stored internally as a packed
    string of 10 bytes.

Character
  CHAR(n)
    C/C++: char sex; char zip[6];
    Java: String
    sqllen: n; sqltype: 452/453
    Fixed-length character string of n bytes. Use char[n+1] where 1 <= n <= 254.
    If the length is not specified, it defaults to 1.
  VARCHAR(n)
    C/C++: char address[41]; or struct { short len; char data[40]; } address;
    Java: String
    sqllen: n (len for the struct form); sqltype: 448/449
    Null-terminated variable-length character string: use char[n+1] where
    1 <= n <= 32672. Non-null-terminated varying character string with 2-byte
    string length indicator: use char[n] in struct form where 1 <= n <= 32672
    (the default SQL type).
  LONG VARCHAR
    C/C++: struct { short len; char data[n]; } voice;
    Java: String
    sqllen: len; sqltype: 456/457
    Non-null-terminated varying character string with 2-byte string length
    indicator. Use char[n] in struct form where 32673 <= n <= 32700.
  CLOB(n)
    C/C++: sql type is clob(1m) chapter;
    Java: JDBC 1.22: String; JDBC 2.0: java.sql.Clob
    sqllen: n; sqltype: 408/409
    Non-null-terminated varying character string with 4-byte string length
    indicator. Use char[n] in struct form where 1 <= n <= 2147483647.
  CLOB locator variable
    C/C++: sql type is clob_locator cref;
    Identifies CLOB entities residing on the server.
  CLOB file reference variable
    C/C++: sql type is clob_file cFile;
    Descriptor for a file containing CLOB data.

Binary
  BLOB(n)
    C/C++: sql type is blob(1m) video;
    Java: JDBC 1.22: byte[]; JDBC 2.0: java.sql.Blob
    sqllen: n; sqltype: 404/405
    Non-null-terminated varying binary string with 4-byte string length
    indicator. Use char[n] in struct form where 1 <= n <= 2147483647.
  BLOB locator variable
    C/C++: sql type is blob_locator bref;
    Identifies BLOB entities on the server.
  BLOB file reference variable
    C/C++: sql type is blob_file bFile;
    Descriptor for the file containing BLOB data.

Double-Byte
  GRAPHIC(n)
    C/C++: sqldbchar dbyte; sqldbchar graphic1[n+1]; wchar_t graphic2[100];
    Java: String
    sqllen: 24; sqltype: 468/469
    sqldbchar is a single double-byte character string. Fixed-length graphic
    string of length 1 to 127; if the length specification is omitted, a length
    of 1 is assumed. Precompiled with the WCHARTYPE NOCONVERT option.
  VARGRAPHIC(n)
    C/C++: struct tag { short int; sqldbchar[n]; } vargraphic1; sqldbchar[n+1];
    Java: String
    sqllen: n*2+4; sqltype: 464/465
    Varying-length graphic string of maximum length 1 to 16336; also a
    null-terminated variable-length form. Precompiled with the WCHARTYPE
    NOCONVERT option.
  LONG VARGRAPHIC(n)
    C/C++: struct tag { short int; sqldbchar[n]; } long_vargph1;
    Java: JDBC 1.22: String; JDBC 2.0: java.sql.Clob
    sqllen: n*2; sqltype: 472/473
    Varying-length graphic string with a maximum length of 16350 and a 2-byte
    string length indicator; 16337 <= n <= 16350. Precompiled with the WCHARTYPE
    NOCONVERT option.
  DBCLOB(n)
    C/C++: sql type is dbclob(1m) tokyo_phone_dir;
    Java: JDBC 1.22: String; JDBC 2.0: java.sql.Clob
    sqltype: 412/413
    Non-null-terminated varying double-byte character large object string of the
    specified maximum length in double-byte characters, with a 4-byte string
    length indicator. Use dbclob(n) where 1 <= n <= 1073741823 double-byte
    characters. Precompiled with the WCHARTYPE NOCONVERT option.
  DBCLOB locator variable
    C/C++: sql type is dbclob_locator tokyo_phn_loc;
    Identifies DBCLOB entities residing on the server. Precompiled with the
    WCHARTYPE NOCONVERT option.
  DBCLOB file reference variable
    C/C++: sql type is dbclob_file tokyo_phn_ref;
    Descriptor for a file containing DBCLOB data. Precompiled with the WCHARTYPE
    NOCONVERT option.

External Data
  DATALINK(n)
    sqllen: n+54
    The length of a DATALINK column is 200 bytes.


Mapping Oracle Data Types to DB2 UDB Data Types
The following table summarizes the mapping from the Oracle data types to corresponding DB2
data types. The mapping is one to many and depends on the actual usage of the data.

Oracle DATE
    DB2: DATE, TIME, or TIMESTAMP
    If only MM/DD/YYYY is required, use DATE.
    If only HH:MM:SS is required, use TIME.
    If both date and time are required (MM/DD/YYYY-HH:MM:SS.000000), use
    TIMESTAMP.
    Use the Oracle TO_CHAR() function to format a DATE for subsequent DB2 load.
    Note that the Oracle default DATE format is DD-MON-YY.
Oracle VARCHAR2(n), n <= 4000
    DB2: VARCHAR(n), n <= 32672
Oracle LONG, n <= 2 GB
    DB2: If n <= 32672 bytes, use VARCHAR(n).
    If 32672 < n <= 32700 bytes, use LONG VARCHAR or CLOB.
    If 32672 < n <= 2 GB, use CLOB(n).
Oracle RAW(n), n <= 255
    DB2: If n <= 254, use CHAR(n) FOR BIT DATA.
    If n <= 32672, use VARCHAR(n) FOR BIT DATA.
    If n <= 2 GB, use BLOB(n).
Oracle LONG RAW, n <= 2 GB
    DB2: If n <= 32672 bytes, use VARCHAR(n) FOR BIT DATA.
    If 32672 < n <= 32700 bytes, use LONG VARCHAR FOR BIT DATA.
    If n <= 2 GB, use BLOB(n).
Oracle BLOB, n <= 4 GB
    DB2: If n <= 2 GB, use BLOB(n).
Oracle CLOB, n <= 4 GB
    DB2: If n <= 2 GB, use CLOB(n).
Oracle NCLOB, n <= 4 GB
    DB2: If n <= 2 GB, use DBCLOB(n/2).
Oracle NUMBER
    DB2: If the Oracle declaration is NUMBER(p) or NUMBER(p,0), use SMALLINT if
    1 <= p <= 4; INTEGER if 5 <= p <= 9; BIGINT if 10 <= p <= 18.
    If the Oracle declaration is NUMBER(p,s) with s > 0, use DECIMAL(p,s).
    If the Oracle declaration is NUMBER (no precision), use DOUBLE / FLOAT(n) /
    REAL.



DATE and TIME
Oracle data type DATE indicates year, month, day, hour, minute, and second. The DB2 UDB data
type DATE has only the year, month, and day. The DB2 UDB data type TIME has only
HH:MM:SS information, but the data type TIMESTAMP contains information with precision from
the year to the seconds and microseconds.
TIMESTAMP gives the most complete set of timing information. Since it also takes the most
storage space, however, you should only use it if you need the full resolution it provides now or
expect to need in the future.
By default, Oracle expects input dates to be in form DD-MON-YY (e.g., 02-JUL-02) and presents
output dates in that form. The default date format can be overridden by changing the
NLS_DATE_FORMAT parameter in the INIT.ORA or SPFILE file, or with ALTER SESSION for an
individual session.

DB2 does not have default input formats for dates or times. If table t1 has a single DATE column
and t2 has a single TIME column, each of these inserts three rows successfully:
INSERT INTO t1 VALUES ('1991-10-27'), ('10/27/1991'), ('27.10.1991');
INSERT INTO t2 VALUES ('13.30.05'), ('1:30 PM'), ('13:30:05');

Because DB2 does not accept the Oracle default date format, dates inserted into DB2 must first
be mapped to an acceptable format using a formatting function; for example,
TO_CHAR(OracleDate, 'MM/DD/YYYY')

DB2 default output date and time formats are determined by the country code of the application,
but can be overridden with the DATETIME parameter of the Bind and Prep commands.
The only timestamp format allowed in DB2 is YYYY-MM-DD-HH.MM.SS.MMMMMM (where
MMMMMM is microseconds). For more information on date and time formats in DB2, see
the Administration Guide index entry "date formats".
Oracle built-in functions, such as ADD_MONTHS, NEXT_MONTH, and NEXT_DAY, can in some
cases be translated directly to equivalent or similar DB2 built-in functions. DB2 provides a suite
of durations and datetime arithmetic capability so, for example, ADD_MONTHS(mydate, 26)
can be mapped to mydate + 26 MONTHS. When an Oracle function does not have a DB2
equivalent, or the calling code must be unchanged, a DB2 user-defined function can be created
with the same name as the Oracle function.
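Such a wrapper can be a simple SQL scalar function; this is a sketch, and the exact signature you need depends on the calling code:

```
-- DB2 datetime arithmetic can be used directly:
--   SELECT mydate + 26 MONTHS FROM t1;
-- A user-defined function carrying the Oracle name:
CREATE FUNCTION ADD_MONTHS (d DATE, n INTEGER)
  RETURNS DATE
  LANGUAGE SQL CONTAINS SQL DETERMINISTIC NO EXTERNAL ACTION
  RETURN d + n MONTHS;
```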



VARCHAR2
Many Oracle applications use VARCHAR2 for very small character strings. Generally, it is
better to port these fields to the fixed-length DB2 data type CHAR(n), as it is more efficient
and takes less storage than VARCHAR. In DB2 UDB, VARCHAR(n) uses n+4 bytes of storage and
CHAR(n) uses only n bytes of storage. CHAR should always be used for columns of ten bytes
or fewer, and for longer columns that are relatively full of non-blank data.

NUMBER
The Oracle data type NUMBER can be mapped to many DB2 types. The type of mapping
depends on whether the NUMBER is used to store an integer (NUMBER(p), or NUMBER(p,0)), or a
number with a fixed decimal point (NUMBER(p,s), s > 0), or a floating-point number (NUMBER).
Another consideration is the space usage. Each DB2 type requires a different amount of space:
SMALLINT uses 2 bytes, INTEGER uses 4 bytes, and BIGINT uses 8 bytes. The space usage for
Oracle type NUMBER depends on the parameter used in the declaration. NUMBER, with the
default precision of 38 significant digits, uses 20 bytes of storage. Mapping NUMBER to
SMALLINT, for example, can save 18 bytes per column.

Note that in DB2, unless you specify NOT NULL, another byte is required for the null indicator.
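The NUMBER mapping can be illustrated with a pair of DDL statements; the table and column names are hypothetical:

```
-- Oracle
CREATE TABLE orders (
  order_id   NUMBER(9),      -- integer, 5 <= p <= 9
  qty        NUMBER(4),      -- integer, 1 <= p <= 4
  unit_price NUMBER(7,2),    -- fixed decimal point, s > 0
  weight     NUMBER          -- floating point
);

-- A DB2 UDB equivalent, following the mapping above
CREATE TABLE orders (
  order_id   INTEGER,
  qty        SMALLINT,
  unit_price DECIMAL(7,2),
  weight     DOUBLE
);
```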

DECIMAL
An Oracle NUMBER with non-zero scale (decimal places) should be mapped to DB2 data type
DECIMAL. DECIMAL is stored as packed decimal in DB2, with four bits used per decimal digit,
plus four bits for the sign, so a DECIMAL column with precision p takes (p/2 + 1) bytes. Decimal
processing in DB2 is less efficient than integer processing, so one of the integer data types
should be used where possible, particularly when the Oracle NUMBER has a scale of zero (no
decimal places).
Here is an example of how a decimal value can be inserted into and retrieved from the database
using the CLI interface. The C program could use a double (SQL_C_DOUBLE) or another numeric
variable type to store the value, but the default CLI data type for DECIMAL is CHAR
(SQL_C_CHAR), and CHAR is used in the example.
SQLCHAR *dec_input = (SQLCHAR *) "-0001234.56780000"; /* input for DEC(15,8) column  */
SQLINTEGER ind = 0;                                   /* indicator variable          */
SQLCHAR dec_output[18];                               /* output from DEC(15,8) column */
.....
/* Bind the char variable to the parameter marker that supplies the value for
   updating a decimal column. The length of 18 (second-to-last parameter)
   allows 15 digits of precision plus 1 byte each for the sign, the decimal
   point, and the null terminator. */
rc = SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_DECIMAL,
                      15, 8, dec_input, 18, &ind);
.....
/* Bind the char variable to the decimal output from the SELECT */
rc = SQLBindCol(hstmt, 1, SQL_C_CHAR, dec_output, 18, &ind);
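The reason CHAR works well here is that the decimal value travels as text and is never forced through binary floating point. The same idea can be shown outside CLI with Python's standard decimal module (an illustration only; it does not touch DB2):

```python
from decimal import Decimal

# A DEC(15,8) value transferred as a character string, as in the CLI
# example above; Decimal parses the text without any binary rounding.
dec_input = "-0001234.56780000"
value = Decimal(dec_input)

print(value)                           # -1234.56780000 (all 8 places kept)
print(value == Decimal("-1234.5678"))  # True: numerically equal
```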

RAW
To simulate the Oracle RAW and LONG RAW data types, DB2 provides the FOR BIT DATA clause
for the CHAR, VARCHAR, and LONG VARCHAR data types. In addition, DB2 provides the BLOB
data type to store up to 2 GB of binary data. Note that the hextoraw() and rawtohex()
functions are not provided in DB2, but equivalent behavior can be obtained by creating a
distinct user-defined type (UDT) and using DB2 functions such as hex(), blob(), and cast().
Oracle8 extends the LONG types to BLOB and CLOB, which can be mapped directly to the BLOB
and CLOB data types in DB2 UDB.

B-8 Data Types Comparison Chart


Appendix C

Example import and load Utilities Results

Example import and load Utilities Results 02-2003 C-1


2002,2003 International Business Machines Corporation
This appendix shows example results of the DB2 UDB import and load utilities for the storesdb database.

Import
import from state.unl of del modified by coldel| insert into inst01.state
SQL3109N The utility is beginning to load data from file "state.unl".

SQL3110N The utility has completed processing. "52" rows were read from the
input file.

SQL3221W ...Begin COMMIT WORK. Input Record Count = "52".

SQL3222W ...COMMIT of any database changes was successful.

SQL3149N "52" rows were processed from the input file. "52" rows were
successfully inserted into the table. "0" rows were rejected.

Number of rows read = 52
Number of rows skipped = 0
Number of rows inserted = 52
Number of rows updated = 0
Number of rows rejected = 0
Number of rows committed = 52

import from manufact.unl of del modified by coldel| insert into inst01.manufact


SQL3109N The utility is beginning to load data from file "manufact.unl".

SQL3110N The utility has completed processing. "9" rows were read from the
input file.

SQL3221W ...Begin COMMIT WORK. Input Record Count = "9".

SQL3222W ...COMMIT of any database changes was successful.

SQL3149N "9" rows were processed from the input file. "9" rows were
successfully inserted into the table. "0" rows were rejected.

Number of rows read = 9
Number of rows skipped = 0
Number of rows inserted = 9
Number of rows updated = 0
Number of rows rejected = 0
Number of rows committed = 9

import from call_type.unl of del modified by coldel| insert into inst01.call_type


SQL3109N The utility is beginning to load data from file "call_type.unl".

SQL3110N The utility has completed processing. "5" rows were read from the
input file.

SQL3221W ...Begin COMMIT WORK. Input Record Count = "5".

SQL3222W ...COMMIT of any database changes was successful.

SQL3149N "5" rows were processed from the input file. "5" rows were
successfully inserted into the table. "0" rows were rejected.

Number of rows read = 5
Number of rows skipped = 0
Number of rows inserted = 5
Number of rows updated = 0
Number of rows rejected = 0
Number of rows committed = 5

import from stock.unl of del modified by coldel| insert into inst01.stock


SQL3109N The utility is beginning to load data from file "stock.unl".

SQL3110N The utility has completed processing. "74" rows were read from the
input file.

SQL3221W ...Begin COMMIT WORK. Input Record Count = "74".

SQL3222W ...COMMIT of any database changes was successful.

SQL3149N "74" rows were processed from the input file. "74" rows were
successfully inserted into the table. "0" rows were rejected.

Number of rows read = 74
Number of rows skipped = 0
Number of rows inserted = 74
Number of rows updated = 0
Number of rows rejected = 0
Number of rows committed = 74

import from customer.unl of del modified by coldel| insert into inst01.customer


SQL3109N The utility is beginning to load data from file "customer.unl".

SQL3110N The utility has completed processing. "28" rows were read from the
input file.

SQL3221W ...Begin COMMIT WORK. Input Record Count = "28".

SQL3222W ...COMMIT of any database changes was successful.

SQL3149N "28" rows were processed from the input file. "28" rows were
successfully inserted into the table. "0" rows were rejected.

Number of rows read = 28
Number of rows skipped = 0
Number of rows inserted = 28
Number of rows updated = 0
Number of rows rejected = 0
Number of rows committed = 28

import from orders.unl of del modified by coldel| insert into inst01.orders


SQL3109N The utility is beginning to load data from file "orders.unl".

SQL3110N The utility has completed processing. "23" rows were read from the
input file.

SQL3221W ...Begin COMMIT WORK. Input Record Count = "23".

SQL3222W ...COMMIT of any database changes was successful.

SQL3149N "23" rows were processed from the input file. "23" rows were
successfully inserted into the table. "0" rows were rejected.

Number of rows read = 23
Number of rows skipped = 0
Number of rows inserted = 23
Number of rows updated = 0
Number of rows rejected = 0
Number of rows committed = 23

import from items.unl of del modified by coldel| insert into inst01.items


SQL3109N The utility is beginning to load data from file "items.unl".

SQL3110N The utility has completed processing. "67" rows were read from the
input file.

SQL3221W ...Begin COMMIT WORK. Input Record Count = "67".

SQL3222W ...COMMIT of any database changes was successful.

SQL3149N "67" rows were processed from the input file. "67" rows were
successfully inserted into the table. "0" rows were rejected.

Number of rows read = 67
Number of rows skipped = 0
Number of rows inserted = 67
Number of rows updated = 0
Number of rows rejected = 0
Number of rows committed = 67

import from catalog.unl of del modified by coldel| insert into inst01.catalog


SQL3109N The utility is beginning to load data from file "catalog.unl".

SQL3110N The utility has completed processing. "74" rows were read from the
input file.

SQL3221W ...Begin COMMIT WORK. Input Record Count = "74".

SQL3222W ...COMMIT of any database changes was successful.

SQL3149N "74" rows were processed from the input file. "74" rows were
successfully inserted into the table. "0" rows were rejected.

Number of rows read = 74
Number of rows skipped = 0
Number of rows inserted = 74
Number of rows updated = 0
Number of rows rejected = 0
Number of rows committed = 74

import from cust_calls.unl of del modified by coldel| insert into inst01.cust_calls


SQL3109N The utility is beginning to load data from file "cust_calls.unl".

SQL3129W The date, time, or timestamp field containing
"1994-06-12-08.20.00.0" in row "1" and column "2" was padded with blanks.

SQL3129W The date, time, or timestamp field containing
"1994-06-12-08.25.00.0" in row "1" and column "6" was padded with blanks.

SQL3129W The date, time, or timestamp field containing
"1994-07-07-10.24.00.0" in row "2" and column "2" was padded with blanks.

SQL3129W The date, time, or timestamp field containing
"1994-07-07-10.30.00.0" in row "2" and column "6" was padded with blanks.

SQL3129W The date, time, or timestamp field containing
"1994-07-01-15.00.00.0" in row "3" and column "2" was padded with blanks.

SQL3129W The date, time, or timestamp field containing
"1994-07-02-08.21.00.0" in row "3" and column "6" was padded with blanks.

SQL3129W The date, time, or timestamp field containing
"1994-07-10-14.05.00.0" in row "4" and column "2" was padded with blanks.

SQL3129W The date, time, or timestamp field containing
"1994-07-10-14.06.00.0" in row "4" and column "6" was padded with blanks.

SQL3129W The date, time, or timestamp field containing
"1994-07-31-14.30.00.0" in row "5" and column "2" was padded with blanks.

SQL3129W The date, time, or timestamp field containing
"1993-11-28-13.34.00.0" in row "6" and column "2" was padded with blanks.

SQL3129W The date, time, or timestamp field containing
"1993-11-28-16.47.00.0" in row "6" and column "6" was padded with blanks.

SQL3129W The date, time, or timestamp field containing
"1993-12-21-11.24.00.0" in row "7" and column "2" was padded with blanks.

SQL3129W The date, time, or timestamp field containing
"1993-12-27-08.19.00.0" in row "7" and column "6" was padded with blanks.

SQL3110N The utility has completed processing. "7" rows were read from the
input file.

SQL3221W ...Begin COMMIT WORK. Input Record Count = "7".

SQL3222W ...COMMIT of any database changes was successful.

SQL3149N "7" rows were processed from the input file. "7" rows were
successfully inserted into the table. "0" rows were rejected.

Number of rows read = 7
Number of rows skipped = 0
Number of rows inserted = 7
Number of rows updated = 0
Number of rows rejected = 0
Number of rows committed = 7

import from log_record.unl of del modified by coldel| insert into inst01.log_record


SQL3109N The utility is beginning to load data from file "log_record.unl".

SQL3110N The utility has completed processing. "0" rows were read from the
input file.

SQL3221W ...Begin COMMIT WORK. Input Record Count = "0".

SQL3222W ...COMMIT of any database changes was successful.

SQL3149N "0" rows were processed from the input file. "0" rows were
successfully inserted into the table. "0" rows were rejected.

Number of rows read = 0
Number of rows skipped = 0
Number of rows inserted = 0
Number of rows updated = 0
Number of rows rejected = 0
Number of rows committed = 0

Load
load from manufact.unl of del modified by coldel| insert into inst01.manufact
SQL3501W The table space(s) in which the table resides will not be placed in
backup pending state since forward recovery is disabled for the database.

SQL3109N The utility is beginning to load data from file
"/home/inst01/stores_demo.exp/manufact.unl".

SQL3500W The utility is beginning the "LOAD" phase at time "02-18-2002
13:27:01.929111".

SQL3519W Begin Load Consistency Point. Input record count = "0".

SQL3520W Load Consistency Point was successful.

SQL3110N The utility has completed processing. "9" rows were read from the
input file.

SQL3519W Begin Load Consistency Point. Input record count = "9".

SQL3520W Load Consistency Point was successful.

SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:02.011812".

Number of rows read = 9
Number of rows skipped = 0
Number of rows loaded = 9
Number of rows rejected = 0
Number of rows deleted = 0
Number of rows committed = 9

load from state.unl of del modified by coldel| insert into inst01.state


SQL3501W The table space(s) in which the table resides will not be placed in
backup pending state since forward recovery is disabled for the database.

SQL3109N The utility is beginning to load data from file
"/home/inst01/stores_demo.exp/state.unl".

SQL3500W The utility is beginning the "LOAD" phase at time "02-18-2002
13:27:02.189706".

SQL3519W Begin Load Consistency Point. Input record count = "0".

SQL3520W Load Consistency Point was successful.

SQL3110N The utility has completed processing. "52" rows were read from the
input file.

SQL3519W Begin Load Consistency Point. Input record count = "52".

SQL3520W Load Consistency Point was successful.

SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:02.213066".

Number of rows read = 52
Number of rows skipped = 0
Number of rows loaded = 52
Number of rows rejected = 0
Number of rows deleted = 0
Number of rows committed = 52

load from call_type.unl of del modified by coldel| insert into inst01.call_type


SQL3501W The table space(s) in which the table resides will not be placed in
backup pending state since forward recovery is disabled for the database.

SQL3109N The utility is beginning to load data from file
"/home/inst01/stores_demo.exp/call_type.unl".

SQL3500W The utility is beginning the "LOAD" phase at time "02-18-2002
13:27:02.379466".

SQL3519W Begin Load Consistency Point. Input record count = "0".

SQL3520W Load Consistency Point was successful.

SQL3110N The utility has completed processing. "5" rows were read from the
input file.

SQL3519W Begin Load Consistency Point. Input record count = "5".

SQL3520W Load Consistency Point was successful.

SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:02.402025".

Number of rows read = 5
Number of rows skipped = 0
Number of rows loaded = 5
Number of rows rejected = 0
Number of rows deleted = 0
Number of rows committed = 5

load from customer.unl of del modified by coldel| insert into inst01.customer


SQL3501W The table space(s) in which the table resides will not be placed in
backup pending state since forward recovery is disabled for the database.

SQL3109N The utility is beginning to load data from file
"/home/inst01/stores_demo.exp/customer.unl".

SQL3500W The utility is beginning the "LOAD" phase at time "02-18-2002
13:27:02.589649".

SQL3519W Begin Load Consistency Point. Input record count = "0".

SQL3520W Load Consistency Point was successful.

SQL3110N The utility has completed processing. "28" rows were read from the
input file.

SQL3519W Begin Load Consistency Point. Input record count = "28".

SQL3520W Load Consistency Point was successful.

SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:02.613186".

Number of rows read = 28
Number of rows skipped = 0
Number of rows loaded = 28
Number of rows rejected = 0
Number of rows deleted = 0
Number of rows committed = 28

load from cust_calls.unl of del modified by coldel| insert into inst01.cust_calls


SQL3501W The table space(s) in which the table resides will not be placed in
backup pending state since forward recovery is disabled for the database.

SQL3109N The utility is beginning to load data from file
"/home/inst01/stores_demo.exp/cust_calls.unl".

SQL3500W The utility is beginning the "LOAD" phase at time "02-18-2002
13:27:02.769534".

SQL3519W Begin Load Consistency Point. Input record count = "0".

SQL3520W Load Consistency Point was successful.

SQL3110N The utility has completed processing. "7" rows were read from the
input file.

SQL3519W Begin Load Consistency Point. Input record count = "7".

SQL3520W Load Consistency Point was successful.

SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:02.791886".

Number of rows read = 7
Number of rows skipped = 0
Number of rows loaded = 7
Number of rows rejected = 0
Number of rows deleted = 0
Number of rows committed = 7

load from orders.unl of del modified by coldel| insert into inst01.orders


SQL3501W The table space(s) in which the table resides will not be placed in
backup pending state since forward recovery is disabled for the database.

SQL3109N The utility is beginning to load data from file
"/home/inst01/stores_demo.exp/orders.unl".

SQL3500W The utility is beginning the "LOAD" phase at time "02-18-2002
13:27:02.939734".

SQL3519W Begin Load Consistency Point. Input record count = "0".

SQL3520W Load Consistency Point was successful.

SQL3110N The utility has completed processing. "23" rows were read from the
input file.

SQL3519W Begin Load Consistency Point. Input record count = "23".

SQL3520W Load Consistency Point was successful.

SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:02.961747".

Number of rows read = 23
Number of rows skipped = 0
Number of rows loaded = 23
Number of rows rejected = 0
Number of rows deleted = 0
Number of rows committed = 23

load from stock.unl of del modified by coldel| insert into inst01.stock


SQL3501W The table space(s) in which the table resides will not be placed in
backup pending state since forward recovery is disabled for the database.

SQL3109N The utility is beginning to load data from file
"/home/inst01/stores_demo.exp/stock.unl".

SQL3500W The utility is beginning the "LOAD" phase at time "02-18-2002
13:27:03.129438".

SQL3519W Begin Load Consistency Point. Input record count = "0".

SQL3520W Load Consistency Point was successful.

SQL3110N The utility has completed processing. "74" rows were read from the
input file.

SQL3519W Begin Load Consistency Point. Input record count = "74".

SQL3520W Load Consistency Point was successful.

SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:03.161828".

Number of rows read = 74
Number of rows skipped = 0
Number of rows loaded = 74
Number of rows rejected = 0
Number of rows deleted = 0
Number of rows committed = 74

load from items.unl of del modified by coldel| insert into inst01.items


SQL3501W The table space(s) in which the table resides will not be placed in
backup pending state since forward recovery is disabled for the database.

SQL3109N The utility is beginning to load data from file
"/home/inst01/stores_demo.exp/items.unl".

SQL3500W The utility is beginning the "LOAD" phase at time "02-18-2002
13:27:03.371730".

SQL3519W Begin Load Consistency Point. Input record count = "0".

SQL3520W Load Consistency Point was successful.

SQL3110N The utility has completed processing. "67" rows were read from the
input file.

SQL3519W Begin Load Consistency Point. Input record count = "67".

SQL3520W Load Consistency Point was successful.

SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:03.418401".

Number of rows read = 67
Number of rows skipped = 0
Number of rows loaded = 67
Number of rows rejected = 0
Number of rows deleted = 0
Number of rows committed = 67

load from catalog.unl of del modified by coldel| insert into inst01.catalog


SQL3501W The table space(s) in which the table resides will not be placed in
backup pending state since forward recovery is disabled for the database.

SQL3109N The utility is beginning to load data from file
"/home/inst01/stores_demo.exp/catalog.unl".

SQL3500W The utility is beginning the "LOAD" phase at time "02-18-2002
13:27:03.595749".

SQL3519W Begin Load Consistency Point. Input record count = "0".

SQL3520W Load Consistency Point was successful.

SQL3110N The utility has completed processing. "74" rows were read from the
input file.

SQL3519W Begin Load Consistency Point. Input record count = "74".

SQL3520W Load Consistency Point was successful.

SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:03.653511".

SQL3500W The utility is beginning the "BUILD" phase at time "02-18-2002
13:27:03.658890".

SQL3213I The indexing mode is "REBUILD".

SQL3515W The utility has finished the "BUILD" phase at time "02-18-2002
13:27:03.684799".

Number of rows read = 74
Number of rows skipped = 0
Number of rows loaded = 74
Number of rows rejected = 0
Number of rows deleted = 0
Number of rows committed = 74

load from log_record.unl of del modified by coldel| insert into inst01.log_record


SQL3501W The table space(s) in which the table resides will not be placed in
backup pending state since forward recovery is disabled for the database.

SQL3109N The utility is beginning to load data from file
"/home/inst01/stores_demo.exp/log_record.unl".

SQL3500W The utility is beginning the "LOAD" phase at time "02-18-2002
13:27:03.899564".

SQL3519W Begin Load Consistency Point. Input record count = "0".

SQL3520W Load Consistency Point was successful.

SQL3110N The utility has completed processing. "0" rows were read from the
input file.

SQL3519W Begin Load Consistency Point. Input record count = "0".

SQL3520W Load Consistency Point was successful.

SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:03.921899".

Number of rows read = 0
Number of rows skipped = 0
Number of rows loaded = 0
Number of rows rejected = 0
Number of rows deleted = 0
Number of rows committed = 0

set integrity for inst01.manufact immediate checked


DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL3600N The IMMEDIATE CHECKED option of the SET INTEGRITY statement is not
valid since the table "INST01.MANUFACT" is not in the check pending state.
SQLSTATE=51027

set integrity for inst01.state immediate checked


DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL3600N The IMMEDIATE CHECKED option of the SET INTEGRITY statement is not
valid since the table "INST01.STATE" is not in the check pending state.
SQLSTATE=51027

set integrity for inst01.call_type immediate checked


DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL3600N The IMMEDIATE CHECKED option of the SET INTEGRITY statement is not
valid since the table "INST01.CALL_TYPE" is not in the check pending state.
SQLSTATE=51027

set integrity for inst01.customer immediate checked


DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL3600N The IMMEDIATE CHECKED option of the SET INTEGRITY statement is not
valid since the table "INST01.CUSTOMER" is not in the check pending state.
SQLSTATE=51027

set integrity for inst01.cust_calls immediate checked


DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL3600N The IMMEDIATE CHECKED option of the SET INTEGRITY statement is not
valid since the table "INST01.CUST_CALLS" is not in the check pending state.
SQLSTATE=51027

set integrity for inst01.orders immediate checked


DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL3600N The IMMEDIATE CHECKED option of the SET INTEGRITY statement is not
valid since the table "INST01.ORDERS" is not in the check pending state.
SQLSTATE=51027

set integrity for inst01.stock immediate checked
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL3600N The IMMEDIATE CHECKED option of the SET INTEGRITY statement is not
valid since the table "INST01.STOCK" is not in the check pending state.
SQLSTATE=51027

set integrity for inst01.items immediate checked


DB20000I The SQL command completed successfully.

set integrity for inst01.catalog immediate checked


DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL3600N The IMMEDIATE CHECKED option of the SET INTEGRITY statement is not
valid since the table "INST01.CATALOG" is not in the check pending state.
SQLSTATE=51027

set integrity for inst01.log_record immediate checked


DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL3600N The IMMEDIATE CHECKED option of the SET INTEGRITY statement is not
valid since the table "INST01.LOG_RECORD" is not in the check pending state.
SQLSTATE=51027

Appendix D

Example Configuration Parameters

Example Configuration Parameters 02-2003 D-1


Instance Configuration Parameters
The following is a list of the database manager (DBM) configuration parameters for a Linux
installation of DB2 UDB v8.
Database Manager Configuration

Node type = Enterprise Server Edition with local and remote clients

Database manager configuration release level = 0x0a00

CPU speed (millisec/instruction) (CPUSPEED) = 2.408956e-06


Communications bandwidth (MB/sec) (COMM_BANDWIDTH) = 1.000000e+00

Max number of concurrently active databases (NUMDB) = 8


Data Links support (DATALINKS) = NO
Federated Database System Support (FEDERATED) = NO
Transaction processor monitor name (TP_MON_NAME) =

Default charge-back account (DFT_ACCOUNT_STR) =

Java Development Kit installation path (JDK_PATH) = /opt/IBMJava2-131

Diagnostic error capture level (DIAGLEVEL) = 3


Notify Level (NOTIFYLEVEL) = 3
Diagnostic data directory path (DIAGPATH) = /home/insta8/sqllib/db2dump

Default database monitor switches


Buffer pool (DFT_MON_BUFPOOL) = OFF
Lock (DFT_MON_LOCK) = OFF
Sort (DFT_MON_SORT) = OFF
Statement (DFT_MON_STMT) = OFF
Table (DFT_MON_TABLE) = OFF
Timestamp (DFT_MON_TIMESTAMP) = ON
Unit of work (DFT_MON_UOW) = OFF
Monitor health of instance and databases (HEALTH_MON) = OFF

SYSADM group name (SYSADM_GROUP) = INSTA8


SYSCTRL group name (SYSCTRL_GROUP) =
SYSMAINT group name (SYSMAINT_GROUP) =

Database manager authentication (AUTHENTICATION) = SERVER


Cataloging allowed without authority (CATALOG_NOAUTH) = NO
Trust all clients (TRUST_ALLCLNTS) = YES
Trusted client authentication (TRUST_CLNTAUTH) = CLIENT
Use SNA authentication (USE_SNA_AUTH) = NO
Bypass federated authentication (FED_NOAUTH) = NO

Default database path (DFTDBPATH) = /home/insta8

Database monitor heap size (4KB) (MON_HEAP_SZ) = 90


Java Virtual Machine heap size (4KB) (JAVA_HEAP_SZ) = 2048
Audit buffer size (4KB) (AUDIT_BUF_SZ) = 0

Size of instance shared memory (4KB) (INSTANCE_MEMORY) = AUTOMATIC
Backup buffer default size (4KB) (BACKBUFSZ) = 1024
Restore buffer default size (4KB) (RESTBUFSZ) = 1024

Sort heap threshold (4KB) (SHEAPTHRES) = 20000

Directory cache support (DIR_CACHE) = YES

Application support layer heap size (4KB) (ASLHEAPSZ) = 15


Max requester I/O block size (bytes) (RQRIOBLK) = 32767
Query heap size (4KB) (QUERY_HEAP_SZ) = 1000
DRDA services heap size (4KB) (DRDA_HEAP_SZ) = 128

Priority of agents (AGENTPRI) = SYSTEM


Max number of existing agents (MAXAGENTS) = 400
Agent pool size (NUM_POOLAGENTS) = 200(calculated)
Initial number of agents in pool (NUM_INITAGENTS) = 0
Max number of coordinating agents (MAX_COORDAGENTS) = (MAXAGENTS - NUM_INITAGENTS)
Max no. of concurrent coordinating agents (MAXCAGENTS) = MAX_COORDAGENTS
Max number of client connections (MAX_CONNECTIONS) = MAX_COORDAGENTS

Keep fenced process (KEEPFENCED) = YES


Number of pooled fenced processes (FENCED_POOL) = MAX_COORDAGENTS
Initialize fenced process with JVM (INITFENCED_JVM) = NO
Initial number of fenced processes (NUM_INITFENCED) = 0

Index re-creation time (INDEXREC) = RESTART

Transaction manager database name (TM_DATABASE) = 1ST_CONN


Transaction resync interval (sec) (RESYNC_INTERVAL) = 180

SPM name (SPM_NAME) =


SPM log size (SPM_LOG_FILE_SZ) = 256
SPM resync agent limit (SPM_MAX_RESYNC) = 20
SPM log path (SPM_LOG_PATH) =

TCP/IP Service name (SVCENAME) = db2c_insta8


Discovery mode (DISCOVER) = SEARCH
Discovery communication protocols (DISCOVER_COMM) = TCPIP
Discover server instance (DISCOVER_INST) = ENABLE

Maximum query degree of parallelism (MAX_QUERYDEGREE) = ANY


Enable intra-partition parallelism (INTRA_PARALLEL) = NO

No. of int. communication buffers(4KB)(FCM_NUM_BUFFERS) = 4096


Node connection elapse time (sec) (CONN_ELAPSE) = 10
Max number of node connection retries (MAX_CONNRETRIES) = 5
Max time difference between nodes (min) (MAX_TIME_DIFF) = 60

db2start/db2stop timeout (min) (START_STOP_TIME) = 10

Database Configuration Parameters
The following is a list of the database configuration parameters for the storesdb database (v8).
Database Configuration for Database storesdb

Database configuration release level = 0x0a00


Database release level = 0x0a00

Database territory = US
Database code page = 1208
Database code set = UTF-8
Database country/region code = 1

Dynamic SQL Query management (DYN_QUERY_MGMT) = DISABLE

Discovery support for this database (DISCOVER_DB) = ENABLE

Default query optimization class (DFT_QUERYOPT) = 5


Degree of parallelism (DFT_DEGREE) = 1
Continue upon arithmetic exceptions (DFT_SQLMATHWARN) = NO
Default refresh age (DFT_REFRESH_AGE) = 0
Number of frequent values retained (NUM_FREQVALUES) = 10
Number of quantiles retained (NUM_QUANTILES) = 20

Backup pending = NO

Database is consistent = YES


Rollforward pending = NO
Restore pending = NO

Multi-page file allocation enabled = NO

Log retain for recovery status = NO


User exit for logging status = NO

Data Links Token Expiry Interval (sec) (DL_EXPINT) = 60


Data Links Write Token Init Expiry Intvl(DL_WT_IEXPINT) = 60
Data Links Number of Copies (DL_NUM_COPIES) = 1
Data Links Time after Drop (days) (DL_TIME_DROP) = 1
Data Links Token in Uppercase (DL_UPPER) = NO
Data Links Token Algorithm (DL_TOKEN) = MAC0

Database heap (4KB) (DBHEAP) = 1200


Size of database shared memory (4KB) (DATABASE_MEMORY) = AUTOMATIC
Catalog cache size (4KB) (CATALOGCACHE_SZ) = (MAXAPPLS*4)
Log buffer size (4KB) (LOGBUFSZ) = 8
Utilities heap size (4KB) (UTIL_HEAP_SZ) = 5000
Buffer pool size (pages) (BUFFPAGE) = 1000
Extended storage segments size (4KB) (ESTORE_SEG_SZ) = 16000
Number of extended storage segments (NUM_ESTORE_SEGS) = 0
Max storage for lock list (4KB) (LOCKLIST) = 100

Max size of appl. group mem set (4KB) (APPGROUP_MEM_SZ) = 30000
Percent of mem for appl. group heap (GROUPHEAP_RATIO) = 70
Max appl. control heap size (4KB) (APP_CTL_HEAP_SZ) = 128

Sort heap thres for shared sorts (4KB) (SHEAPTHRES_SHR) = (SHEAPTHRES)


Sort list heap (4KB) (SORTHEAP) = 256
SQL statement heap (4KB) (STMTHEAP) = 2048
Default application heap (4KB) (APPLHEAPSZ) = 256
Package cache size (4KB) (PCKCACHESZ) = (MAXAPPLS*8)
Statistics heap size (4KB) (STAT_HEAP_SZ) = 4384

Interval for checking deadlock (ms) (DLCHKTIME) = 10000


Percent. of lock lists per application (MAXLOCKS) = 10
Lock timeout (sec) (LOCKTIMEOUT) = -1

Changed pages threshold (CHNGPGS_THRESH) = 60


Number of asynchronous page cleaners (NUM_IOCLEANERS) = 1
Number of I/O servers (NUM_IOSERVERS) = 3
Index sort flag (INDEXSORT) = YES
Sequential detect flag (SEQDETECT) = YES
Default prefetch size (pages) (DFT_PREFETCH_SZ) = 32

Track modified pages (TRACKMOD) = OFF

Default number of containers = 1


Default tablespace extentsize (pages) (DFT_EXTENT_SZ) = 32

Max number of active applications (MAXAPPLS) = AUTOMATIC


Average number of active applications (AVG_APPLS) = 1
Max DB files open per application (MAXFILOP) = 64

Log file size (4KB) (LOGFILSIZ) = 1000


Number of primary log files (LOGPRIMARY) = 3
Number of secondary log files (LOGSECOND) = 2
Changed path to log files (NEWLOGPATH) =
Path to log files = /home/insta8/insta8/NODE0000/SQL00002/SQLOGDIR/
Overflow log path (OVERFLOWLOGPATH) =
Mirror log path (MIRRORLOGPATH) =
First active log file =
Block log on disk full (BLK_LOG_DSK_FUL) = NO
Percent of max active log space by transaction(MAX_LOG) = 0
Num. of active log files for 1 active UOW(NUM_LOG_SPAN) = 0

Group commit count (MINCOMMIT) = 1


Percent log file reclaimed before soft chckpt (SOFTMAX) = 100
Log retain for recovery enabled (LOGRETAIN) = OFF
User exit for logging enabled (USEREXIT) = OFF

Auto restart enabled (AUTORESTART) = ON


Index re-creation time (INDEXREC) = SYSTEM (RESTART)
Default number of loadrec sessions (DFT_LOADREC_SES) = 1
Number of database backups to retain (NUM_DB_BACKUPS) = 12
Recovery history retention (days) (REC_HIS_RETENTN) = 366

TSM management class (TSM_MGMTCLASS) =
TSM node name (TSM_NODENAME) =
TSM owner (TSM_OWNER) =
TSM password (TSM_PASSWORD) =
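The settings above are the output of the CLP command db2 get db cfg. As a sketch (the database name eddb and the new LOGPRIMARY value are illustrative assumptions), an individual parameter can be displayed and changed like this:

```shell
# Display the full database configuration (the listing shown above)
db2 get db cfg for eddb

# Change one parameter, for example the number of primary log files
db2 update db cfg for eddb using LOGPRIMARY 5

# Disconnect so the new value can take effect on the next connection
db2 terminate
```

Many database configuration parameters are read only at activation time, so a clean disconnect (or deactivate/activate) is needed before the new value is used.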



Appendix E

Additional Reference Information

Additional Reference Information 02-2003 E-1


2002,2003 International Business Machines Corporation
Additional documentation for your study includes the documents listed below. There are also
many other documents (white papers, books, and web pages) available from IBM and
third-party vendors.

For More Information


DB2 Universal Database v7.1 Database Administration Certification Guide (ISBN 0-13-091366-9)
DB2 Universal Database Quick Beginnings (GC09-2970)
DB2 Universal Database Administration Guide: Planning (SC09-2946)
DB2 Universal Database Administration Guide: Implementation, Volume 1 and 2 (SC09-2944)
DB2 Universal Database Command Reference (SC09-2951)
DB2 Universal Database Administration Guide: Performance, Volume 1 and 2 (SC09-2945)
DB2 Universal Database System Monitor Guide and Reference (SC09-2956)
DB2 Universal Database SQL Reference, Volume 1 and 2 (SC09-2974 and SC09-2975)
DB2 Universal Database Data Movement Utilities Guide and Reference (SC09-2955)
DB2 Universal Database Troubleshooting Guide (GC09-2850)

IBM offers many courses for your information needs. Check these web sites for more
information on in-depth courses:
http://www-3.ibm.com/software/info/education
and
http://www.ibm.com/software/data/db2/selfstudy

Related Classes
CBT self study course, Fast Path to DB2 UDB for Experienced Relational DBAs
(CT28)
DB2 Universal Database Administration Workshop for UNIX (CF211)
DB2 UDB Advanced Admin Workshop (CF45)



Appendix F

The StoresDB Database

The StoresDB Database 02-2003 F-1


The StoresDB Database Map

Joins in the StoresDB Database

The original page shows a map of the StoresDB tables with lines connecting their join
columns. The joins depicted are:

cust_calls.customer_num = customer.customer_num
cust_calls.call_code = call_type.call_code
customer.state = state.code
orders.customer_num = customer.customer_num
items.order_num = orders.order_num
items.stock_num, items.manu_code = stock.stock_num, stock.manu_code
catalog.stock_num, catalog.manu_code = stock.stock_num, stock.manu_code
stock.manu_code = manufact.manu_code

Columns listed as SERIAL in the following pages are maintained with automatic numbering on
INSERT. In an Oracle database, this type of column is typically implemented with a
SEQUENCE.

In DB2 UDB these columns are managed to the same effect using one of three techniques (see
page 6-15):
Implement a trigger to generate a sequential number (older method)
Use a DB2 UDB sequence
Define an identity column for the table
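As a sketch of the last two techniques (the table and column names below are illustrative, not the exact course definitions), the SQL looks like this in a CLP input file:

```sql
-- Technique 2: a DB2 UDB sequence (names are illustrative)
create sequence cust_seq start with 101 increment by 1 no cache;
insert into customer (customer_num, fname, lname)
  values (nextval for cust_seq, 'Ludwig', 'Pauli');

-- Technique 3: an identity column generated by DB2 on INSERT
create table customer2
  (customer_num integer generated always as identity
     (start with 101, increment by 1),
   fname char(15),
   lname char(15));
```

With an identity column you simply omit customer_num from the INSERT column list and DB2 supplies the next value.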



The customer table
customer_num fname lname company address1 address2 city state zipcode phone

SERIAL CHAR(15) CHAR(15) CHAR(20) CHAR(20) CHAR(20) CHAR(15) CHAR(2) CHAR(5) CHAR(18)

101 Ludwig Pauli All Sports Supplies 213 Erstwild Court Sunnyvale CA 94086 408-789-8075

102 Carole Sadler Sports Spot 785 Geary St San Francisco CA 94117 415-822-1289

103 Philip Currie Phil's Sports 654 Poplar P.O.Box 3498 Palo Alto CA 94303 650-328-4543

104 Anthony Higgins Play Ball! East Shopping Cntr. 422 Bay Road Redwood City CA 94026 650-368-1100

105 Raymond Vector Los Altos Sports 1899 La Loma Drive Los Altos CA 94022 650-776-3249

106 George Watson Watson & Son 1143 Carver Place Mountain View CA 94063 650-389-8789

107 Charles Ream Athletic Supplies 41 Jordan Avenue Palo Alto CA 94304 650-356-9876

108 Donald Quinn Quinn's Sports 587 Alvarado Redwood City CA 94063 650-544-8729

109 Jane Miller Sport Stuff Mayfair Mart 7345 Ross Blvd. Sunnyvale CA 94086 408-723-8789

110 Roy Jaeger AA Athletics 520 Topaz Way Redwood City CA 94062 650-743-3611

111 Frances Keyes Sports Center 3199 Sterling Court Sunnyvale CA 94085 408-277-7245

112 Margaret Lawson Runners & Others 234 Wyandotte Way Los Altos CA 94022 650-887-7235

113 Lana Beatty Sportstown 654 Oak Grove Menlo Park CA 94025 650-356-9982

114 Frank Albertson Sporting Place 947 Waverly Place Redwood City CA 94062 650-886-6677

115 Alfred Grant Gold Medal Sports 776 Gary Avenue Menlo Park CA 94025 650-356-1123

116 Jean Parmelee Olympic City 1104 Spinosa Drive Mountain View CA 94040 650-534-8822

117 Arnold Sipes Kids Korner 850 Lytton Court Redwood City CA 94063 650-245-4578

118 Dick Baxter Blue Ribbon Sports 5427 College Oakland CA 94609 650-655-0011

119 Bob Shorter The Triathletes Club 2405 Kings Highway Cherry Hill NJ 08002 609-663-6079

120 Fred Jewell Century Pro Shop 6627 N. 17th Way Phoenix AZ 85016 602-265-8754

121 Jason Wallack City Sports Lake Biltmore Mall 350 W. 23rdSt Wilmington DE 19898 302-366-7511

122 Cathy O'Brian The Sporting Life 543 Nassau Street Princeton NJ 08540 609-342-0054

123 Marvin Hanlon Bay Sports 10100 Bay Meadows Rd Suite 1020 Jacksonville FL 32256 904-823-4239

124 Chris Putnum Putnum's Putters 4715 S. E. Adams Blvd Suite 909C Bartlesville OK 74006 918-355-2074

125 James Henry Total Fitness Sports 1450 Commonwealth Ave. Brighton MA 02135 617-232-4159

126 Eileen Neelie Neelie's Discount Sports 2539 South Utica St Denver CO 80219 303-936-7731

127 Kim Satifer Big Blue Bike Shop Blue Island Square 12222 Gregory St Blue Island NY 60406 312-944-5691

128 Frank Lessor Phoenix University Athletic Department 1817 N. Thomas Rd Phoenix AZ 85008 602-533-1817



The orders table
order_num order_date customer_num ship_instruct backlog po_num ship_date ship_weight ship_charge paid_date

SERIAL DATE INTEGER CHAR(40) CHAR(1) CHAR(10) DATE DECIMAL(8,2) MONEY(6,2) DATE

1001 05/20/1998 104 express n B77836 06/01/1998 20.40 10.00 07/22/1998

1002 05/21/1998 101 PO on box; delivery back door only n 9270 05/26/1998 50.60 15.30 06/03/1998

1003 05/22/1998 104 express n B77890 05/23/1998 35.60 10.80 06/24/1998

1004 05/22/1998 106 ring bell twice y 8006 05/30/1998 95.80 19.20

1005 05/24/1998 116 call before delivery n 2865 06/09/1998 80.80 16.20 06/21/1998

1006 05/30/1998 112 after 10AM y Q13557 70.80 14.20

1007 05/31/1998 117 n 278693 06/05/1998 125.90 25.20

1008 06/07/1998 110 closed Monday y LZ230 07/06/1998 45.60 13.80 07/21/1998

1009 06/14/1998 111 door next to grocery n 4745 06/21/1998 20.40 10.00 08/21/1998

1010 06/17/1998 115 deliver 776 King St. if no answer n 429Q 06/29/1998 40.60 12.30 08/22/1998

1011 06/18/1998 104 express n B77897 07/03/1998 10.40 5.00 08/29/1998

1012 06/18/1998 117 n 278701 06/29/1998 70.80 14.20

1013 06/22/1998 104 express n B77930 07/10/98 60.80 12.20 07/31/98

1014 06/25/98 106 ring bell, kick door loudly n 8052 07/03/98 40.60 12.30 07/10/98

1015 06/27/98 110 closed Mondays n MA003 07/16/98 20.60 6.30 08/31/98

1016 06/29/98 119 delivery entrance off Camp St. n PC6782 07/12/98 35.00 11.80

1017 07/09/98 120 north side of clubhouse n DM354331 07/13/98 60.00 18.00

1018 07/10/98 121 SW corner of Biltmore Mall n S22942 07/13/98 70.50 20.00 08/06/98

1019 07/11/98 122 closed til noon Mondays n Z55709 07/16/98 90.00 23.00 08/06/98

1020 07/11/98 123 express n W2286 07/16/98 14.00 8.50 09/20/98

1021 07/23/98 124 ask for Elaine n C3288 07/25/98 40.00 12.00 08/22/98

1022 07/24/98 126 express n W9925 07/30/98 15.00 13.00 09/02/98

1023 07/24/98 127 no deliveries after 3 p.m. n KF2961 07/30/98 60.00 18.00 08/22/98



The items table
item_num order_num stock_num manu_code quantity total_price
SMALLINT INTEGER SMALLINT CHAR(3) SMALLINT MONEY(8,2)
1 1001 1 HRO 1 $250.00
1 1002 4 HSK 1 $960.00
2 1002 3 HSK 1 $240.00
1 1003 9 ANZ 1 $20.00
2 1003 8 ANZ 1 $840.00
3 1003 5 ANZ 5 $99.00
1 1004 1 HRO 1 $250.00
2 1004 2 HRO 1 $126.00
3 1004 3 HSK 1 $240.00
4 1004 1 HSK 1 $800.00
1 1005 5 NRG 10 $280.00
2 1005 5 ANZ 10 $198.00
3 1005 6 SMT 1 $36.00
4 1005 6 ANZ 1 $48.00
1 1006 5 SMT 5 $125.00
2 1006 5 NRG 5 $140.00
3 1006 5 ANZ 5 $99.00
4 1006 6 SMT 1 $36.00
5 1006 6 ANZ 1 $48.00
1 1007 1 HRO 1 $250.00
2 1007 2 HRO 1 $126.00
3 1007 3 HSK 1 $240.00
4 1007 4 HRO 1 $480.00
5 1007 7 HRO 1 $600.00
1 1008 8 ANZ 1 $840.00
2 1008 9 ANZ 5 $100.00
1 1009 1 SMT 1 $450.00
1 1010 6 SMT 1 $36.00
2 1010 6 ANZ 1 $48.00
1 1011 5 ANZ 5 $99.00
1 1012 8 ANZ 1 $840.00
2 1012 9 ANZ 10 $200.00
1 1013 5 ANZ 1 $19.80
2 1013 6 SMT 1 $36.00



3 1013 6 ANZ 1 $48.00
4 1013 9 ANZ 2 $40.00
1 1014 4 HSK 1 $960.00
2 1014 4 HRO 1 $480.00
1 1015 1 SMT 1 $450.00
1 1016 101 SHM 2 $136.00
2 1016 109 PRC 3 $90.00
3 1016 110 HSK 1 $308.00
4 1016 114 PRC 1 $120.00
1 1017 201 NKL 4 $150.00
2 1017 202 KAR 1 $230.00
3 1017 301 SHM 2 $204.00
1 1018 307 PRC 2 $500.00
2 1018 302 KAR 3 $15.00
3 1018 110 PRC 1 $236.00
4 1018 5 SMT 4 $100.00
5 1018 304 HRO 1 $280.00
1 1019 111 SHM 3 $1499.97
1 1020 204 KAR 2 $90.00
2 1020 301 KAR 4 $348.00
1 1021 201 NKL 2 $75.00
2 1021 201 ANZ 3 $225.00
3 1021 202 KAR 3 $690.00
4 1021 205 ANZ 2 $624.00
1 1022 309 HRO 1 $40.00
2 1022 303 PRC 2 $96.00
3 1022 6 ANZ 2 $96.00
1 1023 103 PRC 2 $40.00
2 1023 104 PRC 2 $116.00
3 1023 105 SHM 1 $80.00
4 1023 110 SHM 1 $228.00
5 1023 304 ANZ 1 $170.00
6 1023 306 SHM 1 $190.00



The cust_calls table
customer_num call_dtime user_id call_code call_descr res_dtime res_descr
INTEGER DATETIME YEAR TO MINUTE CHAR(32) CHAR(1) CHAR(240) DATETIME YEAR TO MINUTE CHAR(240)
106 1998-06-12 08:20 maryj D Order was received, but two... 1998-06-12 08:25 Authorized credit for two...
110 1998-07-07 10:24 richc L Order placed one month ago... 1998-07-07 10:30 Checked with shipping...
119 1998-07-01 15:00 richc B Bill does not reflect credit from... 1998-07-02 08:21 Spoke with Jane Akant...
121 1998-07-10 14:05 maryj O Customer likes our merchandise... 1998-07-10 14:06 Sent note to marketing...
127 1998-07-31 14:30 maryj I Received Hero watches... Sent memo to shipping
116 1997-11-28 13:34 mannyn I Received plain white swim caps... 1997-11-28 16:47 Shipping found correct...
116 1997-12-21 11:24 mannyn I Second complaint from this... 1997-12-27 08:19 Memo to shipping...

The manufact table

manu_code manu_name lead_time
CHAR(3) CHAR(15) INTERVAL DAY TO DAY

SMT Smith 3
ANZ Anza 5
NRG Norge 7
HSK Husky 5
HRO Hero 4
SHM Shimara 30
KAR Karsten 21
NKL Nikolus 8
PRC Pro Cycle 9

The state table

code sname
CHAR(2) CHAR(15)

AK Alaska
AL Alabama
AR Arkansas
AZ Arizona
CA California
... ...
DC D.C.
PR Puerto Rico

The call_type table


call_code code_descr
CHAR(1) CHAR(30)
B billing error
D damaged goods
I incorrect merchandise sent
L late shipment
O other



The stock table
stock_num manu_code description unit_price unit unit_descr
SMALLINT CHAR(3) CHAR(15) MONEY(6,2) CHAR(4) CHAR(15)

1 HRO baseball gloves 250 case 10 gloves/case
1 HSK baseball gloves 800 case 10 gloves/case
1 SMT baseball gloves 450 case 10 gloves/case
2 HRO baseball 126 case 24/case
3 HSK baseball bat 240 case 12/case
3 SHM baseball bat 280 case 12/case
4 HSK football 960 case 24/case
4 HRO football 480 case 24/case
5 NRG tennis racquet 28 each each
5 SMT tennis racquet 25 each each
5 ANZ tennis racquet 19.8 each each
6 SMT tennis ball 36 case 24 cans/case
6 ANZ tennis ball 48 case 24 cans/case
7 HRO basketball 600 case 24/case
8 ANZ volleyball 840 case 24/case
9 ANZ volleyball net 20 each each
101 PRC bicycle tires 88 box 4/box
101 SHM bicycle tires 68 box 4/box
102 SHM bicycle brakes 220 case 4 sets/case
102 PRC bicycle brakes 480 case 4 sets/case
103 PRC frnt derailleur 20 each each
104 PRC rear derailleur 58 each each
105 PRC bicycle wheels 53 pair pair
105 SHM bicycle wheels 80 pair pair
106 PRC bicycle stem 23 each each
107 PRC bicycle saddle 70 pair pair
108 SHM crankset 45 each each
109 PRC pedal binding 30 case 6 pairs/case



109 SHM pedal binding 200 case 4 pairs/case
110 PRC helmet 236 case 4/case
110 ANZ helmet 244 case 4/case
110 SHM helmet 228 case 4/case
110 HRO helmet 260 case 4/case
110 HSK helmet 308 case 4/case
111 SHM "10-spd, assmbld" 499.99 each each
112 SHM "12-spd, assmbld" 549 each each
113 SHM "18-spd, assmbld" 685.9 each each
114 PRC bicycle gloves 120 case 10 pairs/case
201 NKL golf shoes 37.5 each each
201 ANZ golf shoes 75 each each
201 KAR golf shoes 90 each each
202 NKL metal woods 174 case 2 sets/case
202 KAR std woods 230 case 2 sets/case
203 NKL irons/wedge 670 case 2 sets/case
204 KAR putter 45 each each
205 NKL 3 golf balls 312 case 24/case
205 ANZ 3 golf balls 312 case 24/case
205 HRO 3 golf balls 312 case 24/case
301 NKL running shoes 97 each each
301 HRO running shoes 42.5 each each
301 SHM running shoes 102 each each
301 PRC running shoes 75 each each
301 KAR running shoes 87 each each
301 ANZ running shoes 95 each each
302 HRO ice pack 4.5 each each
302 KAR ice pack 5 each each
303 PRC socks 48 box 24 pairs/box



303 KAR socks 36 box 24 pairs/box
304 ANZ watch 170 box 10/box
304 HRO watch 280 box 10/box
305 HRO first-aid kit 48 case 4/case
306 PRC tandem adapter 160 each each
306 SHM tandem adapter 190 each each
307 PRC infant jogger 250 each each
308 PRC twin jogger 280 each each
309 HRO ear drops 40 case 20/case
309 SHM ear drops 40 case 20/case
310 SHM kick board 80 case 10/case
310 ANZ kick board 84 case 12/case
311 SHM water gloves 48 box 4 pairs/box
312 SHM racer goggles 96 box 12/box
312 HRO racer goggles 72 box 12/box
313 SHM swim cap 72 box 12/box
313 ANZ swim cap 60 box 12/box



The catalog table
catalog_num stock_num manu_code cat_desc cat_picture cat_advert
SERIAL SMALLINT CHAR(3) TEXT BYTE VARCHAR(255)

10001 1 HRO Brown leather. Specify first baseman's or infield/outfield style. Specify right- or left-handed. (PICTURE) Your First Season's Baseball Glove
10002 1 HSK Babe Ruth signature glove. Black leather. Infield/outfield style. Specify right- or left-handed. (PICTURE) "All Leather, Hand Stitched, Deep Pockets, Sturdy Webbing That Won't Let Go"
10003 1 SMT Catcher's mitt. Brown leather. Specify right- or left-handed. (PICTURE) A Sturdy Catcher's Mitt With the Perfect Pocket
10004 2 HRO "Jackie Robinson signature ball. Highest professional quality, used by National League." (PICTURE) "Highest Quality Ball Available, from the Hand-Stitching to the Robinson Signature"
10005 3 HSK "Pro-style wood. Available in sizes: 31, 32, 33, 34, 35." (PICTURE) High-Technology Design Expands the Sweet Spot
10006 3 SHM "Aluminum. Blue with black tape. 31"", 20 oz or 22 oz; 32"", 21 oz or 23 oz; 33"", 22 oz or 24 oz;" (PICTURE) Durable Aluminum for High School and Collegiate Athletes



10007 4 HSK Norm Van Brocklin signature style. (PICTURE) Quality Pigskin with Norm Van Brocklin Signature
10008 4 HRO NFL Style pigskin. (PICTURE) Highest Quality Football for High School and Collegiate Competitions
10009 5 NRG Graphite frame. Synthetic strings. (PICTURE) Wide Body Amplifies Your Natural Abilities by Providing More Power Through Aerodynamic Design
10010 5 SMT Aluminum frame. Synthetic strings. (PICTURE) Mid-Sized Racquet for the Improving Player
10011 5 ANZ "Wood frame, cat-gut strings." (PICTURE) Antique Replica of Classic Wooden Racquet Built with Cat-Gut Strings
10012 6 SMT Soft yellow color for easy visibility in sunlight or artificial light. (PICTURE) "High-Visibility Tennis, Day or Night"
10013 6 ANZ "Pro-core. Available in neon yellow, green, and pink." (PICTURE) Durable Construction Coupled with the Brightest Colors Available
10014 7 HRO Indoor. Classic NBA style. Brown leather. (PICTURE) Long-Life Basketballs for Indoor Gymnasiums



10015 8 ANZ Indoor. Finest leather. Professional quality. (PICTURE) Professional Volleyballs for Indoor Competitions
10016 9 ANZ Steel eyelets. Nylon cording. Double-stitched. Sanctioned by the National Athletic Congress (PICTURE) Sanctioned Volleyball Netting for Indoor Professional and Collegiate Competition
... ... ... ... ... ...
10074 302 HRO Re-usable ice pack. Store in the freezer for instant first-aid. Extra capacity to accommodate water and ice. (PICTURE) Water Compartment Combines with Ice to Provide Optimal Orthopedic Treatment



Appendix LE

Lab Exercises Environment

Lab Exercises Environment 02-2003 LE-1


Overview

In this appendix, you will learn how to connect to the Lab Exercises
environment in the IBM DB2 classrooms:
Client Setup (Windows)

DB2 Server Setup (Windows)


DB2 Server Setup (UNIX / Linux)



Client Setup (Windows)

Information you will need for the client workstation is:


Workstation name
Workstation login
Workstation password
Client software installation location


Client Setup (Windows)


Workstation name _______________________________________
Workstation login _______________________________________
Workstation password ____________________________________
Client software installation location:
Directory ______________________________________________
______________________________________________________



DB2 Server Setup (Windows)

Information you will need for the server workstation is:


COMPUTERNAME
DB2PATH
DB2INSTANCE
etc\services ports


DB2 Server Setup (Windows)


Each student will use a PC workstation that has the DB2 product installed and may already
have database server instances created.
Use the following command from a DB2 command line window:
set
This will list your environment setup for your logon account.
COMPUTERNAME _____________________________________
DB2PATH _____________________________________________
DB2INSTANCE ________________________________________
etc\services ports ________________________________________



DB2 Server Setup (UNIX/Linux)

Information you will need for the UNIX/Linux server is:


Team Number
Login
Password
Host name
DB2PATH
DB2INSTANCE
/etc/services ports


DB2 Server Setup (UNIX/Linux)


Use the Korn shell on UNIX or bash on Linux; both set environment variables with:
export ...
Set up a file (e.g., $HOME/myenv) containing the environment variables that will be used
throughout the course; it can be sourced to set up new telnet windows when needed.
Team Number __________________________________________
Login _________________________________________________
Password ______________________________________________
Host name _____________________________________________
DB2PATH _____________________________________________
DB2INSTANCE ________________________________________
/etc/services ports _______________________________________
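As a sketch (the instance name and path below are assumptions taken from the lab examples; substitute the values you recorded above), the myenv file might contain:

```shell
# $HOME/myenv -- example values only; use your team's assignments
export DB2INSTANCE=insta8

# db2profile sets PATH and the other DB2 variables for the instance
. /home/insta8/sqllib/db2profile
```

Sourcing the instance's db2profile is usually enough; add further export lines only for values specific to your team.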



Source the file that you have just created to set the environment of your current shell (and
any subshells), then display your environment to double-check.
. ./myenv
env



DB2 Platforms

On the Windows platform, you have three ways to work with DB2:
Graphical User Interface
Command Line Processor
Command Window (using the CLP)

On the Unix/Linux platforms, you use a command window


established by your logon session (for example, telnet)


Graphical User Interface


Use the mouse and navigate to the required program. Example:
Start > Programs > IBM DB2 > Command Line Tools >
Command Center
This selection opens the DB2 Command Center window.

Command Line Processor


Use the mouse and navigate to the DB2 CLP. Example:
Start > Programs > IBM DB2 > Command Line Tools >
Command Line Processor
This selection opens a DB2 Command Line Processor window and starts the CLP for you.



Command Window
Use the mouse and navigate to the DB2 Command Window. Example:
Start > Programs > IBM DB2 > Command Line Tools >
Command Window
This selection opens a DB2 Command Window. To start the CLP in this window, you must type:
db2

LE-8 Lab Exercises Environment


DB2 Command Line Syntax

db2 [option-flag ...] { db2-command | sql-statement | ? [phrase | message | sql-state | class-code] }


The basic Command Line syntax for the CLP is shown above.



DB2 Online Reference

Command help is available in several forms:


Online command reference:
db2 ?
db2 ? command string
db2 ? SQLnnnn (nnnn = 4 or 5 digit SQLCODE)
db2 ? nnnnn (nnnnn = 5 digit SQLSTATE)

Online reference manuals:


pdf files
HTML pages


While the DB2 server is running, you can use the CLP to get command line help as shown
above.
You can also view pdf/HTML technical document files if they were installed with the server.
The IBM DB2 Command Reference document contains further information on using the CLP.
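For example (the command name and message identifiers below are illustrative), help can be requested from the CLP like this:

```shell
db2 ?                    # list the CLP command summary
db2 "? backup database"  # syntax help for one command
db2 "? SQL0204N"         # explain an SQLCODE message
db2 "? 42704"            # explain an SQLSTATE
```

Quoting the argument prevents the operating system shell from interpreting the question mark.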



Starting a Command Line Session

Non-interactive mode
db2 connect to eddb
db2 select * from syscat.tables | more

Interactive mode
db2
db2=> connect to eddb
db2=> select * from syscat.tables


Use the non-interactive mode if you need to issue OS commands while performing your tasks.



QUIT vs. TERMINATE vs. CONNECT RESET

Command         Terminates CLP        Disconnects Database
                Back-end Process      Connection

quit            No                    No
terminate       Yes                   Yes
connect reset   No                    Yes, if CONNECT=1 (RUOW)


There are several ways to finish your DB2 session.


As shown above, simply issuing a quit command while in the CLP does not terminate your
resource use of the server.
For a clean separation, such as when you want new database configuration parameter values to
take effect, you must terminate your database connection.
You may also need to force other applications off the server.
Example commands:
db2=> quit
$
db2 terminate
$
db2 force applications all
$



List CLP Command Options

Use the DB2 list command to view the Command Line Processor
option settings
db2 list command options

The list created from this command is shown below.


Command Line Processor Option Settings


Backend process wait time (Seconds) (DB2BQTIME) = 1
No. of retries to connect to backend (DB2BQTRY) = 60
Request queue wait time (seconds) (DB2RQTIME) = 5
Input queue wait time (seconds) (DB2IQTIME) = 5
Command options (DB2OPTIONS) =

Option Description Current Setting


-a Display SQLCA OFF
-c Auto-Commit ON
-e Display SQLCODE/SQLSTATE OFF
-f Read from Input File OFF
-l Log commands in history file OFF
-o Display output ON
-p Display Interactive input prompt ON



-r Save output to report file OFF
-s Stop execution on command error OFF
-t Use ';' for statement termination OFF
-v Echo current command OFF
-w Display FETCH/SELECT warning messages ON
-x Suppress printing of column headings OFF
-z Save all output to output file OFF



Modify CLP Options

You can modify CLP options:


Temporary for a command
Temporary for an interactive CLP session
Temporary for a non-interactive CLP session
Every session


Temporary for a command:


db2 -r options.rep list command options
db2 -svtf create.tab3
db2 +c "update tab3 set salary=salary + 100"
Temporary for an interactive CLP session:
db2=>update command options using c off a on
Temporary for a non-interactive CLP session:
export DB2OPTIONS="-svt" (UNIX)
set DB2OPTIONS="-svt" (Intel)
db2 -f create.tab3
Every session:
Place environment settings in UNIX db2profile, in OS/2 config.sys, or System
Program Group in Windows NT



Input File - No Operating System Commands

Create a file (create.tab shown below)


Edit the file, specifying the commands you want to execute
Execute the file using DB2


Edit create.tab

-- comment: db2 -svtf create.tab


connect to sample;

create table tab3


(name varchar(20) not null,
phone char(40),
salary dec(7,2));

select * from tab3;

commit work;

connect reset;

Execute the file:


db2 -svtf create.tab



Input File - Operating System Commands

Edit the file (seltab):


UNIX or Linux - vi seltab
echo "Table Name Is" $1 > out.sel
db2 "select * from $1" >> out.sel
OS/2 - epm seltab.cmd
Windows - edit seltab.cmd
echo 'Table Name Is' %1 > out.sel
db2 Select * from %1 >> out.sel
Execute the file:
seltab org


out.sel contents:

Table Name Is org


DEPTNUMB DEPTNAME MANAGER DIVISION LOCATION
10 Head Office 160 Corporate New York
15 New England 50 Eastern Boston
20 Mid Atlantic 10 Eastern Washington
38 South Atlantic 30 Eastern Atlanta
42 Great Lakes 100 Midwest Chicago
51 Plains 140 Midwest Dallas
66 Pacific 270 Western San Francisco
84 Mountain 290 Western Denver



Index

A
Administration Client 1-18, 2-8

B
Backup 12-11
Buffer pool 1-6
Buffer pool allocation 4-3
BUFFPAGE 13-16

C
CASCADE delete rule 10-7
CHNGPGS_THRESH 13-16
Concurrency 1-20
Configuration files 1-11
Constraint violations 8-18
Container 1-5, 5-16
CREATE DATABASE 4-10
CREATETAB 7-4
Creating table spaces 5-19

D
DAS 3-6
Database Administration Server 1-16, 1-17
DASD 1-26
Data types 6-3
Database configuration file 4-13
Database subdirectory 4-12
Date-time data types 6-11
DB2 UDB memory elements 13-5
DB2 UDB storage diagram 5-4
db2icrt Create instance 3-7
DB2INSTANCE environment variable 3-8
DBADM 4-7, 7-4, 8-4
Default table spaces 4-14
Delete rules
  NO ACTION, CASCADE, SET NULL 10-7
Disk storage requirements 5-21, 5-24

E
Environment variables 1-12, 13-4
Explain 9-25
Extent allocation 5-18
Extent size 1-5, 13-25

F
Fenced user 3-5

I
import 8-4
Import and load summary 8-11
Import data file types 8-5
Index placement 9-22
Index syntax 9-24
Indexing differences 9-16
Instance 1-3
Instance authority 3-11
Instance configuration parameters 13-4

L
Large Object (LOB) string data types 6-10
load 8-4
LOAD authority 8-4
load utility 8-9
Logging
  circular 12-5
  configuration parameters 12-10
  log retention 12-6
LOGSPRIMARY 13-31
LOGSSECOND 13-31

M
MAXAGENTS 13-28
MAXAPPLS 13-28
Mode ANSI 4-5
Monitoring disk usage 5-26, 7-12, 8-20

N
NO ACTION delete rule 10-7
Null
  handling 6-17
NUM_IOCLEANERS 13-16
Numeric data types 6-4

O
Oracle
  concurrency and locks 1-20
  isolation levels 1-20
  transactions 1-20

P
Package
  BIND, PREP 1-14
Page size 13-23
Partitioning 7-5
PREDICATE 1-26
Prefetch size 13-27
Privileges 4-8
Privileges, CONTROL 8-4
Privileges, INSERT 8-4
Privileges, SELECT 8-4

R
Registry variables 1-12
Restore
  recovery 12-12
Roll forward recovery utility 12-14
Run-Time Client 1-18, 2-3

S
SARGABLE 1-26
SCALAR 1-26
Schema 4-6
Security
  instance 1-19
Self-tuning 13-30
Sequences 6-15, F-2
SET INTEGRITY statement 8-18
SET NULL delete rule 10-7
StoresDB database
  call_type F-7
  catalog F-11
  cust_calls F-7
  customer F-3
  items F-5
  manufact F-7
  orders F-4
  stock F-8
String data types 6-8
SYSADM 8-4
SYSADM, SYSADM_GROUP 3-4
System Catalog tables 4-15

T
Table space 1-4, 5-5
  characteristics 5-14
Table space usage 5-15
Tuning the buffer pool 13-21
Types of DB2 UDB indexes 9-7

V
Visual Explain 9-27
