
Front cover

IMS Version 8
Implementation Guide
A Technical Overview of the New Features
Explore IMSplex components and discover the new IMS architecture
Utilize your Java skills with IMS for Java and WebSphere support
Get familiar with all the new features

Jouko Jäntti
Henry Kiesslich
Roddy Munro
John Schlatweiler
Bill Stillwell

ibm.com/redbooks

International Technical Support Organization


IMS Version 8 Implementation Guide
A Technical Overview of the New Features
October 2002

SG24-6594-00

Note: Before using this information and the product it supports, read the information in Notices on
page xi.

First Edition (October 2002)


This edition applies to IMS Version 8 (program number 5655-C56) or later for use with the OS/390 or z/OS
Operating System.
Note: This book is based on a pre-GA version of a product and may not apply when the product becomes
generally available. We recommend that you consult the product documentation or follow-on versions of
this redbook for more current information.

© Copyright International Business Machines Corporation 2002. All rights reserved.


Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
The team that wrote this redbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Comments welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv

Part 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Chapter 1. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1 Introduction to the enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Availability and recoverability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 Database Recovery Control (DBRC) enhancements . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.2 Database Image Copy 2 enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.3 HALDB enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.4 Batch Resource Recovery Service (RRS) support . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.5 Remote Site Recovery (RSR) enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.6 Enhanced availability by using the Resource Manager (RM) . . . . . . . . . . . . . . . . . 7
1.2.7 Common Queue Server (CQS) enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.8 APPC and OTMA enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.9 APPC/IMS enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.10 IMS Online Recovery Service (ORS) support . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.11 System Log Data Set (SLDS) dynamic backout processing . . . . . . . . . . . . . . . . . 9
1.3 Performance and capacity enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3.1 Fast Path enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3.2 Parallel database processing enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3.3 IMS MSC FICON CTC support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3.4 Virtual storage constraint relief . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.4 Systems management enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4.1 BPE enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.4.2 Common Service Layer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4.3 Installation and configuration enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.4 Syntax Checker. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.4.5 Transaction trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.5 Application enablement enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.5.1 Java enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Part 2. IMS Version 8 base enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Chapter 2. Packaging and installing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.1 Product packaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.1.1 Installation changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.1.2 Changes in target and distribution data sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.1.3 SMP/E processing changes in IMS Version 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.1.4 User exits in IMS Version 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2 IVP changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2.1 Execution steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2.2 IMS Java IVP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

2.3 IMS system definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26


2.3.1 Changed minimum and default values for RECLNG in MSGQUEUE macro . . . . 26
2.4 New and obsolete execution parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26


Chapter 3. Syntax Checker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.2 Getting started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.3 Using the Syntax Checker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.3.1 Changing releases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3.2 Display options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.3.3 Save options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

Chapter 4. Database management enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.1 Database Image Copy 2 enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.1.1 Multiple DBDS and ADS copies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.1.2 Group name support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.1.3 Single output data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.1.4 Support for the DFSMSdss OPTIMIZE option . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.1.5 GENJCL support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.2 Parallel database processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.2.1 DBRC authorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.2.2 Full function database allocation, open and close processing . . . . . . . . . . . . . . . 55
4.2.3 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.3 Fast Path DEDB enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.3.1 DEDB support greater than 240 areas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.3.2 Nonrecoverable DEDBs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.3.3 Coupling Facility support for DEDB VSO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.3.4 Unused IOVF count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.4 Batch RRS support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.4.1 Supported environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.4.2 Activation and requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.5 Coordinated IMS/DB2 disaster recovery support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.5.1 XRC tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.5.2 Log synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.5.3 Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.5.4 Messages and log records changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.5.5 Coexistence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

Chapter 5. Database Recovery Control enhancements . . . . . . . . . . . . . . . . . . . . . . . . 69
5.1 Support of 16 MB RECON record size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.1.1 RECON record spanning segments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.1.2 Usage of alerts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.2 DBRC PRILOG compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.3 DBRC command authorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.3.1 Security support for DBRC commands and protected resources . . . . . . . . . . . . . 74
5.3.2 The resource name table DSPRNTBL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.3.3 How command authorization gets invoked . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.3.4 Supported environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.3.5 Usage of the DBRC command authorization exit (DSPDCAX0) . . . . . . . . . . . . . . 76
5.3.6 DBRC command authorization examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.4 Avoidance of certain DBRC abends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.5 Automatic RECON loss notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5.6 IMS version coexistence for DBRC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

Chapter 6. Transaction trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.1 Transaction trace (MVS component trace) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.1.1 How transaction trace works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.1.2 How to use transaction trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
6.1.3 Sample transaction trace output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

Chapter 7. APPC base enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
7.1 Dynamic LU 6.2 descriptor support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
7.1.1 Add a new LU 6.2 descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
7.1.2 Delete an LU 6.2 descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
7.2 CPU time limit for CPI-C driven transactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
7.3 Support for APPC outbound LU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

Chapter 8. Application enablement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
8.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
8.2 Java dependent regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
8.2.1 Persistent Reusable Java Virtual Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
8.2.2 Benefits of a JVM environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
8.2.3 Other IMS Java considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
8.2.4 DFSJMP and DFSJBP procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
8.2.5 JVMOPMAS and JVMOPWKR members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
8.2.6 ENVIRON= and DFSJVMAP members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
8.2.7 IMS system definition considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
8.2.8 PSBGEN considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
8.2.9 /DISPLAY examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
8.3 Java standards enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
8.3.1 Java result set types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
8.3.2 Java result set concurrency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
8.3.3 Batch updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
8.3.4 New SQL keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
8.4 JDBC access enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
8.5 Java Tooling enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
8.6 XML and IMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Chapter 9. Java enhancements for IMS and WebSphere . . . . . . . . . . . . . . . . . . . . . . 109
9.1 WebSphere 4.0.1 support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
9.2 J2EE architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
9.3 DataSource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
9.4 Enterprise Archive (.ear) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
9.5 Deploying the ear file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
9.5.1 Configure the WebSphere server region for IMS access . . . . . . . . . . . . . . . . . . 114
9.5.2 Obtain the WebSphere for z/OS System Administration tool . . . . . . . . . . . . . . . 117
9.5.3 Install an IMS JDBC Resource Adapter into a WebSphere server region . . . . . 118
9.5.4 Configure and deploy an instance of the IMS JDBC Resource Adapter . . . . . . 122
9.6 Configure and deploy an Enterprise Archive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
9.7 IVP for WebSphere for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
9.7.1 Untar the IVP Enterprise Archive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
9.7.2 Configure an IMS JDBC Resource Adapter instance for the IVP EJB . . . . . . . . 126
9.7.3 Import, deploy and export the IVP application . . . . . . . . . . . . . . . . . . . . . . . . . . 126
9.7.4 Deploy and configure the Enterprise Archive (imsjavaIVP.ear) . . . . . . . . . . . . . 127
9.7.5 Update the HTTP Server for access to the IVP Web application . . . . . . . . . . . . 130
9.8 Test the IVP application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
9.9 Error logging and tracing in WebSphere for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
9.9.1 Sample trace outputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

Part 3. IMS Version 8 Parallel Sysplex enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135


Chapter 10. Coupling Facility structure management . . . . . . . . . . . . . . . . . . . . . . . . . 137
10.1 System managed rebuild . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
10.2 Alter and autoalter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
10.3 System managed duplexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
10.3.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
10.3.2 Duplexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
10.3.3 Enabling duplexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
10.3.4 Disabling duplexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
10.4 Which structures support which features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

Chapter 11. Base Primitive Environment enhancements . . . . . . . . . . . . . . . . . . . . . . 143
11.1 Base Primitive Environment (BPE) enhancements . . . . . . . . . . . . . . . . . . . . . . . . . 144
11.2 New BPE address space initialization module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
11.3 User exits and statistics for BPE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
11.3.1 BPE configuration parameters member . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
11.3.2 BPE user exit list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
11.4 Displaying the BPE and CQS versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

Chapter 12. Shared queues support for APPC and OTMA synchronous messages 147
12.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
12.2 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
12.3 Migration considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
12.3.1 Synchronous messages and program-to-program switches . . . . . . . . . . . . . . 150
12.3.2 Error conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
12.3.3 Other miscellaneous migration considerations . . . . . . . . . . . . . . . . . . . . . . . . 151
12.3.4 Support considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

Part 4. Common Service Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153


Chapter 13. Common Service Layer (CSL) architecture . . . . . . . . . . . . . . . . . . . . . . . 155
13.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
13.1.1 The IMSplex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
13.1.2 Systems management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
13.1.3 Operations management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
13.1.4 Resource Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
13.2 Common Service Layer (CSL) architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
13.3 Structured Call Interface (SCI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
13.4 Operations Manager (OM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
13.4.1 Today . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
13.4.2 OM infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
13.4.3 OM clients and their roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
13.4.4 Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
13.4.5 User exits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
13.5 Resource Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
13.5.1 Resource management functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
13.5.2 Resource management infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
13.5.3 RM clients and their roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
13.5.4 Resource structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
13.5.5 Common Queue Server (CQS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
13.5.6 Resource Manager (RM) address space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
13.5.7 RM characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176

Chapter 14. Sysplex terminal management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
14.1 Sysplex terminal management objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
14.2 Sysplex terminal management environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
14.3 IMSplex resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
14.4 STM terms and concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
14.4.1 Resource type consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
14.4.2 Resource name uniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
14.4.3 Resource status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
14.4.4 Significant status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
14.4.5 Status recovery mode (SRM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
14.4.6 Status recoverability (RCVYxxxx) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
14.5 Enabling sysplex terminal management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
14.5.1 Setting SRM and RCVYxxxx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
14.5.2 Overriding SRM and RCVYxxxx defaults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
14.6 Ownership and affinities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
14.6.1 Resource ownership and RM affinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
14.6.2 VTAM generic resources affinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
14.6.3 Setting VGR affinity management responsibility . . . . . . . . . . . . . . . . . . . . . . . 193
14.6.4 VGR affinities and IMS Version 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
14.7 Resources and the resource structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
14.7.1 Resource structure components and characteristics . . . . . . . . . . . . . . . . . . . . 195
14.7.2 Resource entries in the resource structure . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
14.8 STM in action . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
14.8.1 Before the first IMS joins the IMSplex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
14.8.2 Start IMSplex address spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
14.8.3 Log on from a static NODE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
14.8.4 Logon from an ETO NODE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
14.8.5 Signon from an ETO NODE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
14.8.6 Commands that change significant status . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
14.8.7 Work which changes end-user significant status . . . . . . . . . . . . . . . . . . . . . . . 203
14.8.8 Commands which change end-user status . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
14.8.9 Session termination with significant status (not IMS failure) . . . . . . . . . . . . . . 204
14.8.10 Logon from NODE which already exists in resource structure . . . . . . . . . . . 204
14.8.11 IMS failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
14.8.12 IMS emergency restart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
14.8.13 Recovering significant status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
14.8.14 Recovering conversations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
14.8.15 Recovering Fast Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
14.8.16 Recovering STSN sequence numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
14.8.17 Summary of STM in action . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
14.9 Resource structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
14.9.1 Defining the resource structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
14.9.2 Managing the resource structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
14.9.3 Structure failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
14.9.4 Loss of connectivity to a structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
14.9.5 SCI, RM, CQS, or structure failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
14.10 Miscellaneous other considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
14.10.1 IMS exits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
14.10.2 Global callable services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
14.10.3 Extended Recovery Facility (XRF) considerations . . . . . . . . . . . . . . . . . . . . . 213
14.10.4 Rapid Network Reconnect (RNR) considerations . . . . . . . . . . . . . . . . . . . . . 213
14.11 Summary of sysplex terminal management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214

Chapter 15. Global online change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


15.1 Online change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.1.1 Review of local online change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.1.2 Overview of global online change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.2 Setting up the global online change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.2.1 Preparation for global online change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.2.2 Overview of execution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.2.3 OLCSTAT data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.2.4 DFSUOLC0 functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.2.5 DFSUOLC procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.2.6 Initializing OLCSTAT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.2.7 OLC copy utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.3 Global online change processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.3.1 Prepare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.3.2 Commit phase 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.3.3 Commit phase 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.3.4 Commit phase 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.4 Terminate command usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.5 Status display commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.5.1 QUERY MEMBER TYPE(IMS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.5.2 QUERY OLC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.5.3 /DISPLAY MODIFY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.6 Adding and deleting IMS subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.7 Inactive subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.8 Resource consistency checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.9 Migration and fallback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.10 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Chapter 16. Single point of control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


16.1 Introduction to SPOC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
16.1.1 Command behaviors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
16.2 TSO SPOC application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
16.2.1 Getting started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
16.2.2 Preferences. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
16.2.3 IMSplex and classic command displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
16.2.4 Defining groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
16.2.5 Defining command shortcuts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
16.2.6 Saving and printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
16.2.7 Sorting and searching results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
16.2.8 Command status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
16.2.9 Leaving the SPOC application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Chapter 17. User written interface to Operations Manager. . . . . . . . . . . . . . . . . . . . .


17.1 Introduction to Operations Manager user interface . . . . . . . . . . . . . . . . . . . . . . . . . .
17.2 REXX SPOC example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
17.2.1 The REXX SPOC environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
17.2.2 Sample REXX API program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Chapter 18. Automatic RECON loss notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


18.1 The benefits of automatic RECON loss notification . . . . . . . . . . . . . . . . . . . . . . . . .
18.2 Getting started with automatic RECON loss notification . . . . . . . . . . . . . . . . . . . . . .
18.2.1 Two choices to enable ARLN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
18.2.2 How to migrate and fallback from automatic RECON loss notification . . . . . . .
18.3 DSPSCIX0 details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


IMS Version 8 Implementation Guide

18.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270


Chapter 19. Language Environment (LE) dynamic run time options . . . . . . . . . . . . .
19.1 LE overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19.2 Defining LE run time options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19.3 Dynamic run time option support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19.4 DFSCGxxx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19.5 New commands and enhanced DL/I INQY call . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19.5.1 Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19.5.2 Delete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19.5.3 Query . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19.5.4 DFSBXITA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19.5.5 DL/I INQY LERUNOPT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19.5.6 Migration considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19.5.7 Software requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19.5.8 LE option recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Chapter 20. Common Service Layer configuration and operation . . . . . . . . . . . . . . .


20.1 Setting up a CSL environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20.1.1 Basic rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20.1.2 Base primitive environment (BPE) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20.1.3 Update the CFRM couple data set (CDS). . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20.1.4 Set up the Structured Call Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20.1.5 Set up the Operations Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20.1.6 CQS procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20.1.7 Set up the Resource Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20.1.8 Set up IMS PROCLIB members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20.1.9 Set up TSO logon procedure for SPOC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20.2 CSL operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20.2.1 The CSL execution environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20.2.2 Starting IMSplex address spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20.2.3 Shutting down IMSplex address spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20.3 IMS commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20.3.1 IMSplex commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20.3.2 Classic commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
20.3.3 CSL operations summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Part 5. Appendixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305


Appendix A. Hardware and software requirements . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.1 Hardware requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.1.1 Processors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.1.2 System console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.1.3 Tape units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.1.4 Direct access devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.1.5 Multiple systems coupling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.1.6 Terminals supported by IMS Version 8. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.1.7 Sysplex data sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.1.8 Shared message queues and shared EMH queues . . . . . . . . . . . . . . . . . . . . . .
A.1.9 DEDB shared VSO enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.1.10 Remote Site Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.2 Software requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.2.1 Data sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.2.2 DBRC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

307
308
308
308
308
308
308
308
309
309
309
309
310
311
311


A.2.3 IMS Java. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


A.2.4 Small programming enhancements (SPEs) . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.2.5 Sysplex data sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.2.6 Transaction trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A.3 IBM IMS Tools for IMS Version 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Appendix B. Resource structure sizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


The resource structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Resource types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Resource number. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Data element number. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Resource table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Adjusting the size of the Resource Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Appendix C. Additional material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Locating the Web material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Using the Web material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
How to use the Web material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323


Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . .
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Referenced Web sites . . . . . . . . . . . . . . . . . . . . . . . . . . .
How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . .
IBM Redbooks collections. . . . . . . . . . . . . . . . . . . . . .


Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329


Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrates programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and
distribute these sample programs in any form without payment to IBM for the purposes of developing, using,
marketing, or distributing application programs conforming to IBM's application programming interfaces.

Copyright IBM Corp. 2002. All rights reserved.


Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX
AT
C/MVS
CICS
CUA
DATABASE 2
DB2
DFS
DFSMSdss
ES/9000
ESCON
FICON

IBM
IMS
IMS/ESA
Language Environment
MQSeries
MVS
NetView
OS/390
Parallel Sysplex
RACF
Redbooks

Redbooks(logo)
RMF
S/390
Sequent
SP
System/370
VM/ESA
VSE/ESA
VTAM
WebSphere
z/OS
zSeries

The following terms are trademarks of International Business Machines Corporation and Lotus Development
Corporation in the United States, other countries, or both:
Lotus

Word Pro

The following terms are trademarks of other companies:


ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United
States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems,
Inc. in the United States, other countries, or both.
C-bus is a trademark of Corollary, Inc. in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic
Transaction LLC.
Other company, product, and service names may be trademarks or service marks of others.


IMS Version 8 Implementation Guide

Preface
In this IBM Redbook, we describe the new features and functions in IMS Version 8. We
document the tasks necessary to exploit the features, and identify migration, coexistence, and
fallback considerations. We also identify specific hardware and software requirements that
are needed to exploit certain enhancements.
First we provide an overview, where we have grouped the various enhancements and their
discussion into the categories availability and recoverability, performance and capacity,
systems management, and application enablement. Then we have more detailed chapters for
describing the individual enhancements.
The base enhancements part of the book describes the base product enhancements that
apply to all users migrating to IMS Version 8. The Parallel Sysplex enhancements part of the
book describes enhancements in IMS Version 8 that apply to both existing users of IMS
Version 6 or IMS Version 7 in a Parallel Sysplex environment and users that are considering
sysplex functionality.
The Common Service Layer part documents the Common Service Layer (CSL), new in IMS
Version 8, which is the next step in IMS Parallel Sysplex evolution. The CSL enables IMS
systems to operate in unison in an OS/390 Parallel Sysplex. The CSL components provide
the infrastructure for an IMSplex.

The team that wrote this redbook


This redbook was produced by a team of specialists from around the world working at the
International Technical Support Organization, San Jose Center.
Jouko Jäntti is a Project Leader specializing in IMS with the IBM International Technical
Support Organization, San Jose Center. He holds a bachelor's degree in Business
Information Technology from Helsinki Business Polytechnic, Finland. Before joining the ITSO
in September 2001, Jouko worked as an Advisory IT Specialist at IBM Global Services,
Finland. Jouko has been working on several e-business projects with customers as a
specialist in IMS, WebSphere, and UNIX on the OS/390 platform. He has also been
responsible for the local IMS support for Finnish IMS customers. Prior to joining IBM in 1997,
he worked as a systems programmer and transaction management specialist in a large
Finnish bank for 13 years, and was responsible for the bank's IMS systems.
Henry Kiesslich is an IT Specialist in IMS with IBM Global Services Germany where he is a
member of the IMS Technical Support team. He has 12 years of experience in the IT field.
He holds a bachelor's degree in Information Technology. His areas of expertise include CICS,
IMS, and OS/390. Before joining IBM in 1999, he worked as a systems programmer for
a computer and systems center of a large steel mill and later for a large German bank for 5
years, responsible for IMS systems and IMS related projects.
Roddy Munro is a Project Manager for the IMS Version 8 QPP for EMEA in the Product
Introduction Centre (PIC) in the UK. He has 22 years of experience in the IMS field and has
worked at IBM for 28 years. His areas of expertise include IMS, CICS, DB2, and MQSeries.
He holds a bachelor's degree in Geology, and joined IBM initially as an IMS application
programmer. Subsequently he has worked as a systems programmer, a technical planner for
CICS with IMS and DB2, and as a CICS developer. Prior to joining the PIC he was the DB/DC


systems programming group leader at the Hursley lab, providing test systems to the
development projects on-site.
John Schlatweiler is a Systems Manager with SBC Services, Inc. in St. Louis Missouri
where he is a member of the IMS Support Group's IMS Development Team. He has 17 years
of experience in the IT field. His areas of expertise include CICS, DB2, IMS, and OS/390.
John has worked as an applications developer, application architect, technical leader,
database administrator, and systems programmer. Prior to joining SBC, John was
responsible for database and transaction processing systems in a large banking environment.
Bill Stillwell is a Senior Consulting I/T Specialist and has been providing technical support
and consulting services to IMS customers as a member of the Dallas Systems Center for 20
years. During that time, he developed expertise in application and database design, IMS
performance, Fast Path, data sharing, shared queues, planning for IMS Parallel Sysplex
exploitation and migration, DBRC, and database control (DBCTL). He also develops and
teaches IBM Education and Training courses, and is a regular speaker at the annual IMS
Technical Conferences in the United States and Europe.
Thanks to the following people for their contributions to this project:
Yvonne Lyon
Deanna Polm
International Technical Support Organization, San Jose Center
Rich Conway
Bob Haimowitz
International Technical Support Organization, Poughkeepsie Center
Barbara Klein
Jim Bahls
Thomas Bridges
Kyle Charlet
Richard Cooper
Vince Henley
Rose Levin
Bob Love
Tom Morrison
Bruce Naylor
Khiet Nguyen
David Ormsby
Bovorit Pibulsonggram
Karen Ranson
Patrick Schroeck
Richard Schneider
Sandy Stoob
Don Terry
Judy Tse
Pedro Vera
Jack Wiedlin
IBM Silicon Valley Laboratory
Alonia Lonnie Coleman
IBM Dallas Systems Center, USA
Alan Cooper
IBM EMEA Technical Sales, UK



Alison Coughtrie
IBM EMEA Product Introduction Centre, UK

Become a published author


Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with
specific products or solutions, while getting hands-on experience with leading-edge
technologies. You'll team with IBM technical professionals, Business Partners and/or
customers.
Your efforts will help increase product acceptance and customer satisfaction. As a bonus,
you'll develop a network of contacts in IBM development labs, and increase your productivity
and marketability.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us!
We want our Redbooks to be as helpful as possible. Send us your comments about this or
other Redbooks in one of the following ways:
Use the online Contact us review redbook form found at:
ibm.com/redbooks

Send your comments in an Internet note to:


redbook@us.ibm.com

Mail your comments to:


IBM Corporation, International Technical Support Organization
Dept. QXXE Building 80-E2
650 Harry Road
San Jose, California 95120-6099



Part 1


Introduction
In this part of the book we provide an introduction to the enhancements included in IMS
Version 8. This part consists of an overview chapter that provides a summary of the new
features and enhancements in preparation for a more detailed review of each major item in
the other parts of the book.


Chapter 1. Overview
In this chapter we provide an overview of the general structure for the materials to be
presented in this IBM Redbook. We introduce the enhancements made in Information
Management System (IMS) Version 8 at a high level. All the major enhancements are
then fully discussed in their own chapters later in this book. The following items are only
described in this overview chapter:
IBM IMS Online Recovery Service (ORS) product support
System log data set (SLDS) dynamic backout processing
Virtual storage constraint relief (VSCR)
In the overview chapter, we have grouped the various enhancements and their discussion
into the categories of availability and recoverability, performance and capacity, systems
management, and application enablement.


1.1 Introduction to the enhancements


In general terms, IMS development is driven primarily by user requirements within the context
of maintaining compatibility with existing IMS applications and data and ensuring absolute
integrity of those applications and data. Additionally, IMS development is directed toward the
early exploitation of new facilities in the OS/390 and z/OS environments and supporting
hardware to continue to reduce the overall cost of computing.
Since the first IMS system, which was an inventory tracking system for the Apollo space
program in the late 1960s, IMS has successfully provided features that have made it one of
the leading transaction and database management systems.
Still, after more than 30 years, the strengths that have been in place from the beginning are
second to none. Additionally, IMS has been able to evolve through the decades to take
advantage of changes in technology. In addition to the fact that most corporate data is
managed by IMS today, it has also been a foundation for a new generation of Web-based,
high volume workload applications.
IMS Version 8 introduces significant enhancements in availability and recovery, performance
and capacity, application enablement and systems management for IMS systems.

1.2 Availability and recoverability


A major long term goal of IMS developers is the achievement of continuous availability of IMS
applications and data. This aim includes improvements in the reliability, serviceability, and
maintainability of IMS itself. It also includes the removal of requirements for planned outages
for system, application, and database maintenance, as well as the prevention, bypassing, or
elimination of unplanned outages.

1.2.1 Database Recovery Control (DBRC) enhancements


These are the major Database Recovery Control (DBRC) changes for IMS Version 8:

16 megabyte maximum recovery control (RECON) data set record size


PRILOG compression enhancement
DBRC command authorization support
Automatic RECON loss notification
Elimination of several DBRC and IMS abends
New DBRC batch commands for HALDB
Increased maximum values for DBRC groups

16 MB RECON record size


Prior to IMS Version 8, the record size for RECON data sets was determined by the type of
DASD on which the RECONs were defined, up to a maximum of approximately 800 kilobytes.
As of IMS Version 8, the maximum RECON record size is 16 MB. This eliminates unwanted
shutdowns of IMS systems when a RECON record (usually the PRILOG record) grows
beyond the maximum record size.

PRILOG compression
The efficiency of PRILOG record compression has been improved. Compression is now
attempted more frequently than prior to IMS Version 8: whenever an online log data set
(OLDS) archive job is run or, for Remote Site Recovery (RSR), when the tracking log data
set is opened.

IMS Version 8 Implementation Guide

DBRC command authorization support


Prior to IMS Version 8, any user who was authorized to access the RECON data set had the
authority to enter any DBRC command. As of IMS Version 8, you can use RACF (or an
equivalent product), a user-written exit, or both to control who is authorized to issue DBRC
commands. When a DBRC command is issued from the DBRC batch utility (DSPURX00),
DBRC verifies that the user is authorized to issue the command.
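As an illustrative sketch only (not taken from this book), the setup involves two steps: setting the command authorization option in the RECON, and defining matching FACILITY-class security profiles. The high-level qualifier RCNPRD, the group DBAGRP, and all data set names below are assumptions, and the resource-name pattern shown (qualifier plus command verb and modifier) should be verified against the DBRC documentation for your release.

```jcl
//* Hypothetical sketch: turn on SAF checking for DBRC commands, then
//* define a FACILITY profile that protects DELETE.DB. All names
//* (RCNPRD, DBAGRP, IMS.* data sets) are illustrative assumptions.
//DBRCAUT EXEC PGM=DSPURX00
//STEPLIB  DD DSN=IMS.SDFSRESL,DISP=SHR
//SYSPRINT DD SYSOUT=*
//RECON1   DD DSN=IMS.RECON1,DISP=SHR
//RECON2   DD DSN=IMS.RECON2,DISP=SHR
//RECON3   DD DSN=IMS.RECON3,DISP=SHR
//SYSIN    DD *
 CHANGE.RECON CMDAUTH(SAF,RCNPRD)
/*
//RACFDEF EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  RDEFINE FACILITY RCNPRD.DELETE.DB UACC(NONE)
  PERMIT RCNPRD.DELETE.DB CLASS(FACILITY) ID(DBAGRP) ACCESS(READ)
  SETROPTS RACLIST(FACILITY) REFRESH
/*
```

With profiles like these in place, DBRC builds a resource name from the configured qualifier and the command being issued, and asks the SAF interface whether the user may proceed.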

Automatic RECON loss notification


When an I/O error occurs on a RECON data set and a spare data set is available, the
instance of DBRC that noticed the error copies the good RECON to the spare, activates the
spare, and deallocates the original RECON data set. Prior to IMS Version 8, the other DBRC
instances sharing the same RECON data sets were not aware of the reconfiguration until
their next access to the RECONs, and continued to hold an allocation to the original
(discarded) RECON. The original RECON could not be deleted and redefined until all DBRC
instances had deallocated it, which could potentially take a long time. As of IMS Version 8,
the first DBRC now automatically notifies the other DBRCs about the reconfiguration and the
discarded RECON gets deallocated. The bad RECON data set can then be deleted and
redefined as the spare.
This enhancement requires the usage of the Structured Call Interface (SCI) and thus it is
described in detail in the Common Service Layer (CSL) part of the book. For more information
on the automatic RECON loss notification (ARLN) feature, refer to Chapter 18, Automatic
RECON loss notification on page 265.

Elimination of DBRC and IMS abends


There are three areas that caused abends in previous releases and no longer do so under
IMS Version 8:
Authorization processing: DBRC will no longer abend if the SUBSYS record becomes larger
than the RECON physical record size. With the support of RECON records up to 16
megabytes in size, the SUBSYS record is written as multiple RECON record segments.

Database I/O error: DBRC will no longer abend if recording an extended error queue
element (EEQE) causes the database data set record to exceed the RECON physical record
size.

Deallocation processing: DBRC will not abend during deallocation processing if the ALLOC
record is not found or if the ALLOC record already has a deallocation time. Instead of
abending, error messages are issued, a dump is taken, and the status for the database or
area is set to prohibit further authorization.

New DBRC batch commands for HALDB

The existing DBRC batch commands DELETE.DB, CHANGE.DB, and INIT.DB have been
modified, and the new commands DELETE.PART and CHANGE.PART are provided for
HALDBs to support the deletion and change of a HALDB or one of its partitions. The INIT.DB
command also allows a HIKEY value to be used even though the database is defined with a
partition selection exit. This enhancement is also available to IMS Version 7 users through
APAR PQ52858.
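As an illustration of the new partition-level commands (the HALDB master name PARTMAST
and partition name PART0001 used here are hypothetical, and the complete keyword set is
described in the DBRC command reference), a single partition could be removed from the
RECONs with:

   DELETE.PART DBD(PARTMAST) PART(PART0001)

CHANGE.PART takes the same DBD and PART operands, followed by the partition
attributes that are to be changed.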

Increased maximum values for DBRC groups

The maximum number of members allowed in change accumulation (CA) groups and DBDS
groups has been increased from 2000 to 32767.

Chapter 1. Overview

For details on DBRC enhancements applicable to all operating environments, see Chapter 5,
Database Recovery Control enhancements on page 69. For more information on Automatic
RECON loss notification (ARLN), refer to Chapter 18, Automatic RECON loss notification on
page 265.

1.2.2 Database Image Copy 2 enhancements


The Database Image Copy 2 (DFSUDMT0) utility has been enhanced to:
- Image copy multiple database data sets (DBDSs) during a single execution of the utility.
- Accept the specification of database group names.
- Allow the user to specify the DFSMSdss option OPTIMIZE for better performance.
- Allow the user to specify a new option, SAMEDS, for creating multiple image copies in the
  same output data set.
- Issue an image copy complete notification by group or database name.

For more information on Database Image Copy 2 enhancements, see 4.1, Database Image
Copy 2 enhancements on page 48.

1.2.3 HALDB enhancements


Several enhancements have been made to improve the usability of the High Availability
Large Database (HALDB) feature. The enhancements listed here are also available to IMS
Version 7 users through the service process. These enhancements include:
- Enhanced and new DBRC batch commands for changing and deleting HALDBs and
  partitions (see 1.2.1, Database Recovery Control (DBRC) enhancements on page 4)
- The ability to restrict DL/I calls to a single partition
- The ability to bypass the creation of secondary indexes during load processing

Restricting DL/I calls to a single partition


Prior to this enhancement, all DL/I calls made against a database PCB were allowed to access
any partition in a multiple partition HALDB. This enhancement allows the user to indicate a
particular partition to which the DL/I calls are to be restricted in batch or a BMP. This can also
be done for a secondary index that has multiple partitions, when that index is either being
accessed as a database, or used with DL/I calls via PROCSEQ.
This is accomplished via a new HALDB control statement supplied through a new DD
statement with the ddname DFSHALDB. This DD statement must be provided in the JCL for
the IMS batch job or batch message processing (BMP) dependent region. The syntax of the
new control statement is:
HALDB PCB=(nnn,pppppppp)

In this statement, nnn is the relative number of the DB PCB (required) and pppppppp is the
partition name (required). This enhancement is also available to IMS Version 7 users through
APAR PQ57313.
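As a sketch of how this might look in practice (the step, program, PSB, and partition names
are hypothetical), the control statement is supplied through the DFSHALDB DD in the BMP or
batch region JCL:

   //BMP1     EXEC IMSBATCH,MBR=MYPGM,PSB=MYPSB
   //DFSHALDB DD *
   HALDB PCB=(3,PART0001)
   /*

With this statement, DL/I calls issued through the third database PCB of PSB MYPSB are
restricted to partition PART0001.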

Bypassing the creation of secondary index during the load processing


Prior to this enhancement, for secondary indexes that are defined for a HALDB, the index
entries were always created during the initial load of the database. This enhancement
provides the flexibility to bypass the creation of the secondary index during load processing,
and then build it once the load has completed, thereby reducing the elapsed time that is
required for the load. A new parameter, BLDSNDX=YES or BLDSNDX=NO, has been added
to the OPTIONS statement in the DFSVSAMP member for this purpose. If BLDSNDX=YES is
coded, or the parameter is omitted, secondary index entries are built during initial load. If
BLDSNDX=NO is coded, secondary index entries are not built during initial load. This
enhancement is also available to IMS Version 7 users through APAR PQ55840.
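For example, to suppress secondary index building during an initial load, the DFSVSAMP
member used by the load job might contain an OPTIONS statement such as the following
(any other buffer pool statements in the member are unaffected):

   OPTIONS,BLDSNDX=NO

The secondary index must then be built separately once the load has completed.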

IMS Version 8 Implementation Guide

1.2.4 Batch Resource Recovery Service (RRS) support


IMS batch programs can now use the operating system's Resource Recovery Services (RRS)
to enable two-phase commit processing. This support includes the IMS DB, DB2, and
MQSeries resource managers.
For more information on the enhancements to batch Resource Recovery Services support,
see 4.4, Batch RRS support on page 61.

1.2.5 Remote Site Recovery (RSR) enhancement


The IMS Remote Site Recovery (RSR) function has been enhanced to support coordinated
disaster recovery for IMS and DB2.
For more information on Remote Site Recovery enhancements, see 4.5, Coordinated
IMS/DB2 disaster recovery support on page 62.

1.2.6 Enhanced availability by using the Resource Manager (RM)


For IMS Version 8, IMS Transaction Manager (TM) has been enhanced to use the new
Resource Manager (RM) to maintain IMS resource information in a sysplex environment. By
having the resource information available to other IMSs in the sysplex, the following is
achievable:

- Resume work for VTAM terminals and users if their local IMS fails
- Eliminate VTAM generic resources terminal affinities
- Provide resource type consistency
- Provide name uniqueness
- Provide global callable services for NODE, LTERM, and user resources

For more information on Resource Manager as part of the CSL architecture, see Chapter 13,
Common Service Layer (CSL) architecture on page 155.
For more detailed information on resource structures and Resource Manager, see
Chapter 14, Sysplex terminal management on page 177.
For Resource Manager and CSL configuration information see Chapter 20, Common Service
Layer configuration and operation on page 289.

1.2.7 Common Queue Server (CQS) enhancements


CQS has been enhanced to support Resource Manager access to the new (optional)
resource structure. Resource Manager (RM) is a new address space introduced in IMS
Version 8; it uses CQS to access a new Coupling Facility list structure, called a resource
structure, to maintain global resource information for IMSplex resources.
CQS also exploits the enhancements made available to Coupling Facility list structures in
recent releases of OS/390 and z/OS. These enhancements include:

- System managed rebuild of the structures
- Structure alter and autoalter
- System managed duplexing of the structures
- Structure full monitoring capability

In addition to the functional enhancements for IMS Version 8, the Common Queue Server
(CQS) information is now in a separate book from that of BPE information. For IMS Version 8,
you will find all the information pertaining to CQS in the IMS Version 8: Common Queue
Server Guide and Reference, SC27-1292.

For more information on Coupling Facility structure processing, see Chapter 10, Coupling
Facility structure management on page 137.
For more information on CQS as part of the CSL architecture, see Chapter 13, Common
Service Layer (CSL) architecture on page 155. For more detailed information on resource
structures and Resource Manager, see Chapter 14, Sysplex terminal management on
page 177.

1.2.8 APPC and OTMA enhancements


Prior to IMS Version 8, implicit synchronous APPC and OTMA messages could only be
processed by the front-end IMS in a shared queue environment. In IMS Version 8, all implicit
APPC and OTMA transactions are eligible for sysplex-wide processing.
For more information on APPC and OTMA enhancements, see Chapter 12, Shared queues
support for APPC and OTMA synchronous messages on page 147.

1.2.9 APPC/IMS enhancements


IMS Version 8 includes several enhancements for the management of APPC/IMS resources:
- A new parameter (CPUTIME) has been added to the TP_Profile data set (which is
  maintained by the operating system) to specify the number of CPU seconds that a CPI-C
  program is allowed to run before being terminated. This limits the time that resources are
  held due to a possible error in the program causing it to loop endlessly.
- LU 6.2 descriptors can now be added or deleted dynamically using the /START and
  /DELETE commands.
- The /CHANGE command has been enhanced to allow you to change the outbound LU
  using the new OUTBND keyword.
- A new OUTBND= parameter has been added to IMS PROCLIB member DFSDCxxx.
For more information on the APPC/IMS enhancements, see Chapter 7, APPC base
enhancements on page 89.

1.2.10 IMS Online Recovery Service (ORS) support


IMS Online Recovery Service (ORS) is a separately priced product which provides a
database recovery process in an IMS online system. IMS dynamically allocates all of the
required data sets to perform the recovery. ORS utilizes IMS commands as opposed to
DBRC commands and generated batch recovery job streams. The IMS online system uses a
dependent address space to restore the image copy and change accumulation files onto the
database(s). Once these have been processed, IMS ORS reads log data sets and passes a
recovery stream to the IMS control region address space to update the databases.
ORS support has been enhanced in IMS Version 8, and the enhancements have been made
available to IMS Version 7 through the service process. IMS ORS accepts image copy data
sets created by all releases of the IMS Database Image Copy 2 utility. ORS is now able to:

- Process IC2 SAMEDS image copies
- Utilize compressed image copies produced by the IBM IMS Image Copy Extensions product
- Use virtual tape caching
- Send messages to the IMS MTO (requires IMS Version 8)

To learn more about Online Recovery Service, please see the IBM Redbook A DBA's View of
IMS Online Recovery Service, SG24-6112.


1.2.11 System Log Data Set (SLDS) dynamic backout processing


With IMS Version 8, the online system has the ability to read the system log data set (SLDS)
for backout processing in the event that required backout records are no longer available on
the online log data set (OLDS). Previous releases of IMS could only read the SLDS during
emergency restart. This enhancement is also being made available to IMS Version 7 through
the service process.
Previously, if dynamic backout required a record that was only available on an SLDS, a
backout failure occurred and the user was required to run the Batch Backout utility. This was
often the result of the abend of a BMP that had run a very long time without taking a
checkpoint. Records needed for its backout were written to OLDS that were subsequently
archived and reused.
With this enhancement, if a log record is needed for a dynamic backout and the log record is
no longer available on an OLDS, IMS will dynamically allocate the required SLDSs. The
SLDSs are allocated in reverse time sequence. The most recent SLDS containing records no
longer available on an OLDS will be allocated first. IMS reads and saves the log records from
this SLDS in a data space. If this SLDS does not contain all of the required records, additional
SLDSs are allocated, read, and saved. This continues until all required records have been
read.
This enhancement also applies to shared queues. If the message queue structure and its
overflow structure becomes full, committed output messages are logged but not written to the
structures. These messages were created before the structures became full, but not
committed until after the structure full condition occurred. If space in the structures is
available at a later system checkpoint, these messages are written to the structures. It is
necessary to access log records to complete this. If the log records are no longer available
from the OLDS, IMS Version 8 will dynamically allocate the required SLDSs to acquire the
needed records.
Commands /START SLDSREAD and /STOP SLDSREAD have been added to enable and
disable the SLDS read function. When disabled, IMS acts as it did in prior releases.
Disablement during use of the function will cause failure of the current read request and
rejection of subsequent requests. Any outstanding mount request is aborted by abnormal
termination of the subtask that is used to open the SLDS and all dataspace storage is
released. Externally, this results in DFS982I for backouts, for instance. /STOP is
asynchronous, producing the 'command in progress' response.
The SLDS read function is disabled until restart completes and enabled thereafter until a
/STOP command disables it again.
The output from the /DISPLAY OLDS command has been modified to indicate the state of the
SLDS read function. The output will contain the string 'SLDSREAD ON' if the function is
enabled or 'SLDSREAD OFF' otherwise. The new string will occur on the line after the WADS
DDNAME list. Example 1-1 shows the DISPLAY command output with the SLDSREAD ON
string.
Example 1-1   /DISPLAY OLDS command output showing SLDSREAD ON
DFS000I   WADS = *DFSWADS0 DFSWADS1
DFS000I   SLDSREAD ON

Example 1-2 shows the messages that are issued when an SLDS is required for backout.
Example 1-2   Messages issued when an SLDS is required for backout
DFS000I   WADS = *DFSWADS0 DFSWADS1
DFS000I   SLDS REQUIRED FOR BACKOUT - RGN 00001, SLDSREAD ON


The interface to DBRC has been enhanced, allowing SLDS read to allocate the exact SLDS
requested on the first read request. Subsequent log allocations (within the same set of
requests) will still need to be contiguous with previously allocated logs. This eliminates much
dataspace use when reads are for very old log data, which may happen if SLDS read is
turned off until a time of less contention for storage. Turning SLDS read on later and issuing
/START DB to redrive backout will not cause IMS to read unnecessary log data between point
of failure and the current OLDS.
The DBRC enhancement added field SLDLBKID to the PRISLD record in the RECON data
set. SLDLBKID is not maintained by RSR, so the SLDS read function will not operate
successfully on log data migrated from a remote site by RSR. After a remote takeover,
deferred or restartable backouts may require the Batch Backout utility.

1.3 Performance and capacity enhancements


As usual with a new release of IMS, IMS Version 8 also includes improvements in the area of
performance. These improvements lower the cost of processing a given workload, both
through internal improvements in IMS and through exploitation of new system facilities.
Frequently this aspect of performance is measured in terms of the cost to perform a given
function, which may include both direct processing costs and indirect costs such as the
time and manpower needed to perform system, application, and database maintenance.
Additionally, IMS Version 8 continues to remove constraints and limitations on the capacities
of an IMS system.

1.3.1 Fast Path enhancements


Fast Path has been enhanced for IMS Version 8 as follows:
- Shared VSO databases are also able to exploit the following Coupling Facility features:
  - System managed rebuild of a VSO structure
  - Automatic altering of a VSO structure size
  - System managed duplexing of VSO structures
- DEDBs can now have up to 2048 areas.
- DEDBs can now be defined as nonrecoverable in DBRC.
- The IOVFI parameter in DFSPBxxx sets the update interval for the unused IOVF count.
For details on Fast Path enhancements applicable to all operating environments, see 4.3,
Fast Path DEDB enhancements on page 56.
For more information on Fast Path enhancements specific to Parallel Sysplex operations, see
Chapter 10, Coupling Facility structure management on page 137.

1.3.2 Parallel database processing enhancement


IMS Database Manager now performs the tasks of database authorization, allocation, and
open and close processing in parallel using multiple OS/390 threads. For systems with a
large number of databases, this enhancement can reduce the amount of time required to
reopen the databases during restart processing and return the system to a steady state.
For more information on parallel database processing enhancements, see 4.2, Parallel
database processing on page 54.


1.3.3 IMS MSC FICON CTC support


The multiple systems coupling (MSC) feature of IMS provides reliable, high bandwidth
host-to-host communications between IMS systems. One choice for the physical host-to-host
connection is to define the MSC to utilize the channel-to-channel (CTC) hardware support.
On zSeries 900 processors, CTC bandwidth can be enhanced by implementing the Fibre
Connection (FICON) channel support for CTCs. FICON support can be significantly faster
than the Enterprise Systems Connection (ESCON) support for large blocks of transferred
data. It is estimated that one FICON CHPID can do the work of a number of ESCON CHPIDs.
This increased bandwidth is the result of faster data transfer rates, I/O rates, and CCW and
data pipelining. The distance between hosts can also be increased.
The IMS MSC FICON CTC support increases the volume of IMS messages that can be sent
between IMS systems when using the IMS MSC facility. This enhancement is activated by
changing the IMS online procedure so that the DD cards specifying the CTC address for the
link specify the CTC FICON address. No changes are needed to the IMS system definition to
convert existing CTC links to FICON. z/OS Version 1 Release 2 is needed for IMS MSC
FICON CTC support. This capability is also enabled for IMS Version 7 by APAR PQ51769.

1.3.4 Virtual storage constraint relief


In order to relieve common system area (CSA) and private (PVT) virtual storage constraints,
IMS Version 8 is using less private and common storage below the 16 MB line. The system
Program Specification Tables (PSTs) and other IMS modules and control blocks have been
moved to private and common storage above the 16 MB line.
Reducing CSA utilization and providing virtual storage constraint relief (VSCR) frees
additional storage below the 16MB line for user applications and vendor program products
that run in the same system as IMS. Additionally, CSA VSCR enables single-system growth
and the capability to add additional IMS systems to the operating system environment.
VSCR has been provided in the following areas:
- The Checkpoint Processor (DFSRCP00) load module has been moved above the 16MB line.
- The Restart Processor (DFSRCP00) load module has been moved above the 16MB line.
- Fast Database Recovery (FDBR) modules were moved from CSA to private storage, or
  from extended CSA to extended private storage. The CSA/ECSA storage relief is received
  for each FDBR region on the system.
- LPST pools (PSTs for IMS internal use in private storage) have been moved above the
  16MB line.
- QSAV areas were moved from CSA to extended CSA (ECSA).
- Asynchronous Work Elements (AWEs) were moved from CSA to ECSA.
- A portion of each log buffer prefix located in CSA was moved to ECSA. The following
  formula can be used to calculate the total number of bytes of log buffer prefixes moved
  from CSA to ECSA: the number of log buffers (the BUFFNO= specification on the
  OLDSDEF statement of the DFSVSMxx member) times 176 (BUFFNO * 176).
  If dual logging is used, the result is the number of log buffers multiplied by 352.
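As a worked example of the formula above (the BUFFNO value here is illustrative only), an
IMS system with BUFFNO=100 and single logging moves 100 * 176 = 17,600 bytes of log
buffer prefixes from CSA to ECSA; with dual logging the amount doubles to
100 * 352 = 35,200 bytes (about 34 KB).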
Table 1-1 summarizes the estimated virtual storage constraint relief provided in IMS Version
8.


Note: The QSAV and AWE storage pools can expand during high volume processing.
Thus, the CSA saving shown in the following table is a minimum value of 8K (for QSAV)
and 12K (for AWE) per IMS subsystem. During high volume processing, these pools could
expand significantly, exhausting CSA and causing problems. Moving these areas above
the line avoids such CSA problems, thereby helping prevent operating system crashes. So
while the base system with no activity saves 8K and 12K, the actual savings in the
situation described previously would be greater.
Table 1-1   Estimated virtual storage constraint relief provided by IMS Version 8

  Item                            Moved from   Moved from          Moved from   Moved from
                                  CSA to       CSA to ECSA         ECSA to      private to
                                  private                          Eprivate     Eprivate
  ---------------------------------------------------------------------------------------
  Checkpoint processor                                                          54 KB
  Restart processor                                                             286 KB
  FDBR modules                    378 KB                           420 KB
  LPSTs (per IMS)                                                               12 KB
  QSAV areas (per IMS)                         8 KB
  System PSTs for a DB/DC FP                   92 KB
  generated system                             - or -
  - or - System PSTs for a                     28 KB
  DB/DC system without FP
  AWEs                                         12 KB
  Part of each IMS log buffer                  176 bytes per log
  prefix                                       buffer, 352 bytes
                                               if dual logging
  ---------------------------------------------------------------------------------------
  Total                           378 KB       112 KB (with FP)    420 KB       352 KB
                                               or 48 KB (without
                                               FP), plus
                                               176 * BUFFNO
                                               (single logging)
                                               or 352 * BUFFNO
                                               (dual logging)

1.4 Systems management enhancements


As IMS systems are joined together into sharing groups (sharing databases, network
resources, or message queues) in a sysplex environment, system management becomes
more complex. Prior to IMS Version 8, the IMSs that were in sharing groups had to be
managed individually. IMS Version 8 builds upon the idea of an IMS sysplex (known hereafter
as an IMSplex) to help reduce the complexity of managing multiple IMSs in a sysplex
environment.
An IMSplex can be defined as one or more IMS address spaces (control, manager, or server)
that work together as a unit. Typically (but not always), these address spaces:
Share either databases or network resources or message queues (or any combination)
Run in a S/390 sysplex environment
Include an IMS Common Service Layer (CSL - new for IMS Version 8)

The address spaces that can participate in the IMSplex are:

- Control region address spaces (CTL and DBRC)
- CSL address spaces (Operations Manager (OM), Resource Manager (RM), Structured
  Call Interface (SCI))
- IMS server address spaces (CQS)
- Batch and utility regions using DBRC
- Automated operator programs and SPOCs
- Address spaces that serve as an interface between IMS and a protocol that is not directly
  supported by IMS (for example, TCP/IP)

Examples of IMSplexes are:
- A set of IMS control regions at the Version 6, 7 or 8 level without a CSL that are sharing
  data or sharing message queues
- A set of IMS control regions at the Version 8 level with a CSL that are sharing data or
  sharing message queues
- A single IMS control region at the Version 8 level with a CSL. This still qualifies as an
  IMSplex because it is a set of IMS address spaces (IMS control, CQS, SCI, OM, RM)
  working together.
To support IMSplexes, a number of IMS functions have been enhanced and a number of new
functions have been added:
- The Base Primitive Environment (BPE) has been enhanced.
- The Common Queue Server (CQS) has been enhanced.
- A new component, the Common Service Layer (CSL), is introduced, consisting of the
  following three new address spaces:
  - Operations Manager (OM)
  - Resource Manager (RM)
  - Structured Call Interface (SCI)
- An optional resource structure can be used on a Coupling Facility.
- A TSO-based single point of control (SPOC) application program and a REXX API are
  shipped with IMS Version 8.
- The IMS terminal management function of IMS TM has been enhanced.
- A new coordinated online change function has been added to coordinate global online
  change activities of all the IMSs in the IMSplex.
The following sections briefly describe the enhancements that support the new IMSplexes
and the other systems management enhancements.

1.4.1 BPE enhancements


The Base Primitive Environment (BPE) has been enhanced to support the three new CSL
address spaces. Two new optional exits have been added and a new BPE command,
DISPLAY VERSION, which displays the version number of both the IMS component and the
BPE, is introduced.
Additional information on the Base Primitive Environment enhancements can be found in
Chapter 11, Base Primitive Environment enhancements on page 143.


1.4.2 Common Service Layer


The Common Service Layer (CSL) is new for IMS Version 8. The components of CSL,
Operations Manager (OM), Resource Manager (RM), and Structured Call Interface (SCI),
provide the infrastructure for an IMSplex. Each OM, RM, and SCI runs in a separate address
space.

Structured Call Interface


The Structured Call Interface (SCI) is the part of the Common Service Layer (CSL) that
provides the communications infrastructure of the IMSplex. Using the SCI, IMSplex
components can communicate with each other within a single OS/390 image or across
multiple OS/390 images. Individual IMSplex members do not need to know where the other
members are running. The SCI is responsible for routing requests and messages between
the IMS control regions, Operations Managers (OMs) and Resource Managers (RMs). SCI
also provides support for automatic RECON loss notification.
For more information on SCI as part of the CSL architecture, see Chapter 13, Common
Service Layer (CSL) architecture on page 155.
For more information on automatic RECON loss notification (ARLN), see Chapter 18,
Automatic RECON loss notification on page 265.
For SCI and CSL configuration information, see Chapter 20, Common Service Layer
configuration and operation on page 289.

Resource Manager
The Resource Manager (RM) helps manage IMSplex resources that are shared by multiple
IMS systems in an IMSplex. The RM provides an infrastructure for managing global network
resources and coordinating processes across the IMSplex. The RM maintains resource
information using a resource structure on a Coupling Facility.
For more information on Resource Manager as part of the CSL architecture, see Chapter 13,
Common Service Layer (CSL) architecture on page 155.
For RM and CSL configuration information, see Chapter 20, Common Service Layer
configuration and operation on page 289.
For more detailed information on resource structures and Resource Manager, see
Chapter 14, Sysplex terminal management on page 177.

Operations Manager
The Operations Manager (OM) provides the interface for a single system image for system
operations in an IMS Version 8 IMSplex. The OM provides an application programming
interface (API) for the distribution of commands and responses within the IMSplex. The OM:
- Routes IMS commands to IMSplex members that are registered to process those
  commands.
- Consolidates command responses from individual IMSplex members into a single
  response for presentation to the command originator.
- Provides user exits for command and response edit and command security.
For more information on Operations Manager as part of the CSL architecture, see
Chapter 13, Common Service Layer (CSL) architecture on page 155.


For OM and CSL configuration information see Chapter 20, Common Service Layer
configuration and operation on page 289.

Single point of control application


One of the new functions delivered with IMS Version 8 is the ability to manage a group of
IMSs (an IMSplex) from a single point of control (SPOC). With IMS Version 8, IBM is
delivering an ISPF SPOC application. Using the ISPF SPOC application, you can:
- Issue commands to any or all IMSs in an IMSplex
- Display consolidated responses from those commands

For more information on the single point of control application, see Chapter 16, Single point of
control on page 235.
IMS Version 8 also provides a REXX Application Programming Interface (API) as an
interface to SPOC for user written automation. For details on the SPOC enhancement for
user written automation, see Chapter 17, User written interface to Operations Manager on
page 255, and IMS Version 8: Common Service Layer Guide and Reference, SC27-1293.

Global online change


One of the complexities of running multiple (cloned) IMS systems in an IMSplex is
coordinating online change processing for all IMS systems in the IMSplex. Prior to IMS
Version 8, it was necessary to perform online change on each individual IMS in the IMSplex.
An important part of managing an IMSplex from a single point of control is to be able to
coordinate global online change processing among all the IMSs in the IMSplex. IMS Version
8 introduces the enhancement called global online change (also referred to as coordinated
online change) for this purpose.
For more information on global online change, see Chapter 15, Global online change on
page 215.

Dynamic LE runtime parameters enhancement


When using the CSL, Language Environment (LE) runtime parameters for an IMS application
can be dynamically updated. By having this ability, it is also easier to use the Debug tool for
application testing. LE parameters can be changed without requiring CEEROPT, CEEDOPT,
and CEEUOPT to be changed, reassembled, and rebound.
For more information on the dynamic LE runtime parameter enhancement, see Chapter 19,
Language Environment (LE) dynamic run time options on page 279.

1.4.3 Installation and configuration enhancements


The following is a list of the major IMS Version 8 installation and configuration changes:
- SMP/E jobs have been removed from the installation verification program (IVP) dialog.
- Standard SMP/E RECEIVE, APPLY, and ACCEPT processing is now used for installing
  IMS and for applying service.
- DFSJCLIN, DFSJIDLT, and DFSJIRLT are no longer provided as jobs.
- Changes have been made to the packaging of non-required user exits.
- A new target library has been added for ++SRC elements.
- Macro names have changed.
For more information on installation and packaging enhancements, see Chapter 2,
Packaging and installing on page 21.

1.4.4 Syntax Checker


The Syntax Checker is a new IMS ISPF application that assists you in defining and
maintaining the IMS DFSPBxxx PROCLIB members. It checks the validity of parameters and
their values based on the version of IMS. In addition, it provides detailed help text at the
parameter level and identifies new and obsolete parameters. The Syntax Checker assures
the parameter information is valid prior to either the initial IMS startup or a restart of IMS.
For more information on the Syntax Checker enhancement, see Chapter 3, Syntax Checker
on page 29.

1.4.5 Transaction trace


The transaction trace enhancement provides the ability to trace a transaction through multiple
subsystems in a sysplex, which in turn helps with diagnosing problems. IMS Version 8 works
with the transaction trace facility of the operating system to enable this function. This function
requires OS/390 APAR OW50696 or z/OS Version 1 Release 2.
For more information on transaction trace, see Chapter 6, Transaction trace on page 83.

1.5 Application enablement enhancements


Several of the IMS Version 8 enhancements extend and build upon enabling a robust
application environment. Specific enhancements have been made to further support Web and
e-business application development and take advantage of significant opportunities for
software development, interoperability, and portable execution provided by Java.

1.5.1 Java enhancements


IMS Version 8 has been enhanced in the areas of Java standards, JDBC access to IMS data,
and new Java dependent regions.

Java standards
IMS provides support for the new Java standards as they evolve. JDBC 2.0 enhancements
include support for Updatable ResultSet and limited reverse cursors. SQL enhancements for
IMS DB data include support for aggregate functions (MIN, MAX, and so forth) and scalars.

Java dependent regions


The Java dependent regions enhancement introduces two new dependent regions where
message-driven and non-message-driven IMS Java applications can run. These new regions
use the new IBM technology for Persistent Reusable Java Virtual Machines (JVM) to speed
up the processing of Java applications and provide a serially reusable JVM that can be reset
to a known state between transactions, significantly reducing the overhead in a transaction
processing environment.

JDBC access and Java Tooling


JDBC access to IMS DB data is provided for Java applications running in OS/390 WebSphere
Application Server, CICS Transaction Server/390, and DB2 for OS/390 Java stored
procedures applications. This access is in addition to the access available from IMS TM Java
applications that run in the new JMP or JBP regions.


Java Tooling introduces a new IMS utility called DLIModel, which automatically generates the
required IMS Java metadata class from IMS PSB and DBD source, eliminating the previously
existing manual task of preparing these classes.
For more information on the Java enhancements, see Chapter 8, Application enablement on
page 95. For details on accessing IMS data from WebSphere Application Server, see
Chapter 9, Java enhancements for IMS and WebSphere on page 109.


Part 2

IMS Version 8 base enhancements
In this part of the book we describe the general base product enhancements that are
applicable to all users migrating to IMS Version 8. These enhancements have no dependency
on IMS execution in a Parallel Sysplex, and benefit all IMS environments.

Copyright IBM Corp. 2002. All rights reserved.


Chapter 2. Packaging and installing


In this chapter we supply packaging and installation information for IMS Version 8. Some
changes have been made to the IMS packaging and installation procedures to conform to
OS/390 packaging and installation standards. The aim is to give IMS the same installation
look and feel as other products, thus reducing the need for staff who are skilled in
IMS-specific installation.


2.1 Product packaging


IMS products are ordered by Function Modification Identifiers (FMIDs). Table 2-1 lists the
FMIDs for IMS Version 8.
Table 2-1 FMIDs for IMS Version 8

Function description                                              Function Modification Identifier (FMID)
System Services - IVP, Logger, Database Recovery Control (DBRC)   HMK8800
Database Manager                                                  JMK8801
Transaction Manager, APPC/LU Manager                              JMK8802
Extended Terminal Option (ETO)                                    JMK8803
Remote Site Recovery/Recovery-Level Tracking                      JMK8804
Remote Site Recovery/Database-Level Tracking                      JMK8805
IMS Java Application Support                                      JMK8806
IRLM V2 R1                                                        HIR2101

2.1.1 Installation changes


In IMS Version 8 the installation process has changed. The INSTALL/IVP process has been
eliminated and replaced by two separate processes: INSTALL and the installation verification
procedure (IVP). The INSTALL process is a standard SMP/E installation. The INSTALL/IVP
dialog has been renamed to the IVP dialog, and the SMP/E (B series) jobs have been removed
from the dialog. The IVP process is used to set up the OS/390 and VTAM interfaces and to
verify the installation.
The installation process is documented in the program directory that is shipped with the order.
The program directory contains information on the jobs used to install IMS. Each job contains
instructions for customizing it to your environment. Instructions for unloading these jobs,
as well as their run sequence, are also provided. A new IMS library (ADFSBASE/SDFSBASE)
contains sample installation jobs to perform an IMS Version 8 installation. The sample jobs
are replacements for the INSTALL/IVP B series jobs of the earlier IMS versions.
If you want to build a new SMP/E environment to install IMS Version 8 in its own SMP/E zone,
there are optional SMP/E jobs provided for this purpose:
DFSALA for allocating and initializing a new CSI
DFSALB for initializing CSI zones, allocating SMP/E data sets, and building the SMP/E
required DDDEFs
It is strongly recommended that you use these jobs. If you don't use the DFSALB job, be sure
that you set the ACCJCLIN parameter in the IMS distribution zone before IMS is installed.

2.1.2 Changes in target and distribution data sets


There are several SMP/E data sets that are new for IMS Version 8.

New target libraries


New target libraries shipped with IMS Version 8 are the following:

SDFSBASE    Contains sample jobs used for installation
SDFSSMPL    Contains samples for exits, database descriptors, and so on
SDFSDATA    Contains the Operations Manager (a new component of IMS Version 8)
            translatable text file in English
SDFSSRC     Contains source code; a target library created for the corresponding
            distribution library

New target HFS DDDEFs and PATHs


The following list shows the new SMP/E DDDEFs, whose default paths are subdirectories of
usr/lpp/ims/imsjava81/:
SDFSJCIC
SDFSJDC8
SDFSJHF8
SDFSJTOL

Changed target HFS DDDEF and PATH


The default path has changed for the following SMP/E DDDEF:

SDFSJSAM    Old path: usr/lpp/ims/imsjava71/samples/jdbc/IBM/
            New path: usr/lpp/ims/imsjava81/samples/jdbc/IBM/

Note: You may want a separate HFS and path for the IMS Version 8 HFS, as this allows
greater flexibility with maintenance and the coexistence of multiple IMS releases if
required. This can be implemented by setting up an IMS V8 specific mount point, for
example /usr/lpp/ims81, and by changing all references from /usr/lpp/ims to /usr/lpp/ims81
for the various DDDEFS, during the installation of IMS Java.

Obsolete target libraries and DDDEFs


These IMS Version 7 Java Application Support target libraries and DDDEFs need to be
deleted from IMS Version 8 zones, when installing over the existing IMS Version 7 SMP/E
zones:

SDFSJJCL
SDFSJDOC
SDFSJHFS
SDFSJIVP

New distribution data sets


ADFSBASE    Contains sample jobs used for installation
ADFSSMPL    Contains samples for exits, database descriptors, and so on
ADFSDATA    Contains Operations Manager commands, translatable text file in
            English
ADFSJDC8    Contains Java documentation
ADFSJHF8    Contains Java file system
ADFSJCIC    Contains Java file system
ADFSJTOL    Contains Java file system

Obsolete distribution libraries and DDDEFs


These IMS Version 7 Java Application Support distribution libraries and DDDEFs need to be
deleted from IMS Version 8 zones, when installing over the existing IMS Version 7 SMP/E
zones:
ADFSJDOC
ADFSJHFS
ADFSJIVP

2.1.3 SMP/E processing changes in IMS Version 8


All FMIDs are installed using the normal SMP/E RECEIVE, APPLY and ACCEPT command
sequence. This conforms to the packaging standards of z/OS and other IBM products. During
the APPLY processing, you will see multiple SMP/E messages which indicate no target library
for parts that are defined in the IMS system definition. The expected messages are
documented in the program directory.
Note: If you install the RSR feature (FMIDs JMK8804 and JMK8805), but don't want to use
it on a specific IMS, you need to code the DFSPBxxx parameter RSRMBR=nn with the
parameter RSR(NO) in DFSRSRnn to avoid the occurrence of DFS0579W FIND FAILED
FOR PROCLIB MEMBER = DFSRSR00 during the IMS start up.
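
As an illustrative sketch of the note above (the suffix 00 is an assumption for your
installation), the DFSPBxxx member would point at a DFSRSRxx member that explicitly
disables RSR:

   In DFSPBxxx:   RSRMBR=00
   In DFSRSR00:   RSR(NO)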
IMS Version 8 has eliminated the jobs DFSJCLIN, DFSJIDLT and DFSJIRLT that were
available in previous versions. These jobs could be used to build the non-system definition,
DLT, and RLT elements of IMS. These elements are now created during SMP/E APPLY
processing. SMP/E uses the inline ++JCLIN that is provided with the FMIDs to accomplish this.
If needed, the SMP/E GENERATE command can be used to create the JCL to build these
elements. Use of the GENERATE command requires that ACCJCLIN be set in your
distribution zone before processing the FMIDs.
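
As a sketch of this setup step (the zone name IMSDLB is an assumption for your
installation), ACCJCLIN can be set on the distribution zone entry with SMP/E UCLIN:

   SET BOUNDARY(IMSDLB) .
   UCLIN .
     REP DLIBZONE(IMSDLB)
         ACCJCLIN .
   ENDUCL .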

2.1.4 User exits in IMS Version 8


In IMS Version 7, the majority of the source code and samples provided are in the ADFSSRC
distribution library. ADFSSRC does not have a corresponding SMP/E target
library, so the default SMP/E SMPSTS is used instead. With IMS Version 8, some of the
optional user exits have been moved from ADFSSRC to the ADFSSMPL (distribution) and
SDFSSMPL (target) libraries, and they are created as ++SRC type parts. This allows the exits
to be updated by line updates during SMP/E service processing, as opposed to complete
replacement.
IBM no longer ships the object code for these user exits, and no module (MOD) to
load module (LMOD) relationships are created during IMS installation. This means that SMP/E
will not automatically assemble and bind the parts during APPLY processing. If you create
the MOD to LMOD relationship, then SMP/E APPLY processing will automatically assemble
and bind these exits.

2.2 IVP changes


In IMS Version 8, the IVP dialog provides an option that allows you to either include or
exclude Fast Path. The selection is made as a sub-option after the IVP environment option
selection. By default, the sub-option is selected, so Fast Path is included in the system
definition and used in the IVP applications.

2.2.1 Execution steps


In IMS Version 8, the installation process (the B series of jobs) is withdrawn from the dialog.
The IVP execution begins with steps Cx and proceeds through steps Zx.
Not all iterations of the IVP process contain the same steps. Your iteration of the IVP
process will create the specific steps required based on the options you selected. All steps
should be completed to thoroughly test the IMS system. The changes in Version 8 are
described in the following paragraphs.

Removed Bx steps
The Bx series of jobs and tasks that were used to build the SMP/E distribution libraries for
the previous IMS versions are removed in IMS Version 8.

Changed Cx steps
The Cx series of jobs and tasks are used for IMS system definition. For IMS Version 8, a new
APPLCTN macro has been added for application DFSIVP37, with a TRANSACT macro for
transaction code IVTCM. This is for the IMS Java IVP, which is not executed using the IVP
dialog. For more information about the IMS Java IVP, refer to 2.2.2, IMS Java IVP on page 26.

Changed Dx steps
The Dx series of jobs and tasks are used to define the IMS interfaces to OS/390 and
VTAM. For IMS Version 8, the following changes have been made:
Task IV_D208T - Update PPT entries, added BPEINI00 for Common Service Layer (CSL).
Task IV_D209T - Updated to have the scatter (SCTR) parameter in the parameter list for
the link step. Without the parameter, IPL fails with wait status: x'054'.

New and changed Ex steps

The Ex series of jobs and tasks are used to prepare the IVP IMS system and applications. For
IMS Version 8, the following items are new in the execution of these steps:
Job IV_E202J updated to contain a step for PSB generation for the new PSB DFSIVP37
Job IV_E303J added to add CSL control statement members to IMS PROCLIB
Task IV_E305T added for the Syntax Checker sample

New Ox series of jobs and tasks for IMS Version 8

The IMS Version 8 IVP includes a new Ox series of jobs and tasks. These items are used to
verify the IMS-shipped TSO single point of control (SPOC) application. The Ox series for a
DB/DC system contains the following jobs and tasks:

Job IV_O101J - Allocate Data Sets


Job IV_O102J - Initialize RECON / Register Data Bases
Job IV_O103J - Data Base Initial Load
Job IV_O104J - Batch Image Copy
Job IV_O201J - Start SCI
Job IV_O202J - Start OM
Job IV_O204J - Start RM
Task IV_O205T - SPOC Sample I
Job IV_O210J - Start IRLM #1
Job IV_O211J - Start IRLM #2
Job IV_O215J - Start DB/DC Region IVP1
Task IV_O217T - Cold Start IMS
Task IV_O220T - SPOC Sample II
Task IV_O230T - Stop IMS with a /CHE FREEZE
Task IV_O232T - Shut Down SCI/OM/RM
Task IV_O233T - Stop IRLM #1 and IRLM #2
Job IV_O401J - Scratch Data Sets


2.2.2 IMS Java IVP


The IMS Java enhancements in IMS Version 8 provide expanded sample applications for IMS,
WebSphere, CICS, and DB2 stored procedures, but the IVP is not executed using the IVP
dialog. Detailed instructions on how to run the IMS Java IVP can be found in the README
files under directories such as the following (if you are using the default path names for
IMS Java):
/usr/lpp/ims/imsjava81/samples/ivp/ims
For additional information about the IMS Java IVP applications, refer to the manual IMS
Version 8: Java User's Guide, SC27-1296.

2.3 IMS system definition


The DFSJCLIN member for creating the non-system definition parts is no longer provided.
These parts are created during SMP/E APPLY processing. SMP/E uses inline ++JCLIN that
is provided with the FMIDs to accomplish this. The SMP/E GENERATE command can be
used to create the JCL to build the non-system definition elements with the elements actually
being created during SMP/E APPLY processing. It is necessary to set up ACCJCLIN in the
distribution zone before processing the FMIDs.

2.3.1 Changed minimum and default values for RECLNG in MSGQUEUE macro
The MSGQUEUE macro RECLNG minimum and default values have been modified for the
short and large message queue data sets. The minimum values are now 392 and 1176, and
the default values are 504 and 2520. The old minimum values were 112 and 672, and the old
defaults were 224 and 2240. If you have been using values that are less than the new
minimum values, you should adjust them accordingly; otherwise you will receive a return
code of 04 from stage 1 when running a system definition of at least the CTLBLKS type.
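
As an illustrative stage 1 fragment (all other MSGQUEUE operands are omitted here), the
new default record lengths would be coded as:

   MSGQUEUE RECLNG=(504,2520)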

2.4 New and obsolete execution parameters


New keywords for IMS Version 8 in the IMS execution parameters PROCLIB member
DFSPBxxx are the following:

CSLG=      A three-character suffix for the Common Service Layer global
           PROCLIB member, DFSCGxxx. When you specify this parameter, IMS
           uses the Common Service Layer (CSL) to manage and operate the
           IMSplex. This parameter does not have a default value. For more
           information about setting up the DFSCGxxx PROCLIB member, refer
           to 20.1.8, Set up IMS PROCLIB members on page 296.

IOVFI=     Specifies how often the count of unused IOVF control intervals is
           updated. This parameter sets a timer, in seconds, which triggers an
           IMS internal task to begin a count of unused IOVF control intervals.
           The default value is 7200 seconds (2 hours). To disable the timer,
           specify a time of 1 second. A value of 0 sets the timer to the default
           value (7200 seconds). The maximum allowed value is 86400 (24
           hours).

OTMAASY=   Determines whether a transaction that is defined as non-response
           and originated from a program-to-program switch should always be
           scheduled asynchronously (OTMAASY=Y). The default is
           synchronous (OTMAASY=N). This parameter applies to
           send-then-commit messages only. There is no DFS2082 message
           issued for a transaction scheduled asynchronously. It can also be
           used in a multiple program-to-program switch environment to ensure
           that only the response transaction is scheduled synchronously.

Note: With the introduction of OTMAASY parameter, IMS has changed the way it
schedules a non-response transaction originating from a program-to-program switch. This
could affect existing applications.
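
As an illustrative DFSPBxxx fragment (the suffix and values shown are assumptions, not
recommendations), the three new keywords might be coded as:

   CSLG=001
   IOVFI=7200
   OTMAASY=N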
When migrating from IMS Version 7, the CHTS parameter (number of CCB hash table slots)
is obsolete for IMS Version 8. If it is specified in the DFSPBxxx, IMS Version 8 issues the
following message during the start:
DFS1921I - PARAMETER KEYWORD INVALID, CHTS

IMS Version 8 ignores the CHTS parameter and processing continues.


Chapter 3. Syntax Checker
In this chapter, we introduce the new TSO/ISPF Syntax Checker application. We describe the
functions and benefits of the Syntax Checker and its allocation requirements. We then provide
a brief overview and examples of the various functions through a sample Syntax Checker
session.


3.1 Introduction
The IMS Syntax Checker is a new ISPF application delivered with IMS Version 8. The Syntax
Checker only runs in an ISPF environment on TSO/E.
Syntax Checker provides the capability to define, verify, and validate the parameters (and
their value specifications) of the DFSPBxxx member of IMS.PROCLIB before restarting the
IMS control region to activate the startup parameters.
The Syntax Checker application performs the following functions:

Reads and displays an IMS.PROCLIB member's parameters and values


Verifies that parameters and values are valid
Allows modification of values
Saves parameters and values back to the same or a new PROCLIB member
Identifies new and obsolete parameters per IMS version
Provides detailed online help text at the parameter level

Syntax Checker aids you in maintaining PROCLIB members by minimizing the need to edit
them directly, reducing the chance for typographical errors. It also aids in the migration
between versions, by identifying all parameters that are new in a release, and parameters that
are obsolete. This reduces IMS setup and configuration time.
The Syntax Checker supports IMS Version 6, 7, and 8, and provides specific validation of
DFSPBxxx members for:
DBCTL initialization parameters
DCCTL initialization parameters
DB/DC initialization parameters
For detailed information on the DFSPBxxx and other parameter members, see IMS Version 8:
Installation Volume 2: System Definition and Tailoring, GC27-1298.

3.2 Getting started


The Syntax Checker application is enabled by allocating the required files to the user's TSO
session. Table 3-1 shows the DD names and the associated files that need to be allocated.
Table 3-1 Syntax Checker file allocation associations

TSO file DD name    Data set
ISPLLIB             IMS.SDFSRESL
ISPPLIB             IMS.SDFSPLIB
ISPMLIB             IMS.SDFSMLIB
ISPTLIB             IMS.SDFSTLIB

A Syntax Checker startup REXX exec is supplied in IMS.SDFSEXEC(DFSSCSRT), which
takes the IMS high-level qualifier as a parameter and performs the necessary allocations. It
can be invoked by using the following TSO command:
ex 'IMSPSA.IMS0.SDFSEXEC(DFSSCSRT)' 'hlq(imspsa.ims0)'


This will allocate the libraries, invoke the TSO Syntax Checker application, and place you on
the Syntax Checker member and data set name panel. Table 3-2 shows the hierarchy of the
Syntax Checker application menu options and their functions.
Table 3-2 Syntax Checker application menu structure

Menu item   Sub-menu item        Function
File        1. Save              Saves the parameters and values.
            2. Save as           Saves the parameters and values as a member with a
                                 new name.
            3. Cancel            Discards all changes that have been made to parameters
                                 and values.
            4. Change release    Allows you to select the IMS release that the parameters
                                 and values should be validated against.
            5. Exit              Terminates the Syntax Checker application. If changes to
                                 parameters or values have been made, you will be
                                 prompted to save or discard your changes.
Edit        1. Comment           Inserts a comment line before parameters that have been
                                 selected with a / (slash) on the parameter display listing.
            2. Delete            Deletes all of the parameters that have been selected with
                                 a / (slash) on the parameter display listing.
            3. Delete all        Deletes all parameters, and presents a listing of all
                                 available parameters for this release and region type. This
                                 essentially creates a new empty member with all available
                                 parameters displayed.
View        1. Display all       Displays all of the parameters, whether or not they have
                                 assigned values. These parameters are only saved if they
                                 have been assigned a value when the member is saved.
            2. Display selected  Displays only the parameters that have values. All of
                                 these parameters are saved when the member is saved.
            3. Display new       Displays all of the new parameters for the selected IMS
                                 release and region type. These parameters are only saved
                                 if they have been assigned a value when the member is
                                 saved.
Help        1. Help for Help     Explains the general function of the help dialogs.
            2. Extended help     Explains how to use the Syntax Checker application to
                                 work with keywords and values.
            3. Keys help         Explains current function key processes.
            4. Help Index        Provides an index of help topics.
            5. About             Displays release and copyright information.

3.3 Using the Syntax Checker


When the Syntax Checker first starts, you will see a prompt for a data set name and member
name. If a member name is not specified, a member list will be displayed. You can select the
member to be processed from the list. Figure 3-1 shows the library and member specification
panel.


Figure 3-1 Library and member selection panel

After you have responded with the data set and member name, Syntax Checker reads the
input file and tries to determine, from specially formatted comments, the IMS version and type
of control region. These comments are generated by the Syntax Checker application when
the member has been saved once.
Example 3-1 shows the special Syntax Checker comments in the DFSPBxxx member. The
comments begin with the characters '*<':
Example 3-1 Syntax Checker version and region control comments
*<SYSUID>JOUKO3
*<VERSION>8.1
*<IMSCR>DB/DC
*<DATE>02/06/07
*<TIME>17:35
ALOT=10
...

When a new DFSPBxxx member is created, the IMS procedure library name and member
name must be specified, along with the IMS release (for example, IMS 8.1) and the type of
control region (for example, DB/DC Control Region). The IMS release and type of control
region are saved as comments in the DFSPBxxx member. After the DFSPBxxx member has
been saved, the next time the member is accessed, the Syntax Checker can determine the
IMS release and type of control region from the comments in the member.


The version of IMS and the type of control region are necessary to correctly process the
member. If Syntax Checker cannot determine the information it requires from the comments
in the member, it prompts the user for the release and type of control region, as shown in
Figure 3-2. When Syntax Checker is able to determine this information, this screen is not
displayed.

Figure 3-2 Release and control region type specification panel

After the necessary information is determined or provided, Syntax Checker displays a list of
parameters read from the input member.
The keywords displayed in the list are determined as follows:
If the member is new or empty, then a list of all possible keywords for the member is
displayed in alphabetical order. A message is displayed to inform the user that the
member is new or empty.
If the member is neither new nor empty, then the list contains the current parameters
defined in the member, shown in alphabetical order by keyword name. A display of all
possible keywords for the member can be obtained by selecting View > 1. Display all
from the action bar.
If there are parameter errors, the parameter is highlighted. Press the Enter key without any
other input to display the first keyword with a value error at the top of the keyword display.
Table 3-3 shows the colors used to highlight various parameter keywords, while Table 3-4
shows the colors used to highlight various parameter values.
Table 3-3 Keyword color highlighting

Color       Description
Green       Normal color for a keyword with a value
Blue        A new keyword in this release
Yellow      Warning: read the help text
Red         Keyword error
Turquoise   Keyword is a template. It has no value and will not be saved. These keywords
            are only displayed using the display all view.


Table 3-4 Keyword value color highlighting

Color       Description
Turquoise   Value is correct
Red         Value error

In Figure 3-3, the AAAA=1234 and APPLID keywords are invalid. After entering updates and
pressing Enter, press Enter again to display any other keywords in error.

Figure 3-3 Invalid keywords and parameters

If no errors are found, a message indicating that no errors have been detected is displayed.
Invalid keywords are always displayed at the top of the first screen. To delete an invalid
keyword, enter a 'd' in the SEL field of the keyword.
The values of each of the parameters in the display are modifiable. New parameters may be
added and existing parameters may be deleted.


Figure 3-4 Invalid parameter value

As shown in Figure 3-4, after pressing Enter to check the syntax, Syntax Checker indicates
that the ALOT keyword has an invalid parameter value assigned.


Detailed help information is provided for each parameter member and each keyword
parameter within the member. To obtain the help screen, place the cursor on the line
containing the value and press the PF1 key. Figure 3-5 shows the help text that is displayed
for the ALOT keyword that contained an invalid value.

Figure 3-5 Parameter level help for the ALOT parameter


After correcting the parameter value and again checking the syntax, the Syntax Checker
displays a message indicating there were no errors found, as shown in Figure 3-6.

Figure 3-6 Successfully validated parameters


3.3.1 Changing releases


The IMS Syntax Checker provides an aid in the migration between versions by identifying all
parameters that are new in a release, and parameters that are obsolete. The Syntax Checker
supports IMS Version 6, 7, and 8 parameters. To change the IMS version, select File > 4.
Change Release from the action bar, as shown in Figure 3-7. If any changes have been made
to the current member, you will be given an option to save the member.

Figure 3-7 Changing releases


This option validates the member being processed against the release selected. Invalid and
obsolete parameters are identified. Syntax Checker provides a selection of supported IMS
releases as shown in Figure 3-8.

Figure 3-8 Select new release prompt


The IMS Execution Parameter Display - Current Parameters panel is displayed, with the new
and obsolete keywords highlighted. In our example, the CHTS parameter appears highlighted.
Pressing Enter displays the message indicating CHTS is invalid for IMS Version 8, as shown
in Figure 3-9.

Figure 3-9 Invalid parameters after release change


3.3.2 Display options


There are multiple options available from the View action bar selection as shown in
Figure 3-10. The items in the display options list that have a '*' instead of a number are not
valid options at this time.

Figure 3-10 Display options

Select option View > 1. Display all from the action bar to display all of the valid keywords
for the selected IMS release, as shown in Figure 3-11.


Figure 3-11 Display all view

Select option View > 3. Display new from the action bar to display all of the new keywords
that were added for an IMS release, as shown in Figure 3-12. New keywords for IMS
Version 8 are:

CSLG       CSL global member
IOVFI      IOVF timer control intervals
OTMAASY    OTMA program switch for non-response mode transactions

For the description of these parameters, refer to 2.4, New and obsolete execution
parameters on page 26 and the manual IMS Version 8: Installation Volume 2: System
Definition and Tailoring, GC27-1298.

Figure 3-12 Display new view


Select option View > 2. Display selected from the action bar to display only the keywords
that have been specified in the DFSPBxxx member that is being viewed, as shown in
Figure 3-13.

Figure 3-13 Display selected view

To add a comment in the DFSPBxxx member, enter a 'C' in the SEL field where you would like
the comment line added as shown in Figure 3-13. After pressing Enter, a comment line is
created in the DFSPBxxx member for you to enter a comment, as shown in Figure 3-14.
Type your comment and press Enter.
Comments can also be added on the same line as the parameter by overtyping the
Description field. If there are comments on the same line in the DFSPBxxx member, they are
displayed in place of the Syntax Checker's description; if there are no same-line comments,
the description is displayed. Same-line comments cannot exceed 42 characters.


Figure 3-14 Display selected view with comment

3.3.3 Save options


There are two save options available from the File selection on the action bar: the Save and
Save as options. These work the same as the normal ISPF save options: Save is used to
save the current member with the same name, whereas Save as allows the specification of a
new member and library name.
Since we have been creating a new DFSPBxxx member for an IMS Version 8 system using
an IMS Version 7 DFSPBxxx member for input values, the Save as option is selected, as
shown in Figure 3-15. The member is saved to a new IMS Version 8 member rather than
overlaying the existing DFSPBxxx member for IMS Version 7.


Figure 3-15 Syntax Checker save options

Syntax Checker will prompt for the library and member name for the new DFSPBxxx member,
as shown in Figure 3-16. Type the name of the IMS member where you want the DFSPBxxx
member for IMS Version 8 to be saved and press Enter.

Figure 3-16 Save as library and member prompt


A message is displayed to indicate the member was saved successfully, as shown in
Figure 3-17.

Figure 3-17 Member saved message

Additional information on the parameters and parameter values for the DFSPBxxx member
can be found in IMS Version 8: Installation Volume 2: System Definition and Tailoring,
GC27-1298, and IMS Version 8: Release Planning Guide, GC27-1305.


Chapter 4. Database management enhancements
In this chapter we introduce IMS Database Manager (IMS DB) enhancements in IMS Version
8. This chapter contains information relating to the following items:

Database Image Copy 2 enhancements


Parallel database processing
Fast Path DEDB enhancements
Batch RRS support
Coordinated IMS/DB2 disaster recovery support


4.1 Database Image Copy 2 enhancements


The enhancements for Image Copy 2 (IC2) in IMS Version 8 are intended to ease the
coordination of your database administration and are based on a number of user
requirements. With IMS Version 8, you are able to execute multiple utility control statements
during one IC2 execution step (the DFSUDMT0 utility). This means that multiple copies are
created in one IC2 step, invoking DFSMSdss once.
The logical copy completion is achieved in a very brief period of time for multiple database or
area data sets (unless the same tape output volume is used for multiple ICs; see the note in
4.1.1, Multiple DBDS and ADS copies on page 48).
Group name support, which names the data sets in one job execution, can be used to better
correlate the commands used for stopping and starting (/DBR, /STA) your databases in
groups, and can benefit coordination in your database management processes.
In overview, Image Copy 2 now supports:

Multiple DBDSs (as well as ADSs) copied in one execution step


Parallel dump processing
IC data set group names
DFSMSdss OPTimize() specification by the user
SAMEDS option to create multiple image copies in the same output data set
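
As an illustrative job step sketch (the data set name is an assumption, and the RECON and
database DD statements, as well as the exact control statement column layout, are omitted;
see the IMS utilities documentation for the precise format), IC2 is invoked as a ULU utility
region running DFSUDMT0, with one control statement per DBDS or ADS in SYSIN:

   //IC2      EXEC PGM=DFSRRC00,PARM='ULU,DFSUDMT0'
   //STEPLIB  DD  DISP=SHR,DSN=IMS.SDFSRESL
   //SYSPRINT DD  SYSOUT=*
   //SYSIN    DD  *
     (one utility control statement per DBDS or ADS to be copied)
   /*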

Some changes concern the Image Copy 2 utility (DFSUDMT0) itself. Because IC2 runs in
AMODE(31) and additional control blocks are allocated above the line, the storage use
below the line (below 16 MB) is minimized, which is an important improvement, for example,
when multiple copies are running in parallel.
Also, DBDS-level errors no longer force the termination of the utility; instead, a return code
of 8 is issued. The possible Image Copy 2 return codes are listed in Table 4-1.
Table 4-1 Return codes of the Image Copy 2 utility

Return code   Meaning
0000          Operation ended successfully
0004          I/O error(s) on output data set(s)
0008          One or more DBDS operations failed
0012          All DBDS operations failed
0016          Syntax or other severe error (for example, DBRC is not present)

4.1.1 Multiple DBDS and ADS copies


A major enhancement of the Image Copy 2 utility in IMS Version 8 is the ability to run with
only one execution step to process multiple control statements. When Image Copy 2 is
executed, the utility makes a single invocation of DFSMSdss to start multiple dump processes
in parallel to copy all specified database data sets (DBDSs) or area data sets (ADSs).
Because the DBDS dumps are processed in parallel, exploiting full DFSMS support and the hardware feature for Concurrent Copy (sidefiles), the logical copies for all DBDSs complete even while the physical copies are still being taken. In other words, the database is unavailable only for the time DFSMSdss requires to initialize a Concurrent Copy session for the data, which is a very small fraction of the time that the complete backup takes.
48

IMS Version 8 Implementation Guide

The physical output is processed in parallel using multiple TCBs - as long as no other
limitations or constraints exist.
Note: Without DFSMSdss APAR OW54614, DFSMSdss would serialize multiple dump or
restore tasks that are using the same DASD volume, even if the PARALLEL keyword was
specified. APAR OW54614 allows multiple dump tasks to execute in parallel while
dumping to the same DASD volume. The serialization still applies for multiple dump tasks
dumping to the same tape volume.
For further information about the DFSMSdss feature of Concurrent Copy, refer to the IBM
Redbook, Implementing ESS Copy Services on S/390, SG24-5680, Chapter 5, Concurrent
Copy.
There are changed and new messages in IMS Version 8 for image copy completion and failure notification:
- If all the database data sets (DBDSs) for a database or HALDB partition are logically complete (data sets that failed are also reported in this message), the following message is issued to the system console and the SYSPRINT:
  DFS3121A LOGICAL COPY COMPLETE FOR GROUP | DB/AREA groupname | dbname; n OF m DATA SETS FAILED
  The DFS3121A message only appears if a logical (XL) option was coded, not for any other option such as XP or S. The groupname appears if it is used in the control statements to group the DBDSs, and indicates that this group is logically complete. Please refer to the section about Group name support on page 51.
- If the image copy is logically complete for an individual DBDS, the following message is issued to the SYSPRINT:
  DFS3121I COPIED DB/AREA dbname DDN ddname DSN dsname
  This message follows the preceding alert message to list each successfully processed DBDS (in case of an unsuccessful process, DFS3122A is issued for each failing DBDS).
- If all the DBDSs for a database or HALDB partition are physically complete (data sets that failed are also reported in this message), the following message is issued to the system console and the SYSPRINT:
  DFS3141A PHYSICAL COPY COMPLETE FOR GROUP | DB/AREA groupname | dbname; n OF m DATA SETS FAILED
  The DFS3141A message only appears if a physical (XP) option was coded, not for any other option such as XL or S. The groupname appears if it is used in the control statements to group the DBDSs, and indicates that this group is physically complete. Please refer to the section about Group name support on page 51.
- If the image copy is physically complete for an individual DBDS, the following message is issued to the SYSPRINT:
  DFS3141I COPIED DB/AREA dbname DDN ddname DSN dsname
  This message follows the preceding DFS3141A alert message.
- If the image copy failed for a DBDS intended for physical processing, the following message is issued with an associated reason code:
  DFS3144A IMAGE COPY PROCESSING FAILED FOR DB/AREA dbname DDN ddname, REASON = nn

Chapter 4. Database management enhancements

49

Example 4-1 Control statements for Image Copy 2


|...+....1....+....2....+....3....+....4....+....5....+....6...
//SYSIN   DD *
2 CUSTDB1 DDNAME1A ICOUT1A1 ICOUT1A2                 XLC
2 CUSTDB1 DDNAME1B ICOUT1B1 ICOUT1B2                 XLC
2 CUSTDB1 DDNAME1C ICOUT1C1 ICOUT1C2                 XLC
2 CUSTDB1 DDNAME1D ICOUT1D1 ICOUT1D2                 XLC
2 CUSTDB1 DDNAME1E ICOUT1E1 ICOUT1E2                 XLC
2 CUSTDB2 DDNAME2A ICOUT2A1 ICOUT2A2                 S C
2 CUSTDB2 DDNAME2B ICOUT2B1 ICOUT2B2                 S C
2 CUSTDB3 DDNAME3A ICOUT3A1 ICOUT3A2                 XP
2 CUSTDB3 DDNAME3B ICOUT3B1 ICOUT3B2                 XP
2 CUSTDB3 DDNAME3C ICOUT3C1 ICOUT3C2                 XLC
2 CUSTDB4 DDNAME4A ICOUT4A1 ICOUT4A2                 S C
/*

Example 4-1 shows utility control statements to copy DBDSs for 4 different databases
(CUSTDB1,..2,..3,..4). Please note the different processing options (S | XL | XP) specified for
the DBDSs. When the image copies for the DBDSs for CUSTDB1 are all logically complete, a
DFS3121A message is issued for CUSTDB1. The DFS3121I message is issued for each of
the five DBDSs processed belonging to CUSTDB1.
When the image copies for the DBDSs for CUSTDB3 are all physically complete, a
DFS3141A message for CUSTDB3 is issued. The DFS3141I message is issued for each of
the three DBDSs processed belonging to CUSTDB3. Copy completion messages are not
issued for CUSTDB2 or CUSTDB4 since a fuzzy image copy was requested.
As you can see, the option XL specified for the third DBDS of CUSTDB3 is ignored. Image Copy 2 uses the highest level of consistency specified for any of the DBDSs belonging to the database. This also applies if you don't specify the options for all DBDSs of a database. The utility processes the database at the following levels of consistency:
  XP   if any DBDS statement is specified with XP; else
  XL   if any DBDS statement is specified with XL; else
  S    if all DBDS statements are specified with S or nothing (default).

Here are some additional points of interest about the Image Copy options:
- With a KSDS-organized DBDS, take care if you are using the S option (fuzzy, correlates to SMSCIC). If the KSDS is being copied along with other DBDSs, the utility makes only one attempt at attaining a logical copy. This one attempt can easily fail, for example if a CI or CA split happens during the process. If the KSDS is the only data set being copied, the utility re-attempts the dump process up to 10 times.
- The copy completion messages for the XL | XP options as issued in IMS Version 8 are intended to indicate that the databases may be restarted and made available for further database authorizations. That is why the S option used for fuzzy copies does not issue any completion message.
- If some automation steps in your environment are based on any message issued (for example, DFS3121A), please note the changes. In IMS Version 7, you got the DFS3121A for all copy types; now the DFS3121I is issued for each DBDS, preceded by the DFS3121A message for the entire database (or group). Also, there was no indication of physical copy completion (DFS3141A) in previous versions (the job step ending implicitly indicated the physical copy completion).

50

IMS Version 8 Implementation Guide

- When using the XL | XLC option, be aware that the RECON (notification of the image copies) is updated after the physical dump process and output data set are finished. At that time the DFS3121A message has already been issued and may trigger your automation to perform /STA DATABASE activity. If the physical copy fails after any DBDS is authorized (intended for update) and an ALLOC entry is created for this DBDS, the image copy is unusable and thus cannot be used for recoveries to any timestamp before the ALLOC timestamp (allocation or log start time). If this is critical for your recovery scenario, consider whether changing to the XP option is the better solution.
Tip: For full exploitation of multiple DBDS and AREA copies and parallel processing, ensure different tape output volumes; otherwise the dump processing is serialized. Also, please consider that tape processing needs longer execution time. Users who want to stack image copies on the same output tape volumes should consider using the SAMEDS option; refer to 4.1.3, Single output data set on page 52.

4.1.2 Group name support


As mentioned above, a new control statement can be used to assign a group name, existing
only for this execution, to the collection of DBDSs that are to be copied. The group name
statement must be the first control statement, and is followed by control statements
identifying the corresponding DBDSs. This also means that only one group may be specified
per IC2 execution. The options specified (or defaulted) on the group statement apply to all the
DBDS statements and any options specified on the DBDS statements are ignored.
Since the group exists only for the execution there is no dependency nor any requirement to
match IC2 group names with existing database (or DBDS) group names defined in your
RECON. However, using the DBRC defined group names can simplify your operations:
/DBR DATAGROUP DBGRP1A
GENJCL.IC GROUP(DBGRP1A)
/STA DATAGROUP DBGRP1A

The generated IC JCL will include a group name statement for DBGRP1A followed by
statements for any DBDS member of the predefined DBDS group named DBGRP1A.
Please refer to GENJCL support on page 53 for a discussion of the GENJCL.IC command support as it relates to the Image Copy 2 enhancements.
The new messages mentioned in the previous section will notify the completion for the
processed group and will be followed by the individual DFS3121I or DFS3141I message for
every DBDS. Example 4-2 shows the Image Copy 2 statements, and the corresponding
completion messages.
Example 4-2 Image Copy 2 completion messages
|...+....1....+....2....+....3....+....4....+....5....+....6...
//SYSIN   DD *
G DBGRP1A                                            XL
2 CUSTDB1 DDNAME1A ICOUT1A1 ICOUT1A2
2 CUSTDB1 DDNAME1B ICOUT1B1 ICOUT1B2
2 CUSTDB1 DDNAME1C ICOUT1C1 ICOUT1C2
2 CUSTDB2 DDNAME2A ICOUT2A1 ICOUT2A2                 S
2 CUSTDB2 DDNAME2B ICOUT2B1 ICOUT2B2
2 CUSTDB3 DDNAME3A ICOUT3A1 ICOUT3A2                 XP
/*
Output messages:
DFS3121A LOGICAL COPY COMPLETE FOR GROUP DBGRP1A; 0 OF 6 DATA SETS FAILED


DFS3121I COPIED DB/AREA CUSTDB1 DDN DDNAME1A DSN IMSPROD.CUSTDB1.DD1A
DFS3121I COPIED DB/AREA CUSTDB1 DDN DDNAME1B DSN IMSPROD.CUSTDB1.DD1B
DFS3121I COPIED DB/AREA CUSTDB1 DDN DDNAME1C DSN IMSPROD.CUSTDB1.DD1C
DFS3121I COPIED DB/AREA CUSTDB2 DDN DDNAME2A DSN IMSPROD.CUSTDB2.DD2A
DFS3121I COPIED DB/AREA CUSTDB2 DDN DDNAME2B DSN IMSPROD.CUSTDB2.DD2B
DFS3121I COPIED DB/AREA CUSTDB3 DDN DDNAME3A DSN IMSPROD.CUSTDB3.DD3A

Please note that the option stated on the group statement overrides any option on the following DBDS statements. If there is no option specified for the group, the default S is used for all nested DBDSs. In our example, the specified options (S for the first DBDS of CUSTDB2, and XP for the DBDS of CUSTDB3) are overridden by the XL option of the group statement.
Note: The processing options S | XL | XP, COMPRESS, and OPTIMIZE() on the group statement override the options specified on the imbedded DBDS statements.

4.1.3 Single output data set


You can use a control statement with the S option in the first column to write multiple ICs to a single output data set, as shown in Example 4-3. The S ('same data set', or SAMEDS) option causes the Image Copy 2 utility to invoke DFSMSdss to write the copy into the same data set as the previous control statement that specified output ddnames (without S in the first column). The absence of any preceding statement without S in the first column causes the new message DFS3143A to be issued.
Stacking image copies onto one output data set is an alternative to writing multiple output data sets to one volume.
Please consider the following guidelines:
- Stacking of IC dumps is limited to 255.
- GENJCL.IC with the SAMEDS parameter requires the ONEJOB parameter; otherwise the new DSP0192I message is issued.
- The physical copying of multiple data sets to the same data set can only run serialized.
- Recovery using the standard database recovery utility requires a separate read pass for each data set.
- The IMS Online Recovery Service (ORS) recognizes this form of 'stacked' ICs and schedules one single restore operation for all involved DBDSs.
- DBDSs defined in the RECON as REUSE are supposed to use unique preallocated image copy data sets. For these, the same data set option is not supported. It would probably run if the specified output ddnames refer to the expected preallocated output data set. But you will agree, it doesn't make sense to combine the REUSE option of your DBDS definition in the RECON with this new image copy option intended to stack more ICs onto one output data set.
- The DSP0351I message indicates any inconsistent information between your RECON definitions (the REUSE option) and the intended image copy (the generated JCL with a mismatch in ddname caused by the SAMEDS option).
Example 4-3 Image Copy 2 group statement
|...+....1....+....2....+....3....+....4....+....5....+....6...
//SYSIN   DD *
G DBGRP1A                                            XL
2 CUSTDB1 DDNAME1A ICOUT11 ICOUT12
S CUSTDB1 DDNAME1B
S CUSTDB1 DDNAME1C
2 CUSTDB2 DDNAME2A ICOUT21 ICOUT22
S CUSTDB2 DDNAME2B
S CUSTDB3 DDNAME3A
2 CUSTDB4 DDNAME4A ICOUT4A1 ICOUT4A2
2 CUSTDB4 DDNAME4B ICOUT4B1 ICOUT4B2
/*

In Example 4-3 we are using the same DBGRP1A group combined with statements for the same output data set. All three DBDSs for the CUSTDB1 database are to be dumped to the ICOUT11 and ICOUT12 data sets. The two DBDSs for CUSTDB2, together with the one DBDS for CUSTDB3, are to be dumped to the ICOUT21 and ICOUT22 data sets. Each DBDS for CUSTDB4 is to be dumped to its own two output data sets. The group is processed logically and the message DFS3121A is issued for the group.

4.1.4 Support for the DFSMSdss OPTIMIZE option


With IMS Version 8, Image Copy 2 exploits the DFSMSdss OPTIMIZE option. The usage of the option can now be specified for Image Copy 2 through the control statements. DFSMSdss provides four levels of optimization to control the number of DASD tracks transferred in one I/O operation:
  OPTimize(1)   1 track (IC2 default for fuzzy ICs)
  OPT(2)        2 tracks
  OPT(3)        5 tracks
  OPT(4)        1 cylinder (IC2 default for clean ICs)
You can specify the optimization level 1|2|3|4 on the GROUP or DBDS control statement in column 61.
Another option that is not really new in IMS Version 8, but is worth mentioning, is COMPRESS. Since IMS Version 7, the DFSMSdss COMPRESS option (in column 60) has been available to reduce the storage space required to hold the image copy. However, these savings cost more CPU time during execution.
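As an illustration, a hypothetical DBDS statement (database and ddnames invented for this sketch) requesting a clean copy with both options would carry XL in columns 58-59, C (COMPRESS) in column 60, and 3 (for OPTimize(3)) in column 61:

  |...+....1....+....2....+....3....+....4....+....5....+....6...
  2 CUSTDB5 DDNAME5A ICOUT5A1 ICOUT5A2                 XLC3

Such a statement would request a clean, compressed copy transferring five tracks per I/O operation.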
Note: If you are taking advantage of the capability to have the Image Copy 2 utility specify
SET PATCH commands to customize DFSMSdss processing, be aware that the zap now
has to be applied in module DFSUDMT2 instead of DFSUDMT0. (APARs PQ63048 and
PQ50832 contain information about the support for SET PATCH commands.)

4.1.5 GENJCL support


The enhancements for Image Copy 2 are also available using GENJCL.IC commands and are supported by new and changed skeletal JCL members. The GENJCL.IC counterparts for the new Image Copy 2 options are shown in Table 4-2.
Table 4-2 GENJCL.IC options

  ONEJOB                             Creates all IC2 statements for only one execution step
  GROUP(DBGRP1A)                     G DBGRP1A statement with following statements for all
  with SMSCIC | SMSNOCIC (*)         DBDSs of the DBDSGRP defined as DBGRP1A
  DBD(name)                          Statements for all DBDSs of this database
  with SMSCIC | SMSNOCIC (*)
  DBREL( L | P ) if SMSNOCIC         L or P in column 59 for logical or physical processing
  is specified
  (*) smsopts, in any order,         SAMEDS, COMPRESS, OPTIMIZE
  following the SMSCIC(smsopts)
  or SMSNOCIC(smsopts) parameter:
    SAMEDS                           S in column 1, multiple image copies into the same
                                     single output data set
    COMPRESS                         C in column 60 for compression mode
    n [1,2,3,4]                      n in column 61, applies the value for the OPTimize(n)
                                     option

Assuming that database CUSTDB1 has 3 DBDSs defined (as pictured in Example 4-3), consider the following GENJCL input:
GENJCL.IC DBD(CUSTDB1) COPIES(2) SMSNOCIC(4,S,C) ONEJOB DBREL(L)
This invokes the Image Copy 2 utility with the control statements in one execution step, as shown in Example 4-4.
Example 4-4 IC2 control cards
|...+....1....+....2....+....3....+....4....+....5....+....6...
//SYSIN   DD *
2 CUSTDB1 DDNAME1A ICOUT11 ICOUT12                   XLC4
S CUSTDB1 DDNAME1B
S CUSTDB1 DDNAME1C

The necessary skeletal JCL changes introduce the new keywords:
  %SMSGRPA   numeric value that controls group processing
  %SMS1DSA   numeric value indicating whether SAMEDS was specified
  %GROUPA    character value containing the group name, or null

If you leave column 1 and column 61 blank, you invoke the Image Copy 2 utility without any use of the new features. Thus, you can use the skeletal JCL as provided, either exploiting or omitting the enhancements.
Note: IMS Database Image Copy 2 enhancements require concurrent-copy capable DASD
controllers.

4.2 Parallel database processing


The parallel database processing enhancement is provided to improve the performance of IMS after restart, so that a steady state is reached faster. Multiple TCBs (using multiple threads) are used for database authorization, dynamic allocation, open, close, and end-of-volume processes, which previously ran serialized. Parallel database processing provides the following benefits:
- Exploits available processor power
- Reduces the elapsed processing times
- Achieves faster steady state response times
Parallel database processing is implemented by automatically exploiting additional multiple task control blocks (TCBs), and is supported for full function databases only. 10 TCBs are used in parallel. IMS uses an algorithm that divides by the local DMB number to distribute and assign each database to one of the 10 TCBs. At IMS initialization, after the 10th TCB has begun its open processing, warm start (/NRE or /ERE) processing is resumed.
This enhancement is very beneficial when it is necessary to authorize multiple databases and allocate and open multiple DBDSs during restart processing to return your systems to a steady state.

4.2.1 DBRC authorization


Prior to IMS Version 8, it took one DBRC call per database authorization request to perform the authorization and record the state in the RECON. This could cause increased RECON data set contention and a performance slowdown at times when applications start to open a large number of databases at the same time. Now each of the 10 TCBs issues only one DBRC call for the authorization of all of its assigned databases, which means a maximum of ten DBRC authorization calls.

4.2.2 Full function database allocation, open and close processing


In IMS Version 8, data set allocation and open processing are done during any IMS warm start (/NRE and /ERE) for full function databases. This early allocation and open processing replaces the data set allocation at PSB schedule time and the data set open at first DL/I call that were done in prior versions of IMS. There have been no changes to the DBRC allocation process: the DBRC ALLOC record is still written to the RECON when the database is updated for the first time after the restart.
The databases that are allocated during warm start are the same databases that were allocated at IMS shutdown. All data sets comprising the allocated databases are opened during warm restart, whether or not those data sets were open at the previous IMS shutdown. IMS full function databases are also closed in parallel during IMS shutdown.

4.2.3 Considerations
Since database allocation and open processing are performed during initialization of a warm-started (/NRE and /ERE) IMS, and database deallocation is done during IMS termination, the DFS2500I message is no longer issued during warm starts or terminations when the database is successfully allocated or deallocated.
Also, please check your installation for any additional planning that might be required for DL/I batch processing. If you depend on jobs scheduled at times when the online system has already been brought up, but you expect the databases not to have been accessed yet (databases available but not open, and transactions and applications not started), the pre-open processing in IMS Version 8 may require additional planning.
You now need to ensure that any batch DBRC authorization requests do not fail, since your IMS Version 8 subsystems are started (and have opened/authorized your databases). In this case your databases can be taken offline from the IMS online system through use of the /DBR command. Alternatively, the installation can implement data sharing to allow the IMS online system and batch jobs to share databases.
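As a sketch of the first alternative (the database name is invented for illustration), a batch window against a running IMS Version 8 online system without data sharing might look like:

  /DBR DB CUSTDB1
     (run the DL/I batch job against CUSTDB1)
  /STA DB CUSTDB1

The /DBR command deallocates the database from the online system so that the batch job's DBRC authorization request can succeed; the /STA command returns the database to the online system afterwards.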
Some installations run a job for so-called pre-open processing to ensure that these databases have already been allocated and opened when online users start to use them. This is done to ensure the availability of the databases, or to provide a faster startup for the applications and to avoid any constraints if most of the online activity starts at the same time (for example, after network open). You can now eliminate running these types of jobs, since database allocation and open processing occur during the warm start of the system (these jobs may still remain necessary for the times IMS has to be cold started).
For databases that are not successfully allocated or deallocated, the message DFS2503W will continue to appear.

4.3 Fast Path DEDB enhancements


In this section we discuss the database enhancements for Fast Path in IMS Version 8. We describe the following topics in detail:
- Support for Fast Path DEDBs greater than 240 areas
- Support for nonrecoverable DEDBs
- Enhanced support for sysplex Coupling Facility (CF) management of DEDBs
- Unused IOVF count update
There are also enhancements relating to IMS in a Parallel Sysplex environment. You will find more information about these enhancements in Part 3, IMS Version 8 Parallel Sysplex enhancements on page 135.

4.3.1 DEDB support greater than 240 areas


IMS Version 8 extends the capacity of data stored and managed in Fast Path data entry databases (DEDBs) by allowing you to define more than 240 areas. The upper limit is changed to 2048 AREA statements in your DEDB DBD source. This means greater design flexibility, while the DEDB externals are not affected and can be maintained without change.
There are some migration considerations to review.

The DEDB randomizing routine


The DEDB randomizing routine, which obtains the address of a randomizing module block (MRMB) and the number of entries in the MRMB, should already be able to handle more than 240 entries (one per area), so the routine will probably not be affected by this increase. However, you should check your routine(s).

IMS log record changes


These Fast Path enhancements also resulted in some IMS log record changes. Now all FP log records that include an AREA number also include the corresponding AREA name. Some of these FP log records carry only the new 2-byte AREA number, some others still carry only the old 1-byte AREA number, and a few records carry both values. The 1-byte AREA number field indicates any AREA number above 240 with a value of x'FF'. The 2-byte AREA number maps the exact AREA number as a value between 1 and 2048.

Lock token format changes


Internal lock management is now handled using a new format of the IRLM lock token for those DEDBs with more than 240 areas. Consequently, these DEDBs (with more than 240 areas) cannot be processed by any previous release of IMS, and therefore cannot be shared with versions prior to IMS Version 8 in a database sharing environment.
If you are running with PI locking, IMS continues to use the current PI lock tokens, regardless of whether more than 240 areas are specified for any DEDB.

Using IRLM, there is no change to the lock token format for DEDBs whose internal AREA numbers are 1 through 240.

4.3.2 Nonrecoverable DEDBs


IMS Fast Path capability continues to be enhanced to provide the fastest access through the system, continuing to lead database products.
Nonrecoverable DEDBs are provided for databases used as work, temporary, or scratch pad databases, where recoverability is not a requirement. For these databases there is no logging, and they are not recovered during an emergency restart. Consider this when deciding which of your DEDBs are designed for, and can be utilized as, nonrecoverable.
Marking a DEDB nonrecoverable reduces the amount of log records and checkpoint information written, thus improving the performance of IMS. Support for nonrecoverable DEDBs includes VSO as well as non-VSO types, and shared and non-shared DEDBs. MSDB type databases are excluded.
MSDB type of databases are excluded.
Restriction: DEDBs with sequential dependent segment types (SDEPs) cannot be marked as nonrecoverable.
If you issue CHANGE.DB NONRECOV against a DEDB that includes SDEPs, the next authorization call for this DEDB fails with the following message:
DFS3711A Nonrecoverable DEDB authorization error DEDB=dddddddd AREA=aaaaaaaa
The message indicates that DEDB authorization has determined that the DEDB contains SDEPs.
Note: Nonrecoverable DEDBs are not supported by Concurrent Image Copy.

Declaring nonrecoverable DEDBs


You can declare that your DEDBs are intended to be nonrecoverable using the following DBRC commands:
INIT.DB DBD(name) NONRECOV
CHANGE.DB DBD(name) NONRECOV
It is not possible to mark only an AREA as nonrecoverable. This status applies to the entire database, so you have to change the status to NONRECOV at the DB level. This also implies that the DEDB must be registered in DBRC and must not be RSR covered.
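As an illustration, a possible command sequence for an existing registered DEDB (the database name WORKDB is invented, and we assume the database is first taken offline so that it is not authorized while the change is made) might be:

  /DBR DB WORKDB
  CHANGE.DB DBD(WORKDB) NONRECOV
  LIST.DB DBD(WORKDB)
  /STA DB WORKDB

The LIST.DB command is optional and simply lets you verify that the RECON now shows RECOVERABLE=NO for the database.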

Consequences and log record changes


The x'5950' log record containing all DEDB changes is no longer written. There is no longer a REDO capability, nor any EEQE chained to an ADS.
A new x'5951' log record is written once per updated AREA per sync point, and a new DMAC flag is inserted to mark the DEDB as nonrecoverable.
The update process for a nonrecoverable VSO DEDB differs from that for a recoverable VSO DEDB. If a DEDB marked as recoverable has been updated, its updated CIs are written to DASD by system checkpoints. Additionally, if the VSO area is not shared, the updated CIs may also be written by a timer-driven process.


These writes do not occur with VSO DEDBs flagged as nonrecoverable. Updated CIs for nonrecoverable DEDBs are written to DASD:
- At IMS shutdown
- When an area is closed (/DBR, /STO, /VUNLOAD)
- When the VSO CF structure is out of data entries
The last condition occurs when IMS is running in shared mode using VSO structures and needs to write an updated CI but there are no entries available in the structure (assuming it is a non-preloaded shared area). You can get more information about shared VSO in Coupling Facility support for DEDB VSO on page 60.

Error handling
If any DEDB marked as nonrecoverable gets an error, IMS Version 8 behaves as follows:
- Read error: status code AO is returned to the application.
- Write error, with at least one more good MADS: 10 errors (as EQEs) are tolerated before switching to continue.
- Write error to a single ADS, or to the last good MADS: 10 errors (as EQEs) are tolerated before the AREA is stopped and marked with recovery needed status.
- IMS failure where an output thread to a nonrecoverable DEDB did not complete: the next /ERE or XRF takeover issues the following message (and reissues it at each data set open call until reinitialization or restore of the ADS is done):
  DFS3711W Nonrecoverable DEDB integrity warning DEDB=name AREA=name
  During later processing an error can occur, causing normal error processing such as an IMS user abend 1026.

Considerations
Nonrecoverable DEDBs (NRDEDBs) may not be used by prior IMS systems (IMS Version 7 and earlier). Your DEDBs eligible for nonrecoverable use may be changed after you have finished the migration to IMS Version 8; the DBRC Migration SPE enforces this. If you try to access a DEDB marked as nonrecoverable from an IMS system running a lower version, the allocation fails and you get the message DSP0079I in DBRC, as shown in Example 4-5.
Example 4-5 IMS Version 7 job logs with failed access to nonrecoverable DEDB
12.08.39 STC13263  R 136,/STA DB DISTDB.
12.08.39 STC13263  DFS0488I STA COMMAND COMPLETED. DBN= DISTDB RC= 0      IM2A
12.08.39 STC13263  DFS058I 12:08:39 START COMMAND IN PROGRESS             IM2A
12.08.39 STC13263 *137 DFS996I *IMS READY* IM2A
12.09.18 STC13263  R 137,/STA AREA AREADI01.
12.09.18 STC13263  DFS058I 12:09:18 START COMMAND IN PROGRESS             IM2A
12.09.18 STC13263 *138 DFS996I *IMS READY* IM2A
12.09.19 STC13263  DFS0011W AREA=AREADI01 DD=AREADI01 ALLOCATION FAILED   IM2A
12.09.19 STC13263  DFS0488I STA COMMAND COMPLETED. AREA= AREADI01 RC= 4   IM2A
12.09.18 STC13265  DSP0079I RECORD NOT ACCESSIBLE
12.09.19 STC13265  KEY TYPE= DB      , DBD=DISTDB , DDN=**NULL**,

Your operational procedures may need to be changed to catch any database error on a
nonrecoverable DEDB and to restore or reinitialize the affected DEDB using automation
processes.


In case of a fallback situation, be aware that before performing the fallback, improperly closed
nonrecoverable DEDBs must be:
1. Restored or reinitialized
2. Changed to recoverable
3. Image copied
You can use the following command with the RESTORE option to generate the JCL using the last image copy, in case you have to recover your DEDB or any other database marked as nonrecoverable:
GENJCL.RECOV DBD(fpname) [AREA(name)] ... RESTORE
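Putting the fallback steps together, a possible DBRC command sequence for a hypothetical nonrecoverable DEDB named WORKDB (the name is invented for illustration) would be:

  GENJCL.RECOV DBD(WORKDB) RESTORE
  CHANGE.DB DBD(WORKDB) RECOV
  GENJCL.IC DBD(WORKDB)

The GENJCL commands only generate the JCL, which must then be submitted; the exact output depends on your skeletal JCL members and RECON definitions.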
Properly closed nonrecoverable DEDBs (Shutdown, /DBR, /STO) are usable by a prior IMS
version after they have been:
1. Changed back to recoverable
2. Image copied
This is also forced by setting the Image Copy Needed flag (to ensure that DBRC can
generate the proper recovery JCL) when the database is changed to recoverable
(Example 4-6).
Example 4-6 IC needed after change NONRECOV back to RECOV
LIST.DB DBD(DISTDB) DBDS
2002.169 12:06:39.5 -04:00            LISTING OF RECON                 PAGE 0003
-------------------------------------------------------------------------------
DB
  DBD=DISTDB                            DMB#=2                         TYPE=FP
  SHARE LEVEL=3
  FLAGS:                             COUNTERS:
                                       RECOVERY NEEDED COUNT   =0
                                       IMAGE COPY NEEDED COUNT =0
    PROHIBIT AUTHORIZATION=OFF         AUTHORIZED AREAS        =1
    RECOVERABLE           =NO          EEQE COUNT              =0
...
DBDS
  DBD=DISTDB   AREA=AREADI01                                           TYPE=FP
  SHARE LEVEL=3   DSID=00001   DBORG=DEDB   DSORG=VSAM
  GSGNAME=**NULL**   USID=0000000019
...
  FLAGS:                             COUNTERS:
    PROHIBIT AUTHORIZATION=OFF         AUTHORIZED SUBSYSTEMS   =1
    HELD AUTHORIZATION STATE=3
    IC NEEDED             =OFF         ADS AVAIL #             =1
...(CHANGE.DB ... NONRECOV done)...
DB
  DBD=DISTDB                            DMB#=2                         TYPE=FP
  SHARE LEVEL=3
  FLAGS:                             COUNTERS:
                                       RECOVERY NEEDED COUNT   =0
                                       IMAGE COPY NEEDED COUNT =1
    PROHIBIT AUTHORIZATION=OFF         AUTHORIZED AREAS        =0
    RECOVERABLE           =YES         EEQE COUNT              =0
DBDS
  DBD=DISTDB   AREA=AREADI01                                           TYPE=FP
  SHARE LEVEL=3   DSID=00001   DBORG=DEDB   DSORG=VSAM
  GSGNAME=**NULL**   USID=0000000019
  AUTHORIZED USID=0000000019  RECEIVE USID=0000000019  HARD USID=0000000019
  RECEIVE NEEDED USID=0000000000
  CAGRP=**NULL**   GENMAX=3   IC AVAIL=0   IC USED=0   DSSN=00000018
  NOREUSE   RECOVPD=0
  VSO  PREOPEN  PRELOAD  CFSTR1=IM0A_AREADI01A  CFSTR2=IM0A_AREADI01B  LKASID
  DEFLTJCL=**NULL**   ICJCL=ICJCL   RECVJCL=ICRCVJCL   RECOVJCL=RECOVJCL
  DBRCVGRP=**NULL**
  FLAGS:                             COUNTERS:
    PROHIBIT AUTHORIZATION=OFF         AUTHORIZED SUBSYSTEMS   =0
    HELD AUTHORIZATION STATE=0
    IC NEEDED             =ON          ADS AVAIL #             =1

DEDB VSO considerations


There are good reasons to choose DEDB VSO for nonrecoverable DEDBs. Generally, non-shared VSO is used to improve the performance of highly active databases, and DEDB VSO in general is used for databases where performance is a concern. Further considerations about load sharing may drive a decision to run in a parallel processing environment, and therefore to use VSO sharing.
However, you will also receive a performance benefit from the minimal DASD writes (we discussed the updated CI writes previously). As a result, these VSO ADSs are less likely to be broken following a failure, because writes only occur when the area is closed (partial writes to DASD are unlikely). Of course, failures may happen, but a partial DASD write (broken database) can only occur if failures or severe system abends occur during these rare writes (/VUNLOAD, /STO, /DBR, shutdown). An exception still exists for non-preloaded shared VSO with a structure smaller than the area: writes of updated CIs may occur to make new free space in the structure. A failure at any other time (other than while writing the updates) would not create a situation where partial writes have occurred.

4.3.3 Coupling Facility support for DEDB VSO


Fast Path has been enhanced in IMS Version 8 so that shared DEDB VSO databases can also exploit the following Coupling Facility features:
- System-managed rebuild of a VSO structure
- Alter and automatic alter of a VSO structure size
- System-managed duplexing of VSO structures

These provide the following benefits:
- You can keep the structure online during a planned reconfiguration (rebuild).
- CF storage is reclaimed, and structures expand and contract dynamically based on actual CF storage usage (autoalter).
- IMS supports dual structures without secondary structures having to be defined in DBRC and in a Coupling Facility Resource Manager (CFRM) policy.

The two structures of a dual structure pair can have different sizes. However, a structure size inconsistency in a dual structure pair for non-preloaded areas prevents an IMS system that does not support these functions from connecting to the structure. If the connect to the structure fails, IMS waits 2 seconds before each retry attempt. After 3 unsuccessful retry attempts, message DFS2826A is issued to indicate an open failure for the area.

60

IMS Version 8 Implementation Guide

This enhancement is retrofitted to IMS Version 7 by APAR PQ50661. IMS Version 6 does not support duplexed structures. The management of IMS structures in the Coupling Facility as a whole is discussed in "Coupling Facility structure management" on page 137.

4.3.4 Unused IOVF count


A new DFSPBxxx parameter, IOVFI, lets you set the time interval between updates to the unused IOVF count in the DMAC (DMACOCNT). The default is 7200 seconds (2 hours); the maximum is 86400 seconds (24 hours). Because updating these counts requires reading all the space map CIs in all the DEDB areas, the overhead can be significant. The update can therefore be disabled by coding IOVFI=1.
This count is used when a /DIS AREA command is issued. Note that the /DIS AREA command with the IOVF parameter still causes IMS to count the unused IOVF CIs in the specified area and use the result to update the DMAC and to display the counts.
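For example, to refresh the count every four hours instead of the 2-hour default, the DFSPBxxx member could contain a line like the following (a sketch only; check the parameter syntax for your release):

```
IOVFI=14400
```

As noted above, coding IOVFI=1 instead disables the periodic update entirely.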

4.4 Batch RRS support


Resource Recovery Services (RRS) of the operating system can be invoked when more than one resource manager, such as IMS DB and DB2, is involved in update processing. RRS operates as a system-wide syncpoint manager, ensuring that all commits and backouts are kept consistent across all participants.
If you run batch applications that update IMS databases as well as resources managed and hosted by other resource managers, such as DB2 or MQSeries, RRS support for those batch applications is now available. This functionality has also been made available to IMS Version 7 through the service process (APAR PQ51895). Batch RRS support is intended to simplify your administration and to make your operations easier and less error prone.

4.4.1 Supported environments


Batch RRS support allows the following:
- Batch programs can use MQSeries with coordinated commit processing.
- Full two-phase commit processing is available for batch programs accessing DB2 as well as IMS DB.
- The data capture facility has been upgraded to support batch RRS. Any synchronous data propagation to another system (for example, to DB2 by invoking IMS DataPropagator as a data capture exit) is processed as a single unit of recovery (UOR) along with the IMS work, through two-phase commit participation, ensuring that either all of the work is done or none of it is.
- IMS DataPropagator Version 3 Release 1 provides asynchronous, near real-time IMS-to-DB2 propagation, exploiting the asynchronous messaging functions of MQSeries. Near real-time propagation offers improved performance and reliability while minimizing the impact on your mission-critical IMS applications (whose updates are being propagated to DB2).

4.4.2 Activation and requirements


As a starting point for the discussion of RRS, you must specify BKO=Y to activate dynamic backout, and you must also ensure that the IMS batch job has a DASD log.
Batch RRS is invoked when the execution step specifies the new execution parameter RRS=Y. RRS=N is the default.
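As an illustrative sketch (the step, program, and PSB names are invented, and we assume the IMS-supplied DLIBATCH procedure passes the BKO and new RRS symbolics through to the region controller), a batch step might specify:

```
//BATCH   EXEC DLIBATCH,MBR=MYPGM,PSB=MYPSB,BKO=Y,RRS=Y
//* BKO=Y enables dynamic backout; RRS=Y requests RRS coordination.
//* The IEFRDER log data set must be on DASD for BKO=Y to be honored.
```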

During a batch run, when the application program issues either a CHKP or a ROLB call, IMS determines whether it should coordinate the subsequent syncpoint itself or participate in a syncpoint coordinated by RRS.
It determines this by expressing interest in the current UOR and then retrieving the count of interests expressed. If the count is one, the lone interest must be IMS's own, so IMS deletes its interest and coordinates the syncpoint itself.
However, if more than one interest has been expressed, batch initiates the RRS syncpoint via the ATRCMIT (or ATRBACK) call and assumes the role of a syncpoint participant to RRS.

4.5 Coordinated IMS/DB2 disaster recovery support


There have been additional enhancements to aid in the coordination of disaster recovery for installations accessing both DB2 and IMS. They respond to user requirements for synchronized disaster recovery support using independent transfer mechanisms to send both the IMS and DB2 logs to the remote site. You can find further information on coordinated IMS/DB2 disaster recovery support in IMS Version 8: Release Planning Guide, GC27-1305.
The enhancements are based on the use of Remote Site Recovery (RSR) for IMS and eXtended Remote Copy (XRC) for your DB2 environment. RSR uses APPC to send IMS log data from the active site to the remote site. The RSR tracker instance at the remote site writes its copy of the log records received from the active site, records them in the tracking RECON data set, and optionally updates its own shadowed databases if it is running with database level tracking (DLT), as shown in Figure 4-1.

Figure 4-1 IMS RSR (the active site, with its IMS logs and IMS databases, sends IMS log records via APPC to the IMS RSR tracker at the remote site, which maintains its own IMS logs and, optionally, shadow IMS databases)

XRC is a function of DASD storage servers (3990 control units and the Enterprise Storage Server (ESS)) and uses the DFSMS System Data Mover (SDM) for asynchronous mirroring of the necessary DB2 logs and bootstrap data sets, as shown in Figure 4-2.


Figure 4-2 DB2 XRC (the DFSMS System Data Mover mirrors data between the primary and secondary storage servers over ESCON with DASD channel extenders)

For more information about XRC, refer to the IBM Redbook Implementing ESS Copy Services on S/390, SG24-5680, Chapter 3, "XRC".
Coordinated recovery operations now allow users to recover IMS and DB2 data to a consistent point in time. The highlights are:
- eXtended Remote Copy (XRC) tracking is added to IMS RSR
- IMS and DB2 logs are synchronized for disaster recovery
- RSR is used for the IMS logs and, optionally, shadowed databases
- XRC is used for the DB2 logs, together with the bootstrap data sets (BSDS), on a single XRC session to keep both synchronized with each other; the DB2 databases (tables) may not be included

This support is especially interesting for environments with limited bandwidth for XRC transmissions. Such environments probably cannot support the transmission of the entire DB2 databases and their updates; instead, only the logged DB2 and IMS data and the DB2 BSDSs are transmitted.
IMS TM Version 8 can be connected to DB2 Version 6 and Version 7. The DB2 logs and BSDSs must reside on devices supporting XRC. DB2 must be running in data sharing mode, since this mode provides timestamps in the DB2 log; without data sharing, the DB2 subsystem would use RBAs instead.

4.5.1 XRC tracking


Because XRC tracking is implemented in the IMS RSR process, the IMS truncation point is issued in the new DFS2933I message.
The IMS log truncation point is kept behind the XRC consistency time. This means the RSR tracking subsystem (the log router function) ensures that the routing of the IMS logs always stays behind that of the DB2 logs. To get this information, the RSR tracker frequently invokes an XRC query API request against the DFSMS System Data Mover (SDM). XRC tracking runs under its own ITASK and uses the SDM API provided by the ANTRQST macro call REQUEST(XCONTIME). It is therefore a requirement that the SDM run on the same OS/390 system as the IMS RSR tracking subsystem; otherwise, the previously invoked API call REQUEST(LEVEL), which checks for the presence of the SDM (provided by the same ANTRQST macro), will fail.
The RSR tracker refreshes this information on a timer, normally every 5 seconds, backing off to as much as one minute (depending on certain return codes when gaps or delays are detected between the tracked logs and the XRC timestamp in the control data set).
If no refreshed timestamp has been received from XRC for about two minutes, you are first informed of a possible delay by DFS4035A, one of the new messages (see "Messages and log records changes" on page 66).
XRC tracking is enabled by specifying the following XRC parameters in the DFSRSRxx PROCLIB member before the tracking subsystem is started:
XRC(SESSION(sessionid) HLQ(highqual))

XRC tracking cannot be initially started by the /START command (see "Operations" on page 65).
You have to specify the session ID of the XRC session that is to be tracked, because there is no default value. The HLQ value specifies the XRC control data set prefix; the default is SYS1 if nothing is specified.
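Using the session ID shown later in Example 4-7 and the default control data set prefix, the DFSRSRxx entry might look like this (illustrative values only):

```
XRC(SESSION(XRC00DB2) HLQ(SYS1))
```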

Some prerequisites
The log router of the RSR tracker, receiving log information, stays in conversation with any active transport manager system (TMS) from the isolated log sender (ILS), possibly for several IMS subsystems in an IMSplex, and merges these log records in STCK time order. To do this consistently (in the correct creation-time sequence), a 9037 Sysplex Timer (or equivalent) is required for multi-CPC environments (as it is for the DB2 subsystems running in data sharing).
The SDM must be running on the same OS/390 system as the IMS RSR tracking subsystem.
Note: The IMS RSR tracking subsystem must have authority to make XRC requests, for example, RACF READ access to the following FACILITY class profile:
STGADMIN.ANT.XRC.COMMANDS
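If that profile is not yet defined, the RACF setup might look like the following sketch, where IMSTRK is an invented placeholder for the user ID under which the tracking subsystem runs:

```
RDEFINE FACILITY STGADMIN.ANT.XRC.COMMANDS UACC(NONE)
PERMIT STGADMIN.ANT.XRC.COMMANDS CLASS(FACILITY) ID(IMSTRK) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH
```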

4.5.2 Log synchronization


For coordinated recovery, the IMS log truncation point (STCK) has to be specified in the DB2 conditional restart control record, so that DB2 truncates its own log based on this timestamp. On both sides (IMS and DB2), the logs then end at this consistent point for the recovery activities that follow during the subsystem restarts. In the case of an unplanned takeover, you should be prepared for the following scenario:
- If database shadowing has been done for IMS, the IMS subsystems can be restarted at the remote site. Otherwise, IMS database recovery (full recovery) must be done before the IMS subsystems are restarted. The IMS systems are restarted with /ERE commands (emergency restart backs out any in-flight work that existed in the systems, as reflected in their tracked logs).
- A conditional restart is done for DB2. The conditional restart control record must be created or updated. The ENDLRSN parameter is specified with the high-order 12 bytes of the timestamp from the IMS DFS2933I message.


To create or update your DB2 conditional restart record, you should use the following options:
DEFER=ALL,
FORWARD=YES,
BACKOUT=YES,
ENDLRSN=timestamp
Specifying DEFER=ALL eliminates database processing during restart.
The DB2 objects must then be recovered. The procedures for doing these recoveries are documented in the DB2 UDB for OS/390 and z/OS V7 Utility Guide and Reference, SC26-9945.
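As a hedged sketch, the conditional restart control record could be built with the DB2 change log inventory utility (DSNJU003); the ENDLRSN value below is a placeholder for the high-order portion of the DFS2933I timestamp, and we assume the DEFER=ALL option is set through the DB2 subsystem restart parameters rather than on this statement:

```
 CRESTART CREATE,ENDLRSN=xxxxxxxxxxxx,FORWARD=YES,BACKOUT=YES
```

This statement would go in the SYSIN of a DSNJU003 job step run against the BSDS copies at the remote site.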
Tip: To reduce the amount of time required to recover your DB2 tables, you might run
periodic timestamp recoveries (LOGONLY) during tracking. This provides databases that
have been shadowed up to the time of the timestamp recovery.
Figure 4-3 shows the determination of the log truncation points (log synchronization) for the various log streams.

Figure 4-3 Log synchronization (the DB2A and DB2B logs extend to t3, the XRC consistency time; the IMSA log extends to t2 and the IMSB log to t1, which is the RSR truncation point)

- The IMS RSR tracking subsystem truncates IMSA's log at time t1.
- The truncation point timestamp is displayed in the DFS2933I message.
- The IMS log truncation timestamp is supplied as the "to" timestamp for the conditional restart of DB2A and DB2B.
- DB2 restart truncates the logs for DB2A and DB2B at t1.

4.5.3 Operations
There are several commands available to operate the XRC tracking facility:

/STOP XRCTRACK
This command allows the tracking subsystem to continue routing without synchronizing with the XRC session, for example when the DB2 systems are shut down or there is an XRC session failure. Use this command with care, since the synchronization of the IMS and DB2 logs is lost.
Note: The stopped status persists across tracking subsystem restarts.

/START XRCTRACK
This command resumes XRC tracking after it was stopped by /STOP XRCTRACK.

/DISPLAY TRACKING STATUS
The response to this command shows the XRC session ID and the XRC tracking status, which can be active, active/waiting, stopped, error, or inactive. It also shows the current routing STCK time value.
Example 4-7 shows the response for a /DISPLAY TRACKING STATUS command.
Example 4-7 XRC tracking status
**** XRC COORDINATION *******************************************
XRC-SESSIONID    STATUS    ROUTING-STCK
XRC00DB2         ACTIVE    B8848134 5A6D6603

4.5.4 Messages and log records changes


There are some new messages provided with this XRC support in the IMS RSR components:
DFS2919A  SYNCHRONIZATION WITH XRC SESSION WILL NOT OCCUR
DFS2920I  SYNCHRONIZATION WITH XRC SESSION HAS BEEN STOPPED
DFS2921A  ERROR RETURNED FROM XRC QUERY: error-code
DFS2933I  UNPLANNED TAKEOVER LOG TRUNCATION POINT: xxxxxxxx xxxxxxxx
DFS4035A  ROUTING SUSPENDED -- reason
DFS4037A  UNEXPECTED CONSISTENCY TIME FROM XRC SERVICE: xxxxxxxx xxxxxxxx
DFS4039A  UNABLE TO OBTAIN XRC CONSISTENCY TIME -- reason
DFS4041I  XRC CONSISTENCY TIME WAS OBTAINED FROM THE STATE DATA SET
DFS4110I  START|STOP XRCTRACK COMMAND COMPLETED|IGNORED|INVALID

Several reasons may appear in the DFS4035A message, for example:
"XRC CONSISTENCY TIME IS NOT ADVANCING"

This reason indicates that consecutive checks of the XRC time returned the same time and IMS log tracking is being held up. The message is sent after XRC has not updated its time for about two minutes, and it is sent again about every 10 minutes until the time is updated. It is only sent when IMS tracking is waiting on XRC.
In addition, IMS user abend 0381 has been changed to issue a new reason code when the IMS tracking subsystem is:
26 - unable to create the XRC tracking ITASK

Some IMS log records have been changed for the XRC support:
X'4905'  XRC tracking log record (new in IMS Version 8). It contains the XRC tracking stopped/started status and is written when a /START or /STOP XRCTRACK command is processed. At offset X'06' you will find the flags:
         X'80' - the /START XRCTRACK command was processed
         X'40' - the /STOP XRCTRACK command was processed
X'4900'  The milestone log record (changed for IMS Version 8) includes flags and fields that are now used to capture the XRC tracking status and the routing STCK value.

The log record layouts can be found in member DFSLOG49 of the macro library (ADFSMAC). In particular, the fields and flags of the MPB, the log router milestone position block, are now used.
Especially fields and flags of the MPB - the log router milestone position block - are now used.
Note: Requirements for the coordinated IMS/DB2 disaster recovery support are:
- The IMS Version 8 RSR Record Level Tracking (RLT) feature is necessary.
- The DB2 logs and BSDS must reside on devices supporting eXtended Remote Copy.

4.5.5 Coexistence
An IMS Version 8 tracking subsystem with XRC tracking enabled supports Version 6, Version 7, and Version 8 active systems. However, it is necessary to ensure that all required coexistence SPEs for IMS systems on previous versions have been applied.
In an RSR environment, there are some important actions to consider if you are planning to upgrade your RECONs (both the active and the tracking RECONs). Refer to IMS Version 8: Release Planning Guide, GC27-1305, and IMS Version 8: DBRC Guide and Reference, SC27-1295.


Chapter 5. Database Recovery Control enhancements
In this chapter, we discuss the new features and enhancements to Database Recovery Control (DBRC). DBRC continues to evolve as a key component in the IMS architecture and an important central point of control for database access, administration, and recovery. With the increased complexity of IMSplex environments, it is becoming an important focal point for ensuring the fastest access, highest availability, and absolute database integrity. This has led to significant improvements and enhancements around the RECON, the component responsible for database control and recovery.
The enhancements in DBRC for IMS Version 8 offer these capabilities:
- A 16 megabyte maximum recovery control (RECON) data set record size
- PRILOG compression enhancement
- DBRC command authorization support
- Automatic RECON loss notification
- Elimination of several DBRC and IMS abends
- New DBRC batch commands for HALDB
- Increased maximum values for DBRC groups

This chapter contains information related to the DBRC enhancements and some migration considerations. The last two items in the list are discussed only in the overview chapter; refer to 1.2.1, "Database Recovery Control (DBRC) enhancements" on page 4.

Copyright IBM Corp. 2002. All rights reserved.

69

5.1 Support of 16 MB RECON record size


Database Recovery Control (DBRC) in IMS Version 8 implements its own RECON record spanning technique. IMS Version 8 now supports a RECON record size of 16 MB, which eliminates most potential outages caused by RECON records reaching the maximum VSAM record size in the RECON cluster definition. Such outages may occur because:
- A PRILOG record grows to its limit as a result of the archiving of online log data sets
- SUBSYS records reach the limit due to many authorized databases
- DBDS records reach the limit due to many EEQEs (the limit is now 32767 per database) as a consequence of many I/O errors

If the EEQE limit is reached, the following messages are issued (for example, in response to a CHANGE.DBDS ADDEQE(value) command) and IMS user abend U0602 occurs:
DSP1146A EEQE LIMIT OF 32767 FOR DB DBD = xxxxxxxx
DFS0612I still appears (with a new return code of 32)

There are still limits you could reach (for example, a subsystem record could grow very large due to many database authorizations), but there is a long way to go before such a limit is reached.

5.1.1 RECON record spanning segments


Let us explain in some detail how DBRC now works with RECON record spanning. A logical RECON record is now written as multiple VSAM records, called segments, but DBRC manages the segments in its own way. A segment never exceeds a single control interval (CI) in length; this means DBRC ignores a maximum VSAM record size defined larger than the CI. Since any segment fits into a single CI, DBRC no longer uses VSAM SPANNED records. If the maximum VSAM record size is defined smaller than the CI, that value is used as the segment size. The segment size calculation is therefore based on the smaller value:
segment size = MIN (VSAM record size, CI size)

RECON record spanning and segmenting is transparent. The segments can only be seen if you print your RECON cluster with an IDCAMS PRINT. The segment number is appended at the end of each key. A data prefix follows the key of the first physical segment and includes the segment number of the last segment at its end; the prefix is not included in subsequent physical segments of that logical record.
Hence some requirements for the RECON cluster definition from previous IMS versions are now obsolete. You can redefine your RECON cluster with the following options:
- NONSPANNED: You do not need to change from SPANNED to NONSPANNED, but any time after you have finished the migration of all your subsystems to IMS Version 8, you should do so as soon as possible.
- RECORDSIZE values are not required to be large, up to the maximum value for the VSAM logical record size. Usable values would be 32K for the CI size and 32K minus 7 bytes for the maximum VSAM record size.

The recommendation is to make these changes after you have migrated all of your subsystems to IMS Version 8, because some restrictions apply as long as you are running in coexistence with IMS Version 6 or IMS Version 7 (see 5.6, "IMS version coexistence for DBRC" on page 80).


However, you should be aware of the following considerations:
- The RECORDSIZE and CI size must be the same for the data set cluster definitions of RECON1, RECON2, and RECON3 while you are in coexistence mode. When IMS Version 8 is the minimum version for all systems sharing the RECONs, it is possible to use different sizes, which enables you to change the RECORDSIZE and CI size by reorganizing the RECONs online.
- The RECON I/O exit routine (DSPCEXT0), which receives the records unsegmented, may need to be modified, since RECON records may now be larger than before, up to the new 16 MB limit.

In IMS Version 7, the RECON record limit was based on VSAM's CA size calculation, which typically allowed no more than about 0.6 MB. The new RECON record size limit of 16 MB comes from the addressability of the MVCL (move character long) instruction used within the DBRC code.

5.1.2 Usage of alerts


The foregoing discussion does not mean that all is necessarily well just because the upper limit has been relieved and RECON records can now grow much larger. A very fast growing PRILOG record can indicate an incorrect image copy frequency, due to an unexpectedly increased volume of logging. It may therefore be useful to readjust your threshold values:
- LOGALERT(dsnum,volnum), which triggers the DSP0287W warning message
- SIZALERT(dsnum,volnum,percent), which triggers the DSP0387W warning message or the DSP0007I informational message

Both parameters were introduced in IMS Version 7 to control the issuing of warnings and to give you the opportunity to react (deleting inactive logs, running image copies, ensuring log compression is not inhibited) and to correct the situation before the IMS system abends. They give you time to determine what is causing the very large PRILOG record. Calculations with the default values now give the results shown in Example 5-1.
Example 5-1 LOGALERT and SIZALERT calculation based on default values
LOGALERT (defaults are dsnum=3 and volnum=16):
  When the PRILOG record no longer has room for a
  record size = 112 + (dsnum * (120 + (40 * volnum))) = 2,392 bytes,
  the DSP0287W message is issued, that is, when the PRILOG record
  exceeds a length of (16,777,215 - 2,392) bytes.
SIZALERT (defaults are dsnum=15, volnum=16, percent=95):
  When the PRILOG record no longer has room for a
  record size = 112 + (dsnum * (120 + (40 * volnum))) = 11,512 bytes,
  the DSP0387W message is issued, that is, when the record is
  (16,777,215 - 11,512) bytes large.
  If any RECON record's size exceeds 95 percent of 16 MB, the DSP0007I
  message is issued, that is, when the record is 15,938,354 bytes large.

You probably want the messages to be issued much earlier, so the value settings may need to be readjusted. Example 5-2 demonstrates a more useful setting of these values. To get warning messages when the PRILOG record length reaches 0.5 MB, 2 MB, and 14 MB, set the values as follows:
Example 5-2 SIZALERT and LOGALERT settings
Set SIZALERT(366,999,3):
  DSP0007I is issued when a record reaches 3% of 16 MB (0.48 MB).
  DSP0387W is issued when the record reaches (16,777,215 - 14,669,392 =) 2,107,823 bytes.
Set LOGALERT(52,999):
  DSP0287W is issued when the record reaches (16,777,215 - 2,084,272 =) 14,692,944 bytes.

Chapter 5. Database Recovery Control enhancements

71

Because of the changes in RECON record segmenting, all prior IMS versions need SPEs for compatibility in order to coexist when accessing RECONs that have been upgraded to IMS Version 8. Refer to 5.6, "IMS version coexistence for DBRC" on page 80 for more information about coexistence.

5.2 DBRC PRILOG compression


With IMS Version 8 now able to manage larger RECON records, work has also been done on RECON performance. To reduce PRILOG record length, compression is attempted more frequently than in previous IMS versions. Compression is now attempted:
- After every archive of an OLDS data set (automatic process)
- During any execution of the DELETE.LOG INACTIVE command
- In an RSR environment, automatically when a tracking log data set is opened

To reduce the overhead of compression attempts (reading the entire record and writing it with new segmentation), the oldest allocation information for each DBDS (kept in the LOGALL record) is used for a comparison of the timestamps. A listing of the RECON records with an entry for the earliest allocation time within the LOGALL record is provided in Example 5-3.
Example 5-3 Earliest allocation time in LOGALL
PRILOG
  RECORD SIZE= 304
  START = 2002.122 21:30:34.2 -04:00 *  SSID=IM1A  VERSION=8.1
  STOP  = 2002.123 18:12:11.7 -04:00  #DSN=1
  GSGNAME=**NULL**
  FIRST RECORD ID= 0000000000000001  PRILOG TOKEN= 0
  EARLIEST CHECKPOINT = 2002.127 17:14:36.2 -04:00

  DSN=IMSPSA.SLDSP.IM1A.D02122.T2130342.V11  UNIT=3390
  START = 2002.122 21:30:34.2 -04:00  FIRST DS LSN= 0000000000000001
  STOP  = 2002.123 18:12:11.7 -04:00  LAST DS LSN = 000000000000012A
  FILE SEQ=0001  #VOLUMES=0001
  VOLSER=IMS002  STOPTIME = 2002.123 18:12:11.7 -04:00
  CKPTCT=2  CHKPT ID = 2002.123 18:12:11.4 -04:00
  LOCK SEQUENCE#= 000000000000

LOGALL
  START = 2002.122 21:30:34.2 -04:00 *
  EARLIEST ALLOC TIME = 2002.122 21:30:38.7 -04:00
  DBDS ALLOC=3  -DBD-    -DDN-     ALLOC-
                DISTDB   AREADI01  1
                ITEMDB   AREAIT01  1
                WAREDB   AREAWH01  1

The log record compression is indicated by the DSP0135I message, as it was prior to IMS Version 8. If a compression attempt does not remove any data set entries, a new message, DSP1150I, is issued:
DSP1150I LOG RECORD(S) COULD NOT BE COMPRESSED,
         RECORD TIME = timestamp1
         reason type = timestamp2

The reason types can be:
- EARLIEST ALLOC TIME
- LOG RETENTION TIME
- EARLIEST CHECK POINT


However, in an RSR environment, the DSP1150I message is suppressed at the tracker, because the console could be flooded with these messages during gap fill processing.
The messages are shown in Example 5-4 and Example 5-5.
Example 5-4 DSP1150I - no compression during DELETE.LOG INACTIVE
IMS VERSION 8 RELEASE 1 DATA BASE RECOVERY CONTROL
DSP1123I DBRC REGISTERED WITH IMSPLEX PLEX1 USING EXIT
DELETE.LOG INACTIVE
DSP1047I DELETED DSN=IMSPSA.SLDSP.IM2A.D02121.T1825405.V00
DSP1047I DELETED DSN=IMSPSA.SLDSP.IM1A.D02121.T1905568.V04
DSP1047I DELETED DSN=IMSPSA.SLDSP.IM1A.D02127.T1705128.V0D
DSP1150I LOG RECORD(S) COULD NOT BE COMPRESSED,
DSP1150I RECORD TIME = 2002.144 16:21:52.7 -04:00
DSP1150I EARLIEST CHECK POINT = 2002.144 16:21:55.2 -04:00
DSP1150I LOG RECORD(S) COULD NOT BE COMPRESSED,
DSP1150I RECORD TIME = 2002.149 13:59:55.3 -04:00
DSP1150I EARLIEST CHECK POINT = 2002.142 14:25:55.6 -04:00
...
DSP0126I NUMBER OF INACTIVE PRILOG RECORDS DELETED WAS 00003
DSP0203I COMMAND COMPLETED WITH CONDITION CODE 00
DSP0220I COMMAND COMPLETION TIME 2002.165 14:50:41.1 -04:00
DSP0211I COMMAND PROCESSING COMPLETE
DSP0211I HIGHEST CONDITION CODE = 00

The compression is automatically invoked during log archive execution. If there is nothing to compress, the DSP1150I message is issued.
Example 5-5 DSP1150I - no compression during archive
******** LOG ARCHIVE UTILITY CONTROL STATEMENT *************************
SLDS FEOV(08000)
DSP1123I DBRC REGISTERED WITH IMSPLEX PLEX1 USING EXIT
SYSTEM CHECKPOINT RECORD -2002.159 00:17:53.468246 -04:00 02158/201753 (TOTIMA) CHECKPOINT SIMPLE
SYSTEM CHECKPOINT RECORD -2002.161 07:52:27.960457 -04:00 02161/035227 (TOTIMA) CHECKPOINT SIMPLE
SYSTEM CHECKPOINT RECORD -2002.162 06:10:18.198227 -04:00 02162/021018 (TOTIMA) CHECKPOINT FREEZE
DSP1150I LOG RECORD(S) COULD NOT BE COMPRESSED,
DSP1150I RECORD TIME = 2002.149 13:59:55.3 -04:00
DSP1150I EARLIEST ALLOC TIME = 2002.149 13:59:59.7 -04:00
*** ARCHIVE UTILITY (DFSUARC0) COPIED LOG RECORDS ***
FROM DDNAME=DFSOLP03 VOLSER=IMS001
TO PRIMARY SLDS DSNAME=IMSPSA.SLDSP.IM1A.D02156.T2025163.V17 VOLSER=TOTIMA
DFS3263I ARCHIVE UTILITY ENDED SUCCESSFULLY

5.3 DBRC command authorization


In this section we describe the new features of DBRC command authorization. IBM has provided this security feature in response to design change requests from customers. It addresses global access concerns, as DBRC becomes increasingly important as the central point of database sharing and recovery management across the IMSplex. You are now able to control the usage of DBRC commands and to break command security down to different levels for a finer resolution of authorities.

5.3.1 Security support for DBRC commands and protected resources


With IMS Version 8, DBRC commands can be authorized at the following levels:
Command verb    To distinguish, for example, between persons authorized only to read (LIST commands) RECON resources and others authorized to submit CHANGE and/or INIT commands. See Example 5-6 on page 77 and the following pages.
Resource type   To distinguish, for example, between persons responsible for all database activities and other persons authorized to change subsystem entries. See Example 5-6 on page 77.
Resource        To restrict authority to specific resources, for example, to a certain database name or group of database names. See Example 5-6 on page 77.

5.3.2 The resource name table DSPRNTBL


The resource name table provided with IMS Version 8 contains a list of all the resources that may be protected. The list cannot be modified. Some examples of the list entries follow:
CHANGE.PRILOG.OLDS
CHANGE.DB.dbname
GENJCL.RECOV.dbname
CHANGE.SUBSYS.ssid

A complete list of all of the resources in the resource name table can be found in Appendix E of IMS Version 8: DBRC Guide and Reference, SC27-1295.
When the authorization process is invoked, these entries are prefixed with the RECON resource profile high-level qualifier (safhlq), as described in the following section.

5.3.3 How command authorization gets invoked


Command authorization is invoked by RACF definitions as a prerequisite. To define and
protect the resources in RACF we are using the FACILITY resource class. The resource can
be defined to RACF by using the following command:
RDEFINE FACILITY resource UACC(NONE)

wherein the resource profile (static or generic) is built by


safhlq.command_verb.qualifier.modifier

The safhlq is the RECON resource high level qualifier (one to eight characters), which is
defined using the INIT.RECON CMDAUTH or CHANGE.RECON CMDAUTH command in the RECON.
For example, if the safhlq is PLEX1 and you define the resource CHANGE.PRILOG OLDS,
use the following command:
RDEFINE FACILITY PLEX1.CHANGE.PRILOG.OLDS UACC(NONE)

A user of this protected resource now needs to be permitted to this resource profile for read
access (his/her RACF user ID or a usergroup he/she belongs to):
PERMIT resource CLASS(FACILITY) ID(user_id) ACCESS(READ)

Your security profiles can differ for different RECON sets if you use different resource
qualifiers (safhlq) when initially setting CMDAUTH with the INIT.RECON command against
each RECON set (as well as with the CHANGE.RECON command).
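Putting the pieces together, a complete protection sequence for a single database resource might look like the following TSO commands. This is only a sketch: the database name PAYDB and the RACF group DBAGRP are hypothetical, and whether a SETROPTS RACLIST refresh is needed depends on how your installation processes the FACILITY class.

```
RDEFINE FACILITY PLEX1.CHANGE.DB.PAYDB UACC(NONE)
PERMIT PLEX1.CHANGE.DB.PAYDB CLASS(FACILITY) ID(DBAGRP) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH
```

With these definitions in place, only users connected to group DBAGRP can issue CHANGE.DB DBD(PAYDB) through the DBRC command utility.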


If a resource entry in the resource name table cannot be found that matches the DBRC
command that was issued, you would receive the following new message:
DSP1162A DBRC RESOURCE NAME TABLE DEFINITION ERROR FOR
COMMAND VERB cmdname MODIFIER modname

This is a situation that should not occur.

DBRC commands to switch command authorization on

You set command authorization on (or off) using the following DBRC commands:
INIT.RECON CMDAUTH(SAF|EXIT|BOTH|NONE,safhlq)
CHANGE.RECON CMDAUTH(SAF|EXIT|BOTH|NONE,safhlq)

Once command authorization is on, any user of the DBRC command utility needs to be
authorized. The valid values for the CMDAUTH keyword in the command are:
SAF      Invoke the security authorization facility, that is, the chosen
         security product (for example, RACF).

EXIT     Invoke the DBRC command authorization exit routine (DSPDCAX0),
         described later.

BOTH     Invoke both the security product and the exit routine.

NONE     Do not invoke command authorization. This is the default value.

safhlq   RECON high level qualifier for the resource names (one to eight
         characters). The safhlq must be specified with SAF, EXIT, or BOTH
         and it cannot be specified with NONE.

Note: To disable command authorization the user issuing this command must be
authorized with the current DBRC command security settings.
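The CMDAUTH setting itself is changed by running the DBRC command utility. The following job is a sketch of one way to do it: the RECON data set names are taken from our test environment, and the STEPLIB data set name is an assumption for your installation.

```
//CMDAUTH  JOB ...
//DBRC     EXEC PGM=DSPURX00
//STEPLIB  DD DISP=SHR,DSN=IMS810.SDFSRESL
//SYSPRINT DD SYSOUT=*
//RECON1   DD DISP=SHR,DSN=IMSPSA.IM0A.RECON1
//RECON2   DD DISP=SHR,DSN=IMSPSA.IM0A.RECON2
//RECON3   DD DISP=SHR,DSN=IMSPSA.IM0A.RECON3
//SYSIN    DD *
 CHANGE.RECON CMDAUTH(SAF,PLEX1)
/*
```

Remember that the user submitting this job must already be authorized under the current DBRC command security settings.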
If access to the protected command resource fails (authorization denied), the user gets the
new alert message:
DSP1157A USER userid NOT AUTHORIZED FOR COMMAND
         RESOURCE NAME=resource name SAF RC=safrc
         RACF RC=racfrc RACF REASON=racfrsn

If the command authorization exit (DSPDCAX0) is being used and denies the authorization,
you get the alert message DSP1154A instead. See also 5.3.5, Usage of the DBRC command
authorization exit (DSPDCAX0) on page 76.
When authorization is denied, the utility itself finishes with the following informational message:
DSP0209I PROCESSING TERMINATED WITH CONDITION CODE = 12

Restriction: The specification of CMDAUTH on the CHANGE.RECON or INIT.RECON
command is restricted to the DBRC command utility only and cannot be specified on the
online command /RMC.

5.3.4 Supported environments


Command authorization is invoked in the following environments:
From any DBRC utility (DSPURX00)
From the HALDB Partition Definition utility


The DBRC requests passed from the HALDB partition definition utility are converted to the
equivalent DBRC commands for the purpose of command authorization. IMS Version 8
supports the following command requests: Query, Set, Change, and Delete requests are
treated as LIST, INIT, CHANGE, and DELETE commands, respectively.
Table 5-1 gives you an idea of how the HALDB partition definition utility expresses its
command requests and the equivalent form passed through the DBRC command
authorization environment.
Table 5-1 HALDB request conversion

HALDB request   Master or Partition   Equivalent DBRC command
Query           master HALDB          LIST.DB DBD(haldb)
Set             master HALDB          INIT.DB DBD(haldb)
Change          partition name        CHANGE.PART DBD(haldb) PART(partition) ...
Delete          master HALDB          DELETE.DB DBD(haldb)

Please be aware of the following considerations:

IMS /RMxxxxxx commands submitted from an IMS online user terminal never invoke DBRC
command authorization (it is a different environment), even though they are intended to do
similar things. This command type (/RMxxx DBRC=...) used in IMS online can only be
protected in the way you have done in the past: with IMS online command security (SMU
and/or RACF) you are able to protect them.

In addition, with the Common Service Layer (CSL) functionality, and especially the Operations
Manager (OM), you can use the RACF class OPERCMDS to protect the usage of /RMxxxx
commands passed from any OM client, for example a TSO SPOC user, through any IMS.
This command authorization for the RACF class OPERCMDS is intended to protect any
command used to operate under the OM / CSL. Don't forget the permission of the
TSO SPOC users to the CSL.CSLplexname RACF facility profile, if it is defined. See the
following sections which discuss OM setup and security: Set up the Structured Call Interface
on page 293 and Set up the Operations Manager on page 294.

5.3.5 Usage of the DBRC command authorization exit (DSPDCAX0)


The DBRC command authorization exit (DSPDCAX0) can be called to perform the command
authorization. The exit is optional. IBM provides a sample user exit routine in your
ADFSSMPL library.

If the command authorization exit is coded and is intended to be implemented, the exit must
be named DSPDCAX0 and must be found in an authorized data set, which can be a member
of JOBLIB, STEPLIB, or LINKLIST. If the library is part of a concatenation, only the data set
containing the exit needs to be authorized. The following command sets the appropriate
flags to indicate the usage of both a SAF interface (for example, a RACROUTE AUTH call if
you are using RACF) and your own command authorization exit:
CHANGE.RECON CMDAUTH(BOTH,PLEX1)

If the exit denies the authorization for the user, the following new error message is issued:
DSP1154A DBRC COMMAND AUTHORIZATION DENIED BY DSPDCAX0 FOR USER userid
RESOURCE NAME = nnn RC = rc


As we have shown, the exit can be used in combination with a security product (RACF, and
so on) if you set the CMDAUTH keyword to the value BOTH. In this case the security product
is invoked first, and the SAF return code and RACF return code/reason code are passed to
the exit routine. The sample exit basically provides addressability to the parameter block
(DSPDCABK) and copies the return code set by the security product to register 15. In your
own exit you can override the return code set by the security product and suppress the
DBRC SAF error message (DSP1157A). For further details about the exit (which fields are
passed in the parameter block and which values are passed back), refer to the descriptions
in IMS Version 8: Customization Guide, SC27-1294.
Note: Any job using DBRC must have control-level access to all three RECON data sets
(VSAM) if a RACF data set profile exists for them.

5.3.6 DBRC command authorization examples


The following security definitions are provided only as examples that show you some effects
of different kinds of definitions:

Definition:
(1) PLEX1.CHANGE.RECON.* (G)   UACC(NONE) USER(JOUKO1(R),JOUKO4(R))

User JOUKO1 is authorized (listed in the access list for the CHANGE.RECON.* generic
profile) to initially set the command authority with the SAF profile HLQ PLEX1 (Example 5-6):
Example 5-6 CHANGE.RECON CMDAUTH
IRR010I  USERID JOUKO1 IS ASSIGNED TO THIS JOB.
ICH70001I JOUKO1 LAST ACCESS AT 12:39:59 ON WEDNESDAY, JUNE
$HASP373 CHNGRCN STARTED - INIT D - CLASS A - SYS SC54
IEF403I CHNGRCN - STARTED - TIME=12.58.20 - ASID=005D - SC54
+DSP1123I DBRC REGISTERED WITH IMSPLEX PLEX1 USING EXIT
-                                  --TIMINGS (MINS.)--
-JOBNAME STEPNAME PROCSTEP  RC  EXCP   CPU   SRB  CLOCK
-CHNGRCN          D         00   216   .00   .00     .0
...
IMS VERSION 8 RELEASE 1 DATA BASE RECOVERY CONTROL
CHANGE.RECON CMDAUTH(SAF,PLEX1)
DSP0203I COMMAND COMPLETED WITH CONDITION CODE 00
DSP0220I COMMAND COMPLETION TIME 2002.177 12:58:21.9 -04:00
IMS VERSION 8 RELEASE 1 DATA BASE RECOVERY CONTROL
DSP0211I COMMAND PROCESSING COMPLETE
DSP0211I HIGHEST CONDITION CODE = 00

The RECON listing in Example 5-7 shows the new CMDAUTH values.
Example 5-7 RECON listing with CMDAUTH switched on
2002.172 20:58:41.0 -04:00         LISTING OF RECON
-----------------------------------------------------------------
RECON
RECOVERY CONTROL DATA SET, IMS V8R1
DMB#=13                            INIT TOKEN=02015F0058438F
NOFORCER LOG DSN CHECK=CHECK17     STARTNEW=NO
TAPE UNIT=3480   DASD UNIT=3390    TRACEOFF        SSID=IM1A
LIST DLOG=YES    CA/IC/LOG DATA SETS CATALOGED=YES
MINIMUM VERSION = 6.1              LOG RETENTION PERIOD=00.001 00:00:00.0
COMMAND AUTH=SAF  HLQ=PLEX1
SIZALERT DSNUM=15  VOLNUM=16       PERCENT= 95
LOGALERT DSNUM=3   VOLNUM=16

TIME STAMP INFORMATION:
  TIMEZIN = %SYS

  OUTPUT FORMAT:  DEFAULT = LOCORG LABEL PUNC YYYY
                  CURRENT = LOCORG LABEL PUNC YYYY

  -LABEL- -OFFSET-
  UTC     +00:00

IMSPLEX = PLEX1

-DDNAME- -STATUS- -DATA SET NAME-
RECON1   COPY1    IMSPSA.IM0A.RECON1
RECON2   COPY2    IMSPSA.IM0A.RECON2
RECON3   SPARE    IMSPSA.IM0A.RECON3
DSP0180I NUMBER OF RECORDS LISTED IS 1

Any other user (for example, JOUKO2) has no authority to change the RECON record (and
will fail) as shown in Example 5-8.
Example 5-8 DSP1157A - no command authorization
...
+DSP1123I DBRC REGISTERED WITH IMSPLEX PLEX1 USING EXIT
ICH408I USER(JOUKO2 ) GROUP(SYS1 ) NAME(JOUKO JANTTI )
  PLEX1.CHANGE.RECON.CMDAUTH CL(FACILITY)
  INSUFFICIENT ACCESS AUTHORITY
  FROM PLEX1.CHANGE.RECON.* (G)
  ACCESS INTENT(READ )  ACCESS ALLOWED(NONE )
-JOBNAME STEPNAME PROCSTEP  RC  EXCP   CPU   SRB  CLOCK
-CHNGRCN          D         12   171   .00   .00     .0
...
DSP1123I DBRC REGISTERED WITH IMSPLEX PLEX1 USING EXIT
IMS VERSION 8 RELEASE 1 DATA BASE RECOVERY CONTROL
CHANGE.RECON CMDAUTH(NONE)
DSP1157A USER JOUKO2 NOT AUTHORIZED FOR COMMAND
DSP1157A RESOURCE NAME=PLEX1.CHANGE.RECON.CMDAUTH
DSP1157A SAF RC=00000008 RACF RC=00000008 RACF REASON=00000000
DSP0209I PROCESSING TERMINATED WITH CONDITION CODE = 12

Here is another example of definitions:

(2) PLEX1.LIST.* (G)      UACC(READ)
(3) PLEX1.*.DB.* (G)      UACC(NONE) USER(JOUKO4(R))
(4) PLEX1.*.DB.ALL (G)    UACC(NONE) USER(JOUKO2(R))

The universal access (UACC) READ allows every user to list any RECON record. In addition
to LIST, user JOUKO4 can change (delete, and so on) any individual database record, but not
all of them at once with the ALL qualifier, whereas user JOUKO2 is authorized to do exactly
that: JOUKO2 can only run against ALL databases. The combination of profile definitions (2),
(3), and (4) does not protect any database record listing; the PLEX1.LIST.* profile (2) is
predominant.
Here are the more precise definitions:

(5) PLEX1.LIST.DB.* (G)     UACC(NONE) USER(JOUKO1(R))
(6) PLEX1.LIST.DB.DISTDB    UACC(NONE) USER(JOUKO3(R))

These are used to protect any database record listing. Only user JOUKO1 is allowed to list
any (and ALL) database records, except the DISTDB database, which only user JOUKO3 is
allowed to list.
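To illustrate the effect of profiles (5) and (6), here is a sketch of how a few LIST.DB commands would be authorized under these definitions (PRODDB is a hypothetical database name; these are expected outcomes, not actual utility output):

```
LIST.DB ALL          issued by JOUKO1  -> allowed (profile 5)
LIST.DB DBD(PRODDB)  issued by JOUKO1  -> allowed (profile 5)
LIST.DB DBD(DISTDB)  issued by JOUKO3  -> allowed (profile 6)
LIST.DB DBD(DISTDB)  issued by JOUKO1  -> denied  (the more specific discrete profile 6 applies)
```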


Take care with some of the resource names, for example, the LIST, INIT, or BACKUP.RECON
command verbs. To protect these resources, you may have to define discrete RACF profiles,
because no qualifier is appended to some of these command verb resources. For example, if
you issue a LIST.RECON command for a listing of the entire RECON, the LIST command verb
resource is not covered by the following generic profile:
PLEX1.LIST.RECON.* (G)

This is shown in Example 5-9:
Example 5-9 DBRC command not authorized, SAF RC 04
LIST.RECON
DSP1157A USER JOUKO2 NOT AUTHORIZED FOR COMMAND
DSP1157A RESOURCE NAME=PLEX1.LIST.RECON
DSP1157A SAF RC=00000004 RACF RC=00000004 RACF REASON=00000000
DSP0209I PROCESSING TERMINATED WITH CONDITION CODE = 12

However, the LIST command verb resource is covered by the following profile:
PLEX1.LIST.* (G)

Or, it is covered by a discrete profile like this:
PLEX1.LIST.RECON

Also note that any permission for a command verb with the qualifier ALL will, of course, allow
you to act against all possible qualifiers of the intended modifier. This is the case even if a
discrete profile is defined that excludes read authority for one particular qualifier (if you
specify that particular qualifier explicitly, your access is denied as expected).

5.4 Avoidance of certain DBRC abends


DBRC abends are avoided with IMS Version 8 in some situations that would have caused a
DBRC abend with previous versions of IMS. You no longer receive an abend caused by
RECON SUBSYS records exceeding the maximum VSAM logical record size, since the
RECON records are now transparently segmented (described in Support of 16 MB RECON
record size on page 70).

In IMS Version 8 the behavior of DBRC is changed if a database DEALLOC request is
issued. No abend is issued in the deallocation process for the following reasons:
There is no ALLOC record found in RECON.
The DEALLOC timestamp is already set.

Instead of an abend:
A new alert message is issued:
DSP0153A DEALLOCATION EXIT FAILED FOR DBDNAME=dbdname DDNAME=ddname
         ALLOCATION timestamp1 DEALLOCATION timestamp2
The informational message DSP0300I appears, specifying the error.
A dump is taken.
Further authorizations for the DB/area indicated in DSP0153A are prohibited due to
flags being set in its RECON record.

So you can investigate and analyze the cause of this mismatch without the loss of DBRC.
Only the availability of the involved database is affected.

5.5 Automatic RECON loss notification


Automatic RECON loss notification (ARLN) is a new function that works together with the
new functions of the Common Service Layer.

ARLN ensures that all IMS subsystems in the IMSplex are made aware of changes in the
RECON configuration. This allows the other subsystems to deallocate discarded RECONs
immediately; otherwise this would not happen until the next RECON access by those
subsystems.
Automatic RECON loss notification is enabled by the following tasks:
Specifying an IMSplex name, either through use of the IMSPLEX= execution parameter or
the DBRC SCI registration exit (DSPSCIX0). Subsequently the RECONs can only be
accessed by members of the same IMSplex.
Defining an SCI on each OS/390 image running IMS subsystems belonging to the
IMSplex.
Full details of this function can be found in Chapter 18, Automatic RECON loss notification
on page 265.

5.6 IMS version coexistence for DBRC


Prior IMS versions need Small Programming Enhancements (SPEs) for compatibility, that is,
to be able to coexist by accessing RECONs that have been upgraded to IMS Version 8.

The APAR numbers for the SPEs are PQ54584 for IMS Version 6 and PQ54585/PQ63108 for
IMS Version 7. The corresponding PTF numbers are UQ67709/UQ99326 for IMS Version 6
and UQ99327 for IMS Version 7.
At this point we mention a few more changes in IMS Version 8 related to DBRC:

Since IMS Version 8 can only coexist with IMS Version 6 and IMS Version 7, the keywords
COEX | NOCOEX are ignored. In addition, the ...COEXISTENCE ENABLED message has
been removed.
You can, however, restrict the IMS versions allowed to sign on to DBRC by using a new
keyword MINVERS with the INIT.RECON or CHANGE.RECON command:
CHANGE.RECON MINVERS (61 | 71 | 81)

The default value is set to 61. Once you have finished migrating all IMS systems running
in your shared environment to IMS Version 8 (that's the goal, and it will exploit all of the
enhancements across your IMSplex), you can change this value to avoid any unintended
RECON access from any DBRC instance running on a version lower than IMS Version 8.
Note: The new shared queue support for APPC and OTMA synchronous messages is
only available for IMS Version 8 and needs a MINVERS value of 81.
Because the Time History Table (THT) was only needed to support IMS Version 5, this table
is no longer used, and the keywords THT | REPTHT are ignored. The table is deleted
automatically during the RECON upgrade process.
This RECON upgrade process may be invoked by using the DBRC command utility
DSPURX00 to submit the command:
CHANGE.RECON UPGRADE

Support for the batch upgrade utility DSPURU00 has ceased.



If you are running in coexistence mode between the IMS versions, you need to be aware of
the following recommendations:

Keep the values for CI size and VSAM record length consistent for all three RECON data
sets; otherwise you get messages complaining about a record mismatch, followed by a
user abend 0048 (Example 5-11 on page 81).

The coexistence maintenance (the compatibility SPEs) provides the ability to read and
write segmented RECON records. However, as long as you are running in mixed version
mode, the maximum usable RECON record size for subsystems running on a lower
version is limited to the maximum VSAM record size defined as a cluster attribute of the
RECON data sets. This means that the recommended changes to the attributes in your
RECON cluster definition (NONSPANNED, RECORDSIZE, and CI size) should be
implemented only when all of your subsystems are running IMS Version 8, in other words,
when the migration to Version 8 is completely finished for all IMS systems, including
batch. Keep in mind the usability of the new keyword MINVERS after your RECON cluster
attribute changes.
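When the migration of all subsystems to IMS Version 8 is complete, the RECON clusters can be redefined with the new attributes. The following IDCAMS statement is only a sketch of the shape of such a definition: the numeric values shown are placeholders, so take the actual recommended KEYS, CISZ, and RECORDSIZE values from IMS Version 8: DBRC Guide and Reference, SC27-1295.

```
DEFINE CLUSTER (NAME(IMSPSA.IM0A.RECON1) -
       INDEXED -
       KEYS(32 0) -
       CISZ(16384) -
       RECORDSIZE(4086 16377) -
       NONSPANNED -
       SHAREOPTIONS(3 3))
```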
If you try to start an IMS system running on a version lower than IMS Version 8 without the
coexistence SPEs, the access to the migrated RECON fails and the DBRC address space
receives a user abend 2480, as shown in Example 5-10.
Example 5-10 IMS user abend 2480 - coexistence SPEs missing
12.15.10 STC26189 IEF695I START IVP7FRC1 WITH JOBNAME IVP7FRC1 IS ASSIGNED TO USER STC, GROUP SYS1
12.15.10 STC26189 $HASP373 IVP7FRC1 STARTED
12.15.10 STC26189 IEF403I IVP7FRC1 - STARTED - TIME=12.15.10 - ASID=03F9 - SC53
12.15.13 STC26189 DSP0008I VSAM LOGICAL ERROR ON RECON1 DATA SET
12.15.13 STC26189 DSP0008I DSNAME=IMS810F.RECON1
12.15.13 STC26189 DSP0008I VSAM FEEDBACK CODE=044
12.15.13 STC26189   KEY TYPE= RECON , DBD=**NULL**, DDN=**NULL**,
12.15.13 STC26189   TIME=00.000 00:00:00 +00:00
12.15.13 STC26189 DSP0300I INTERNAL DBRC ERROR DSPURI30(PQ39252 )+X'2E66' #22 TERM/DUMP DIAG=VSAM ERROR
12.15.14 STC26189 DFS3932I IMS DUMP REQUEST COMPLETED - RETURN CODE = 000 IMSF
12.15.14 STC26189 DFS629I IMS TCB ABEND - IMS 2480 IMSF
12.15.14 STC26189 DFS629I PSW AT ERROR = 077C1000 9065732E IMSF
12.15.15 STC26189 IEF450I IVP7FRC1 IVP7FRC1 - ABEND=S000 U2480 REASON=00000000 TIME=12.15.15

An inconsistent maximum record length between the RECON data sets causes the following
messages and a user abend 0048, as shown in Example 5-11.
Example 5-11 Maximum record length mismatch (DSP0023I) and following U0048
STC12761 IEF695I START IM2ADBRC WITH JOBNAME IM2ADBRC IS ASSIGNED TO USER STC
STC12761 $HASP373 IM2ADBRC STARTED
STC12761 IEF403I IM2ADBRC - STARTED - TIME=18.22.19 - ASID=00B0 - SC47
STC12761 DSP0023I MAXIMUM RECORD LENGTHS OF RECON DATA SETS DO NOT MATCH
STC12761 DFS0048I DBRC INITIALIZATION FAILED - RC=16 IM2A
STC12761 DFS3932I IMS DUMP REQUEST COMPLETED - RETURN CODE = 000 IM2A
STC12761 DFS629I IMS TCB ABEND - IMS 0048 IM2A
STC12761 DFS629I PSW AT ERROR = 077C1000 8000A7EE IM2A
STC12761 IEF450I IM2ADBRC IM2ADBRC - ABEND=S000 U0048 REASON=00000000 670 TIME=18.22.25
STC12761 -                                     --TIMINGS (MINS.)--
STC12761 -JOBNAME  STEPNAME PROCSTEP     RC  EXCP  CPU  SRB CLOCK   SERV
STC12761 -IM2ADBRC IEFPROC  IM2ADBRC  U0048   149  .00  .00    .0  34589
STC12761 IEF404I IM2ADBRC - ENDED - TIME=18.22.25 - ASID=00B0 - SC47


Chapter 6. Transaction trace
In this chapter we discuss the new IMS transaction trace feature. This feature provides useful
diagnostic information for transactions processed in multiple subsystems.
Transaction trace uses a Workload Manager (WLM) CLASSIFY command to determine
whether or not a particular unit of work is eligible for tracing. If specified, a transaction trace
token is passed along in the message prefix, which affects logging. To accommodate the
token, the IMS message prefix size has been increased by 8 bytes.
The steps involved in tracing a transaction are:
1. Start the transaction trace with a filter using the OS/390 TRACE command.
2. Run the transactions.
3. Stop the transaction trace and dump the transaction trace data space.
4. Use Interactive Problem Control System (IPCS) to view the transaction trace records.

Copyright IBM Corp. 2002. All rights reserved.

6.1 Transaction trace (MVS component trace)


IMS has provided transaction trace support within an IMS subsystem for many years, and
IMS Version 8 continues to provide this same support. IMS Version 8 also exploits the
transaction trace facility of OS/390, and is the first and only subsystem to support it.
The transaction trace facility of OS/390 provides the ability to trace a unit of work (a
transaction) through multiple subsystems. All IMS subsystems on an OS/390 system store
their trace records in a transaction trace data space.

The essential task of transaction trace is to aggregate data showing the flow of work between
components in the sysplex that combine to service a transaction. Transaction trace records
events such as transaction message arrival and transaction message output.

Additionally, note that transaction trace commands are propagated to all of the systems in a
sysplex. Therefore, when a command is entered on one system to activate the transaction
trace facility, the command is sent to all other systems in the sysplex.
Transaction trace is provided with APAR OW50696 for OS/390 Version 2 Release 10 and up.
Trace points are provided for:
IMS TM transaction input message arrival (IMS Entry)
Database call entry (DL/I Entry)
Database call exit (DL/I Exit)
IMS TM output message delivery (IMS Exit)

A WLM CLASSIFY call determines whether or not a particular unit of work should be traced.
A transaction trace token is passed along in the message prefix. This affects logging, as a
field has been added to the Workload Manager message prefix segment to store the trace
token. The token is used during calls to create trace entries, and is maintained between IMS
entry and IMS exit trace log entries.

6.1.1 How transaction trace works


Figure 6-1 provides an overview of the transaction trace process, and the interaction between
IMS, DL/I, and the Workload Manager.
When the first transaction trace command is entered with a filter to specify the unit(s) of work
to be traced, transaction trace is activated. With transaction trace activated, WLM CLASSIFY
invokes a filter exit to determine whether the current unit of work should be traced. The work
unit's attributes are compared with the command filter attributes to determine if tracing should
occur. If tracing is required, a non-zero token is built and returned to the CLASSIFY caller.
The transaction trace token is set to zero if tracing is not to be performed on the unit of work.

IMS propagates the token in the Workload Manager message prefix, in a manner similar to
the propagation of the service class token. The transaction token is also recorded in the X'01',
X'03', and X'30' log records.
Upon receipt of a non-zero transaction trace token, the instrumented IMS and DL/I modules
will build the necessary parameter lists and invoke OS/390 transaction trace support to
produce trace entries for the following:
Receipt of the input message
Each DL/I call
Delivery of the output message


When IMS completes transaction processing, an IMS exit transaction or exit DL/I trace record
is recorded, and WLM REPORT service is invoked to report response time for the completed
work request, and its corresponding service class.

[Figure: IMS TM and DL/I connect to WLM, classify each work request, record the IMS Entry,
DL/I Entry, DL/I Exit, and IMS Exit trace events in the transaction trace data space, and
report completion to WLM.]

Figure 6-1 Transaction trace process flow overview

Figure 6-2 shows the interaction of components and the execution of transaction trace in a
sysplex environment. It displays the propagation of the transaction trace command across the
sysplex.


[Figure: A TRACE TT,TRAN=TRAN1 command is propagated to every system in the sysplex.
On each OS/390 or z/OS image, WLM classifies TRAN1 and returns a trace token; the IMS
subsystems (IMSA and IMSB, connected by an MSC link) write trace data to their local
transaction trace data spaces as the trace events occur.]

Figure 6-2 Transaction trace processing in a sysplex environment

6.1.2 How to use transaction trace


These are the steps you must follow to use the transaction trace feature:
1. Start transaction trace with a filter using the TRACE TT MVS command.
trace tt,user=<userid>,tran=<tran name>

2. Execute the IMS transactions.


3. Stop transaction trace using the TRACE TT MVS command and dump the trace data
space using the MVS DUMP command.
trace tt,off=all
dump comm=(<comment here>)
r x,dspname='TRACE'.SYSTTRC
Note: An IMS abend will also provide a dump of the trace address space.

4. Use IPCS to view the transaction trace records. The CTRACE command with the
component subparameter for system transaction trace (SYSTTRC), as follows, is used to
display the trace records:
ctrace comp(systtrc) full
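Putting steps 1 through 4 together, an operator session might look like the following sketch (the transaction name TRAN1, user ID JOUKO1, and WTOR reply number are illustrative):

```
TRACE TT,USER=JOUKO1,TRAN=TRAN1     start tracing with a filter
  ... run the IMS transactions ...
TRACE TT,OFF=ALL                    stop all transaction tracing
DUMP COMM=(TT DATA SPACE DUMP)      request the dump
R 01,DSPNAME='TRACE'.SYSTTRC        reply: include the trace data space
```

The dump data set can then be viewed under IPCS with the CTRACE COMP(SYSTTRC) FULL command, as shown in Example 6-1.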


6.1.3 Sample transaction trace output


Example 6-1 shows the formatted output from a transaction trace session. It shows entry and
exit IMS trace points.
Example 6-1 TT trace output
COMPONENT TRACE FULL FORMAT
COMP(SYSTTRC)                                          **** 02/05/2002
SYSNAME  MNEMONIC  ENTRY ID  TIME STAMP       DESCRIPTION
-------  --------  --------  ---------------  -------------
ECSER49  TTCMD     00000002  00:37:51.579839  TRACE TT Command
         CMDID.....0501  COMMAND...TRACE TT,TRAN=TRAN1
ECSER49  EVENT     00000003  00:38:00.165441  TRACE EVENT
         COMPONENT..IMS  EVENTDESC..Entry to IMS  CMDID.....0501
         FUNCTION...DFSICIO0  TCB...007C1E88  ASID..0060
         TRACETOKEN..00000001
ECSER49  EVENT     00000003  00:38:10.691232  TRACE EVENT
         COMPONENT..IMS  EVENTDESC..Exit from IMS  CMDID.....0501
         FUNCTION...DFSFXC40  TCB...007CDC20  ASID..0022
         TRACETOKEN..00000001


Chapter 7. APPC base enhancements


In this chapter we introduce the APPC base enhancements of IMS Version 8. The chapter
contains information about the following topics:
Dynamic LU 6.2 descriptor support
CPU time limit for CPI-C driven transactions
Support for APPC outbound LU

Another APPC related enhancement in IMS Version 8 is the full shared message queue
support for synchronous APPC and OTMA messages. This is discussed in the Parallel
Sysplex enhancements part of this book. See Chapter 12, Shared queues support for APPC
and OTMA synchronous messages on page 147.


7.1 Dynamic LU 6.2 descriptor support


In IMS Version 8 the support for dynamic handling of LU 6.2 descriptors provides the ability
to add and delete LU 6.2 descriptors without an IMS restart.

The LU 6.2 descriptors are used to associate an application-specified destination name, for
example an ALTPCB destination name, with an LU 6.2 application program. The /CHANGE
DESCRIPTOR command has provided support to dynamically modify a descriptor. However,
the descriptor is assumed to have been created during IMS restart, which means that any
descriptor intended to be changed has to be predefined in a PROCLIB member. This default
member is the PROCLIB member DFS62DTx, where x is the suffix of the nucleus you are
running with. Adding any new descriptors, or deleting any existing ones, required the user to
update this member and then restart IMS to read the changed PROCLIB member and create
all included descriptors during initialization.

IMS Version 8 adds support to dynamically create or delete descriptors during the execution
of an online system.

7.1.1 Add a new LU 6.2 descriptor


To dynamically add new LU 6.2 descriptors, do the following:
Add the new descriptors to a new descriptor member DFS62DTy in IMS.PROCLIB, where
y is a user-defined suffix.
Issue a /START L62DESC y command, where y is the suffix associated with the new
PROCLIB member.

The command execution creates the new descriptors by reading this PROCLIB member, and
issues messages similar to those issued during IMS initialization (when the descriptors are
created from the default descriptor member). We show the process of creating additional
descriptors in the following examples. An example of a PROCLIB member with two new
descriptors is shown in Example 7-1. Note also the use of the newly added synonym
L62DESC for the keyword DESCRIPTOR [DESC].
Example 7-1 LU 6.2 descriptors in newly built PROCLIB member
EDIT     IMSPSA.IM0A.PROCLIB(DFS62DTY) - 01.01      Member DFS62DTY saved
Command ===>                                        Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 U USATEAM LUNAME=USERP MODE=LU62P
000002 U EURO    LUNAME=UNKNOWN
****** **************************** Bottom of Data ****************************

The /STA DESCRIPTOR Y. command reads the PROCLIB member DFS62DTY and creates
these descriptors. If you are using the same PROCLIB shared between multiple IMSs for the
entire IMSplex, or at least the same member suffix in several IMS-specific PROCLIBs, you
can use the SPOC interface to send the command to multiple IMSs (see Single point
of control on page 235). In Example 7-2 you can see the added descriptors. The naming
conventions for LU 6.2 descriptors must follow the already known rules, also described in
Resource type consistency on page 183.
Example 7-2 Adding LU 6.2 descriptors
19.58.33 STC14179 *377 DFS996I *IMS READY* IM1A
20.07.58 STC14179 R 377,/STA L62DESC Y.
20.07.58 STC14179 DFS0578I - READ SUCCESSFUL FOR DDNAME PROCLIB MEMBER = DFS62DTY IM1A
20.07.58 STC14179 DFS058I 20:07:58 START COMMAND COMPLETED IM1A
20.08.30 STC14179 *395 DFS996I *IMS READY* IM1A
20.09.12 STC14179 R 395,/DIS L62DESC ALL.
20.09.12 STC14179 DFS000I DESC     LUNAME   MODE     SIDE     SYNCLEVEL TYPE
20.09.12 STC14179 DFS000I USATEAM  USERP    LU62P             CONFIRM   MAPPED
20.09.12 STC14179 DFS000I TPNAME: DFSASYNC
20.09.12 STC14179 DFS000I EURO     UNKNOWN  DFSMODE           CONFIRM   MAPPED
20.09.12 STC14179 DFS000I TPNAME: DFSASYNC
20.09.12 STC14179 DFS000I *2002169/200912* IM1A
20.09.12 STC14179 *396 DFS996I *IMS READY* IM1A

7.1.2 Delete an LU 6.2 descriptor


To delete an existing descriptor, issue a /DELETE L62DESC descriptor command, where
descriptor is the name of the descriptor to be deleted (Example 7-3).
Example 7-3 Deleting an LU 6.2 descriptor
20.10.29 STC14179 R 396,/DEL L62DESC EURO.
20.10.29 STC14179 DFS058I 20:10:29 DELETE COMMAND COMPLETED IM1A
20.10.29 STC14179 *397 DFS996I *IMS READY* IM1A
20.10.39 STC14179 R 397,/DIS L62DESC ALL.
20.10.40 STC14179 DFS000I DESC     LUNAME   MODE     SIDE     SYNCLEVEL TYPE
20.10.40 STC14179 DFS000I USATEAM  USERP    LU62P             CONFIRM   MAPPED
20.10.40 STC14179 DFS000I TPNAME: DFSASYNC
20.10.40 STC14179 DFS000I *2002169/201039* IM1A

Example 7-4 shows the DFS3647W message issued for any duplicate descriptor found in the
DFS62DTY PROCLIB member. Duplicate entries are ignored. However, successfully added
descriptors can be changed, just like the descriptors that have existed since initialization.
Example 7-4 Duplicate descriptors; new descriptors ready to change
20.26.11 STC14179 *419 DFS996I *IMS READY* IM1A
21.07.44 STC14179 R 419,/STA L62DESC Y.
21.07.44 STC14179 DFS3647W MISPLACED OR DUPLICATE DESCRIPTOR USATEAM. CONTENTS ARE IGNORED.
21.07.44 STC14179 DFS0578I - READ SUCCESSFUL FOR DDNAME PROCLIB MEMBER = DFS62DTY
21.07.44 STC14179 DFS058I 21:07:44 START COMMAND COMPLETED IM1A
21.11.01 STC14179 R 420,/DIS DESC ALL
21.11.02 STC14179 DFS000I DESC     LUNAME   MODE     SIDE     SYNCLEVEL TYPE
21.11.02 STC14179 DFS000I USATEAM  USERP    LU62P             CONFIRM   MAPPED
21.11.02 STC14179 DFS000I TPNAME: DFSASYNC
21.11.02 STC14179 DFS000I EURO     UNKNOWN  DFSMODE           CONFIRM   MAPPED
21.11.02 STC14179 DFS000I TPNAME: DFSASYNC
21.11.02 STC14179 DFS000I *2002169/211101* IM1A
21.11.02 STC14179 *421 DFS996I *IMS READY* IM1A
21.11.51 STC14179 R 421,/CHANGE DESC EURO LUNAME=WELLKNWN.
21.11.51 STC14179 DFS058I 21:11:51 CHANGE COMMAND COMPLETED IM1A
21.11.51 STC14179 *422 DFS996I *IMS READY* IM1A
21.12.02 STC14179 R 422,/DIS DESC ALL.
21.12.03 STC14179 DFS000I DESC     LUNAME   MODE     SIDE     SYNCLEVEL TYPE
21.12.03 STC14179 DFS000I USATEAM  USERP    LU62P             CONFIRM   MAPPED
21.12.03 STC14179 DFS000I TPNAME: DFSASYNC
21.12.03 STC14179 DFS000I EURO     WELLKNWN DFSMODE           CONFIRM   MAPPED
21.12.03 STC14179 DFS000I TPNAME: DFSASYNC
21.12.03 STC14179 DFS000I *2002169/211202* IM1A

Chapter 7. APPC base enhancements

91

Note: Any descriptor changes are not saved across an IMS restart.

The effects of the /STA and /DEL L62DESC commands apply only for the life of an IMS system and do not persist across an IMS restart. To ensure that the appropriate descriptors are also added or deleted for the next restart, you must also update the DFS62DTx member that is used during IMS initialization (where x is the suffix of the IMS nucleus).

7.2 CPU time limit for CPI-C driven transactions


In IMS Version 8 a new parameter is implemented to specify a time limit for CPI-C
transactions. This new parameter may help you in certain situations. For example, if an
application program loops, IMS resources are held until the transaction can be terminated.
During this time any resources held by this looping application, such as database locks, the
dependent region, etc., are unavailable to any other application or transaction message ready
to schedule. Many installations have addressed this situation for non CPI-C transactions by
specifying a time-out value in the PROCLIM parameter of the TRANSACT macro. Since
CPI-C transactions are not defined in IMS system definition, this PROCLIM parameter
specification is not applicable. IMS Version 8 now applies a time limit to CPI-C transactions as well, through a new TP_Profile parameter specification, CPUTIME.
Keep in mind that this time-out applies to message processing only and does not protect your transactions against calls waiting for an APPC response (receive_and_wait).
The value must be coded in the TP_Profile definition as follows:
CPUTIME = 0 - 1440

The value specified is the number of CPU seconds before a time-out occurs. The valid values
are in a range of 0 to 1440. CPUTIME = 0 is the default and it has the special meaning of no
time-out. You can see the use of the new parameter in Example 7-5 in a job invoking the
APPC definition utility ATBSDFMU.
Example 7-5 CPUTIME used in TPADD; JCL, SYSPRINT and SYSSDOUT messages
//IMSTPADD EXEC PGM=ATBSDFMU
//SYSPRINT DD SYSOUT=*
//STEPLIB  DD DISP=SHR,DSN=SYS1.MIGLIB
//         DD DISP=SHR,DSN=IMS810.SDFSRESL
//SYSSDLIB DD DSN=SYS1.APPCTP,DISP=SHR
//SYSSDOUT DD SYSOUT=*
/*TPADD TPSCHED_EXIT(DFSTPPE0)
  TPNAME(HUGO2)
  SYSTEM
  ACTIVE(YES)
  TPSCHED_DELIMITER(##)
TRANCODE=CPIHUGO2
CLASS=5
MAXRGN=2
CPUTIME=300
##
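The rules just described, a valid CPUTIME range of 0 to 1440 CPU seconds with 0 (the default) meaning no time-out, can be captured in a small check. This is an illustrative sketch only; the class and method names are our own and are not part of IMS or APPC/MVS.

```java
public class CpuTimeOption {
    // CPUTIME must be between 0 and 1440 CPU seconds.
    public static boolean isValid(int cputime) {
        return cputime >= 0 && cputime <= 1440;
    }

    // CPUTIME=0 is the default and has the special meaning "no time-out",
    // so only a positive value activates the time limit.
    public static boolean timeoutActive(int cputime) {
        return cputime > 0;
    }

    public static void main(String[] args) {
        System.out.println(isValid(300) + " " + timeoutActive(300)); // true true
        System.out.println(isValid(1441));                           // false
    }
}
```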

The new definition in the APPC TP_Profile data set can also be displayed using the APPC administration utility under TSO (Example 7-6). The TP_Profile entry can also be edited by invoking the IMS edit routine DFSTPROF.


The RACF value is displayed only if it is set to a value other than the default, CHECK.
Example 7-6 TSO screen browsing certain TP_Profile
ICQASE60 IMSADMIN.TEMP.SYSSDATA --------------------- Line 00000000 Col 001 080
                          BROWSE DATA FOR TP PROFILE
Command ===>                                                  Scroll ===> PAGE
PF01 = Help    PF03 = Exit    PF07 = Up    PF08 = Down

TP Name: HUGO2
Level  : SYSTEM
ID . . :
********************************* Top of Data **********************************
TRANCODE=CPIHUGO2
CLASS=5
MAXRGN=2
CPUTIME=300
******************************** Bottom of Data ********************************

If the time limit is exceeded while processing a message, the CPI-C transaction abends in a way consistent with non-CPI-C transaction time-outs: user abend 0240 occurs and message DFS554A is issued.
APPC/MVS continues to allow dynamic updates and activation of TP_Profile entries, so you can define and activate such a change on demand.

7.3 Support for APPC outbound LU


Prior to IMS Version 8, APPC/IMS outbound conversations always used the APPC/IMS LU
defined to APPC/MVS as the BASE LU. If the BASE LU status was disabled or unavailable,
outbound conversations could not be allocated even if there were other APPC/IMS LUs
defined for this IMS system that were active and available.
IMS Version 8 allows you to specify an APPC/IMS LU, other than the BASE, to be used when
establishing new outbound conversations. This presumes that your APPCPMxx member in
SYS1.PARMLIB includes multiple APPC/IMS LUADD definition statements associated with
this specific IMS system. Keep in mind that APPC/MVS is able to add new definitions by a SET
APPC command specifying another APPCPMxx member (including the changes LUDELs,
LUADDs) dynamically. For system restart reasons (IPL) you have to include the changes in
the default member also, if it is intended to keep it permanent.
There are two ways to change or specify which LU is to be used for outbound conversations:
DFSDCxxx parameter OUTBND=luname, where xxx is the DC=xxx value in IMS
parameters, usually in DFSPByyy PROCLIB member
/CHANGE APPC OUTBND luname command

The DFSDCxxx PROCLIB member is read during initialization, whereas the /CHA command applies changes dynamically. Any IMS restart reverts to the OUTBND value in the DFSDCxxx member. If a DFSDCxxx member is absent or doesn't include an OUTBND=luname statement, the BASE LU defined in the APPCPMxx member is used instead; this is also the default value for the outbound LU.
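The fallback rule above amounts to a simple selection: use the OUTBND LU if one has been specified, otherwise fall back to the BASE LU. The sketch below models only that rule; the class and method names are hypothetical, purely for illustration.

```java
public class OutboundLuChoice {
    // Selection rule for the APPC/IMS outbound LU: an OUTBND= value
    // (from DFSDCxxx or /CHANGE APPC OUTBND) wins; if none is set,
    // the BASE LU from the APPCPMxx member is used.
    public static String outboundLu(String outbnd, String baseLu) {
        if (outbnd == null || outbnd.isEmpty()) {
            return baseLu; // default: BASE LU is also the outbound LU
        }
        return outbnd;
    }

    public static void main(String[] args) {
        System.out.println(outboundLu(null, "SCSIM1AA"));       // SCSIM1AA
        System.out.println(outboundLu("SCSIM1AB", "SCSIM1AA")); // SCSIM1AB
    }
}
```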
You can issue a /DISPLAY APPC command to list all the APPC/IMS LUs defined to the system
and to figure out which one is being used for outbound processing and/or is defined as the
BASE LU, as shown in Example 7-7.

Chapter 7. APPC base enhancements

93

Example 7-7 OUTBND luname changes and displays of the resources
COMMAND INPUT ===> /SET APPC=1A                                  SCROLL ===> CSR
COMMAND INPUT ===> /d appc,lu,all                                SCROLL ===> CSR
...
 LLUN=SCSIM1AA   SCHED=IM1A      BASE=YES        NQN=NO
   STATUS=ACTIVE       PARTNERS=00000     TPLEVEL=SYSTEM  SYNCPT=NO
   GRNAME=*NONE*       RMNAME=*NONE*
   TPDATA=SYS1.APPCTP
 LLUN=SCSIM1AB   SCHED=IM1A      BASE=NO         NQN=YES
   STATUS=ACTIVE       PARTNERS=00000     TPLEVEL=SYSTEM  SYNCPT=NO
   GRNAME=*NONE*       RMNAME=*NONE*
   TPDATA=SYS1.APPCTP
...
*669 DFS996I *IMS READY* IM1A
R 669,/DIS APPC.
(DFS000I):
IMSLU             #APPC-CONV #APPC-SYNC #APPC-ASYN SECURITY STATUS  DESIRED TYPE GRNAME
USIBMSC.SCSIM1AA           0          0          0 NONE     ENABLED ENABLED BASE
        SCSIM1AB                                            ENABLED
*671 DFS996I *IMS READY* IM1A
R 671,/CHANGE APPC OUTBND SCSIM1AB.
DFS058I 19:16:31 CHANGE COMMAND COMPLETED IM1A
*672 DFS996I *IMS READY* IM1A
R 672,/DIS APPC.
IMSLU             #APPC-CONV #APPC-SYNC #APPC-ASYN SECURITY STATUS  DESIRED TYPE GRNAME
USIBMSC.SCSIM1AA           0          0          0 NONE     ENABLED ENABLED BASE
        SCSIM1AB                                            ENABLED         OUTB
*674 DFS996I *IMS READY* IM1A
R 674,/STO APPC.
DFS058I 19:43:28 STOP COMMAND COMPLETED IM1A
...
R 677,/STA APPC.
DFS1960I IMS HAS REQUESTED A CONNECTION WITH APPC/MVS. IM1A
DFS1958I IMS CONNECTION TO APPC/MVS COMPLETE, LUNAME=USIBMSC.SCSIM1AA IM1A
DFS3491I APPC/IMS TIMEOUT DEACTIVATED. APPCIOT = 0 IM1A
DFS1985I APPC/IMS OUTBOUND LU SCSIM1AB ACTIVE. IM1A
DFS058I 19:47:50 START COMMAND COMPLETED IM1A
R 678,/DIS APPC.
IMSLU             #APPC-CONV #APPC-SYNC #APPC-ASYN SECURITY STATUS  DESIRED TYPE GRNAME
USIBMSC.SCSIM1AA           0          0          0 NONE     ENABLED ENABLED BASE
        SCSIM1AB                                            ENABLED         OUTB

There are two new IMS messages for outbound APPC/IMS LUs in IMS Version 8:
DFS1985I APPC/IMS OUTBOUND LU xxxxxxxx ACTIVE
DFS1983W APPC/IMS OUTBOUND LU xxxxxxxx NOT DEFINED

The active message is issued if your definitions are correct and the IMS system is accepting
your changes. The not defined message is issued if your outbound LU definition (defined in
the DFSDCxxx member or by the /CHANGE APPC OUTBND= command) is not predefined in
APPC/MVS.


Chapter 8.

Application enablement
In this chapter we provide a brief discussion of enhancements to IMS that support new environments for Java applications:
- Support for the execution of a Java application in a standalone Java Virtual Machine (JVM) environment in two new IMS dependent region types:
  - Java Message Processing (JMP) region type for message driven JVM applications
  - Java Batch Processing (JBP) region type for non-message driven JVM applications
- Java standards enhancements
- JDBC DL/I access enhancements
- XML and IMS

Many of these new features have been retrofitted to IMS Version 7 through the service process. We recommend reviewing the IBM Redbook IMS Version 7 Java Update, SG24-6536. This book discusses installation, tailoring, and configuration of IMS, CICS, and DB2 environments to use JDBC to access IMS databases. JDBC access from the WebSphere environment is covered in Chapter 9, Java enhancements for IMS and WebSphere on page 109.


8.1 Overview
The following illustration, Figure 8-1, shows an overview of the additional IMS Java
processing environments available for you to run your Java application programs. In addition
to the IMS Java dependent regions, you can access data in IMS databases using Java
application programs running in other OS/390 subsystems.
CICS supports Java application programs using the JDBC API interface to get to IMS data.
The IMS Java Classes use the database resource adapter (DRA) interface to IMS.
DB2 stored procedures using Java can access IMS databases through JDBC application
programming interface (API). The IMS Java Classes use the open database access (ODBA)
interface to IMS.
WebSphere Application Server (WAS) can use Enterprise Java Beans (EJBs) to access IMS
databases through the JDBC API. The IMS Java classes use the ODBA interface to IMS.

[Figure 8-1 depicts the IMS Java environments: WebSphere Application Server (WAS) EJBs and DB2 stored procedures reach IMS databases through the ODBA interface; CICS (JCICS) applications use the DRA interface; and IMS Java applications running under the JVM and LE in MPP, BMP, IFP, JMP, and JBP dependent regions of IMS/TM or IMS/DBCTL issue JDBC/SQL calls through the DL/I database view, the DB base classes, and JNI down to the AIB interface.]

Figure 8-1 IMS Java environments

8.2 Java dependent regions


IMS has two new region types, Java Message Processing (JMP) and Java Batch Processing
(JBP), which allow you to execute your Java application programs in a Java Virtual Machine
(JVM) environment. These new regions utilize the Persistent Reusable Java Virtual Machine
technology.
Note: IMS Version 7 was the last release to support the use of the High Performance Java
Compiler (HPJ). You will want to make use of the JVM environment (IMS Java dependent
regions) for your Java applications.


8.2.1 Persistent Reusable Java Virtual Machine


The IBM Developer Kit for OS/390, Java 2 Technology Edition provides the Persistent
Reusable JVM. It is designed to speed up the processing of Java applications in transaction
processing environments such as IMS. The Persistent Reusable JVM provides serial reuse of
a JVM for multiple transactions, while resetting the JVM to a known state between each
transaction. This provides isolation without paying the high cost of a full JVM initialization for
each transaction.
The IMS Version 8 Java Classes support the Persistent Reusable JVM.
The Persistent Reusable JVM also provides an optimized garbage collection scheme. The following types of storage are managed by the Persistent Reusable JVM:
- System heap: contains class objects that persist for the lifetime of the JVM. These objects are never garbage collected.
- Application class system heap: contains shareable application class objects that persist for the lifetime of the JVM.
- Middleware heap: contains objects that have a life expectancy longer than a single transaction and that persist across JVM resets.
- Transient heap: contains objects with a life expectancy tied to the transaction; these objects are subject to garbage collection.

The following URL is for the document New IBM Technology featuring Persistent Reusable
Java Virtual Machines, SC34-6034, which describes the Persistent Reusable JVM:
http://www.ibm.com/servers/eserver/zseries/software/java/pdf/jtc0a100.pdf

8.2.2 Benefits of a JVM environment


If you make use of the JVM environment (IMS Java dependent regions) to run your Java
applications you can take advantage of the following benefits:
Your Java application will be interpreted at run-time from JVM bytecode to machine code;
in general, Java applications are intended to be able to run on multiple platforms, such as
UNIX and S/390.
An important feature of Java is its support for dynamically loading and accessing classes
at runtime. The JVM environment gives you flexibility at run-time to access other Java
class files.
As experience grows with the JVM technology, further improvements to the JVM (JDK
1.3.1+) will likely occur.
Usability is a benefit with a JVM; the total development time is less.

8.2.3 Other IMS Java considerations


IMS Java programs running in IMS dependent regions must be single-threaded. IMS Java supports only the AIB interface; therefore, the database PCBs in your PSBGEN must be named. IMS Java applications can call other-language applications only through the Java Native Interface (JNI), and those applications must be POSIX(ON).
It is now possible to execute COBOL routines in an IMS Java Message Processing or a Java
Batch Processing region. This support provides the ability to build an application with IMS
message processing in a Java class and IMS database access in a COBOL routine. It also
supports an application with IMS message processing logic in a COBOL main method, and
that invokes other Java or COBOL routines that perform IMS database access.
Chapter 8. Application enablement

97

8.2.4 DFSJMP and DFSJBP procedures


The DFSJMP procedure is the launcher for message-driven JVM applications. The launcher subsystem in the JVM architecture creates and controls the Persistent Reusable JVM and interfaces with the host transaction processing system, IMS. DFSJMP is similar to DFSMPR, and introduces the following parameters: JVMOPMAS, JVMOPWKR, and ENVIRON. The following DFSMPR parameters are not supported: APPLFE=, DBLDL=, PRLD=, SSM=, VSFX=, and VFREE=.
Example 8-1 shows a sample DFSJMP procedure that starts a JMP dependent region. This
procedure is created in the IMS PROCLIB as a result of IMS system definition.
Example 8-1 DFSJMP procedure
//         PROC SOUT=A,RGN=512K,SYS2=,
//              CL1=001,CL2=000,CL3=000,CL4=000,
//              OPT=N,OVLA=0,SPIE=0,VALCK=0,TLIM=00,
//              PCB=000,STIMER=,SOD=,
//              NBA=,OBA=,IMSID=,AGN=,
//              PREINIT=,ALTID=,PWFI=N,APARM=,
//              LOCKMAX=,ENVIRON=,JVMOPWKR=,JVMOPMAS=
//*
//JMPRGN   EXEC PGM=DFSRRC00,REGION=&RGN,
//              TIME=1440,DPRTY=(12,0),
//              PARM=(JMP,&CL1&CL2&CL3&CL4,
//              &OPT&OVLA&SPIE&VALCK&TLIM&PCB,
//              &STIMER,&SOD,&NBA,
//              &OBA,&IMSID,&AGN,&PREINIT,
//              &ALTID,&PWFI,'&APARM',&LOCKMAX,
//              &ENVIRON,&JVMOPWKR,&JVMOPMAS)
//*
//STEPLIB  DD DSN=IMS810C.&SYS2.PGMLIB,DISP=SHR
//         DD DSN=IMS810C.&SYS2.SDFSJLIB,DISP=SHR
//         DD DSN=IMS810C.&SYS2.SDFSRESL,DISP=SHR
//         DD DSN=CEE.SCEERUN,DISP=SHR
//         DD DSN=SYS1.CSSLIB,DISP=SHR
//PROCLIB  DD DSN=IMS810C.&SYS2.PROCLIB,DISP=SHR
//SYSUDUMP DD SYSOUT=&SOUT,
//            DCB=(LRECL=121,BLKSIZE=3129,RECFM=VBA),
//            SPACE=(125,(2500,100),RLSE,,ROUND)

The DFSJBP procedure is the launcher for non-message-driven JVM applications. DFSJBP is similar to IMSBATCH, and introduces the following parameters: JVMOPMAS and ENVIRON. The following IMSBATCH parameters are not supported: IN=, PRLD=, and SSM=.
Example 8-2 shows a sample DFSJBP procedure that starts a JBP dependent region. This
procedure is created in the IMS PROCLIB as a result of IMS system definition.
Example 8-2 DFSJBP procedure
//         PROC MBR=TEMPNAME,PSB=,JVMOPMAS=,OUT=,
//              OPT=N,SPIE=0,TEST=0,DIRCA=000,
//              STIMER=,CKPTID=,PARDLI=,
//              CPUTIME=,NBA=,OBA=,IMSID=,AGN=,
//              PREINIT=,RGN=512K,SOUT=A,
//              SYS2=,ALTID=,APARM=,ENVIRON=,LOCKMAX=
//*
//JBPRGN   EXEC PGM=DFSRRC00,REGION=&RGN,
//              PARM=(JBP,&MBR,&PSB,&JVMOPMAS,&OUT,
//              &OPT&SPIE&TEST&DIRCA,
//              &STIMER,&CKPTID,&PARDLI,&CPUTIME,
//              &NBA,&OBA,&IMSID,&AGN,
//              &PREINIT,&ALTID,
//              '&APARM',&ENVIRON,&LOCKMAX)
//STEPLIB  DD DSN=IMS810C.&SYS2.SDFSRESL,DISP=SHR
//         DD DSN=IMS810C.&SYS2.SDFSJLIB,DISP=SHR
//         DD DSN=IMS810C.&SYS2.PGMLIB,DISP=SHR
//         DD DSN=CEE.SCEERUN,DISP=SHR
//         DD DSN=SYS1.CSSLIB,DISP=SHR
//PROCLIB  DD DSN=IMS810C.&SYS2.PROCLIB,DISP=SHR
//SYSUDUMP DD SYSOUT=&SOUT,
//            DCB=(LRECL=121,RECFM=VBA,BLKSIZE=3129),
//            SPACE=(125,(2500,100),RLSE,,ROUND)

IMS region size


Running a JVM in IMS considerably increases the region size required. If no REGION parameter is specified, the system uses an installation default specified at JES initialization. If your installation does not change the IBM-supplied default limits in the IEALIMIT or IEFUSI exit routine modules, then specifying various values for the region size has the following results:
- A value equal to 0K or 0M gives the job step all the storage available below and above 16 megabytes. The resulting size of the region below and above the 16 megabyte line depends on system options and what system software is installed.

8.2.5 JVMOPMAS and JVMOPWKR members


The JVMOPMAS and JVMOPWKR members in the IMS.PROCLIB set the appropriate JVM
options for the master JVM and worker JVM. These members consist of one or more
80-character records where each record represents a specific JVM option.
In the IMS Java dependent region implementation of the JVM architecture, there is one
master JVM and one worker JVM for a JMP dependent region. For the JBP dependent region
type, there is only a master JVM.

Master JVM
The master JVM controls the set of JVMs within an address space. To ensure isolation
between transactions, each JVM processes only one program at a time, and each JVM is
created in its own Language Environment (LE) enclave to ensure isolation between JVMs
running in parallel.
The JVM manages storage by specifying a heap size. The following JVM options are for the heap:
-Xinitacsh  The initial application class system heap size
-Xinitsh    The initial system heap size
-Xmaxf      The maximum percentage of free space in the middleware heap
-Xminf      The minimum percentage of free space in the middleware heap
-Xmx        The maximum size of the middleware and transient heap
-Xoss       The Java stack size
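Note that the size-valued options concatenate the option name and its value, with k and M suffixes (for example, -Xmx64M or -Xoss400k in the sample members below). As a rough illustration of that format, the following sketch converts such an option into a byte count; the class and method names are our own, not part of IMS or the JDK.

```java
public class JvmSizeOption {
    // Parse the value of a size-valued JVM option such as -Xmx64M,
    // -Xoss400k, or -Xinitsh128k into bytes. Only the k/M suffixes used
    // by the sample members are handled; percentage options such as
    // -Xmaxf0.6 are out of scope.
    public static long sizeInBytes(String option) {
        String v = option.replaceAll("^-X[a-z]+", ""); // strip the option name
        long mult = 1;
        if (v.endsWith("k") || v.endsWith("K")) {
            mult = 1024L;
            v = v.substring(0, v.length() - 1);
        } else if (v.endsWith("m") || v.endsWith("M")) {
            mult = 1024L * 1024L;
            v = v.substring(0, v.length() - 1);
        }
        return Long.parseLong(v) * mult;
    }

    public static void main(String[] args) {
        System.out.println(sizeInBytes("-Xmx64M"));   // 67108864
        System.out.println(sizeInBytes("-Xoss400k")); // 409600
    }
}
```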

The master JVM is involved only during JVM initialization. It gets initialized under the region controller and does the following:
- Provides the system heap, which is shared by all the worker JVMs
- Sets up the class-loading environment to be used for loading classes into the system heap

Worker JVM
The worker JVM gets initialized under the IMS program controller.

Sample members
Examples of simple option settings are shown in this section. Example 8-3 shows the JVM options used in the master JVM member in IMS.PROCLIB (referred to by the parameter JVMOPMAS= in the DFSJMP and DFSJBP procedures). The IBM-shipped sample member DFSJVMMS can be found in the target IMS sample library, SDFSSMPL.
Note: The class needs to be in the shareable application path.

Example 8-3 Sample JVMOPMAS member in PROCLIB


********************************************************************
* Sample JVMOPMAS= member                                          *
********************************************************************
********************************************************************
* The following two JVM options are required. The pathname         *
* '/ims/java/applications' is an example only.                     *
********************************************************************
-Dibm.jvm.shareable.application.class.path=/ims/java/applications
-Dibm.jvm.trusted.middleware.class.path=
   > /usr/lpp/ims/imsjava81/imsjava.jar
********************************************************************
* The following JVM options are a subset of the options allowed    *
* under JDK 1.3.1S                                                 *
********************************************************************
-Xinitacsh128k
-Xinitsh128k
-Xmaxf0.6
-Xminf0.3
-Xmx64M
-Xoss400k

Example 8-4 shows the JVM options in the worker JVM member in IMS.PROCLIB (referred to by the parameter JVMOPWKR= in the DFSJMP procedure). The IBM-shipped sample member DFSJVMWK can be found in the target IMS sample library, SDFSSMPL.
Example 8-4 Sample JVMOPWKR member in PROCLIB
********************************************************************
* Sample JVMOPWKR= member                                          *
********************************************************************
********************************************************************
* The following JVM options are a subset of the options allowed    *
* under JDK 1.3.1S                                                 *
********************************************************************
-Xmaxf0.6
-Xminf0.3
-Xmx64M
-Xoss400k


8.2.6 ENVIRON= and DFSJVMAP members


The environment JVM member (referred to by the parameter ENVIRON= in the DFSJMP and DFSJBP procedures) and DFSJVMAP are two members in the IMS.PROCLIB that specify other options that help IMS map the application. In IMS Version 8, these samples are provided in the target IMS sample library, SDFSSMPL, as members DFSJVMEV and DFSJVMAP.

DFSJVMAP
DFSJVMAP maps an uppercase IMS application name of 8 bytes or less to the fully qualified Java class name for that application's class file, as shown in Example 8-5.
The application name is specified to IMS in one of the following ways:
LANG=JAVA, GPSB= parameter on the APPLCTN system definition macro. The
APPLCTN macro has been changed to allow LANG=JAVA for GPSBs.
LANG=JAVA, PSB= parameter on the PSBGEN macro
MBR= parameter on the DFSJBP procedure

Example 8-5 shows application mapping examples relating to both APPLCTN and PSB
definitions.
Example 8-5 DFSJVMAP member in PROCLIB
**********************************************************************
* The following JVM option is set for both examples:                 *
*                                                                    *
* -Dibm.jvm.shareable.application.class.path=/ims/java/applications  *
*                                                                    *
**********************************************************************
* Mapping example for an IMS PSB genned as:                          *
*                                                                    *
*   APPLCTN GPSB=IMSJAVA1,LANG=JAVA                                  *
*   APPLCTN GPSB=IMSJAVA2,LANG=JAVA                                  *
*                                                                    *
* With the actual java application class file at pathname:           *
*                                                                    *
*   /ims/java/applications/imsjava1.class                            *
*   /ims/java/applications/imsjava2.class                            *
*                                                                    *
**********************************************************************
IMSJAVA1=/ims/java/applications/imsjava1
IMSJAVA2=imsjava2
*
**********************************************************************
* Mapping example for an IMS PSB genned as:                          *
*                                                                    *
*   PSBGEN PSBNAME=IMSJAVA3,LANG=JAVA                                *
*   PSBGEN PSBNAME=IMSJAVA4,LANG=JAVA                                *
*                                                                    *
* With the actual java application class file at pathname:           *
*                                                                    *
*   /ims/java/applications/imsjava3.class                            *
*   /ims/java/applications/imsjava4.class                            *
**********************************************************************
IMSJAVA3=/ims/java/applications/imsjava3
IMSJAVA4=imsjava4
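The mapping above can be read as a lookup with one default rule: an entry whose value does not begin with / is resolved relative to the shareable application class path. The sketch below models that resolution in plain Java; the class and method names are our own, for illustration only.

```java
import java.util.HashMap;
import java.util.Map;

public class JvmMapLookup {
    // Model of the DFSJVMAP resolution: the uppercase IMS application
    // name (8 bytes or less) maps to a class name; a value without a
    // leading '/' is taken relative to the shareable application class
    // path, as in the IMSJAVA2=imsjava2 entry above.
    public static String resolveClassFile(Map<String, String> map,
                                          String appName,
                                          String shareablePath) {
        String target = map.get(appName);
        if (target == null) {
            return null; // no mapping entry for this application name
        }
        if (!target.startsWith("/")) {
            target = shareablePath + "/" + target;
        }
        return target + ".class";
    }

    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("IMSJAVA1", "/ims/java/applications/imsjava1");
        map.put("IMSJAVA2", "imsjava2");
        String share = "/ims/java/applications";
        System.out.println(resolveClassFile(map, "IMSJAVA2", share));
        // /ims/java/applications/imsjava2.class
    }
}
```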


DFSJVMEV
The ENVIRON= member is specified on both procedures. It specifies the library path
information for the IMS Java dependent region to find the IMS Java native code.
Example 8-6 shows the DFSJVMEV sample ENVIRON=member that IMS provides. The
example shows DLLs that are described in more detail in the IMS Version 8: Java Users
Guide, SC27-1296. Typically, an application developer does not need to know much about
these DLLs, except to specify them in the LIBPATH if necessary.
The DLLs libjvm.so and libatoe.so are required for the JVM and are shipped with JDK 1.3.1S. libjvm.so, for example, contains the JNI methods for initializing and maintaining a JVM. IMS loads libjvm.so, and that is why its path needs to be specified on LIBPATH=. Some libjvm.so functions require libatoe.so too.
The DLL libJavTDLI.so is an IMS Java DLL, not to be confused with the IMS Java jar file (imsjava.jar). The DLL libJavTDLI.so contains native C methods responsible for the DL/I calls to IMS (GU, GN, and so on). Applications should not use these methods directly, but rely on the higher level methods in JDBC, DLIConnection, or, as a last resort, JavaToDLI.
The DLLs libjvm.so, libatoe.so, libJavTDLI.so, and imsjava.jar are not new. However, imsjava.jar is a new name in IMS Version 8; in IMS Version 7, it was called imsjava.zip. libJavTDLI.so and imsjava.jar are described in the IMS Version 8: Java User's Guide, SC27-1296. Explanations of libjvm.so are in New IBM Technology featuring Persistent Reusable Java Virtual Machines, SC34-6034.
Example 8-6 DFSJVMEV member in PROCLIB
**********************************************************************
* Sample ENVIRON= member                                             *
**********************************************************************
**********************************************************************
* LIBPATH environment variable                                       *
* ----------------------------                                       *
* /usr/J1.3/bin/classic is path to libjvm.so                         *
* /usr/J1.3/bin is path to libatoe.so                                *
**********************************************************************
LIBPATH=/usr/J1.3/bin/classic:/usr/J1.3/bin:/usr/lpp/ims/imsjava81

8.2.7 IMS system definition considerations


The following system definition changes were made in support of the new Java dependent
regions.

IMSGEN macro: SCEERUN parameter


This new parameter specifies the name of the C runtime library. IMS will generate the STEPLIB concatenation in the DFSJMP and DFSJBP procedures to contain this name.
The default is CEE.SCEERUN.

IMSGEN macro: CSSLIB


This new parameter specifies the name of the OS/390 callable services library. IMS will
generate the STEPLIB concatenation in the DFSJMP and DFSJBP procedures to contain this
name.
The default is SYS1.CSSLIB.


8.2.8 PSBGEN considerations


You must specify LANG=JAVA for any PSBs associated with a JMP or JBP type IMS Java
application.
If the PSB has LANG=JAVA and is used by a non-message driven application, then it can be
accessed by any type of program (COBOL, ODBA thread, etc.). If, however, the PSB has
LANG=JAVA, and the only way to schedule the PSB is by submitting a transaction (i.e., the
PSB is used by a message driven application), then the PSB can only be used by a Java
application in a JMP region.

8.2.9 /DISPLAY examples


This section shows some examples of the /DISPLAY command output for the new Java
regions.

/DISPLAY TRAN
Example 8-7 shows an example of a /DISPLAY of a transaction JVMTRAN1 associated with the Java program named JVMJMP1. This /DISPLAY is identical in format to a display of a program that executes in an MPP (message processing region).
Example 8-7 /DIS TRAN
R 17,/DIS TRAN JVMTRAN1
IEE600I REPLY TO 17 IS;/DIS TRAN JVMTRAN1
DFS000I  TRAN      CLS ENQCT QCT   LCT  PLCT CP NP LP SEGSZ SEGNO PARLM RC   IMS1
DFS000I  JVMTRAN1   1     0   0  65535 65535  1  1  1     0     0 NONE   0   IMS1
DFS000I  PSBNAME: JVMJMP1                                                    IMS1
DFS000I  *01144/112828*                                                      IMS1
18 DFS996I *IMS READY* IMS1

/DISPLAY ACTIVE REGION


Example 8-8 shows a /DISPLAY ACTIVE REGION that shows an active JMP type region
(REGID 1) that is waiting for a program to be scheduled into the region. The JBP type region
(JOBNAME JBPRGN) is not active.
Example 8-8 /DISPLAY ACTIVE REGION
R 20,/DIS ACTIVE REGION
IEE600I REPLY TO 20 IS;/DIS ACTIVE REGION
DFS000I  REGID JOBNAME   TYPE  TRAN/STEP PROGRAM  STATUS   CLASS        IMS1
DFS000I      1 JMP1      JMP                      WAITING  1, 2, 3, 4   IMS1
DFS000I        MSGRGN    TP    NONE                                     IMS1
DFS000I        JBPRGN    JBP   NONE                                     IMS1
DFS000I        BATCHREG  BMP   NONE                                     IMS1
DFS000I        FPRGN     FP    NONE                                     IMS1
DFS000I        DBTRGN    DBT   NONE                                     IMS1
DFS000I        DBRMCSAC  DBRC                                           IMS1
DFS000I        DLIMCSAC  DLS                                            IMS1
DFS000I  *01144/112901*                                                 IMS1


/DISPLAY PROGRAM
Example 8-9 shows the region type JBP in a /DIS PROGRAM command.
Example 8-9 /DISPLAY PROGRAM
DIS PGM JVMJBPA
DFS4445I CMD FROM MCS/E-MCS CONSOLE USERID=01: DIS PGM JVMJBPA IMS1
DFS000I MESSAGE(S) FROM ID=IMS1 413
PROGRAM   TRAN      TYPE
JVMJBPA             JBP
*01144/143955*

8.3 Java standards enhancements


IMS provides support for the new Java standards as they evolve. JDBC 2.1 enhancements
include support for Updatable ResultSet and limited reverse cursors. Additionally, new SQL
keyword support has been added.

8.3.1 Java result set types


The JDBC 2.1 core API provides three result set types: forward-only, scroll-insensitive, and
scroll-sensitive.
Scrollable result sets support the ability to move backward (last-to-first) through its contents,
as well as forward (first-to-last).
A scroll-insensitive result set is generally not sensitive to changes that are made to the underlying data store while it is open; it provides a static view of the underlying data it contains. For example, all of the results are read and cached, and subsequent changes to the underlying data are not reflected in the original cached results.
A scroll-sensitive result set is sensitive to changes that are made to the underlying data store while it is open, and provides a dynamic view of the underlying data. IMS Java supports the forward-only and scroll-insensitive result set types. Because IMS has no means to traverse a query backwards, IMS Java does not support the scroll-sensitive type. Figure 8-2 shows a matrix of the supported result set scrolling and database sensitivity options.

104

IMS Version 8 Implementation Guide

[Figure 8-2 matrix: forward-only result sets (iterated in only one direction) are sensitive, that is, database changes are seen. Scrollable result sets (iterated in both directions) are supported only as scroll-insensitive, where database changes are not seen; the scroll-sensitive combination is not supported, because IMS has no means to traverse a query backwards (iterate last to first through a result set).]

Figure 8-2 IMS Java result sets and traversal
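The static-view behavior of a scroll-insensitive result set can be illustrated with plain collections: the query result is copied into a cache, so a later change to the underlying data is not seen. This is only an analogy in ordinary Java, not IMS Java code.

```java
import java.util.ArrayList;
import java.util.List;

public class SnapshotDemo {
    // Returns {size of the cached "result set", size of the "database"}
    // after a row is inserted behind the cache's back.
    public static int[] sizesAfterInsert() {
        List<String> table = new ArrayList<>();
        table.add("row1");
        table.add("row2");

        // Scroll-insensitive behavior: results are read and cached at
        // executeQuery time, giving a static view of the data.
        List<String> insensitive = new ArrayList<>(table);

        table.add("row3"); // underlying data changes after the query

        // The cached view still has 2 rows; a scroll-sensitive view
        // (not supported by IMS Java) would see 3.
        return new int[] { insensitive.size(), table.size() };
    }

    public static void main(String[] args) {
        int[] sizes = sizesAfterInsert();
        System.out.println(sizes[0] + " " + sizes[1]); // 2 3
    }
}
```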

Forward-only, which is scroll-sensitive, was previously supported (this is the default result set type):
- TYPE_FORWARD_ONLY (scroll-sensitive)
- Each next() call retrieves data from the database
- Calls: ResultSet.next()

Scroll-insensitive:
- executeQuery accesses the database and caches all results
- TYPE_SCROLL_INSENSITIVE
- Calls: ResultSet.next(), ResultSet.previous(), ResultSet.absolute(int), ResultSet.relative(int)

8.3.2 Java result set concurrency


Java supports the following concurrency attributes:
- Read-only (default): CONCUR_READ_ONLY does not allow updates using the ResultSet interface
- Updatable: CONCUR_UPDATABLE allows updates using the ResultSet interface

Once database data is returned in the result set, the application can update, insert, or delete the data. The PROCOPT in the PCB specifies the processing capability; this cannot be dynamically changed via the interface.


If the application uses a PCB with a PROCOPT of read only, it will not be able to make updates to the database.
Example 8-10 is an example Java statement specifying scroll and concurrency attributes.
Example 8-10 Result set attributes
stmt = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,ResultSet.CONCUR_READ_ONLY)

8.3.3 Batch updates


IMS Java supports batch updates, which allow multiple update operations to be submitted for processing at once. This can be used as a way to insert IMS database records. Example 8-11 shows batch employee and department updates.
Example 8-11 Batch update statements
Statement stmt = con.createStatement();
stmt.addBatch("INSERT INTO EMPLPCB.employees VALUES (1000, 'An Employee')");
stmt.addBatch("INSERT INTO EMPLPCB.departments VALUES (260, 'ADept')");
// submit a batch of update commands for execution
int[] updateCounts = stmt.executeBatch();

8.3.4 New SQL keywords


Support for the following SQL keywords has been added:
Table 8-1 New SQL keywords supported by IMS
Function          New SQL keywords
Field renaming    AS
Aggregates        AVG, COUNT, MAX, MIN, SUM, and GROUP BY
Ordering          ORDER BY, ASC, DESC
Scalar functions  ABSOLUTE, +, -, *, /, UPPER, LOWER
Unions            UNION

The field renaming SQL statement in Example 8-12 returns all the values in EMPNO in a
column labeled Employee Number.
Example 8-12 SQL field renaming
SELECT EMPNO AS 'Employee Number'
FROM EMPLPCB.Employees

The SQL statement in Example 8-13 returns the average age per department using the AVG
aggregate function.
Example 8-13 SQL aggregates
SELECT AVG(age), department
FROM EMPLPCB.Employees
GROUP BY department

106

IMS Version 8 Implementation Guide

The ordering SQL statement in Example 8-14 will return rows ordered by lastName in
ascending order, followed by firstName in ascending order in the case of a tie.
Example 8-14 SQL ORDER BY
SELECT firstName, lastName, department
FROM EMPLPCB.Employees
ORDER BY lastName ASC, firstName

Example 8-15 lists employees' last names in all capitals, in a field named lastCaps.
Example 8-15 SQL scalar functions
SELECT UPPER(lastName) AS lastCaps
FROM EMPLPCB.Employees

Example 8-16 lists all employees working for IBM and SUN.
Example 8-16 SQL union
SELECT firstName, lastName
FROM EMPLPCB.IBMEmployees
UNION
SELECT firstName, lastName
FROM EMPLPCB.SUNEmployees

8.4 JDBC access enhancements


JDBC provides a standard API for tool and database developers and makes it possible to
write database applications using a pure Java API. JDBC enables the writing of a single
program using the JDBC API, and the program will be able to send SQL statements to the
appropriate database. JDBC makes it possible to establish a connection with a database,
send SQL statements, and process the results.
IMS provides JDBC access to IMS databases. For IMS Version 8, the IMS Java classes were
updated to support some of the JDBC 2.1 Core and Optional APIs.
JDBC access to IMS DB data is now provided for Java applications running in CICS
Transaction Server/390, and for DB2 for OS/390 Java stored procedure applications. IMS
Version 7 Java Update, SG24-6536, includes a sample application and configuration
information for these environments. We provide an example of JDBC access from the
WebSphere environment in Chapter 9, "Java enhancements for IMS and WebSphere" on
page 109.
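The overall pattern is the same as with any JDBC driver; a minimal sketch, assuming an already-established connection con and the EMPLPCB PSB used in the examples above:

```java
// Send an SQL statement to IMS and process the results
// ("con" and the EMPLPCB.Employees table are assumptions from earlier examples)
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery(
    "SELECT firstName, lastName FROM EMPLPCB.Employees");
while (rs.next()) {
    System.out.println(rs.getString("firstName") + " "
                       + rs.getString("lastName"));
}
rs.close();
stmt.close();
```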


8.5 Java Tooling enhancement


Java Tooling introduces a new IMS utility called DLIModel, which automatically generates the
required IMS Java metadata class from IMS PSB and DBD source, eliminating the previously
existing manual task of preparing these classes. The utility allows information on additional
fields, long Java-style names, and datatypes to be supplied from user-coded control
statements, and/or from XMI descriptions of COBOL copybook members. If desired, it will
produce XMI descriptions of databases that conform to the Object Management Group's
Common Warehouse Metamodel 1.1. This greatly eases development of Java applications
and JDBC access to IMS DB. For more information on the DLIModel utility, see IMS Version 8:
Java User's Guide, SC27-1296.

8.6 XML and IMS


IMS Version 8 provides XML support for transaction applications, by allowing XML documents
in the data portion of the IMS message. The only restriction IMS imposes on an XML data
stream is that the transaction code must be in EBCDIC; IMS needs to understand the
transaction code in order to perform application program scheduling. This provides the
capability of sending and receiving XML documents to and from IMS transaction applications.
This enables IMS in business to business (B2B) environments to perform as a high
performance XML server, and provide complete Web-enabled connectivity for all possible
IMS applications. Service definition creation for IMS COBOL and C applications enables
these IMS applications to be used as Web services, using WebSphere Studio Application
Developer Integration Edition Version 4.1.
Service definition deployment of IMS services as EJB services or SOAP services makes
them available to the distributed WebSphere Application Server (WAS). This transforms
existing IMS COBOL and C applications into Web services by supporting SOAP, EJB, and
Java inbound bindings.
IMS COBOL and PL/I programs can receive and send XML directly, using the XML Enabler for
COBOL and PL/I to parse the incoming XML documents. IMS supports the transmission of
XML documents in the data portion of the IMS message. Messages can be placed on and
retrieved from the IMS message queues by applications running in any IMS region. Since
XML provides the interchange of structured data using tags, a parser is needed.
IMS COBOL programmers can use IBM Enterprise COBOL for z/OS and OS/390, which
includes a high-speed XML parser, to transform the XML content into COBOL data
structures. IMS C++ and Java application programmers can invoke the APIs (for example,
the DOM and SAX APIs) of the OS/390 XML parser to convert an XML document from its
stream form into a parsed form for reading, editing, or updating the document.
For an example of integrating an IMS application, see Using XML on z/OS and OS/390 for
Application Integration, SG24-6285. There, a sample Java IMS application is enhanced to
utilize XML for input and output message processing, in addition to its existing non-XML
message processing.
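As an illustration of the event-driven parsing style involved, the following self-contained Java program uses the SAX API shipped with the JDK; the order document and its element names are invented for this example and are not an IMS message format:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class XmlMessageParse {
    // Collect the element names of an XML document using SAX callbacks
    static List<String> elementNames(String xml) {
        final List<String> names = new ArrayList<>();
        DefaultHandler handler = new DefaultHandler() {
            @Override
            public void startElement(String uri, String localName,
                                     String qName, Attributes attributes) {
                names.add(qName);
            }
        };
        try {
            SAXParserFactory.newInstance().newSAXParser().parse(
                new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)),
                handler);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return names;
    }

    public static void main(String[] args) {
        // A small document of the kind an IMS transaction might carry
        String msg = "<order><customer>IBM</customer><item>Widget</item></order>";
        System.out.println(elementNames(msg)); // prints [order, customer, item]
    }
}
```

An IMS Java application would apply the same callback pattern to the XML carried in the message data, using the SAX APIs of the OS/390 XML parser.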


Chapter 9. Java enhancements for IMS and WebSphere
In this chapter we describe the following Java enhancements in IMS Version 8:
WebSphere 4.0.1 support
J2EE Connector Architecture (JCA)

The following Java enhancements are also available in IMS Version 8 (and in many cases
retrofitted to IMS Version 7) and most are covered in the IBM Redbook IMS Version 7 Java
Update, SG24-6536.

DB2 stored procedures support


CICS Java access to IMS data - IMS DB access through DRA
Java dependent regions using persistent reusable Java Virtual Machine (JVM)
IMS Transaction Manager Java applications - IMS applications run in JVM regions
JDBC 2.0/2.1 enhancements
New SQL keyword support
Java tooling to generate IMS metadata - DLIModel
IMS Connector for Java
XML and IMS

In addition, overviews of these items are presented in Chapter 8, "Application enablement" on
page 95.
The topic of this chapter is WebSphere 4.0.1 support and related topics that have not been
covered elsewhere.

Copyright IBM Corp. 2002. All rights reserved.


9.1 WebSphere 4.0.1 support


WebSphere 4.0.1 Java applications can access IMS databases directly via the IMS open
database access (ODBA) interface. You can use the classes in the IMS Java library to build a
WebSphere for z/OS Enterprise Java Bean (EJB) that accesses IMS data when WebSphere
and IMS are running on the same z/OS image. The WebSphere EJB you build can access
both IMS full-function and data-entry databases using the IMS ODBA interface.
To provide this capability in WebSphere for z/OS, the IMS Java class library implements the
J2EE Connector Architecture Resource Adapter interfaces, referred to below and elsewhere
in this chapter as the IMS JDBC Resource Adapter.
Figure 9-1 shows CICS, IMS Transaction Manager, DB2 stored procedures, and WebSphere
accessing IMS databases.

[Architecture diagram: Java applications in CICS (via JCICS), DB2 stored procedures, IMS
dependent regions (MPP, BMP, IFP, JMP, JBP), and EJBs in WebSphere Application Server
(WAS), each running in a JVM, access IMS DB/DBCTL through the DRA or ODBA by way of
the JNI and the CEETDLI interface. IMS Java tooling builds the DLI database view from the
DBDGEN and PSBGEN output (DBDLIB, PSBLIB) and COPYLIB members.]
Figure 9-1 IMS Java: the big picture

9.2 J2EE architecture


Accessing IMS data from WebSphere is very similar to accessing data in other supported
environments. However, one difference is that you must acquire a Java Database
Connectivity (JDBC) Connection object from a DataSource object that you've looked up in the
Java Naming and Directory Interface (JNDI) namespace, after you've deployed an instance
of the IMS JDBC Resource Adapter using the WebSphere Administration tools.
Figure 9-2 shows the flow from the Web applet, through the Web container to the EJB
container and finally the IMS data. You cannot use DriverManager.getConnection to create a
JDBC connection, nor can you use DLIConnection.createInstance to create a DLIConnection.
However, once you have acquired a JDBC Connection from the DataSource in JNDI, you do
have the ability to retrieve the DLIConnection used in its implementation and use it for the
small number of features not supported in JDBC.


[Diagram of the J2EE containers: the applet container, the Web container (JSPs and
servlets), the EJB container, and the application client container, each built on J2SE with the
standard services (JDBC, JNDI, JMS, JTA, JavaMail, JAF, RMI/IIOP). Requests flow over
HTTP/SSL from the applet and application client through the Web container to the EJB
container, and finally over JDBC to the database.]
Figure 9-2 J2EE architecture

9.3 DataSource
The DataSource is a factory for connections to a physical data source and is the only way to
obtain a connection using the J2EE architecture in a managed environment (WebSphere with
IMS always uses a managed environment); it is an alternative to the DriverManager facility
used in unmanaged environments.
Typically the DataSource is registered with a naming service based on the Java Naming and
Directory Interface (JNDI) API, as shown in Figure 9-3.
The DataSource objects have properties that can be modified when necessary, so code
accessing the data source does not need to be changed.


[Diagram: an administrator uses the Admin tool to deploy javax.sql.DataSource objects (for
example "PhonebookDB", with DRA name IMSC and database view
DFSIVP37DatabaseView, plus "OtherDB" and "AnotherName") into the JNDI namespace;
applications then look them up by name.]
Figure 9-3 DataSource and JNDI

As mentioned earlier, the WebSphere IMS application runs in a managed environment. The
DataSource is deployed in the JNDI namespace using the WebSphere Application Server for
z/OS and OS/390 Administration tool.
The application (EJB) makes a request for the DataSource and acquires a Connection from it;
see Example 9-1.
Example 9-1 Request for connection
Context ctx = new InitialContext();
DataSource dataSource = (DataSource)ctx.lookup("PhonebookDB");
Connection con = dataSource.getConnection();

For the WebSphere IMS application running in the JVM, the required Java Development Kit
(JDK) and JDBC levels are:
JDK 1.3
JDBC 2.1

The application is run as Enterprise Java Beans (EJBs), using the J2EE Connector
Architecture and accessing IMS through ODBA using the DRA.


9.4 Enterprise Archive (.ear)


The Enterprise Archive (.ear) file contains the whole application. It consists of two other
archive files, the Web Archive (.war) and the Java Archive (.jar); see Figure 9-4. The .war file
contains the Web side of the application, while the .jar file contains the Home interface, the
Remote interface, and the Enterprise Java Bean (EJB). The EJB is the application. You need
to generate the Home interface and the EJB; the Remote interface is generated from the
Home interface.
In this chapter we are setting up and running the IMS supplied WebSphere sample for the
Phone book application.

[Diagram: the Enterprise Archive (.ear) contains a Web Archive (.war) holding the HTML,
servlet, and JSP components, and a Java Archive (.jar) holding the Remote interface, the
Home interface, and the EJB. The EJB reaches IMS through JDBC/SQL, the DB base
classes, JNI, and the CEETDLI interface, with JNDI linking the two archives.]
Figure 9-4 The enterprise archive

The IMS WebSphere phone book sample consists of the following major components:

The HTML to initiate the WebSphere transaction


The Home Interface
The Remote Interface
The Application (EJB)

9.5 Deploying the ear file


We will deploy the .ear file for the phone book application supplied by IBM with IMS Version 8
as one of the samples to be found in the samples.tar file. The .ear file consists of a .war file
and a .jar file. The names and contents are listed in Example 9-2.


Example 9-2 The .ear file and its contents


/path/imsjava81.ear
/path/IMSJdbcIVPEJB.jar
/path/IMSJdbcIVPWeb.war

To deploy an application in WebSphere for z/OS that uses this resource adapter, you need to
do the following:
1. Configure the WebSphere server region for access to a particular IMS system.
2. Obtain the WebSphere for z/OS System Administration tool.
3. Install an IMS JDBC Resource Adapter into a WebSphere J2EE server region.
4. Configure and deploy an instance of the IMS JDBC Resource Adapter.
5. Configure and deploy an Enterprise Archive containing an EJB that references an
   instance of the IMS JDBC Resource Adapter, and bind that reference to a resource
   adapter installed in the J2EE server.

Note: The following instructions assume that you have installed and correctly configured a
WebSphere for z/OS and OS/390 4.0.1 or later system and have successfully executed
the WebSphere IVP program. They also assume that your environment in Unix System
Services has been configured to access tools (the Java jar utility) in the IBM Developer Kit
for OS/390, Java Technology Edition.

Restrictions
The EJB that you build to access IMS data is restricted in several ways:
The IMS JDBC Resource Adapter requires that a global transaction exist before you create
a JDBC Connection; therefore the EJB must use Connection objects in a Get-Use-Close
model. That is, the java.sql.Connection object must be acquired (via
javax.sql.DataSource.getConnection), used, and closed (via java.sql.Connection.close)
within the scope of a transactional method. You accomplish this either by specifying
container-demarcated transactions in the EJB deployment descriptor, so that the
transaction is started by the EJB container prior to dispatching the EJB method, or by
explicitly beginning a global transaction by calling
javax.transaction.UserTransaction.begin within the EJB method prior to getting a
Connection from a DataSource.
As in the other IMS supported environments, local transactions are not supported;
therefore you cannot use the JDBC Connection methods commit, rollback, or
setAutoCommit.
Component-managed signon is not supported. See 9.5.1, "Configure the WebSphere
server region for IMS access" on page 114 for further details concerning security.
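The Get-Use-Close model with an explicitly started global transaction can be sketched as follows; this is an outline only, and the ejbContext variable, the lookup name, and the SQL are illustrative assumptions:

```java
// Inside an EJB method: a global transaction must exist before getConnection
UserTransaction tx = ejbContext.getUserTransaction();
tx.begin();
DataSource ds = (DataSource) new InitialContext().lookup("PhonebookDB");
Connection con = ds.getConnection();   // Get
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery(      // Use
    "SELECT lastName FROM EMPLPCB.Employees");
while (rs.next()) {
    // ... process each row ...
}
rs.close();
stmt.close();
con.close();                           // Close, still inside the transaction
tx.commit();                           // commit the global transaction,
                                       // never Connection.commit
```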

For further information on building applications that execute in a WebSphere for z/OS
environment, see WebSphere Application Server V4.0.1 for z/OS and OS/390: Assembling
J2EE Applications, SA22-7836. For further information regarding installing the IMS JDBC
Resource Adapter in WebSphere for z/OS, see WebSphere Application Server V4.0.1 for
z/OS and OS/390: Installation and Customization, GA22-7834.

9.5.1 Configure the WebSphere server region for IMS access


A WebSphere EJB accesses IMS data using the ODBA interface. In turn, the ODBA interface
uses the IMS database resource adapter (DRA) to access IMS full-function and data entry
databases. Prior to accessing IMS databases using ODBA, you must configure and deploy a


DRA startup table that identifies the particular IMS you will be using and characteristics of the
connection to that IMS. This process is described in the topics "Accessing IMS Databases via
the ODBA Interface" and "The DRA Startup Table" in IMS Version 8: Installation Volume 2:
System Definition and Tailoring, GC27-1298. See Example 9-3 for an example of a DRA
startup table.
Example 9-3 DRA startup table
//DFSPZPIV EXEC PROC=ASMDRA,MBR=DFSIMSC0
//ASM.SYSIN DD *
PZP      TITLE 'DATABASE RESOURCE ADAPTER STARTUP PARAMETER TABLE'
DFSPZP00 CSECT
*******************************************************************
         EJECT
         DFSPRP DSECT=NO,                                              X
               FUNCLV=1,                CCTL FUNCTION LEVEL            X
               DDNAME=CCTLDD,           DDN FOR CCTL RESLIB DYNALOC    X
               DSNAME=IMS810C.SDFSRESL,                                X
               DBCTLID=IMSC,            NAME OF DBCTL REGION           X
               USERID=,                 NAME OF USER REGION            X
               MINTHRD=001,             MINIMUM THREADS                X
               MAXTHRD=005,             MAXIMUM THREADS                X
               TIMER=60,                IDENTIFY TIMER VALUE - SECS    X
               FPBUF=000,               FP FIXED BFRS PER THREAD       X
               FPBOF=000,               FP OVFLW BFRS PER THREAD       X
               SOD=X,                   SNAP DUMP CLASS                X
               TIMEOUT=060,             DRATERM TIMEOUT IN SECONDS     X
               CNBA=001,                TOTAL FP NBA BFRS FOR CCTL     X
               AGN=IVP                  APPLICATION GROUP NAME
         END
//*

Note: The DRA for ODBA is linked as a module name that is based on the following naming
convention:
Characters 1-3 = DFS
Characters 4-7 = specified 1 to 4-byte ID
Character 8 = 0

The recommendation for the 1 to 4 byte ID is that it should be the same as the IMSID of the
IMS system to which you connect. However, this is not a requirement. Ensure that the DRA
startup table module name is not the same as the name of an existing IMS module.
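The naming convention can be spelled out with a small helper; this class is hypothetical and exists only to illustrate the rule:

```java
public class DraModuleName {
    // "DFS" + the specified 1- to 4-character ID + "0"
    static String draModuleName(String id) {
        if (id.isEmpty() || id.length() > 4) {
            throw new IllegalArgumentException("ID must be 1 to 4 characters");
        }
        return "DFS" + id + "0";
    }

    public static void main(String[] args) {
        // Using the IMSID of the target IMS system, as recommended
        System.out.println(draModuleName("IMSC")); // prints DFSIMSC0
    }
}
```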
In our example, the DRA module name is DFSIMSC0. This is important when deploying the
application, as the DRA suffix will be defined to WebSphere as IMSC. After you have
assembled a DRA startup table and linked it into a load library, you must update the started
task JCL for the WebSphere J2EE server region where your EJB executes so that it has
access to the following libraries:
Load library containing the DRA startup table
Load library containing the DRA startup and router routines (usually SDFSRESL)
Partitioned data set (SDFSJLIB) containing the IMS Java native libraries (DFSCLIB)

You provide this access by concatenating the appropriate load libraries to the STEPLIB in the
JCL that starts the J2EE server region of the J2EE server instance. The J2EE server instance
consists of:
A control region that receives and queues client requests to the z/OS or OS/390 workload
manager (WLM).


One or more server regions (z/OS or OS/390 address spaces). A server region consists of
several functions that work together to run and manage your application's code. A Java
virtual machine (JVM) runs in a server region address space; your application
components will run in this JVM. WLM starts additional server regions depending on the
volume of incoming requests. Add the appropriate libraries to the STEPLIB of this JCL.

See Example 9-4 for the J2EE server instance JCL for the control and the server regions.
The required additions are the IMS810C.SDFSRESL and IMS810C.SDFSJLIB DD
statements in the server region STEPLIB. We placed our IMS DRA module into
IMS810C.SDFSRESL. If a different load library is used for the DRA module, it also has to be
concatenated to the STEPLIB, and you must make sure that the library is APF authorized.
Example 9-4 The control region and the related server region JCL for J2EE server instance
J2EE server instance control region JCL:
//IMOASR2  PROC SRVNAME='IMOASR2A',
//         PARMS=''
// SET RELPATH='controlinfo/envfile'
// SET CBCONFIG='/WebSphereIM/CB390'
//IMOASR2  EXEC PGM=BBOCTL,REGION=0M,
//         PARM='/ -ORBsrvname &SRVNAME &PARMS'
//STEPLIB  DD DISP=SHR,DSN=DB7L7.SDSNEXIT
//         DD DISP=SHR,DSN=DB7L7.SDSNLOAD
//BBOENV   DD PATH='&CBCONFIG/&RELPATH/&SYSPLEX/&SRVNAME/current.env'
//CEEDUMP  DD SYSOUT=*,SPIN=UNALLOC,FREE=CLOSE
//SYSOUT   DD SYSOUT=*,SPIN=UNALLOC,FREE=CLOSE
//SYSPRINT DD SYSOUT=*,SPIN=UNALLOC,FREE=CLOSE
//
The related J2EE server region JCL:
//IMOASR2S PROC IWMSSNM='IMOASR2A',PARMS='-ORBsrvname '
// SET CBCONFIG='/WebSphereIM/CB390'
// SET RELPATH='controlinfo/envfile'
//IMOASR2S EXEC PGM=BBOSR,REGION=0M,TIME=NOLIMIT,
//         PARM='/ &PARMS &IWMSSNM'
//STEPLIB  DD DISP=SHR,DSN=BBO53.SBBOULIB
//         DD DISP=SHR,DSN=IMS810C.SDFSRESL
//         DD DISP=SHR,DSN=IMS810C.SDFSJLIB
//         DD DISP=SHR,DSN=DB7L7.SDSNEXIT
//         DD DISP=SHR,DSN=DB7L7.SDSNLOAD
//BBOENV   DD PATH='&CBCONFIG/&RELPATH/&SYSPLEX/&IWMSSNM/current.env'
//CEEDUMP  DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SOUT     DD SYSOUT=*
//

In addition to providing access to the DRA startup table and the DRA code, you must
establish and define the connection security and PSB security to use for security control. See
"Establishing and Defining Security" in IMS Version 8: Installation Volume 2: System
Definition and Tailoring, GC27-1298. Also, reference IMS APAR PQ50230 for further
information on security in an ODBA environment.


9.5.2 Obtain the WebSphere for z/OS System Administration tool


This section describes how you can use the WebSphere for z/OS System
Administration tool to deploy the IMS IVP Web application into the OS/390 or z/OS server.
The detailed information about this tool can be found in the manual WebSphere Application
Server V4.0.1 for z/OS and OS/390: System Management User Interface, SA22-7838.
The WebSphere for z/OS System Administration tool is shipped with WebSphere for z/OS
and must be downloaded from the Unix System Services environment on the host to the
workstation, by using FTP for instance. The required file is bboninst.exe; if you use the
default path names when installing WebSphere for z/OS, the fully qualified file name is:
/usr/lpp/WebSphere/bin/bboninst.exe

Once the above file is on the workstation, click bboninst.exe, and it will install itself. To invoke
the tool:
Click Select>Programs>WebSphere for z/OS>Administration
Figure 9-5 shows the WebSphere administration tool startup window.

Figure 9-5 WebSphere Application Server administration tool startup window

After the startup window, the Login window is presented. Enter your user ID and password.
The user ID and password must be valid for your z/OS system. In addition, the user ID must
have been previously defined as an administrator to WebSphere for z/OS. See WebSphere
Application Server V4.0.1 for z/OS and OS/390: Installation and Customization, GA22-7834,
for the details on how to define an administrator user ID. The manual also contains the
description of the other options, that you can modify by selecting the options push button from
the Login window. Figure 9-6 shows the Login window.


Figure 9-6 Login window

The tool is operated by creating a new conversation consisting of the desired components.
The conversation is then validated, committed, completed, and activated to implement the
desired changes in WebSphere. Once activated, any subsequent changes require a new
conversation. The tool maintains a trail of these conversations. Figure 9-7 shows the existing
conversation list window.

Figure 9-7 Existing conversations list

9.5.3 Install an IMS JDBC Resource Adapter into a WebSphere server region
The J2EE Connector Architecture defines a Resource Archive (RAR) as the mechanism to
package a resource adapter for deployment into a J2EE Server. As WebSphere for z/OS
does not currently support the automatic deployment of a resource adapter via the Resource
Archive, you are required to manually install the contents of the IMS JDBC Resource Adapter,
imsjava.rar, into a Hierarchical File System (HFS) location that the WebSphere for z/OS
J2EE server can access, and configure the J2EE server for access to its contents. To install
and configure the IMS JDBC Resource Adapter, perform the following steps:


1. On the same z/OS or OS/390 system on which IMS and WebSphere for z/OS are
installed, create an HFS work directory that permits both read and write authority. Use a
meaningful name for the directory; for example:
/usr/lpp/connectors

2. In the install directory for IMS Java, look for the file imsjava.rar and copy it into the work
directory you created in the previous step. If your installation used the default directory for
installing IMS, you will find the imsjava.rar file in the directory:
/usr/lpp/ims/imsjava81

3. Under the work directory, expand the imsjava.rar file by entering the following command:
jar -xvf imsjava.rar

As a result of the command, a number of files are expanded into the directory. The files
include the following:
IMSJdbcCustomService.xml
howto.html
Note: It is highly recommended to read the howto.html file as it contains the latest
information associated with installing the IMS JDBC resource adapter.

4. Using the WebSphere for z/OS System Administration tool, create and activate a
conversation that performs the following steps to define the IMS JDBC Resource Adapter
as a J2EE resource for a WebSphere for z/OS J2EE server.
To add a new conversation, choose Selected>Add, as shown in Figure 9-8.

Figure 9-8 Selected and Add for new conversation

After clicking Add, two fields appear in the window, where you enter a name and a
description for the conversation. The name can be up to 256 characters long, and the
description up to 4096 characters of descriptive text. In our example, we give the name
SDM005E to our conversation, as shown in Figure 9-9.


Figure 9-9 Enter new conversation name

5. Modify the properties for the sysplex containing the J2EE server by checking the box
labelled Connection Management within the Configuration Extensions section. Expand
the new conversation (SDM005E) and sysplexes, click Selected>Modify, and check the box
labelled Connection Management within the Configuration Extensions section; see
Figure 9-10.

Figure 9-10 Check box Connections Management within Configuration Extension

6. Modify the properties for the J2EE Server to update the CLASSPATH and LIBPATH
environment variable values:
Select CLASSPATH from the Environment Variable List window as shown in
Figure 9-11, and add the full directory name for the file imsjava.jar shipped with IMS
Java. The default path name is the following, if you have not changed it during the IMS
Java installation:
/usr/lpp/ims/imsjava81/imsjava.jar


In our example, we have the following path name for the imsjava.jar file:
/SC53/imsv8/imsjava81/imsjava.jar

Figure 9-11 CLASSPATH update selection

Clicking the variable opens the Environment Viewing Dialog window as shown in
Figure 9-12. Check that the entered value is correct and then update the LIBPATH
environment variable correspondingly.

Figure 9-12 Updated CLASSPATH variable

Select LIBPATH from the Environment Variable List window as shown in Figure 9-11, and
add the full name of the directory containing the file libJavTDLI.so shipped with IMS.
For example:
/usr/lpp/ims/imsjava81

7. Install the file IMSJdbcCustomService.xml as a WebSphere Custom Service. The IMS
Custom Service handles IMS initialization and termination for the WebSphere J2EE server
region. Failure to install the service will result in PSB allocation failures when using the
IMS JDBC Resource Adapter. Installation consists of adding a property to the JVM for the
WebSphere server region that specifies an XML file describing the IMS Custom Service.
The custom service itself is contained within the IMS Java library code, in imsjava.jar.
To install the IMS JDBC custom service, edit the jvm.properties file associated with each
J2EE server region that will access IMS data (remember, you can get the path from your
J2EE server started task JCL DDname BBOENV), and add a property that identifies the

location of the IMSJdbcCustomService.xml file, which you extracted into the work directory
above. If your work directory was /usr/lpp/connectors, then add the following line to the
jvm.properties file as in Example 9-5.
Example 9-5 Update to jvm.properties file
com.ibm.websphere.preconfiguredCustomServices=/usr/lpp/connectors/IMSJdbcCustomService.xml

9.5.4 Configure and deploy an instance of the IMS JDBC Resource Adapter
To access a PSB in IMS from an EJB in WebSphere for z/OS, you need to create and deploy
a configured instance of the IMS JDBC Resource Adapter. Primarily this involves identifying
the IMS system to be accessed, by specifying the DRA Startup Table name, and the PSB and
associated database metadata needed to access that PSB, by specifying the
DLIDatabaseView subclass that provides that information. See IMS Version 8: Java User's
Guide, SC27-1296, for a description of how to build a DLIDatabaseView subclass to access
an IMS database.
Note: If you wish, you may defer specification of the DLIDatabaseView subclass until
runtime, following the lookup of the IMSJdbcDataSource from JNDI. By deferring the
specification of the DLIDatabaseView subclass name, you can use the same resource
adapter instance to access multiple IMS PSBs using the same DRA.

To defer specifying the DLIDatabaseView name, enter any name for the DLIDatabaseView
subclass at deployment, and add code similar to that shown in Example 9-6 to your EJB to call the
method IMSJdbcDataSource.setDatabaseViewName following the creation of the
connection.
Example 9-6 Code for setting the DLIDatabaseView name at runtime
// Lookup the DataSource in JNDI
IMSJdbcDataSource dataSource =
(IMSJdbcDataSource)(initialContext.lookup("java:comp/env/jdbc/MyResourceAdapter"));
// Update the DatabaseView name in the DataSource
dataSource.setDatabaseViewName("MyDatabaseViewName");
// Create the Connection from the DataSource
Connection connection = dataSource.getConnection();

For the purposes of the IMS IVP sample, however, we will specify the actual DLIDatabaseView
using the WebSphere for z/OS System Administration tool. We use the following name:
samples.ivp.DFSIVP37DatabaseView

Using the WebSphere for z/OS System Administration tool, create and activate a
conversation that performs the following steps to define and deploy an instance of the IMS
JDBC Resource Adapter into a WebSphere for z/OS J2EE server.
1. Expand the J2EE Resources as shown in Figure 9-13.


Figure 9-13 J2EE resources

2. Create a J2EE Resource and specify IMSJdbcDataSource as the J2EE Resource Type,
as shown in Figure 9-14.

Figure 9-14 Create J2EE resource (SDM005Ares)

3. Create a J2EE Resource Instance for the resource created above; see Figure 9-15.


Figure 9-15 Resource instance (SDM005Ares_inst)

4. Configure the resource instance with the 1-4 character name identifying the DRA startup
table (IMSC in our case), and the fully qualified name of your DLIDatabaseView subclass.
Optionally, you can enable LogWriter Recording. Figure 9-16 shows the window for these
definitions.

Figure 9-16 J2EE resource instance DLIDatabaseView and DRA

To activate the changes made with the administration tool, you need to validate, commit,
complete all, and activate by right-clicking the desired conversation. You can do this now, or
you can continue with deploying the EJB part of the application and activate the changes
then. You will probably want to do it all at the end, because once you have activated a
conversation you need to create a new one to make further changes.
Note: At present it is advisable to stop the J2EE server before doing the activate step. This
should no longer be necessary once subsequent maintenance becomes available.

9.6 Configure and deploy an Enterprise Archive


You use a development tool like WebSphere Studio Application Developer to code and
package, as an Enterprise Archive or EAR, the J2EE application components that use JDBC
to access IMS databases. You then follow the WebSphere for z/OS instructions to deploy that
EAR into the server. Today, this is accomplished by importing the EAR (the tool also supports
importing just the EJB jar) into the WebSphere for z/OS Application Assembly tool to
create a resolved EAR, suitable for deployment into a WebSphere for z/OS server. Finally, you
use the WebSphere for z/OS Administration tool to install the EAR into one or more
WebSphere for z/OS J2EE servers.
When installing the EAR with the Administration tool, you will be required to bind references
in your EJB to the IMS JDBC Resource Adapter to specific instances of the resource adapter
you deployed in the J2EE Server. In the Reference and Resource Resolution window, the
Administration application displays the JDBCDataSource resource references that are
defined in the Enterprise beans or servlets in your application. For each resource reference
that needs to be an IMSJDBCDataSource, associate that resource reference with the
IMSJDBCDataSource J2EE resource instance that you defined in the previous step.
Note: Detailed instructions for using the WebSphere for z/OS System Administration tool
to deploy an instance of the J2EE Resource Adapter for the IVP EJB program, and to
deploy the IVP EJB that uses that resource adapter instance, are included in the following
section IVP for WebSphere for z/OS.

9.7 IVP for WebSphere for z/OS


The IMS Java IVP program for the WebSphere environment consists of a Web application
(a servlet and a set of JSPs) that invokes an EJB session bean in a J2EE server region.
To install the IVP application in a WebSphere for z/OS server region, do the following:
Untar the Enterprise Archive, imsjavaIVP.ear, from the IMS Java sample program tar file.
Configure an IMS JDBC Resource Adapter instance for use by the IVP EJB program.
Assemble the IVP application.
Deploy and configure the Enterprise Archive (imsjavaIVP.ear) containing the IVP EJB jar
(IMSJdbcIVPEJB.jar) and the IVP Web Archive (IMSJdbcIVPWeb.war) using the
WebSphere for z/OS Administration tool.
Update the HTTP server for access to the IVP Web application.
Test the IVP program.

9.7.1 Untar the IVP Enterprise Archive


Follow the directions in the /usr/lpp/ims/imsjava81/samples/Readme file to decompress the
samples.tar file and obtain the Enterprise Archive, imsjavaIVP.ear, for the IVP program.
Basically, what you should do is the following:
Change to the directory into which you want the untarred output to be placed: cd <path>
Decompress the samples.tar file by using the command:
tar -xvf /usr/lpp/ims/imsjava81/samples.tar

The EAR file can then be found in the directory:


<path>/samples/ivp/was/imsjavaIVP.ear

/usr/lpp/ims/imsjava81 is the default path name for IMS Java. If you have changed the path
name during the installation, use your own specified path name when locating IMS Java files.
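If you want to rehearse these steps before touching the real installation, the same pattern can be driven against a stand-in archive. Everything under /tmp/demo below, and the dummy EAR content, is invented purely for illustration; only the cd and tar -xvf pattern mirrors the documented step:

```shell
# Stand-in rehearsal of the 9.7.1 extract step. The /tmp/demo paths and the
# dummy EAR are hypothetical; only the cd/tar -xvf pattern is from the text.
mkdir -p /tmp/demo/src/samples/ivp/was /tmp/demo/out
echo "dummy" > /tmp/demo/src/samples/ivp/was/imsjavaIVP.ear
tar -cf /tmp/demo/samples.tar -C /tmp/demo/src samples   # build a stand-in samples.tar
cd /tmp/demo/out                     # set the directory for the untarred output
tar -xvf /tmp/demo/samples.tar       # the documented extract command
ls samples/ivp/was/imsjavaIVP.ear    # the EAR appears under <path>/samples/ivp/was
```

Against the real samples.tar, the only lines you would run are the cd and the tar -xvf shown in the text.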
Chapter 9. Java enhancements for IMS and WebSphere

125

Using FTP, transfer the file to a directory on the workstation for use by the WebSphere for
z/OS Administration tool.

9.7.2 Configure an IMS JDBC Resource Adapter instance for the IVP EJB
Changes to a WebSphere Server region are currently accomplished using the WebSphere for
z/OS Administration tool. To install and configure the IMS JDBC Resource Adapter for use by
the IVP program, do the following:
1. Start the WebSphere for z/OS Administration application on the Desktop and connect to
the z/OS server by entering the server machine, user name, and password.
2. Create a new conversation. To create a conversation, highlight Conversations and then
choose Add from the context menu (right mouse) or menu bar (Selected>Add).
In the Conversation window that is displayed, enter a Conversation Name (such as
InstallDataSources) and description (such as Install DataSource for IMS IVP).
Save the changes (Save Icon or Save on Selected menu). You should see the new
conversation listed in the view.
3. Once the new conversation is added, expand the conversation hierarchy (by
double-clicking) down to and including the specific sysplex where the example EJB
application will be installed. You should see the folder J2EE Resources.
4. Define a J2EE Resource for the IVP:
Right-click J2EE Resources and select Add. In the J2EE Resource window displayed on
the right, add the following:
a) J2EE Resource Name: IMSJdbcResource
b) J2EE Resource Type: IMSJdbcDataSource
Save the changes.
5. Define a Resource Instance for the IVP program to associate a target IMS with the
DataSource:
Double-click the J2EE Resources and then double-click the IMSJdbcResource resource
that was added. This will display J2EE Resource Instances. Right-click J2EE Resource
Instances and select Add. In the J2EE Resource Instance window displayed on the right
of the Systems Management EUI window, add the following information:
J2EE Resource Instance Name: IMSJdbcIVPDataSource
System Name: the name of the system where you'll run the server
Input Properties:

(Optional) LogWriter Recording: Enable

(Required) DatabaseView subclass name: samples.ivp.DFSIVP37DatabaseView

Enter the fully qualified name of the DLIDatabaseView subclass that identifies the
metadata for an IMS program status block (PSB).

(Required) DRA Startup Table Name: SYS1 (the 4-character identifier for the DRA
Startup Table)

The one- through four-character alphanumeric identifier of a database resource adapter
(DRA) startup table that identifies the IMS subsystem with which the IMS JDBC
resource adapter is to communicate.

Save the definition.

9.7.3 Import, deploy and export the IVP application


Using the WebSphere for z/OS Application Assembly Tool, import the Enterprise Archive
(imsjavaIVP.ear), deploy it, and export the updated Enterprise Archive (referred to below as
imsjavaIVP.ear).

9.7.4 Deploy and configure the Enterprise Archive (imsjavaIVP.ear)


The IVP Enterprise Archive contains the IVP EJB jar (IMSJdbcIVPEJB.jar) and the IVP Web
Archive (IMSJdbcIVPWeb.war). To install the IVP Enterprise Archive:
1. In the Administration Application tree, select the server where you wish to install the EAR.
Figure 9-17 shows that we selected the server named IMOASR2 in our example.

Figure 9-17 Select Server where EAR to be installed

2. Choose Install J2EE Application from the Selected menu bar as in Figure 9-18. The
Install J2EE Application dialog box appears.

Figure 9-18 J2EE application installation drop down

3. In the dialog box (Figure 9-19), enter the following values:


The name of the IVP EAR file, imsjavaIVP.ear.
The name of the FTP server for the sysplex in which you want to install your application.


Figure 9-19 EAR file distribution dialog

Click OK.
4. Click the button Set Default JNDI path and names for all beans from the next window,
as shown in Figure 9-20.

Figure 9-20 Resource and resolution dialog

5. Expand both IMSJdbcIVPEJB and IMSJdbcIVPWeb_WebApp.java


6. Click WASIVPSession.
In the EJB folder, set the JNDI path and names for all beans (Figure 9-21) as follows:

JNDI Path    Clear this field
JNDI Name    Use samples.ivp.was.WASIVPSessionHome

Figure 9-21 JNDI path and names prompts

7. In the J2EE Resource folder (Figure 9-22) associate the J2EE resource you defined
above, that is, IMSJDBCIVPDataSource, with the resource reference, SDM005ARes.

Figure 9-22 J2EE resources dialog

Click the J2EE Resource field and select your resource instance, in our case SDM005ARes,
as shown in Figure 9-23.


Figure 9-23 Instance selection

Select OK to install IVP. The Systems Management tool will use the Destination FTP
Server you specified to FTP the application into the HFS on that server.
8. Validate, commit, complete all, and activate the conversation to update the J2EE Server.

9.7.5 Update the HTTP Server for access to the IVP Web application
The details differ if you have a separate Web server, as we did. The webcontainer.conf file is
pointed to by the J2EE server DD name BBOENV; in our case its path was
/SC53/WebSphereIM/CB390/controlinfo/envfile/WTSCPLX1/IMOASR2A

1. Update webcontainer.conf to contain the context root specification for the IVP program.
Add the following string to the end of variable "host.default_host.contextroots":
/IMSJdbcIVPWeb, /IMSJdbcIVPWeb/*

Example 9-7 shows the sample definition for the host.default_host.contextroots variable in
webcontainer.conf after our updates.
Example 9-7 Sample host.default_host.contextroots
host.default_host.contextroots=/IMSJdbcIVPWeb, /IMSJdbcIVPWeb/*, /
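If you prefer to script the webcontainer.conf change rather than edit it by hand, a sed substitution can splice the IVP context roots ahead of the existing catch-all entry. The temporary file below stands in for your real webcontainer.conf, and the sed pattern is only a sketch, not part of the product instructions:

```shell
# Sketch: inserting the IVP context roots ahead of the existing "/" catch-all.
# /tmp/demo-webcontainer.conf is a stand-in for your real webcontainer.conf.
conf=/tmp/demo-webcontainer.conf
echo 'host.default_host.contextroots=/' > "$conf"
result=$(sed 's|contextroots=|contextroots=/IMSJdbcIVPWeb, /IMSJdbcIVPWeb/*, |' "$conf")
echo "$result"
```

The printed line matches the value shown in Example 9-7.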

2. Add a new Service entry, /IMSJdbcIVPWeb/*, in httpd.conf (all on one line). This file is in
the Web server you are using, in our case /web/rose/httpd.conf. Example 9-8 shows the
required Service entry in our case.
Example 9-8 Service entry
Service /IMSJdbcIVPWeb/* /usr/lpp/WebSphere/WebServerPlugIn/bin/was400plugin.so:
service_exit


9.8 Test the IVP application


To test the IVP application, start a Web browser and enter the Internet address:
http://host_address:port/IMSJdbcIVPWeb/WASIVP.html

Here, host_address is the IP address of the WebSphere HTTP server, and port is the port
address specified in your /path/webcontainer.conf file variable host.default_host.alias, as
shown in Example 9-9.
Remember you can get the path from your J2EE server start task JCL DDname BBOENV.
Example 9-9 host.default_host.alias
host.default_host.alias=wtsc53oe.itso.ibm.com:9808,SC53:9808

In our case, these values are specified in the file:


/SC53/WebSphereIM/CB390/controlinfo/envfile/WTSCPLX1/IMOASR2A/webcontainer.conf
The values are:
wtsc53oe.itso.ibm.com for host_address
9808 for port
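Putting our values together, the test URL is composed as follows; the host name and port below are the ones from our configuration, so substitute your own:

```shell
# Compose the IVP test URL from the host.default_host.alias values shown above.
host_address=wtsc53oe.itso.ibm.com   # host from host.default_host.alias (our system)
port=9808                            # port from the same variable (our system)
url="http://${host_address}:${port}/IMSJdbcIVPWeb/WASIVP.html"
echo "$url"
```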

You should see the input panel shown in Figure 9-24. Entries can be displayed, added,
deleted, or updated. The example will display MUNRO, as that entry has already been added
to the phone book database IVPDB2.

Figure 9-24 WebSphere IVP sample input panel

First, you enter a last name (MUNRO in our example). Next, you click the radio button
Display an entry and click Submit. You will see the response shown in Figure 9-25.


Figure 9-25 WebSphere IVP sample output panel

9.9 Error logging and tracing in WebSphere for z/OS


The IMS JDBC Resource Adapter supports basic error logging and tracing by implementing
the following methods:
ManagedConnectionFactory.set/getLogWriter
DataSource.set/getLogWriter

The design of tracing in the J2EE Connector Architecture does not mesh cleanly with that
previously defined in the IMS Java class libraries. In the Connector Architecture, all tracing is
tied to a PrintWriter of a particular ManagedConnectionFactory object. Different
ManagedConnectionFactory objects can use different PrintWriter objects. This is reflected in
the final trace log by headers that identify the ManagedConnectionFactory.
Before the Connector Architecture was implemented, all tracing in the IMS Java class
libraries occurred on a single Writer object. There is no ability to distinguish trace entries in
low-level service routines based upon which higher-level object, such as
ManagedConnectionFactory, invoked that service routine.
To accommodate these design differences, the IMS Java objects that implement the
Connector Architecture (the com.ibm.connector2.ims.db package), write trace information
(currently minimal) to the PrintWriter associated with a ManagedConnectionFactory and they
write trace information to the IMSTrace object. By default, these trace entries to the IMSTrace
object are independent of the ManagedConnectionFactory and turned off.


Using DataSource.setLogWriter, you can merge the tracing that occurs to IMSTrace with that
of the ManagedConnectionFactory PrintWriter object. You do this by calling this method with
the PrintWriter returned from a call to DataSource.getLogWriter as follows:
dataSource.setLogWriter(dataSource.getLogWriter());

The only PrintWriter that can be passed to DataSource.setLogWriter is the one returned from
DataSource.getLogWriter. Any other PrintWriter is ignored.
Here are some other debug options that may provide better results:
Use IMSTrace to write trace information to a file in HFS. This mechanism should be limited
to debugging and generally should not be used in deployed applications in a production
system.
Use IMSTrace to write trace information to the Java Standard Error stream and configure
WebSphere to put this information into the server region job log. See the discussion of
tracing, and in particular the setting of TRACEBUFFLOC=SYSPRINT, in the WebSphere
Application Server V4.0.1 for z/OS and OS/390: Messages and Diagnosis, GA22-7837

9.9.1 Sample trace outputs


Look at the trace output in the J2EE start task for both a successful and an unsuccessful
run. This gives a feel for what should happen and what common setup errors look like.
Other things to look out for are the DLIDatabaseView being wrongly specified and the
DLIConnection not initializing successfully at J2EE server startup, perhaps because the
IMSJdbcCustomService.xml file cannot be accessed.

Successful: Example 9-10 shows the trace from a good IVP run.

Example 9-10 Successful PSB schedule
<entry>JavaToDLI.allocatePSB(String, String, AIB)</entry>
<parm>
<parmName>psbName</parmName>
<parmChar>DFSIVP37</parmChar></parm>
<parm>
<parmName>draStartUpTableName</parmName>
<parmChar>IMSC</parmChar></parm>
<entry>JavaToDLI.execute(String, AIB)</entry>
<parm>
<parmName>function</parmName>
<parmChar>APSB</parmChar></parm>
<parm>
<parmName>AIB.resourceName1</parmName>
<parmChar>DFSIVP37</parmChar></parm>
<parm>
<parmName>AIB.resourceName2</parmName>
<parmChar>IMSC</parmChar></parm>
<parm>
<parmName>AIB.resourceName3</parmName> <parmChar>0</parmChar></parm>
<exit>JavaToDLI.execute(String, AIB)</exit>
<exit>JavaToDLI.allocatePSB(String, String, AIB)</exit>

Failed: Example 9-11 shows the trace from a failed PSB schedule. Possibly the PSB was
unavailable or not specified in the IMS system definition.
Note: PSB schedule failure: AIB return code 108, reason code 304


Example 9-11 PSB Unavailable trace


<entry>JavaToDLI.allocatePSB(String, String, AIB)</entry>
<parm>
<parmName>psbName</parmName>
<parmChar>DFSIVP37</parmChar></parm>
<parm>
<parmName>draStartUpTableName</parmName>
<parmChar>IMSC</parmChar></parm>
<entry>JavaToDLI.execute(String, AIB)</entry>
<parm>
<parmName>function</parmName>
<parmChar>APSB</parmChar></parm>
<parm>
<parmName>AIB.resourceName1</parmName>
<parmChar>DFSIVP37</parmChar></parm>
<parm>
<parmName>AIB.resourceName2</parmName>
<parmChar>IMSC</parmChar></parm>
<parm>
<parmName>AIB.resourceName3</parmName> <parmChar>0</parmChar></parm>
<entry>IMSException(String, AIB, short, String)</entry>
<parm>
<parmName>function</parmName>
<parmChar>APSB</parmChar></parm>
<parm>
<parmName>AIB.resourceName</parmName>
<parmChar>DFSIVP37</parmChar></parm>
<parm>
<parmName>StatusCodeHex</parmName>
<parmChar>4040</parmChar></parm>
<parm>
<parmName>exceptionType</parmName>
<parmChar>Unknown IMS Status code</parmChar></parm>
<data>
<dataName>ReturnCodeHex</dataName>
<dataChar>108</dataChar></data>
<data>
<dataName>ReasonCodeHex</dataName> <dataChar>304</dataChar></data>
Error Code Extension Decimal: -1708163670
<exit>IMSException(String, AIB, short, String)</exit>
<exit>JavaToDLI.execute(String, AIB)</exit>
<exit>JavaToDLI.allocatePSB(String, String, AIB)</exit>
<entry>RuntimeExceptionTrace(String)</entry>
<parm>
<parmName>message</parmName>
<parmChar>Unknown IMS Status code
Function: APSB
Status Code Hex: 4040
Return Code Hex: 108
Reason Code Hex: 304
Error Code Extension Decimal: -1708163670</parmChar></parm>
<exit>RuntimeExceptionTrace(String)</exit>


Part 3

IMS Version 8 Parallel Sysplex enhancements
In this part of the book we describe the enhancements in IMS Version 8 that are applicable to
the following types of users:
Existing users of IMS Version 6 or Version 7 in a Parallel Sysplex environment
Other users who are considering or planning to exploit sysplex functionality.

Copyright IBM Corp. 2002. All rights reserved.


Chapter 10. Coupling Facility structure management
This chapter describes the enhancements made available to Coupling Facility list structures
in the recent releases of OS/390 and z/OS. Structures used in IMS Parallel Sysplex
environments can take advantage of these enhancements.
All of the IMS-related structures can benefit from one or more of these enhancements:
System managed rebuild
System managed duplexing
Autoalter and structure full threshold processing


10.1 System managed rebuild


Structure rebuild is often referred to as structure copy. It may be used for several purposes,
including:
Moving a structure from one Coupling Facility to another:
This may be initiated by operations personnel by entering a structure rebuild command, or
by the system when connectivity falls below the rebuild percent specified in the Coupling
Facility Resource Manager (CFRM) policy. For this to be successful, the structure must
have been defined in the CFRM policy with more than one CF in the preference list
(PREFLIST).
Changing the SIZE of a structure:

This may be desirable when the structure utilization is approaching its maximum size and
the user wants to make it larger before it fills up.
An example of a structure definition in a CFRM policy is shown in Example 10-1.
Example 10-1 Structure definition in a CFRM policy
STRUCTURE NAME(IMS_MSGQ_STR0)
          SIZE(8192)
          INITSIZE(4096)
          REBUILDPERCENT(1)
          PREFLIST(CF01 CF02)

The structure rebuild command to move a structure from one CF to another is as follows:
SETXCF START,REBUILD,STRUCTURE,STRNAME=IMS_MSGQ_STR0,LOCATION=OTHER

To rebuild a structure without moving it, for example to change its size, the operator would
enter the same command but without the LOCATION=OTHER parameter. Of course, one
could do both at the same time: move it and make it larger.
For rebuild to be successful, the structure must have integrity; that is, it must not be in a
failed state, or on a Coupling Facility which has failed. In addition, user-managed rebuild was
the only type of rebuild supported for these structures. This means that if there was no active
connector to the structure, rebuild would fail. For example, to rebuild (copy) a shared queue
structure, at least one CQS must be active and connected to it.
Prior to IMS Version 8, not all IMS structures supported the rebuild function. For example, a
shared VSO structure could not be rebuilt. To move it, or resize it, it was necessary to
/VUNLOAD the VSO area and then /START it. When it was restarted, a new structure would
be allocated according to the policy SIZE and PREFLIST. All of the other IMS structures
(VSAM, OSAM, IRLM, CQS) supported the rebuild function.
In IMS Version 8, and in IMS Version 7 with appropriate APARs, system managed rebuild is
supported. The difference between system managed and user managed rebuild is that, with
system managed rebuild, an active connector does not have to be available. For example, if
all the CQSs were down, a structure rebuild command could be used to move the structure.
The only definitional requirement for enabling the system managed rebuild is to update the
CFRM Couple Data Set (CDS) to have the following definition:
ITEM NAME(SMREBLD)


10.2 Alter and autoalter


Using the ALTER function, a structure's actual allocated size (not its maximum size) and its
entry-to-element ratio can be changed. When a structure is first allocated by a connector, the
CFRM policy sets an initial size (INITSIZE) and a maximum size (SIZE). The connector sets a
ratio of entries-to-elements. This tells the Coupling Facility how to allocate space between (for
example) list entries and data elements. A ratio of 1:3 would lead to a structure with
(approximately) three data elements for every list entry.
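As a sketch of that arithmetic, a 1:3 ratio splits the available objects as follows; the total of 4096 objects is a made-up figure purely to show the split, not a real structure size:

```shell
# How a 1:3 entry-to-element ratio divides structure objects (illustrative only;
# the total of 4096 is a hypothetical figure, not a real structure size).
total=4096
entry_part=1; element_part=3
entries=$(( total * entry_part / (entry_part + element_part) ))
elements=$(( total * element_part / (entry_part + element_part) ))
echo "list entries: $entries, data elements: $elements"
```

With these numbers the split works out to 1024 list entries and 3072 data elements, that is, three elements for every entry.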
XES provides the means to alter both the current size and the entry-to-element ratio.
However, the alter command only supports the size alter function. The connector must make
any changes to the ratio. None of the IMS connectors request a change to the
entry-to-element ratio. For IMS shared message queue structures, the ratio is derived from
the OBJAVGSZ parameter in the global CQS parameter member CQSSGxxx in
IMS.PROCLIB.
In order to be able to alter the structure size, the structure definition in the CFRM policy must
specify both an INITSIZE and a SIZE. The operator can then change the current size to any
value up to the maximum SIZE. Example 10-2 shows a structure defined to enable alter.
Example 10-2 Structure definition to enable alter
STRUCTURE NAME(IMS_MSGQ_STR0)
          SIZE(8192)
          INITSIZE(4096)
          MINSIZE(2048)
          ALLOWAUTOALT(YES)
          FULLTHRESHOLD(60)
          REBUILDPERCENT(10)
          PREFLIST(CF01 CF02)

An example of a command to change the size of the structure (up or down) to 6120K is:
SETXCF START,ALTER,STRNAME=IMS_MSGQ_STR0,SIZE=6120

The only IMS connector which invokes alter internally is CQS. CQS will alter the size of a
structure up when its CQS-defined overflow threshold is reached. It does not alter the
entry-to-element ratio when it does this.
Autoalter is invoked by the system to change the size of a structure to relieve current
Coupling Facility storage constraints (make the structure smaller) or to avoid a structure full
condition (make the structure larger). While it is doing this, the entry-to-element ratio may
also be changed to more accurately reflect actual usage. Autoalter will not be invoked just to
change this ratio.
There are several user requirements to enable autoalter for a structure:
The connector must allow alter when connecting to the structure
The policy must allow alter by including both an INITSIZE and a SIZE
The policy must allow autoalter by specifying ALLOWAUTOALT(YES)

There are also some optional parameters:


The policy may include a FULLTHRESHOLD value. This value is the percent full that will
trigger autoalter. This is optional because there is a default of 80%. This must be set to
zero to disable structure full monitoring.
The policy may include a minimum size (MINSIZE) to which the structure may be
(auto)altered down. This also has a default of 75% of INITSIZE.
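As a sketch of those defaults, here is what they would work out to for the INITSIZE in Example 10-2, had FULLTHRESHOLD and MINSIZE been omitted from the policy:

```shell
# Autoalter defaults applied to Example 10-2's INITSIZE(4096), assuming
# FULLTHRESHOLD and MINSIZE were omitted:
# FULLTHRESHOLD defaults to 80%, MINSIZE to 75% of INITSIZE.
initsize=4096
default_fullthreshold=80
default_minsize=$(( initsize * 75 / 100 ))
echo "FULLTHRESHOLD=${default_fullthreshold}% MINSIZE=${default_minsize}K"
```

For INITSIZE(4096), the default MINSIZE comes out at 3072K.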


10.3 System managed duplexing


System managed duplexing, as its name implies, supports the duplexing of any structure for
which it is enabled (and for which the connector supports it). But first a little background.

10.3.1 Background
Although structures generally do not contain data that is as critical as (for example) a
database, it is nonetheless desirable to be able to recover from a structure failure, Coupling
Facility failure, or connectivity failure. This was especially true for the IMS shared queue
structures which contain the IMS message queues (not a disaster if they are lost, but certainly
painful), and the shared VSO structures (could be recovered through the standard database
recovery process, but could be very time consuming, and painful).
For the shared queue structures, CQS implemented a recovery mechanism which requires
the user to periodically take a structure checkpoint (similar to a SNAPQ) to a structure
recovery data set (SRDS); CQS then logs all changes to the shared queue structure in a
System Logger logstream (which has its own technique for failure recovery). If a structure fails,
CQS can recover it using the SRDS and the logs. While this is effective, it is not very efficient:
there is a lot of overhead in creating the structure checkpoint and then logging all the changes.
Fast Path's solution for shared VSO was to implement user duplexing. That is, Fast Path will
create and maintain two copies of a VSO structure. If one fails, then the other remains
available and database recovery is not required. Of course, since shared VSO areas are
recoverable using standard database recovery techniques, user duplexing is optional.
Other IMS structures can be rebuilt by their connectors if they fail. Each IRLM, for example,
knows the locks it holds and so can rebuild its part of the lock structure, and for OSAM and
VSAM, IMS merely invalidates all the buffers and starts over. No big deal.

10.3.2 Duplexing
IMS Version 8 (and Version 7 with appropriate APARs) supports a new CF and z/OS feature
called system managed duplexing. When enabled, the system will maintain a duplicate
copy of the structure on another Coupling Facility. If one copy fails, then the other remains
available (unless it's a disaster) and work continues without the need to recover. When (if)
another Coupling Facility is available, duality is automatically restored.
For shared VSO, this means that the user does not have to define dual structures (user
duplexing) and Fast Path does not have to maintain dual structures. For shared queues, the
SRDS and CQS logging is still in effect, but if the message queue structures are duplexed,
then CQS would not have to go through the (perhaps extensive) structure recovery scenario.
For the new resource structure, which has no recoverability of its own (only repopulation,
which is not a complete recovery), duplexing provides for recovery from a structure failure or
loss of connectivity. Duplexing does not make sense for the data sharing structures (OSAM,
VSAM and IRLM) since they can be completely rebuilt without the overhead of duplexing.


10.3.3 Enabling duplexing


There are several requirements to enable system managed duplexing:
1. Update the CFRM CDS to enable system managed duplexing:
ITEM NAME(SMREBLD)
ITEM NAME(SMDUPLEX)

2. Update and activate the CFRM policy:


Example 10-3 shows a structure defined for duplexing. DUPLEX(ALLOWED)
means that duplexing is supported but must be started by operator command.
DUPLEX(ENABLED) means that duplexing is started as soon as the structure is allocated.
Example 10-3 Structure definition for system managed duplexing
STRUCTURE NAME(IMS_RSRC_STR0)
          SIZE(8192)
          INITSIZE(4096)
          MINSIZE(2048)
          DUPLEX(ALLOWED -or- ENABLED)
          ALLOWAUTOALT(YES)
          FULLTHRESHOLD(60)
          REBUILDPERCENT(10)
          PREFLIST(CF01 CF02)

3. If duplexing is ALLOWED, start it by entering the command:


SETXCF START,REBUILD,DUPLEX,STRNAME=IMS_RSRC_STR0

10.3.4 Disabling duplexing


Duplexing can be stopped using the command:
SETXCF STOP,REBUILD,DUPLEX,STRNAME=IMS_RSRC_STR0,KEEP=OLD-or-NEW

When a structure is being duplexed, rebuild is not supported. So, to move a structure to
another Coupling Facility:
1. Stop duplexing, keeping the one that you want. This may require the CFRM policy to be
changed from ENABLED to ALLOWED.
2. Change the PREFLIST in the policy to include the CF to which you want to move the
structure (if not already in the policy). Remove the CF where you don't want the structure.
Activate the updated policy.
3. Start duplexing again, either by command or by changing the policy back to ENABLED.

10.4 Which structures support which features


System managed rebuild is supported for the following structures:

Shared VSO, Shared Queues, FP EMHQ, IRLM, and Resource


Autoalter is supported for the following structures:

Shared VSO, Shared Queues, FP EMHQ, IRLM, Resource, OSAM, and VSAM
System managed duplexing is supported for the following structures:

Shared VSO, Shared Queues, FP EMHQ, IRLM, and Resource (same as rebuild)


Chapter 11. Base Primitive Environment enhancements
In this chapter, we describe enhancements made to the Base Primitive Environment for IMS
Version 8.
Please be aware that there are many internal enhancements that are targeted only for IMS
internal development and therefore are not visible to customers.
However, in this chapter, we only describe BPE external enhancements that may be of
interest to customers, such as some new exits and commands in Version 8.
These are the main BPE enhancements we cover:
New BPE address space initialization module
User exits and statistics for BPE
Displaying the BPE and CQS version


11.1 Base Primitive Environment (BPE) enhancements


Base Primitive Environment (BPE) is a programming environment for developing new IMS
component address spaces. Common Queue Server (CQS) is an example of this, and more
and more IMS components are written to utilize the services provided by the BPE. A version
of BPE has been shipped with IMS since IMS 5.1. For IMS Version 8, the BPE version
shipped and required is 1.4. BPE 1.4 supports the Common Queue Server (CQS) and the
new Common Service Layer (CSL) components. The CQS version to be used with IMS
Version 8 is 1.3.0. BPE enhancements include the following:

New BPE address space initialization module


BPEPARSE enhancements
Buffer pool compression
Hash table services
User exit enhancements
Statistics services
DISPLAY VERSION command

Almost all of BPE is object code only (OCO), and many of the enhancements are internal
(like buffer pool compression), so they do not require any user action to obtain their
benefits. Some of the enhancements, like BPEPARSE and hash table services, are intended
for use by the people writing IMS components on BPE. The following paragraphs briefly
describe the enhancements, concentrating on the externals.

11.2 New BPE address space initialization module


Prior to BPE 1.4, each IMS component using BPE had to write its own first module to get
control in the address space when the job started (for example, CQSINIT0). This module was
specified on the PGM= parameter of the EXEC statement in the JCL, and it had to be added
to the OS/390 Program Properties Table (PPT) to allow it to get control in key 7. This module
then issued a BPESTART macro to start BPE services.
BPE 1.4 provides a module named BPEINI00, which can be used to start a new address
space. A new PARM= parameter (BPEINIT=) is used to pass the name of a data-only module
containing a static structure built by BPESTART to define the address space characteristics.
BPEINI00 verifies that the PSW key is 7 and that the program is authorized. It loads the
indicated BPEINIT= module, and then starts the BPE services.
Only one module, BPEINI00, needs to be added to the PPT. Example 11-1 shows the
required entry in the PPT.
Example 11-1 A sample PPT entry for the Base Primitive Environment
PPT PGMNAME(BPEINI00)  /* MVS SUPPLIED VALUE IS - '6870FFFF00000000'    */
    CANCEL             /* PROGRAM CAN BE CANCELLED          (DEFAULT)   */
    KEY(7)             /* PROTECT KEY ASSIGNED IS 7                     */
    NOSWAP             /* PROGRAM IS NOT-SWAPPABLE                      */
    NOPRIV             /* PROGRAM NOT PRIVILEGED            (DEFAULT)   */
    SYST               /* PROGRAM IS A SYSTEM TASK                      */
    DSI                /* DOES REQUIRE DATA SET INTEGRITY   (DEFAULT)   */
    PASS               /* PASSWORD PROTECTION ACTIVE        (DEFAULT)   */
    AFF(NONE)          /* NO CPU AFFINITY                   (DEFAULT)   */
    NOPREF             /* NO PREFERRED STORAGE FRAMES       (NODEFAULT) */

The new CSL components, Operations Manager (OM), Resource Manager (RM), and
Structured Call Interface (SCI) all use BPEINI00 to start. CQS can use either BPEINI00 or the

existing CQSINIT0. To start a CQS address space using the new BPEINI00 module, code the
following in CQS's startup procedure:
//CQS     EXEC PGM=BPEINI00,PARM='BPEINIT=CQSINI00,...'

11.3 User exits and statistics for BPE


Two new BPE-owned user exit types are created. The new BPE-defined user exit types are:
INITTERM    Initialization and termination exit. The exit is called during early BPE
            initialization and late normal BPE termination; it is not called if the CSL
            address space abends.
STATS       Statistics exit. The exit is called periodically with BPE and IMS
            component statistics. The exit call frequency is determined by the
            STATINTV= parameter of the BPECFG PROCLIB member.

These exits are available to any address space using BPE 1.4. The INITTERM exit is
provided as a way for user exits to get control when a BPE address space starts and when it
ends. The STATS exit is called periodically to provide BPE statistics and, optionally, IMS
component statistics. The statistics services of BPE have been enhanced to provide the exit
with information that includes:

General address space statistics


Dispatcher statistics
Asynchronous Work Element (AWE) server statistics
Control block services statistics
Storage statistics

New fields are added to the standard BPE user exit parameter list. There are changes in the
way user exits are defined to allow a single PROCLIB member to be used for multiple
component user exits. The BPE environment and the exits are defined in the following
PROCLIB members:
BPE configuration parameters member
BPE user exit list member

Neither of these PROCLIB members is required, because you can use default values
for the parameters, and none of the user exits are required by BPE.
See IMS Version 8: Base Primitive Environment Guide and Reference, SC27-1290 for the
detailed description of these PROCLIB members and information on the coding and use of
the user exits. This manual is a new manual for IMS Version 8 and it is provided in softcopy
only. In IMS Version 7, the BPE information was included in the book IMS Version 7 Common
Queue Server Guide and Reference, SC26-9426.

11.3.1 BPE configuration parameters member


The BPE execution environment is defined in the BPE configuration parameters member in
PROCLIB. The PROCLIB member name is specified with the BPECFG= keyword in the JCL of
the component utilizing BPE. Example 11-2 shows the contents of a BPE configuration
parameters member. The same member can be used for all address spaces.
Note: IBM recommends trace levels of at least LOW as in our example. These are incore
traces and should have very little overhead. The '*' turns on all trace types and allows for
future types without having to change or add TRCLEV statements.

Chapter 11. Base Primitive Environment enhancements

145

Example 11-2 An example of the BPE configuration parameters member


LANG=ENU
STATINTV=600
TRCLEV=(*,LOW,BPE)
TRCLEV=(*,LOW,CQS)
TRCLEV=(*,LOW,RM)
TRCLEV=(*,LOW,OM)
TRCLEV=(*,LOW,SCI)
EXITMBR=(SHREXIT0,BPE)
EXITMBR=(SHREXIT0,CQS)
EXITMBR=(SHREXIT0,OM)
EXITMBR=(SHREXIT0,RM)
EXITMBR=(SHREXIT0,SCI)

11.3.2 BPE user exit list


The user exits are defined in the BPE user exit list in PROCLIB. The BPE user exit list is
specified by the EXITMBR statements in the BPE configuration parameters member. The
EXITMBR statement defines the member name and the component using the user exit list
member. All user exits can be in the same user exit list PROCLIB member; in our example we
refer to the member name SHREXIT0 for all components. Example 11-3 shows a sample of the
contents of a BPE user exit list.
Example 11-3 An example of the BPE user exit list PROCLIB member
EXITDEF=(TYPE=STATS,EXITS=(BPESTAT0),COMP=BPE)
EXITDEF=(TYPE=INITTERM,EXITS=(RMITEXIT),COMP=RM)
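To illustrate the shape of these statements, the following Python sketch (purely illustrative and not part of BPE; it ignores BPE's continuation and comment rules) parses EXITDEF statements like those in Example 11-3 into their exit type, component, and exit names:

```python
import re

# Matches EXITDEF=(TYPE=...,EXITS=(...),COMP=...) statements as shown in
# Example 11-3. Real BPE parsing (continuations, comments) is more
# involved; this is illustrative only.
EXITDEF = re.compile(r"EXITDEF=\(TYPE=(\w+),EXITS=\(([\w,]+)\),COMP=(\w+)\)")

def parse_exitdef(line):
    """Return (exit_type, component, [exit names]), or None if no match."""
    m = EXITDEF.match(line.strip())
    if m is None:
        return None
    exit_type, exits, comp = m.groups()
    return exit_type, comp, exits.split(",")

member = [
    "EXITDEF=(TYPE=STATS,EXITS=(BPESTAT0),COMP=BPE)",
    "EXITDEF=(TYPE=INITTERM,EXITS=(RMITEXIT),COMP=RM)",
]
parsed = [parse_exitdef(s) for s in member]
```

The sketch shows why a single shared member works: each statement carries its own COMP= value, so one member can hold definitions for several components.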

11.4 Displaying the BPE and CQS versions


As we get more and more new address spaces in IMS, knowing which versions we are working
with becomes important, particularly in diagnosing problems. The new BPE modify
command DISPLAY VERSION (short form: DIS VER) is a quick and easy way to find this out on a
running system. The BPE and CQS versions can be displayed by using this command. The
same command can also be used to display the versions of the other components (CSL
components) using BPE. Example 11-4 shows the commands and the output for all the
CQSs in our test sysplex environment. Note that IM2ACQS in our sysplex was at the IMS
Version 7 level, and the command is not supported with its CQS, so the error message
BPE0031E is issued.
Example 11-4 BPE command DISPLAY VERSION
F IM1ACQS,DISPLAY VERSION
BPE0000I CQS VERSION = 1.3.0 BPE VERSION = 1.4.0 CQ1ACQS
RO SC47,F IM2ACQS,DIS VER
BPE0031E DIS VER COMMAND IS INVALID CQ2ACQS
RO SC54,F IM3ACQS,DIS VER
BPE0000I CQS VERSION = 1.3.0 BPE VERSION = 1.4.0 CQ3ACQS
RO SC67,F IM4ACQS,DIS VER
BPE0000I CQS VERSION = 1.3.0 BPE VERSION = 1.4.0 CQ4ACQS


Chapter 12. Shared queues support for APPC and OTMA synchronous messages

In this chapter we describe the IMS shared message queue support for synchronous APPC
and OTMA transactions. In IMS Version 6, when IMS shared message queues were introduced,
the support did not cover APPC and OTMA messages. In IMS Version 7, support was
added for asynchronous APPC and OTMA messages. In Version 8, the support is further
enhanced to cover all types of APPC and OTMA messages, with the exception of CPI-C
driven APPC transactions.

Copyright IBM Corp. 2002. All rights reserved.


12.1 Background
When the shared queues feature was introduced in IMS Version 6, transactions entered into
IMS from an APPC or OTMA client could be processed only on the front-end IMS, the IMS
which received that input message. To enforce this, IMS appended its own IMSID to the end
of the queue name. For example, if transaction TRANABCD were entered into IMS1 from an
LU2 device, its transaction queue name would be 1TRANABCD. That transaction could be
processed on any IMS with registered interest in TRANABCD. If that same transaction were
entered from APPC or OTMA to IMS1, it would be queued with the name
1TRANABCDIMS1. Only IMS1 would register interest in transactions queued with the
transaction name suffixed with the IMSID of IMS1 (TRANABCDIMS1), so only IMS1 was able
to process it. This restriction severely limited the benefits of shared queues for systems in
which APPC or OTMA was used.
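The effect of the IMSID suffix can be pictured with a small sketch (Python, purely illustrative; actual IMS queue names and prefixes differ in detail):

```python
def queue_name_v6(tran, source, front_end_imsid):
    """Sketch of the IMS Version 6 queuing rule described above.

    Input from APPC or OTMA gets the receiving (front-end) IMSID
    appended to the queue name, so only the front-end registers
    interest in it and only the front-end can process the message.
    Illustrative only; real IMS queue names differ in detail.
    """
    name = "1" + tran  # "1" stands in for the queue-type prefix
    if source in ("APPC", "OTMA"):
        name += front_end_imsid
    return name

# An LU2-entered transaction can be processed by any interested IMS...
assert queue_name_v6("TRANABCD", "LU2", "IMS1") == "1TRANABCD"
# ...but an APPC-entered one was bound to the front-end.
assert queue_name_v6("TRANABCD", "APPC", "IMS1") == "1TRANABCDIMS1"
```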
In IMS Version 7, support for sysplex-wide processing of asynchronous APPC and
OTMA transactions was introduced. Asynchronous APPC transactions are those whose
command sequence is ALLOCATE-SEND-DEALLOCATE. For OTMA, the corresponding
sequence is COMMIT-THEN-SEND (also known as commit mode 0). These are transactions
for which the unit of work is committed before the response is sent to the end user. The
response is sent asynchronously, whenever it becomes available. With asynchronous
transactions, it was only necessary for a back-end IMS to inform the front-end when the
message was available on the APPC or OTMA output queue. So, in IMS Version 7, these
transactions were queued with the regular transaction queue name (no IMSID appended). If
such a transaction was scheduled on a back-end IMS, then when the response was available,
a message was sent to a special queue which the front-end monitored, so the front-end knew
when to go and look for the response.
However, this still left the synchronous APPC (ALLOCATE-SEND-RECEIVE) and OTMA
(SEND-THEN-COMMIT or Commit mode 1) transactions without shared queues support. The
problem with this is that IMS does not put uncommitted messages on the shared queue.
Synchronous APPC and OTMA messages are sent before they are committed, so if the
transaction were scheduled in a back-end IMS, how could the front-end deliver the
uncommitted response? This problem has been addressed and resolved in IMS Version 8,
which now offers full shared queues support to all APPC and OTMA transactions. The new
support uses the z/OS Resource Recovery Service (RRS) multi-system cascaded
transactions support.

12.2 Implementation
Prerequisites for the shared queues support for synchronous APPC and OTMA messages
are:
All IMSs are at the Version 8 level and the RECONs specify MINVERS(81)
z/OS 1.2 (or higher) with RRS APAR OW50627
RRS must be enabled in all environments where IMS shared queues is running

If any of these are not met, then synchronous APPC and OTMA transactions will be
processed only on the front-end IMS.
When an input message is received from an OTMA or APPC client, the receiving IMS
determines whether the environment is shared queues capable and whether the request is
asynchronous or synchronous. If it is asynchronous, then the actions described above are
taken. If the message is synchronous, then IMS also checks to see if the synchronous support
is enabled. IMS stores information about the requester (TPIPE token or remote LU token),
along with the SMQ name (for example, the IMSID), in the input message prefix.

If the IMS system processing the message is the front-end IMS, then the IOPCB reply is not
put on the shared queues but rather sent directly to the partner client prior to syncpoint
processing. This is business-as-usual processing.
If the message is processed on a back-end system, then the IOPCB reply must be routed
back to the front-end IMS for delivery since it is the front-end that maintains the connection to
the client. The routing is done prior to syncpoint processing with the back-end holding the IMS
resources until an indication of a commit or abort. Note the following:
All conversational IOPCB reply messages, and messages where all segments added
together are greater than 61K, are routed through the shared queues with a special
notify sent to the front-end via XCF. A specialized OTMA/APPC task in the front-end
reads the notify message which contains the message output queue name, and notifies
the appropriate task to retrieve the message from the shared queues.
All non-conversational IOPCB reply messages less than 61K are sent directly (not through
the shared queues structure) to the front-end using XCF services.
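The two routing rules above amount to a simple decision. The following Python sketch captures it (the conditions are taken from the text, assuming "61K" means 61 x 1024 bytes; the function and names are ours, purely illustrative):

```python
def reply_route(conversational, reply_bytes, limit=61 * 1024):
    """How a back-end IOPCB reply reaches the front-end IMS.

    Conversational replies, and replies whose segments total more
    than 61K, go through the shared queues with a special notify
    sent via XCF; other replies are sent directly over XCF.
    Sketch of the rules in the text, not IMS internals.
    """
    if conversational or reply_bytes > limit:
        return "shared queues + XCF notify"
    return "direct XCF"
```

For example, a short non-conversational reply takes the direct XCF path, while any conversational reply, however small, goes through the shared queues.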

Figure 12-1 shows the general flow of a synchronous message as it is processed in an IMS
shared queue environment.

[Figure 12-1 (diagram): APPC synchronous (Allocate - Send - Receive) and OTMA
send-then-commit (commit mode 1) flow. An OTMA/APPC client sends INPUT to the front-end
IMS (IMS1) and waits for the REPLY; the message may be processed on a back-end (IMS2 or
IMS3); the front-end always delivers IOPCB replies. Non-conversational IOPCB reply
messages (less than 61K) are sent to the front-end using XCF services; conversational
IOPCB reply messages, or any messages greater than 61K, are sent to the front-end using
shared queues along with a special NOTIFY message that is sent using XCF.]

Figure 12-1 General flow of synchronous APPC or OTMA transaction in shared queues

1. When an input message is received, IMS determines if the message is synchronous, checks
   to see if the shared queues capability is enabled, and invokes RRS callable services to
   establish the RRS working environment.
2. The input message is placed on the global transaction ready queue to be processed by an
   available IMS with registered interest. In this example, the back-end system is immediately
   available to process the transaction. The back-end IMS invokes RRS callable services to
   establish the RRS environment and perform message synchronization.
3. The message is processed in the back-end and the IOPCB reply is sent to the front-end.
   All IMS resources are held pending syncpoint processing until either a commit or backout
   indicator is received. Depending on the size of the IOPCB reply and whether or not this is
   a conversational transaction, the message is either sent via XCF services or routed
   through shared queues.
4. The front-end IMS delivers the message and interfaces with the partner client following the
   rules of sync_level processing: either none, confirm, or syncpoint. Based on the success
   or failure of the partner interaction, the front-end IMS invokes RRS to either commit or
   back out.
5. The RRS commit or backout is communicated to the back-end for the corresponding
   commit or backout. RRS on both sides provides the support for the synchronization of
   commit or backout.
6. At the completion of commit or backout, the front-end IMS interacts with the partner
   program to either terminate the connection or to get the next message.

12.3 Migration considerations


The meaning of synchronous has not changed with this support. One of the following
occurs:
The partner waits for an IOPCB reply message.
If the transaction terminates without an IOPCB reply, the DFS2082 message is issued.
If the transaction processes in a back-end that abends, the new message DFS2224 is
issued.
When the interaction between IMS (front-end or back-end) and RRS is stopped (for
example, RRS crashes), an IMS user abend 0711 is forced.

When data is sent by the front-end before IMS commit processing:
If the message is processed on a front-end IMS, processing is like that in a non-shared
queues environment.
If the message is processed on a back-end IMS, both the front-end and the back-end are
part of the commit.

All sync_levels are supported: none, confirm, and syncpoint. Only CPI-C driven (explicit)
transactions are not supported, since they do not go through the shared queues. There are
no definitional requirements for this support. It is automatically available when all the
prerequisites are met.

12.3.1 Synchronous messages and program-to-program switches


It is important to determine whether or not the transactions participate in program-to-program
switches. Program-to-program switching is not supported for messages that are sent in with a
sync_level of syncpoint. This restriction is true even in a non-shared queues environment.
For messages sent in with a sync_level of none or confirm, all transactions involved in the
program-to-program switches, as a general rule, must run in the same IMS (front-end or
back-end) that processes the first message. There are some exceptions to this restriction.
Conversational transactions, whether immediate or deferred, can run in any IMS in the
shared queues group.


The other exception involves a unique situation where a message processed in a back-end
IMS issues multiple program-to-program switches: one or more to local transactions that
will be processed in the shared queues environment, and at least one to a remote transaction
that will be sent across a Multiple Systems Coupling (MSC) link. In this situation, the local
transactions must be processed in the front-end IMS, which has the connection to the APPC
or OTMA partner. It should also be noted that any replies that come back across the MSC link
from the remote IMS system will be sent to the front-end IMS for delivery.
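The placement rules in this section can be summarized as follows (Python sketch; the parameter names are ours and purely illustrative, while the rules themselves are those stated in the text):

```python
def switch_target_placement(sync_level, conversational, has_msc_target=False):
    """Where transactions reached by program-to-program switches may run.

    Sketch of the rules in the text for APPC/OTMA-entered messages;
    parameter names are ours, not IMS terms.
    """
    if sync_level == "syncpoint":
        # Not supported even in a non-shared queues environment.
        return "program-to-program switching not supported"
    if conversational:
        return "any IMS in the shared queues group"
    if has_msc_target:
        # Mixed local + MSC switches: local targets must run where the
        # APPC/OTMA connection is held.
        return "local targets must run in the front-end IMS"
    return "same IMS (front-end or back-end) as the first message"
```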

12.3.2 Error conditions


When an output IOPCB reply is sent, it is always sent before syncpoint. If the transaction that
replies to the IOPCB executes on the front-end and the reply cannot be sent, the default
action is to abend the transaction with a user abend 0119 and discard the output reply. The
Message Control/Error Exit Routine (DFSCMUX0) can be implemented to change the default
action. This is the way errors with synchronous message replies have been processed for
several releases of IMS. If the transaction that replies to the IOPCB executes in a back-end
IMS and the front-end cannot deliver the message, an RRS Take Backout is issued which
forces backout processing to occur on the back-end. It is of note that although backout
occurs, the transaction does not experience an abend condition. As in the front-end,
DFSCMUX0 can be used to change the default action.
As mentioned earlier, non-conversational transaction reply messages that are less than 61K
in length are sent to the front-end IMS using XCF services. If the back-end IMS receives an
XCF error when attempting to send the reply message to the front-end, then a user abend
0119 occurs (in addition, a log record type 67D0 subtype 02 is written to trace the back-end
EMHB), the output message is discarded, and the remote client is notified. Again, DFSCMUX0
can be coded to change the default action.
Anytime an RRS error is received, a user abend 0711 is issued and the transaction is stopped.
A log record type 67D0 subtype 02 is also written, tracing the RRS return code into the
back-end EMHB (a new trace entry).
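The default error actions described in this section can be summarized as follows (Python sketch; the labels and parameter names are ours, and as the text notes, DFSCMUX0 can change each default):

```python
def default_reply_error_action(site, xcf_send_error=False):
    """Default action when a synchronous IOPCB reply cannot be sent.

    Summarizes the rules in the text; DFSCMUX0 can override each
    default. Parameter names are ours, not IMS terms.
    """
    if site == "front-end":
        return "user abend 0119; reply discarded"
    if site == "back-end" and xcf_send_error:
        return "user abend 0119; reply discarded; client notified"
    if site == "back-end":
        # Front-end could not deliver: RRS Take Backout is issued.
        return "RRS backout forced on back-end; no transaction abend"
    raise ValueError("site must be 'front-end' or 'back-end'")
```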

12.3.3 Other miscellaneous migration considerations


It is possible for the synchronous support to be disabled while a message is being processed,
for example, if RRS is stopped. If this occurs then the transaction in the back-end will be
unable to send its reply and could receive a U0711 abend.
Existing messages that are sent to the system console when RRS is activated or deactivated
continue to be valid in IMS Version 8. These include:
DFS0698W PROTECTED CONVERSATION PROCESSING NOT ENABLED - RRS IS
NOT AVAILABLE
DFS0653I PROTECTED CONVERSATION PROCESSING WITH RRS ENABLED

Additionally, if the transaction is processing in a back-end system and the back-end fails, the
front-end is notified and a message is sent to the client which releases the synchronous wait:
DFS2224 BACK-END SYSTEM ABENDED

DFSCMUX0 continues to be invoked on the front-end and back-end when the output reply
cannot be sent. The exit can change the default abort action (except for sync-level of
syncpoint which must continue with the abort).


When the synchronous support is enabled, it is worth noting that RRS performance directly
impacts IMS performance. The primary areas to consider are the dispatching priority of the
address space as well as the setup of the logstream. Refer to: z/OS V1R2.0 MVS Setting Up
a Sysplex, SA22-7625 for more information.

12.3.4 Support considerations


As already mentioned, the synchronous APPC and OTMA shared queues enablement
supports the following transaction types (with certain dependencies or restrictions, because
only one IMS, the one holding the connection to the partner LU, can deliver the reply):

Full function transactions
Fast Path transactions
Conversational transactions
MSC routed transactions

CPI-C transactions are not supported.

Fast Path support


Since the synchronous APPC and OTMA shared queues support is also implemented for Fast
Path transactions in IMS Version 8, there are some messages worth mentioning:
If no local IFP region is active, IMS rejects the input message and issues the following new
message:

DFS2196 UNABLE TO PROCESS SHARED EMH DUE TO RRS ERROR

If no IFP region is active in the IMSplex, IMS sends the following message to the APPC or
OTMA client to take it out of response mode (not new in IMS Version 8):

DFS2529I NO FAST PATH REGION IS ACTIVE

During abort processing, if the output message has not already been sent, the following
message is now issued in the front-end IMS also for Fast Path transactions:

DFS2766I PROCESS FAILED

The following message is sent only if the Fast Path message is scheduled in an IMS other
than the APPC or OTMA session holder, that is, when the front-end IMS is not the
back-end IMS:

DFS2193I UNABLE TO PROCESS SHARED EMH

This message has two new possible return codes:
RC06: STORAGE SHORTAGE (cannot get storage for the PST LU62 Extension)
RC07: INTERNAL ERROR (DFSLUMIF does not work)


Part 4. Common Service Layer

In this part of the book we discuss a significant architectural evolution in IMS Version 8.
The Common Service Layer (CSL) is a major progression in the management and operation
of the IMS Parallel Sysplex. The components of the CSL lay the groundwork for an IMSplex.



Chapter 13. Common Service Layer (CSL) architecture

In this chapter we describe the Common Service Layer architecture and the components
which make it up. The Common Service Layer's purpose is to allow the exchange of
information between the components making up the IMSplex. An IMSplex consists of the
following address spaces, some old and some new with IMS Version 8:

IMS control region and DBRC


Common Queue Server (CQS)
Structured Call Interface (SCI) (new)
Operations Manager (OM) (new)
Resource Manager (RM) (new)

More detailed information on configuring and managing the CSL environment can be found in
Chapter 20, Common Service Layer configuration and operation on page 289.
Detailed examples and discussions of a number of the topics introduced in this chapter can
be found in Chapter 14, Sysplex terminal management on page 177.


13.1 Background
IMS has evolved over the years, from the Version 1 days when there was no data sharing, to
the current state of IMSplex enablement.
Before IMS Version 1 Release 2:
There was no data sharing capability.
In order to process a database in a batch or utility region, the database had to be
DBRed (taken offline with the /DBR command).
DBRC was used for recovery only.
Figure 13-1 shows IMS before Version 1 Release 2, and some of the early processing
restrictions.

[Figure 13-1 (diagram): a single IMS with DBRC, plus batch and utility regions with DBRC,
all accessing the databases and RECON. Annotations: no data sharing; only one IMS at a time
could access the data; databases protected by the user (DISP=OLD); to process a database in
a batch or utility region: /DBR database, process, /START database online; DBRC used for
recovery only; no authorization processing.]

Figure 13-1 Pre-Version 1.2

IMS Version 1 Release 2 provided:

Block level data sharing introduced


DBRC added database authorization processing
IRLM was added as global lock manager (maximum of two IRLMs)
IMS used IRLM for lock management and buffer invalidation
IRLMs communicated using pass-the-buck processing (VTAM or CTC link)

Figure 13-2 shows IMS Version 1.2, with the addition of DBRC for database authorization
processing, and the IRLM added as the global lock manager (maximum of two IRLMs) and for
buffer invalidation. The IRLMs communicated via VTAM or CTC links and used pass-the-buck
processing (requiring significant overhead for lock requests).


[Figure 13-2 (diagram): two IMS systems with DBRC, each with an IRLM, sharing the
databases; the IRLMs communicate via pass-the-buck (PTB); batch and utility regions with
DBRC also access the databases and RECON.]

Figure 13-2 Version 1.2 and the beginning of data sharing

IMS Version 5 started to exploit XCF and the MVS Parallel Sysplex by utilizing:

New IRLM (2.1)


Coupling Facility lock structures
Coupling Facility cache structures
XCF communications

Figure 13-3 shows an example of IMS Version 5 exploitation of Parallel Sysplex features:
Coupling Facility utilization (lock and cache structures), along with the very efficient XCF
communication utilized by the new version of IRLM (2.1).

[Figure 13-3 (diagram): two IMS systems with DBRC and IRLM 2.1, plus batch and utility
regions with DBRC, sharing databases and RECON; the Coupling Facility holds OSAM, VSAM,
and lock structures; the IRLMs communicate via XCF.]

Figure 13-3 Version 5 and sysplex exploitation

IMS Version 5 increased capacity (more IMSs could participate in data sharing: up to 32
IRLMs and up to 255 IMSs) and improved performance (CF access as opposed to PTB for
lock management and buffer invalidation).


IMS Version 6 added:
Additional shared database support:
- Shared DEDBs with Virtual Storage Option
- OSAM caching
Shared queues for full function and Fast Path EMH
VTAM generic resources support:
- Single system image to the end user (LOGON IMS)
Sysplex communications:
- Use of a CRC from an E-MCS console to send commands to all IMSs in the sysplex and
  receive responses
- Automatic Restart Manager (ARM) support

Figure 13-4 shows the addition of Coupling Facility cache structure management of DEDB
with VSO, and of OSAM buffers. As you can see, IMS was expanding to take advantage of
today's sysplex technology.

[Figure 13-4 (diagram): two IMS systems with DBRC and IRLM, plus batch and utility
regions with DBRC, sharing databases (OSAM, DEDB) and RECON; the Coupling Facility now
holds SVSO and OSAM cache structures in addition to the VSAM and lock structures; DEDB
data is stored in a CF cache structure (shared VSO), and OSAM data is stored in a CF cache
structure.]

Figure 13-4 Version 6 additional sysplex exploitation

As shown in Figure 13-5, IMS Version 6 also added VTAM generic resources support (VGR).
This provided a single system image to the end user, and allowed VTAM to route logon
requests to any IMS in the VGR group. Sysplex-wide commands were also enabled using a
common command recognition character for all related IMS systems.
IMS shared queues was added to enable any suitable IMS participating in the shared queues
group to process transactions. IMS now could have a single set of message queues for all
IMSs, with the queues stored in CF list structures. A new Common Queue Server (CQS)
address space was added to manage the queue structures.


[Figure 13-5 (diagram): two IMS systems (IMSA, IMSB) with DBRC, each with a CQS,
connected to the Coupling Facility structures (LOGR, SMQ, OSAM, VSAM, SVSO, LOCK, VGR);
an E-MCS console routes @START DB commands to both IMSs; a user enters LOGON IMS
through the VTAM network, and VTAM may route the logon request to any active IMS in the
VGR group.]

Figure 13-5 Version 6 VTAM generic resources, and sysplex command routing

IMS Version 7 added shared queues support for asynchronous OTMA and APPC
transactions and support for VTAM multinode persistent sessions (MNPS). The only thing
missing is synchronous OTMA and APPC shared queues transaction support to complete the
shared queues implementation for IMS. This functionality is provided with IMS Version 8.

[Figure 13-6 (diagram): three IMS systems (IMS1, IMS2, IMS3) with DBRC, VTAM, and
TCP/IP, with CQS address spaces, connected to the Coupling Facility structures (LOGR, SMQ,
OSAM, VSAM, SVSO, LOCK, VGR, MNPS); APPC/OTMA clients reach the IMSs through the
network.]

Figure 13-6 Version 7 and asynchronous APPC and OTMA transactions

What we've seen is the evolution of the IMSplex.


13.1.1 The IMSplex


An IMSplex is a set of IMS address spaces that are working together as a unit and are most
likely running in a Parallel Sysplex. Note that the IMSplex is not new; we are just now
formalizing the term.
As the IMSplex has developed, its capabilities and capacity have increased. For example, the
number of IMSs which may take part in data sharing has increased from two up to the current
number of 255. Even with small numbers of IMS systems participating in the IMSplex
environment, other issues related to IMS systems management and control start to become
important.

Parallel Sysplex operations

Traditionally, IMS TM systems have been managed from the master terminal, user-written
automation, and the system console or E-MCS console. Within an IMSplex with an increasing
number of members, the following issues rapidly assume increased importance and at the
same time become more difficult to accomplish:
The need to control resources consistently across multiple IMSs
The need to coordinate online changes across all IMSs, and ensure that all of the IMSs
are using the same libraries
The ability to send commands to all IMSs in the IMSplex

Shared queues
With a shared queues implementation, messages are placed on a shared queues list
structure according to destination. IMS registers interest in the queues it can process, and the
Common Queue Server (CQS) monitors these queues and informs IMS when there is work to
process. However, some issues remain:
Resource type consistency is not enforced within an IMSplex. There is no guarantee that
destination names are defined consistently on each IMS member (transactions, LTERMs,
and so on).
Resource name uniqueness is not enforced within the IMSplex. The same resource
(name) may be active on more than one IMS (NODE, LTERM, user, user ID).
Global callable services are not supported. IMS user exits cannot determine whether a
terminal or a user resource is already in use in the IMSplex.
Users with significant status cannot resume that status on another IMS in the event of an
IMS or other failure:
- Conversation mode: if a conversation is in progress on IMS1 and IMS1 fails, the user
  cannot resume the conversation on IMS2.
- Fast Path response mode: if a transaction is processing in Fast Path response mode
  on IMS1, the user cannot retrieve the response on IMS2 in the event of a failure on
  IMS1.
- Set and Test Sequence Number (STSN) values from IMS1 will not be restored if
  operations resume on IMS2 in the event of an error on IMS1.
- Command significant status, such as STOPped, TESTMFS, and EXCLUSIVE, cannot
  be resumed if the session moves from IMS1 to IMS2.

VTAM generic resources

When a VTAM session logs on, it establishes an affinity with that IMS. If this affinity still exists
at the next logon, the session will be routed by VTAM to the same IMS. When the session is
terminated (for whatever reason), if IMS is managing the affinities (GRAFFIN=IMS), then IMS
will not delete the affinity if the terminal had significant status (conversation, Fast Path,
STSN, or command). If GRAFFIN=VTAM, then VTAM will delete all affinities (except for LU 6.1
sessions), resulting in the IMS significant status being deleted. This results in the following
VGR issues:
If IMS manages the affinities:
- Users with significant status at session termination cannot use VGR to log on to
  another IMS if the session or IMS fails, hence the availability benefits are lost.
- If the user bypasses VGR and logs directly onto another IMS, the significant status is
  not resumed on the new IMS.
If VTAM manages the affinities:
- All affinities, apart from LU 6.1, are deleted, maintaining the availability benefits but
  forcing the user to reestablish the desired state, which may not always be possible.

13.1.2 Systems management

The Common Service Layer (CSL) introduces features to address the systems management
issues discussed above:
Operations management:
- Better operational control of IMSplex members
Resource management:
- Better management of IMSplex resources
- State switching between members of a shared queues group
- IMSplex-wide process management, for example better coordination of online change
- Improved VGR support
- Support for global callable services

13.1.3 Operations management

The Common Service Layer (CSL) introduces features to address the management of the
IMSplex as a whole. Operations management features include the following:
A single point of control (SPOC):
- Automation and user-entered commands may be routed through the Operations
  Manager to any or all IMSs in the Parallel Sysplex.
- Responses are consolidated into a single response.
- It does not require use of the OS/390 command recognition character (CRC).
IMS provides a TSO-based SPOC application:
- Users may develop their own SPOC applications using the provided APIs.

13.1.4 Resource Management


The Common Service Layer (CSL) introduces features to address the resource management
concerns discussed above. The features address resource management at the IMSplex level
and include:

Resource name and type consistent across IMSplex


Active resource name uniqueness within IMSplex
Terminal and user state resumption on another IMS
Online change coordination across IMSplex
Management of VGR affinity deletion consistent with terminal recovery requirements
Global callable services support

The components of the CSL, the Structured Call Interface (SCI), the Operations Manager
(OM), and the Resource Manager (RM) are described individually in the following sections.


13.2 Common Service Layer (CSL) architecture


The Common Service Layer is an evolving architecture for the IMSplex. It is an architecture,
not an address space. The IMSplex consists of the three new CSL address spaces built on
the Base Primitive Environment (BPE), plus existing address spaces. The new CSL address
spaces are the following:
Structured Call Interface (SCI)
Operations Manager (OM)
Resource Manager (RM)

The intra-IMSplex communication between components is handled by the Structured Call
Interface. For resource management, a new resource structure can be defined in the
Coupling Facility. The Resource Manager uses CQS to manage the resource structure.
Figure 13-7 shows the CSL components and their relationship to each other. All the
address spaces shown use the SCI to communicate as required; it is the SCI that provides
the mechanism for the intercommunication. You require one SCI per OS/390 or z/OS image
which has one or more IMSs taking part in the IMSplex. You require only one OM and, if
using the RM, one RM per IMSplex (because the SCI allows communication as required).
However, for performance and availability, it is better to have one OM and one RM per
OS/390 or z/OS image.

[Figure 13-7 (diagram): the CSL configuration. The Operations Manager (OM), Resource
Manager (RM), IMS control region, DBRC, and Common Queue Server (CQS) each connect
through SCI to the Structured Call Interface address space. OM supports SPOC and
automation; RM supports sysplex terminal management and coordinated online change, using
the resource structure and shared queues; DBRC provides automatic RECON loss
notification; the master terminal and end-user terminals connect to the IMS control region.]

Figure 13-7 Common service layer components



Figure 13-8 depicts an IMSplex running on a sysplex of four operating system images, each
with the CSL address spaces (SCI, OM, RM), a CQS address space, data sharing, and VTAM
generic resources.

[Figure 13-8 (diagram): an IMSplex with four IMS "modular units" on four OS images, each
image running the CSL address spaces (SCI, OM, RM), an IMS control region, and a CQS,
communicating via XCF; the Coupling Facility holds the resource list structure, LOGR list
structures, SMQ list structures, OSAM and VSAM cache structures, shared VSO cache
structures, the IRLM lock structure, and the VGR list structure.]

Figure 13-8 Multi OS IMSplex with CSL

Automatic RECON loss notification requires at a minimum that there is an SCI on each
OS/390 image and that each IMS DBRC or utility or batch job using the RECONs has
specified the same IMSPLEX= value, either via an execution parameter or a user exit.
Single point of control, global online change, and sysplex terminal management require the
full CSL environment, that is in addition to the SCI as above, at least one OM and one RM in
the IMSplex.
Sysplex terminal management also requires a resource structure.
Global online change can be implemented without a resource structure, but to have
consistency checking of the online change ACBLIB, FORMAT, MODBLKS, and MATRIX data
sets, including all concatenations, you require a resource structure. Otherwise you must
ensure consistency of the data sets involved.

13.3 Structured Call Interface (SCI)


The Structured Call Interface address space provides standardized intra-IMSplex
communications between members of an IMSplex, security authorization for IMSplex
membership, and SCI services to registered members.

Chapter 13. Common Service Layer (CSL) architecture

163

The structured call interface services are used by SCI clients to register and deregister as
members of the IMSplex, and to communicate with other members. An SCI client issues
CSL macros, which execute code in the SCI address space, use cross-memory services under the
client's own TCB, or schedule an SRB in the SCI address space.
One SCI address space is required on each OS/390 or z/OS image with IMSplex members.
The other address spaces in the IMSplex register with the SCI. The following components all
register and interact with SCI:
CSL address spaces: the Operations Manager (OM) and the Resource Manager (RM)
Common Queue Server (CQS)
IMS: DB/DC, DBCTL, DCCTL, FDBR
Automated operator programs (AOP) such as the single point of control (SPOC)
DBRC
Batch with DBRC
Others: the CSL (SCI) interface is documented, and may be accessed by vendor or user
programs.

Note: Registrants may abend if SCI is not available when required.

13.4 Operations Manager (OM)


The Operations Manager is the first step in introducing a single point of control for operator
interaction with, and management of, all or selected members of the IMSplex.

13.4.1 Today
Today's sysplex operations management does not provide an IMS single image. Commands
can be routed to multiple IMS systems using the E-MCS console function, which requires a
common command recognition character (CRC) defined in the IMS.PROCLIB member
DFSPBxxx. Also, to direct a command to a specific IMS system you need to know the OS/390
system name and the IMS name. The CMD and ICMD DL/I calls of the Automated Operator
Interface only affect the IMS system they are running on; they cannot manage other IMS
systems. RACF can be used to secure commands entered from MCS/E-MCS, but DFSCCMD0 is
needed to secure commands at the verb, keyword, and resource levels.
Commands may be entered by each MTO, but most commands and automation processes
today can only affect an individual IMS. Some commands have a GLOBAL parameter, but
asynchronous responses (for example, the DFS0488I message) from other IMSs are not returned
to the MTO entering the command; see Example 13-1.
Example 13-1 Command with both synchronous and asynchronous responses
/DBR DATABASE XYZ GLOBAL
DFS058I DBR COMMAND IN PROGRESS
DFS0488I DBR COMMAND COMPLETED DBN XYZ RC=nn

Figure 13-9 gives an example of pre-IMS Version 8 IMSplex command activity.


(Figure: multiple MTOs and an E-MCS console each entering commands such as /DBR DB,
/STA DB, /STOP NODE, /STOP TRAN, and /DIS STATUS against individual IMSs that share
queues, databases, and the network. The IMSs do not share a common command entry point,
E-MCS is not part of the IMSplex environment, and command routing is awkward at best.)

Figure 13-9 Pre IMS Version 8 IMSplex command activity

13.4.2 OM infrastructure
The OM utilizes BPE services and provides an API allowing single point of command entry
into the IMSplex. Communication with other IMS address spaces is via the SCI. OM is a focal
point for operations management and automation, and consolidates command responses
from multiple IMSs.
The following services are provided to members and clients of an IMSplex:
Route commands to IMSplex members registered for the command
Consolidate command responses from individual IMSplex members into a single response
to present to the command originator
Provide an API for IMS command automation
Provide a general use interface for command registration to support any command
processing client
Provide user exit capability for command and response editing, and for command security
Support existing IMS commands (classic commands) and introduce new IMSplex
commands

The CSL configuration requires one OM address space per IMSplex, but one per OS/390 or
z/OS image is recommended for performance and availability.
Figure 13-10 shows an IMS Version 8 CSL environment which has the ability to issue IMSplex
wide commands and receive their responses from one place via SPOC.


(Figure: a single IMS in a CSL configuration. The SPOC and automation connect through SCI to
the OM, which routes commands to the IMS control region; the RM, CQS, and the resource list
structure hold the IMSplex resources: transactions, LTERMs, MSNAMEs, users, nodes, user IDs,
and global processes. End users connect through VTAM (TCP/IP) and the MTO.)
Figure 13-10 CSL with OM and SPOC issuing IMSplex wide commands

Figure 13-11 shows a much larger IMSplex with four OS/390 images in the sysplex and one
IMS on each of these OS/390s. OM is routing commands and consolidating responses
centrally throughout the IMSplex.


(Figure: a SPOC or AOP connects through SCI to an OM. The SPOC or AOP can specify routing
for any command; OM routes the command to one or more IMSs; each IMS responds to OM; and
OM consolidates the responses for the SPOC. Additional OM address spaces are optional.)
Figure 13-11 OM routing commands in a 4 member IMSplex

Command entry routing


OM provides the ability to route commands to IMSplex members registered for the command.
This can be done to all members of the IMSplex, or selected members. The OM API supports
the routing of commands via the ROUTE parameter on the various macros. The SPOC
application handles creation of the routing list from user input, or from the default
preferences.

Consolidated response
OM consolidates the command responses from the individual IMSplex members into a single
response which is presented to the command originator. You can also specify a time-out
value for the response which represents the maximum amount of time you would like to wait
for the response before the request times out.
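The route-and-consolidate pattern that OM implements can be illustrated outside IMS. The
Python sketch below is not product code; the member names, delays, and reply text are
invented for illustration. It routes a command to several simulated IMSplex members in
parallel, waits up to a timeout, and merges whatever replies arrived into one consolidated
response, reporting the members that did not answer in time:

```python
import concurrent.futures
import time

# Simulated IMSplex members; the delay stands in for command latency.
# Member names and replies are invented for illustration only.
MEMBERS = {"IMS1": 0.01, "IMS2": 0.02, "IMS3": 2.0}

def run_command(member, delay, command):
    time.sleep(delay)
    return "%s: %s COMPLETED" % (member, command)

def consolidated_response(command, timeout):
    """Route the command to every member, wait up to 'timeout' seconds,
    and merge the replies that arrived into one response (as OM does)."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=len(MEMBERS))
    futures = {pool.submit(run_command, m, d, command): m
               for m, d in MEMBERS.items()}
    done, pending = concurrent.futures.wait(futures, timeout=timeout)
    replies = sorted(f.result() for f in done)
    timed_out = sorted(futures[f] for f in pending)
    pool.shutdown(wait=False)  # do not hold the caller for late members
    return replies, timed_out

replies, timed_out = consolidated_response("QRY TRAN", timeout=0.5)
print(replies)    # ['IMS1: QRY TRAN COMPLETED', 'IMS2: QRY TRAN COMPLETED']
print(timed_out)  # ['IMS3']
```

The member that misses the timeout is simply reported as late, mirroring the way a
command originator sees only the responses that arrived within the wait interval.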

Command security
Authorization is performed within OM before sending the commands to IMS, using RACF (or an
equivalent security product). There is also the option of calling a user-written exit to
perform command security.

OM APIs
OM provides two types of API support for Automated Operator Programs (AOPs).
CSLOMI API:

The CSLOMI API supports command strings built by workstations; IMS Connect, for example,
could use CSLOMI. Command strings are passed to OM and command responses are
returned to the client in XML format.


CSLOMCMD API:

The CSLOMCMD API is used by the IMS TSO SPOC. Command keywords are passed to OM
and command responses are again returned in XML format.
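Both APIs return their responses as XML that the client must transform into a display
format. The short Python sketch below illustrates the kind of transformation a SPOC-style
client performs. The element names (cmdrsp, member, rc, text) are a simplified,
hypothetical shape invented for this illustration; the documented OM response schema is in
the Common Service Layer Guide and Reference:

```python
import xml.etree.ElementTree as ET

# Hypothetical OM-style response; the element names are invented here
# and are not the documented OM output schema.
response = """<cmdrsp>
  <member name="IMS1"><rc>0</rc><text>DBR COMMAND COMPLETED</text></member>
  <member name="IMS2"><rc>4</rc><text>DATABASE NOT FOUND</text></member>
</cmdrsp>"""

def format_response(xml_text):
    """Flatten the per-member XML replies into display lines."""
    root = ET.fromstring(xml_text)
    return ["%s rc=%s %s" % (m.get("name"), m.findtext("rc"), m.findtext("text"))
            for m in root.findall("member")]

for line in format_response(response):
    print(line)
# IMS1 rc=0 DBR COMMAND COMPLETED
# IMS2 rc=4 DATABASE NOT FOUND
```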
OM command processing support:

OM does not distribute unsolicited output messages to an AOP. You can use NetView or
another automation product to trap IMS messages and then send them to an automation
program (communication external to IMS); the automation program could then issue a
command on behalf of the event. OM does, however, call the output user exit routine,
passing it the unsolicited message; the exit can then provide its own technique to
distribute the message. For example, an OS/390 write to operator (WTO) macro could be
used to write the message to the system console. OM does not support command
recovery, except in cases that will be changed for RM processing (discussed in the RM
section later), nor does OM support restart.

13.4.3 OM clients and their roles


The OM interface defines several types of clients, each with a specific interface into the
IMSplex for command processing. The following are the command processing client types,
their roles, and the interface each uses to interact with OM.
Command processing (CP) clients

These are clients which process commands entered from other address spaces. IMS and
RM are command processing clients.
Command entry (CE) AO clients

These are clients through which commands are entered to the OM and then to the
command processing clients. SPOC is an example of such a client.
These clients use the CSLOMCMD macro interface to OM.
Command forwarding (CF) AO clients

These are clients which forward command strings built elsewhere, probably by a
workstation SPOC (GUI), and process the returned responses.
These clients use the CSLOMI macro interface to OM.
All OM services are invoked by CSLOMxxx macros. OM Macro coding and use is described
in IMS Version 8:Common Service Layer Guide and Reference, SC27-1293.

13.4.4 Commands
With the new CSL architecture, new commands and command syntax have been created to
operate on IMS systems at the IMSplex level.

Sysplex commands
The following commands are new with IMS Version 8 and can only be processed via the OM:

INIT (INITiate process)
   INIT OLC         Starts a coordinated online change (OLC)
TERM (TERMinate process)
   TERM OLC         Stops a coordinated online change that is in progress
DEL (DELete resource)
   DEL LE           Deletes runtime LE options
UPD (UPDate resource)
   UPD LE           Updates runtime LE options
   UPD TRAN         Updates selected TRAN attributes
QRY (QueRY resource)
   QRY IMSPLEX      Returns the names of the members in the IMSplex
   QRY LE           Returns runtime LE options
   QRY MEMBER       Returns status and attributes of the IMS members in the IMSplex
   QRY OLC          Returns OLC library and resource information
   QRY TRAN         Returns TRAN information similar to /DIS TRAN
   QRY STRUCTURE    Returns structure information for the RM resource structure

For complete details on the new IMS commands, see IMS Version 8: Command Reference,
SC27-1291.

IMSplex command characteristics

IMSplex commands can only be entered through the OM interface; they cannot be entered
directly to IMS. The command responses are returned in XML format, which has been chosen
as a programming interface. The responses must therefore be translated from XML to a
suitable display format.
IMSplex commands also support filters and wildcards for resource name selection. The
percent sign (%) wildcard represents a single-character substitution, and the asterisk
(*) wildcard is used for multi-character substitution.
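These two wildcard rules map directly onto an ordinary regular expression. The Python
sketch below is illustrative only (not product code; the resource names are invented) and
shows how a name filter such as TRN% or A* selects matching resource names:

```python
import re

def imsplex_filter(pattern, names):
    """Filter resource names using IMSplex wildcard rules:
    '%' matches exactly one character, '*' matches zero or more."""
    regex = re.compile(
        "".join(
            "." if ch == "%" else      # single-character substitution
            ".*" if ch == "*" else     # multi-character substitution
            re.escape(ch)
            for ch in pattern
        ) + r"\Z"                      # anchor at the end of the name
    )
    return [n for n in names if regex.match(n)]

names = ["TRN1", "TRN12", "ABC", "A1"]
print(imsplex_filter("TRN%", names))  # ['TRN1']
print(imsplex_filter("A*", names))    # ['ABC', 'A1']
```

Note that TRN% matches TRN1 but not TRN12, because % substitutes exactly one character,
while A* matches any name beginning with A.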

Classic commands
Most classic IMS commands can be entered through the OM API.
IMS commands specific to an input LTERM are not supported by OM, such as those in
Example 13-2.
Example 13-2 Classic commands which cannot be issued through OM
/EXC, /EXIT, /FORMAT
/HOLD, /(UN)LOCK node, pterm,lterm
/SET, /SIGN
/TEST MFS, /RCL, /REL

IMS asynchronous command response


When a command response includes a synchronous DFS058I message followed by one or
more asynchronous messages (for example, DFS0488I), the synchronous DFS058I message is
not returned; only the messages indicating that the command completed are returned. This
applies only to commands routed through the OM API and does not apply to existing
interfaces.

Presence of resource structure


Some commands have a global impact when a resource structure is present, such as the
command in Example 13-3.
Example 13-3 Commands with Global impact if Resource Structure present
/STOP NODE ABC


NODE ABC is flagged as stopped in the resource structure, so NODE ABC cannot log on to
any IMS in the IMSplex.

Commands indifferent to IMSplex


Some commands execute in every IMS to which the command is routed; they are not aware of
the IMSplex, such as the command in Example 13-4.
Example 13-4 Command indifferent to IMSplex
/DIS TRAN TRX1 QCNT

The above command will execute in each IMS where the command is routed, and all will
return the same value (global queue count).
The behavior of most commands depends on several factors:
The command source
Whether RM is active with a resource structure
The effect of significant status
Whether the resource exists on the resource structure
Whether the resource is owned by this IMS
Whether the resource is owned by another IMS
Whether the command is for display or update

13.4.5 User exits


Users may write exits for the OM address space. All OM exits are optional, and no samples
are provided. Exits may be called for connection, initialization, termination, input,
output, and security.
Since CSL components are built on the Base Primitive Environment, they can take advantage
of BPE common services and interfaces. This aids in creating a common look and feel to the
configuration of new IMS components. User exits are defined in the BPE User Exit List
PROCLIB member for OM as shown in Example 13-5.
Example 13-5 User exit definition in BPE
EXITDEF=(TYPE=xxx,EXITS=(XXX,YYY),COMP=OM)

OM and BPE definitions are addressed in more detail in Chapter 20, Common Service Layer
configuration and operation on page 289.
The available OM exits consist of the Client Connection User Exit, the
Initialization/Termination User Exit, the Input User Exit, the Output User Exit and the Security
User Exit. Detailed information on these exits can be found in IMS Version 8:Common
Service Layer Guide and Reference, SC27-1293, and IMS Version 8: Base Primitive
Environment Guide and Reference, SC27-1290.
The following is a brief description of the exits and their function.

OM Client Connection User Exit (TYPE=CLNTCONN)


This exit is called when a client registers its commands with OM, indicating that it is
ready to accept commands.
It is called again when the client deregisters from OM.


OM Initialization/Termination User Exit (TYPE=INITTERM)


Called after OM completes initialization/termination
Called after each CSL configured IMS sysplex completes initialization/termination
Not called for abnormal termination

OM Input User Exit (TYPE=INPUT)


Called to view command text received from the AO client before it is processed
Can change the command text or reject the command

OM Output User Exit (TYPE=OUTPUT)


Called when a command response is received from the CP client, before it is sent to the AO client
Can change the response text
Also called when OM receives unsolicited output, for example, a late reply

OM Security User Exit (TYPE=SECURITY)

Called when CMDSEC=A or E


After Input User exit
After RACF if CMDSEC=A
Can accept or reject command
Can override RACF

13.5 Resource Manager


The Resource Manager allows IMS to begin to address the task of resource management
and coordination across the IMSplex. Prior to IMS Version 8, IMSplex-wide resource
management was the responsibility of operations. It was possible for resources to be defined
inconsistently, such that the results of an operation differed between IMSs. For example, a
resource name could represent an LTERM on one IMS and a transaction on another;
depending on where a message arrived, it would be queued differently. Or an
LTERM could be assigned to different nodes on different IMSs. Although the Resource
Manager does not itself provide any management or coordination functions, it does provide
the infrastructure which allows IMSs in the IMSplex to implement some resource
management functions. Figure 13-12 shows an example of an IMSplex and its resources.


(Figure: IMSplex resources shared through data sharing and shared queues: APPC descriptor
names and CPI-C transactions; static nodes with static node users, LTERMs, and user IDs;
dynamic nodes with users, LTERMs, and user IDs; single- and parallel-session ISC; MSC
MSNAMEs; and the online change libraries.)
Figure 13-12 Resources in an IMSplex

13.5.1 Resource management functions


IMS V8 begins the process of exploiting the RM infrastructure by providing the following
resource management functions:
Sysplex terminal management
Resource type consistency
Resource name uniqueness
Resource status recovery
Global callable services
Coordinated global online change

Sysplex terminal management is described in detail in Chapter 14, Sysplex terminal
management on page 177, and coordinated global online change is described in detail in
Chapter 15, Global online change on page 215. This chapter discusses the RM
infrastructure.

13.5.2 Resource management infrastructure


Figure 13-13 shows a typical single image configuration of one IMS in a CSL configuration.
It includes the IMS control region, the new CSL address spaces (SCI, OM, and RM), a
resource structure (optional), and a Common Queue Server (CQS) to access the resource
structure. With the exception of the IMS control region, all of these address spaces are built
on the Base Primitive Environment (BPE) - not shown in the figure. When additional IMSs on
other OS/390 images are part of the IMSplex, each image must include an SCI address
space and, if using a resource structure, a CQS address space. RM and OM need only be


started on one OS/390 image in the IMSplex, although multiples are recommended for
availability and performance.

(Figure: resource management in the IMSplex is performed by a combination of the IMS
control region, the Resource Manager, the Common Queue Server, and a resource structure;
OM and SCI play a supporting role for communications and command entry. The CF also holds
the LOGR, SMQ, VGR, and MNPS list structures, the OSAM, VSAM, and shared VSO cache
structures, and the IRLM lock structure.)

Figure 13-13 Resource Manager, Common Queue Server, Resource Structure

13.5.3 RM clients and their roles


Figure 13-14 shows how these different components communicate with each other. IMS and
RM communicate using SCI services. If IMS and RM are on the same OS/390 image, SCI
uses cross-memory calls; if they are on different OS/390 images, SCI uses XCF signalling.
Because the CQS interface was developed in IMS Version 6 for shared queues (before SCI),
all address spaces communicating with CQS use this interface instead of the SCI interface.
CQS still registers with SCI to join the IMSplex, but does not use its communication services.
CQS uses OS/390 cross-system extended services (XES) to connect to and access the
resource structure. Note that a single CQS address space per OS/390 image can be used to
access both the resource structure and the shared queue structures.


(Figure: IMS uses RM to manage resource information; RM uses CQS to manage the resource
structure; and the resource structure contains information about IMS and IMSplex resources.
IMS and RM communicate using SCI services, RM and CQS communicate using the CQS interface,
and CQS uses XES services to access the structure. The CF structure limit was raised from
255 to 512 in OS/390 2.9.)
Figure 13-14 RM clients and roles

13.5.4 Resource structure


The resource structure is defined in the CFRM policy and allocated by CQS as a list structure.
It is optional in a CSL environment, but if it exists, it contains information about IMSplex
resources. Each resource has an entry in the structure with information about that resource.
For example, a node resource entry would identify the IMS system to which that node is
logged on, and a user resource entry would contain information about any active or held
conversations for that user. The following is a list of the types of entries found in the
resource structure:

Transactions
Nodes, LTERMs, MSNAMEs, APPC descriptors, users, user IDs
Global processes such as online change
IMSplex global and local information

Although the structure is optional, if it does not exist, then there is no global repository for
sysplex terminal information and the sysplex terminal management function of IMS is
disabled. Global online change, while it makes use of the structure for recovery from some
error conditions, does not require the structure. More information about the resource structure
and its contents can be found in 14.7, Resources and the resource structure on
page 195.

13.5.5 Common Queue Server (CQS)


Common Queue Server (CQS) was introduced in IMS Version 6 to provide an interface
between IMS and the shared queue structures. In IMS Version 6, one CQS could support only


a single IMS control region. In IMS Version 7 CQS was enhanced to support multiple IMS
control regions as long as they are on the same OS/390 image and belong to the same
shared queues group.
In IMS Version 8 CQS has been further enhanced to also support the interface between RM
and the resource structure while continuing to support shared queues. That is, a single CQS
can support both shared queues and the resource manager. However, CQS shipped with IMS
Version 8 does not support IMS Version 6 or IMS Version 7 shared queues. For example, if
you have both IMS Version 7 and IMS Version 8 in the same shared queues group, they
require different CQSs. Figure 13-15 shows one CQS supporting both IMS shared queues
and RM.
These are some of the CQS enhancements delivered with IMS Version 8:
Support for list structures with programmable list entry ids (LEIDs) to guarantee
uniqueness
Support for structure alter and autoalter, system-managed rebuild, and system-managed
duplexing
Ability to initiate structure repopulation to rebuild a failed resource structure

(Figure: a single CQS on one OS image supporting both the shared queues structures and the
resource structure, on behalf of two IMS control regions and an RM. CQS can support multiple
IMSs in a shared queues group and multiple RMs in an IMSplex, but they must be on the same
OS image.)

Figure 13-15 CQS supporting shared queue and RM

13.5.6 Resource Manager (RM) address space


The Resource Manager address space is a required component of the Common Service
Layer and an integral part of the resource management infrastructure. While the RM does not
in itself provide any of the resource management functionality, it does offer services to its
clients (IMS control regions) to enable them to provide these functions. IMS exploits RM
services to provide sysplex terminal management (if a resource structure has been defined)
and global callable services.

At least one RM address space is required within the IMSplex. If a resource structure exists,
then multiple RMs can be started, and this is recommended for performance and availability. If
there is no resource structure, there can be only one RM in the IMSplex. Like other
IMSplex components, RM registers with SCI and uses SCI to communicate with its clients
(IMS control regions). A single RM can support multiple IMS Version 8 control regions if they
are in the same IMSplex, even if those control regions are on different OS/390
images. Figure 13-16 shows one RM providing support for two IMS control regions on the
same OS/390 image and one IMS control region on another OS/390 image. A total of 32 IMS
control regions can be supported by a single RM. Note that, if a resource structure is being
used, each RM requires a local (same OS/390 image) CQS to access the structure.

(Figure: one RM on a z/OS image supporting two local IMS control regions and, through SCI,
an IMS control region on another OS image. RM can support multiple IMSs, and can support any
IMS in the IMSplex.)

Figure 13-16 RM supporting IMSs

13.5.7 RM characteristics
The IMS Version 8 Resource Manager is both a client and a server within the IMSplex. It has
the following characteristics:
RM is a client of the Structured Call Interface (SCI) and registers with SCI as a member of
the IMSplex, enabling SCI communications with other members of the IMSplex.
RM is a client of the Operations Manager (OM) and registers RM-related commands for
which RM is a command processing client. At this time, the only command RM registers
with OM is QUERY STRUCTURE, which displays resource structure statistics.
RM is a client of the Common Queue Server (CQS) and uses CQS to manage the
resource structure. The RM interface with CQS uses the CQS interface, not the SCI
interface.
RM is a server to the IMS control region. The control region registers with RM to perform
sysplex terminal management and to coordinate global online change.

Chapter 14. Sysplex terminal management


In this chapter we describe sysplex terminal management (STM). Note that in the previous
chapter, 13.1.4, Resource Management on page 161, we introduced the concepts and
architectural requirements of the resource management functions available to IMS Version 8
when using the Common Service Layer. Sysplex terminal management is one of those
functions.
In the current chapter, we cover these topics:

STM objectives
STM environment
IMSplex resources
STM terms and concepts
Resource type consistency
Resource name uniqueness
Resource status recovery
Resource ownership and affinities
Impact of STM on IMS exits
Examples of STM in action
Global callable services

Copyright IBM Corp. 2002. All rights reserved.

177

14.1 Sysplex terminal management objectives


In an IMSplex, IMS operations and users are faced with the complexities of managing
resources which may have been defined to multiple IMSs within the IMSplex, but which must
be managed with the same level of control as if there were a single IMS. For example, in a
single IMS, a single-session NODE can log on only once, an LTERM can be assigned to only
one NODE at a time and therefore can be active only once, optionally a USER can be signed
on only once, and so on. A single IMS can enforce this because that single IMS is aware of all
instances of that resource's activity.
IMS is a queuing system. When a message arrives or is created in IMS, IMS uses a process
called find destination (FINDDEST) to determine how to handle (queue) the message.
Destinations are typically transaction codes, logical terminal names, or MSC logical link
names (MSNAMEs). When IMS queues a message to one of these destinations, there is
never any confusion about the type or end-point of that destination.
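Conceptually, FINDDEST is a lookup from a destination name to the kind of queue that
anchors it. The toy Python sketch below is only a model of that idea; the destination names
and the table itself are invented, and in a real IMS these entries come from system
definition, not from a dictionary:

```python
# Hypothetical destination table; in IMS these entries come from system
# definition (for example, TRANSACT, NAME, and MSNAME macros).
DESTINATIONS = {
    "PAYROLL": "TRAN",    # transaction code -> transaction ready queue
    "LTERMA":  "LTERM",   # logical terminal -> terminal output queue
    "LINK01":  "MSNAME",  # MSC logical link -> remote (MSC) queue
}

def find_dest(name):
    """Toy FINDDEST: resolve a destination name to its queue type.
    In real IMS an unknown name can drive destination-creation
    processing; here it is simply reported."""
    return DESTINATIONS.get(name, "UNKNOWN")

print(find_dest("PAYROLL"))  # TRAN
print(find_dest("NOSUCH"))   # UNKNOWN
```

Because each name resolves to exactly one entry, a single IMS never queues a message to
the wrong kind of destination; the IMSplex problem is keeping this mapping consistent
across systems.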
Similarly, an end user can be in a conversation, in MFS TEST mode, or in one of several
other states which we might call significant status. If that user's session fails, the IMS to
which the user was connected can remember that significant status and restore it the next
time the user signs on.
Note: In all of these cases, when multiple IMSs are involved, it is necessary for each IMS
to know the status of a resource on the other IMSs in the IMSplex.

Sysplex terminal management addresses these and similar resource management issues of
the IMSplex. Specifically, STM's objectives are to accomplish the following:
Provide for resource type consistency:

Do not allow message destination resources to be defined as different resource types on


different IMSs. Message destinations are names which IMS uses to queue a message to
its destination, and include transaction codes, LTERM names, MSC logical link names
(MSNAMEs), and APPC descriptor names. For example, do not allow the same resource
name to be defined as a transaction on IMS1 and as an LTERM on IMS2.
Provide for resource name uniqueness:

Do not allow a resource to be active on more than one IMS at a time. Resources managed
by this function are single session VTAM NODEs, Extended Terminal Option (ETO)
USERs, LTERMs, and (if requested) USERIDs. For example, do not let LTERMA be active
on IMS1 and IMS2 at the same time. Do not allow USERA to be signed on to both IMS1
and IMS2.
Support global resource status recovery:

Allow a terminal user to terminate a session on one IMS (normally or abnormally) and
resume (or recover) that user's status on another session with another IMS. A status for
which recovery is supported is called significant status. For example, if USERX is in a
conversation on IMS1, and IMS1 fails, that user may resume the conversation after
reestablishing a new session on IMS2.
Support global callable services:

Allow an IMS exit using callable services to determine the status of a resource anywhere
within the IMSplex. For example, the Signon Exit (DFSSGNX0) should be able to
determine if a USER is signed on anywhere within the IMSplex.
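The first two objectives amount to maintaining one global name table for the IMSplex. The
Python sketch below is a toy model, not the resource structure's actual layout; the names,
return strings, and registry class are invented to illustrate how a single entry per name can
enforce both type consistency and name uniqueness:

```python
class ResourceRegistry:
    """Toy model of the resource structure: one entry per resource name,
    recording its type and (for terminal resources) the owning IMS."""

    def __init__(self):
        self.entries = {}  # name -> (resource_type, owner)

    def register(self, name, rtype, owner):
        entry = self.entries.get(name)
        if entry is None:
            self.entries[name] = (rtype, owner)
            return "OK"
        etype, eowner = entry
        if etype != rtype:
            # e.g. a transaction on IMS1 and an LTERM on IMS2
            return "REJECT: type inconsistency"
        if eowner is not None and eowner != owner:
            # resource already active on another IMS
            return "REJECT: active elsewhere"
        self.entries[name] = (rtype, owner)
        return "OK"

reg = ResourceRegistry()
print(reg.register("XYZ", "LTERM", "IMS1"))  # OK
print(reg.register("XYZ", "TRAN", "IMS2"))   # REJECT: type inconsistency
print(reg.register("XYZ", "LTERM", "IMS2"))  # REJECT: active elsewhere
```

In IMS the equivalent checks are driven through RM and CQS against the resource
structure, with the CF providing the serialized update that a simple dictionary cannot.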


14.2 Sysplex terminal management environment


Sysplex terminal management is an optional function of IMS. It is enabled only when IMS is
executing in a Common Service Layer (CSL) environment with shared queues. It utilizes the
services of the Resource Manager with a resource structure to save information about, and
manage the status of, IMSplex VTAM-connected terminal and user resources, and to some
extent, transactions. Figure 14-1 shows a possible configuration of a single IMS in a CSL
environment with shared queues. The shaded area indicates those components used directly
by STM. Additional IMSs in the IMSplex would utilize the same resource and shared queue
structures.
Other IMSs on other OS/390 images would have similar configurations. Each OS/390 image
would require its own SCI address space and each may have a local Resource Manager
address space, although only one RM is required within the IMSplex. There is, of course, a
single Resource Structure and a single set of Shared Queue Structures.
Wherever there is a Resource Manager accessing the resource structure, there must also be
a Common Queue Server. A single CQS address space can provide client services to RM for
the resource structure and to IMS for the shared queue structures. CQS must reside on the
same OS/390 image as its clients (RM and IMS).

(Figure: a single IMS in a CSL environment with shared queues. The components used directly
by STM are the IMS control region, the Resource Manager, the Common Queue Server, and the
resource structure; OM and SCI play a supporting role for communications and command entry.
The CF also holds the LOGR, SMQ, VGR, and MNPS list structures, the OSAM, VSAM, and shared
VSO cache structures, and the IRLM lock structure.)

Figure 14-1 Sysplex terminal management configuration

14.3 IMSplex resources


IMS resources in an IMSplex are of many different types: IMS system address
spaces, IMS system data sets, IMS-defined databases, transactions, and applications,
dependent regions running application programs, VTAM and OTMA network resources, batch
and utility address spaces, and probably several more. Most of these resources have names
by which they are known to IMS. When IMS systems are cloned, or have similar definitions,
many (or all) of these names are the same throughout the IMSplex and can form the basis for
IMSplex-wide system management functions.
Sysplex terminal management addresses the management of a subset of these resources,
primarily those defined as part of the VTAM network. These resources, and the names they
are known by, are shown in Figure 14-2. Note that STM supports neither BTAM nor OTMA
resources.

[Figure 14-2 here depicts the STM-managed resources and the names they are known
by: static terminals (node, static node user, LTERM, userid), dynamic ETO
terminals (node, user, LTERM, userid), single session ISC (node, static node
user, LTERM, userid), parallel session ISC (node, user, LTERM, userid), MSC
(msname), static transactions, APPC CPI-C transactions, and APPC descriptor
names.]

Figure 14-2 Sysplex Terminal Resources

This figure identifies the resources managed by sysplex terminal management. Each of these
resources can be the source or destination of an IMS message, and has one or more names
associated with it. Each name represents an IMS control block. How IMS handles these
messages is determined solely by its message destination name, which usually represents
an anchor block from which to queue the message. Each named resource may be
represented by an entry in the resource structure.

Statically defined VTAM resources (not parallel-session ISC)


Statically defined VTAM resources are defined in the IMS system generation using the TYPE,
TERMINAL, and NAME macros, the NODE name being defined on the TERMINAL macro
and the LTERM name being defined on the NAME macro:
TYPE     UNITYPE=SLUTYPE2        (for example)
TERMINAL NAME=NODEname
NAME     LTERMname

180

IMS Version 8 Implementation Guide

These names may be used to create entries in the resource structure. For parallel session
ISC terminals, or for NODEs created dynamically using ETO, another control block (the
SPQB) is created representing the ETO USER. Because there is no USER equivalent for
statically defined single session VTAM resources, a new name is invented to be used strictly
for creating resource structure entries. This new name is the STATIC NODE USER and has
the same name as the NODE name. There is no equivalent IMS control block, and it is used
ONLY for resource structure entries. No IMS command or log record will ever refer to a static
NODE user.
If a user signs on from a static NODE, a USERID is associated with that session for security
(authorization) processing. Although this is optional for static terminals, many IMS shops
require a user to sign on. Users are, by default, prevented from signing on to more than one
session unless the SGN= parameter is coded in PROCLIB member DFSPBxxx. By coding
SGN=M, the user is allowed to sign on multiple times.

Dynamic (ETO) resources


When a user logs on to IMS using the Extended Terminal Option (ETO), a control block
structure (VTCB) representing that NODE is created in IMS. When that user signs on, a user
structure is created consisting of a user control block (SPQB) and one or more LTERMs
(CNTs). The names of the USER and LTERM(s) associated with the signed on user may be
decided by default or by the Signon Exit (DFSSGNX0). Additionally, since the user is required
to sign on, a USERID is also associated with the user.
When a dynamic user with significant status (for example, in a conversation) signs off or logs
off, the user structure (USER + LTERM) is disconnected from the terminal structure but is not
deleted. If that same user then signs on from another NODE, the user control block structure
is connected to the new NODE. Any status and messages queued for that user follow the
user to the new NODE (messages are queued by LTERM name). To be consistent, sysplex
terminal management keeps much recoverable status information about a user in the USER
entry in the resource structure, thus the reason for the static NODE user invention mentioned
above for statically defined resources.

Single session ISC resources


Single session ISC resources are defined just like single session non-ISC resources, with the
same name types. These are NODE, (STATIC NODE) USER, LTERM, and USERID.
TYPE     UNITYPE=LUTYPE6
TERMINAL NAME=NODEname
NAME     LTERMname

Parallel session ISC resources


Parallel session ISC definitions, whether statically or dynamically defined, always have a
SUBPOOL (SPQB) associated with them to represent the USER. An example of a statically
defined parallel session ISC NODE with a maximum of two parallel sessions may be defined
as:
TYPE     UNITYPE=LUTYPE6
TERMINAL NAME=NODEname,SESSION=2   (defines the NODE resource)
SUBPOOL  NAME=username1            (defines the first USER resource)
NAME     LTERMname1                (defines the first LTERM resource)
SUBPOOL  NAME=username2            (defines the second USER resource)
NAME     LTERMname2                (defines the second LTERM resource)

NODE name, USER name, and LTERM name are defined during system definition, or during
the ETO logon/signon process. The USERID is also provided during the logon/signon
process. Because parallel sessions have been defined, there may be multiple sessions
between a NODE and IMS, with the NODE logged on multiple times. In this case, there must
be a different USER (SUBPOOL) and LTERMs (NAMEs) associated with each logon. Each
USER may have a distinct signon USERID.

MSC logical links (MSNAMEs)


MSNAMEs representing MSC logical links are defined during the SYSGEN process by the
MSNAME macro:

         MSPLINK        (defines the physical link - not in the resource structure)
         MSLINK         (defines the logical link - not in the resource structure)
name     MSNAME         (defines the MSNAME resource)

Like LTERMs and TRANSACTions, MSNAMEs represent message destinations and, in a
shared queues environment, have their own queue type on the shared queue structure. IMS
must be able to distinguish between this destination type and another.

Static transactions
Transactions are defined to IMS with the TRANSACT macro and represent message
destinations. The TRANSACT macro generates an SMB control block which may be used in
a non-shared queues environment for queuing input messages for scheduling. In a shared
queues environment, the transaction message is queued on the Transaction Ready Queue in
the shared queues structure. Again, because they are a destination for queuing messages,
they must be distinguishable from the other destinations.
TRANSACT CODE=trancode

Other than keeping track of defined transactions, there is no other support for transactions in
STM. Transaction characteristics are not kept in the resource structure.

APPC CPI-C driven transactions


CPI-C driven transactions are used by APPC terminals to enter transactions directly to IMS
application programs without going through normal transaction scheduling. The transaction
code itself is not (must not be) defined to IMS with the TRANSACT macro. Instead, the CPI-C
driven transaction is defined in the TP_Profile. Like static transactions, only the transaction
code itself is managed by STM.

APPC output descriptors


APPC output descriptors are defined in IMS.PROCLIB member DFS62DTx.
U descname parms

IMS treats the APPC descriptor name in the same way it treats an LTERM name - to
determine the destination of a message entered, for example, from a program issuing a
CHNG call. It is therefore necessary to be able to distinguish between a real LTERM and an
APPC descriptor LTERM.
Except for keeping track of the defined APPC descriptor names, there is currently no other
support in STM for APPC sessions. No status is kept for APPC descriptors.

Message destinations
Not all of these names are message destination anchor blocks. For example, although a
message may be sent to a NODE, the message is queued off the control block (the CNT)
representing an LTERM assigned to that NODE. Four of the above names are considered
message destinations for purposes of queuing messages:

- Transaction codes:
  - Static transactions defined in the IMS system generation
  - CPI-C transactions defined in the TP_PROFILE
  - Dynamic transactions defined by the Output Creation Exit (DFSINSX0)
- Logical terminal names (LTERMs):
  - Static LTERMs defined in the IMSGEN
  - Dynamic LTERMs created in an ETO environment
- Logical link names (MSNAMEs):
  - Defined by the MSNAME macro in the IMS system generation
- APPC descriptor names:
  - Defined in IMS.PROCLIB member DFS62DTx. Although messages are not queued by
    APPC descriptor name, this name is used to determine, from the descriptor
    definition, what the message destination is.

Summary of IMS resources managed by STM


The above resources are managed by the sysplex terminal management function of IMS
using the resource structure as a repository for resource-related information. "Resources and
the resource structure" on page 195 describes these resources in a little more detail,
including when they are created and deleted and how they are used.

14.4 STM terms and concepts


Sysplex terminal management introduces some new terms and concepts that need to be
understood before discussing the functionality.

14.4.1 Resource type consistency


Resource type consistency is based on the concept of an IMS message destination as
described in "Message destinations" on page 182. For purposes of resource type
consistency, a message destination is any named resource which may also be the name of a
queue of messages for that resource. For example, in a shared queues environment,
transactions are queued off the (serial) Transaction Ready Queue in the shared queue
structure. Messages destined for logical terminals are queued off the LTERM Ready Queue.
Output messages destined for MSC remote destinations are queued off the Remote Ready
Queue. When a message arrives in IMS, or an application program issues a CHNG or ISRT
call to a TP-PCB, IMS must analyze the name in the message (call FINDDEST) to determine
how to queue the message.
Message destinations have a name type of X'01' as shown in Figure 14-6 and include:

- Transaction codes
- LTERM names
- MSNAMEs
- APPC descriptor names

Other resource types are not checked for consistency since they are not used for message
queuing. For example, it is perfectly alright to have the same name for a NODE, a USER, an
LTERM, and a USERID.
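The queuing decision described above (FINDDEST picks a ready queue from the destination's resource kind) can be sketched in a few lines. The ready-queue names follow the text; the destination table, the names in it, and the function name are illustrative assumptions, not IMS internals:

```python
# Illustrative sketch of destination resolution (the FINDDEST idea).
# The ready-queue names follow the text; the catalog contents are invented.

READY_QUEUE = {
    "TRAN": "Transaction Ready Queue",    # transaction codes
    "LTERM": "LTERM Ready Queue",         # logical terminal names
    "MSNAME": "Remote Ready Queue",       # MSC remote destinations
}

# A toy catalog of defined destinations (hypothetical names).
destinations = {"PAYROLL": "TRAN", "PRSNL": "LTERM", "LINK1": "MSNAME"}

def find_dest(name):
    """Return the shared-queue ready queue for a destination name."""
    kind = destinations.get(name)
    if kind is None:
        raise KeyError("unknown destination: " + name)
    return READY_QUEUE[kind]

assert find_dest("PAYROLL") == "Transaction Ready Queue"
assert find_dest("LINK1") == "Remote Ready Queue"
```

The point of the sketch is that every message destination, whatever its kind, is resolved through one lookup, which is why the destination names must not collide across kinds.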
In an IMSplex, each IMS system has its own system definition, although certainly a single
system definition can be shared by all IMSs. When this is done, we say these IMSs are
cloned, and the message destinations are obviously consistent. However, since it is not a
requirement, and since each IMS might have its own system definition, it is important that,
when resources are defined to these separate IMSs, resources of the same type are defined
consistently. For example, it would be problematical if IMS1 defined a resource named
PRSNL as a transaction, and IMS2 defined a resource named PRSNL as an LTERM. Since
both transaction codes and LTERM names are message destinations, IMS1 would queue a
message with destination PRSNL on the Transaction Ready Queue, and IMS2 would queue it
on the LTERM Ready Queue. Very confusing.
In a list structure (such as the resource structure) which has been allocated with user
managed list entry IDs (LEIDs), no two entries can have the same LEID - they must be unique
within the structure. When IMS wants the RM to create an entry on the structure, it provides a
LEID consisting of the name type + resource name. This may also be referred to as the
resource ID. In the above example, IMS1 would create a transaction entry with an LEID of
01PRSNL. If IMS2 later tried to create an entry for an LTERM named PRSNL, it would also
provide a LEID of 01PRSNL. Since this LEID already exists in the structure, and since it must
be unique, IMS2 would not be allowed to create an LTERM named PRSNL, and the logon
would fail.
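A minimal sketch of this LEID check, using a plain dictionary in place of the CF list structure (all names, the type code string, and the entry layout are illustrative only):

```python
# Hypothetical sketch of resource type consistency enforcement. A dictionary
# stands in for the CF list structure and its user-managed list entry IDs
# (LEIDs). Message destinations (transactions, LTERMs, MSNAMEs, APPC
# descriptors) share one name type, so they compete for the same LEIDs.
MSGDEST_TYPE = "01"

structure = {}  # LEID -> entry data

def create_entry(name_type, resource_name, resource_kind, owner):
    """Create a resource entry; fail if the LEID already exists for a
    different resource kind (a resource type inconsistency)."""
    leid = name_type + resource_name
    existing = structure.get(leid)
    if existing is not None and existing["kind"] != resource_kind:
        return False  # e.g. LTERM rejected: a TRAN already holds this LEID
    structure.setdefault(leid, {"kind": resource_kind, "owner": owner})
    return True

# IMS1 defines PRSNL as a transaction; IMS2 later tries it as an LTERM.
assert create_entry(MSGDEST_TYPE, "PRSNL", "TRAN", "IMS1") is True
assert create_entry(MSGDEST_TYPE, "PRSNL", "LTERM", "IMS2") is False
```

Because the LEID must be unique within the structure, the structure itself enforces the consistency rule; no cross-IMS negotiation is needed.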
Figure 14-3 shows an example of the same resource name being defined by IMS1 as a
transaction and IMS2 as an LTERM.

[Figure 14-3 here shows IMS1 defining PRSNL as a transaction (creating resource
entry 01PRSNL in the resource structure during IMS initialization) while IMS2
defines PRSNL as an LTERM on node PERS2; the logon from PERS2 fails because
LEID 01PRSNL already exists in the resource structure.]

Figure 14-3 Resource Type Consistency

There is a difference here between statically defined terminals and dynamic ETO terminals.
With a static terminal, the static NODE users (SNUs) and LTERMs are created and activated
at logon time. If any one of the LTERMs defined for a static NODE is not consistent, then the
logon is rejected. Since the SNU name is the same as the NODE name, if the NODE is valid,
so is the SNU.
184

IMS Version 8 Implementation Guide

For ETO terminals, the logon process creates the terminal structure (VTCB) and must be
successful before signon is attempted; therefore, before any USER or LTERM entry is
created. If any LTERM is valid (consistent), then the logon is accepted, but any invalid
LTERM is not created. If all LTERMs are invalid, then an attempt is made to create a default
LTERM having the same name as the USER. If this is also rejected, then the signon is
rejected. It may then be necessary for the end-user to log off, signon again with a different
user descriptor, or otherwise correct the problem.

14.4.2 Resource name uniqueness


Resource name uniqueness guarantees that the same resource name will not be active on
more than one IMS within the IMSplex at the same time. An LTERM named PRSNL, for
example, cannot be active on both IMS1 and IMS2. Resource name uniqueness applies to
the following resource types:

- Single session NODEs (static or dynamic)
- LTERMs (static or dynamic)
- USERs (including static NODE users, ETO users, and parallel session ISC subpools)
- USERIDs (unless SGN=G, Z, or M)

It does not apply to these resource types, which may be active on multiple IMSs concurrently:

- Transactions
- MSNAMEs
- Parallel session ISC NODEs
- APPC descriptor names

When a resource (for which the uniqueness requirement is enforced) becomes active
anywhere within the IMSplex, an entry is created in the resource structure identifying the
resource and its owner. The owner is the IMS system on which that resource is active. If that
same resource (or another resource of the same name and type) were to attempt to become
active on this or another IMS in the IMSplex, it would fail. Note that a USERID may be
allowed to be active on multiple IMSs at the same time by coding SGN=G, Z, or M in its
DFSPBxxx PROCLIB member. This is a global (IMSplex-wide) parameter, and the first IMS to
join the IMSplex will set this value in the IMSplex global entry. It then applies to all IMSs
joining the IMSplex later, regardless of the value of SGN set in their DFSPBxxx PROCLIB
members.
Figure 14-4 shows an example with two IMSs both having defined an LTERM resource
named PRSNL. In this example, IMS1 is the first to log on (from NODE PERS1). As a result,
the NODE, SNU, and LTERM resource entries are created in the resource structure (as
shown) with an owner of IMS1. If a user on a different NODE (PERS2) but with the same
LTERM name defined (PRSNL) were to try to log on, that logon would be rejected. This would
be true even if other unique LTERMs were defined for PERS2.
Similar results would happen for ETO terminals, except that if multiple LTERMs are created
and any are successful, then the signon would be successful. As with resource type
consistency, if no LTERMs are unique, then a default LTERM equal to the USER name is
attempted. If that also fails, then the signon is rejected. If the USER is not unique, then the
signon is rejected. No attempt is made to create a default USER.
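The ownership check and the ETO fallback behavior just described can be sketched as follows; the registry, the function names, and all resource names are hypothetical:

```python
# Illustrative sketch of resource name uniqueness. A dictionary of owned
# entries stands in for the resource structure; names are invented.
owners = {}  # (resource_type, name) -> owning IMS

def activate(rtype, name, ims):
    """Claim a resource; False means it is already active on another IMS."""
    owner = owners.setdefault((rtype, name), ims)
    return owner == ims

def eto_signon(user, lterms, ims):
    """ETO signon: accept if the USER and any LTERM activate; otherwise
    fall back to a default LTERM named after the USER; else reject."""
    if not activate("USER", user, ims):
        return False                      # USER not unique: reject outright
    active = [lt for lt in lterms if activate("LTERM", lt, ims)]
    if not active and not activate("LTERM", user, ims):
        return False                      # default LTERM is also in use
    return True

assert activate("LTERM", "PRSNL", "IMS1")        # first activation wins
assert not activate("LTERM", "PRSNL", "IMS2")    # second IMS is rejected
assert eto_signon("USER2", ["PRSNL"], "IMS2")    # falls back to LTERM USER2
```

Note how the USER check has no fallback, matching the rule that no default USER is ever attempted.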


[Figure 14-4 here shows node PERS1 with LTERM PRSNL logging on and signing on
to IMS1, creating NODE, SNU, LTERM, and USERID entries owned by IMS1 in the
resource structure; a later attempt to activate LTERM PRSNL from node PERS2 on
IMS2 fails because the resource is already active on (owned by) IMS1.]

Figure 14-4 Resource Name Uniqueness

Note that for statically defined terminals, the USERID entry is not created until the end-user
signs on. In an ETO environment, the USER (instead of the SNU), LTERM, and USERID
entries are created at signon time.

14.4.3 Resource status


Most IMS terminal/user resources have some kind of status that can be associated with them.
For example, they may be in conversation, in response mode, in TEST MFS mode, and many
others. Some of this status is recoverable and some is not. Some is significant and some is
not. Some is related to a command having been entered from a terminal/user, and some is
related to the work a user is doing. The distinctions between these different status types are
important in understanding how sysplex terminal management deals with resource status
recovery. The following paragraphs describe these distinctions. For purposes of the following
discussion, and for brevity's sake, the term "user" will be used to describe the terminal/user
end of an IMS session. In some cases, the status associated with this user really applies to
the physical terminal.

Command status
Command status is that user status which is set by the entry of an IMS command. Examples
are:
- /STOP NODE | LTERM | USER
- /TEST
- /TEST MFS
- /EXCLUSIVE NODE | USER
- /ASSIGN LTERM | USER .... SAVE | NOSAVE
- /LOCK LTERM | NODE

End-user status
End-user status is status that is the result of work being done by the user. For example:
- Response mode - when a user enters a response mode transaction
- Conversational mode - when the user has entered an IMS conversational transaction
- STSN mode - sequence numbers updated every time an input message is received
  from, or an output message is sent to, a STSN device (for example FINANCE, SLUTYPEP,
  or LUTYPE6/ISC)

Recoverable status
Recoverable status is that command or end-user status which, following a successful session
or IMS restart, will be restored by IMS if IMS knows what that status was. Sometimes
recoverable status may not be known to IMS at restart and so is not restored. For example, if
the status was only known locally (that is, not saved in the resource structure) and if IMS is
cold started, then the status would not (could not) be recovered.
This definition applies whether or not IMS is running with STM enabled. When STM is not
enabled, recoverable status is saved and restored from local control blocks and log records
only.
Examples of recoverable status include:

- Conversational mode
- TEST MFS mode
- LTERM assignments made with the SAVE keyword
- Fast Path response mode
Note: When a user enters a Fast Path (EMH) transaction, that terminal is put into
response mode. If the session or IMS fails, when it is reestablished, the terminal will be
put back into response mode. See below for the description of non-recoverable full
function response mode.

Non-recoverable status
Non-recoverable status is that command or end-user status which is maintained by IMS only
as long as that user is active. In simple terms, it is whatever status is not in the recoverable
category. If the session terminates, normally or abnormally, that status is discarded by IMS. If
IMS fails, the status will be discarded during emergency restart. This definition applies
whether or not IMS is running with STM enabled. Even when STM is enabled, this status will
not be recovered.
Examples of non-recoverable status include:

- /TEST mode (except for /TEST MFS)
- /LOCKed
- /ASSIGNed LTERMs and USERs without the SAVE option
- Response mode for full function transactions
Note: This means that, if a user is in response mode after entering a full function (not
Fast Path) transaction, and the session (or IMS) fails, then when the user reestablishes
the session, that terminal will NOT be in response mode.


14.4.4 Significant status


In the context of sysplex terminal management, significant status applies to recoverable
status which will prevent a resource entry from being deleted from the resource structure.
That is, if a resource has significant status at the time of session or IMS failure, then it will
prevent that resource entry from being deleted at session or IMS termination. There are two
types of significant status:
- Command significant status
- End-user significant status
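The deletion rule just stated reduces to a one-line predicate: an entry is deletable only when neither kind of significant status remains. This is a simplified reading of the text, not IMS logic:

```python
# Simplified predicate: a resource entry survives session or IMS termination
# whenever any significant status (command or end-user) remains for it.

def delete_on_termination(command_status, end_user_status):
    """True if the resource entry can be deleted from the structure."""
    return not (command_status or end_user_status)

assert delete_on_termination(set(), set()) is True
assert delete_on_termination({"/STOP"}, set()) is False
assert delete_on_termination(set(), {"conversation"}) is False
```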

Command significant status


There are six types of command significant status that can exist for a user in a STM
environment. The command significant status is set by the entry of an IMS command and it is
always global. The notes below the bulleted items identify the information saved for each
status:
- /TEST MFS:
  Applies to NODEs and users; indicates that the terminal is in TEST MFS mode.
- /STOP NODE | LTERM | USER xxx:
  Applies to NODEs, LTERMs, and users; indicates that the resource is stopped.
- /EXCLUSIVE NODE | USER xxx:
  Applies to NODEs and users; identifies the NODE or user which is to be used exclusively
  for input or output from this terminal.
- /TRACE SET ON NODE xxx:
  Applies to NODEs; indicates that the NODE is being traced.
- /CHANGE USER xxx AUTOLOGON ... SAVE:
  Applies to ETO users; identifies the changed autologon information for that ETO user. This
  is not significant unless the SAVE option is specified.
- /ASSIGN LTERM | USER xxx TO yyy SAVE:
  Applies to LTERMs and users; identifies the assignments for the user or LTERM; this is
  not significant unless the SAVE option is specified.

End-user significant status


There are three types of end-user significant status applicable to STM:
- Conversational status:
  When a user enters a conversational transaction, that user is said to be in conversation
  with IMS. Recoverable information about that conversation includes a conversation ID
  and, of course, a transaction code. Conversation IDs and transaction codes for HELD
  conversations are also significant and are saved. A conversational input in-progress flag
  indicates that a conversational transaction has been entered and queued, but that no
  output has yet been delivered.
- Fast Path response mode:
  When a user enters a Fast Path transaction (one which is scheduled into an EMH region),
  that user is in Fast Path response mode. Note that, while this is considered significant, full
  function response mode is not. Recoverable information consists of a Fast Path input
  in-progress flag that indicates that a user has entered a Fast Path transaction and no
  output response has yet been delivered.


- STSN status:
  STSN devices are those that use the Set and Test Sequence Number instruction to
  maintain accountability for input and output messages. Each time an input message
  arrives from a STSN device, the input sequence number is incremented. A similar function
  exists for output messages. STSN devices include terminals defined to IMS (statically or
  dynamically) as FINANCE, SLUTYPEP, or LUTYPE6 (ISC). The recoverable information
  consists of the input and output sequence numbers.
For each of these, the installation may choose whether or not to save the recoverable
information.

14.4.5 Status recovery mode (SRM)


"End-user significant status" on page 188 described the three types of end-user significant
status. Status recovery mode (SRM) identifies whether, and how, that status is to be
maintained. There are three choices: SRM=GLOBAL, LOCAL, or NONE.

SRM=GLOBAL
   End-user significant status is to be maintained on the resource structure.
   Status is also maintained locally while the resource is active. When the
   resource becomes inactive, local status is deleted. It is recoverable
   across a session termination/restart or IMS restart from information on
   the resource structure on any IMS in the IMSplex. A resource structure is
   required for SRM=GLOBAL. This is the default when using CSL (RM) with
   a resource structure and shared queues.
   There is a special case for Fast Path when SRM=GLOBAL but the Fast
   Path transaction is scheduled locally (not put on the shared EMHQ). When
   this occurs, since global recovery is not possible, the SRM is temporarily
   changed to LOCAL. After the transaction is complete, and the response
   sent, the SRM reverts back to GLOBAL.

SRM=LOCAL
   End-user significant status is to be maintained in local IMS control blocks
   only. When the resource becomes inactive, its status is maintained in
   local control blocks and in the IMS system checkpoint log records. The
   status is recoverable only on the local system from the local control blocks
   or, in the case of an IMS restart, from the system checkpoint records. This
   option is the default SRM when not using RM with a resource structure
   and shared queues. A resource structure is not required for SRM=LOCAL.

SRM=NONE
   End-user significant status is to be maintained in local IMS control blocks
   only as long as the resource is active on that IMS. When the resource is
   no longer active, for any reason, that status is deleted. It is not
   recoverable on any IMS. A resource structure is not required for
   SRM=NONE.

Note: When a resource structure exists, resource entries with their SRM settings are
maintained in that structure regardless of the SRM setting. That is, even if SRM=LOCAL or
NONE, the SRM setting will be kept in the resource entry in the structure. When no resource
structure exists, the SRM setting is kept in the local control blocks.
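As a rough summary of the three modes, the recovery behavior can be modeled as a small function. This is a simplified reading of the descriptions above (it ignores the Fast Path special case and command status), not IMS logic:

```python
# Condensed model of where end-user significant status survives, per status
# recovery mode (SRM). Simplified reading of the text; not IMS internals.

def status_recovery(srm, restart_on_same_ims):
    """Return True if end-user significant status is recoverable."""
    if srm == "GLOBAL":
        # Kept on the resource structure: recoverable on any IMS.
        return True
    if srm == "LOCAL":
        # Kept in local control blocks / checkpoint records only.
        return restart_on_same_ims
    if srm == "NONE":
        # Deleted as soon as the resource is no longer active.
        return False
    raise ValueError("unknown SRM: " + srm)

assert status_recovery("GLOBAL", restart_on_same_ims=False) is True
assert status_recovery("LOCAL", restart_on_same_ims=False) is False
assert status_recovery("LOCAL", restart_on_same_ims=True) is True
assert status_recovery("NONE", restart_on_same_ims=True) is False
```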

14.4.6 Status recoverability (RCVYxxxx)


Although there are three types of significant, and therefore recoverable, end-user status, the
installation may choose NOT to recover some or all of them. For example, it might decide not
to make STSN sequence numbers recoverable, letting the remote program handle a session
cold start (sequence numbers set to zero). This is referred to as status recoverability. The
value is set by the RCVYxxxx parameter, where xxxx is CONV, STSN, or FP. Section 14.5.1,
"Setting SRM and RCVYxxxx" on page 191 describes how to set these values.
RCVYCONV=YES | NO

When set to YES, information required to recover a conversation will be kept across a
session termination/restart and IMS termination/restart. Where this information is kept
depends on the status recovery mode.
When set to NO, conversational information is kept locally and only as long as the session
is active. When the session terminates, even without explicitly exiting the conversation
(/EXIT command), the conversational information will be discarded by IMS and the
conversation will be exited. In the case of an IMS failure, emergency restart will do the
same. The last conversational output (SPA) will be passed to the Conversational
Abnormal Termination Exit (DFSCONE0) for processing.
RCVYFP=YES | NO

When set to YES, Fast Path (EMH) response mode is recoverable. This means that if a
session terminates (or IMS terminates) while the terminal is in Fast Path response mode,
when that session is reestablished, it will be returned to response mode. When the
response is available, it can be delivered to the terminal.
When set to NO, Fast Path response mode is not restored after session termination and
restart. Note that for Fast Path transactions, when RCVYFP=NO, the Fast Path response
message is also not recoverable. When the response is discovered by IMS, it will be
discarded. This is not true for full function response mode. Full function response mode is
never recoverable, but the message itself is recoverable. It will not be discarded.
RCVYSTSN=YES | NO

When set to YES, STSN sequence numbers for both input and output messages will be
saved and are recoverable.
When set to NO, they are not recoverable. When a session terminates and then is
restarted, the sequence numbers will revert to zero (cold start). This may have particular
significance for ETO STSN devices. When an ETO STSN session terminates, its control
block structure (VTCB) is NOT collapsed even though it may have no other significant
status. However, if RCVYSTSN is set to NO, then it will be collapsed unless there is some
other significant status.
When a resource structure exists, the RCVYxxxx settings are maintained in that structure
regardless of the SRM setting. That is, even if SRM=LOCAL or NONE, the RCVYxxxx
settings will be kept in the resource entry in the structure. When no resource structure exists,
these settings are kept in the local control blocks. Note that the default for all is YES.

14.5 Enabling sysplex terminal management


Sysplex terminal management is enabled whenever IMS is running in a CSL environment
with a resource structure and shared queues. Resource status recovery requires that all IMSs
in the IMSplex belong to the same shared queues group. Without shared queues, resource
status recovery is not functional. Setting up a CSL environment on page 290 describes the
steps required to enable the Common Service Layer environment. This section provides a bit
more detail on those parameters related directly to STM.


14.5.1 Setting SRM and RCVYxxxx


Status recovery mode and status recoverability apply to each terminal/user session and, with
one exception, are set when the terminal logs on. The exception is for dynamic non-STSN
terminals, in which case the SRM is set when the ETO user signs on.
Each IMS can specify a default, and then the defaults can be overridden at logon or signon
time by the Logon Exit (DFSLGNX0) or the Signon Exit (DFSSGNX0). The following
paragraphs identify how to set SRM and RCVYxxxx values, and how to override them using
IMS exits.

System defaults in DFSDCxxx


In a CSL environment, there are four new parameters in PROCLIB member DFSDCxxx which
determine the system defaults for SRM and RCVYxxxx. One of these is SRMDEF, which is
the IMS system default to use if not overridden at logon or signon time. Note that this is not an
IMSplex-wide default - each IMS can have different defaults, although why one would do this
is unclear. Three other parameters determine the recoverability of each type of end-user
significant status: RCVYCONV, RCVYSTSN, and RCVYFP. These also are IMS system
defaults to use when not overridden. For example, consider the following coding:
SRMDEF=GLOBAL
RCVYCONV=YES
RCVYSTSN=NO
RCVYFP=YES

This will set SRM=GLOBAL (save recoverable status in the resource structure) and turn on
recoverability for conversations and Fast Path, but not for STSN sequence numbers. There
are three rules that apply when specifying these values:
- SRMDEF=GLOBAL cannot be specified without a resource structure
- RCVYxxxx=YES cannot be specified if SRM=NONE
- RCVYSTSN=YES cannot be specified if RCVYFP=NO
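These three rules can be checked mechanically. The sketch below is only a restatement of the rules as given above, with the YES|NO parameters simplified to booleans; the function name is an invention:

```python
# Restatement of the three DFSDCxxx rules as a validator. Parameter
# spellings follow the text; True/False stand in for YES/NO.

def validate(srmdef, rcvyconv, rcvystsn, rcvyfp, have_structure):
    """Return "OK" or the first rule that the combination violates."""
    if srmdef == "GLOBAL" and not have_structure:
        return "SRMDEF=GLOBAL requires a resource structure"
    if srmdef == "NONE" and (rcvyconv or rcvystsn or rcvyfp):
        return "RCVYxxxx=YES invalid with SRM=NONE"
    if rcvystsn and not rcvyfp:
        return "RCVYSTSN=YES invalid with RCVYFP=NO"
    return "OK"

# The DFSDCxxx example above: GLOBAL with CONV and FP recoverable, STSN not.
assert validate("GLOBAL", True, False, True, have_structure=True) == "OK"
assert validate("GLOBAL", True, False, True, have_structure=False).startswith("SRMDEF")
assert validate("NONE", True, False, False, have_structure=False).startswith("RCVYxxxx")
```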

If SRMDEF is not specified in DFSDCxxx, IMS will determine the system default according to
the following rules:
- SRMDEF=GLOBAL when using a resource structure and shared queues
- SRMDEF=LOCAL when not using a resource structure and shared queues
- RCVYxxxx=YES unless SRM=NONE
Note: Other than SRM=GLOBAL, these parameters can be set even when not using the
Common Service Layer (CSL). For example, if RCVYSTSN=NO, then STSN sequence
numbers would be discarded no matter how a STSN session is terminated.

ETO descriptors
For ETO, there is an addition to the user descriptor definitions that allows each of these
parameters to be specified as defaults for that session. They are coded the same as in
DFSDCxxx. They override the values specified in DFSDCxxx but can be overridden by the
Logon or Signon Exit. For example:
U HENRY LTERM=(HENRYSLT) SRMDEF=GLOBAL RCVYCONV=YES RCVYSTSN=NO RCVYFP=NO ...

14.5.2 Overriding SRM and RCVYxxxx defaults


The system defaults can be overridden for individual sessions at logon or signon time by the
Logon Exit (DFSLGNX0) or Signon Exit (DFSSGNX0). If, when an end-user logs on or signs
on, a resource entry already exists with end-user significant status (and therefore already has
SRM and RCVYxxxx settings), then the exits cannot override them. For example, if a session
has SRM=GLOBAL and RCVYCONV=YES, and then that session terminates with
conversational status, the resource entry is not deleted (resource entries are not deleted
when a session terminates with significant status). When that user logs back on, the Logon or
Signon Exit cannot override these settings. Any attempt to override them is ignored. If,
however, the Logon Exit (or Signon Exit) enters an invalid value, then the logon (or signon)
will be rejected. For example, the Logon Exit cannot specify SRM=GLOBAL if there is no
resource structure. This is invalid and the logon would be rejected.

Logon exit (DFSLGNX0)


The Logon Exit, when present in SDFSRESL, is driven whenever a static or dynamic
terminal logs on. This exit knows what the system defaults are and can override them if they
have not already been set in a previous session and still exist. If a resource already has an
SRM and end-user significant status, the exit cannot change it.

Signon Exit (DFSSGNX0)


The Signon Exit is driven only for ETO terminals when the user signs on. For non-STSN
terminals, the exit can override the SRM and RCVYxxxx values if they have not already been
set in a previous session and still exist.
For STSN terminals, since the STSN numbers are set at logon time and are associated with
the VTAM session, the defaults for these settings (SRM and all RCVYxxxx) can only be
overridden by the Logon Exit - not by the Signon Exit.
Important: SRM and RCVYxxxx apply only to end-user significant status. They do not
apply to command significant status. Command significant status is always kept globally if
a resource structure exists, and locally if no resource structure exists. It is always
recoverable.

14.6 Ownership and affinities


There are two types of affinities that are possible with STM. VTAM generic resource affinity is
used by VTAM to route terminal logon requests to a particular IMS which holds significant
status for that resource. Resource Manager (RM) affinity is used by STM to prevent a user
from logging on to one IMS when its end-user significant status may exist only in another
(SRM=LOCAL).

14.6.1 Resource ownership and RM affinity


It has been mentioned several times that when a resource becomes active, a resource entry
is created in the resource structure and ownership is set to that IMS. For example, when a
NODE logs on to IMS1, a resource entry for that NODE is created in the resource structure
and the ownership of that NODE is set to IMS1. That NODE is said to have an RM affinity for
IMS1. The two terms are synonymous.
RM affinity (ownership) applies to NODEs, (Static NODE) Users, and LTERMs. It also applies
to Userids if single signon is being enforced (that is, if SGN is not equal to G, Z, or M). When
a resource is owned by one IMS, it cannot become active on another IMS. Examples later on
will show how this can happen and how IMS reacts.


14.6.2 VTAM generic resources affinity


IMS began supporting VTAM generic resources (VGR) in Version 6. When running with VGR,
an end-user can log on to a generic name (for example, IMSX) representing a VTAM generic
resource group that all of the participating IMSs have joined by including a GRSNAME startup
parameter in DFSDCxxx (for example, GRSNAME=IMSX). While that session is active, it has
a VGR affinity for that IMS. The VGR affinity is kept in an affinity table in the VGR structure in
the Coupling Facility.
When the session terminates, its VGR affinity may or may not be deleted from the affinity
table, depending on the status of that user's session and the IMS startup parameter
GRAFFIN. If it has not been deleted by the next time the user logs on to IMSX (for example),
VTAM will direct the logon request to the IMS for which the NODE already has a VGR affinity.
If that IMS is not active, then the logon request will fail. The user has the option of waiting for
that IMS to come back up, or of logging on directly to another IMS. If the user logs on directly
to another IMS, then that user's status will not be known and cannot be restored. While this
still might be desirable for availability reasons (you may not want to wait for a failed IMS to be
restarted), not only is the status lost, but when that user later logs back on to the original IMS,
the original status will be restored, possibly causing great confusion.

14.6.3 Setting VGR affinity management responsibility


Prior to IMS Version 8, VGR affinity management was determined by the value of the
GRAFFIN parameter in DFSDCxxx at IMS initialization. GRAFFIN has two possible values:
GRAFFIN=VTAM

When set to VTAM, the VGR affinity will be deleted whenever the
session terminates, regardless of the status of the user. The next time
that user logs on, VTAM will direct the logon request to any active IMS
in the VGR group. That may or may not be the IMS which has
knowledge of the significant status.

GRAFFIN=IMS

When set to IMS, IMS decides whether or not to delete the VGR
affinity when the session terminates. IMS makes this decision based
on whether or not the terminal has significant status at the time of
session termination. The intent is to force (or at least encourage) the
user to log back on to the same IMS which has knowledge of some
significant status, such as conversational status.

Both of these values were system-wide. That is, for any given IMS, every session's VGR
affinity was managed either by VTAM (always deleted) or by IMS (deletion depends on
status).

14.6.4 VGR affinities and IMS Version 8


VGR affinities have the same purpose in IMS Version 8 as in prior versions - to force
(or encourage) a user logging on to return to the IMS which has knowledge of that user's
significant status. However, in IMS Version 8, when a resource structure is available and
SRM=GLOBAL, significant status is known by all the IMSs in the IMSplex. It is therefore
unnecessary to use VGR affinity to direct a user's logon request back to the same IMS where
the session was terminated with significant status.
When SRM=LOCAL, only the original IMS knows that status, and it is necessary to return to
the original IMS to restore it. When SRM=NONE, the status does not matter, so again it is not
necessary to return to the same IMS.
VTAM generic resources support in IMS Version 8 depends on the level of the operating
system - OS/390 or z/OS.

VGR support when running with OS/390 V2R10


The installation still specifies the GRAFFIN parameter, and this parameter is still a
system-wide value determining whether IMS or VTAM will manage all of the VGR affinities.
When GRAFFIN=VTAM, all affinities will be deleted when a session terminates. When
GRAFFIN=IMS, IMS will determine whether or not to delete the VGR affinity. But in IMS
Version 8, this decision is based not just on whether significant status exists, but also on the
SRM setting.
SRM=GLOBAL or SRM=NONE

GRAFFIN=IMS or GRAFFIN=VTAM
With any combination of the above, the VGR affinity will be deleted at session
termination.
SRM=LOCAL

GRAFFIN=IMS
If significant status exists at session termination, IMS will not delete the VGR affinity.
The next generic logon will go back to the same IMS and its recoverable status will be
restored from local control blocks.
GRAFFIN=VTAM
With or without significant status, the VGR affinity will be deleted by VTAM (which does
not know that significant status exists). It will be shown later, however, that if significant
status does exist for the user, even without VGR affinity, an RM affinity will still exist.

VGR support when running with z/OS V1R2 or higher


Beginning with z/OS 1.2, VTAM supports session level affinities. This means that, as each
NODE logs on, IMS can decide whether VTAM or IMS is to manage the VGR affinities. The
GRAFFIN parameter is no longer required and will be ignored if still specified. As with
OS/390, there are some rules about how IMS sets the affinity management attribute.
VTAM managed affinities:

When this is set, VTAM will delete the affinity whenever the session terminates. The next
time the user logs on, VTAM may route the logon request to any available IMS in the VGR
group. VTAM-managed is set for:
All terminals with SRM=GLOBAL or NONE:
In this case, we either have the status in the resource structure, or we do not care about
recovering the status. So, it does not matter which IMS the user logs on to.
All ETO non-STSN terminals:
For ETO non-STSN terminals, the SRM is not set until the user signs on. But the VGR
affinity management attribute must be set at logon time. Not knowing what the SRM
value will be, IMS assumes GLOBAL (or NONE) and sets it to VTAM-managed. When the
session terminates, IMS will disconnect the user structure from the terminal structure
and VTAM will delete the affinity. Any end-user significant status stays with the User,
not the NODE. If SRM had been set to LOCAL, that user could log on to another IMS.
This would be allowed since there is no RM affinity for the NODE. However, when the user
tries to sign on, IMS would find the RM affinity for the User (and LTERM) and reject the
signon.
IMS managed affinities:

When this is set, IMS will decide at session termination whether or not to delete the
affinity. IMS uses end-user significant status to make this determination. If it exists, then
IMS will not delete the VGR affinity. The next generic logon will be routed to the same IMS.


All static terminals with SRM=LOCAL:


When SRM=LOCAL, end-user significant status is known only to the local IMS.
Dynamic STSN terminals with SRM=LOCAL:
Unlike ETO non-STSN terminals, for ETO STSN terminals, the SRM is set at logon
time, so IMS knows whether SRM is LOCAL or not. If it is LOCAL, then IMS will want to
manage the VGR affinity since it will be the only IMS to know whether the user has
end-user significant status.
Table 14-1 summarizes how IMS sets VGR affinity management.
Table 14-1   VGR affinity management

Terminal Type       SRM             Managed by
Static              Global, None    VTAM
Dynamic STSN        Global, None    VTAM
Dynamic non-STSN    Any             VTAM
Static              Local           IMS
Dynamic STSN        Local           IMS

The existence of command significant status, when a resource structure exists, does not
affect the deletion of a VGR affinity. Because command significant status is always GLOBAL
when a structure exists, command significant status can always be recovered on another IMS
in the IMSplex. When no resource structure exists, then command significant status is always
LOCAL and will prevent the deletion of a VGR affinity.
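Taken together, the z/OS session-level rules summarized in Table 14-1 and the deletion decision at session termination can be modeled as follows. This is an illustrative Python sketch of the logic described in the text, not IMS internals; names are hypothetical:

```python
def vgr_affinity_manager(terminal_type, srm):
    """Return who manages the VGR affinity for a session
    (z/OS V1R2 or higher, session-level affinities), per Table 14-1."""
    if terminal_type == "DYNAMIC_NON_STSN":
        # SRM is unknown at logon time; IMS assumes GLOBAL (or NONE)
        return "VTAM"
    return "IMS" if srm == "LOCAL" else "VTAM"

def delete_affinity_at_termination(manager, end_user_significant_status):
    """VTAM-managed affinities are always deleted at session termination.
    IMS-managed affinities are kept only when end-user significant status
    exists (command status never prevents deletion when a resource
    structure exists)."""
    if manager == "VTAM":
        return True
    return not end_user_significant_status
```

For example, a static terminal with SRM=LOCAL gets an IMS-managed affinity, which survives termination only if the user holds end-user significant status.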

14.7 Resources and the resource structure


The resource structure is used to keep information about the IMSplex and its members (IMS,
CQS, and the RM), about global IMSplex processes (such as the global online change
process), and about individual terminal and user resources. This section describes those
resource entries used for the sysplex terminal management function, including when they are
created, what data they contain, how they are used, and when they are deleted.

14.7.1 Resource structure components and characteristics


When a connector connects to a structure defined in the CFRM policy, that connector
identifies the type of structure (list, cache, or lock) and its characteristics. For the resource
structure (and for the shared queues structures), the Common Queue Server (CQS) is the
connector. Some of the characteristics of the resource structure specified by CQS are:
How many List Headers (LH)

The resource structure is allocated with 192 list headers. A list header is an anchor point
for list entries.
What type of List Entry (LE)

Each list entry represents one IMSplex or terminal/user resource on the structure. Each
list entry is composed of several parts.


List Entry Controls (LEC)


Header for each list entry. The LEC contains information about the list entry, such as:

Entry Key (ENTRYKEY):


For RM, the entry key is the resource type plus resource name (for example,
X'01' + trancode). When RM asks CQS to create an entry, CQS uses the resource
type to identify a range of 11 list headers, and the resource name to determine
on which LH in that range to place the list entry. The entry key does not have
to be unique, although for the resource structure, RM keeps them unique.

User-managed List Entry ID (LEID):


The LEID is the name type plus resource name (for example, X'01' plus the LTERM
name). The LEID must be unique within the structure. STM uses this characteristic
to enforce resource type consistency.

Version:
Number incremented by one each time the resource entry is updated. This is used
by RM to ensure that updates to the structure are serialized.

Other:
There are several other fields in the list entry controls that are used by CQS, XES or
the CFCC to manage the structure. They are not of interest for this discussion.

Adjunct area (ADJ):


This is optional for list structures in general, but for the resource structure, every list
entry has one. The adjunct area is 64 bytes and contains the resource owner and a
small amount (up to 24 bytes) of client data (DATA1).
Data elements (DE):
Data elements are also optional for a list structure, but when defined, are used to
contain user data that is too large to fit in the adjunct area. The resource structure is
defined with 512 byte data elements. Each list entry may have zero or more data
elements, containing recoverable resource data (DATA2) that is too large to fit in
DATA1.
Other characteristics of the resource structure include:

Structure is persistent:
Will not be deleted when all connectors are gone.
Connection is persistent:
Will not be deleted if connector abends.
Supports (new) structure management enhancements. See Chapter 10, Coupling
Facility structure management on page 137 for a description of the following
enhancements.

 Alter and autoalter
 System managed rebuild
 System managed duplexing

Figure 14-5 is a simple illustration of the resource list structure. Each resource entry on the
structure is composed of one list entry control (LEC), one adjunct area (ADJ) and zero or
more data elements (DE). Whether a data element is needed depends on how much
recoverable data is kept for that resource.

[Figure 14-5 illustrates the resource list structure: 192 list headers (0-191), divided between
CQS private lists and client lists; each list entry consists of List Entry Controls (ENTRYID/LEID,
ENTRYKEY, VERSION, and other fields), a 64-byte adjunct area (owner, client DATA1, reserved),
and zero or more 512-byte data elements holding the CQS prefix and client DATA2, up to just
under 61 KB.]
Figure 14-5 Resource Structure Components

CQS reserves list headers 0-70 for its own use. Presently, the only LH used by CQS in the
resource structure is list header zero (LH0). LH71 through LH191 are available for use by the
CQS client (Resource Manager). The ENTRYKEY is used by CQS to determine on which list
header any particular resource entry (list entry) is to be placed.
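As a rough illustration of how an entry key might map to a list header, the following sketch builds a key from the resource-type byte and name, then hashes the name into a range of 11 list headers within the client area (LH71-LH191). The actual CQS hashing algorithm is not described here, so both the range selection and the hash are purely illustrative assumptions:

```python
def entry_key(resource_type, name):
    """Build an entry key: one resource-type byte plus the resource name
    (for example, X'01' + trancode). EBCDIC encoding and field padding
    are omitted for simplicity."""
    return bytes([resource_type]) + name.encode("ascii")

def list_header(resource_type, name, base=71):
    """Pick a list header for an entry: the resource type selects a range
    of 11 list headers, and a hash of the resource name selects one header
    within that range. (Hypothetical model -- CQS's real algorithm may
    differ.)"""
    range_start = base + (resource_type % 11) * 11  # assumed range selection
    return range_start + (sum(name.encode("ascii")) % 11)
```

The key property the sketch preserves is that entries with the same resource type cluster into the same small range of client list headers, and a given entry key always maps to the same header.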

14.7.2 Resource entries in the resource structure


Figure 14-6 on page 198 identifies the IMSplex resources, resource types, and name types.
In each case, the resource entry identifies the existence of a resource in the IMSplex, whether
or not it is active, and other information about that resource, some of which is recoverable
status. The ENTRYKEY (resource type + resource name) identifies the resource. Information
about that resource is kept in the adjunct area (OWNER and DATA1) and, when DATA1 is not
large enough, in one or more data elements (DATA2).


Resource Type                Name Type                    Definition           Meaning
01                           01                           Transaction          Application program message destination
02                           01                           Lterm                Logical terminal message destination
03                           01                           Msname               MSC logical path name message destination
04                           04                           User                 Person signed on to an ETO terminal or
                                                                               parallel session ISC SUBPOOL
05                           05                           Node                 VTAM terminal (not APPC)
10, 21, 32, (+11), ..., 252  10, 21, 32, (+11), ..., 252  Vendor and Customer  Reserved for vendor & customer use
11                           11                           IMSplex              IMS global information
12                           01                           CPI-C Transaction    CPI-C transaction
14                           01                           APPC Descriptor      APPC descriptor name - message destination
15                           15                           Userid               User ID for security checking
26                           26                           Static Node User     User of a statically defined terminal or
                                                                               single session ISC node
242                          242                          RM Process           RM global process status
253                          253                          RM Global            RM global information
Figure 14-6 STM Resource Types and Name Types

IMSplex entries
Several of the resource entries apply to the IMSplex itself. Some of these are global (only one
entry for the IMSplex) and some are local (multiple entries). Only two of these entries play a
significant role in STM. They are the DFSSTMGBL entry (one per IMSplex) and the
DFSSTMLimsid entries (one for each IMS). The others are not discussed here.

DFSSTMGBL
Contains the multiple signon indicator (SGN=). This entry is created when the first IMS
joins the IMSplex. The signon indicator cannot be changed by the second or succeeding
IMSs. This is used to determine whether to enforce resource name uniqueness for
Userids. To change this value for the IMSplex, all IMSs must be shut down. Then the first
to rejoin can reset this value. The entry is never deleted.

DFSSTMLimsid
Created for each IMS that joins the IMSplex. When the entry is created, or when IMS
rejoins the IMSplex, then the ownership of this resource is set to that IMS. When IMS
shuts down normally, the ownership is released. If IMS fails, then it cannot release
ownership, so the entry remains with its ownership set to the failed IMS. This is used by
other IMS systems in the IMSplex to determine whether an IMS system has failed, and is
important for cleanup activities after an IMS failure. This entry is deleted only if IMS is shut
down with the LEAVEPLEX keyword:
/CHE FREEZE LEAVEPLEX

When XRF is enabled, both IMSs (active and alternate) have entries. They contain a flag
indicating whether this IMS instance is an XRF active or alternate system, and the
RSENAME. The RSENAME is used as the owner of a terminal resource instead of the
IMSID. If the alternate must take over, then it owns those resources which had been
owned by the old active IMS.

Sysplex terminal entries


Most of the entries in a resource structure represent the STM-managed resources associated
with the terminal/user environment. These entries are used to identify a resource and to
maintain information about the status of that resource. Although these entries are always
created when they are activated (for example, when a NODE logs on), their deletion is usually
determined by the existence of significant status and the values of SRM and RCVYxxxx.
The following paragraphs identify each type of resource and some of the information about
these resources.

Static transaction entry


Represents a transaction code defined statically to IMS during the IMSGEN process. One is
also created for transactions dynamically defined by the Output Creation Exit (DFSINSX0).
Entry key = X'01' + transaction code; LEID = X'01' + transaction code
Created
During IMS initialization
When added by global or local online change
Usage and comments
Enforce resource type consistency
No recoverable information
Deleted
Never deleted; must delete (SETXCF FORCE) structure and create new one to delete
a transaction

CPI-C driven transaction entry


Represents a transaction code defined in APPC TP_PROFILE and entered from an APPC
device. The transaction code is used to schedule a dependent region program to process
CPI-C driven transaction.
Entry key = X'0C' + transaction code; LEID = X'01' + transaction code
Created
When first CPI-C transaction driven on any IMS
Usage
Enforce resource type consistency
The IMSID of each IMS where the transaction is driven is added to DATA1 and then, if
more than two IMSs, to DATA2.
When IMS terminates normally, it removes its IMSID from this entry. When IMS
terminates abnormally, another IMS is informed (by XCF) and removes the IMSID.
No recoverable information
Deleted
When all IMSIDs in the resource entry are gone, the last IMS deletes the resource.

NODE entry
Represents a static or dynamic (ETO) NODE.
Entry key = X'05' + NODEname; LEID = X'05' + NODEname
Created
When NODE first becomes active on any IMS. It will also be created when a command
sets significant status (for example, /STOP NODE). If the NODE does not exist when
this command is entered, it will be created with the stopped status.
Usage and comments
Enforce resource name uniqueness for single session VTAM NODEs. This is not
enforced for parallel session ISC NODEs.
Maintain recoverable status related to the NODE (for example, NODE is stopped)
Used, along with (Static NODE) User entry, to restore recoverable status across
session termination/restart (including IMS termination/restart)

Deleted
When NODE is no longer active and has no significant status

User entry
Represents an ETO user or a parallel session ISC SUBPOOL.
Entry key = X'04' + username (subpool name); LEID = X'04' + username
Created
For ETO, when user structure (SPQB) is created at signon time.
For parallel session ISC, when parallel session (subpool) becomes active. There will
(probably) be multiple user entries for each NODE.
When command sets significant status (for example, /STOP USER).
Usage and comments
Enforce resource name uniqueness
Maintain user-related recovery information. Most significant status, such as
conversational, Fast Path, and command status is kept here. Also, this entry contains
segments in DATA2 with information about each assigned LTERM and its status. For
example, a /STOP LTERM command would set significant status in the LTERM
segment of this entry, not in the LTERM entry.
Used, along with NODE entry, to restore recoverable status across session
termination/restart (including IMS termination/restart)
Deleted
When the user is no longer active and contains no significant status. Note that, when
SRM=LOCAL and IMS fails, surviving IMSs will not know whether there is significant
status for this entry in the failed (local) IMS, so the resource will not be deleted.
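The deletion rule for User entries, including the SRM=LOCAL caveat after an IMS failure, can be expressed as a single decision. This is a hypothetical sketch of the rule as stated above, not actual cleanup code:

```python
def can_delete_user_entry(active, significant_status, srm, owner_ims_failed):
    """A User entry is deleted only when the user is no longer active and
    the entry holds no significant status. With SRM=LOCAL, a surviving IMS
    cleaning up after a failed owner cannot know whether significant status
    exists in the failed (local) IMS, so it must keep the entry."""
    if active or significant_status:
        return False
    if srm == "LOCAL" and owner_ims_failed:
        return False
    return True
```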

Static NODE user entry


Represents the user associated with a static NODE. This resource does not exist in the IMS
environment (that is, no definitions or control blocks). It is invented by STM to provide a
similar construct for static NODEs, ETO NODEs, and parallel session ISC NODEs and their
users.
Entry key = X'1A' + NODEname; LEID = X'1A' + NODEname
Created
Same time the NODE is created
Usage and comments
This entry has the same usage and content as the User entry described above. One
difference between this entry and the User entry is that this entry cannot exist without
the NODE entry. For ETO, the User entry can exist independently of the NODE.
Deleted
Same time NODE is deleted

LTERM entry
Represents a static or dynamic logical terminal (LTERM)
Entry key = X'02' + LTERMname; LEID = X'01' + LTERMname
Created
For static terminals, the same time the NODE is created
For ETO terminals and ISC sessions, the same time the USER is created
Usage and comments
It is used only to enforce resource type consistency and resource name uniqueness.
Contains no recovery related information. Any significant status for an LTERM is kept
in the associated (Static NODE) User entry.
Deleted
For static LTERMs, deleted when the NODE and Static NODE User entries are deleted
For ETO LTERMs, or LTERMs assigned to parallel session ISC NODEs, deleted when
the User entry deleted

USERID entry
Represents a signed on user.
Entry key = X'0F' + USERID; LEID = X'0F' + USERID
Created
When user signs on, if single signon is being enforced; otherwise, not created
Usage and comments
Enforce resource name uniqueness when single signon specified
There is no recoverable information
Deleted
When user is no longer signed on; even if (Static NODE) User has significant status

MSNAME entry
Represents an MSC logical link as defined on MSNAME macro
Entry key = X'03' + MSNAME; LEID = X'01' + MSNAME
Created
During IMS initialization
Usage and comments
Enforce resource type consistency. Resource name uniqueness is not enforced for
MSNAMEs. They can be simultaneously active on multiple IMSs in the IMSplex.
There is no recoverable information.
Deleted
Never deleted; must delete (SETXCF FORCE) structure and create new one to delete
a MSNAME

APPC descriptor name


Represents an APPC descriptor name defined in DFS62DTx. This name is used by
applications on the CNHG call to direct output to an APPC destination. It is, therefore, one of
the message destination name types.
Entry key = X'0E' + descriptor name; LEID = X'01' + descriptor name
Created
During IMS initialization for each descriptor in DFS62DTx; also created when
descriptors are dynamically added using /START L62DESC x command.
Usage and comments
Enforce resource type consistency. Resource name uniqueness is not enforced for this
resource.
As each IMS is initialized, its IMSID is added to DATA1 or to DATA2 (DATA1 can
accommodate two IMSIDs).
When IMS terminates normally, it removes its IMSID from this entry; when IMS
terminates abnormally, a surviving IMS is informed (by XCF) and removes the failing
IMSs IMSID from the entry.
No recoverable information
Deleted
When all IMSIDs have been deleted from the resource entry
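The IMSID bookkeeping used by both CPI-C transaction entries and APPC descriptor entries (add your IMSID on initialization or first use, remove it on termination, last one out deletes the entry) can be sketched as follows. This is an illustrative model, not IMS code:

```python
def ims_initialized(entry, imsid):
    """Add an IMSID to a CPI-C transaction or APPC descriptor entry;
    the entry is created the first time it is needed."""
    if entry is None:
        entry = {"imsids": []}
    if imsid not in entry["imsids"]:
        entry["imsids"].append(imsid)
    return entry

def ims_terminated(entry, imsid):
    """Remove an IMSID on normal shutdown (or a surviving IMS, informed by
    XCF, removes a failed member's ID). When the last IMSID is gone, the
    entry itself is deleted (modeled here by returning None)."""
    if imsid in entry["imsids"]:
        entry["imsids"].remove(imsid)
    return None if not entry["imsids"] else entry
```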

14.8 STM in action


The following paragraphs show some of what happens as IMS joins an IMSplex, as users log
on and sign on, as significant status is created for those users, when a user session fails,
when that user signs back on to another IMS, when an IMS fails, when its users log on to
another IMS, when a failing IMS is restarted, and when an IMS leaves the IMSplex.


14.8.1 Before the first IMS joins the IMSplex


There are a number of definitional requirements before an IMSplex can be established.
Details of these requirements can be found in Setting up a CSL environment on page 290.
They include:
Define the resource structure and shared queues structures in the CFRM policy and
activate the policy
Define all the (new) CSL address spaces, including the JCL and PROCLIB members; the
name of the IMSplex must be defined to each
Update or create the CQS JCL and PROCLIB members to provide services for both RM
and IMS shared queues; the IMSplex name must be defined for the RM/CQS function
Update the IMS PROCLIB members to include CSL and shared queues parameters and
for VTAM generic resources (optional); be sure SGN is set correctly

Before any IMS joins the IMSplex, no resource structure will exist.

14.8.2 Start IMSplex address spaces


These include IMS, SCI, OM, RM, and CQS. SCI should be started first (on every OS/390
image), since every other address space will register with SCI to join the IMSplex. If SCI is not
started when IMS starts, IMS will put out a WTOR warning message and wait for a reply. If SCI
is not available to the other address spaces, they will retry six times and then abend.
When the first IMS joins the IMSplex, IMS will register with RM and RM will register with CQS.
CQS will connect to the resource structure, causing it to be created. At this time the IMSplex
global entries will be created, as well as the local entry for this first IMS. As each additional
component joins the IMSplex, their local entries will be created.

14.8.3 Log on from a static NODE


When a user logs on using a generic resource name, VTAM will check its VGR affinity table
and, if there is no affinity, route the logon request to one of the active IMSs in the VGR group.
The IMS that receives the logon request will check the resource structure for a NODE entry
and, if the NODE is not already there, or is there but not owned, it will accept the logon. If the
NODE entry exists and is owned by another IMS, the logon will be rejected. The logon will
also be rejected if any LTERM assigned to that NODE is already owned. This is true even if
only one of several LTERMs is owned by another IMS.
If this is a new logon, IMS will create the appropriate NODE, Static NODE User, and LTERM
entries. At this time the values for SRM and RCVYxxxx are set according to the system
default or the Logon Exit, and if this is a STSN device, STSN significant status will be set in
the Static NODE User entry. If signon information is provided with the logon request, and if
single signon is being enforced, then a Userid entry is also created. If signon is done later
using the /SIGN ON command, the entry is created (or not created) at that time.
VTAM sets the VGR affinity to this IMS and IMS tells VTAM who is to manage the VGR affinity
according to the SRM. See Table 14-1 on page 195 to see how this is determined.

14.8.4 Logon from an ETO NODE


Like with static NODEs, IMS will first make sure the NODE entry does not already exist, or if it
does, that it is not owned. If it does not exist, it will accept the logon and create a NODE entry
with itself as the owner. VGR affinity management will be set according to the SRM and
device type (see Table 14-1 on page 195 to see how this is determined). Little else is done
until the user signs on.

14.8.5 Signon from an ETO NODE


Most STM activity for ETO sessions takes place at signon time. At this time, the User and
LTERM entries are created to match the User control block structure in IMS. Nearly all
recoverable information about the NODE and the user are kept in this User entry. SRM and
RCVYxxxx values are set, and if this is a STSN device, STSN status is set.

14.8.6 Commands that change significant status


Command significant status on page 188 identified IMS commands that set command
significant status. When one of these commands is entered, then the NODE or (Static NODE)
User resource entry is updated accordingly. When LTERM status is changed, it is kept in the
corresponding LTERM segment in the (Static NODE) User entry. When command significant
status is reset (turned off) then the resource entry is updated to reflect this. For example, a
User may be put in MFSTEST mode by the /TEST MFS USER command. The /END
command takes this User out of MFSTEST mode. The User entry would be updated
accordingly and, if this is the last significant status, the resource entries would be deleted.
Sometimes a command may be entered for a resource which does not exist on the resource
structure. When it sets significant status, for example, a /STOP NODE command to a NODE
which does not exist, the resource will be created and the significant status set (for static
NODEs, the Static NODE User and LTERM entries are also created). This then becomes a
global status and will prevent that NODE from logging on anywhere in the IMSplex. When a
command resets (ends) significant status, if that resource has no other significant status and
is not currently active, it will be deleted.

14.8.7 Work which changes end-user significant status


End-user significant status on page 188 identified three types of end-user significant status.
There are several differences between the processing of command and end-user significant
status.
End-user significant status cannot be set when no resource entry exists. It is always the
result of end-user activity.
The installation has the choice about where (and if) the end-user significant status is kept.
See Status recovery mode (SRM) on page 189 and Status recoverability (RCVYxxxx)
on page 189 for this discussion. When end-user significant status is changed, the
appropriate (Static NODE) User entry is updated accordingly.
End-user significant status, if it exists, causes at least two updates to the resource
structure with every transaction.

For conversations, the conversational input message in-progress flag is set when the
transaction is entered and reset when the response is delivered.
For Fast Path, the Fast Path input in-progress flag is set when the Fast Path (EMH)
transaction is entered and reset when the response is delivered.
For STSN, the sequence numbers are updated for each input and output message.

14.8.8 Commands which change end-user status


Although no command can create end-user status, some commands can delete or change it.
For example, if an ETO User is in a conversation but is currently signed off, the User and
LTERM structure still exist in the resource structure. If SRM=GLOBAL, the /EXIT CONV
USER command will delete the conversational data from the User entry and, if there is no
other significant status, the User and LTERM entries will be deleted. Note that when
SRM=GLOBAL and the resource is not active, this command can be entered from any IMS. If
SRM=LOCAL, then the /EXIT command must be entered from the IMS which owns the
resource. But the same rule applies: when all significant status is gone, the entries are
deleted.

Chapter 14. Sysplex terminal management

203

14.8.9 Session termination with significant status (not IMS failure)


When a session terminates due to session failure, or because the user logs off (not because
of IMS failure), IMS examines that session's resource entries to see if any significant status
exists. If either command or end-user significant status exists, the resource entries will not be
deleted. If end-user significant status exists, then clearing of ownership depends on the SRM.
If SRM=GLOBAL (or NONE), then ownership will be cleared since everything needed to
recover the status is in the structure. In addition, IMS will move all locked messages for that
LTERM. This is done to allow the user to log on to another IMS and access those messages.
If SRM=LOCAL, then ownership is not cleared since this is the only IMS that knows how to
restore the end-user status. Locked messages remain locked.
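These termination rules reduce to a small decision table. The following is a hypothetical Python sketch, not an IMS interface; for brevity it folds command and end-user significant status into a single flag:

```python
def on_session_termination(srm, has_significant_status):
    # Hypothetical decision table for session failure or logoff (not
    # an IMS failure). Returns a tuple of:
    # (delete_entries, clear_ownership, unlock_messages)
    if not has_significant_status:
        return (True, True, False)   # nothing to recover: entries deleted
    if srm in ("GLOBAL", "NONE"):
        # everything needed to recover the status is in the structure:
        # release ownership and move locked messages for the LTERM so
        # the user can log on to another IMS and access them
        return (False, True, True)
    # SRM=LOCAL: only the owning IMS can restore the end-user status,
    # so ownership is kept and locked messages remain locked
    return (False, False, False)

# Session ends while the user holds recoverable global status:
assert on_session_termination("GLOBAL", True) == (False, True, True)
# SRM=LOCAL: ownership kept, locked messages stay locked:
assert on_session_termination("LOCAL", True) == (False, False, False)
```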

14.8.10 Logon from NODE which already exists in resource structure


When a user logs on using a generic resource name, if a VGR affinity exists, VTAM will route
that logon request to the IMS to which it had an affinity. Usually the VGR affinity will only
continue to exist after session termination if SRM=LOCAL and:
- It had significant end-user status when the session terminated, or
- IMS failed and the clean-up IMS didn't know whether it had significant status or not.

This is intended to route the user's next logon request to the IMS where its significant status is
known. However, that user may elect to log on directly to an active IMS, bypassing VTAM
generic resources. When this happens, the new IMS detects the ownership (RM affinity) in
the resource entry and rejects the logon. That user then has to make a choice:
- Wait for the failed IMS to restart.
  When the failed IMS is restarted (see 14.8.12, "IMS emergency restart" on page 205) and
  the user logs on, its significant status is restored and work continues.
- Log on with user data telling the Logon Exit to "steal" the NODE if it is owned by an
  inactive IMS. Refer to "IMS exits" on page 211 for more information about the Logon Exit.
  The ownership of that resource is changed and the user loses any end-user significant
  status that might have existed on the failed IMS. When the failed IMS is restarted, it will
  discover that it no longer owns the resource and will delete any local status.
When a user logs on and IMS finds a resource entry without an owner (must have had
significant status and not SRM=LOCAL), then the logon request is accepted, the ownership is
set to that IMS, and the significant status is recovered. All is as it should be.
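The logon decision described in this section might be modeled as follows. This is a hypothetical Python sketch; the real checks are internal to IMS and RM:

```python
def try_logon(owner, owner_active, requesting_ims, steal=False):
    # owner is None when no IMS owns the resource entry (ownership
    # was cleared at clean-up, or the entry holds only global status)
    if owner is None or owner == requesting_ims:
        return ("ACCEPT", requesting_ims)  # take ownership, recover status
    if not owner_active and steal:
        # the Logon Exit allowed the NODE to be stolen: end-user
        # status held by the failed owner is lost
        return ("ACCEPT", requesting_ims)
    return ("REJECT", owner)               # RM affinity to another IMS

assert try_logon(None, False, "IMS2") == ("ACCEPT", "IMS2")
assert try_logon("IMS1", False, "IMS2") == ("REJECT", "IMS1")
assert try_logon("IMS1", False, "IMS2", steal=True) == ("ACCEPT", "IMS2")
```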

14.8.11 IMS failure


When an IMS system fails, it has no opportunity to clean up its entries on the resource
structure. However, since all the IMSs in the IMSplex are members of the same IMSplex, SCI
will inform other surviving IMSs that one has failed. One of those IMSs will get ownership of
the failing IMS's DFSSTMLimsid entry and request, from the Resource Manager, a list of all
resources owned by the failing IMS. It will then proceed to clean up the resource structure.
If a resource has no significant status and SRM=GLOBAL or NONE, then the resource
entry is deleted.

204

IMS Version 8 Implementation Guide

If a resource has command significant status, or end-user significant status with
SRM=GLOBAL, then the resource entry is not deleted but ownership is released.
If a resource has SRM=LOCAL, the surviving IMS does not know whether or not that user
had significant end-user status being maintained by its local (failed) IMS, so it does not
delete the resource and it does not clear ownership. That user must wait for the failed IMS
to restart before logging back on. Note that in this case, the VGR affinity would have been
IMS-managed and therefore not deleted.
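These three rules amount to a small per-entry decision. A hypothetical Python rendering (illustration only, not an IMS interface):

```python
def clean_up_entry(srm, has_command_status, has_enduser_status):
    # What the surviving (clean-up) IMS does with one resource entry
    # owned by the failed IMS. Returns (delete_entry, clear_ownership).
    if srm == "LOCAL":
        # only the failed IMS knows whether local end-user status
        # exists: leave the entry and its ownership untouched
        return (False, False)
    if has_command_status or has_enduser_status:
        return (False, True)   # keep the entry, release ownership
    return (True, True)        # nothing significant: delete the entry

assert clean_up_entry("GLOBAL", False, False) == (True, True)
assert clean_up_entry("GLOBAL", False, True) == (False, True)
assert clean_up_entry("LOCAL", False, True) == (False, False)
```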

The surviving IMS also does some clean-up work on the shared queues structure. For all
resources which were not SRM=LOCAL, IMS requests CQS to move all output messages
which were locked (on the LOCKQ) by the failing IMS back to the LTERM Ready Queue
(LRQ). These messages would include:
Output messages which were in the process of being delivered by the failed IMS but which
had not completed (Q1 and Q4 messages)
Active SPA+MSG for the last conversational output message for users who were between
iterations of a conversation (Q5 messages).
SPA+MSG for every held conversation (Q5 messages)

These messages may join output messages which were on the LRQ at the time of the failure:
Responses to transactions which had not yet been retrieved by the failing IMS (Q1
messages)
Unsolicited messages not in response to an input transaction (Q4 messages)

Note that input messages (transactions) are not unlocked. This is because only the failed IMS
knows the status of that message. It may have been in-flight, or it may have reached sync point
but IMS failed before it was able to delete it from the LOCKQ. The status of that message will not
be known until the failed IMS is emergency restarted.
At the end of this process, all output messages locked by the failed IMS have been returned
to the LTERM Ready Queue, including between iteration conversations and held
conversations. Output messages on the LRQ with destinations previously active on the failed
IMS are still there. Input messages from the failed IMS which had not yet been scheduled are
still on the Transaction Ready Queue and can be scheduled anywhere. Responses to these
messages will be put on the LRQ. This concludes the work of the clean-up IMS.
Sometimes an IMS may fail and there is no surviving IMS to clean up. In this case, of course,
no clean-up work is done until the next IMS restart, either of the failed IMS or another. When
restart completes, that IMS will query RM for any IMSs which need to be cleaned up (see
"DFSSTMLimsid" on page 198 to see how RM knows) and then perform the clean-up.

14.8.12 IMS emergency restart


While IMS was executing (before the failure) it knew (and logged) the status of each of its
logged on and signed on users. At restart, it recovers this information from log records. For
those which had SRM=GLOBAL or NONE, it will delete any status because it was either not
recoverable or it was (and perhaps still is) on the structure. But if SRM=LOCAL, then it must
decide what to do. It knows that there should be resource entries on the structure since the
clean-up IMS would not have deleted them if SRM=LOCAL. But it doesn't know if that user
was able to successfully log on to another IMS and continue processing. So, it checks the
resource structure for any entries and, if they still exist, who the owner is. If that IMS is still the
owner, then the user must not have logged on to another IMS and the local status is still valid.
It is not deleted. If the resource is gone from the structure, or if it is there but has a different
owner, or no owner, then its local status is no longer valid and will be deleted. In all cases, the
restarting IMS does not change the resource entries.
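The restart-time check reduces to the following. This is a hypothetical Python sketch of the logic, assuming restart has already rebuilt local status from the log records:

```python
def keep_local_status(srm, entry_exists, owner, my_id):
    # Decide, during emergency restart, whether log-recovered local
    # status is still valid. The restarting IMS never changes the
    # resource entry itself.
    if srm in ("GLOBAL", "NONE"):
        return False  # delete: it is on the structure, or not recoverable
    if entry_exists and owner == my_id:
        return True   # still the owner: user went nowhere, status valid
    return False      # entry gone, unowned, or stolen: delete local status

assert keep_local_status("LOCAL", True, "IMS1", "IMS1") is True
assert keep_local_status("LOCAL", True, "IMS2", "IMS1") is False
assert keep_local_status("GLOBAL", True, "IMS1", "IMS1") is False
```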

14.8.13 Recovering significant status


Recovery of command significant status is relatively easy. Just set the appropriate status in
the local resource control blocks. Recovering end-user significant status globally is somewhat
more complex.

14.8.14 Recovering conversations


When a conversational transaction is entered, that user acquires end-user significant status.
Where that status is kept depends on the SRM value. If SRM=GLOBAL, then the status is
kept in the resource structure. If SRM=LOCAL, the status is kept only in local control blocks.
If SRM=NONE, then the status is kept only in the local control blocks and is deleted when the
session terminates. In all cases, the value of SRM is kept in the resource structure entry.
Key to being able to recover a conversation is knowing that a user was in conversational
mode and what the status of that conversation was at the time of an IMS failure or session
termination. The primary pieces of information kept about a user's conversational status are
the conversation ID and transaction code for the active conversation and all held
conversations. Also a flag is kept that indicates whether the user has a conversational
transaction in progress (as opposed to being between iterations). This flag is important for
knowing whether and how the conversation can be recovered on another IMS. We will call
this the conversational input in-progress (CONV-IP) flag. It is set when a conversational
transaction is queued, and is reset when the response is delivered.

Example of conversation recovery


The following example illustrates the events that might occur in establishing and recovering a
conversation. Assume that there are two IMSs - IMS1 and IMS2. IMS0 is the GRSNAME for
both IMSs. TRANC is defined as a conversational transaction to both IMSs. Both IMSs have
static definitions for NODEA and LTERMA.

Normal operation
User logs on and signs on successfully to IMS1 (logon IMS0)
NODE, Static NODE User, and LTERM entries created; owner=IMS1
SRM=GLOBAL; RCVYCONV=YES
VGR affinity set to IMS1; VTAM-managed affinity
User enters conversational transaction (TRANC)
IMS1 creates SPA and puts SPA+MSG on Transaction Ready Queue (TRQ)
IMS1 updates Static NODE User entry
Conversation ID
Transaction code
CONV-IP ON
TRANC is scheduled and processed by IMS2 (or any IMS in SQGROUP)
Response (SPA+MSG) is put on LTERM Ready Queue (LRQ) and IMS1 is notified
IMS1 retrieves response from LRQ and sends to LTERMA (NODEA)
SPA+MSG moved to LOCKQ
CONV-IP flag turned OFF when NODEA acknowledges receipt
Static NODE User still has significant status
SPA+MSG still on LOCKQ
User enters another transaction and process is repeated
Static NODE User entry gets updated for each input and output (CONV-IP flag)

IMS failure with terminal/user in conversational mode


When an IMS fails, the conversation could be in one of several states. Some of these
represent conversational input in progress, and one is between iterations. Which of
these states the user is in can be determined from the (Static NODE) User entry in the
resource structure and the messages on the shared queues structure.
- User is between iterations of conversation
  - Last SPA+MSG are on LOCKQ
  - CONV-IP flag is OFF
- Transaction was processing in IMS when IMS failed
  - Input SPA+MSG are on LOCKQ
  - No output messages for this LTERM on LOCKQ
  - CONV-IP flag is ON
- IMS was in the process of delivering the response
  - Transaction has committed
  - Input SPA+MSG has been deleted
  - Output SPA+MSG is on LOCKQ
  - CONV-IP flag is ON
- Last input is still on TRQ, last input is currently processing on IMS2, or response to last
  input is available on LRQ; in each case, response to last input will eventually get to LRQ
  - Response SPA+MSG are on LRQ
  - CONV-IP flag is ON

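The states above can be classified from the flag and the queue contents. A hypothetical Python sketch follows; the state labels are invented for illustration:

```python
def conversation_state(conv_ip_on, input_on_lockq, output_on_lockq,
                       response_on_lrq):
    # Classify the conversation from the (Static NODE) User entry flag
    # and the shared queues contents at the time of the IMS failure.
    if not conv_ip_on:
        return "BETWEEN ITERATIONS"   # response delivered, SPA+MSG locked
    if input_on_lockq and not output_on_lockq:
        return "IN FLIGHT"            # transaction was processing at failure
    if output_on_lockq:
        return "DELIVERING RESPONSE"  # committed; output was being sent
    if response_on_lrq:
        return "RESPONSE AVAILABLE"   # response queued, not yet retrieved
    return "AWAITING RESPONSE"        # input on TRQ or running elsewhere

assert conversation_state(False, False, True, False) == "BETWEEN ITERATIONS"
assert conversation_state(True, True, False, False) == "IN FLIGHT"
assert conversation_state(True, False, True, False) == "DELIVERING RESPONSE"
```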
Continuing the example between iterations of conversations


Assume that, in our example, the user is between iterations of a conversation. That is, the
response to the last input message has been sent and the CONV-IP flag is OFF. The
SPA+MSG are on the LOCKQ. This is probably the most likely status of the conversation.
When IMS1 fails:
VTAM deletes VGR affinity
IMS2 is informed by SCI that IMS1 has failed
IMS2 queries RM2 for all resources owned by IMS1 and cleans up
Finds NODEA/Static NODE UserA/LTERMA entries with ACTIVE conversational
status and SRM=GLOBAL
Conversational status in structure
IMS2 cleans up for IMS1
Resources not deleted
Resource ownership cleared
IMS2 queries CQS for all output messages on LOCKQ owned by IMS1
Finds SPA+MSG for last output on LOCKQ
Moves SPA+MSG from LOCKQ back to LRQ for LTERMA
IMS2 is done for now

When the user logs back on and signs on (using IMS0), VTAM routes the logon request to IMS2:
IMS2 checks resource structure and finds entries for NODEA, Static NODE UserA, and
LTERMA
Conversation active; no ownership
CONV-IP flag is OFF
Logon accepted; user not in response mode
Response mode not recoverable for full function transactions
IMS2 finds SPA+MSG on LRQ; moves it back to LOCKQ (locked by IMS2); also saves
locally on Q5
User must retrieve last output message to refresh screen and continue conversation
/HOLD
Conversation held; Static NODE UserA entry updated
DFS999I HELD CONVERSATION ID IS 0001
/RELEASE CONVERSATION 0001
Conversation released; Static NODE UserA updated


Last SPA+MSG retrieved from LOCKQ and sent to NODEA


Screen refreshed; conversation continues

Continuing the example - conversational input in progress


The CONV-IP flag may be on for several reasons as documented above. When the user logs
on to IMS2, IMS2 queries RM for the status of those resources. If it accepts the logon
(resources not owned), it will register interest in the logical terminal LTERMA.
Resource entries show
CONV-IP flag is ON
Conversation active; no ownership
Logon accepted
Register interest in LTERMA
IMS2 informed if SPA+MSG on LRQ
If SPA+MSG on LRQ
Must be response to last input
Leave on LRQ; also keep locally on Q1
Wait for user to request message (for example, PA1)
If no SPA+MSG on LRQ
Transaction has not executed (still on TRQ - probably not the case), or
Transaction was in-progress in IMS1 (most likely case)
User must wait for response
If no response soon, user has choice
Keep trying until response arrives
May have to wait for IMS1 to restart
/EXIT conversation and do other work
Even though conversation is exited, it will still be scheduled and processed. When
the output is queued and retrieved by an interested IMS, the Conversational
Abnormal Termination Exit (DFSCONE0) will be driven to handle it

14.8.15 Recovering Fast Path


When SRM=GLOBAL and RCVYFP=YES, Fast Path response mode can be recovered on
any IMS in the IMSplex. This means that, if a user is in Fast Path response mode when IMS
fails, that user can log on to any IMS in the IMSplex. When logon/signon is complete, the user
session will be put into response mode. When the output message becomes available, it will
be delivered to the user. A detailed example of how this is done is not shown here, but it is
similar to recovering a conversation. There is a Fast Path input message in progress (FP-IP)
flag that performs the same function that the CONV-IP flag performed.
There is one significant difference, however. When SRM=GLOBAL but the transaction is
processed locally without going through the EMHQ (Sysplex Processing Code = Local-Only
or Local-First), then its SRM is temporarily changed to LOCAL. When this occurs, if IMS
fails, ownership of the resource will not be released and the user must wait for the failed IMS
to restart.
Note also that, when RCVYFP=NO, all Fast Path output messages are deleted if IMS or a
session fails. They are not recoverable and will be deleted when discovered on the shared
queue.

14.8.16 Recovering STSN sequence numbers


Recovery of STSN sequence numbers is much simpler than recovery of conversations or
Fast Path. The input and output STSN sequence numbers are maintained in the (Static
NODE) User entry and, when logging on to another IMS, can be used to resynchronize the
message traffic with the STSN device. The intent of STSN is to resolve the indoubt message.

For example, if IMS sent a message to a terminal but did not get an acknowledgment, IMS
does not know whether the message was received or not. By informing the STSN device of
the sequence number of the last output message, the device can ask IMS to resend it, or it
can tell IMS to dequeue it. A similar function applies to an input message when the STSN
device does not know if it was received by IMS.
By supporting STSN recovery, the user can log on to another IMS. Using the sequence
numbers in the resource entry, that IMS can resynchronize its input and output messages
with the STSN device.
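The resynchronization itself is a simple comparison. The following hypothetical Python sketch illustrates the output side of the resolution described above:

```python
def resolve_in_doubt_output(ims_last_sent, device_last_received):
    # Hypothetical sketch of STSN resolution: compare the output
    # sequence number recovered from the (Static NODE) User entry
    # with what the device last acknowledged receiving.
    if device_last_received < ims_last_sent:
        return "RESEND"   # the message never arrived: ask IMS to resend it
    return "DEQUEUE"      # it arrived: tell IMS to dequeue it

assert resolve_in_doubt_output(5, 4) == "RESEND"
assert resolve_in_doubt_output(5, 5) == "DEQUEUE"
```

An analogous comparison on the input sequence numbers resolves an in-doubt input message.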

14.8.17 Summary of STM in action


The above descriptions and examples, although quite detailed, are meant to give the reader
some idea of how sysplex terminal management works, and the impact that various actions
and failures can have. It is not complete. When migrating to a CSL environment where STM is
enabled, it is recommended that the installation develop extensive tests to identify all possible
scenarios, and adjust end-user procedures accordingly.

14.9 Resource structure


Most resource structure management functions are done by the Resource Manager, CQS, or
the system. The installation's primary responsibility is to define the structure in the CFRM
policy, occasionally (perhaps) issuing some structure commands to alter or rebuild the
structure, and then manage its size and location.

14.9.1 Defining the resource structure


Like all Coupling Facility structures, the resource structure must be defined in the CFRM
policy and that policy must be activated. When defining the structure, the user must consider:
SIZE, INITSIZE, MINSIZE
  Sets the maximum, initial, and minimum size for the structure. Appendix B, "Resource
  structure sizing" on page 315 describes a technique for sizing a resource structure using
  the CFSIZER tool on the web at URL:
  http://www.ibm.com/servers/eserver/zseries/cfsizer/ims.html
DUPLEX(ALLOWED | ENABLED | DISABLED)
  Determines whether the structure is to be duplexed
ALLOWAUTOALT(YES | NO)
  Determines whether the system can alter the size and entry-to-element ratio of the
  structure
FULLTHRESHOLD(80 | nn)
  Sets the percent full which will invoke autoalter
REBUILDPERCENT(nn)
  Determines the percentage of lost connectivity which will invoke a system-initiated rebuild
PREFLIST(cfnn cfnn cfnn)
  Identifies candidate coupling facilities where the structure can be allocated
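Pulled together, a structure definition in the CFRM policy (input to the IXCMIAPU administrative data utility) might look like the following sketch. The structure name and all values here are illustrative only; use your installation's naming and sizing conventions:

```
STRUCTURE NAME(IMSRSRC01)
          SIZE(16384)
          INITSIZE(8192)
          MINSIZE(4096)
          ALLOWAUTOALT(YES)
          FULLTHRESHOLD(80)
          DUPLEX(ALLOWED)
          REBUILDPERCENT(1)
          PREFLIST(CF01,CF02)
```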


14.9.2 Managing the resource structure


The resource structure and CQS support all the new structure management enhancements
described in Chapter 10, "Coupling Facility structure management" on page 137, and the
reader should look there to understand these enhancements. Briefly, they include:
- Alter and autoalter support
- Structure full threshold monitoring
- System managed structure rebuild
- System managed duplexing

14.9.3 Structure failure


Structure failure can be the result of a complete Coupling Facility failure, or a problem with an
individual structure itself. Loss of connectivity to a structure is not considered a structure
failure, even when all connectors lose connectivity.
When a resource structure fails, it must be repopulated. Because structure checkpoints are
not taken (no equivalent to the shared queues structure recovery data set), and changes to
the structure are not logged (no equivalent to the shared queues logstream), a failed resource
structure cannot be recovered.

Structure repopulation
Structure repopulation is the responsibility of CQS, RM, and IMS. When any structure fails, it
is the connector that first discovers it.
When CQS discovers that a resource structure has failed, it begins the repopulation process
by reallocating the structure and then creating any CQS global and local entries. Currently,
there are no CQS entries. It then notifies all of the RMs (using SCI services) to begin their
own repopulation. At this time, a message is written indicating that structure repopulation has
been requested:
CQS210I STRUCTURE strname REPOPULATION REQUESTED

Each Resource Manager will repopulate the structure with its own information. RM entries
include a RM Global entry (CSLRGBL) which contains the IMSplex name, local RM entries
(CSLRLrmid) which contain local RM information, such as the RM version number, and a
resource type table entry (CSLRRTYP) which identifies the supported resource types and
names types. This is done by each RM, but only the first will add the global entries. As each
RM completes, it will issue the message:
CSL2020I STRUCTURE strname REPOPULATION SUCCEEDED

Note that this does not mean that structure repopulation has completed. It only means that
RM has completed its piece of it. RM then directs each of its registered IMSs to repopulate the
structure with whatever local information they have.
Each active IMS will then add its own entries. This is not a robust process. There are several
cases where resource entries will not be repopulated. If it is important to the user that the
resource structure be immune from structure failure, then the structure should be duplexed.
Causes for resource entries not being repopulated include:
One or more IMSs are not active

If an IMS is not active, then it cannot repopulate. IMS will not attempt to repopulate if it is
started later.
A resource became inactive with command significant status or end-user significant status
with SRM=GLOBAL


In this case, when the resource became inactive, the local IMS deleted all of its knowledge
about the resource. It was available only in the resource entry.
Global online change is never repopulated

If a structure fails during a global process, that process is not repopulated to the structure.
In the case of global online change, each IMS must terminate the online change process
and then reinitiate it, either at the PREPARE phase or COMMIT phase, depending on
whether or not all IMSs had completed commit phase 2. See Global online change on
page 215 for a description of how global online change works.
After the repopulation is completed, IMS issues the following message:
DFS4450 RESOURCE STRUCTURE REPOPULATION COMPLETE

As noted above, not all resource information is repopulated. If a user with SRM=GLOBAL
were in a conversation, and the session were terminated while still in that conversation, then
that users status would not be repopulated and the conversation would, in effect, not exist.
When the SPA+MSG is found on the LRQ, it will be passed to DFSCONE0 for processing.

14.9.4 Loss of connectivity to a structure


Loss of connectivity is not the same as structure failure, even though all CQSs may have lost
connectivity. Loss of connectivity by all CQSs is most likely to happen when there is only one
CQS active and its link fails. If this does happen, repopulation is not invoked. The only
recourse is to try to fix the problem: perhaps using system managed rebuild to move the
structure to a Coupling Facility where the CQS(s) do have connectivity, or fixing the CF link.

14.9.5 SCI, RM, CQS, or structure failure


SCI, RM, CQS, or a structure failure will have the following impact until they are restarted (or
repopulated):
- Any attempt to update the structure will fail
- Logons and signons will be rejected
- Commands which affect global status will be rejected
- Normal activities (for example, conversations, STSN, Fast Path) will continue with the
  status maintained only locally. When the failed components are again available, the status
  will be updated in the resource entry (if necessary).
- Logoffs and signoffs will wait until all components are available

Note: Automatic Restart Manager (ARM) is supported for these address spaces and is
recommended to minimize the duration of any outage.

14.10 Miscellaneous other considerations


There are a number of other considerations when enabling sysplex terminal management.

14.10.1 IMS exits


There are three IMS exits which must be considered when using STM. The reader should
refer to IMS Version 8: Customization Guide, SC27-1294 for a complete description of these
exits.

Logon Exit (DFSLGNX0)


The Logon Exit, if it exists, is driven whenever a terminal logs on to IMS. With STM enabled,
the Logon Exit has some additional capabilities when any static or dynamic STSN terminal
logs on. It is able to:
- Override the SRM and RCVYxxxx defaults for this logon session. This applies to all static
  and dynamic STSN terminals. It does not apply to dynamic non-STSN terminals.
- Allow NODEs which are owned by an inactive IMS to be "stolen". This function applies to
  all terminals for which this exit is driven. It should only be necessary when SRM=LOCAL
  and the user does not wish to wait for the failed IMS to be restarted. User data could be
  included on the logon request which could then be used by the exit as a signal that the
  user really means it, that is, that the user wants the NODE to be stolen. When a NODE
  is stolen, the logon is accepted, ownership is reset to the new IMS, and any end-user
  status will be lost. Command significant status will be restored on the new IMS.

Signon Exit (DFSSGNX0)


The Signon Exit, if it exists, is driven when a user signs on from a dynamic (ETO) terminal. It
has capabilities similar to the Logon Exit, except that it only applies to the User signon. It can:
- Override the SRM and RCVYxxxx defaults for this User. This function applies only to
  dynamic non-STSN terminals. For all others, these values can only be overridden in the
  Logon Exit.
- Allow the User resource to be stolen if it is owned by an inactive IMS. As with the Logon
  Exit, all end-user status will be lost. Command significant status will be restored.

Output Creation Exit (DFSINSX0)


When a message arrives in IMS, and IMS does not know the destination (FINDDEST fails),
then that IMS will query RM to determine whether that destination is already defined by
another IMS. If RM finds the resource, then IMS will dynamically create the control
blocks (requires ETO for LTERM creation) and queue the message accordingly.
However, without further knowledge, IMS does not know the characteristics of the resource,
and so chooses the defaults. For example, if the destination were a transaction, the IMS
would queue it as non-response mode, non-conversational, non-Fast Path, and so on. This
could cause problems if the default characteristics are not correct.
When a message with an unknown local destination is received, IMS will query RM to see if it
exists in the RM structure. If the structure has this message destination registered as a
transaction, then the Output Creation Exit (if it exists) will be called. The exit can allow the
message to be queued as a transaction with the IMS default characteristics, it can change the
characteristics of the transaction (for example, conversational, Fast Path, etc.), or it can reject
the message, in which case IMS will discard it as an unknown destination. The user should
consider writing an Output Creation Exit to do just this. It is unlikely that the default
characteristics of a message are correct. To avoid queuing messages invalidly just because
some other IMS has them defined, this exit should just reject all messages unless it has
explicit knowledge of the characteristic of the destination.
If RM identifies the message destination as an LTERM, then the exit will not be called and
IMS will queue the message according to its definitions in the resource structure.

14.10.2 Global callable services


Callable services are services available to most IMS exits, including the Logon Exit and
Signon Exit. There is one change that applies when STM is in effect. When an exit is using
callable control block services, then a new default global option is in effect. When using the
FIND or SCAN option, if the control block is not active in the local system, then RM will be
queried. If the resource is found on the structure, then the address returned will be that of
hidden control blocks with global information contained in the structure. These are mapped
the same as the real control blocks, but contain only that information known to the structure. If
the resource is active on another IMS, that IMS is NOT queried for more information. One
example of the use of this capability is for a signon exit. The exit may issue a FIND or SCAN
for an LTERM name for purposes of assigning a name to an ETO user during the signon
process. If that LTERM is registered anywhere in the IMSplex, the exit will find it and can use
a different name to build a CNT control block.
Note: The default is global. If the user wants only to FIND or SCAN local control blocks,
the exit must be changed to set a flag in the function specific parameter list. This is
documented in IMS Version 8: Customization Guide, SC27-1294.

14.10.3 Extended Recovery Facility (XRF) considerations


There are several considerations when running with Extended Recovery Facility (XRF) and
STM.
Both the active and alternate IMSs must have a local SCI address space
The owner of a resource is the RSENAME, not the IMSID. This is so that if the alternate
has to take over, it will own those resources owned by the failing IMS.
XRF and VTAM generic resources are mutually exclusive, so only the RM affinity will force
a user to log back on to the right IMS.
Terminal/user status recovery depends on the terminal class

Class 1 terminals
A backup session is maintained with a class 1 terminal. When the alternate takes over,
the session is automatically reestablished with the new active. Any session status that
existed on the old IMS is determined from the log records - not from the structure. The
SRM option only applies if a session is terminated - not if IMS fails. If global and local
status differ, then local status prevails. And, even if SRM=NONE, terminal/user status
is recovered (based on log records, not on the resource entry).
Class 2 terminals
For class 2 terminals, the new active will reacquire the session (OPNDST). In this case,
status recovery is determined by the SRM value. Whatever is in the resource entries
prevails. If SRM=NONE, then the status will be deleted.
Class 3 terminals
Class 3 terminals are treated like any terminal in a non-XRF environment. Ownership
(RM affinity) is released only if SRM=GLOBAL or NONE. Status is recovered globally
or locally, depending on SRM. If SRM=LOCAL, the user must log on to the new active.

14.10.4 Rapid Network Reconnect (RNR) considerations


Rapid Network Reconnect (RNR) is intended to hold a user's VTAM session information in a
data space (SNPS) or the Coupling Facility (MNPS) until that IMS is restarted, opens its
VTAM ACB, and issues /START DC (logons accepted). At this time, the user is back in
session with the original IMS (assuming RNR=ARNR). Any attempt to log on to another IMS
would fail, since that NODE is already in session (discuss this with your VTAM expert to see
how to get around this). This defeats the purpose of STM, since the user is always logging
back on to the same IMS. So it is unlikely that RNR would be used in conjunction with STM, at
least not with SRM=GLOBAL.


14.11 Summary of sysplex terminal management


Sysplex terminal management is a complex topic which will impact the way in which IMS
systems programming, operations, and end-users view the IMSplex. While much of it just
happens without specific user intervention or definition, it can be very confusing when, for
example, a user's logon request is rejected because that user has RM affinity to another
IMS. But its objectives have been met, and the infrastructure is there to continue building on
those accomplishments:
- Resource type consistency
- Resource name uniqueness
- Resource status recovery
- Global callable services

Chapter 15. Global online change


In this chapter we introduce the new global online change function for Version 8. We discuss
the following topics:
- Local and global online change
- Implementing global online change
- Global online change processing
- Controlling and displaying online change status
- Migration and fallback issues

Copyright IBM Corp. 2002. All rights reserved.


15.1 Online change


It is possible to make system definition changes without shutting down the online system. A
request to add an application or to modify the current set of programs, transactions, and
database usage need not force a complete system definition and an IMS restart, with
possible interruption for the online users. You can examine the request to see if an online
change can be made. If the request does not involve changes to the IMS network or the use
of static terminals as defined in the current IMS system definition, you can arrange for the
changes to be made while the IMS system is executing online.
Table 15-1 lists the allowable changes to various resources that can be made via an
online change. The list is based on the Stage 1 system definition macros.
Table 15-1 System defined resources and permissible online changes

System definition macro   Permissible online changes
APPLCTN                   Add a PSB (application) and its attributes
                          Change attributes
                          Delete a PSB
DATABASE                  Add a database and its attributes
                          Change attributes
                          Delete a database
RTCODE                    Add a routing code and inquiry attributes
                          Delete a routing code
TRANSACT                  Add a transaction and its attributes
                          Change attributes
                          Delete a transaction

15.1.1 Review of local online change


IMS online change has been available with previous IMS releases. It allows changes to be
made to IMS resources without bringing down the IMS system. It is invoked and controlled
with the various /MODIFY commands:
/MODIFY PREPARE
/MODIFY COMMIT
/MODIFY ABORT
/DISPLAY MODIFY

Online change switches between an active library and an inactive library that contains the
changes to be implemented. It applies to the following libraries:

IMSACBA and IMSACBB


MODBLKSA and MODBLKSB
MATRIXA and MATRIXB
FORMATA and FORMATB

The libraries in use by the running IMS (active libraries) are recorded in the MODSTAT data set.
In an IMS Parallel Sysplex environment, online change poses some operational problems.
The online change must be coordinated between the multiple systems in the sysplex.


The IMS resources are generated with the appropriate utility process:

DBDGEN, PSBGEN, and ACBGEN


MODBLKS system generation
MFS utility
Security generation using the security maintenance utility (SMU)

Figure 15-1 shows the preparation of resources for the online change process. The
staging libraries are updated with the changed resources.

Figure 15-1 Online change preparation

Before the online change may be done, the updated staging libraries must be copied to the
inactive libraries. This is done with the online change copy utility (DFSUOCU0). The utility
reads the MODSTAT data set to determine the inactive libraries to be updated. The
changed resources are then copied to the inactive library.
Figure 15-2 shows the staged changes in a Parallel Sysplex. After the changes have been
staged to the inactive IMS libraries, a /MODIFY PREPARE is issued for IMS1, to prepare the
changes for implementation in the online system. A /DISPLAY MODIFY command is issued
to display work in progress for resources to be changed or deleted. If there is no work in
progress, a subsequent /MODIFY COMMIT on IMS1 completes the online change. The
inactive and active libraries are switched, the MODSTAT data set is updated to indicate the
new active libraries, and processing continues.


Figure 15-2 Manually coordinated online change in a sysplex environment

To coordinate the online change with IMS2 in this sysplex environment, a /MODIFY
PREPARE, /DISPLAY MODIFY, and /MODIFY COMMIT are issued on IMS2. Unfortunately
the online change can fail and leave you with the IMS systems in the example using different
libraries and therefore different resources as shown in Figure 15-3.


Figure 15-3 Inconsistent libraries in a sysplex environment

This leaves you with two options:


1. Correct the problem with IMS2, and re-issue the online change command.
2. Backout the successful online change that was done to IMS1.
As you can see, trying to successfully coordinate online change in a data sharing
environment can be cumbersome and problematic.
IMS Version 8 offers an alternative to the traditional online change: global online change,
which uses the new Operations Manager to process commands and the new Resource
Manager to coordinate the online change across the IMSplex.

15.1.2 Overview of global online change


The processing of global online change is similar to that of local online change. The resource
preparation and staging is the same as in the past with local online change. For IMS Version
8, modifications have been made to coordinate the change across multiple IMS systems.
The online change status (OLCSTAT) data set is shared by all of the IMS systems. Its
function is similar to that of the MODSTAT data set with local online change. OLCSTAT
contains the DDNAMEs of the active libraries. An active library is either an A or B library.
For example, the active ACBLIB is either IMSACBA or IMSACBB. We will see later that
OLCSTAT data set also contains some other information.
With global online change, typically all of the IMS systems share the same libraries. They
each use the same DDNAME. For example, if one IMS is using its IMSACBA DD for ACBLIB,
all other IMSs will also be using this DD for ACBLIB. We will see later that there is an option to
use different data sets.
Global online change is invoked by IMSplex commands, such as the INITIATE OLC
command.

15.2 Setting up the global online change


The following topics discuss how to implement and manage the global online change feature
included with IMS Version 8. Global online change is activated for an IMS when
OLC=GLOBAL is specified in the DFSCGxxx member which it is using.
The following parameters in DFSCGxxx PROCLIB member are used for global online
change:
OLC=        LOCAL | GLOBAL. GLOBAL indicates that global online change is
            enabled.

OLCSTAT=    Data set name. Required with OLC=GLOBAL, and it must be the
            same for all IMSs in the IMSplex.

NORSCCC=    Data set name consistency checking is not done for the specified
            libraries (ACBLIB, FORMAT, MODBLKS). Specifying MODBLKS also
            turns off checking for MATRIX.

If OLC=GLOBAL is specified, an OLCSTAT specification is required. OLCSTAT is the data set
name for the online change status data set. All IMSs in an IMSplex must define the same
physical OLCSTAT data set.
If a resource structure is defined to the IMSplex, IMS ensures that the OLCSTAT data set
names are consistent. If they are not consistent, IMS initialization fails.
The NORSCCC parameter is used to specify that consistency checking will not be done for
some libraries. Consistency checking verifies that all IMS systems are using the same data
set names for their libraries affected by global online change. Up to three values may be
specified for NORSCCC. ACBLIB indicates that ACBLIB data set names are not checked for
consistency. FORMAT indicates that the FORMAT data set names are not checked for
consistency. MODBLKS indicates that MODBLKS and MATRIX data set names are not
checked for consistency. There is no ALL value for NORSCCC. If you want to turn off
consistency checking for all of the libraries, specify:
NORSCCC=(ACBLIB,FORMAT,MODBLKS)


If a resource structure is defined, the default is that resource consistency checking is
performed for all data sets. Resource consistency checking is optional, but as it is the default,
you have to explicitly disable it if you don't want it to happen.
Consistency checking creates a single point of failure. If a data set is lost, it is lost for all
members of the IMSplex using this capability. An advantage of consistency checking is that
only one execution of the online change copy utility is required to copy the staging data set to
the inactive data set. It also ensures that the same data sets are used for all IMS systems in
the IMSplex.
Turning off consistency checking creates some operational exposures. If the data sets used
by each of the IMSs differ, unexpected and unwanted results could occur. To avoid this, the
online change copy utility must be executed multiple times, once for each IMS system. Each
time the same input data set must be used but the output data set must be different.
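As a sketch of the multiple-run requirement just described: with consistency checking turned off, each IMS keeps its own inactive library, so the copy utility runs once per IMS with the same input but a different output. The OLCUTL invocation style is taken from 15.2.7, but the step names, SYS2 values, and data set names here are illustrative assumptions, not from the source.

```jcl
//* Hypothetical sketch: one copy utility run per IMS when NORSCCC is
//* in effect. Same staging input (IN=S) each time; the IMSACBB DD is
//* overridden so each run writes to that IMS's own inactive ACBLIB.
//COPY1     EXEC OLCUTL,TYPE=ACB,IN=S,OUT=G,SYS2=IM1A.
//S.IMSACBB DD DSN=IMSPSA.IM1A.ACBLIBB,DISP=SHR
//*
//COPY2     EXEC OLCUTL,TYPE=ACB,IN=S,OUT=G,SYS2=IM2A.
//S.IMSACBB DD DSN=IMSPSA.IM2A.ACBLIBB,DISP=SHR
```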
Example 15-1 shows a DFSCGxxx member used to enable global online change.
Example 15-1 DFSCGxxx for global online change
*--------------------------------------------------------------------*
* IMS COMMON SERVICE LAYER PROCLIB MEMBER                             *
*--------------------------------------------------------------------*
CMDSEC=N,                         /* NO CMD AUTHORIZATION CHECKING */
IMSPLEX=PLEX1,                    /* IMSPLEX NAME                  */
OLC=GLOBAL,                       /* GLOBAL ONLINE CHANGE          */
OLCSTAT=IMSPSA.IMA0.OLCSTAT
* NORSCCC=(ACBLIB,FORMAT,MODBLKS)
*--------------------------------------------------------------------*
* END OF MEMBER DFSCG000                                              *
*--------------------------------------------------------------------*

15.2.1 Preparation for global online change


Preparation for global online change is the same as that of local online change. The IMS
resources are generated with the appropriate utility process:

DBDGEN, PSBGEN, and ACBGEN


MODBLKS system generation
MFS utility
Security generation using the security maintenance utility (SMU)

Before the online change may be done, the updated staging libraries must be copied to the
inactive libraries. This is done with the online change copy utility (DFSUOCU0). It is the
same utility that is used with local online change. It is available with previous IMS releases,
and has been enhanced to support global online change. It reads OLCSTAT to determine
inactive libraries.

15.2.2 Overview of execution


Global online change is invoked using the new IMSplex commands.
Online change is started by invoking the INIT OLC PHASE(PREPARE) command. This is similar
to the /MODIFY PREPARE command used with local online change. It also coordinates the
prepare processing across all of the IMSs.


After the prepare has completed, the actual changes are invoked with the INIT OLC
PHASE(COMMIT) command. This is similar to the /MODIFY COMMIT command used with
local online change.
An online change may be aborted by issuing a TERMINATE OLC command. It coordinates
the online change abort phase across all the IMSs in the IMSplex. TERMINATE OLC is
similar to the /MODIFY ABORT command used with local online change.
A QUERY MEMBER command may be used to show online change status of IMSs. It reports
the current status of each IMS participating in the online change.
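The sequence just described might look like the following when entered from a SPOC session. This is a sketch, not from the source: the TYPE(ALL) and SHOW(STATUS) keywords are assumed values, shown here only to illustrate the flow.

```
INIT OLC PHASE(PREPARE) TYPE(ALL)        prepare phase on all IMSs
QUERY MEMBER TYPE(IMS) SHOW(STATUS)      check that prepare completed everywhere
INIT OLC PHASE(COMMIT)                   commit the change
QUERY MEMBER TYPE(IMS) SHOW(STATUS)      verify commit status

TERMINATE OLC                            only if a phase failed and must be backed out
```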

15.2.3 OLCSTAT data set


The OLCSTAT data set is new for IMS Version 8. One OLCSTAT data set is shared by all of
the IMS systems in the IMSplex. Its function is similar to that of the MODSTAT data set used
with local online change. It contains the online change status. This includes an indication of
which DDNAMEs are the active libraries. It also contains a list of the IMS systems which
participate in global online change.
The OLCSTAT data set is BSAM. It contains one record of variable size. IBM suggests the
data set attributes shown in Example 15-2, to support an IMSplex of up to 65 IMSs.
Example 15-2 OLCSTAT allocation attributes
DSORG     Sequential
RECFM     V
LRECL     5200
BLKSIZE   5204

The OLCSTAT data set is initialized by using the new global online change utility. Later, we
will see how this is done.
The OLCSTAT data set is dynamically allocated by IMS using the data set name defined in
the DFSCGxxx PROCLIB member (OLCSTAT=data set name). An OLCSTAT DD statement
should not be defined in the IMS procedure.
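As a sketch, the OLCSTAT data set could be pre-allocated with the suggested attributes using IEFBR14. The UNIT and SPACE values are illustrative assumptions; the data set name matches the DFSCGxxx example in this chapter.

```jcl
//ALLOC   EXEC PGM=IEFBR14
//* Allocate the OLCSTAT data set with the attributes from Example 15-2.
//* RECFM=V requires BLKSIZE = LRECL + 4 (5200 + 4 = 5204).
//OLCSTAT DD DSN=IMSPSA.IMA0.OLCSTAT,
//           DISP=(NEW,CATLG),
//           UNIT=SYSDA,SPACE=(TRK,1),
//           DCB=(DSORG=PS,RECFM=V,LRECL=5200,BLKSIZE=5204)
```

The data set is then initialized with the global online change utility (DFSUOLC0), as shown in 15.2.6.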

15.2.4 DFSUOLC0 functions


The global online change utility (DFSUOLC0) is a new utility in IMS Version 8. It is used for
the following four functions:
To initialize the OLCSTAT data set.
To add IMS systems to the list of IMSs using global online change. This is usually not
required. This function may be needed if you lose the OLCSTAT data set and must rebuild
it.
To delete IMS systems from the list of IMSs using global online change. You may need to
do this if you decide not to use an IMS system again. This will be explained later.
To unlock the OLCSTAT data set. This is only required in the rare case that every IMS
system participating in global online change abends while doing online change.

15.2.5 DFSUOLC procedure


IMS supplies the online change utility procedure in PROCLIB. Example 15-3 shows the
DFSUOLC procedure.


Example 15-3 DFSUOLC procedure
//         PROC FUNC=INI,ACBS=,MDBS=,FMTS=,MDID=,PLEX=,SOUT=A
//STEP1    EXEC PGM=DFSUOLC0,PARM=(&FUNC,&ACBS,&MDBS,&FMTS,&MDID,&PLEX)
//STEPLIB  DD   DSN=IMSPSA.IMA0.&SYS2.SDFSRESL,DISP=SHR
//OLCSTAT  DD   DSN=IMSPSA.IMA0.OLCSTAT,DISP=OLD
//SYSUDUMP DD   SYSOUT=&SOUT
//SYSPRINT DD   SYSOUT=&SOUT
//SYSIN    DD   DUMMY

The procedure contains the following symbolics, which in turn become program parameters:

FUNC    Function (initialize, add, delete, or unlock)
ACBS    Specifies initial ACBLIB DDNAME suffixes
MDBS    Specifies initial MODBLKS and MATRIX DDNAME suffixes
FMTS    Specifies initial format DDNAME suffixes
MDID    Specifies initial modification id
PLEX    Specifies the IMSplex name, used only for the unlock function

The //SYSIN DD statement is used to specify the IMS systems for the add and delete functions.
The modification id is a number which is incremented with each online change. It is similar in
function to the MODSTAT identifier used by local online change.

15.2.6 Initializing OLCSTAT


Example 15-4 is an example of using the global online change utility to initialize an OLCSTAT
data set. The DFSUOLC PROC is used.
Example 15-4 Initializing the OLCSTAT data set
//JOUKO3X  JOB (999,POK),'OLC',NOTIFY=&SYSUID,
//         CLASS=A,MSGCLASS=T,
//         MSGLEVEL=(1,1)
//*
//S        EXEC DFSUOLC,FUNC=INI,ACBS=A,MDBS=A,FMTS=A,MDID=1

This shows how a typical user would initialize the OLCSTAT data set. The initialization
function (INI) is specified. All DDNAME suffixes are set to A. This means we will begin with
IMSACBA, MODBLKSA, MATRIXA, and FORMATA as the active library DDNAMEs. The
initial modification id number is set to 1.

15.2.7 OLC copy utility


The online change utility (DFSUOCU0) has a new name in IMS Version 8. It is the online
change copy utility. The new name more clearly defines its function. Typically, it copies a
staging library to an inactive library, but there are several other options for input and output
data sets.
Example 15-5 contains an excerpt of the online change copy utility procedure (OLCUTL) from
PROCLIB.
Example 15-5 OLCUTL procedure
//         PROC TYPE=,IN=,OUT=,SOUT=A,SYS=,SYS2=,
//              OLCGLBL=,OLCLOCL='DUMMY,'
//S        EXEC PGM=DFSUOCU0,PARM=(&TYPE,&IN,&OUT)
//STEPLIB  DD   DSN=IMSPSA.IMA0.SDFSRESL,DISP=SHR
...
//MODSTAT  DD   &OLCLOCL.DSN=IMSPSA.&SYS.MODSTAT,
//              DISP=SHR
//OLCSTAT  DD   &OLCGLBL.DSN=IMSPSA.IMA0.OLCSTAT,
//              DISP=OLD
...

To support global online change the utility can read the OLCSTAT data set to determine the
inactive library. A new value for the OUT= parameter (G) allows this. The OLCSTAT DD
statement has been added to the procedure for this utility.

To support global online change the utility can read the OLCSTAT data set to determine the
inactive library. A new value for the OUT= parameter (G) allows this. The OLCSTAT DD
statement has been added to the procedure for this utility.
TYPE    Specifies the library to be copied. It can be the ACB, FORMAT, MATRIX, or
        MODBLKS library.

IN      Defines the library DDNAMEs to be used as input:
        If S, the IMS staging library (IMSACB, FORMAT, MATRIX, or MODBLKS).
        If I, a user input library (IMSACBI, FORMATI, MATRIXI, or MODBLKSI).

OUT     Defines the library DDNAMEs to be used for output:
        If A, the IMS A library (IMSACBA, FORMATA, MATRIXA, or MODBLKSA).
        If B, the IMS B library (IMSACBB, FORMATB, MATRIXB, or MODBLKSB).
        If O, a user output library (IMSACBO, FORMATO, MATRIXO, or
        MODBLKSO).
        If U, the target (inactive) library determined by the utility, using the MODSTAT
        data set. The target will be the library not currently in use by the IMS online
        system. This is the recommended value when local online change is used.
        If G (new in IMS Version 8), the target (inactive) library determined by the
        utility, using the OLCSTAT data set. The target will be the library not currently
        in use by the IMS online system. This is the recommended value when global
        online change is used.
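Putting the TYPE, IN, and OUT values together, a single copy run for the ACBLIB under global online change might look like the following sketch. The job card is illustrative, and the OLCUTL symbolics are assumed to default as shown in Example 15-5.

```jcl
//JOUKO3X  JOB (999,POK),'OLC COPY',NOTIFY=&SYSUID,CLASS=A,MSGCLASS=T
//* Copy the staging ACBLIB (IN=S) to the inactive ACBLIB; OUT=G tells
//* DFSUOCU0 to pick the target library from the OLCSTAT data set.
//COPY     EXEC OLCUTL,TYPE=ACB,IN=S,OUT=G
```

Because OUT=G resolves the target from the shared OLCSTAT data set, one run suffices for the whole IMSplex when resource consistency checking is in effect.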

15.3 Global online change processing


Global online change processes in a number of phases to coordinate the changes across all
of the IMSs in the IMSplex. The phases are: prepare, commit phase 1, commit phase 2, and
commit phase 3 (the cleanup phase).

15.3.1 Prepare
The prepare phase stops (QSTOPs) all incoming messages and transactions for changed
resources. This includes changed transactions, changed PSBs, and transactions whose
PSBs refer to changed databases. Prepare does not affect other processing. Input messages
already on the queue may be processed. All unaffected transactions may be processed. Any
PSB may be scheduled. This includes BMPs, IMS transactions, and PSBs scheduled by CICS
and ODBA. These actions occur with both local and global online change.
No scheduling occurs during commit phase 1 or 2.


Figure 15-4 shows the prepare process and the interaction between the CSL components
and the IMSs.


Figure 15-4 Prepare phase

When a prepare command is entered in OM, it is sent to one of the IMSs in the IMSplex. This
IMS is the master IMS for prepare processing. It does its prepare processing first. If it
succeeds, the master IMS tells RM to coordinate the prepare processing across the other
IMS systems. RM invokes prepare processing in these IMSs.
If the prepare processing fails in the master IMS, online change is aborted. The problem may
be resolved and the prepare command issued again.
If the prepare processing succeeds in the master IMS, the other IMSs are told to invoke
prepare processing. If prepare processing fails in one of them, other IMSs are not affected.
Their prepare processing is not backed out. In this situation users must cause the back out to
occur. This is done by issuing the TERMINATE OLC command. This will back out prepare
processing in all IMSs. Then the prepare may be attempted again. Of course, one should
resolve the problem which caused the previous failure.
These are just some of the reasons a prepare command could fail. They are the same
problems that could occur with local online change.
The library could be enqueued because the online change copy utility is still executing. An
open error could occur because the data set name specified for the library is wrong. A read
error could occur because of a DASD failure. A member of a library could be invalid because
it was generated by ACBGEN from another release of IMS.


15.3.2 Commit phase 1


Commit phase 1 locks the OLCSTAT data set by updating it to indicate the online change is in
progress. New IMS systems with OLC=GLOBAL cannot be initialized while the data set is
locked. Phase 1 also checks to ensure that no resources to be changed are in use. When all
IMS systems have successfully completed commit phase 1, commit phase 2 begins.
Figure 15-5 shows the processing that occurs when commit phase 1 begins.

Figure 15-5 Commit phase 1

Figure 15-6 shows the completion of commit phase 1.

Figure 15-6 Commit phase 1 completed

When a commit command is entered in OM, it is sent to one of the IMSs in the IMSplex. This
IMS is the master IMS for commit processing. It is not necessarily the same IMS that was the
master for prepare processing. The master does its processing for each commit phase
locally. Before it can move to the next commit phase, it invokes the phase in the other IMSs
by using RM.

15.3.3 Commit phase 2


The changes are made in commit phase 2 by switching the libraries. After a system makes
the change, it resumes scheduling. After all systems have made the change, commit phase 2
ends. Coordination of commit phase 2 begins as shown in Figure 15-7.


Figure 15-7 Commit phase 2

After completion of commit phase 2, the libraries have been switched, and the completion is
communicated as shown in Figure 15-8.

Figure 15-8 Commit phase 2 completed

If a commit phase fails in any IMS, it does not cause the processing in any other IMS to be
backed out. Commit processing stops, and you must take one of two actions. First, you can
issue the TERMINATE OLC command. This aborts all online change processing. Second, you
could reissue the INIT OLC PHASE(COMMIT) command. This reattempts the commit processing
in the phase where it failed.
These are some of the reasons that commit processing could fail. They are problems that
could also occur with local online change commit processing.
It may not be possible to change a PSB because it is currently in use. Prepare processing
would have stopped queuing IMS transactions using the PSB, but the PSB might have been
scheduled for another reason. These include the starting of a BMP, the processing of an input
transaction which was queued before the prepare, the continued processing of a
wait-for-input transaction which was scheduled before the prepare, the scheduling of the
PSB by a CICS transaction, or the scheduling of a PSB via ODBA.
It may not be possible to change a database because it is currently in use by a transaction or
BMP.

15.3.4 Commit phase 3


In commit phase 3 the OLCSTAT data set is unlocked. This allows new IMS systems with
OLC=GLOBAL to be initialized.
If the commit failed in phase 1, reissuing the command will cause phase 1 to be retried in
those IMSs where it failed. If it succeeds, commit processing continues to phase 2.


If the commit failed in phase 2, reissuing the command retries phase 2 in those IMSs where it
failed. If it succeeds, commit processing continues to phase 3.

15.4 Terminate command usage


As was mentioned before, you may issue the TERMINATE OLC command after a previous
prepare or commit attempt has failed.
If the failure occurred in prepare processing or in commit phase 1, the TERMINATE OLC backs
out all online change processing. The IMSs continue processing with the libraries they were
using before the online change attempt.
If the failure occurred in commit phase 2, the TERMINATE OLC command is rejected. In this
case, some of the IMSs have completed commit processing and are using the new libraries.
The problem preventing commit success in some of the IMSs should be resolved and the
INIT OLC PHASE(COMMIT) command should be reissued.
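The recovery choices just described can be sketched as the following command sequences; the commands are the ones used elsewhere in this chapter, shown here only to summarize the failure handling.

```
After a failed prepare, or a commit phase 1 failure:
TERMINATE OLC                  backs out the online change on all IMSs
INIT OLC PHASE(PREPARE) ...    retry once the problem is resolved

After a commit phase 2 failure (TERMINATE OLC is rejected):
INIT OLC PHASE(COMMIT)         reissue; retries only the failed phase
```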

15.5 Status display commands


IMSplex QUERY commands and IMS /DISPLAY commands can be used to display the status of
the online change processing.
The QUERY MEMBER TYPE(IMS) command shows the online change status of the IMSs
participating in global online change. It displays the current online change phase and one of
the following: in progress, completed, or failed.
The QUERY OLC LIBRARY(OLCSTAT) command shows the contents of the OLCSTAT data set.
The /DISPLAY MODIFY command shows the same information that it shows for local online
change. This is the information about online change processing in the IMS systems where the
command is processed. In particular, it shows what resources are to be changed and what
processing, if any, is preventing online change from continuing.

15.5.1 QUERY MEMBER TYPE(IMS)


The QUERY MEMBER TYPE(IMS) command results in several possible statuses and
attribute indicators:

OLCABRTC    Online change abort completed
OLCABRTI    Online change abort in progress
OLCCMT1C    Online change commit phase 1 completed
OLCCMT1I    Online change commit phase 1 in progress
OLCCMT2C    Online change commit phase 2 completed
OLCCMT2F    Online change commit phase 2 failed
OLCCMT2I    Online change commit phase 2 in progress
OLCMSTR     Online change command master
OLCPREPC    Online change prepare phase completed
OLCPREPF    Online change prepare phase failed
OLCPREPI    Online change prepare phase in progress
OLCTERMC    Online change terminate completed
OLCTERMI    Online change terminate in progress
GBLOLC      Global online change enabled

Example 15-6 shows the response to a QUERY MEMBER TYPE(IMS) command.


Example 15-6 QUERY MEMBER response
MbrName  CC  TYPE  STATUS            LclAttr  LclStat
IM1A      0  IMS   OLCPREPC,OLCMSTR
IM1A      0  IMS                     GBLOLC   OLCCMT1C
IM3A      0  IMS                     GBLOLC   OLCCMT1C
IM4A      0  IMS                     GBLOLC   OLCPREPC

The first response line with IM1A shows that INITIATE OLC PHASE(PREPARE) completed
successfully for the IMSplex. The global status is OLCPREPC. IM1A was the master of the
prepare.
The second response line with IM1A shows that IM1A is enabled for global online change and
it has completed commit phase 1.
The response line with IM3A shows that it also is enabled for global online change and has
completed commit phase 1.
The response line with IM4A shows that it is enabled for global online change and it has
completed prepare. This implies that it has failed commit phase 1.

15.5.2 QUERY OLC


Example 15-7 shows the response to a QUERY OLC LIBRARY(OLCSTAT) SHOW(ALL) command.
In the response:

A or B      Current DDNAME suffix for the library
LastOLC     Last type of online change: FMTLIB, ACBLIB, and/or MODBLKS
MbrList     IMSs using the current libraries; they are allowed to warm start

Example 15-7 QUERY OLC response


MbrName  CC  Library  DSName               ACBLIB  FMTLIB  MODBLKS  LastOLC  Modid  MbrList
IM1A      0  OLCSTAT  IMSPSA.IMS0.OLCSTAT  A       B       A        FMTLIB   02     IM1A,IM3A,IM4A

The current online change active libraries are ACBLIBA, FMTLIBB, MODBLKSA, and
MATRIXA. Modid 2 indicates that one online change has been performed. The OLCSTAT
data set name IMSPSA.IMS0.OLCSTAT is listed. The last online change was only for
FMTLIB. The member list of IMSs that are current with the online change libraries includes
IM1A, IM3A, and IM4A.

15.5.3 /DISPLAY MODIFY


/DISPLAY MODIFY is the same command and same response format that is used for local
online change in IMS Version 8 and previous releases.


Example 15-8 shows the response from a /DISPLAY MODIFY command.


Example 15-8 /Display modify response
LIBRARY IMSACBA (A) IMSPSA.IM0A.ACBLIBA
LIBRARY FORMATA (A) IMSPSA.IM0A.MFS.FORMATA
LIBRARY MODBLKSA (A) IMSPSA.IM0A.MODBLKSA
LIBRARY MATRIXA (A) IMSPSA.IM0A.MATRIXA
LIBRARY IMSACBB (I) IMSPSA.IM0A.ACBLIBB
LIBRARY FORMATB (I) IMSPSA.IM0A.MFS.FORMATB
LIBRARY MODBLKSB (I) IMSPSA.IM0A.MODBLKSB
LIBRARY MATRIXB (I) IMSPSA.IM0A.MATRIXB
DATABASE RXLDB101   PSB SCHEDULED
TRAN     SNWFT116   QUEUING 6
DISPLAY MODIFY COMPLETE   *02182/110535*   IM1A

This example shows the current active libraries, marked with (A), and the current inactive
libraries, marked with (I).
Database RXLDB101 is being changed. A PSB using this database is currently scheduled.
Transaction SNWFT116 is being changed. There are currently 6 instances of this transaction
on the queue.

15.6 Adding and deleting IMS subsystems


When an IMS system defined with global online change is first started, it is added to the list of
IMS systems in the OLCSTAT data set. That means that a cold start is done after newly
defining OLC=GLOBAL in the system's DFSCGxxx member. A system may also be added to
the data set by using the ADD function of the global online change utility (DFSUOLC0),
however, this is typically not required. The ADD function would typically be used only if the
data set were being rebuilt.
An IMS system is removed from the OLCSTAT data set list by either of two actions. First, the
system may be terminated with a /CHE FREEZE LEAVEPLEX command. This means the system
is leaving the IMSplex. If you have a system which you do not plan to restart, you should
terminate it this way. Second, the IMS system may be removed from OLCSTAT by using the
DEL function of the global online change utility. You would typically use this if you terminated
a system and later decided that you would not restart it.
Example 15-9 shows a sample job stream to delete IMS subsystems from the OLCSTAT data
set.
Example 15-9 Deleting IMS subsystems from OLCSTAT
//JOUKO3X  JOB (999,POK),'OLC',NOTIFY=&SYSUID,
//         CLASS=A,MSGCLASS=T,
//         MSGLEVEL=(1,1)
//*
//STEP1    EXEC DFSUOLC,FUNC=DEL
//SYSIN    DD *
IMSA
IMSB
/*


15.7 Inactive subsystems


Online change cannot be done with an inactive IMS in the IMSplex, unless the INIT OLC
PHASE(PREPARE) command includes either OPTION(FRCNRML) or OPTION(FRCABND).
FRCNRML allows online change to be done even though there are normally terminated IMS
systems within the IMSplex. FRCABND and FRCNRML specified together allow online
change to be done even though there are normally terminated or abended IMS systems
within the IMSplex.
IMS restricts the type of restart that may be done by a system when it was not active during
an online change. This is done to prevent a restart from processing records created with a
different set of libraries than are used by the restart.
Table 15-2 shows the types of restarts that are permitted when an IMS was not active during
the online change. If multiple online changes occur while an IMS is inactive, the inactive IMS
must be cold started (/NRE CHECKPOINT 0), regardless of the type of online changes made.
Table 15-2 Inactive IMS restart options

Last online change type   Restart command permitted
ALL                       /NRE CHECKPOINT 0
MODBLKS                   /NRE CHECKPOINT 0
ACBLIB                    /ERE COLDBASE
                          /NRE CHECKPOINT 0
FORMAT                    /NRE
                          /ERE
                          /ERE COLDCOMM
                          /ERE COLDBASE
                          /NRE CHECKPOINT 0

IMS restart is not sensitive to changes made in the MFS FORMAT library. So, an online
change only for FORMAT does not restrict the restart. All other online changes require a cold
start of the affected part of IMS. A change to ACBLIB requires a cold start of the database
manager part of IMS. A change to MODBLKS requires a cold start of all of IMS.

15.8 Resource consistency checking


Resource consistency checking may be used to ensure that each IMS system using global
online change is using the same data sets. This capability requires the use of a resource
structure. The checking may be turned off by use of the NORSCCC parameter in the
DFSCGxxx member for an IMS system. Specifying NORSCCC=(MODBLKS) indicates that
neither MODBLKS nor MATRIX data set names will be checked for consistency.
If resource consistency checking is used for ACBLIB, initialization for an IMS system will fail if
it is not using the same ACBLIB data set names that are being used by other members of the
IMSplex which are also invoking consistency checking.
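As a sketch of what this looks like in practice (the exact DFSCGxxx syntax is in the installation documentation, and the coding below is illustrative), the member might contain:

```
NORSCCC=(MODBLKS)
```

With this line, SCI, OM, and ACBLIB consistency checking still occurs, but the MODBLKS and MATRIX data set names are not compared across members.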

232

IMS Version 8 Implementation Guide

15.9 Migration and fallback


You can have a mixed online change environment where some IMSs use global online
change and some use local online change. Of course, you would have to coordinate the
changes between the systems using global online change and those using local online
change.
Some members of the IMSplex may be defined with global online change (OLC=GLOBAL in
DFSCGxxx), and the INITIATE OLC commands would affect all of those instances.
Other members could be defined to use local online change (the default, or OLC=LOCAL in
DFSCGxxx), and would use the /MODIFY commands on each IMS instance. You would then
have to coordinate the changes between the two sets of systems manually.

Important: Note that a change to global online change requires a cold start of the IMS
system.
IMS systems may be migrated to global online change one system at a time. The process is
shown here.
1. Define OLCSTAT data set
2. Run the DFSUOLC0 utility to initialize OLCSTAT data set, before the IMS is cold started
for the first time
3. Shut down an IMS
4. Remove MODSTAT DD statements from the IMS control region JCL
5. If you are using XRF, also remove MODSTAT2 DD
6. Define the IMS's DFSCGxxx with OLC=GLOBAL & OLCSTAT=dsname
7. Cold start the IMS
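Step 6, for example, amounts to coding lines like the following in the DFSCGxxx member (the data set name is illustrative, and the exact parameter syntax should be checked in the installation documentation):

```
OLC=GLOBAL,
OLCSTAT=IMSPSA.OLCSTAT
```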
It is advisable to limit the time that some systems are using local online change and some are
using global online change. The support for the mixed environment is provided to facilitate the
migration of systems one at a time.
Fallback from global online change to local online change is also supported. This may also be
done one system at a time.

Important: Note that fallback to local online change from global also requires an IMS cold
start.
1. Define DFSCGxxx OLC=LOCAL
2. Shut down IMS
3. Add MODSTAT DD statement to IMS control region JCL
4. If you are using XRF, also add MODSTAT2 DD
5. Remove the OLCSTAT DD statement from the control region JCL
6. Run INITMOD job to initialize MODSTAT data set
7. Cold start IMS

15.10 Requirements
Global online change requires the OLCSTAT data set and the use of the Common Service
Layer (CSL). In IMS PROCLIB member DFSCGxxx, OLC=GLOBAL and OLCSTAT=data set
name must be defined.
Common Service Layer (CSL) requirements:
- Structured Call Interface address space on each system in the IMSplex
- Operations Manager
- Resource Manager
  - At least one RM in the IMSplex
  - Resource structure required for resource consistency checking


Chapter 16. Single point of control


In this chapter we introduce the concept of a single point of control (SPOC) in an IMSplex
environment. We discuss the supplied single point of control TSO ISPF application, which
utilizes components of the Common Service Layer (CSL) to provide operator interaction
across multiple IMS systems in an IMSplex, and provide examples of its use. Also, we provide
information on the REXX SPOC interface, which provides programmable access to SPOC
functions and outputs.


16.1 Introduction to SPOC


A major new function being delivered with IMS Version 8 is the ability to manage a group of
IMSs (an IMSplex) from a single point of control (SPOC). The purpose of the single point of
control is to ease IMSplex management by providing a single system image of the IMSplex,
and by allowing commands to be entered and routed to all, or selected, IMSs in the IMSplex.
Prior to IMS Version 8, most commands and automated processes only affected a single IMS.
This functionality is provided by components of the new Common Service Layer (CSL).
The Operations Manager provides an application programming interface (API) for application
programs that perform automated operator actions. These programs are called automated
operator programs (AOPs).
An AOP issues commands that are embedded in OM API requests. The responses to these
commands are returned to the AOP (through the OM API) embedded in XML tags. An AOP
can be written in assembler using the macro interface documented in IMS Version 8: Common
Service Layer Guide and Reference, SC27-1293. An AOP can also be written in REXX,
enabling it to run in TSO or in the NetView environment. A REXX AOP uses the REXX SPOC
API shipped with IMS Version 8. The REXX SPOC API supports a subset of the functions
supported by the OM API.
Before an AOP can start issuing OM requests, it must register with SCI. The OM uses SCI to
route the commands or requests to the IMSplex components on behalf of the AOP. When the
AOP completes its work with the IMSplex, it must deregister from SCI.
One example of an AOP is the TSO SPOC application described in section 16.2, TSO SPOC
application on page 238, that is shipped with IMS Version 8.
IMS Version 8 does not require the use of SPOC, as the existing command interfaces for the
WTOR, MTO, and E-MCS console continue to be supported. The new format IMSplex
commands, however, can only be entered from a SPOC.
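To illustrate the distinction (the transaction name is purely illustrative), the first command below is a classic command that can still be entered at the MTO, WTOR, or E-MCS console, while the second is a new format IMSplex command that must come through the OM API:

```
/DISPLAY TRAN PART1              classic command (slash optional in a SPOC)
QRY TRAN NAME(PART1) SHOW(ALL)   IMSplex command (OM API / SPOC only)
```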
Figure 16-1 shows an overview of the CSL environment, the command interfaces provided
by the new SPOC facilities using the SCI, and the existing classic interfaces that are still
available.


Figure 16-1 IMS command access in a CSL environment (diagram: a SPOC, automation, and
the MTO reach the IMS control region through the Structured Call Interface and the
Operations Manager; the Resource Manager, with its resource list structure, holds
transactions, lterms, msnames, users, nodes, userids, and processes; the Common Queue
Server and VTAM/TCP/IP communications also connect through SCI)

The SPOC application may or may not be on the same system as OM, but it must be on the
same system as SCI, as SCI is used to communicate with OM. For detailed information on
the Common Service Layer, and the various configuration options, see Chapter 13, Common
Service Layer (CSL) architecture on page 155, and Chapter 20, Common Service Layer
configuration and operation on page 289.
You can write your own SPOC applications, use the IBM-provided TSO/ISPF SPOC
application, or write a REXX SPOC application that uses the REXX SPOC API. Using a SPOC
application, you can:
- Issue commands to all the IMS subsystems in an IMSplex.
- Display consolidated responses from those commands.
- Send a message to an IMS terminal that is connected to any IMS in the IMSplex using the
  BROADCAST command.

The limitations (commands not supported) of the TSO SPOC application are the same as
those for the OM API. The commands supported by the OM API can be found in the IMS
Version 8: Release Planning Guide, GC27-1305. Complete information on all of the IMS
commands can be found in the IMS Version 8: Command Reference, SC27-1291.

Chapter 16. Single point of control

237

16.1.1 Command behaviors


In an IMSplex environment, IMS commands issued through OM can behave differently than
when those same commands are issued to an individual IMS subsystem. As noted, the new
IMSplex commands can only be issued through the OM API. Existing IMS commands can
be issued through the OM API or to individual IMSs. These commands are called classic
commands hereafter.
Commands that are issued to OM are, by default, routed to all the IMSplex components that
are active and have registered interest in processing those commands. If you want to route a
command to one or more specific IMSs in the IMSplex, use the ROUTE() parameter on the
command request.
Depending on whether an IMSplex is defined with a Resource Manager (and there is a
resource structure available to RM), command behavior can be affected. When a resource
structure is not defined, resource status is maintained on local IMSs in the IMSplex. In this
case, commands only have a local effect.
If RM is defined with a resource structure in the IMSplex, RM maintains global resource
information, including resource status. So, in this scenario, resource status is maintained both
globally and locally. Usually, if a user signs off or a client shuts down, resource status is
maintained globally but deleted locally.
Another behavior that is worth noting is how command processing clients process classic
commands that are routed to the entire IMSplex. In general, OM chooses one of the
command processing clients in the IMSplex to be the master to coordinate the processing of
the classic commands.
command depends on where the command resource status is kept. If the command resource
status is kept in a resource structure, the classic command will usually be processed by a
non-master client where the command resource is active. If the command resource is not
active on any of the command processing clients in the IMSplex, OM will route the classic
command to the master client. If the classic command is being routed to all the clients in the
IMSplex, command processing clients where the command resource is not active will reject
the classic command.
For additional information on command processing by Operations Manager, see Chapter 13,
Common Service Layer (CSL) architecture on page 155.

16.2 TSO SPOC application


The IMS Time Sharing Option (TSO) single point of control (SPOC) is a supplied application
from which a user can manage the operations of all IMS systems within a single IMSplex.
There can be more than one TSO SPOC in an IMSplex.
The TSO SPOC uses an ISPF panel interface and communicates with a single Operations
Manager (OM) address space. OM then communicates with all of the other address spaces in
the IMSplex (for example, IMS) as required for operations.
As shown in Figure 16-2, TSO SPOC registers with a Structured Call Interface (SCI), which
must be on the same OS/390 or z/OS image as the SPOC. It also registers with OM, which
may be on any image in the IMSplex. After registration, the SPOC uses SCI communications
to send commands through OM to any or all IMSs in the IMSplex.


Figure 16-2 TSO SPOC registration and communication (diagram: the TSO/ISPF single point
of control, DFSSPOC, registers with its local SCI and with the Operations Manager; command
entry and responses then flow through SCI and OM to the IMS control regions in the IMSplex)

The TSO SPOC provides the following functions to an IMSplex:
- Presents a single system image for an IMSplex by allowing the user to enter commands to
  all IMSs in the IMSplex from a single console
- Displays consolidated command responses from multiple IMS address spaces
- Sends a message to an IMS terminal connected to any IMS control region in the IMSplex
  by using the IMS /BROADCAST command
- Allows the user to enter commands
- Receives command responses and formats the XML tagged responses into readable
  format
- Displays IMSplex and classic IMS command responses
- Allows the user to sort IMSplex command responses by column
- Allows the user to set default command parameters
- Keeps a history of commands
- Allows the user to enter long commands
- Allows user specified grouping of IMSplex members

As an ISPF application, it also provides other typical ISPF application features:
- Print: The command responses and log information can be put to the ISPF list file
- Save: Information can be saved to a user specified data set
- Find: Command responses can be searched for text strings
- Help: Help dialogs are available for application use information
- Scrolling: Where applicable, data can be scrolled left and right, or up and down

16.2.1 Getting started


The SPOC application is enabled by allocating the required files to the user's TSO session.
Table 16-1 shows the DD names, and the associated files that need to be allocated via the
ALTLIB and LIBDEF functions.


Table 16-1 SPOC file allocation associations

  Usage            Data set
  ---------------  ------------------------
  FILE (ISPLLIB)   IMS.SDFSRESL
  FILE (ISPPLIB)   IMS.SDFSPLIB
  FILE (ISPMLIB)   IMS.SDFSMLIB
  FILE (ISPTLIB)   user.ISPTLIB
                   IMS.SDFSTLIB
  FILE (ISPTABL)   user.ISPTLIB
  FILE (SYSPROC)   IMS.SDFSEXEC
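These allocations can be made dynamically from a REXX exec with ALTLIB and the ISPF LIBDEF service, roughly as sketched below. The data set names and the final SELECT invocation are illustrative only (the supplied startup exec described next performs the real allocations for you):

```rexx
/* REXX - rough sketch of the SPOC allocations (illustrative only) */
"ALTLIB ACTIVATE APPLICATION(EXEC) DATASET('IMSPSA.IMS0.SDFSEXEC')"
ADDRESS ISPEXEC
"LIBDEF ISPLLIB DATASET ID('IMSPSA.IMS0.SDFSRESL')"
"LIBDEF ISPPLIB DATASET ID('IMSPSA.IMS0.SDFSPLIB')"
"LIBDEF ISPMLIB DATASET ID('IMSPSA.IMS0.SDFSMLIB')"
"LIBDEF ISPTLIB DATASET ID('USERID.ISPTLIB','IMSPSA.IMS0.SDFSTLIB')"
"LIBDEF ISPTABL DATASET ID('USERID.ISPTLIB')"
"SELECT CMD(DFSSPOC) NEWAPPL(DFS) PASSLIB"  /* start the TSO SPOC */
```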

A SPOC startup REXX exec is supplied in IMS.SDFSEXEC(DFSSPSRT), which takes the
IMS high level qualifier as a parameter and performs the necessary allocations. It can be
invoked from a TSO command prompt as follows:
TSO ex 'IMSPSA.IMS0.SDFSEXEC(DFSSPSRT)' 'hlq(imspsa.ims0)'

This will allocate the libraries and invoke the TSO SPOC application, and will place you on the
SPOC control panel. Table 16-2 shows the hierarchy of the TSO SPOC application menu
options, and their function.
Table 16-2 SPOC application menu structure

File
  1. Save As - Saves the current (displayed) command response to an ISPF data set. When
     you save a command response, you are prompted to input a data set name. The
     command entered, IMSplex name, and a timestamp are included in the heading of the
     saved file.
  2. Print - Sends only the command response information that you have chosen to view on
     the screen to the ISPF list data set. When the data has been successfully sent, a
     confirmation message appears at the bottom of the screen. After the information has
     been sent to the ISPF list data set, you can manipulate the list data set file from within
     ISPF.
  3. Print All - Sends both log information and IMSplex command response to the ISPF list
     data set. When the data has been successfully sent, a confirmation message appears at
     the bottom of the screen. After the information has been sent to the ISPF list data set,
     you can manipulate the list data set file from within ISPF.

Display
  1. Cmd entry & response - Changes the IMS SPOC display so that the IMSplex command
     input area is at the top of the screen, and the command response information is at the
     bottom of the screen. To toggle the display so that the command message log is shown
     at the bottom of the screen, choose Showlog (F4).
  2. Cmd entry & log - Changes the IMS SPOC display so that the IMSplex command input
     area is at the top of the screen, and the command message log is shown at the bottom
     of the screen. To toggle the display so that the command response information is shown
     at the bottom of the screen, choose Showlist (F4).
  3. Command status - Enables you to re-issue or delete any previously entered command
     from any IMS SPOC session. For commands issued during this IMS SPOC session, you
     can also re-display command responses. All previously entered commands are stored
     in your personal ISPF ISPTABL file. When you select this command, a table listing all of
     your previously entered commands displays. The table shows the command issued and
     the command status.
  4. Command shortcuts - Enables you to maintain a table of frequently used commands to
     save time and keystrokes.
  5. Expand command - Use this command when you need to issue an IMSplex command
     that is longer than the 145 character command line. Using Expand command, you can
     issue IMSplex commands up to 1024 characters long.

View
  1. Find - Enables you to search the current (displayed) command response for a specific
     character or series of characters. All fields that are in the command response are
     searched, not just the fields displayed on the screen.
  2. Sort - Enables you to sort the current (displayed) command response by a specific
     column, in ascending or descending order.

Options
  1. Preferences - Use this command to set default IMSplex routing and IMS SPOC
     operating preferences.
  2. Set IMS groups - Enables you to add, delete, and set default names of IMSplex group
     members.

Help
  1. Help for Help - Explains the general function of the help dialogs
  2. Extended help - Explains how to use the SPOC application to issue commands
  3. Keys help - Explains current function key processes
  4. Help Index - Provides an index of help topics
  5. Tutorial - Provides access to an online guide to help you use the SPOC application
  6. About - Displays release and copyright information

Tip: Take the time to review the function key settings on each of the screens. The TSO
SPOC is a CUA compliant application, but some of the function key assignments may not
be what you expect. The application uses a number of ISPF dialog keylists to provide
context specific function key processing.


Figure 16-3 shows the TSO SPOC control panel. The initial panel contains the command
input area for IMS commands up to 165 characters, the scrollable response area, and fields
to override the target IMSplex name, the specific members to receive the routed command,
and the time to wait for the command to complete. These fields can be used to temporarily
override the default values specified in the IMS SPOC preferences.

Figure 16-3 SPOC control panel

16.2.2 Preferences
Before utilizing the SPOC application, it is necessary to provide some default preferences. To
access the preferences dialog, select Options >1. Preferences. Figure 16-4 shows the
SPOC preference panel. Specify the name of the IMSplex that you want to use as the TSO
SPOC default. Whenever the TSO SPOC application prompts you to enter an IMSplex name,
the default IMSplex name that you enter here will be used unless you specify a different
IMSplex name when prompted. You must specify a default IMSplex value the first time you
use TSO SPOC application.
You may optionally specify which IMS systems or previously defined IMS groups (which will
be discussed later) within the default IMSplex will be the default IMS systems to process
your commands. That is, all commands you enter will only affect these IMS systems, unless
you override the preference by using the plex and route fields on the SPOC control panel.
Each IMS system or group should be delimited by a space or comma. You can enter as many
IMS names or group names as the space provided accommodates.
You can optionally specify a wait interval to set a default time limit for all commands to
process before the system times out the command. That is, all your IMS commands must
process within the time you specify here unless you override this value with a wait value on
the TSO SPOC control panel.
If the IMS system does not process the command within the specified time, the system
times out the command and no response is returned. You know the IMS system is processing
the command you entered because of the 'single point of control - Executing' panel that
appears until the command has been completed or until the wait interval has expired. Once
the command process has finished, or has timed out, you will be returned to the TSO SPOC
control panel.
The default wait time, if no time is entered, is five minutes. All wait intervals are in the form of
MMM:SS (M=minutes; S=seconds). If you only enter a number and no colon, the SPOC will
interpret the number as seconds. For example, if you enter 120 as your wait interval value,
the TSO SPOC will interpret this as 120 seconds, or two minutes.


Optionally specify whether you want to wait for the command to complete and return a
response or not. If you prefer not to wait for command completion, you can review the
command status and output by selecting Display >3. Command status from the action bar.
You can enable or disable whether the TSO SPOC is to process shortcuts, which will be
discussed later. TSO SPOC will not use shortcuts if this selection is not specified.
Optionally indicate which panel should appear at startup: the TSO SPOC command and
response panel, which allows you to enter commands and receive the output, or the SPOC
status list, which displays the execution status of all commands you have submitted. The
command and response panel is the default if unspecified on the preference panel.
The preferences shown in Figure 16-4 should be adequate to get you started exploring the
features of the TSO SPOC application.

Figure 16-4 SPOC preferences specification panel

16.2.3 IMSplex and classic command displays


With the necessary preferences set you can begin working with the TSO SPOC application.
Select Display >1. Cmd entry & response from the action bar to access the command
entry and response panel.
The basic operation is to enter a command in the command line and for the command
response to be provided in the data area below the command line. The command line is
cleared in preparation for the next command: the command issued is shown just above the
command response. There are three short fields: plex, route, and wait. These are temporary
overrides of the fields in the preferences panel. These values are discarded after you exit
SPOC.


Figure 16-5 shows the results of an IMS DISPLAY QUEUE TRAN command which is one of
the classic IMS commands. The command can be retrieved to the command prompt by
placing the cursor on the command display in the response area and pressing enter. It may
then be modified, or re-submitted.
As you can see in the example, for IMS classic commands, you don't need to enter the
command recognition character / (slash). If you desire, you can still enter the slash, as IMS
Operations Manager can handle both formats of classic commands (it removes the slash
before actual processing).

Note: The TSO SPOC uses the OM API. The OM API does not support all of the variations
in syntax that were acceptable before. IMS has to register with OM and indicate which
commands it can process. Only a few of the variations are registered by IMS. Refer to IMS
Version 8: Command Reference, SC27-1291 for the list of the command verbs and primary
keywords that can be issued through the OM API.

Figure 16-5 SPOC classic command results display

For classic IMS commands, the response is in a sequential format. At top are some execution
statistics and log information. Below are the messages produced by the IMS command. The
text is in the same format as that of prior releases. The text is prefixed by the member name,
and information from each member is grouped together. Each message line is a single XML
tagged value.
Figure 16-6 shows the results of a QRY IMSPLEX SHOW(ALL) command which is one of the
new format IMSplex commands.


Figure 16-6 SPOC IMSplex command results display

IMSplex commands may have log information too if there were some messages to display.
For example, one system may say 'no resources found' while other systems provide valid
resource information. The 'no resources' indication would appear in the log. To review the log
information select Display>2. Cmd entry & log from the action bar. The command and log
panel is displayed as shown in Figure 16-7.

Figure 16-7 SPOC command and log panel


16.2.4 Defining groups


The TSO SPOC application provides an IMS group function. To access the group definition
panel select Options>2. Set IMS groups from the action bar. Use the group definition
dialog to create user defined groupings of command processors. When a command is routed
to this group, only the command processors listed will execute the command. Figure 16-8
shows the definition of several groups.
Use the blank line to create a new group. Use the 's' to select a default group, or 'd' to delete a
group definition.
If you delete a group name that is in the default routing list, you will be prompted to confirm
the delete.

Figure 16-8 SPOC group definition panel

16.2.5 Defining command shortcuts


The TSO SPOC application allows you to create and manage a table of selectable commands
that consist of predefined parameters for your convenience.
If you issue one of the commands in the table, the additional parameters are appended to the
entered command. This occurs before default routing is added.
To access and manage shortcuts, select Display>4. Command shortcuts from the action
bar. Figure 16-9 shows the command shortcuts screen and several defined shortcuts.
To use the command shortcuts, you must first indicate that shortcuts should be used on the
preference panel as shown in Figure 16-4 on page 243.


Figure 16-9 SPOC command shortcut definition panel

In addition to IMSplex commands and the supported classic commands, you can create a
shortcut by using the ampersand (&) as the first character. In this case, the command
parameters are not appended to the shortcut, but they replace the shortcut entirely.
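For instance, with the hypothetical shortcut entries below (neither is a shipped default), entering QRY TRAN would be submitted as QRY TRAN SHOW(ALL), while entering &QAT would be replaced entirely by the command in its additional parameters field:

```
Cmd & resource    Additional parameters
QRY TRAN          SHOW(ALL)
&QAT              QRY TRAN NAME(A*) SHOW(ALL)
```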
A blank line is provided for you to define new commands. As you add new commands, they
are translated to upper case. Adjacent blanks are eliminated so that only one blank is
included. The command shortcuts table has the following field columns with these
possibilities:
An action field is provided to allow you to manage the command parameter list by entering
these actions:
- Retrieve the command (slash). The command appears in the command area of the panel
  and is ready for execution.
- Delete the command shortcut from the list.
- Issue the command.
The Cmd & resource field is considered the shortcut and is used as the key field. The table is
sorted by this field. When you issue a command, it is compared against the shortcuts in this
table, before submitting it to IMS. If there is a match, the additional parameters are appended
to your command, or command replacement occurs if the shortcut begins with an ampersand
(&).
The additional parameters field allows you to set a default for parameters to be added to the
entered command, or the command that is to replace the shortcut for shortcuts beginning
with an ampersand (&).


16.2.6 Saving and printing


The TSO SPOC application includes options to print and save the currently displayed
information. To print the information to your ISPF list data set, select File > 2. Print or
File > 3. Print All from the action bar. If you select Print, TSO SPOC sends only the
command response information that you have chosen to view on the screen to the ISPF list
data set. When the data has been successfully sent, a confirmation message appears at the
bottom of the screen. If you select Print All, both log information and IMSplex command
response are sent to the ISPF list data set.
You can also save the information to a data set. Select File > 1. Save As from the action bar,
and you will be presented with the save response and log panel as shown in Figure 16-10.

Figure 16-10 SPOC save response and log panel

This panel gives you the option to save the currently displayed information. You may save
command response information from IMSplex commands, or 'log' information which will
typically have command responses from classic IMS commands, or both.


16.2.7 Sorting and searching results


The TSO SPOC application provides a basic text search function that applies to both classic
and IMSplex command responses. A sort function is available for IMSplex responses.
Figure 16-11 shows the command entry and response for a QRY TRAN SHOW(ALL)
command.

Figure 16-11 SPOC results from a QRY TRAN command

To search for a particular text string, from the action bar select View>1. Find to invoke the
search command responses dialog shown in Figure 16-12. Enter the search string and press
enter.


Figure 16-12 SPOC find text dialog box

The command response display will be scrolled to the location of the search string as shown
in Figure 16-13.

Figure 16-13 Results of a find command


To sort on a particular IMSplex response field, select View>2. Sort from the action bar to
present the sort column name selection dialog as shown in Figure 16-14. Select the column
using either a '/', or the character 'A' for ascending order or 'D' for descending order, and
press Enter.

Figure 16-14 SPOC sort selection panel


As shown in Figure 16-15, the command response area will be sorted by the selected
column. In our example, the command response output has been sorted by MbrName.

Figure 16-15 Results from the sort request


16.2.8 Command status


The TSO SPOC application maintains a table of the previously entered commands. To access
the command and command status panel select Display>3. Command status from the
action bar. Figure 16-16 shows the command and command status panel.

Figure 16-16 SPOC command and command status panel

The commands are displayed with the most recent commands at the top of the list. A status
column indicates if a command has completed, is still executing or was issued sometime in
the past. The table includes an input field to allow you to resubmit a command, delete the
command completely or display the command response.

16.2.9 Leaving the SPOC application


When you end your session with the TSO SPOC application, you are presented with the
following options:
1. Do not exit
2. Exit and keep the command responses
3. Exit and erase the command responses
The first option returns you to the previous SPOC panel, and is provided in case you pressed
the exit or cancel key accidentally and do not wish to leave the application.
Selecting the second option exits the SPOC application, but the responses of any commands
entered in SPOC will still be available if you start SPOC again later in this TSO session. The
command responses are discarded if you log off of TSO.
The third option discards the command responses immediately.


Chapter 17. User written interface to Operations Manager
In this chapter we briefly describe the new Operations Manager (OM) interface available in
IMS Version 8 when running in a Common Service Layer environment. We provide an
example of a REXX SPOC that will issue commands to IMS through the OM interface.


17.1 Introduction to Operations Manager user interface


The interfaces to all of the new CSL services provided by the new components, SCI, OM, and
RM, are documented in IMS Version 8: Common Service Layer Guide and Reference,
SC27-1293.
The Operations Manager (OM) (see Operations Manager (OM) on page 164) provides
services to the IMSplex by allowing user supplied Automated Operator Programs (AOPs) to
send commands to CSL components and then consolidating the responses to those
commands before returning them to the AOP. OM can optionally provide command security
using RACF or a user written command authorization exit.
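As a preview of what this looks like with the REXX SPOC API, the overall shape of a small REXX AOP is sketched below. The IMSplex name, routing, transaction mask, and token are illustrative; the exact host command environment and function names should be verified against the CSL Guide and Reference before use:

```rexx
/* REXX - sketch of an AOP using the REXX SPOC API (illustrative) */
ADDRESS LINK 'CSLULXSB'              /* set up the IMSSPOC environment   */
ADDRESS IMSSPOC
"IMS PLEX1"                          /* IMSplex name, without CSL prefix */
"ROUTE IMSA"                         /* optional routing to one member   */
cart = 'CMD00001'
"CART" cart                          /* token pairing command and answer */
"QRY TRAN NAME(A*) SHOW(ALL)"        /* the command itself               */
rc = cslulgts('RESP.', cart, '1:00') /* wait up to 1 min for XML output  */
DO i = 1 TO resp.0
   SAY resp.i                        /* XML-tagged response lines        */
END
"END"                                /* terminate the IMSSPOC session    */
```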
The IMSplex environment will have been established before an AOP can join the IMSplex,
meaning all required address spaces are started (SCI, OM, RM, and IMS). IMS will have
registered with OM for the commands that it can process (this includes both classic
commands and the new IMSplex commands). IMS is called the command processing (CP)
client.
When an AOP is started, it must join the IMSplex by registering with the local SCI address
space. This is done with a CSL (CSLSCREG) macro described in the referenced manual. SCI
may invoke RACF to authorize that AOP's user ID to join the IMSplex. Once the AOP has
registered with SCI, it can begin submitting commands from end-users and receiving
responses from the CP client(s) through OM using other CSL macros. The following are
typical of the steps that an AOP might take.
1. Register with SCI; join the IMSplex as an AOP.
2. Receive a request from an AOP client to submit a command to one or more IMSs in the
IMSplex. This request might be from a directly connected z/OS user (for example, a TSO
user), from an external (network attached) client, or from another system address space
such as Netview.
3. Submit the command to OM using CSL provided macros (CSLOMI or CSLOMCMD).
Included in the request are the command and any routing (which IMSs) and timeout
(WAIT) values.
4. When OM receives the command, it will (optionally) authorize the AOP user ID for the
command. It will then forward the command to the CP client(s) according to the routing
information provided by the AOP. It will wait for a response for a period of time specified on
the input from the AOP.
5. The CP client will execute the command and return the response to OM encapsulated in
XML tags.
6. When OM has all the responses, or the WAIT interval has expired, it will forward the
responses to the AOP, still encapsulated in XML.
7. The AOP would process the response depending on where the command originated.
a. If it came from a locally (z/OS) attached user with a display device, the AOP should
format the response for display, interpreting the XML tags and putting the response in a
format understandable by a person. An example of this type of AOP is the TSO SPOC
provided with IMS Version 8 and described in detail in Single point of control on
page 235.
b. If it came from a network client, then the AOP can simply forward the response, XML
tags and all, to the network client and let that client format the response.
c. A third possibility is a user written automation program running, for example, as a
Netview EXEC, which would in turn invoke a function to register with the local SCI and
issue commands through OM to any or all IMSs in the IMSplex. In this case, the client
is the Netview EXEC and there is no external client to display a response to. The EXEC
merely examines the response to determine whether the command was successful or
not.


8. The AOP would continue receiving, forwarding, and responding to requests from its
clients, then deregister from SCI.
Figure 17-1 shows how each of the first two of these AO client types might be configured in
an IMSplex.

[Figure 17-1 shows workstation and command entry AO clients reaching the IMSplex either
through TCP/IP and IMS Connect to a command forwarding AO client, or through VTAM to the
TSO SPOC. Each AO client registers with the Structured Call Interface, which passes its
commands to the Operations Manager and on to the command processing clients (IMS).]

Figure 17-1 Typical AOP configurations

The following section describes in detail how a REXX EXEC might be written to register with
the IMSplex and invoke the OM interface.

17.2 REXX SPOC example


In addition to the TSO ISPF SPOC application, a REXX SPOC API has been provided. The
REXX interface allows REXX programs to submit commands to OM and to retrieve the
responses. The REXX programming language is frequently used to implement automation
software. Programs written in REXX can run in a NetView environment, foreground TSO, or
batch TSO.
In order to use the REXX interface, the executables must be available to the TSO session.
This is done either by adding the SDFSRESL library to the STEPLIB DD statement in the TSO
logon JCL, or by using the TSOLIB command to add the SDFSRESL library to the TSO search
order. Table 17-1 shows the allocation methods and the associated data sets that need to be
allocated.


Table 17-1 SPOC REXX interface file allocation associations

Usage              Data set
TSOLIB command     IMS.SDFSRESL
FILE (SYSPROC)     user.written.execs
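As an illustration only, these allocations could be made from a TSO READY prompt before starting the REXX program. The data set names are placeholders for your installation's libraries, and TSOLIB must be issued when no other programs (such as ISPF) are active:

```
TSOLIB ACTIVATE DATASET('IMS.SDFSRESL')
ALLOCATE FILE(SYSPROC) DATASET('user.written.execs') SHR REUSE
```

If SYSPROC is already allocated in your logon procedure, you may prefer to concatenate your EXEC library to the existing allocation instead of reallocating the file.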

17.2.1 The REXX SPOC environment


The REXX SPOC host environment is made up of:
CSLULXSB TSO command
IMSSPOC subcommands
CSLULGTS REXX function

CSLULXSB
The REXX host command environment is set up by the CSLULXSB TSO command. The
purpose of this command is to establish the IMSSPOC host command environment, make the
CSLULGTS function available, and provide REXX variables for return code and reason code
processing within the REXX program. Example 17-1 shows the call to CSLULXSB to
establish the REXX SPOC environment.
Example 17-1 CSLULXSB TSO command call
Address TSO 'CSLULXSB'

IMSSPOC
Once CSLULXSB has been successfully processed, the IMSSPOC environment is available
to the REXX program. Host commands are typically quoted strings and are passed directly to
the host command processor. Commands IMS, ROUTE, WAIT, CART, and END are
supported and perform specific local IMSSPOC functions. Any other passed string is
assumed to be a command and is passed to SCI. Example 17-2 shows the use of the
IMSSPOC environment.
Example 17-2 IMSSPOC environment call
Address IMSSPOC
"IMS plex1"             /* set the IMSplex name                  */
"ROUTE im1a"            /* set explicit route for commands       */
"CART mytoken"          /* define the command and response token */
"WAIT 0:30"             /* set the OM timeout interval           */
"QRY IMSPLEX SHOW(ALL)"

These are the subcommands processed by the IMSSPOC command:

IMS     sets the name of the target IMSplex to plex1.
ROUTE   explicitly routes this command to IMSplex member IM1A. This subcommand is
        optional; if not specified, the command will be processed by all members of
        the IMSplex.
CART    defines the command and response token to be used for this command to be
        mytoken.
WAIT    sets the OM time-out interval to 30 seconds. This subcommand is optional; if
        not specified, the default time-out interval of five minutes will be used.

CSLULGTS()
This REXX external function is used by the REXX interface to get the response from OM. The
XML tagged response returned by OM is parsed, and each individual line is saved in a REXX
stem variable. The name of the stem variable is specified by the user as the first parameter to
the CSLULGTS function call. Since there could be more than one active command response,
the response for a given command invocation is correlated using the command and
response token (CART) value, which is specified both on the IMSSPOC CART subcommand
and on the CSLULGTS function call used to retrieve the response. Example 17-3 shows the
use of the CSLULGTS function to retrieve the command response.
Example 17-3 CSLULGTS call
results = cslulgts('resp.','mytoken',"0:30")

The parameters passed to the CSLULGTS function are:

resp.     The stem variable to receive the command response
mytoken   The name of the command and response token used to correlate responses
          with commands
0:30      Sets the CSLULGTS function's time-out value to 30 seconds

Return and reason codes


Each of the IMSSPOC host commands and the CSLULGTS function sets return code and
reason code values. The values are provided in the REXX variables:
imsrc
imsreason

The values of the variables are character representations of hex values. For example, the
imsrc value is '08000008X' when a parameter is not correct. The character 'X' at the end of
the string causes REXX to treat the value as a character data type.
Table 17-2 shows the REXX SPOC API return codes and their meanings.

Table 17-2 REXX SPOC API return codes

Return code    Meaning
"00000000X"    Request completed successfully
"08000004X"    Warning
"08000008X"    Parameter error
"08000010X"    Environment error
"08000014X"    System error

Table 17-3 contains the REXX SPOC API warning reason codes and their meanings.

Table 17-3 REXX SPOC API warning reason codes

Reason code    Meaning
"00001000X"    Command still executing

Table 17-4 contains the REXX SPOC API parameter error reason codes and their meanings.

Table 17-4 REXX SPOC API parameter error reason codes

Reason code    Meaning
"00002000X"    Missing or invalid wait value
"00002008X"    Missing or invalid IMSplex value
"00002012X"    Missing or invalid STEM name
"00002016X"    Missing or invalid token name
"00002020X"    Too many parameters
"00002024X"    Request token not found
"00002028X"    Missing or invalid CART value

Table 17-5 contains the REXX SPOC API system error reason codes and their meanings.

Table 17-5 REXX SPOC API system error reason codes

Reason code    Meaning
"00004000X"    Getmain failure
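As a brief sketch, an automation EXEC could interpret these codes after retrieving a response. This fragment assumes the imsrc and imsreason variables have already been set by the REXX SPOC environment described above:

```rexx
/* Sketch: interpret the REXX SPOC return and reason codes.   */
/* The values are character strings ending in 'X', as noted.  */
if imsrc = '00000000X' then
  say 'Request completed successfully'
else if imsrc = '08000004X' & imsreason = '00001000X' then
  say 'Command still executing - call CSLULGTS again later'
else
  say 'Request failed: imsrc='imsrc' imsreason='imsreason
```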

17.2.2 Sample REXX API program


In order to clarify the concepts and processing discussed, we have included a very simple
sample REXX SPOC API program, JCL and sample output.
Example 17-4 contains a sample REXX SPOC API program. The program takes the
command to be submitted as a parameter.
Example 17-4 REXX SPOC API sample
/* Rexx */
parse upper arg theIMScmd
say 'IMS Command Input:'
say theIMScmd
say ''
a='plex1'
b='ops1'
c='0:30'
theims  = 'IMS '||a
thecart = 'CART '||b
thewait = 'WAIT '||c

Address TSO 'CSLULXSB'
if rc = 0 then
  do
    Address IMSSPOC
    theims
    thecart
    thewait
    theIMScmd
    results = cslulgts('resp.',b,c)
    say 'Return and reason code information:'
    say 'imsrc    ='imsrc
    say 'imsreason='imsreason
    say ''
    "END"                         /* end IMSSPOC interface */
    if resp.0 /= '' then
      do
        say 'There were 'resp.0' line(s) returned.'
        say 'Command results (XML output):'
        do indx = 1 to resp.0
          say resp.indx
        end
      end
  end
Exit

Example 17-5 contains a sample batch job stream to execute the REXX SPOC program.
Example 17-5 REXX SPOC sample batch job stream
//USERIDX  JOB (999,ABC),'REXX SPOC',NOTIFY=&SYSUID,
//         CLASS=A,MSGCLASS=T,
//         MSGLEVEL=(1,1)
//*
//SPOC     EXEC PGM=IKJEFT01
//STEPLIB  DD DISP=SHR,DSN=IMS.SDFSRESL
//SYSPROC  DD DISP=SHR,DSN=USERID.CLIST
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 %REXXSPOC QRY TRAN NAME(A*)
 %REXXSPOC DIS Q TRAN
/*

Example 17-6 is sample output from the two executions of the sample REXX program.
Example 17-6 Sample output from the two sample command invocations
READY
 %REXXSPOC QRY TRAN NAME(A*)
IMS Command Input:
QRY TRAN NAME(A*)

Return and reason code information:
imsrc    =0200000CX
imsreason=00003000X

There were 39 line(s) returned.
Command results (XML output):
<?xml version="1.0"?>
<!DOCTYPE imsout SYSTEM "imsout.dtd">
<imsout>
<ctl>
<omname>IM1AOM  </omname>
<omvsn>1.1.0</omvsn>
<xmlvsn>1 </xmlvsn>
<statime>2002.156 21:31:19.998201</statime>
<stotime>2002.156 21:31:20.014262</stotime>
<staseq>B7BC2D585B2F91A6</staseq>
<stoseq>B7BC2D585F1B6F45</stoseq>
<rqsttkn1>OPS1    </rqsttkn1>
<rc>0200000C</rc>
<rsn>00003000</rsn>
</ctl>
<cmderr>
<mbr name="IM4A    ">
<typ>IMS     </typ>
<styp>DBDC    </styp>
<rc>00000004</rc>
<rsn>00001000</rsn>
</mbr>
</cmderr>
<cmd>
<master>IM1A    </master>
<userid>JOUKO3  </userid>
<verb>QRY </verb>
<kwd>TRAN     </kwd>
<input>QRY TRAN NAME(A*)</input>
</cmd>
<cmdrsphdr>
<hdr slbl="TRAN" llbl="Trancode" scope="LCL" sort="a" key="1" scroll="no" len="8" dtype="CHAR" align="left" />
<hdr slbl="MBR" llbl="MbrName" scope="LCL" sort="a" key="4" scroll="no" len="8" dtype="CHAR" align="left" />
<hdr slbl="CC" llbl="CC" scope="LCL" sort="n" key="0" scroll="yes" len="4" dtype="INT" align="right" /></cmdrsphdr>
<cmdrspdata>
<rsp>TRAN(ADDPART ) MBR(IM1A    ) CC(   0) </rsp>
<rsp>TRAN(ADDINV  ) MBR(IM1A    ) CC(   0) </rsp>
</cmdrspdata>
</imsout>
READY
 %REXXSPOC DIS Q TRAN
IMS Command Input:
DIS Q TRAN

Return and reason code information:
imsrc    =00000000X
imsreason=00000000X

There were 35 line(s) returned.
Command results (XML output):
<?xml version="1.0"?>
<!DOCTYPE imsout SYSTEM "imsout.dtd">
<imsout>
<ctl>
<omname>IM1AOM  </omname>
<omvsn>1.1.0</omvsn>
<xmlvsn>1 </xmlvsn>
<statime>2002.156 21:31:23.203989</statime>
<stotime>2002.156 21:31:23.206455</stotime>
<staseq>B7BC2D5B69D95921</staseq>
<stoseq>B7BC2D5B6A7374A4</stoseq>
<rqsttkn1>OPS1    </rqsttkn1>
<rc>00000000</rc>
<rsn>00000000</rsn>
</ctl>
<cmd>
<master>IM1A    </master>
<userid>JOUKO3  </userid>
<verb>DIS </verb>
<kwd>Q        </kwd>
<input>DIS Q TRAN</input>
</cmd>
<msgdata>
<mbr name="IM1A    ">
<msg>CLS  PTY MSG CT TRAN      PSBNAME </msg>
<msg>*NO QUEUES*</msg>
<msg>*2002156/173123*</msg>
</mbr>
<mbr name="IM4A    ">
<msg>CLS  PTY MSG CT TRAN      PSBNAME </msg>
<msg>*NO QUEUES*</msg>
<msg>*2002156/173123*</msg>
</mbr>
</msgdata>
</imsout>
READY
END
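The XML output above lends itself to simple automation. As an illustrative sketch (not part of the IBM supplied samples), a REXX fragment could extract just the <rsp> data lines from the resp. stem populated by CSLULGTS:

```rexx
/* Sketch: pull the command response data out of the XML lines. */
do indx = 1 to resp.0
  if pos('<rsp>', resp.indx) > 0 then
    do
      parse var resp.indx '<rsp>' data '</rsp>'
      say strip(data)    /* for example: TRAN(ADDPART ) MBR(IM1A ) CC( 0) */
    end
end
```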


Chapter 18. Automatic RECON loss notification

In this chapter, we discuss the automatic RECON loss notification (ARLN) enhancement
provided in IMS Version 8. This feature provides IMSplex-wide notification in the event of a
RECON loss.


18.1 The benefits of automatic RECON loss notification


With automatic RECON loss notification (ARLN), the first DBRC instance that detects an
error on one of the RECON data sets not only forces a reconfiguration (activating the spare
copy by copying the good RECON into it), but also automatically notifies all other DBRC
instances about the reconfiguration and the discarded RECON data set. The notification is
routed to the DBRC instances that registered with the Structured Call Interface (SCI) when
they were started. This includes those DBRC instances indicated by message DSP0388I
(added in IMS Version 7): online systems, batch jobs with DBRC, utilities with DBRC, and the
DBRC utility itself (DSPURX00). SCI is used for the communication between all registered
DBRC instances across your IMSplex.
The unavailable RECON data set is discarded instantly by all notified DBRC instances.
All notified DBRC instances issue message DSP1141I in response to the notification. As
long as you are using dynamic allocation for your RECON data sets, all involved DBRC
instances also deallocate the discarded RECON data set. You can then delete and
redefine this RECON data set. The next access to your RECONs will propagate the newly
defined RECON data set as the new available spare RECON data set. Please see
Example 18-5 on page 272 and Example 18-6 on page 273.
The propagation of an unavailable RECON data set across your sysplex (to the SCI
registered DBRC participants) prevents your IMS Version 8 subsystems from holding the
allocation of the original RECON data set, possibly for a long time. Prior to IMS Version 8, the
deallocation was usually done by the next access to the RECON, so you either had to wait, or
you forced it by issuing a command against the RECON from every active online subsystem.
Figure 18-1 gives you an overview of how DBRC cooperates with SCI.

[Figure 18-1 shows several IMS control regions and DBRC instances, each registered with
the SCI address space on its own OS/390 image. The SCI address spaces communicate
through XCF and the coupling facility, and all DBRC instances share the same set of
RECONs.]

Figure 18-1 DBRC with SCI

18.2 Getting started with automatic RECON loss notification


The use of the automatic RECON loss notification feature is optional, and it is automatically
enabled if:
The SCI address space is up and running on the same OS/390 image
The DBRC instance registers with SCI as a member of SCI's IMSplex

The first DBRC instance that registers with SCI providing the IMSplex name saves this name
in the RECON and invokes ARLN.
Keep in mind that once you have invoked ARLN by specifying this IMSplex name, all following
IMS Version 8 DBRC instances using the same RECONs must specify the same IMSplex
name; otherwise they will fail, issuing the following message:
DSP1136A RECON ACCESS DENIED, IMSPLEX NAME nnnnn NOT VALID

This message is also shown in Example 18-2 on page 271 and Example 18-4 on page 272.
We refer to any interested party accessing the RECON as a DBRC instance. This
includes all IMS online and batch regions, most IMS utility programs, and the DBRC utility
(DSPURX00), and they all need to specify the same IMSplex name. The RECON loss
notification, as well as the initialization and termination of any DBRC instance, is
propagated to all other running DBRC instances as long as they are registered with SCI.

Note: DBRC can use SCI even if its IMS control region is not using SCI. When both DBRC
and IMS control region register with SCI, they must specify the same IMSplex name (they
cannot join different IMSplexes).
The SCI itself is assigned the IMSplex name in the CSLSIxxx PROCLIB member by the
IMSPLEX parameter, as follows:
IMSPLEX(NAME=PLEX1),

For the description of the CSLSIxxx member, refer to Set up the Structured Call Interface on
page 293.

18.2.1 Two choices to enable ARLN


There are two ways to indicate the registration of a DBRC instance with SCI. You can specify
an IMSplex name as an execution parameter within the EXEC statement and/or customize
the new DBRC SCI registration exit (DSPSCIX0). Here we explain both options.

IMSPLEX execution parameter


The IMSplex name can be set and referenced by the execution parameter IMSPLEX=. It is the
24th positional parameter for batch regions (PGM=DFSRRC00). Additionally, the IMS Version
8 PROCLIB procedures used for the DBRC address space also provide this new keyword.
Using the IMSPLEX parameter in this way requires consistent changes in your JCL for all
DBRC instances for a given set of RECONs.

DBRC SCI registration exit (DSPSCIX0)


We recommend that you use your own customized user exit rather than the execution
parameter, if you feel comfortable coding this exit routine. The exit routine must be named
DSPSCIX0 and may be used to decide whether a DBRC instance should register with SCI
and which IMSplex name it should use.

The data set name of one of the RECON data sets (RECON1, RECON2 or RECON3) is
passed to the exit routine. If an IMSPLEX= execution parameter is provided, this value (one
to five characters) is passed to the exit routine as well.
The exit passes back a return code to indicate the intended registration action to take and the
IMSplex name to use, if the decision was made to register this DBRC instance.
The return codes are listed in Table 18-1 below.
Table 18-1 Return codes of the DSPSCIX0 exit

Return code   Meaning and comment
0000          IMSplex name is used to register with SCI (ARLN invoked)
0004          No SCI registration; RECON access fails if the RECON contains an
              IMSplex name. Use this to avoid (accidental) registrations or if you do
              not want to use ARLN. This also prevents any job step with an IMSplex
              name specified from setting the IMSplex name in the RECONs, if it is
              not yet set.
0008          No SCI registration; any IMSplex name found in the RECON is ignored
              and the RECON access is allowed. Use this with caution! Consider
              providing a special copy of your exit routine with this option for use
              in emergency cases only. Access to this exit routine should be
              restricted to authorized personnel only!
0012          RECON access denied; SCI registration fails or could not be attempted
              for severe reasons, for example, because of an exit error. This
              generally results in an IMS user abend 2480.

18.2.2 How to migrate and fall back from automatic RECON loss notification
For various reasons, for example during a test period, you may want to change the IMSplex
name, or to stop using ARLN and remove the IMSplex name from the RECON. The IMSplex
name is saved in the RECON, and it can be changed by using the following command:
CHANGE.RECON IMSPLEX(IMSplex name)

If you want to stop using ARLN, the IMSplex name can be removed from the RECON by using
the following command:
CHANGE.RECON NOPLEX

To use the commands mentioned above in a DBRC command utility execution, you still need
to provide the valid old IMSplex name, even if you want to go back to using the NOPLEX
option (otherwise RECON access is denied). But again, be aware that any DBRC instances
registering with SCI after your changes need to be adjusted for the new IMSplex name; or, in
the case of the NOPLEX option, the next instance starting with any IMSplex name (provided
by execution parameter or modified user exit) is the first one that would save that IMSplex
name into the RECON.
Please be aware of the following considerations:
This CHANGE command cannot be used for the initial setting of the IMSplex name in
the RECON. The initial value of the IMSplex name in the RECON is set by the first
DBRC instance providing the IMSplex name, either as an execution parameter or through
the user exit. The sample exit tries to find a table entry associating the RECON data set
name with an IMSplex name.
This CHANGE command is not supported for /RMCHANGE usage; it may only be invoked
with a DBRC command utility execution.
This CHANGE command cannot be used if any DBRC instance using this RECON is
active, except for any DBRC instance which was started before ARLN was activated.
Any subsequent commands in the DBRC command utility (DSPURX00) job step will fail.
We recommend restricting access to this command to authorized personnel.
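As a sketch, removing the IMSplex name could be done with a DBRC command utility job like the following. The job card and library name are examples only, and the RECONs are located through MDA members or RECON DD statements as usual:

```
//CHGRCN   JOB (999,POK),'DBRC UTIL',CLASS=A,MSGCLASS=T
//NOPLEX  EXEC PGM=DSPURX00,PARM=('IMSPLEX=PLEX1')
//STEPLIB  DD DISP=SHR,DSN=IMS.SDFSRESL
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  CHANGE.RECON NOPLEX
/*
```

Note that the old IMSplex name (PLEX1 here) must still be supplied on the IMSPLEX= parameter for the RECON access to succeed.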

18.3 DSPSCIX0 details


Since the exit routine could be used with DBRC instances using different RECONs, it
should be able to handle different RECONs, perhaps all of your RECON sets. Our sample exit
routine can easily be modified to do this. There is a table (in the PLEXTABL DSECT)
in which you can create table entries for each set of RECONs; some sample entries are
provided as a model. For those sets using ARLN, the entries should include the IMSplex
name and an indicator that RC=00 is to be returned. You should override any passed
IMSPLEX execution parameter as well.
You can code entries, including an indicator for RC=04 to be returned, for other sets of
RECONs not intended for use with ARLN. As mentioned above, this allows those
RECONs to be accessed without SCI registration of the DBRC instance.
Using SMP/E for your own common version of this exit routine, you can provide a USERMOD
to apply to all of your zones and install it into the associated SDFSRESL libraries, so that it is
implemented in all of your environments. Please refer to Example 18-8 on page 274.

The IMS provided default exit


If no DBRC SCI exit is provided, IMS behaves as if the IBM supplied default exit were
used. You will find a sample DSPSCIX0 exit in the ADFSSMPL distribution library. Please
take a few minutes to review this exit. The supplied default exit routine works as described
below:
If the IMSPLEX= parameter is specified:
  Returns the specified IMSplex name with RC=00
If the IMSPLEX= parameter is not specified:
  Uses a lookup table to match the passed RECON data set name with an entry containing
  the corresponding IMSplex name (the table is empty as supplied; you need to modify it)
  Returns the IMSplex name from the table entry with RC=00
If the IMSPLEX= parameter is not specified and no match is found in the table:
  Sets RC=04, meaning no SCI registration will be done. No IMSplex name will be saved
  in the RECON, and if the RECON contains an IMSplex name, RECON access is denied.

Note: The DSPSCIX0 exit must be in an authorized library. If the library is concatenated,
only the data set containing the exit needs to be authorized. DBRC performs a specific
check that the data set the module is loaded from is contained in the Authorized Program
Facility (APF) list.
The SCI is a prerequisite for automatic RECON loss notification. That means ARLN is only
available for IMS Version 8 systems running in an IMSplex environment. All IMS Version 6
and IMS Version 7 systems using DBRC are still allowed to share the RECONs, even though
they are not able to register with SCI and cannot support ARLN.

18.4 Examples
Here are some examples relating to the explanations and messages mentioned before.
Example 18-1 SCI registration
J E S 2  J O B  L O G  --  S Y S T E M  S C 5 3  --  N O D E

17.08.02 JOB10316 ---- THURSDAY, 13 JUN 2002 ----
17.08.02 JOB10316 IRR010I  USERID JOUKO1 IS ASSIGNED TO THIS JOB.
17.08.03 JOB10316 ICH70001I JOUKO1 LAST ACCESS AT 17:06:13 ON THURSDAY, JUNE
17.08.03 JOB10316 $HASP373 LISTRCN STARTED - INIT A - CLASS A - SYS SC53
17.08.03 JOB10316 IEF403I LISTRCN - STARTED - TIME=17.08.03 - ASID=03F3 - SC53
17.08.03 JOB10316 +DSP1123I DBRC REGISTERED WITH IMSPLEX PLEX1
17.08.04 JOB10316                        --TIMINGS (MINS.)--
17.08.04 JOB10316 -JOBNAME STEPNAME PROCSTEP    RC   EXCP    CPU    SRB  CLOCK
17.08.04 JOB10316 -LISTRCN D                    00    232    .00    .00     .0
17.08.04 JOB10316 IEF404I LISTRCN - ENDED - TIME=17.08.04 - ASID=03F3 - SC53
17.08.04 JOB10316 -LISTRCN ENDED.  NAME-LISTINGS  TOTAL CPU TIME=
17.08.04 JOB10316 $HASP395 LISTRCN ENDED
------ JES2 JOB STATISTICS ------
...
   2 //D        EXEC PGM=DSPURX00,PARM=('IMSPLEX=PLEX1')
   3 //STEPLIB  DD DISP=SHR,DSN=IMSPSA.IMS0.SDFSRESL
   4 //         DD DISP=SHR,DSN=IMSPSA.IM0A.MDALIB
...
DSP1123I DBRC REGISTERED WITH IMSPLEX PLEX1
IMS VERSION 8 RELEASE 1  DATA BASE RECOVERY CONTROL           PAGE 0002
LIST.RECON STATUS
2002.164 17:08:03.2 -04:00  LISTING OF RECON                  PAGE 0003
-------------------------------------------------------------------------------
RECON    RECOVERY CONTROL DATA SET, IMS V8R1
         DMB#=13                           INIT TOKEN=02015F0058438F
         NOFORCER LOG DSN CHECK=CHECK17    STARTNEW=NO
         TAPE UNIT=3480  DASD UNIT=3390  TRACEOFF  SSID=IM1A
         LIST DLOG=YES  CA/IC/LOG DATA SETS CATALOGED=YES
         MINIMUM VERSION = 6.1
         LOG RETENTION PERIOD=00.001 00:00:00.0
         COMMAND AUTH=NONE  HLQ=**NULL**
         SIZALERT DSNUM=15  VOLNUM=16  PERCENT= 95
         LOGALERT DSNUM=3   VOLNUM=16
         TIME STAMP INFORMATION:
           TIMEZIN = %SYS
           -LABEL-  -OFFSET-
           UTC      +00:00
         OUTPUT FORMAT: DEFAULT = LOCORG LABEL PUNC YYYY
                        CURRENT = LOCORG LABEL PUNC YYYY
         IMSPLEX = PLEX1

 -DDNAME-  -STATUS-  -DATA SET NAME-
 RECON1    COPY1     IMSPSA.IM0A.RECON1
 RECON2    COPY2     IMSPSA.IM0A.RECON2
 RECON3    SPARE     IMSPSA.IM0A.RECON3

If you use the provided sample SCI registration user exit, you get the SCI registration
message DSP1123I with the additional info USING EXIT:

19.00.40 JOB10437 +DSP1123I DBRC REGISTERED WITH IMSPLEX PLEX1 USING EXIT

You get this message even though you have stated the correct IMSPLEX parameter on the
EXEC statement. It indicates that the final decision is made by the exit.
If there is already an IMSplex name in the RECON (PLEX1), and you are trying to access
this RECON without any execution parameter or an SCI user exit, the access is denied and
return code 12 is issued, as shown in Example 18-2:
Example 18-2 RECON access denied
...
19.16.52 JOB09571 IEF403I LISTRCN - STARTED - TIME=19.16.52 - ASID=0035 - SC54
19.16.53 JOB09571 +DSP1136A RECON ACCESS DENIED, IMSPLEX NAME NOT VALID
19.16.53 JOB09571 +DSP1136A RECON ACCESS DENIED, IMSPLEX NAME NOT VALID
19.16.53 JOB09571                        --TIMINGS (MINS.)--
19.16.53 JOB09571 -JOBNAME STEPNAME PROCSTEP    RC   EXCP    CPU    SRB  CLOCK
19.16.53 JOB09571 -LISTRCN D                    12    170    .00    .00     .0
19.16.53 JOB09571 IEF404I LISTRCN - ENDED - TIME=19.16.53 - ASID=0035 - SC54
...
   1 //LISTRCN JOB (999,POK),'LISTINGS',NOTIFY=&SYSUID,
     //         CLASS=A,MSGCLASS=T,TIME=1439,
     //         REGION=0M,MSGLEVEL=(1,1)
     /*JOBPARM SYSAFF=SC54
     //*JOBPARM SYSAFF=SC47
     //********************************************************************
     //* SC53 IM1SC , SC54 IM3SC , SC67 IM4SC
     //* IMS0 RESLIB WITH DSPSCIX0 MODIFIED WITH UMSCIX0
     //* IM0A RESLIB WITHOUT
     //********************************************************************
     IEFC653I SUBSTITUTION JCL - (999,POK),'LISTINGS',NOTIFY=JOUKO1,CLASS=A
     MSGLEVEL=(1,1)
   2 //D        EXEC PGM=DSPURX00        <==== no parm string !
...

If the IMSplex name stated on the execution parameter is wrong (no SCI address space
responsible for an IMSplex called "PLEX4" is found on this LPAR), the "SCI registration
failed" message is issued, as shown in Example 18-3:
Example 18-3 SCI registration failed
J E S 2  J O B  L O G  --  S Y S T E M  S C 5 3  --  N O D E

20.03.21 JOB10471 ---- THURSDAY, 13 JUN 2002 ----
20.03.21 JOB10471 IRR010I  USERID JOUKO1 IS ASSIGNED TO THIS JOB.
20.03.22 JOB10471 $HASP373 LISTRCN STARTED - INIT A - CLASS A - SYS SC53
20.03.22 JOB10471 IEF403I LISTRCN - STARTED - TIME=20.03.22 - ASID=03F3 - SC53
20.03.22 JOB10471 +DSP1135A SCI REGISTRATION FAILED, IMSPLEX NAME=PLEX4,
                  RC=01000010, RSN=00004000
20.03.22 JOB10471                        --TIMINGS (MINS.)--
20.03.22 JOB10471 -JOBNAME STEPNAME PROCSTEP    RC   EXCP    CPU    SRB  CLOCK
20.03.22 JOB10471 -LISTRCN D                    12    129    .00    .00     .0
20.03.22 JOB10471 IEF404I LISTRCN - ENDED - TIME=20.03.22 - ASID=03F3 - SC53
20.03.22 JOB10471 -LISTRCN ENDED.  NAME-LISTINGS  TOTAL CPU TIME=
20.03.22 JOB10471 $HASP395 LISTRCN ENDED
...
     //         CLASS=A,MSGCLASS=T,TIME=1439,
     //         REGION=0M,MSGLEVEL=(1,1)
     /*JOBPARM SYSAFF=SC53
     //********************************************************************
     //* LPAR & SCI asid STARTED :
     //* SC53  IM1ASC  PLEX1 (CSLSIxxx mbr with stmt IMSPLEX(NAME=PLEX1)
     //********************************************************************
     IEFC653I SUBSTITUTION JCL - (999,POK),'LISTINGS',NOTIFY=JOUKO1,CLASS=A
     MSGLEVEL=(1,1)
   2 //D        EXEC PGM=DSPURX00,PARM=('IMSPLEX=PLEX4')
   3 //STEPLIB  DD DISP=SHR,DSN=IMSPSA.IMS0.SDFSRESL

If there are two SCI address spaces running on the same LPAR (another SCI participating in
another IMSplex, PLEXC for instance), and you specify the wrong IMSplex name (PLEXC)
not matching the intended RECON (whose IMSPLEX value is PLEX1), you get the
registration message (issued by the other, unwanted SCI); however, the RECON ACCESS
DENIED message also appears, because of the mismatch between the IMSPLEX name
PLEX1 already inserted into your RECON and the wrong name passed by your execution
parameter:
Example 18-4 SCI registration even so RECON access denied
J E S 2  J O B  L O G  --  S Y S T E M  S C 5 3  --  N O D E

20.10.37 JOB10479 ---- THURSDAY, 13 JUN 2002 ----
20.10.37 JOB10479 IRR010I  USERID JOUKO1 IS ASSIGNED TO THIS JOB.
20.10.38 JOB10479 ICH70001I JOUKO1 LAST ACCESS AT 20:10:15 ON THURSDAY, JUNE
20.10.38 JOB10479 $HASP373 LISTRCN STARTED - INIT A - CLASS A - SYS SC53
20.10.38 JOB10479 IEF403I LISTRCN - STARTED - TIME=20.10.38 - ASID=03F3 - SC53
20.10.38 JOB10479 +DSP1123I DBRC REGISTERED WITH IMSPLEX PLEXC USING EXIT
20.10.38 JOB10479 +DSP1136A RECON ACCESS DENIED, IMSPLEX NAME PLEXC NOT VALID
20.10.38 JOB10479 +DSP1136A RECON ACCESS DENIED, IMSPLEX NAME PLEXC NOT VALID
20.10.39 JOB10479                        --TIMINGS (MINS.)--
20.10.39 JOB10479 -JOBNAME STEPNAME PROCSTEP    RC   EXCP    CPU    SRB  CLOCK
20.10.39 JOB10479 -LISTRCN D                    12    163    .00    .00     .0
20.10.39 JOB10479 IEF404I LISTRCN - ENDED - TIME=20.10.39 - ASID=03F3 - SC53
20.10.39 JOB10479 -LISTRCN ENDED.  NAME-LISTINGS  TOTAL CPU TIME=
20.10.39 JOB10479 $HASP395 LISTRCN ENDED
...
   2 //D        EXEC PGM=DSPURX00,PARM=('IMSPLEX=PLEXC')
...

Again, the DSP1123I message is issued with "... USING EXIT" if the provided sample user
exit is used.
In the following example you can see the DSP0388I message listing the subsystems up and
running, identified by the active subsystem records flagged in the RECON. We used the
CHANGE.RECON REPLACE(RECON1) control statement, running the DBRC batch utility, to force
the notification message. This statement caused RECON2 to be copied into the SPARE and
RECON1 to be discarded.
Example 18-5 RECON change forced by REPLACE(RECON1)
J E S 2  J O B  L O G  --  S Y S T E M  S C 5 3  --  N O D E

15.27.34 JOB10976 ---- FRIDAY, 14 JUN 2002 ----
15.27.34 JOB10976 IRR010I  USERID JOUKO1 IS ASSIGNED TO THIS JOB.
15.27.35 JOB10976 ICH70001I JOUKO1 LAST ACCESS AT 14:50:40 ON FRIDAY, JUNE 14
15.27.35 JOB10976 $HASP373 CHNGRCN STARTED - INIT A - CLASS A - SYS SC53
15.27.35 JOB10976 IEF403I CHNGRCN - STARTED - TIME=15.27.35 - ASID=03F3 - SC53
15.27.35 JOB10976 +DSP1123I DBRC REGISTERED WITH IMSPLEX PLEX1 USING EXIT
15.27.36 JOB10976 +DSP0380I RECON2 COPY TO RECON3 STARTED
15.27.36 JOB10976 +DSP0388I SSID=IM1A FOUND
15.27.36 JOB10976 +DSP0388I SSID=IM3A FOUND
15.27.36 JOB10976 +DSP0388I 0002 SSYS RECORD(S) IN THE RECON AT RECONFIGURATION
15.27.36 JOB10976 +DSP0381I COPY COMPLETE, RC = 000
15.27.36 JOB10976                        --TIMINGS (MINS.)--
15.27.36 JOB10976 -JOBNAME STEPNAME PROCSTEP    RC   EXCP    CPU    SRB  CLOCK
15.27.36 JOB10976 -CHNGRCN D                    00    241    .00    .00     .0
15.27.36 JOB10976 IEF404I CHNGRCN - ENDED - TIME=15.27.36 - ASID=03F3 - SC53
15.27.36 JOB10976 -CHNGRCN ENDED.  NAME-LISTINGS  TOTAL CPU TIME=
15.27.36 JOB10976 $HASP395 CHNGRCN ENDED
------ JES2 JOB STATISTICS ------
  14 JUN 2002 JOB EXECUTION DATE
          21 CARDS READ
         104 SYSOUT PRINT RECORDS
           0 SYSOUT PUNCH RECORDS
           6 SYSOUT SPOOL KBYTES
        0.02 MINUTES EXECUTION TIME
   1 //CHNGRCN JOB (999,POK),'LISTINGS',NOTIFY=&SYSUID,
     //         CLASS=A,MSGCLASS=T,TIME=1439,
     //         REGION=0M,MSGLEVEL=(1,1)
     /*JOBPARM SYSAFF=SC53
DSP1123I DBRC REGISTERED WITH IMSPLEX PLEX1 USING EXIT
IMS VERSION 8 RELEASE 1  DATA BASE RECOVERY CONTROL           PAGE 0002
 CHANGE.RECON REPLACE(RECON1)
DSP0380I RECON2 COPY TO RECON3 STARTED
DSP0388I SSID=IM1A FOUND
DSP0388I SSID=IM3A FOUND
DSP0388I 0002 SSYS RECORD(S) IN THE RECON AT RECONFIGURATION
DSP0381I COPY COMPLETE, RC = 000
DSP0242I RECON1 DSN=IMSPSA.IM0A.RECON1
DSP0242I REPLACED BY
DSP0242I RECON3 DSN=IMSPSA.IM0A.RECON3
DSP0203I COMMAND COMPLETED WITH CONDITION CODE 00
DSP0220I COMMAND COMPLETION TIME 2002.165 15:27:36.4 -04:00
IMS VERSION 8 RELEASE 1  DATA BASE RECOVERY CONTROL           PAGE 0003
DSP0211I COMMAND PROCESSING COMPLETE
DSP0211I HIGHEST CONDITION CODE = 00
******************************** BOTTOM OF DATA ********************************

In the following job log example you can see the notification message in one of the
registered DBRC instances.
Example 18-6 RECON loss notification
                       J E S 2  J O B  L O G  --  S Y S T E M  S C 5 3  --  N O D E
12.34.19 STC10813 ---- FRIDAY, 14 JUN 2002 ----
12.34.19 STC10813  IEF695I START IM1ADBRC WITH JOBNAME IM1ADBRC IS ASSIGNED TO USER
12.34.19 STC10813  $HASP373 IM1ADBRC STARTED
12.34.19 STC10813  IEF403I IM1ADBRC - STARTED - TIME=12.34.19 - ASID=00C7 - SC53
12.34.21 STC10813  DSP1123I DBRC REGISTERED WITH IMSPLEX PLEX1 USING EXIT
12.34.21 STC10813  DFS3613I - DRC TCB INITIALIZATION COMPLETE IM1A
15.27.36 STC10813  DSP1141I RECON LOSS NOTIFICATION RECEIVED
   1 //IM1ADBRC JOB MSGLEVEL=1
   2 //STARTING EXEC IM1ADBRC,IMSID=IM1A
   3 XXIM1ADBRC PROC RGN=6M,DPTY='(14,15)',SOUT=A,
     XX         IMSID=,SYS='IMS0.',SYS2='IM0A.'
     XX*        IMSID=,SYS='IM1A.',SYS2='IM0A.'
   4 XXIEFPROC EXEC PGM=DFSMVRC0,REGION=&RGN,
     XX         DPRTY=&DPTY,PARM=(DRC,&IMSID)

...

Chapter 18. Automatic RECON loss notification     273

If you try to change your RECON IMSplex value while any other SCI-registered DBRC
instance is running within this IMSplex, you will get the following informational message,
as shown in Example 18-7:
Example 18-7 DSP1137I message
                       J E S 2  J O B  L O G  --  S Y S T E M  S C 5 3  --  N O D E
12.54.41 JOB10833 ---- FRIDAY, 14 JUN 2002 ----
12.54.41 JOB10833  IRR010I  USERID JOUKO1 IS ASSIGNED TO THIS JOB.
12.54.41 JOB10833  ICH70001I JOUKO1 LAST ACCESS AT 12:08:16 ON FRIDAY, JUNE 14
12.54.41 JOB10833  $HASP373 LISTRCN STARTED - INIT A - CLASS A - SYS SC53
12.54.41 JOB10833  IEF403I LISTRCN - STARTED - TIME=12.54.41 - ASID=03F3 - SC53
12.54.41 JOB10833  +DSP1123I DBRC REGISTERED WITH IMSPLEX PLEX1 USING EXIT
12.54.42 JOB10833  +DSP1137I IMSPLEX MAY NOT BE CHANGED, DBRC ACTIVE FOR
12.54.42 JOB10833  +DSP1137I IM1ADBRC
12.54.42 JOB10833  +DSP1137I IM3ADBRC
12.54.42 JOB10833  --TIMINGS (MINS.)--
12.54.42 JOB10833  -JOBNAME  STEPNAME PROCSTEP    RC   EXCP    CPU    SRB  CLOCK
12.54.42 JOB10833  -LISTRCN           D           12    211    .00    .00     .0
12.54.42 JOB10833  IEF404I LISTRCN - ENDED - TIME=12.54.42 - ASID=03F3 - SC53
12.54.42 JOB10833  -LISTRCN ENDED.  NAME-LISTINGS  TOTAL CPU TIME=
...
DSP1123I DBRC REGISTERED WITH IMSPLEX PLEX1 USING EXIT
IMS VERSION 8 RELEASE 1  DATA BASE RECOVERY CONTROL                    PAGE 0002
 CHANGE.RECON NOPLEX
DSP1137I IMSPLEX MAY NOT BE CHANGED, DBRC ACTIVE FOR
DSP1137I IM1ADBRC
DSP1137I IM3ADBRC
DSP0209I PROCESSING TERMINATED WITH CONDITION CODE = 12
DSP0217I THE FOLLOWING SYSIN RECORDS HAVE BEEN SKIPPED:
     LIST.RECON STATUS
DSP0218I END OF SKIPPED SYSIN RECORDS
******************************** BOTTOM OF DATA ********************************

Any subsequent command after the CHANGE.RECON IMSPLEX(...) | NOPLEX control statement
will not be executed, as indicated by the DSP0217I message. This message is issued at the
bottom of Example 18-7 for the LIST.RECON command.
In the following examples, we use a different SCI user exit, modified with some RECON data
set entries, maintained by SMP/E, and intended for global use, for example, across your test
and development environments (and their RECONs), ignoring any IMSplex value specified in
the execution parameter. The following examples are shown for documentation purposes
only; your version should be modified to suit your own environment.
In this simple user exit modification, there is only one set of RECONs (assigned to PLEX1)
intended to register with SCI and use ARLN, whereas the other entries
(IMSPSA.IM0B.RECON*, IMS810C.RECON*) are intended to bypass any SCI registration.
The first example is a job including sample JCL for the compile and link of the sample exit.
It is only intended to be used as input for an SMP/E JCLINREPORT job to create the
necessary SMP/E target zone elements, preparing the target for the apply step (of course,
you can instead define the MOD and LMOD elements explicitly with the known SMP/E
statements):
Example 18-8 JCLINREPORT input
********************************* Top of Data ***************************
//IMSEXITS JOB (999,POK),
// 'HK',
// CLASS=A,MSGCLASS=X,MSGLEVEL=(1,1),
// NOTIFY=&SYSUID,
// REGION=64M
//*
//* JCLLIB ORDER=(IMS810C.PROCLIB)
/*JOBPARM L=9999,SYSAFF=*
//*********************************************************************
//* IMS EXIT FROM SAMPLE LIB
//* !! THIS IS ONLY THE DECK FOR JCLINREPORT !!
//* C.SYSINs LLQ DSNAME IS MAPPED TO SRC ENTRY DISTLIB(DDD)
//* L.SYSLMODs LLQ DSNAME IS MAPPED TO LMOD SYSLMOD(DDD)
//* INCLUDE DDDEF(MODUL) IS CREATING MOD ENTRY
//*********************************************************************
//C        EXEC PGM=ASMA90,PARM='OBJECT,NODECK'
//SYSPRINT DD SYSOUT=*
//SYSLIB   DD DISP=SHR,DSN=IMS810C.ADFSMAC
//         DD DISP=SHR,DSN=SYS1.MACLIB
//         DD DISP=SHR,DSN=ASM.SASMMAC2
//SYSLIN   DD ...
...
//SYSIN    DD DISP=SHR,DSN=IMS810C.ADFSSMPL(DSPSCIX0)
//L        EXEC PGM=IEWL,
//         PARM='XREF,LIST,RENT',COND=(0,LT,C)
//SYSPRINT DD SYSOUT=*
//SYSLMOD  DD DISP=SHR,DSN=IMS810C.SDFSRESL
//SYSUT1   DD UNIT=(SYSALLDA,SEP=(SYSLMOD,SYSLIN)),
//         DISP=(,DELETE,DELETE),SPACE=(CYL,(1,1))
//SYSLIN   DD DISP=(OLD,DELETE,DELETE),
//         DSN=*.C.SYSLIN,VOL=REF=*.C.SYSLIN
//         DD *
  INCLUDE ADFSLOAD(DSPSCIX0)
  ENTRY DSPSCIX0
  MODE AMODE(31),RMODE(ANY)
  NAME DSPSCIX0(R)
/*

After creation of the MOD and LMOD elements (and after the RECEIVE process), applying
the USERMOD shown in Example 18-9 (run APPLY CHECK first!) will link the changed
(++SRCUPD'ed) SCI user exit into your target SDFSRESL:
Example 18-9 SMP/E USERMOD for an SCI user exit change
++ USERMOD(UMSCIX0) /* DSPSCIX0 USER EXIT */.
++ VER(P115) FMID(HMK8800) .
++ SRCUPD(DSPSCIX0) DISTLIB(ADFSSMPL).
./ CHANGE NAME=DSPSCIX0
*/*01* CHANGE-ACTIVITY: 05/24/02 table entries for                    */
*/*       IMSPSA.IM0A.RECON1,2,3  using PLEX1                         */
*/*       IMSPSA.IM0B.RECON1,2,3  no ARLN                             */
*/*       IMS810C.RECON1,2,3      no ARLN                             */
*--- DONT CARE ABOUT EXEC VALUE , ONLY TABLE IN USE
*        L     R4,8(,R1)            R4 = A(IMSPLEX VALUE FROM EXEC CARD)
*--------------------------------------------------------------------*
* IF IMSPLEX= SPECIFIED ON EXEC CARD, IGNORE IT !
*--------------------------------------------------------------------*
*        LTR   R4,R4                IMSPLEX= ON EXEC STATEMENT?
*        BZ    NEXECPRM             IF NOT SPECIFIED, BRANCH
*        MVC   0(PNL,R3),0(R4)      ELSE COPY VALUE TO RETURN AREA
*        SR    R15,R15              SET RC00
*        B     EXIT                 AND RETURN TO DBRC
...
         DC    CL(DSNL)'IMSPSA.IM0A.RECON1'  RECON NAME
         DC    CL(PNL)'PLEX1'                IMSPLEX NAME
         DC    XL(RCL)'00000000'             RC00 = USE THE IMSPLEX NAME
         DC    CL(DSNL)'IMSPSA.IM0A.RECON2'  RECON NAME
         DC    CL(PNL)'PLEX1'                IMSPLEX NAME
         DC    XL(RCL)'00000000'             RC00 = USE THE IMSPLEX NAME
         DC    CL(DSNL)'IMSPSA.IM0A.RECON3'  RECON NAME
         DC    CL(PNL)'PLEX1'                IMSPLEX NAME
         DC    XL(RCL)'00000000'             RC00 = USE THE IMSPLEX NAME
         DC    CL(DSNL)'IMSPSA.IM0B.RECON1'  RECON NAME
         DC    CL(PNL)'*****'                IMSPLEX NAME
         DC    XL(RCL)'00000004'             RC04 = NO SCI REGISTRATION
         DC    CL(DSNL)'IMSPSA.IM0B.RECON2'  RECON NAME
         DC    CL(PNL)'*****'                IMSPLEX NAME
         DC    XL(RCL)'00000004'             RC04 = NO SCI REGISTRATION
         DC    CL(DSNL)'IMSPSA.IM0B.RECON3'  RECON NAME
         DC    CL(PNL)'*****'                IMSPLEX NAME
         DC    XL(RCL)'00000004'             RC04 = NO SCI REGISTRATION
         DC    CL(DSNL)'IMS810C.RECON1'      RECON NAME
         DC    CL(PNL)'*****'                IMSPLEX NAME
         DC    XL(RCL)'00000004'             RC04 = NO SCI REGISTRATION
         DC    CL(DSNL)'IMS810C.RECON2'      RECON NAME
         DC    CL(PNL)'*****'                IMSPLEX NAME
         DC    XL(RCL)'00000004'             RC04 = NO SCI REGISTRATION
         DC    CL(DSNL)'IMS810C.RECON3'      RECON NAME
         DC    CL(PNL)'*****'                IMSPLEX NAME
         DC    XL(RCL)'00000004'             RC04 = NO SCI REGISTRATION
./ ENDUP

This exit has been modified to match RECON data set names with IMSplex values regardless
of any IMSPLEX execution parameter. If any DBRC instance is running with a customized
user exit as shown in Example 18-9, it will get access to the right RECON even though a
wrong IMSPLEX name is stated in the execution parameter (Example 18-10):
Example 18-10 modified SCI user exit ignoring execution parameter
                       J E S 2  J O B  L O G  --  S Y S T E M  S C 5 3  --  N O D E
21.07.47 JOB10501  ICH70001I JOUKO1 LAST ACCESS AT 20:10:38 ON THURSDAY, JUNE
21.07.47 JOB10501  $HASP373 LISTRCN STARTED - INIT A - CLASS A - SYS SC53
21.07.47 JOB10501  IEF403I LISTRCN - STARTED - TIME=21.07.47 - ASID=03F3 - SC53
21.07.47 JOB10501  +DSP1123I DBRC REGISTERED WITH IMSPLEX PLEX1 USING EXIT
21.07.48 JOB10501  --TIMINGS (MINS.)--
21.07.48 JOB10501  -JOBNAME  STEPNAME PROCSTEP    RC   EXCP    CPU    SRB  CLOCK
21.07.48 JOB10501  -LISTRCN           D           00    223    .00    .00     .0
21.07.48 JOB10501  IEF404I LISTRCN - ENDED - TIME=21.07.48 - ASID=03F3 - SC53
21.07.48 JOB10501  -LISTRCN ENDED.  NAME-LISTINGS  TOTAL CPU TIME=
21.07.48 JOB10501  $HASP395 LISTRCN ENDED
------ JES2 JOB STATISTICS ------
   1 //LISTRCN JOB (999,POK),'LISTINGS',NOTIFY=&SYSUID,
     //         CLASS=A,MSGCLASS=T,TIME=1439,
     //         REGION=0M,MSGLEVEL=(1,1)
     /*JOBPARM SYSAFF=SC53
     //********************************************************************
     //* LPARS & SCI asids STARTED :
     //* SC53   IM1ASC  PLEX1  (CSLSIxxx mbr with stmt IMSPLEX(NAME=PLEX1)
     //* SC54   IM3ASC  PLEX1  (CSLSIxxx mbr with stmt IMSPLEX(NAME=PLEX1)
     //* SC67   IM4ASC  PLEX1  (CSLSIxxx mbr with stmt IMSPLEX(NAME=PLEX1)
     //*
     //* SC53   IMCCSC  PLEXC  (CSLSIxxx mbr with stmt IMSPLEX(NAME=PLEXC)
     //********************************************************************
...
   2 //D        EXEC PGM=DSPURX00,PARM=('IMSPLEX=PLEXC')
   3 //STEPLIB  DD DISP=SHR,DSN=IMSPSA.IMS0.SDFSRESL
   4 //         DD DISP=SHR,DSN=IMSPSA.IM0A.MDALIB
...
IMS VERSION 8 RELEASE 1  DATA BASE RECOVERY CONTROL
 LIST.RECON STATUS                               2002.164 21:07:47.4 -04:00
LISTING OF RECON
----------------------------------------------------------------------
RECON
  RECOVERY CONTROL DATA SET, IMS V8R1
  DMB#=13                            INIT TOKEN=02015F0058438F
...
  LOGALERT DSNUM=3 VOLNUM=16
  TIME STAMP INFORMATION:
    TIMEZIN = %SYS
      -LABEL- -OFFSET-
      UTC     +00:00
    OUTPUT FORMAT: DEFAULT = LOCORG LABEL PUNC YYYY
                   CURRENT = LOCORG LABEL PUNC YYYY
  IMSPLEX = PLEX1
  -DDNAME- -STATUS- -DATA SET NAME-
  RECON1   COPY1    IMSPSA.IM0A.RECON1
  RECON2   COPY2    IMSPSA.IM0A.RECON2
  RECON3   SPARE    IMSPSA.IM0A.RECON3

Since any DBRC instance, including batch and utility jobs running with DBRC=Y, needs SCI
up and running for the registration and the use of ARLN, you have to make sure of one of the
following:
1. On every LPAR available to schedule your jobs, there is one SCI active that is a member
   of the same IMSplex, or
2. You define scheduling environments in your WLM policy, describing SCI started tasks
   based on unique IMSplex names as elements, to control your batch scheduling across
   your sysplex and to avoid any batch job starting outside of your IMSplex LPARs.
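As a sketch of the second approach (the scheduling environment name SCIPLEX1 and the
job details are our own illustrative assumptions, not part of this project's setup): once a
scheduling environment whose resource is only available on LPARs where an SCI for the
IMSplex is active has been defined in the WLM policy, a DBRC batch job can request it with
the SCHENV keyword on its JOB statement:

```jcl
//LISTRCN  JOB (999,POK),'LISTINGS',CLASS=A,MSGCLASS=T,
//             SCHENV=SCIPLEX1
//*  SCHENV= keeps this job on LPARs where the SCIPLEX1
//*  scheduling environment (and so an SCI for the IMSplex) is available
//D        EXEC PGM=DSPURX00
```

JES then holds the job until it can be converted and executed on a system where the
scheduling environment is available.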


Chapter 19.  Language Environment (LE) dynamic run time options

In this chapter we introduce the new LE dynamic run time options support, which provides a
customized LE environment to selected transactions, users, programs, and LTERMs within
your IMS environment. This functionality is made possible by the new Common Service
Layer. We discuss enabling and using this feature, and offer a brief overview of the related
commands. Additionally, we provide default and recommended LE parameter values for use
in the IMS environment.

© Copyright IBM Corp. 2002. All rights reserved.

19.1 LE overview
Before Language Environment, each high-level language's run time services were distributed
with its compiler product. Each high-level language had its own run time library to
perform these and other functions, as shown in Figure 19-1.

[Figure 19-1 shows COBOL, PL/I, C/C++, and FORTRAN, each with its own compiler,
link-edit, and run time library.]

Figure 19-1 Language specific run time services

Language Environment (LE) replaces the older language-specific run time libraries and is
now a base element of the z/OS, OS/390, VM/ESA, and VSE/ESA operating systems.
Today, LE has become IBM's key language product for all new run time services, as shown in
Figure 19-2. These services are available to the application developer from multiple high-level
languages including COBOL, PL/I, C, C++, and FORTRAN. Even Assembler programs can
use these run time services (as long as they conform to the LE conventions).

[Figure 19-2 shows the COBOL, PL/I, C/C++, and FORTRAN compilers sharing one common
link-edit and run time: Language Environment (callable services interface, common services,
and support routines), with basic support routines (initialization/termination, storage,
messages, conditions), callable services (date, time), and language-specific routines for C,
C++, COBOL, PL/I, and FORTRAN.]

Figure 19-2 LE as a common run time environment

19.2 Defining LE run time options


LE run time options are defined in run time parameter modules that are created and installed
in appropriate libraries. LE provides three levels of option specification:

CEEDOPT     Default options module, provides installation level defaults. Each
            run time option in CEEDOPT must be designated as overrideable or
            non-overrideable. Options designated as non-overrideable cannot be
            overridden in CEEROPT or CEEUOPT.

CEEROPT     Region options module, provides defaults that apply to a dependent
            region. The options specified as overrideable in CEEDOPT may be
            changed or overridden in CEEROPT. You can specify one set of
            options for all programs that run in a dependent region. You can
            segregate transactions by class so only specific transactions run in a
            dependent region, using specific options. This requires use of Library
            Routine Retention (LRR) and a DFSINTxx member of IMS.PROCLIB
            (where xx is a suffix specified by the PREINIT keyword) that includes
            the name CEELRRIN.

CEEUOPT     User options module, provides user options at the program level.
            Run time options specified as overrideable in CEEDOPT and
            CEEROPT may be changed or overridden in CEEUOPT. This module
            is assembled and linked with the program.

There are occasions when the LE run time options may need to be changed: perhaps to
collect problem diagnostic information by producing a dump or collecting trace data, to
enable a debug tool, or to change storage options for an application.
Prior to IMS Version 8, you might have needed to:
   Recompile and relink the application with the new or changed run time options module.
   Stop and restart the dependent region with the new or changed run time options module.
   Recompile and relink LE modules used to supply the run time options.

Dynamic run time option support eliminates the need to change, recompile or reassemble,
and relink-edit modules used to supply run time options for an IMS application or an IMS
dependent region.

19.3 Dynamic run time option support


The Language Environment (LE) enhancement in IMS Version 8 provides a dynamic method
of specifying LE run time options for a transaction, logical terminal (LTERM), user ID, and/or
program.
This new capability requires Operations Manager (OM) and the TSO single point of control
(SPOC).
A new IMS startup parameter, LEOPT=, determines whether or not the IMS system allows LE
parameter overrides.
New IMSplex commands allow the customer to specify LE run time options with 'filters'. A
filter is a keyword (such as TRAN or USERID) on the UPDATE LE, DELETE LE, or QUERY LE
commands. For example, you can start Debug Tool for a particular transaction code but limit
the scope of Debug Tool invocation to a particular individual (user ID). IMSplex commands
require use of the Operations Manager address space and the single point of control (SPOC)
application (or equivalent automated operator).
Additionally, a new IMS exit, DFSBXITA, provides an IMS specific version of the assembler
exit CEEBXITA optimized for this environment. DFSBXITA is used to provide support for
dynamic overrides to LE run time options for a specific application or the entire IMS
environment. The new IMSplex commands UPDATE LE and DELETE LE are then used to
specify the dynamic run time option overrides. The exit issues the DL/I INQY LERUNOPT call
to retrieve the run time option overrides that have been set by the UPDATE LE and/or
DELETE LE commands.

19.4 DFSCGxxx
The DFSCGxxx PROCLIB member is used to specify parameters related to the Common
Service Layer, Operations Manager, and the Resource Manager which are common to all
IMS subsystems that are in the IMSplex. The suffix is specified on the CSLG= parameter.
A new keyword in the DFSCGxxx member, LEOPT=Y | N, indicates whether or not IMS
applications running on this system can dynamically override LE run time parameters.
LEOPT=Y indicates IMS should allow override parameters, if they exist for the application
that is executing. This enables the DL/I INQY call to retrieve the address of the run time
option overrides.
LEOPT=N indicates IMS should not allow any overrides. The DL/I INQY call will not return the
address of the run time option overrides. N is the default.
Regardless of the LEOPT= specification, the UPDATE LE and DELETE LE commands perform
updates to the LE run time options; however the run time option overrides (updates) are not
used unless LEOPT=Y is also used.
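As a minimal sketch of the relevant DFSCGxxx statements (the IMSplex name is illustrative,
and a real member carries further CSL parameters; only LEOPT= is the new keyword
discussed here):

```text
IMSPLEX=PLEX1
LEOPT=Y
```

With LEOPT=Y in effect, the DL/I INQY LERUNOPT call can return the address of any
applicable override string to DFSBXITA.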

19.5 New commands and enhanced DL/I INQY call


Commands issued through the SPOC allow you to specify LE run time options with 'filters'.
Filters are one or more of the keywords on the command (that is, TRAN, USERID, PGM, or
LTERM). The commands are used to change the LE run time options.
Commands are issued from, and responses received at, the SPOC.
The DL/I INQY call with the new LERUNOPT subfunction is used to retrieve the dynamic LE
run time options.
For complete information on the new commands, refer to IMS Version 8: Command
Reference, SC27-1291.

19.5.1 Update
The UPDATE LE command is used to change LE run time options for a transaction code,
logical terminal (LTERM name), user ID and/or application program. At least one of the
keywords and filters (TRAN, LTERM, USERID, or PGM) must be specified on the UPDATE
LE command.
The SET keyword is used to:
Specify the LE run time options that you want to dynamically change. The new or changed
run time options are designated by the LERUNOPTS specification(s).
Specify whether IMS is to enable (YES) or disable (NO) the ability to dynamically override
LE run time options. The LEOPT keyword is used for this purpose. The UPD LE
SET(LEOPT(YES)) command enables LE dynamic run time options for every IMS in the
IMSplex. The command CANNOT be directed to a specific IMS in the IMSplex.


When dynamic run time option support is disabled (LEOPT(NO)), UPDATE LE commands
issued to change run time options still result in run time option updates in an area of storage
used internally by IMS. However, the updates are not used until dynamic option support is
enabled with SET(LEOPT(YES)). The UPDATE command is entered from the SPOC. The
command is routed to all IMS systems in the IMSplex. The dynamic run time option overrides
supplied by the UPDATE LE command are only allowed when dynamic LE overrides have
been enabled.
The syntax for the UPDATE LE command is as follows:
UPDATE LE TRAN(trancode) LTERM(ltermname) USERID(userid) PGM(pgmname)
SET(LERUNOPTS(xxxxxxxx))

Example 19-1 shows two sample UPDATE LE commands. Note that generic parameters on
the filters are not usable with the UPDATE LE command.
Example 19-1 Update LE command

UPD LE TRAN(PART) LTERM(TERM1) USERID(USER1) SET(LERUNOPTS(xxxxxxxxx))


UPD LE SET(LEOPT(YES))

Note: The LE override function only takes place at program scheduling time, so filtering on
LTERM or USERID is only effective for the first message processed following program
scheduling.
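Tying this to the Debug Tool scenario mentioned in 19.3, a sketch of the command sequence
follows (the transaction code, user ID, and the bare TEST option are illustrative; your Debug
Tool setup may require TEST suboptions):

```text
UPD LE SET(LEOPT(YES))
UPD LE TRAN(PART) USERID(USER1) SET(LERUNOPTS(TEST))
```

The first command enables dynamic overrides for every IMS in the IMSplex; the second
causes TEST to be returned, at program scheduling time, only when transaction PART is
entered by USER1.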

19.5.2 Delete
The DELETE LE command is used to delete LE run time options for a transaction code, logical
terminal (LTERM name), user ID and/or application program. At least one of the keywords
and filters (TRAN, LTERM, USERID, or PGM) must be specified on the DELETE LE
command.
The DELETE command is entered from the SPOC. The command is routed to all IMS
systems in the IMSplex. The run time option overrides supplied by the DELETE LE command
are only allowed when dynamic LE run time option overrides have been enabled.
The syntax for the DELETE LE command is as follows:
DELETE LE TRAN(trancode) LTERM(ltermname) USERID(userid) PGM(pgmname)

Example 19-2 shows a sample DELETE LE command. Note that you can use generic
parameters on the filters with this command.
Example 19-2 Delete LE command

DEL LE TRAN(PART*) LTERM(TERM1)

19.5.3 Query
The QUERY LE command is used to query LE run time parameters in effect for a transaction
code, logical terminal (LTERM name), user ID and/or application program.
The SHOW parameter specification displays output fields selected in the filter. At least one
SHOW (ALL | TRAN | LTERM | USERID | PGM | LERUNOPTS) field is required.
The syntax for the QUERY LE command is as follows:


QUERY LE TRAN(trancode) LTERM(ltermname) USERID(userid) PGM(pgmname)
      SHOW(ALL | TRAN | LTERM | USERID | PGM | LERUNOPTS)

Example 19-3 shows a sample QUERY LE command. Note that you can use generic
parameters on the filters with this command.
Example 19-3 Query LE command

QRY LE LTERM(TERM2) USERID(USER2) SHOW(ALL)

19.5.4 DFSBXITA
CEEBXITA is an existing LE exit and continues to function as it has in previous releases.
When CEEBXITA is linked with the Language Environment initialization/termination library
routines during installation, it functions as an installation-wide user exit. When CEEBXITA is
linked in your load module, it functions as an application-specific user exit. The
application-specific exit is used only when you run that application. The installation-wide
assembler user exit is not executed.
What is new in IMS V8 is an IMS supplied version of CEEBXITA. The exit is delivered as
DFSBXITA but must be reassembled and linked as a part of CEEBXITA. Code in the exit
checks to see if the environment is IMS or not. If it is IMS then an INQY call is made to
retrieve the LE run time overrides; otherwise for non-IMS environments, the code is
bypassed.
In Java regions, the enclave initialization takes place before the application is known or given
control. Therefore, if DFSBXITA is linked with the Java application, it will not be invoked. If
DFSBXITA is linked with the LE libraries, it will be invoked, but the ECP (pointer to
parameters that made the last call to DL/I) will not be allocated at this point of a Java region
initialization. The Java region CREATE THREAD will make the DLI INQY LERUNOPT call. If
parameters are returned as a result of the LERUNOPT call, the C SETENV call will be issued
to pass the overrides parameters to the application.
DFSBXITA can be modified by the user if needed, but the exit must be linked as CEEBXITA.
If you have an existing CEEBXITA, you will need to incorporate the logic from DFSBXITA into
your existing exit to provide dynamic LE options support:
   Incorporate DFSBXITA source into CEEBXITA
   Assemble and link-edit CEEBXITA
   If the exit is invoked from an IMS environment, execute DFSBXITA logic prior to
   returning control to LE
   If the exit is invoked from a non-IMS environment, decide where to branch to in
   CEEBXITA for non-IMS related processing

19.5.5 DL/I INQY LERUNOPT


The DL/I INQY LERUNOPT call is issued by DFSBXITA to retrieve the LE run time option
overrides for an application. If an override parameter string is found, the address of the run
time overrides is returned by the DL/I INQY LERUNOPT call to the exit routine. The output
from the DL/I INQY LERUNOPT call contains AIBRETRN and AIBREASN codes. The
AIBRSA2 field contains either an address of the LE run time parameter string or zero.
AIBRSA2 contains the address of the LE run time options parameter string when:
   Overrides are enabled and the override string was successfully found.
AIBRSA2 contains zero for each of the following conditions:
   Overrides are disabled for the IMS system.
   Overrides are allowed for the system, but the override table has not yet been initialized.
   Overrides are allowed for the system, but there is no applicable override string for the
   caller.

Run time options are defined to IMS using the UPDATE LE command. In the following
command, the SET(LERUNOPTS(xxxxxxxx)) specification is used to define the run time
overrides:
UPD LE TRAN(tttt) LTERM(llll) USERID(uuuu) PGM(pppp) SET(LERUNOPTS(xxxxxxxx)).

MPP and JMP region overrides are based on the combination of transaction name, LTERM
name, user ID, or program name. IFP, BMP and JBP region overrides are based on the
program name. Message driven BMP regions may also have overrides specified by
transaction name.
If more than one override is appropriate for a transaction, the override value is chosen based
on which table entry matched the most filters. Therefore, if only a TRAN is specified on one
entry (and it matches), and on another entry the LTERM and PGM are specified and match,
the options in the second entry will be returned because it specifies more filters. In the case of
a tie, the first entry to appear in the table is used. No filter type takes precedence over
another. Therefore if two entries match, one specifying just a TRAN and the other specifying
just a PGM, the entry that gets used is the one that appears first in the table.
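As an illustration of the matching rules (transaction and program names are hypothetical),
assume these two overrides are in the table:

```text
UPD LE TRAN(PART) SET(LERUNOPTS(RPTOPTS(ON)))
UPD LE TRAN(PART) PGM(PARTPGM) SET(LERUNOPTS(RPTSTG(ON)))
```

A PART transaction scheduled into program PARTPGM matches both entries, but the second
entry matches two filters and the first only one, so RPTSTG(ON) is the override string
returned by the INQY LERUNOPT call.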

19.5.6 Migration considerations


The new IMSplex commands (UPDATE LE, DELETE LE, and QUERY LE) are supported
through the use of the Operations Manager (OM) address space. Commands are entered
from the SPOC. The commands are routed to all IMS systems in the IMSplex. The dynamic
parameter overrides supplied by the UPDATE LE and DELETE LE commands are only
allowed when dynamic LE overrides have been enabled, which means that LEOPT=Y has
been specified in the DFSCGxxx member of IMS.PROCLIB or the overrides are enabled with
UPD LE SET(LEOPT(YES)) command.
RACF or an equivalent product may be used to secure the new IMSplex commands.
Authorization checking is performed on the command verb and keyword level.
IMS does not validate the resource names/filters used in the commands.
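As a sketch only (the exact profile format is documented in IMS Version 8: Common Service
Layer Guide and Reference, SC27-1293, and the IMSplex and group names here are
illustrative), OM command security with RACF uses profiles that include the IMSplex name,
the command verb, and the resource type:

```text
RDEFINE  OPERCMDS IMS.PLEX1.UPD.LE UACC(NONE)
PERMIT   IMS.PLEX1.UPD.LE CLASS(OPERCMDS) ID(SYSPROG) ACCESS(UPDATE)
SETROPTS RACLIST(OPERCMDS) REFRESH
```

This restricts the UPDATE LE command in IMSplex PLEX1 to the members of the SYSPROG
group.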
An IMS going through initialization retrieves existing LE run time options from another IMS.
Two new messages are issued when using the LE dynamic run time options enhancement.
The messages are:
DFS1003I LERUNOPT OVERRIDES INITIALIZED FROM imsid, RC=rrrrrrrr RSN=ssssssss.
DFS1004I LE PARAMETER OVERRIDE PROCESSING state.

The DFS1003I message indicates the LERUNOPTS have been initialized. When the phrase
FROM imsid is in the message it indicates that the run time options have been initialized
from another IMS, the imsid indicates from which IMS the information was received.
The DFS1004I message indicates a change in LE parameter override processing for the
system. The 'state' indicator in the message can either be ENABLED indicating overrides are
allowed or DISABLED indicating overrides are not allowed. When overrides are enabled, the
DL/I INQY LERUNOPT call returns an applicable LE parameter override string to the caller.
This message is issued during IMS restart when IMS is running with a Common Service
Layer. The message is also issued as a result of the UPD LE SET(LEOPT()) command. The
message is sent to the system console and to OM as an unsolicited output message.


Two new ABEND subcodes are also issued. They are as follows:

U0071 subcode 5    A new abend subcode is added for the failure to create an IMS
                   ITASK. DFSXSL10, the IMS Input Exit Server module, issues a
                   U0071-5 abend for this error.

U0718 subcode 3    A new abend subcode is added for the IMODULE LOAD failure of
                   DFSSINP0, the CSL ITASK. DFSXSL10, the IMS Input Exit Server
                   module, issues a U0718-3 abend for this error.

19.5.7 Software requirements


For LE dynamic run time options, an IMS Version 8 system using Operations Manager (OM)
is required, as is the single point of control (SPOC). DB/DC, DBCTL, and DCCTL
environments are supported, as are MPP, BMP, and IFP dependent regions and the new IMS
Java dependent regions. Batch regions cannot take advantage of LE dynamic run time
options, but can still utilize CEEROPT.
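For such batch regions, CEEROPT is built with the LE CEEXOPT macro and link-edited into a
library in the region's search order; a minimal sketch follows (the two options shown are
illustrative choices, not recommendations from this chapter):

```text
CEEROPT  CSECT
CEEROPT  AMODE ANY
CEEROPT  RMODE ANY
         CEEXOPT RPTOPTS=((ON),OVR),                                   X
               TRAP=((ON,SPIE),OVR)
         END
```

Each option is coded as value plus OVR or NONOVR, controlling whether CEEUOPT or the
application may still override it.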
Dynamic run time option support does not apply to Open Database Access (ODBA)
applications.

19.5.8 LE option recommendations


Table 19-1 provides a list of LE run time options for IMS environments, and shows the
defaults and recommended values.
Table 19-1 LE options, defaults and recommended settings

Option          Default                                        Recommended
ABPERC          None                                           None
ABTERMENC       ABEND                                          ABEND
AIXBLD          NOAIXBLD                                       NOAIXBLD
ALL31           ON                                             ON
ANYHEAP         16K,8K,ANY,FREE                                16K,8K,ANY,FREE (C, COBOL, Multi, PL/I); 48K,8K,ANY,FREE (Fortran)
ARGPARSE        ARGPARSE                                       ARGPARSE
AUTOTASK        NOAUTOTASK                                     NOAUTOTASK
BELOWHEAP       8K,4K,FREE                                     8K,4K,FREE
CBLOPTS         ON                                             ON
CBLPSHPOP       ON                                             N/A
CBLQDA          OFF                                            OFF
CHECK           ON                                             ON
COUNTRY         US                                             User defined
DEBUG           DEBUG(OFF)                                     DEBUG(OFF)
DEPTHCONDLMT    10
ENV             No default                                     User defined
ENVAR           No default                                     User defined
ERRCOUNT
ERRUNIT
EXECOPS         EXECOPS                                        EXECOPS
FILEHIST        FILEHIST                                       FILEHIST
FILETAG         NOAUTOCVT,NOAUTOTAG                            NOAUTOCVT,NOAUTOTAG
FLOW            NOFLOW                                         NOFLOW
HEAP            32K,32K,ANY,KEEP,8K,4K                         32K,32K,ANY,KEEP,8K,4K (C, COBOL, Multi, PL/I); 4K,4K,ANY,KEEP,8K,4K (FORTRAN)
HEAPCHK         OFF,1,0,0                                      OFF,1,0,0
HEAPPOOLS       OFF,8,10,32,10,128,10,256,10,1024,10,2048,10   User defined
INFOMSGFILTER   OFF                                            OFF
INQPCOPN        INQPCOPN                                       INQPCOPN
INTERRUPT       OFF                                            OFF
LIBRARY         SYSCEE                                         SYSCEE
LIBSTACK        4K,4K,FREE                                     4K,4K,FREE
MSGFILE         SYSOUT,FBA,121,0,NOENQ                         DD name
MSGQ            15                                             15
NATLANG         ENU                                            ENU
NONIPTSTACK     Replaced by THREADSTACK
OCSTATUS        OCSTATUS                                       OCSTATUS
PC              NOPC                                           NOPC
PLIST           HOST                                           HOST
PLISTASKCOUNT   20                                             20
POSIX           OFF                                            OFF
PROFILE         OFF,                                           OFF,
PRTUNIT
PUNUNIT
RDRUNIT
RECPAD          RECPAD(OFF)                                    RECPAD(OFF)
REDIR           REDIR                                          REDIR
RPTOPTS         OFF                                            OFF
RPTSTG          OFF                                            OFF
RTEREUS         RTEREUS(OFF)                                   RTEREUS(OFF)
RTLS            OFF                                            OFF
SIMVRD          SIMVRD(OFF)                                    SIMVRD(OFF)
STACK           128K,128K,ANY,KEEP,512K,128K                   128K,128K,ANY,KEEP,512K,128K (C, FORTRAN, Multi, PL/I); 64K,64K,ANY,KEEP (COBOL)
STORAGE         NONE,NONE,NONE,0K                              NONE,NONE,NONE,0K
TERMTHDACT      TRACE, ,96                                     TRACE, ,96 (C, FORTRAN, Multi, PL/I); UATRACE, ,96 (COBOL)
TEST            NOTEST (ALL, *, PROMPT, INSPREF)               NOTEST (ALL, *, PROMPT, INSPREF)
THREADHEAP      4K,4K,ANY,KEEP                                 4K,4K,ANY,KEEP
TRACE           OFF,4K,DUMP,LE=0                               OFF,4K,DUMP,LE=0
TRAP            ON,SPIE                                        ON,SPIE
UPSI            00000000                                       00000000
USRHDLR         NOUSRHDLR                                      NOUSRHDLR
VCTRSAVE        OFF                                            OFF
VERSION
XPLINK          OFF                                            OFF
XUFLOW          AUTO                                           AUTO

Chapter 20.  Common Service Layer configuration and operation
In this chapter we describe how to configure and operate the Common Service Layer (CSL).
The major requirements for defining the IMS Version 8 Common Service Layer (CSL)
environment include creating JCL for the address spaces, creating (or updating) PROCLIB
members, and defining the shared queues and resource structures.

Copyright IBM Corp. 2002. All rights reserved.


20.1 Setting up a CSL environment


Figure 20-1 highlights the major requirements for defining the IMS Version 8 Common
Service Layer (CSL) environment. These would be to create JCL for the address spaces,
create (or update) PROCLIB members, and define the shared queues and resource
structures. Four manuals contain information for defining this environment:

IMS Version 8: Installation Volume 2: System Definition and Tailoring, GC27-1298
IMS Version 8: Common Queue Server Guide and Reference, SC27-1292
IMS Version 8: Base Primitive Environment Guide and Reference, SC27-1290
IMS Version 8: Common Service Layer Guide and Reference, SC27-1293

The figure shows the Operations Manager, Structured Call Interface, and Resource Manager address spaces, each executing PGM=BPEINI00 with BPECFG=BPExxxxx, its own BPEINIT module (CSLOINI0, CSLSINI0, or CSLRINI0), and its own initialization suffix (OMINIT=xxx, SCIINIT=xxx, or RMINIT=xxx). The Common Queue Server also executes PGM=BPEINI00, with BPEINIT=CQSINI00 and CQSINIT=xxx. The IMS control region (PGM=DFSRRC00) joins the CSL through CSLG=xxx. IMS PROCLIB contains the initialization and execution parameters for the CSL environment, and the CFRM couple data set contains the definitions for the shared queue and resource structures.
Figure 20-1 High level view of CSL definition requirements

20.1.1 Basic rules


When setting up the CSL environment, the following rules might be helpful to simplify the process:
- All IMSplex members (IMS, CQS, SCI, OM, and RM) can share the same PROCLIB data set.
- All IMSplex members based on BPE (CQS, SCI, OM, and RM) can share the same BPE configuration PROCLIB member (BPECFG=). The parameters defined there all have defaults, so it is not a requirement to create this member except to change the defaults or to define some component exits.
- All BPE components can share the same BPE exit PROCLIB member (EXITDEF= in BPECFG). None of these exits are required.
- Each BPE component requires its own initialization module.
- Each CSL component requires its own initialization PROCLIB member(s).

20.1.2 Base primitive environment (BPE)


The base primitive environment is a requirement for CQS, SCI, OM, and RM. For the most
part, this is transparent to the user, but there are some definitional requirements.

Update the OS/390 Program Properties Table (PPT)


All of the BPE-based CSL address spaces can execute program BPEINI00. This is true even
for CQS which, in earlier releases, executed CQSINIT0. BPEINI00 must be added to the
OS/390 program properties table (PPT).
Add program BPEINI00 to the OS/390 PPT by editing the SCHEDxx member in SYS1.PARMLIB, as shown in Example 20-1.
Example 20-1 PPT input
PPT PGMNAME(BPEINI00)   /* IMS */
    CANCEL
    KEY(7)
    NOSWAP
    NOPRIV
    DSI
    PASS
    SYST
    AFF(NONE)

BPE configuration
Each BPE-based CSL component may specify a BPE configuration PROCLIB member which
identifies such things as trace levels and user exits. All of the parameters specified in this
member have defaults, and it is not even necessary to define one. However, if you want to
change the trace levels for various components, or define user exits (such as the OM
command security exit), then a BPE Configuration (BPECFG=) member must be defined and
a BPE User Exit List (EXITMBR=) member must be defined as shown in Example 20-2.
Example 20-2 Sample BPE configuration proclib members
LANG=ENU
STATINV=600
TRCLEV=(*,LOW,BPE)
TRCLEV=(*,LOW,CQS)
TRCLEV=(*,LOW,RM)
TRCLEV=(*,LOW,OM)
TRCLEV=(*,LOW,SCI)
EXITMBR=(SHREXIT0,BPE)
EXITMBR=(SHREXIT0,CQS)
EXITMBR=(SHREXIT0,RM)
EXITMBR=(SHREXIT0,OM)
EXITMBR=(SHREXIT0,SCI)

Example 20-3 shows a sample user exit list member shared by all components. The new
COMP= parameter allows the user to define all exits in one member and then specify for
which components the exits should be called.


Example 20-3 Sample BPE user exit list proclib member (SHREXIT0)
EXITDEF=(TYPE=INITTERM,EXITS=(RMITEXIT),COMP=RM)
EXITDEF=(TYPE=STATS,EXITS=(BPESTATX),COMP=BPE)
EXITDEF=(TYPE=SECURITY,EXITS=(OMSECYX),COMP=OM)

Each component will also define, in its JCL, the name of an initialization module specific to
that component type.

20.1.3 Update the CFRM couple data set (CDS)


This step has the following requirements:
- Update the CFRM couple data set (CDS) to allow system managed rebuild and system managed duplexing.
- Define the resource structure.
- Define the shared queue structures, including the message queue primary and overflow structures, the Fast Path shared EMH structures, and the system logger structures. This may already have been done if the system is already using shared queues; the definitions are not shown here.

You should also review the definitions of the Automatic Restart Manager (ARM) and System
Failure Management (SFM) policies to be sure they accurately describe your requirements.

CFRM CDS
If you are intending to use either system managed rebuild or structure duplexing, the CFRM
CDS must be reformatted. This can be done using the couple data set format utility. The
following statements should be included:
ITEM NAME(SMREBLD) NUMBER(1)
ITEM NAME(SMDUPLEX) NUMBER(1)

You should reference the appropriate sysplex documentation for the exact requirements for
this step, including any hardware and software prerequisites.

Resource structure
The resource structure is not a requirement for running IMS in a CSL environment. It is only
required if sysplex terminal management is to be enabled. Automatic RECON loss
notification, global online change, and the SPOC do not require a resource structure.
Example 20-4 shows how the resource structure might be defined in the CFRM policy. The meanings of the new CF management parameters are discussed in "Sysplex terminal management" on page 177. A methodology for sizing the resource structure can be found in Appendix B, "Resource structure sizing" on page 315. Don't forget to activate the new policy.
Example 20-4 Sample resource structure definition in CFRM policy
STRUCTURE NAME(IMS0_RSRC_STR)
SIZE(8192)
INITSIZE(4096)
MINSIZE(2048)
ALLOWAUTOALT(YES)
FULLTHRESHOLD(60)
DUPLEX(ENABLED)
REBUILDPERCENT(10)
PREFLIST(CF01 CF02 CF03)


SETXCF START,POLICY,TYPE=CFRM,POLNAME=policyname

Shared queue structures


There are several structures required for shared queues, including both the shared queue
structures themselves and the system logger structures used by CQS to log updates to the
shared queues structures. Definition of these structures is not included in this document.

20.1.4 Set up the Structured Call Interface


The started task JCL for the Structured Call Interface must be put in SYS1.PROCLIB. Example 20-5 shows sample JCL for SCI.
Example 20-5 SCI started task JCL
//******************************************************************
//* SCI PROCEDURE
//*
//* PARAMETERS:
//*   BPECFG  - NAME OF BPE MEMBER
//*   SCIINIT - SUFFIX FOR YOUR CSLSIxxx MEMBER
//*   PARM1   - OTHER OVERRIDE PARAMETERS e.g.
//*             ARMRST  - Indicates if ARM should be used
//*             SCINAME - Name of the SCI being started
//*
//* EXAMPLE:
//*   PARM1='ARMRST=Y,SCINAME=IM1ASC'
//******************************************************************
//IM1ASC PROC RGN=3000K,SOUT=A,
//       RESLIB='IMS.SDFSRESL',
//       BPECFG=BPECFGSC,
//       SCIINIT=001,
//       PARM1=
//*
//IM1ASC EXEC PGM=BPEINI00,REGION=&RGN,
//  PARM=('BPECFG=&BPECFG,BPEINIT=CSLSINI0,SCIINIT=&SCIINIT,&PARM1')
//*
//STEPLIB  DD DSN=&RESLIB,DISP=SHR
//         DD DSN=SYS1.CSSLIB,DISP=SHR
//*
//PROCLIB  DD DSN=IMS.PROCLIB,DISP=SHR
//SYSPRINT DD SYSOUT=&SOUT
//SYSUDUMP DD SYSOUT=&SOUT
//*

SCI initialization PROCLIB member CSLSI001


Example 20-6 shows a sample SCI initialization PROCLIB member. There is an optional parameter FORCE=(ALL,SHUTDOWN) (not shown), which tells SCI to clean up its global blocks from ECSA. This is useful if you want to change the SCINAME without an IPL.
Example 20-6 Sample SCI Initialization Proclib Member
ARMRST=N,            /* ARM should restart SCI on failure            */
SCINAME=IM1A,        /* SCI name (SCIID = IM1ASC)                    */
IMSPLEX(NAME=PLEX1)  /* IMSplex name (CSLPLEX1)                      */
*  The same identifier must be used for the IMSPLEX= parameter       *
*  in the CSLOIxxx, CSLRIxxx, and DFSCGxxx proclib members           *
*--------------------------------------------------------------------*

RACF for SCI - define FACILITY


Example 20-7 is an example of the RACF commands required to define a RACF FACILITY
class for SCI and to permit the IMS and TSO user IDs to that facility.
Example 20-7 RACF for SCI
RDEFINE FACILITY CSL.CSLPLEX1 UACC(NONE)
PE CSL.CSLPLEX1 CLASS(FACILITY) ID(IMSuserid) ACCESS(UPDATE)
PE CSL.CSLPLEX1 CLASS(FACILITY) ID(TSOuserid) ACCESS(UPDATE)
SETR RACLIST(FACILITY) REFR

Note: The user IDs of the IMS address spaces and the user IDs of the TSO users issuing
commands using the SPOC must be given update access to the RACF FACILITY class
named CSL.CSLimsplexname. In this name, the string imsplexname is the value of
IMSPLEX parameter in the CSLSIxxx PROCLIB member.
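The naming rule in the note can be expressed as a one-line helper. This Python sketch is purely illustrative (not IMS or RACF code); it simply concatenates the fixed CSL.CSL prefix with the IMSPLEX name from CSLSIxxx:

```python
def sci_facility_profile(imsplex_parm):
    """RACF FACILITY profile guarding SCI registration:
    CSL.CSLimsplexname, where imsplexname is the IMSPLEX value
    coded in the CSLSIxxx member. IMSPLEX(NAME=PLEX1) therefore
    yields the full IMSplex name CSLPLEX1 and profile CSL.CSLPLEX1."""
    return "CSL.CSL" + imsplex_parm

# The IMSPLEX(NAME=PLEX1) definition above maps to Example 20-7's profile:
assert sci_facility_profile("PLEX1") == "CSL.CSLPLEX1"
```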

20.1.5 Set up the Operations Manager


Example 20-8 shows the JCL required for the Operations Manager (OM) started task.
Example 20-8 OM started task JCL
//******************************************************************
//* OM PROCEDURE
//*
//* PARAMETERS:
//*   BPECFG  - NAME OF BPE MEMBER
//*   OMINIT  - SUFFIX FOR YOUR CSLOIxxx MEMBER
//*   PARM1   - OTHER OVERRIDE PARAMETERS
//*             ARMRST  - Indicates if ARM should be used
//*             CMDLANG - Language for command description text
//*             CMDSEC  - Command security method
//*             OMNAME  - Name of the OM being started
//*
//* EXAMPLE:
//*   PARM1='ARMRST=Y,CMDSEC=R,OMNAME=IM1AOM,CMDLANG=ENU'
//******************************************************************
//IM1AOM PROC RGN=3000K,SOUT=A,
//       RESLIB='IMS.SDFSRESL',
//       BPECFG=BPECFGOM,
//       OMINIT=001,
//       PARM1=
//*
//IM1AOM EXEC PGM=BPEINI00,REGION=&RGN,
//  PARM=('BPECFG=&BPECFG,BPEINIT=CSLOINI0,OMINIT=&OMINIT,&PARM1')
//*
//STEPLIB  DD DSN=&RESLIB,DISP=SHR
//         DD DSN=SYS1.CSSLIB,DISP=SHR
//*
//PROCLIB  DD DSN=IMS.PROCLIB,DISP=SHR
//SYSPRINT DD SYSOUT=&SOUT
//SYSUDUMP DD SYSOUT=&SOUT
//*


Operations Manager (OM) PROCLIB member CSLOI001


Example 20-9 shows a sample Operations Manager initialization proclib member CSLOIxxx.
Example 20-9 OM initialization member
**************************************
* OM INITIALISATION PARAMETERS       *
* PROCLIB MEMBER - CSLOI001          *
**************************************
CMDSEC=R                /* R for RACF */
IMSPLEX(NAME=PLEX1)
OMNAME=IM1A
CMDTEXTDSN=IMS.SDFSDATA

Command security
When CMDSEC=R in the OM initialization member, RACF will be called to authorize a SPOC
user to enter the command. Security for these commands is defined in the OPERCMDS class
in RACF. Example 20-10 is an example of RACF command security for a few of the
OM-entered commands. Note that the IMSplex name is part of the definitions, and that
generic substitution is allowed.
Example 20-10 RACF security statements
RDEFINE OPERCMDS IMS.CSLPLX0.UPD.TRAN UACC(NONE)
RDEFINE OPERCMDS IMS.CSLPLX1.UPD.TRAN UACC(NONE)
RDEFINE OPERCMDS IMS.*.QRY.* UACC(NONE)

PERMIT IMS.CSLPLX0.UPD.TRAN CLASS(OPERCMDS) ID(JOHN)  ACCESS(UPDATE)
PERMIT IMS.CSLPLX1.UPD.TRAN CLASS(OPERCMDS) ID(HENRY) ACCESS(UPDATE)
PERMIT IMS.*.QRY.*          CLASS(OPERCMDS) ID(JOUKO) ACCESS(READ)
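The mapping from an OM-entered command to an OPERCMDS resource name, and the generic matching that makes IMS.*.QRY.* cover all query commands, can be sketched in Python. This is illustrative only, not IMS or RACF code, and the simple '*' wildcard below only approximates RACF generic-profile rules:

```python
from fnmatch import fnmatchcase

def opercmds_resource(imsplex_name, verb, keyword):
    """Build the OPERCMDS resource name checked for an OM-entered
    command: IMS.plexname.verb.keyword (e.g. IMS.CSLPLX1.UPD.TRAN)."""
    return f"IMS.{imsplex_name}.{verb}.{keyword}"

def profile_matches(profile, resource):
    """Approximate RACF generic matching with '*' wildcards.
    Real RACF generics (%, **) have more rules; this is a sketch."""
    return fnmatchcase(resource, profile)

# A QRY TRAN command entered through OM in IMSplex CSLPLX1:
res = opercmds_resource("CSLPLX1", "QRY", "TRAN")
assert res == "IMS.CSLPLX1.QRY.TRAN"
assert profile_matches("IMS.*.QRY.*", res)           # generic read profile
assert not profile_matches("IMS.CSLPLX0.UPD.TRAN", res)
```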

20.1.6 CQS procedure


The started procedure JCL for CQS can either execute program CQSINIT0, which is the
same program delivered with IMS Version 6, or the BPEINI00 program, new with IMS Version
8. Whether using CQSINIT0 or BPEINI00, the CQS initialization and global structure
definition members must be changed to add information about the IMSPLEX name and the
resource structure name. We recommend using BPEINI00 for consistency, and to remove the
need for multiple OS/390 PPT entries for components built using the Base Primitive
Environment.
When using CQSINIT0, there is no change to the started procedure JCL. When using BPEINI00, use the same procedure shown for RM, except specify BPEINIT=CQSINI00 and replace RMINIT with CQSINIT to identify the CQS initialization PROCLIB member.
For the relevant CQS PROCLIB members, CQSIPxxx, CQSSGxxx, and CQSSLxxx:

CQSIPxxx   Add IMSPLEX(NAME=PLEX1).
CQSSGxxx   Add RSRCSTRUCTURE(STRNAME=IM0A_RSRC). The STRNAME value must match the value in the CSLRIxxx member.
CQSSLxxx   No changes are required.


20.1.7 Set up the Resource Manager


Example 20-11 shows the JCL for the Resource Manager (RM) started task.
Example 20-11 RM started task JCL
//******************************************************************
//* RM PROCEDURE
//*
//* PARAMETERS:
//*   BPECFG - NAME OF BPE MEMBER
//*   RMINIT - SUFFIX FOR YOUR CSLRIxxx MEMBER
//*   PARM1  - OTHER OVERRIDE PARAMETERS e.g.
//*            ARMRST - Indicates if ARM should be used
//*            RMNAME - Name of RM being started
//*
//* EXAMPLE:
//*   PARM1='ARMRST=Y,RMNAME=IM1ARM'
//******************************************************************
//IM1ARM PROC RGN=3000K,SOUT=A,
//       RESLIB='IMS.SDFSRESL',
//       BPECFG=BPECFGRM,
//       RMINIT=001,
//       PARM1=
//*
//IM1ARM EXEC PGM=BPEINI00,REGION=&RGN,
//  PARM=('BPECFG=&BPECFG,BPEINIT=CSLRINI0,RMINIT=&RMINIT,&PARM1')
//STEPLIB  DD DSN=&RESLIB,DISP=SHR
//         DD DSN=SYS1.CSSLIB,DISP=SHR
//*
//PROCLIB  DD DSN=IMS.PROCLIB,DISP=SHR
//SYSPRINT DD SYSOUT=&SOUT
//SYSUDUMP DD SYSOUT=&SOUT
//*

Resource Manager (RM) PROCLIB member CSLRI001


Example 20-12 shows a sample Resource Manager PROCLIB member CSLRIxxx.
Example 20-12 RM Proclib member
*--------------------------------------------------------------------*
* RM INITIALIZATION PROCLIB MEMBER.                                  *
*--------------------------------------------------------------------*
ARMRST=N,                     /* SHOULD ARM RESTART RM ON FAILURE    */
IMSPLEX(NAME=PLEX1,RSRCSTRUCTURE(STRNAME=IM0A_RSRC)),
*  IMSPLEX NAME (CSLPLEX1) & RESOURCE MANAGER STRUCTURE NAME         *
CQSSSN=CQ1A,                  /* NEEDS TO MATCH SSN= PARM OF CQSIPXXX */
RMNAME=IM1A                   /* RM NAME (RMID = IM1ARM)             */
*--------------------------------------------------------------------*
* END OF MEMBER CSLRI001                                             *
*--------------------------------------------------------------------*

20.1.8 Set up IMS PROCLIB members


Example 20-13 shows the change required to the IMS initialization PROCLIB member
DFSPBxxx to activate the CSL for the IMSplex.


Example 20-13 CSL parameter in DFSPBxxx


CSLG=001          /* suffix of member DFSCGxxx */

DFSPBxxx
One new parameter in DFSPBxxx activates CSL by identifying the suffix of a new PROCLIB
member DFSCGxxx:
CSLG=001

DFSCGxxx
Example 20-14 shows an example of the DFSCGxxx proclib member, which specifies the
name of the IMSplex that this IMS belongs to, and whether or not global online change is
active for this IMS.
Example 20-14 IMS Proclib member DFSCGxxx
CMDSEC=N
IMSPLEX=PLEX1   /* Note - different format of IMSPLEX=                 */
OLC=LOCAL       /* Local OLC - default                                 */
*OLC=GLOBAL     /* GLOBAL OLC                                          */
*OLCSTAT=       /* OLC data set, replaces MODSTAT if OLC=GLOBAL        */
*NORSCCC=       /* Defines whether resource consistency checking to be */
                /* bypassed for ACBLIB/FORMAT/MODBLKS                  */

Note: Make sure all IMS systems in the IMSplex are using the same OLCSTAT data set
and ideally the same DFSCGxxx member. Also make sure you take a backup of your
OLCSTAT data set following every successful on-line change (just as you would for existing
MODSTAT data sets).

Definition of OLCSTAT data set (if global online change required)


If global online change is required, Example 20-15 shows the required attributes of the BSAM OLCSTAT data set.
Example 20-15 OLCSTAT data set definitions
BSAM - one record, variable size
DSORG= Sequential
RECFM=V
LRECL=5200
BLKSIZE=5204

DFSDCxxx
This member, while not new, does have some new parameters to define the SRM and
RCVYxxxx system defaults when sysplex terminal management is enabled. If not specified in
this member, defaults are determined by whether or not this IMS is running in a CSL with
shared queues, an RM, and a resource structure. If so, then the SRM default is GLOBAL,
otherwise it is LOCAL. RCVYxxxx always defaults to YES unless SRMDEF is set to NONE.
Example 20-16 DFSDCxxx member showing SRM and RCVYxxxx parameters
SRMDEF=GLOBAL | LOCAL |NONE
RCVYCONV=YES | NO
RCVYSTSN=YES | NO
RCVYFP=YES | NO
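The default-selection rules described above can be sketched as small Python functions. This is an illustrative reading of the rules, not IMS code; in particular, treating RCVYxxxx as NO when SRMDEF=NONE is an assumption drawn from "defaults to YES unless SRMDEF is set to NONE":

```python
def srm_default(csl_active, shared_queues, rm_with_structure):
    """System default for SRM when SRMDEF is not coded in DFSDCxxx:
    GLOBAL only when this IMS runs in a CSL with shared queues, an RM,
    and a resource structure; otherwise LOCAL."""
    if csl_active and shared_queues and rm_with_structure:
        return "GLOBAL"
    return "LOCAL"

def rcvy_default(srmdef):
    """RCVYCONV/RCVYSTSN/RCVYFP default to YES unless SRMDEF=NONE
    (taken here as meaning NO when SRMDEF is NONE)."""
    return "NO" if srmdef == "NONE" else "YES"

assert srm_default(True, True, True) == "GLOBAL"
assert srm_default(True, True, False) == "LOCAL"    # no resource structure
assert rcvy_default("GLOBAL") == "YES"
assert rcvy_default("NONE") == "NO"
```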


DFSVSMxx
This member merely allows the user to specify whether or not tracing of IMSplex command
activity and CSL activity is turned on, and at what level. The options are the same as for other
traces.
OPTIONS,OCMD=(option),CSLT=(option)

20.1.9 Set up TSO logon procedure for SPOC


The TSO data set allocations required for SPOC are shown in Example 20-17. A sample
startup REXX exec is supplied in SDFSEXEC(DFSSPSRT).
Example 20-17 TSO allocations for SPOC
STEPLIB   imspref.SDFSRESL
ISPPLIB   imspref.SDFSPLIB
ISPMLIB   imspref.SDFSMLIB
ISPTLIB   tsouserid.USERTLIB
          imspref.SDFSTLIB
ISPTABL   tsouserid.USERTLIB
SYSPROC   imspref.SDFSCLST
SYSEXEC   imspref.SDFSEXEC

Note: each SPOC user must have their own USERTLIB.

To invoke IMS single point of control (SPOC)


After the data sets have been allocated, invoke SPOC by issuing the TSO command DFSSPOC, selecting Options, Set Preferences, and then tailoring the resulting panel as shown in Example 20-18.
Example 20-18 Invocation of SPOC
From TSO, enter:
  DFSSPOC
Select OPTIONS on the top line and set the default preferences, for example:
  IMSplex name . . . . . PLEX1   (your IMSplex name)
  Default IMS system . . IM1A    (IMSID of your default IMS system)
  Wait time  . . . . . . 2:45

20.2 CSL operations


There are two issues when discussing operations in an IMSplex CSL environment:
- Startup, execution, and shutdown of the CSL execution environment
- Submission of IMS commands through the OM interface and through the classic IMS interface (MTO, WTOR, IMS terminal, E-MCS console)

20.2.1 The CSL execution environment


The CSL execution environment consists of a multitude of address spaces and Coupling Facility structures. How each of these is defined was described in "Setting up a CSL environment" on page 290. How these components are started and stopped is the subject of the next few topics. The sequence in which components in a CSL are started is significant: some must be started in the proper sequence, while for others the proper sequence is merely recommended to avoid warning messages.

20.2.2 Starting IMSplex address spaces


First you need to activate the CFRM policy. While this may sound obvious, updates to the
sysplex CFRM policy do not take effect until that policy is (re)activated. This is important not
only when adding new structures, such as the resource structure, to the policy, but also when
making changes to a structure, such as enabling duplexing or autoalter. A policy is activated
by the following command:
SETXCF START,POLICY,TYPE=CFRM,POLNAME=CFRMPOL1

Start the CSL address spaces


This should be done before the IMS address spaces are started, although if any of them are
not there when IMS tries to connect, IMS will give a warning message with an option to cancel
or retry.

Structured Call Interface


The first address space to be started should always be the Structured Call Interface (SCI)
address space. This address space is needed by every component of the IMSplex and, if
not there when needed, may cause that component to abend.
S SCI1
CSL0020I SCI READY SCI1SC

Note that the CSL ready message identifies the component name (SCI1) and the
component type (SC). This will be true of all CSL messages.

Common Queue Server


Start CQS next. OM does not need CQS but both IMS and RM will register with CQS.
S CQS1
CSL0020I CQS READY CQS1CQS

CQS will register with SCI. If SCI is not available when CQS tries to register, a warning
message is issued but CQS continues initialization. CQS does not require SCI to be
available to complete initialization, and will not abend as RM and OM do if SCI is not
available.
CQS0001E CQS INITIALIZATION ERROR IN ... CSLSCREG ...

Operations Manager
Operations Manager only needs SCI to complete initialization. IMS and RM both register
with OM.
S OM1
CSL0020I OM READY OM1OM

If SCI is not available when OM is initialized, a warning message will be issued.


CSL0003A OM IS WAITING FOR SCI OM1OM

OM will retry every six seconds and, if SCI is still not available after 10 tries, OM will abend
with a U0010.
CSL0002E IMSPLEX INITIALIZATION ERROR IN ...

Resource Manager
Resource Manager registers with SCI, OM, and with CQS. IMS registers with RM.


S RM1
CSL0020I RM READY RM1RM

If SCI is not available when RM tries to register, a warning message will be issued. RM will
retry six times, 10 seconds apart. If SCI still is not available, RM will abend with a U0010.
CSL0003A RM IS WAITING FOR SCI RM1RM
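The wait-and-retry behavior that OM and RM use while waiting for SCI can be modeled as follows. This Python sketch is illustrative only, not IMS code; the retry counts and intervals (OM: 10 tries at 6-second intervals, RM: 6 tries at 10-second intervals, both ending in a U0010 abend) are the ones quoted above:

```python
import time

def wait_for_sci(sci_available, retries, interval_seconds, component):
    """Poll for SCI availability the way OM and RM do at initialization.
    Each failed poll surfaces a CSL0003A-style message; exhausting the
    retries is modeled here as an exception standing in for abend U0010."""
    for _ in range(retries):
        if sci_available():
            return True
        print(f"CSL0003A {component} IS WAITING FOR SCI")
        time.sleep(interval_seconds)
    raise RuntimeError(f"{component} abend U0010: SCI not available")

# Simulate SCI coming up on the third poll (zero interval to avoid waiting):
polls = iter([False, False, True])
assert wait_for_sci(lambda: next(polls), retries=10,
                    interval_seconds=0, component="OM") is True
```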

Start the IMS control region


When all of the CSL address spaces have been started, IMS can be started. IMS will register with SCI, OM, RM, and CQS (for shared queues). The DBRC address space may also register with SCI if automatic RECON loss notification is being enabled. If any one of these is not available, IMS will issue a warning message and continue initialization.
DFS3306A CTL REGION IS WAITING FOR SCI|OM|RM

If any of these address spaces is not available when IMS completes initialization, IMS will
issue a WTOR message indicating which one is not available and then wait.
DFS3309A CONTROL REGION WAITING FOR csltype REPLY RETRY OR CANCEL

If the reply is CANCEL, IMS abends with a U3309 RC=12.


An IMS cold start is required for any of the following reasons:
First time using CSL (CSLG=xxx in DFSPBxx)
Restarting after missing one or more global online changes (see Global online change
on page 215 for this discussion)
When changing the global online change option (OLC=GLOBAL to LOCAL or vice versa)
When changing the use of the resource structure (using it to not using it, or vice versa)

20.2.3 Shutting down IMSplex address spaces


When shutting down an IMS which is part of an IMSplex, the user should always shut down
IMS first, then the CSL address spaces and CQS. IMS has some cleanup work to do at
shutdown which it cannot do if CSL is not active.
If IMS is shut down with the LEAVEPLEX keyword:
- All resources owned by this IMS are cleared (no ownership). If no significant status exists, the resource entry is deleted.
- That IMS's local member entry is removed from the resource structure (DFSSTMLimsid).
- Its entry in the OLCSTAT online change status data set is also removed.
- LEAVEPLEX also implies LEAVEGR, so all VTAM generic resource affinities will be deleted.

After IMS is down, the other address spaces can be stopped. They can be stopped
individually or as a group. When shutting them down individually, SCI should be shut down
last.
P OM1 (repeat for other address spaces)
F SCI1,SHUTDOWN CSLLCL (shuts down all CSL address spaces on the local LPAR)
F SCI1,SHUTDOWN CSLPLEX (shuts down all CSL address spaces in the IMSplex)

The F SCI command does not shut down any of the CQS address spaces.
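The shutdown ordering rule (IMS first, because it needs the CSL for cleanup; SCI last) can be modeled as a simple sort. This Python sketch is illustrative only; the member names are examples:

```python
def shutdown_order(members):
    """Order IMSplex address spaces for shutdown: IMS control regions
    first, then OM/RM/CQS in any order, and SCI last. Each member is a
    (name, type) pair; the rank table encodes the ordering rule."""
    rank = {"IMS": 0, "OM": 1, "RM": 1, "CQS": 1, "SCI": 2}
    return sorted(members, key=lambda m: rank[m[1]])

members = [("SCI1", "SCI"), ("IMS1", "IMS"), ("OM1", "OM"),
           ("RM1", "RM"), ("CQS1", "CQS")]
order = shutdown_order(members)
assert order[0] == ("IMS1", "IMS")   # IMS is shut down first
assert order[-1] == ("SCI1", "SCI")  # SCI is shut down last
```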


20.3 IMS commands


Commands may be entered to IMS either through traditional channels, such as the MTO, the WTOR reply, automated operator programs using the CMD or ICMD call, and an E-MCS console, and now in Version 8, from a SPOC. There are several new commands available when running in a CSL environment. These are generally referred to as IMSplex commands and can only be entered through the OM interface. Those IMS commands which we have all come to know and love, those starting with the slash (/), are referred to as classic commands. Most of these commands can be entered either through the OM interface or through traditional means. For the sake of brevity, this chapter refers to all commands entered through the OM interface as having been entered from the SPOC, and to those entered through the traditional interface as having been entered from the MTO.

20.3.1 IMSplex commands


Five new commands have been added for the IMSplex environment:

INITIATE    Initiate a global process (global online change). For example:
            INIT OLC PHASE(PREPARE) TYPE(ALL)

TERMINATE   Terminate a global process. For example: TERM OLC

DELETE      Delete dynamic LE run time parameters. For example:
            DEL LE TRAN(TTT) LTERM(LLL) USERID(UUU) PGM(PPP)

UPDATE      Update dynamic LE run time parameters or transaction
            characteristics. For example:
            UPD TRAN(TTT) LTERM(LLL) USERID(UUU) PGM(PPP) SET(LERUNOPTS(XXXXXX))
            UPD TRAN NAME(TTT) STOP(Q,SCHD) SET(CLASS(4))

QUERY       Query IMSplex resources. For example:
            QRY IMSPLEX
            QRY MEMBER
            QRY LE
            QRY TRAN
            QRY STRUCTURE
            QRY OLC

For a complete description of the format and use of these commands, refer to IMS Version 8:
Command Reference, SC27-1291. All of these commands have global scope. That is, they
apply to all IMSs in the IMSplex. For example, if the UPDATE TRAN command is entered to
change the MSGCLASS of a transaction, it will be changed on all the IMSs in the IMSplex.

20.3.2 Classic commands


As with the IMSplex command, the user should refer to IMS Version 8: Command Reference,
SC27-1291 for a complete description of the use of these commands in an IMSplex.

Command entry
Most IMS classic commands can be entered through the SPOC or the MTO. However, not all classic commands can be entered through the SPOC. Generally speaking, these are the commands that affect the terminal from which they are entered. Since there is no IMS terminal associated with the SPOC, these commands would not make sense. Examples of these commands are:

/EXC
/SIGN ON | OFF
/HOLD
/REL CONVERSATION
/SET
/FORMAT
/RCLDST

Command scope
When an IMS is running as part of an IMSplex with sysplex terminal management enabled,
an IMS command can have both local and global impact. For example, a /STOP
NODE command will change the command significant status of a NODE and cause a
resource entry for the NODE to be created or updated with the STOPPED flag on. Other
commands only have local impact. For example, a /STOP TRAN command would stop a
transaction only on the IMS where the command was executed. Generally speaking,
commands which affect terminal resources whose status is maintained on a resource
structure have global scope. These are commands which change the command or end-user
significant status of a NODE, USER, or LTERM. All others have local scope.

Command master
When an IMS classic command is entered through the SPOC to one or more IMSs, OM will
select one of those IMSs as the command master. If it is submitted to only one IMS, that IMS
is the master. If the command is submitted through the MTO, that IMS is the master.
Some classic commands can only be executed by the command master. Others can only be
executed by the resource owner. Still others will be executed by every IMS which receives the
command. There are a few basic rules for which IMS will execute the command.

For commands which update significant status


The rules are:
If the resource is owned

Only that IMS will process the command. Other IMSs, including the master, will reject the
command. For example, if NODEA is owned by IMS1, a /STOP NODE NODEA will only be
executed by IMS1. It will stop the NODE locally and update the status globally to
STOPPED. The NODE would then not be allowed to log on to any IMS in the IMSplex.
If the resource is not owned

Only the master will process the command. Others will reject it. For example, if a NODE
does not exist in RM, then it is not owned. A /STOP NODE command will be executed by
the master which will create a NODE entry and set the STOPPED flag.

For commands which display status


The rules are:
If the resource is owned

Only the owning IMS will display the global status. Other IMSs will display any local status.
For example, if NODEB is owned by IMS2, then IMS2 would display its global and local
status.
If the resource is not owned

The master will display global status and local status. Others will display local status only. In the example above where a NODE has been stopped globally, but is not owned, if a /DIS NODE NODEA command is sent to all IMSs, only the master would display the STOPPED status (this is a global status maintained only in the resource structure). All IMSs, including the master, would reply with their local status, which for a NODE is usually just IDLE.
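The ownership rules above can be summarized in a small decision function. This Python sketch is illustrative, not IMS logic; owner=None stands for a resource with no RM entry (unowned):

```python
def processes_update(ims_id, owner, master):
    """Which IMS executes a classic command that updates significant
    status: the owning IMS when the resource is owned; otherwise only
    the command master selected by OM."""
    if owner is not None:
        return ims_id == owner
    return ims_id == master

# /STOP NODE NODEA, NODEA owned by IMS1, IMS2 chosen as command master:
assert processes_update("IMS1", owner="IMS1", master="IMS2")
assert not processes_update("IMS2", owner="IMS1", master="IMS2")
# Unowned node: only the master creates the entry and sets STOPPED:
assert processes_update("IMS2", owner=None, master="IMS2")
assert not processes_update("IMS1", owner=None, master="IMS2")
```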

Other commands
The following describes some of the unique considerations when entering classic IMS
commands in an STM environment.

Display SRM and RCVY


The /DISPLAY command has been enhanced to display the owner, and the SRM and RCVY
values of a resource.
/DIS NODE NODE1 RECOVERY
NODE-USR  OWNER  SRM     CONV  STSN  FP
NODE1     IMS1   GLOBAL  Y     Y     Y

Assigning LTERMs
There are some rules when assigning LTERMs:
- /ASSIGN LTERM is permitted between BTAM and VTAM, but BTAM status is not maintained.
- The /ASSIGN command is not allowed if the source and destination are owned by two different IMS systems.
- The /ASSIGN command without the SAVE keyword is not allowed if the destination does not exist in the RM.

Commands not affected by STM


Not all commands are affected by sysplex terminal management. In general, these are commands which do not reference sysplex terminals. For example, the /DIS TRAN xxx QCNT command does not follow the rules of command master: each IMS which receives this command would retrieve and display the global queue counts. These commands should be routed to only a single IMS to avoid excessive structure access.

20.3.3 CSL operations summary


The world of CSL can have a significantly different and often confusing effect on operations. The processing of classic commands depends on:

Source of the command (classic or OM API)


Whether RM is active with a resource structure
Whether the command affects significant status
Whether the command parameters include ALL or a generic parameter
Whether the resource exists on this IMS
Whether the resource is owned by this IMS
Whether the resource exists in RM
Whether the resource is owned by another IMS
Whether the command displays or updates terminal status
Whether the resource is managed by STM

This document does not describe all of the differences in command processing when running IMS in a CSL environment with STM enabled. It is well worth the effort to read the IMS Version 8: Command Reference, SC27-1291, very carefully, and then to test all of the commands under a variety of conditions.


Part 5.  Appendixes
In this part of the book we provide the following appendixes:
Appendix A, Hardware and software requirements on page 307
Appendix B, Resource structure sizing on page 315
Appendix C, Additional material on page 321

Copyright IBM Corp. 2002. All rights reserved.


Appendix A. Hardware and software requirements
In this appendix we describe the hardware and the software required for IMS Version 8. Also,
we list the IBM Data Management Tools that can be used with IMS Version 8.


A.1 Hardware requirements


This section describes the basic hardware required for IMS Version 8, and the major
enhancements.

A.1.1 Processors
IMS Version 8 executes on all IBM processors that are capable of running OS/390 Version 2
Release 10, or later.

A.1.2 System console


The console requirements of OS/390 Version 2 Release 10, or later apply.

A.1.3 Tape units


At least one IBM 3420, 3480, or 3490 tape unit is required for installation and maintenance.

A.1.4 Direct access devices


During the binding of the IMS control blocks load modules, both the binder work data set
SYSUT1 and IMS.SDFSRESL must reside on a device that supports a record size of 18KB or
greater. For all other system libraries and working storage space, any device supported by
the operating systems is allowed.
For IMS database storage, any device supported by the operating system is allowed within
the capabilities and restrictions of Basic Sequential Access Method (BSAM), Queued
Sequential Access Method (QSAM), Overflow Sequential Access Method (OSAM), and
Virtual Storage Access Method (VSAM).
The Database Image Copy 2 enhancements require concurrent-copy capable DASD
controllers.

A.1.5 Multiple systems coupling


When the physical link is channel-to-channel and is dedicated to IMS, the System/370
channel-to-channel adapter or a logical channel on the IBM 3088 or ESCON is required.
MSC fiber channel connection (FICON) channel-to-channel (CTC) support requires that at
least one side of the MSC link be an IBM G6 processor or IBM zSeries with the FICON
channel and FICON CTC microcode. The other side (IMS) can be any processor with a
FICON channel.

A.1.6 Terminals supported by IMS Version 8


The following is a list of terminals supported by IMS Version 8.

SLU 1       For example, 3230, 3232, 3262, 3287, 3767, 3268, 3770, 3770P,
            3790 (type 2 batch and bulk print), 4700, 5280, 5550, S/32, S/34,
            S/38, 8100
SLU 2       For example, 3179, 3180, 3267, 3278, 3279, 3290, 3790 (3270 DSC
            feature), 3600, Admin PP, 4700, 5280, 5520, 5550, 8100, 8775, S/34,
            Displaywriter
LU 6 (ISC)
LU 6.2
NTO         For example, 33/35 TTY, 2740, 2741, 3101, 3232, 3767, S/23

The following is a list of terminals supported by IMS Version 8, but withdrawn from marketing:
2740-1, 2740-2, 2741, 2780, 3270, Finance (3600), System/3, System/7

A.1.7 Sysplex data sharing


For data sharing in a sysplex environment (using IRLM Version 2.1), the following is required:
A Coupling Facility level 9 or higher
One of the following with its related hardware
A 9037 sysplex timer
IBM S/390 9674
IBM S/390 9672 Transaction Server
IBM ES/9000 9021 711 model processor

A.1.8 Shared message queues and shared EMH queues


For sharing message queues and sharing EMH queues in a sysplex environment, the
following items are required:
An IBM S/390 9674 Coupling Facility level 9
An IBM S/390 9672 Transaction Server
An IBM ES/9000 9021 711-based model processor or an IBM ES/9000 511-based model
processor
The related hardware for all of the items discussed in the list above

System managed duplexing support also requires new hardware resources:


CF-to-CF links (such as HiperLink, ICB link, IC link, or others like them)

A.1.9 DEDB shared VSO enhancement


The DEDB Shared VSO enhancement exploits Coupling Facility system managed rebuild,
autoalter and system managed duplexing functions that are available only on processors
supporting these Coupling Facility functions and capabilities. A Coupling Facility level 9 is
required for the autoalter function and a Coupling Facility level 10 is required for the system
managed duplexing.

A.1.10 Remote Site Recovery


Remote Site Recovery (RSR) requires:
A sysplex timer (if data sharing or if the workload is spread across multiple CPUs)
A highband control unit (3172)
At least one tape unit (3420, 3480, or 3490) at the tracking site

Coordinated disaster recovery support for IMS and DB2 requires that the DB2 logs reside on
devices supporting eXtended Remote Copy (XRC).


A.2 Software requirements


This section describes the software required for IMS Version 8, and the major enhancements.
IMS Version 8 operates under OS/390 Version 2 Release 10 configurations, or subsequent
versions, releases, and modification levels, unless otherwise stated, and requires the
following minimum version, or release, or modification levels:
OS/390 Version 2 Release 10 (5647-AQ1)

RACF* or equivalent, if security is used


ISPF Version 4 Release 2 (5655-042)
SMP/E*
eNetwork Communications Server for OS/390 V2R10, if IMS Transaction Manager is
used
JES2*
JES3*
TSO/E*

Note: * - These items are OS/390 Version 2 Release 10 base elements that cannot be
ordered separately.
IBM High-Level Assembler Toolkit (5696-234), a separately orderable feature of OS/390
Version 2 Release 10

Note: IMS Version 8 does not support Assembler H. IMS Version 5 was the last version
to do so.
IRLM 2.1 (5655-DB2), if data sharing is used
Coupling Facility, if multi-node persistent session rapid network reconnect or MADS I/O
timing is used
If you are going to run IMS Version 8 on z/OS Version 1 Release 1 or higher, you need to
apply APAR OW51598 to the operating system.

z/OS Version 1 Release 2 is needed for the following:


IMS MSC FICON CTC support
Shared queues/expedited message handling (EMH) Coupling Facility duplexing support
z/OS V1R2 communications server affinity enhancement can be used, optionally, with the
IMS sysplex terminal management for enhanced usability
z/OS V1R2 Coupling Facility duplexing is recommended, though not required for the IMS
Version 8 Resource Manager and global online change enhancements.
All systems involved in using the APPC and OTMA synchronous shared queues support
for multi-system cascaded transactions. Resource Recovery Service (RRS) must
also be active on all of these systems.
For the system managed duplexing of VSO structures (part of the DEDB Shared VSO
enhancements). The duplexing also requires a minimum Coupling Facility (CF) level of 10.

To take full advantage of Coordinated Disaster Recovery support for IMS and DB2, sysplex
terminal management, the Operations Manager, and the Resource Manager, IMS Version 8
should be on all sysplex systems involved.
Coordinated Disaster Recovery support for IMS and DB2 requires the IMS Version 8 Remote
Site Recovery (RSR) Record Level Tracking (RLT) feature.


A.2.1 Data sharing


For block-level data sharing, the IRLM 2.1 is required. The IRLM is an independent
component shipped with IMS Version 8. The IRLM must be defined as an OS/390 subsystem.
Block-level data sharing of full-function databases is supported between all in-service levels
of IMS.

A.2.2 DBRC
IMS Version 8 DBRC requires that the migration and coexistence small programming
enhancement (SPE) be applied to the pre-IMS Version 8 DBRC.
The APAR/PTFs enabling IMS Version 8 DBRC migration and coexistence are:
6.1 - PQ54584/UQ99326
7.1 - PQ54585/UQ99327

A.2.3 IMS Java


IMS Java application support (Java dependent regions) requires the IBM Developer Kit for
OS/390, Java 2 Technology Edition (5655-D35), with a special enhancement referred to as
the Persistent Reusable Java Virtual Machine (JVM).
JDBC access to IMS DB for DB2 Stored Procedures requires DB2 UDB for z/OS and OS/390,
Version 7 (5675-DB2).
JDBC access to IMS DB for CICS applications requires CICS Transaction Server for z/OS
Version 2 (5697-E93).
JDBC access to IMS DB for WebSphere applications requires WebSphere Application Server
z/OS Version 4.0.1 and additional WebSphere Application Server z/OS Connection
Management support.
As of the writing of this redbook, these are the prerequisites for running IMS Java and Java
dependent regions:
LE: PQ42191 / UQ48806, UQ48807 and UQ48817 (depends on FMID)
USS: OW49427 / UW80309, UW80310, UW80311, UW80312, UW80313 and UW80314 (depends on FMID)
USS: OW51798 / UW84303, UW84304, UW84305, UW84306 and UW84307 (depends on FMID)
Java for OS/390 and z/OS SDK 1.3.1 Service Refresh (SR14): PQ62112/UQ68572

A.2.4 Small programming enhancements (SPEs)


Several enhancements were provided in IMS Version 8 as small programming
enhancements:
DBRC SPEs are required on IMS Version 6 and IMS Version 7 in order for them to coexist
with IMS Version 8.

The APAR/PTFs enabling IMS Version 8 DBRC migration and coexistence are:
6.1 - PQ54584/UQ67709 and UQ99326
7.1 - PQ54585 and PQ63108/UQ99327


The shared queues OTMA and APPC migration and coexistence SPE is needed on IMS
Version 6 only.

6.1 - PQ29879/UQ36785

A.2.5 Sysplex data sharing


IMS sysplex data sharing (including data caching, shared SDEPs, and shared VSO DEDB
areas) requires IRLM Version 2.1.

A.2.6 Transaction trace


The transaction trace function of IMS Version 8 requires OS/390 APAR number OW50696.

A.3 IBM IMS Tools for IMS Version 8


The following is the list of the minimum versions and required maintenance for IBM Data
Management Tools that can be used with IMS Version 8:

5655-A14    Batch Terminal Simulator V2 + UQ62360
5655-E02    Hardware Data Compression Ext V2.2 + UQ59607
5655-E03    DB Repair Facility
5655-E04    Library Management Utilities + UQ61449
5655-E05    Advanced ACB Generator + UQ63355
5655-E06    High Performance Unload + UQ63489
5655-E07    High Performance Load + UQ63485
5655-E09    High Performance Ptr Checker + UQ63507
5655-E10    Image Copy Extension + UQ66270
5655-E11    IMS Sequential Randomizer Gen.
5655-E12    IMS ETO Support V2.2 + UQ65127
5655-E14    Program Restart Facility + UQ66680
5655-E15    IMS Performance Analyzer V3.1 + UQ62098
5655-E24    IMS Index Builder V2R1 + UQ65633
5655-E24    IMS Index Builder V2R2
5655-E30    Fast Path Basic Tools V1 R2 + UQ63492
5655-E41    Network Compression Facility + UQ64522
5655-E50    Online Recovery Service + UQ63821
5655-E51    IMS Connect V1 R1
5655-E51    IMS Connect V1 R2
5655-E52    IMS Data Propagator V3 R1 + UQ64207
5655-F40    Command Control Facility + UQ64521
5655-F43    HP Sysgen Tools + UQ64518
5655-F45    MFS Reversal
5655-F59    HP Change Accumulation + UQ66240
5655-F74    IMS Parallel Reorganization + UQ68770
5655-F76    DB Control Suite V2 R1 + UQ64974
5655-F78    IMS Fast Path Online Tools V2 R1 + UQ68420
5655-I01    HALDB Conversion Tool
5655-I15    High Performance Prefix Resolution V2 R1
5697-E99    IMS Queue Control Facility V1R1 + UQ60536
5697-E99    IMS Queue Control Facility V1 R2
5697-H75    IMS Batch Backout Manager
5655-J57    IMS Batch Terminal Simulator V3 R1
5697-H77    IMS Buffer Pool Analyzer
5655-F76    IMS Database Control Suite V2 R2
5655-E32    IMS DEDB Fast Recovery V2 R2
5655-J56    IMS Image Copy Extensions V2 R1
5655-E24    IMS Index Builder V2 R3
5655-E15    IMS Performance Analyzer V3 R2
5697-B87    IMS WorkLoad Router V2 R3


Appendix B. Resource structure sizing

IMS may (optionally) use a Coupling Facility List structure, called the Resource Structure, to
share information about IMSplex resources among all members of the IMSplex. Transactions,
MSNAMEs, NODEs, LTERMs, (static NODE) USERs, USERIDs, APPC descriptors, and
IMSplex global resources may all have entries on the Resource Structure.
In this appendix we describe the elements that are stored in the Resource Structure and give
you an idea how to calculate the size for the Resource Structure in your environment.


The resource structure


IMS may (optionally) use a Coupling Facility List structure, called the Resource Structure, to
share information about IMSplex resources among all members of the IMSplex. Transactions,
MSNAMEs, NODEs, LTERMs, (static NODE) USERs, USERIDs, APPC descriptors, and
IMSplex global resources may all have entries on the Resource Structure. Each entry may
contain zero, one, or multiple 512 byte data elements, depending on the resource type and its
status.
The Resource Structure must be defined in the CFRM policy with a maximum size (SIZE) and
(optionally) an initial size (INITSIZE) and a minimum size (MINSIZE). If defined with INITSIZE
and ALLOWAUTOALT(YES), the allocated size and/or the entry-to-element ratio will be
adjusted automatically by the system in increments up to the maximum size when the
FULLTHRESHOLD (default is 80%) is reached. Determining the correct size specifications
for the Resource Structure depends on the ability to accurately estimate the number and size
of the entries.
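A minimal sketch of such a CFRM policy definition, coded through the IXCMIAPU administrative data utility, is shown below. The structure name, sizes, and Coupling Facility names are placeholders, and a real policy must define all of the structures and CFs in the sysplex, not just this fragment:

```text
//CFRMPOL  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(CFRM)
  DEFINE POLICY NAME(CFRM01) REPLACE(YES)
    STRUCTURE NAME(IMSRSRC01)
      SIZE(32000)
      INITSIZE(16000)
      MINSIZE(8000)
      ALLOWAUTOALT(YES)
      FULLTHRESHOLD(80)
      PREFLIST(CF01,CF02)
/*
```

SIZE and INITSIZE are specified in units of 1 KB. With ALLOWAUTOALT(YES), the system may alter the structure size and the entry-to-element ratio once FULLTHRESHOLD (80% here, which is also the default) is reached, but never below MINSIZE.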
The CFSIZER tool provides sizing recommendations based on user-provided input. The user
must estimate the number of resource entries on the structure, and the number of data
elements associated with those entries, and provide those numbers to the tool. The tool then
returns a recommendation for the SIZE (or INITSIZE) parameter for the Resource Structure
definition in the CFRM policy. The user may want to run this tool several times, once with the
maximum number of resources and data elements possible and once with the expected
average number of resources and data elements. The two values returned can then be used
as the SIZE and INITSIZE values for the CFRM policy definition. Calculating an average may
be especially useful when a large number of the potential terminals/users are not logged on
at the same time. By defining the CFRM policy with ALLOWAUTOALT(YES), the size of the
structure and the entry-to-element ratio can be dynamically altered by the system as
conditions warrant.
The CFSIZER tool is available on the web at URL:
http://www.ibm.com/servers/eserver/zseries/cfsizer/ims.html

Resource types
The following are the types of resources on the Resource Structure, including when they are
created and deleted, and whether or not they have data elements associated with them. Use
these descriptions along with the Resource Table shown later in this document when
providing input to the CFSIZER tool.
IMSplex global resources contain information about the IMSplex itself or its individual
members and are created as the IMSplex is initialized, as members join the IMSplex, or as
global processes are initiated. Once these resource list entries are created, they are not
deleted. CFSIZER factors IMSplex global resources into its calculations.
Static transaction and MSNAME resources are created during IMS initialization or when
added by online change. They are never deleted and remain on the Resource Structure as
long as the structure exists. Transaction and MSNAME resources are easy to count - just
count the number of unique TRANSACT and MSNAME statements in the IMS system
definition STAGE1 input. These resource list entries never have data elements.
CPI-C transaction resources are created when the CPI-C transaction is first entered and
are deleted when all IMSs defining that transaction have terminated. CPI-C transaction
resources can be counted by counting the number of unique TRANCODEs in the
TP_PROFILE data sets. These resource list entries only have data elements when the
number of IMSs defining them is greater than two.

316

IMS Version 8 Implementation Guide

APPC descriptor resources are created during IMS initialization and are deleted when all
IMSs defining that resource terminate. APPC descriptors can be counted by counting the
number of unique APPC descriptors in DFS62DTx. These resource list entries only have
data elements when the number of IMSs defining them is greater than two.
Sysplex terminal resources such as NODEs, LTERMs, USERs, and USERIDs are created
when the resource first becomes active (e.g., terminal logon or user signon). They are
deleted from the Resource Structure when the resource becomes inactive (for example,
terminal logoff or user signoff) and the resource has no recoverable significant status.
Note that USERIDs are always deleted when the user becomes inactive. These resource
list entries are the most variable in number and size since they exist only when a resource
is active, or when inactive but with significant status in the resource entry. The number and
size of sysplex terminal resources depends on a number of factors:

What is the resource type (NODE, LTERM, USER, USERID)?


USERIDs have list entries only if single signon is being enforced. LTERMs have list entries
but no data elements. Parallel session ISC NODEs are the only NODEs that have data
elements. The most significant of the list entries in terms of size is the USER (or static NODE
USER) entry which contains the majority of the significant status. End-user significant status,
when it exists, is always kept in a data element in the (static NODE) USER entry. A (static
NODE) USER resource entry always has at least one data element.

Status Recovery Mode and Recoverability settings of the terminal


Defaults for Status Recovery Mode (SRM) and Recoverability (RCVYxxxx) settings are
defined in DFSDCxxx, and can be overridden by the Logon Exit or Signon Exit. They
determine whether end-user significant status is maintained, and if so, where it is maintained.
If SRM=GLOBAL and RCVYxxxx=YES, then end-user significant status will be kept in the
USER entry data element in the resource structure. If SRM=LOCAL or NONE, end-user
status is not kept in the USER entry. Command significant status is always kept in the
Resource Structure when the structure exists, but (usually) doesn't require a data element.

Terminal or user significant status and the type of significant status


Does the terminal or user have significant status and what type of significant status is it? This
determines whether the resource list entry has data elements with it and if it is deleted when
the resource becomes inactive.

Resource number
The Resource Number is the total number of resources expected to be in the Resource
Structure at one time. One resource is stored on the Resource Structure as one list entry,
which may contain zero, one, or more data elements, depending upon the amount of
resource data. One of the following methods can be used to calculate the value to use:
Define a Resource Number of 1, if your installation is DBCTL-only and you only plan to
use the Resource Structure for Global Online Change.
Calculate an initial Resource Number by using the Resource Table below and summing all
of the numbers in the Resource Number column.
Tune (adjust) the Resource Number by running workloads with the Resource Structure
enabled and querying CQS statistics periodically to extract the list entry high water mark.
You may want to size your structure large enough to accommodate the list entry high
water mark, or some amount larger than the high water mark. The list entry high water
mark field is SS3ENTHI in mapping macro CQSSSTT3. Structure statistics may be
gathered through the CQS statistics user exit or the CSLZQRY macro interface. This
requires a user-written program to issue the CQS macro and to return the results.

The QUERY STRUCTURE command also displays the list entries allocated, the list entries in
use, the data elements allocated, and the data elements in use. You can issue the QUERY
STRUCTURE command periodically to get an idea of the maximum list entries that have been
in use over a period of time. QUERY STRUCTURE output may also be used to determine
how many more resources the Resource Structure can accommodate.

Data element number


A data element is a piece of storage on the Coupling Facility associated with a (resource) list
entry. Resource entries may have zero, one, or more 512-byte data elements, depending on
the amount of resource data stored. One of the following methods can be used to calculate
the value to use for the Data Element Number.
Define a Data Element Number of 1, if your installation is DBCTL-only and you only plan to
use the Resource Structure for Global Online Change.
Calculate an initial Data Element Number by using the Resource Table below and
summing all of the numbers in the Data Element Number column that apply to your
installation. The Data Element Number is rarely more than 1, unless there is an
exceptional amount of significant status associated with the resources (such as hundreds
of LTERMs assigned to a USER, hundreds of held conversations associated with a USER,
etc.), or a very large number of IMSs (more than 34), neither of which is likely.
Tune the Data Element Number by running workloads with the Resource Structure
enabled and querying CQS statistics periodically to extract the data element high water
mark. The data element high water mark field is SS3ELMHI in mapping macro
CQSSSTT3. Structure statistics may be gathered through the CQS statistics user exit or
the CSLZQRY macro interface.

The QUERY STRUCTURE command can be used periodically to get an idea of the maximum
data elements that have been in use over a period of time. QUERY STRUCTURE output may
also be used to determine how many more resource data elements the Resource Structure
can accommodate.

Resource table
Use the following Table 20-1 to calculate the Resource Number and the Data Element
Number needed to define an initial Resource Structure. The number of entries and data
elements used for IMSplex resources is estimated by the tool and not a required input.
Table 20-1 Resource table

APPC descriptor
  Resource Number: total number of uniquely defined APPC descriptors in the entire IMSplex
  Data Element Number: 0 if #IMSs < 3; otherwise #APPC descriptors * ((#IMSs - 2)/32, rounded up)

LTERM
  Resource Number: total number of uniquely generated LTERMs + maximum number of dynamic LTERMs
  Data Element Number: 0

MSNAME
  Resource Number: total number of uniquely generated MSNAMEs in the entire IMSplex
  Data Element Number: 0

NODE
  Resource Number: total number of uniquely generated NODEs + maximum number of dynamic NODEs
  Data Element Number: #ISC NODEs with multiple parallel sessions * ((#IMSs - 1)/32, rounded up)

Transaction (static)
  Resource Number: total number of unique static (generated) transactions in the entire IMSplex
  Data Element Number: 0

Transaction (CPI-C)
  Resource Number: total number of unique CPI-C transactions invoked by APPC
  Data Element Number: 0 if #IMSs < 3; otherwise #CPI-C transactions * ((#IMSs - 2)/32, rounded up)

USERID
  Resource Number: maximum number of USERIDs signed on, if single signon is enforced (SGN= not G, M or Z)
  Data Element Number: 0

USER
  Resource Number: maximum number of dynamic USERs + total number of unique static ISC subpools
  Data Element Number: #USERs * 1

USER (for static NODE)
  Resource Number: total number of unique static single-session terminals
  Data Element Number: #staticUSERs * 1
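The arithmetic implied by Table 20-1 is easy to mechanize. The sketch below computes a starting Resource Number and Data Element Number from the table's formulas; every count in it is a hypothetical placeholder, not a recommendation, so substitute your own estimates before providing the results to CFSIZER:

```python
import math

# Hypothetical IMSplex profile -- all counts below are assumptions for
# illustration only, not sizing recommendations.
n_ims = 3                 # IMS systems in the IMSplex
static_trans = 1500       # unique TRANSACT statements across the IMSplex
msnames = 20              # unique MSNAME statements
lterms = 5000             # generated LTERMs + maximum dynamic LTERMs
nodes = 5000              # generated NODEs + maximum dynamic NODEs
isc_parallel_nodes = 10   # ISC NODEs with multiple parallel sessions
users = 5000              # dynamic USERs + unique static ISC subpools
static_users = 500        # unique static single-session terminals
userids = 5000            # maximum USERIDs signed on (single signon enforced)
appc_descriptors = 50     # unique APPC descriptors in DFS62DTx
cpic_trans = 30           # unique CPI-C transactions invoked by APPC

def overflow_elements(count, reserve):
    """Data elements per Table 20-1: count * ceil((#IMSs - reserve) / 32),
    or zero when no IMSs spill past the reserved slots in the entry."""
    extra = n_ims - reserve
    return count * math.ceil(extra / 32) if extra > 0 else 0

# One list entry per resource.
resource_number = (static_trans + msnames + lterms + nodes + users
                   + static_users + userids + appc_descriptors + cpic_trans)

# Data elements: one per (static NODE) USER entry, plus overflow elements
# for APPC descriptors, CPI-C transactions, and parallel-session ISC NODEs.
data_element_number = (
    users + static_users
    + overflow_elements(appc_descriptors, 2)   # only if #IMSs > 2
    + overflow_elements(cpic_trans, 2)         # only if #IMSs > 2
    + overflow_elements(isc_parallel_nodes, 1)
)

print("Resource Number:", resource_number)
print("Data Element Number:", data_element_number)
```

The two results are the inputs CFSIZER asks for; run the script once with peak counts and once with average counts to derive candidate SIZE and INITSIZE values.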

Adjusting the size of the Resource Structure


Once the structure has been sized and is in production, it may become necessary to change
the maximum size upward due to unexpected volumes or initial miscalculation. This can be
done by changing the SIZE parameter in the CFRM policy, activating the new policy, and
rebuilding the structure using the command:
SETXCF START,REBUILD,STRNM=structure-name

The size can be adjusted downward in the same way, but the alter command can also be
used to make the structure smaller. This might be done following an unusual period where the
structure was autoaltered upward due to a high number of concurrent logons which are not
expected to continue. Use the command:
SETXCF START,ALTER,STRNM=structure-name,SIZE=size


Appendix C. Additional material
This redbook refers to additional material that can be downloaded from the Internet as
described below.

Locating the Web material


The Web material associated with this redbook is available in softcopy on the Internet from
the IBM Redbooks Web server. Point your Web browser to:
ftp://www.redbooks.ibm.com/redbooks/SG246594

Alternatively, you can go to the IBM Redbooks Web site at:


ibm.com/redbooks

Select the Additional materials and open the directory that corresponds with the redbook
form number, SG246594.

Using the Web material


The additional Web material that accompanies this redbook includes the following files:

File name
SG246594.zip

Description
IMS Version 8 Highlights Workshop (Lotus Freelance Graphics 9
presentation)

How to use the Web material


Create a subdirectory (folder) on your workstation, and unzip the contents of the Web
material zip file into this folder.


Abbreviations and acronyms


ACB         application control block
AGN         application group name
AOI         automated operator interface
APAR        authorized program analysis report
APF         authorized program facility
APPC        advanced program to program communication
ARLN        automatic RECON loss notification
ARM         Automatic Restart Manager
AWE         Asynchronous Work Element
BMP         batch message program
BPE         Base Primitive Environment
BSDS        boot strap data set
CBPDO       custom built product delivery offering
CI          control interval
CICS        Customer Information Control System
CQS         Common Queue Server
CSA         common system area
CSI         consolidated software inventory
CSL         Common Service Layer
CST         Consolidated Service Test
CTC         channel-to-channel
DASD        direct access storage device
DB/DC       database/data communications
DB2         DATABASE 2
DBA         database administrator
DBCTL       database control
DBD         database description
DBDS        database data set
DBRC        data base recovery control
DEDB        data entry database
DL/I        Data Language/I
DLI/SAS     DL/I separate address space
DLT         database level tracking (RSR)
DRA         database resource adapter
ECSA        extended common system area
EMCS        extended multiple consoles support
EMEA        Europe, Middle East and Africa
ESAF        external subsystem attach facility
ESCON       Enterprise System Connection
ESO         Extended Service Offering
ETO         Extended Terminal Option
EX          execution (IVP)
FDBR        Fast Database Recovery
FICON       Fiber Connection
FMID        function modification identifier
FT          file tailoring (IVP)
FTP         File Transfer Protocol
GSAM        generalized sequential access method
HALDB       High Availability Large Database
HFS         hierarchical file system
HLQ         high-level qualifier
HTML        Hyper Text Markup Language
HTTP        Hyper Text Transfer Protocol
IBM         International Business Machines Corporation
IFP         IMS Fast Path program
ILS         Isolated Log Sender
IMS         Information Management System
IPCS        Interactive Problem Control System
IPL         initial program load
IRLM        Integrated Resource Lock Manager
ISC         intersystem communication
ISD         independent software delivery
ISPF        Interactive Systems Productivity Facility
ITSO        International Technical Support Organization
IVP         installation verification program
J2C         J2EE Connector Architecture
J2EE        Java 2 Platform, Enterprise Edition
JBP         Java batch processing region
JCL         job control language
JDBC        Java database connectivity
JDK         Java Development Kit
JES         job entry subsystem (JES2 or JES3)
JMP         Java message processing region
JVM         Java Virtual Machine
KSDS        key sequenced data set
LE          Language Environment
LMOD        load module
LPA         link pack area
LPAR        logical partition
LTERM       logical terminal
LU          logical unit
LU2         Logical Unit 2
MCS         multiple consoles support
MFS         message format services
MLPA        modifiable link pack area
MNPS        Multi-Node Persistent Sessions
MOD         message output descriptor (MFS)
MOD         module (SMP/E)
MPP         message processing program
MPR         message processing region
MSC         multiple systems coupling
MSDB        Main Storage Data Base
MVS         Multiple Virtual System
ODBA        open database access
OLDS        online log data set
OM          Operations Manager
ORS         Online Recovery Service
OSAM        overflow sequential access method
OTMA        open transaction manager access
OTMA C/I    OTMA callable interface
PCB         program communication block
PDS         partitioned data set
PDSE        partitioned data set extended
PIA         package input adapter
PIC         Product Introduction Centre
PMR         problem management record
PPT         program properties table
PSB         program specification block
PSP         preventive service planning
PST         program specification table
PTF         program temporary fix
QPP         Quality Partnership Program
RACF        Resource Access Control Facility
RDS         restart data set
RIM         related installation material
RLDS        recovery log data set
RLT         recovery level tracking (RSR)
RM          Resource Manager
RMF         Resource Measurement Facility
RNR         Rapid Network Recovery
RRS         Resource Recovery Service
RSR         Remote Site Recovery
RSU         recommended service upgrade
SAF         Security Authorization Facility
SCI         structured call interface
SDM         System Data Mover
SDSF        spool display and search facility
SLDS        system log data set
SMP/E       System Modification Program/Extended
SMQ         shared message queues
SMU         security maintenance utility
SPOC        single point of control
SRDS        structure recovery data set
SSA         sub-system alias
SVC         supervisor call
SVL         Silicon Valley Laboratories
TCB         task control block
TCO         time controlled operations
TCP/IP      Transmission Control Protocol/Internet Protocol
TMS         Transport Manager System
TPNS        Teleprocessing Network Simulator
TSO         Time Sharing Option
USS         Unix System Services
USS         unformatted system services (SNA)
VG          variable gathering (IVP)
VOLSER      volume serial (number)
VSAM        virtual storage access method
VSCR        Virtual Storage Constraint Relief
VSO         Virtual Storage Option (DEDB VSO)
VTAM        virtual telecommunication access method
WADS        write ahead data set
WAS         WebSphere Application Server
WWW         World Wide Web
XML         eXtensible Markup Language
XRC         eXtended Remote Copy
XRF         eXtended Recovery Facility

Related publications
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this redbook.

IBM Redbooks
For information on ordering these publications, see How to get IBM Redbooks on page 327.
Implementing ESS Copy Services on S/390, SG24-5680
IMS/ESA Shared Queues: A Planning Guide, SG24-5257
IMS Primer, SG24-5352
IMS/ESA V6 Parallel Sysplex Migration Planning Guide for IMS TM and DBCTL,
SG24-5461
IMS Version 7 High Availability Large Database Guide, SG24-5751
A DBA's View of IMS Online Recovery Service, SG24-6112
Using XML on z/OS and OS/390 for Application Integration, SG24-6285
IMS Version 7 Performance Monitoring and Tuning Update, SG24-6404
IMS e-business Connectors: A Guide to IMS Connectivity, SG24-6514
Ensuring IMS Data Integrity Using IMS Tools, SG24-6533
IMS Version 7 Java Update, SG24-6536
IMS Installation and Maintenance Processes, SG24-6574

Other resources
These publications are also relevant as further information sources:
IMS Version 8: Administration Guide: Database Manager, SC27-1283
IMS Version 8: Application Programming: EXEC CICS DLI Commands for CICS and IMS,
SC27-1288
IMS Version 8: Application Programming: Database Manager, SC27-1286
IMS Version 8: Application Programming: Design Guide, SC27-1287
IMS Version 8: Application Programming: Transaction Manager, SC27-1289
IMS Version 8: Administration Guide: System, SC27-1284
IMS Version 8: Administration Guide: Transaction Manager, SC27-1285
IMS Version 8: Base Primitive Environment Guide and Reference, SC27-1290
IMS Version 8: Customization Guide, SC27-1294
IMS Version 8: Common Queue Server Guide and Reference, SC27-1292
IMS Version 8: Command Reference, SC27-1291
IMS Version 8: Common Service Layer Guide and Reference, SC27-1293
IMS Version 8: DBRC Guide and Reference, SC27-1295
IMS Version 8: Diagnosis Guide and Reference, LY37-3742

IMS Version 8: Failure Analysis Structure Tables (FAST) for Dump Analysis, LY37-3743
IMS Version 8: Installation Volume 1: Installation Verification, GC27-1297
IMS Version 8: Java User's Guide, SC27-1296
IMS Version 8: Installation Volume 2: System Definition and Tailoring, GC27-1298
IMS Version 8: Licensed Program Specifications, GC27-1299
IMS Version 8: Messages and Codes, Volume 1, GC27-1301
IMS Version 8: Messages and Codes, Volume 2, GC27-1302
IMS Version 8: Master Index and Glossary, SC27-1300
IMS Version 8: Operations Guide, SC27-1304
IMS Version 8: Open Transaction Manager Access Guide, SC27-1303
IMS Version 8: Release Planning Guide, GC27-1305
IMS Version 8: Summary of Commands, SC27-1307
IMS Version 8: Utilities Reference: Database and Transaction Manager, SC27-1308
IMS Version 8: Utilities Reference: System, SC27-1309
IMS Version 7 Common Queue Server Guide and Reference, SC26-9426
z/OS V1R2.0 MVS Setting Up a Sysplex, SA22-7625
WebSphere Application Server V4.0.1 for z/OS and OS/390: Installation and
Customization, GA22-7834
WebSphere Application Server V4.0.1 for z/OS and OS/390: Assembling J2EE
Applications, SA22-7836
WebSphere Application Server V4.0.1 for z/OS and OS/390: Messages and Diagnosis,
GA22-7837
WebSphere Application Server V4.0.1 for z/OS and OS/390: System Management User
Interface, SA22-7838
New IBM Technology featuring Persistent Reusable Java Virtual Machines, SC34-6034
DB2 UDB for OS/390 and z/OS V7 Utility Guide and Reference, SC26-9945

Referenced Web sites


These Web sites are also relevant as further information sources:
IMS home page:
http://www.ibm.com/ims/

IBM Redbooks home page:
http://www.ibm.com/redbooks

Manual for the Persistent Reusable Java Virtual Machine:
http://www.ibm.com/servers/eserver/zseries/software/java/pdf/jtc0a100.pdf

CFSIZER tool:
http://www.ibm.com/servers/eserver/zseries/cfsizer/ims.html


How to get IBM Redbooks


You can order hardcopy Redbooks, as well as view, download, or search for Redbooks at the
following Web site:
ibm.com/redbooks

You can also download additional materials (code samples or diskette/CD-ROM images) from
that site.

IBM Redbooks collections


Redbooks are also available on CD-ROMs. Click the CD-ROMs button on the Redbooks Web
site for information about all the CD-ROMs offered, as well as updates and formats.


Index
Symbols
/ASSIGN LTERM 188
/CHANGE DESCRIPTOR 90
/CHANGE USER 188
/CHE FREEZE LEAVEPLEX 198, 231
/DISPLAY ACTIVE REGION 103
/DISPLAY MODIFY 216–218, 229
/DISPLAY PROGRAM 104
/DISPLAY TRACKING STATUS 66
/DISPLAY TRAN 103
/EXCLUSIVE NODE 188
/MODIFY ABORT 216, 221
/MODIFY COMMIT 216–218, 221
/MODIFY PREPARE 216–218, 220
/NRE CHECKPOINT 0 232
/STA DATABASE 51
/START DB 10
/START DC 213
/START SLDSREAD 9
/START XRCTRACK 66
/STOP NODE 188
/STOP SLDSREAD 9
/STOP XRCTRACK 65
/TEST MFS 188
/TRACE SET ON NODE 188

Numerics
2-phase commit 7

A
ACCEPT 24
ACCJCLIN 22
ADFSBASE 22–23
ADFSDATA 23
ADFSJCIC 23
ADFSJDC8 23
ADFSJDOC 23
ADFSJHF8 23
ADFSJHFS 23
ADFSJIVP 24
ADFSJTOL 23
ADFSSMPL 23–24, 76, 269
ADFSSRC 24
aggregate functions 16
ALLOC record 5, 55, 79
ALLOWAUTOALT 139, 209
ALOT 35
APPC 62, 89, 147–148, 151–152, 159
APPC CPI-C driven transactions 182
APPC descriptor name 201
APPC output descriptors 182
APPC/IMS 93
APPC/MVS 93


APPCPMxx 93
APPLCTN 216
APPLY 24
ARLN 80
Asynchronous Work Element (AWE) 11
Authorized Program Facility (APF) 269
autoalter 7, 60, 137
Automatic RECON loss notification (ARLN) 4–6, 14, 69,
80, 163, 265–267, 269, 292
Automatic Restart Manager (ARM) 158, 211, 292

B
BACKUP.RECON 79
Base Primitive Environment (BPE) 13, 144, 162, 172
batch backout 9
BLDSNDX 6
block level data sharing 156
BPE 290
BPE configuration parameters member 145
BPE user exit list member 145
BPE user exit parameter list 145
BPECFG 145, 291
BPEINI00 25, 144, 291, 295

C
cache structures 157
CEEBXITA 281, 284
CEEDOPT 15, 281
CEEROPT 15, 281
CEEUOPT 15, 281
CF link 211
CFRM 138, 292
CFRM policy 174, 195, 202, 299
change accumulation (CA) 5, 8
CHANGE.DB NONRECOV 57
CHANGE.DBDS 70
CHANGE.RECON CMDAUTH 74
CHANGE.RECON IMSPLEX 274
CHANGE.RECON MINVERS 80
CHANGE.RECON REPLACE 272
channel-to-channel (CTC) 11
CHKP call 62
CHTS 27, 40
CICS Transaction Server/390 16, 107
classic commands 169, 244, 247–248, 301
CLASSIFY command 83
CLASSPATH 120
CMDAUTH 75, 77
COBOL 103, 108, 280
Command entry routing 167
command recognition character (CRC) 161, 164
Command security 167
command significant status 160, 188
Common Queue Server (CQS) 13, 144, 155, 158, 164,
174, 176, 195
Common Service Layer (CSL) xiii, 5, 12–14, 25–26, 76,
80, 144, 153, 155, 161, 177, 179, 190–191, 234–237, 255,
282, 285, 289–290, 338
Common Services Layer (CSL) 279
common system area (CSA) 11
COMPRESS (DFSMSdss) 53
Concurrent Copy 48
Consolidated response 167
conversation mode 160
conversational input in-progress (CONV-IP) flag 206
conversational status 188
Coordinated global online change 172
Coordinated IMS/DB2 disaster recovery 62
coordinated online change 15
copy completion messages 50
Couple Data Set (CDS) 138, 292
Coupling Facility 7, 10, 14, 56, 60, 137–140, 157–158,
193, 210, 298
Coupling Facility list structure 7
Coupling Facility Resource Manager (CFRM) 60, 138
CPI-C 147, 150, 152
CPI-C driven transaction entry 199
CPI-C transaction 92–93
CPUTIME 92
CQS 140, 146, 162, 172, 175, 179, 202, 209, 211, 293,
295, 299
CQSINIT0 144, 295
CQSIPxxx 295
CQSSGxxx 295
CQSSLxxx 295
cross-system extended services (XES) 173
CSL 13, 161, 202
CSL address spaces 13, 172
CSLG 26, 42
CSLOIxxx 295
CSLOMCMD 256
CSLOMCMD API 168
CSLOMI 256
CSLOMI API 167
CSLRIxxx 296
CSLULGTS REXX function 258
CSLULXSB TSO command 258
CSSLIB 102

D
DATABASE 216
database level tracking (DLT) 62
Database Recovery Control (DBRC) 4, 69–70
database resource adapter (DRA) 96, 114
DataSource 111
DB/DC initialization parameters 30
DB2 for OS/390 16
DB2 stored procedures 96, 109
DBCTL initialization parameters 30
DBRC 4–6, 8, 10, 13, 22, 48, 51, 55, 57–60, 69–70,
73–77, 79–81, 155–156, 163–164, 266, 269, 272–274,
276–277
DBRC batch commands for HALDB 5
DBRC command authorization 4–5, 69, 73, 76
DBRC command authorization exit (DSPDCAX0) 75–76
DBRC SCI registration exit (DSPSCIX0) 80, 267
DCCTL initialization parameters 30
DEALLOC 79
DEDB 10, 56
DEDB randomizing routine 56
DEDB VSO 60
DEL LE 168
DELETE LE 285
DELETE.LOG INACTIVE 72
DFS62DTx 90, 92
DFSALA 22
DFSALB 22
DFSBXITA 281, 284
DFSCGxxx 219, 221, 231–232, 282, 297
DFSCLIB 115
DFSCMUX0 151
DFSCONE0 190
DFSDCxxx 93–94, 191, 297
DFSINSX0 212
DFSIVP37 25
DFSJBP 102
DFSJCLIN 15, 24, 26
DFSJIDLT 15, 24
DFSJIRLT 15, 24
DFSJMP 102
ENVIRON=member 102
DFSJMP procedure 98
DFSJVMAP member 101
DFSJVMEP 101
DFSJVMEV 102
DFSLGNX0 191–192, 212
DFSMS 48
DFSMSdss 6, 48–49, 52–53
DFSPBxxx 32, 43–44, 46, 61, 296–297
DFSSCSRT 30
DFSSGNX0 178, 181, 191–192, 212
DFSSPSRT 298
DFSSTMGBL 198
DFSSTMLimsid 204
DFSUDMT0 6, 48
DFSUOCU0 217
DFSUOLC procedure 221
DFSVSAMP 6
DFSVSMxx 298
DISPLAY VERSION 13, 146
DL/I INQY LERUNOPT 284
DLIModel 17, 108109
DRA 112
DRA startup table 116, 124
DSPCEXT0 71
DSPDCAX0 75–76
DSPSCIX0 269
DSPURX00 5, 75, 80, 269
DUPLEX 141, 209
dynamic backout 9

E
ECSA 11
EJB 108, 110, 124

E-MCS console 164


end-user significant status 188, 205
Enterprise Java Bean (EJB) 96, 110, 112–113
ENTRYKEY 196
entry-to-element ratio 139
ETO descriptors 191
EXITMBR 146
Extended Recovery Facility (XRF) 213
Extended Terminal Option (ETO) 178, 181

F
Fast Database Recovery (FDBR) 11
Fast Path 10, 24, 56, 152, 211
Fast Path input message in progress (FP-IP) flag 208
Fast Path response mode 160, 188, 208
FDBR 11
Fiber Connection (FICON) 11
FINDDEST 178, 183, 212
FMID 22, 24
FRCABND 232
FRCNRML 232
FRM policy 209
FULLTHRESHOLD 209
Function Modification Identifier (FMID) 22

G
garbage collection 97
GENERATE 24, 26
global callable services 178, 212, 214
global online change 15, 174, 215, 218–219, 223
global online change utility (DFSUOLC0) 221
GRAFFIN 160, 193
group name support 48

H
HALDB 49
HALDB partition definition utility 76
heap
application class system heap 97
middleware heap 97
system heap 97
transient heap 97
Hierarchical File System (HFS) 118
High Available Large Database (HALDB) 6
High Performance Java (HPJ) compiler
migration considerations 96
HIKEY value 5

I
IBM Developer Kit for OS/390, Java 2 Technology Edition
97
IBM IMS Image Copy Extensions 8
IBM IMS Online Recovery Service (ORS) 3
IEALIMIT exit routine 99
IEFUSI exit routine 99
Image Copy 2 6, 8, 48, 51, 308
IMS Connector for Java 109
IMS DataPropagator 61

IMS Java 26, 110


IMS Java metadata class 108
IMS JDBC Resource Adapter 118, 121–122, 125–126,
132
IMS online change 216
IMS Online Recovery Service (ORS) 8, 52
IMS system generation 180, 183
IMS transaction trace 83
imsjava.jar 102
imsjava.zip 102
imsjavaIVP.ear 125
IMSJdbcIVPEJB.jar 127
IMSJdbcIVPWeb.war 127
IMSPLEX 80, 163, 267, 295
IMSplex 7, 12–15, 26, 64, 69, 73, 80, 90, 152–153,
155–156, 159–170, 177–180, 183, 185, 189–191, 193,
195, 197–198, 201–204, 208, 213–214, 218–224,
230–243, 245, 247–249, 251, 256–258, 265–269,
271–272, 274, 277, 281–283, 285, 290, 295, 297–302,
315–316, 318
IMSplex command characteristics 169
IMSplex commands 244, 256, 301
IMSSPOC subcommands 258
INIT OLC 168
INIT OLC PHASE(COMMIT) 221, 228
INIT OLC PHASE(PREPARE) 220
INIT.DB 57
INIT.RECON 79
INIT.RECON CMDAUTH 74
INITIATE OLC command 219
INITTERM 145
INSTALL 22
INSTALL/IVP 22
INSTALL/IVP dialog 22
installation verification program (IVP) 15
Interactive Problem Control System (IPCS) 83
IOVFI 26, 42, 61
IOVFI parameter 10
IPCS 86
IRLM 140, 156
isolated log sender (ILS) 64
ISPF 30
IVP 22
IVP dialog 22
IVTCM 25

J
J2EE Connector Architecture (JCA) 109–110, 118
J2EE Resource Adapter 125
J2EE server 124
J2EE server instance 115
J2EE server region 115
Java 16, 95
Java Application Support 23
Java Batch Processing (JBP) 95–96
Java Database Connectivity (JDBC) 110
Java dependent region 16, 96, 109, 286
DFSJMP procedure 98
DFSJVMAP 101
DFSJVMEV member 102
IMS region size 99


JVMOPMAS member 99
JVMOPWKR member 99
system definition considerations 102
Java Development Kit (JDK) 112
Java Message Processing (JMP) 95–96
Java Native interface
JNI 97
Java standards 16, 95
Java Tooling 17
Java Virtual Machine (JVM) 95–96, 109
JBP 285
JBP region 16
JDBC 16, 95–96, 104, 107, 109–110
JMK8804 24
JMK8805 24
JMP 285
JMP region 16
JVM 16, 96, 112
benefits 97
master JVM 99
worker JVM 100
JVMOPMAS member 99
JVMOPWKR 99

K
KSDS 50

L
LANG=JAVA 103
Language Environment (LE) 15
LE dynamic run time options 279
LERUNOPT 282
libatoe.so 102
libJavTDLI.so 102
libjvm.so 102
LIBPATH 120
List Entry (LE) 195
List Entry Controls (LEC) 196
List Entry ID (LEID) 196
List Headers (LH) 195
list structure 7, 174
LIST.RECON 79
LMOD 24
local online change 216, 219
lock structures 157
LOGALERT 71
logical copy completion 48
loss of connectivity 211
LTERM entry 200
LU 6.2 descriptors 8

M
master JVM 99
Message Control/Error Exit Routine (DFSCMUX0) 151
MINVERS 80, 148
MOD 24
MODSTAT 222
MODSTAT data set 216
MQSeries 7, 61
MSC logical links 182
MSNAME entry 201
multiple systems coupling (MSC) 11, 151

N
NODE entry 199
nonrecoverable DEDB 56–57
non-recoverable status 187
NONSPANNED 70, 81
NORSCCC 219, 232

O
OBJAVGSZ 139
ODBA 96, 103, 110, 112, 114
OLCSTAT data set 219, 221, 229, 297
OM 162–163, 165, 168, 172, 202, 224, 237, 256, 290,
299, 301
OM API 167, 236, 244
OM clients 168
OM infrastructure 165
ONEJOB 52
online change copy utility (DFSUOCU0) 220, 222
online change status (OLCSTAT) data set 219
online change utility (DFSUOCU0) 222
online log data set (OLDS) 9
open database access (ODBA) 96, 110
Operations 282
Operations management 161
Operations Manager (OM) 13–14, 23, 76, 144, 155,
161–162, 176, 218, 234, 236, 238, 244, 255–256,
281–282, 285–286, 294–295, 299, 310
Operations manager (OM) 164
OPTIMIZE (DFSMSdss) 6, 48
OPTION(FRCABND) 232
OPTION(FRCNRML) 232
OS/390 XML parser 108
OSAM caching 158
OTMA 147148, 151152, 159, 180
OTMAASY 27, 42
OUTBND 8, 93

P
parallel database processing 54
Parallel session ISC resources 181
Parallel Sysplex 137, 160, 216
PCB 97
Persistent Reusable Java Virtual Machine 16, 96
Persistent Reusable JVM 97
PL/I 108, 280
POSIX 97
PREFLIST 138, 209
PRILOG 4, 70–72, 74
PROCLIB 26
PROCLIM parameter 92
Program Properties Table (PPT) 144, 291
Program Specification Table (PST) 11

program-to-program switch 150


PSBGEN 97
LANG=JAVA 103

Q
QRY IMSPLEX 169
QRY IMSPLEX SHOW(ALL) 244
QRY LE 169
QRY MEMBER 169
QRY OLC 169
QRY STRUCTURE 169
QRY TRAN 169
QRY TRAN SHOW(ALL) 249
QUERY LE 285
QUERY MEMBER TYPE(IMS) 229
QUERY OLC LIBRARY(OLCSTAT) 229
QUERY STRUCTURE 176

R
RACF 5, 64, 74–77, 79, 93, 164, 167, 171, 256, 285, 294
Rapid Network Reconnect (RNR) 213
RCVYCONV 190191
RCVYFP 190191
RCVYSTSN 190191
RCVYxxxx 190, 212, 297
RDEFINE FACILITY 74
REBUILDPERCENT 209
RECEIVE 24
RECON 4–5, 14, 25, 51–52, 55, 62, 67, 70–72, 74–75,
77, 79–81, 148, 163, 265–269, 271, 274
RECON upgrade 80
Record Level Tracking (RLT) 67
recoverable status 187
Redbooks Web site 327
Contact us xv
Remote Site Recovery (RSR) 4, 7, 22, 62, 309–310
Resource 179
Resource management 161
Resource Manager (RM) 7, 13–14, 144, 155, 161–162,
164, 171, 179, 192, 210, 218, 234, 238, 282, 296, 299,
310
resource name uniqueness 160, 172, 177–178, 185, 214
Resource Recovery Service (RRS) 7, 61, 148, 310
resource status recovery 172, 177–178, 186, 214
resource structure 7, 169–170, 175, 188, 195, 202,
209–210, 219, 238, 292
resource type consistency 160, 172, 177–178, 183, 214
REXX API 13
REXX SPOC API 236–237, 257
RM 162–163, 168, 170, 172, 175–176, 179, 202,
210–211, 224, 238, 256, 290, 299
RM affinity 192
ROLB call 62
RRS 61, 148–151
RSR 72
RSR feature 24
RSR tracker 62, 64
RTCODE 216

S
SAMEDS 8, 48, 52
SAMEDS (DFSMSdss) 6
scatter (SCTR) 25
SCEERUN 102
SCEERUN parameter 102
SCI 13–14, 162–165, 172, 179, 202, 204, 207, 210–211,
213, 236–238, 256–258, 266–269, 290, 293, 299–300
SCINAME 293
SDEP 57
SDFSBASE 22–23
SDFSDATA 23
SDFSEXEC 30, 240, 298
SDFSJCIC 23
SDFSJDC8 23
SDFSJDOC 23
SDFSJHF8 23
SDFSJHFS 23
SDFSJIVP 23
SDFSJJCL 23
SDFSJLIB 115
SDFSJSAM 23
SDFSJTOL 23
SDFSMLIB 240
SDFSPLIB 240
SDFSRESL 115, 240, 257
SDFSSMPL 23–24
SDFSSRC 23
SDFSTLIB 240
security maintenance utility (SMU) 217
session level affinities 194
Set and Test Sequence Number (STSN) 160
SET PATCH commands 53
shared message queue support for synchronous APPC
and OTMA messages 89
shared message queue support for synchronous APPC
and OTMA transactions 147
shared queue structures 140
shared queues 160, 175, 293
Shared VSO 10
significant status 160, 170, 178, 188, 203–204, 302
single point of control (SPOC) 13, 15, 25, 161, 235–236,
281
Single session ISC resources 181
SIZALERT 71
SLDSREAD OFF 9
SLDSREAD ON 9
Small Programming Enhancements (SPE) 80
SMP/E 22, 24
SMPSTS 24
SMSCIC 50
SOAP services 108
SPANNED 70
SPOC 165
SRM 189, 204, 212, 297
SRM=GLOBAL 193
SRM=LOCAL 193
SRMDEF 191
Static NODE user entry 200
static transaction entry 199
Static transactions 182
statically defined VTAM resources 180
STATS 145
status recovery mode (SRM) 189
storage 97
structure alter 7
structure failure 210211
structure full monitoring 7
structure full threshold monitoring 210
structure rebuild 138
structure recovery data set (SRDS) 140
structure repopulation 210
Structured Call Interface (SCI) 5, 13–14, 144, 155,
161–163, 176, 234, 238, 266, 293, 299
STSN 211
STSN status 189
SUBSYS record 5, 70, 79
sync_level 150
Syntax Checker 16, 25, 29–30, 32–33, 35, 37–39, 45
sysplex terminal management 172, 177–178, 180, 183,
188, 190, 195, 211, 214
system log data set (SLDS) 3, 9
system managed duplexing 7, 10, 60, 137, 140, 210, 292
system managed rebuild 7, 10, 60, 137–138, 210, 292
Systems management 161

T
TERM OLC 168
TERMINATE OLC 221, 224, 228–229
Time History Table (THT) 80
TP_Profile 8, 93, 182
Trace points 84
TRACE TT command 86
TRANSACT 216
transaction trace 16, 83–84
transaction trace facility of OS/390 84
transport manager system (TMS) 64
TSO SPOC application 238, 240, 243, 246, 249, 253
two phase commit 61

U
UNIX System Services 114, 117
unused IOVF count 10
Unused IOVF count update 56
UPD LE 169
UPD TRAN 169
Updatable ResultSet 16, 104
UPDATE LE 285
User entry 200
USERID entry 201

V
VGR affinity 193
virtual storage constraint relief (VSCR) 3, 11
VSO DEDB 57–58
VTAM generic resources 158, 160, 163, 193, 213
VTAM multinode persistent sessions (MNPS) 159


W
WebSphere 95, 107, 109–110, 114, 118
WebSphere Application Server 16, 112
WebSphere Application Server (WAS) 96, 108
WebSphere for z/OS J2EE server 122
WebSphere for z/OS System Administration tool 114,
117, 122
WebSphere Studio Application Developer 125
WLM 116
WLM CLASSIFY 84
worker JVM 100
Workload Manager 84
Workload Manager (WLM) 83–84

X
XCF 173
XCF communications 157
XMI descriptions 108
XML 95, 108109, 167, 169, 236, 239, 256
XML Enabler for COBOL and PL/I 108
XRC tracking 63–64, 66

Z
z/OS 24


Back cover

IMS Version 8
Implementation Guide
A Technical Overview of the New Features

Explore IMSplex
components and
discover the new IMS
architecture
Utilize your Java skills
with IMS for Java and
WebSphere support
Get familiar with all
the new features

In this IBM Redbook, we describe the new features and functions
in IMS Version 8. We document the tasks necessary to exploit the
features, and identify migration, coexistence, and fallback
considerations. We also identify specific hardware and software
requirements that are needed to exploit certain enhancements.
First we provide an overview, where we have grouped the various
enhancements and their discussion into the categories of
availability and recoverability, performance and capacity,
systems management, and application enablement. Then we
have more detailed chapters describing the individual
enhancements.
The base enhancements part of the book describes the base
product enhancements that apply to all users migrating to IMS
Version 8. The Parallel Sysplex enhancements part of the book
describes enhancements in IMS Version 8 that apply to both
existing users of IMS Version 6 or IMS Version 7 in a Parallel
Sysplex environment and users that are considering sysplex
functionality.
The Common Service Layer part documents the Common Service
Layer (CSL), new in IMS Version 8, which is the next step in IMS
Parallel Sysplex evolution. The CSL enables IMS systems to
operate in unison in an OS/390 Parallel Sysplex. The CSL
components provide the infrastructure for an IMSplex.

INTERNATIONAL
TECHNICAL
SUPPORT
ORGANIZATION

BUILDING TECHNICAL
INFORMATION BASED ON
PRACTICAL EXPERIENCE
IBM Redbooks are developed
by the IBM International
Technical Support
Organization. Experts from
IBM, Customers and Partners
from around the world create
timely technical information
based on realistic scenarios.
Specific recommendations
are provided to help you
implement IT solutions more
effectively in your
environment.

For more information:


ibm.com/redbooks

SG24-6594-00

ISBN 0738426725
