Mike Foster, Budi Darmawan, Joachim Fischer, Ralf Krohn, Wolfgang von Brand
ibm.com/redbooks
International Technical Support Organization

Accounting and Chargeback with Tivoli Decision Support for OS/390

March 2002
SG24-6044-00
Take Note! Before using this information and the product it supports, be sure to read the general information in "Special notices" on page 293.
First Edition (March 2002)

This edition applies to Version 1, Release Number 5 of Tivoli Decision Support for OS/390, Program Number 5698-TD9, for use with the OS/390 Operating System and Tivoli Decision Support for OS/390 Accounting Feature for the Host, and Tivoli Decision Support for OS/390 for the Workstation, Program Number 5698-TDW.

Comments may be addressed to:
IBM Corporation, International Technical Support Organization
Dept. OSJB, Building 003 Internal Zip 2834
11400 Burnet Road
Austin, Texas 78758-3493

When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.
Copyright International Business Machines Corporation 2002. All rights reserved.

Note to U.S. Government Users: Documentation related to restricted rights. Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Contents

Figures  ix
Tables  xv
Preface  xvii
The team that wrote this redbook  xvii
Special notice  xix
IBM Trademarks  xx
Comments welcome  xx

Chapter 1. Overview of accounting and chargeback  1
1.1 Why accounting is important  2
1.2 Accounting system overview  3
1.3 What chargeback is  4
1.4 Chargeback influence  5
1.5 Steps to create a chargeback model  7
1.5.1 Step 1: Define the measurements for charging services  7
1.5.2 Step 2: Cost center accounting  8
1.5.3 Step 3: Calculate direct service rates  10
1.6 Planning consideration  13
1.7 Important concepts  14

Chapter 2. Resource accounting feature  17
2.1 Resource Accounting Feature  18
2.1.1 RAF data flow  18
2.1.2 RAF subcomponents  19
2.1.3 RAF lookup and control tables  23
2.1.4 RAF security setting  25
2.2 Understanding RAF update flow  25
2.2.1 RAF subcomponent for batch  26
2.2.2 RAF subcomponent for time sharing option (TSO)  28
2.2.3 RAF subcomponent for STC  29
2.2.4 RAF subcomponent for DB2  31
2.2.5 RAF subcomponent for CICS  33
2.2.6 RAF subcomponent for IMS  35
2.2.7 RAF subcomponent for DASD  37

Chapter 3. RAF host considerations  39
3.1 SMF parameter setting  40
3.2 TDS/390 log collection process  41
3.2.1 SMF and IMS log collection  41
3.2.2 IDCAMS DCOLLECT collection  43
3.2.3 Log collection automation  45
3.3 Interval recording and long running started tasks  47
3.3.1 Before installing RAF STC component  48
3.3.2 After installing RAF STC component  50
3.4 Considerations for processes that span downloads  59

Chapter 4. Installing the accounting console  61
4.1 Accounting console configurations  62
4.2 Minimum requirement  62
4.3 Installing the accounting console  63
4.4 Setting date format  69
4.5 Setting currency format  72
4.6 Starting the accounting console  74

Chapter 5. Accounting console components  77
5.1 Accounting console functional overview  78
5.2 Accounting console database structure  78
5.3 Directory structure of the demo database  79
5.3.1 The Demo database  80
5.3.2 The Data folder  81
5.3.3 The System directory  82
5.4 Exploring the demo database  86
5.4.1 Getting started with accounting console  86
5.4.2 Elements of the data explorer  92

Chapter 6. Accounting console billing processes  95
6.1 Accounting console resources  96
6.1.1 Custom fields  97
6.1.2 Calendar  103
6.1.3 Service Category table  104
6.1.4 Allocations  104
6.1.5 Direct charges  110
6.1.6 Factors  116
6.1.7 Lookups  120
6.1.8 Rates  129
6.1.9 Budget table  135
6.2 Sample monthly billing cycle  139

Chapter 7. Moving data from host to workstation  141
7.1 Exporting data from DB2 database  142
7.2 Transferring billing data  144
7.3 Importing data into the workstation  146
7.3.1 Starting the import wizard  147
7.3.2 Using the import wizard  148
7.3.3 Import results  157
7.3.4 Posting imported data  161
7.4 An in-depth look at importing files  170
7.4.1 Import definition [Data] part  171
7.4.2 Import definition [Import Opts] part  173
7.4.3 Import definition [Schema Text] part  175
7.4.4 Import definition [Post Opts] part  175
7.4.5 Import definition [Ledger Updates] part  175
7.4.6 Import definition [Field Mapping] part  176
7.5 BILLED_DATA mapping  177

Chapter 8. Sample chargeback implementation  179
8.1 Configuration  180
8.2 Setting up the Microsoft SQL Server  182
8.2.1 Creating master SQL database  182
8.2.2 Creating AWO user IDs  184
8.3 Preparing accounting console workstations  188
8.3.1 First time SQL master database creation  189
8.3.2 Connecting to an existing SQL Master database  191
8.4 Preparing AWO database  193
8.4.1 Creating custom fields and active ledger  194
8.4.2 Create service category  195
8.4.3 Create rate table  196
8.4.4 Import the OS/390 data  198
8.5 Working with active ledger  200
8.5.1 Lookup tables  200
8.5.2 Factor tables  204
8.5.3 Allocation tables  205
8.5.4 Direct charge table  206
8.5.5 Rates table  207
8.6 Working with budget and expense ledger  208
8.6.1 Budget table  209
8.6.2 Expense table  209
8.7 Data reporting and analysis  210

Chapter 9. Hints and tips  213
9.1 Fixes  214
9.2 Installation  214
9.2.1 Installing the AFW software using an unzip tool  214
9.2.2 Wrong language welcome panel  218
9.2.3 Extraction errors  219
9.3 Customization  220
9.3.1 Defining the required period  220
9.4 Version of the operating system  226
9.5 AFO iKernel error  227
9.6 Date format setting  227
9.7 No error message when import fails  228
9.8 Duplicate data in the ledger table  229
9.8.1 Unique indexes  230
9.8.2 Post options  231
9.9 Limited factor table usage  231
9.10 Limited allocation table creation  232
9.10.1 Apply allocation function SQL error  233
9.11 Uninstalling the accounting console  235

Appendix A. RAF subcomponent for UNIX System Services data  245
RAF sub-component for OMVS  246
Initialization member DRLIOMVS  247
Define tablespace member DRLSOMVS  249
Define Tables member DRLTOMVS  250
Define updates DRLUOMVS  254
Define purge conditions DRLPOMVS  259

Appendix B. Host based only accounting  261

Appendix C. Importing information from RACF  277
USRIUSR  278
USRLUSR  280
USRRUSR  280
USRUDSN1  281
USRUDSN2  282
USRUUSR1  283
USRUUSR2  284
USRUUSR3  285
USRUUSR4  286
USRUUSR5  287

Appendix D. Additional material  289
Locating the Web material  289
Using the Web material  290
System requirements for downloading the Web material  290
How to use the Web material  290

Related publications
IBM Redbooks
Other resources
Referenced Web sites
How to get IBM Redbooks
IBM Redbooks collections

Special notices  293
Glossary  295
Abbreviations and acronyms  299
Index  301
Figures
1-1 Chargeback methods  5
1-2 Chargeback links to business processes  6
2-1 Resource Accounting Feature - Data Flow  18
2-2 Components list  20
2-3 Resource accounting feature component parts list  21
2-4 Installation Options panel  21
2-5 Data Flow - BATCH Resource  28
2-6 Data Flow - TSO Resource  29
2-7 Data Flow - STC Resource  31
2-8 Data Flow - DB2 Resource  33
2-9 Data Flow - CICS Resource  34
2-10 Data Flow - IMS Resource  36
2-11 Data Flow - DASD Resource  37
3-1 SMF / IMS Pro Image storage method  42
3-2 DCOLLECT storage method  44
3-3 TDS daily collection process  45
3-4 TDS Administrator Log Mode  46
3-5 Main menu  51
3-6 Administration panel  52
3-7 List of tables  53
3-8 Update Definitions panel  53
3-9 RAFADDR_SMF30 Update Definition  54
3-10 Show Field panel  55
3-11 Show Field panel  55
3-12 ID_TIME expression area of update definition panel  56
3-13 Abbreviations panel  57
3-14 Modified abbreviations confirmation  57
3-15 Update definition modification confirmation  58
3-16 Long running task  60
4-1 AWO.exe self extracting file  63
4-2 InstallShield Wizard Extracting Files  64
4-3 InstallShield Wizard welcome panel  64
4-4 Choose Destination Location panel  65
4-5 InstallShield Wizard Select Type panel  66
4-6 InstallShield Wizard Select Components panel  67
4-7 InstallShield Start Copying Files panel  67
4-8 InstallShield Wizard Setup Status panel  68
4-9 Choose Setup Language panels  68
4-10 InstallShield Wizard Complete panel  69
4-11 Windows 2000 Control Panel  70
4-12 Windows 2000 Regional Options  71
4-13 Windows 9x Regional Settings Properties panel  71
4-14 Currency settings for Windows 2000  73
4-15 How to start Accounting Workstation Option 2.0  74
4-16 Start panel of Accounting Feature for the Workstation 2.0  75
5-1 Overview of the AWO program files and demo directory structure  79
5-2 Directory structure of the AWO demo database  80
5-3 Sample data files shipped by the AWO demo database  82
5-4 Sample batch script file shipped with AWO demo database  83
5-5 Sample export definition shipped with AWO demo database  84
5-6 Sample import definitions shipped by AWO demo database  85
5-7 Sample SQL code files shipped with AWO demo database  86
5-8 Start panel of Accounting Feature for the Workstation 2.0  87
5-9 AWO Open Local Database panel  88
5-10 AWO browse directory panel for local database  88
5-11 Accounting workstation option main panel with active icons  89
5-12 AWO explorer opened with demo master database  90
5-13 AWO panel with data from ledger_active table  91
5-14 AWO tree menu  92
5-15 AWO Definitions tree menu  92
5-16 AWO local database tree menu  93
5-17 AWO master database tree  93
6-1 Accounting Feature for the Workstation Maintain menu  96
6-2 Select custom fields in the AWO main panel  98
6-3 Custom Field definition panel  99
6-4 Move field GLACCT down  99
6-5 Move field GLACCT up  100
6-6 Custom Field panel with add function  100
6-7 Editing custom field definitions  101
6-8 Define the characteristic of a field  101
6-9 Define custom field as index field  102
6-10 Custom field definitions for index and ledger tables  103
6-11 alloc_abc table  104
6-12 Select Allocation Table panel  106
6-13 Verify error message  106
6-14 AWO info message  106
6-15 Application table alloc_AC1_GLAC  107
6-16 Apply Allocation panel  107
6-17 Select Field panel  108
6-18 Create interims  108
6-19 Interims table created  109
6-20 Apply allocation ended successfully  109
6-21 Active ledger table after apply allocation  110
6-22 direct_adjust table  110
6-23 Select Direct Charge Table panel  111
6-24 Create New Direct Charge Table panel  112
6-25 Enter table name  112
6-26 Info message about created table  112
6-27 Edit the direct charge table  113
6-28 Apply Direct Charge panel  114
6-29 Info message about Create interim  114
6-30 Info message about created interim table  114
6-31 Info message about successful function  115
6-32 Result of apply direct charge  115
6-33 factor_cpu table  116
6-34 Create New Factor Table panel  117
6-35 Enter factor table name  117
6-36 Info message about successful creation  118
6-37 Insert values into factor table  118
6-38 Apply Factor panel  119
6-39 Info message about Create interim  120
6-40 Info message about created interim table  120
6-41 Info message about successful function  120
6-42 lookup_glacct table  121
6-43 Select lookup table definition function  122
6-44 Select Lookup Table panel  123
6-45 Create New Lookup Table panel  123
6-46 Select input table  124
6-47 Select input field  124
6-48 Select output field  125
6-49 Mark selected output field as required  125
6-50 Save SQL code of lookup table  126
6-51 Info message about saved SQL code  126
6-52 Define name for lookup table  127
6-53 Info message table created  127
6-54 Edit selected table  127
6-55 View the lookup table definition  128
6-56 Enter lookup values  129
6-57 rates_active table  130
6-58 Rates_active table with OS/390 mapping data  131
6-59 Field definitions of drlsblda.imd  132
6-60 Relationship import, rate, and service category table  133
6-61 Apply rates from AWO explorer  134
6-62 Apply rates - Accounting Feature for the Workstation drop-down  135
6-63 6-64 6-65 6-66 6-67 6-68 6-69 6-70 7-1 7-2 7-3 7-4 7-5 7-6 7-7 7-8 7-9 7-10 7-11 7-12 7-13 7-14 7-15 7-16 7-17 7-18 7-19 7-20 7-21 7-22 7-23 7-24 7-25 7-26 7-27 7-28 7-29 7-30 7-31 8-1 8-2 8-3 8-4
Create budget table . . . . . 136
Select Budget Table panel . . . . . 136
Create New Budget Table panel . . . . . 137
Enter new budget table name . . . . . 137
Info message about created budget table . . . . . 137
New created budget table . . . . . 138
Edit custom field . . . . . 138
monthly.mbs . . . . . 139
The import process . . . . . 147
Import wizard icon in tool bar . . . . . 148
Function import drop-down . . . . . 148
Import Wizard panel . . . . . 149
drlsblda.imd definition selected . . . . . 150
Import Wizard with field definitions . . . . . 151
Import Wizard panel with target table match . . . . . 152
Import options panel with general tab active . . . . . 153
Criteria selection panel . . . . . 153
Result of criteria selection usage . . . . . 154
Error and trace function in import options panel . . . . . 154
Browse for Folder panel . . . . . 155
Advanced tab of import options panel . . . . . 156
Import results for first import test . . . . . 158
Imported data from first import test . . . . . 159
Import results for second import test . . . . . 160
Imported data from second import test . . . . . 161
Ledger_active table without data . . . . . 162
Post option in import wizard panel . . . . . 163
Post option panel . . . . . 164
Ledger tab in post options panel . . . . . 165
Post options and posting the data . . . . . 166
Info message about number of posted records . . . . . 166
Active_ledger table with posted data . . . . . 167
Functions drop-down menu . . . . . 168
Post data panel . . . . . 169
Enter new master table name . . . . . 169
Overview relationship import definition . . . . . 172
Schema.ini file with definition [drlsblda.txt] . . . . . 173
Import definition file with field mapping definition . . . . . 176
PR Billed Data mapping . . . . . 177
Sample setup . . . . . 181
Master AFW database property on SQL Server . . . . . 183
AWO Error: Cannot run SELECT INTO . . . . . 183
Change AFW master database properties on SQL Server . . . . . 184
8-5 Create AFW user on SQL Server - Server Roles . . . . . 185
8-6 Create AFW user on SQL Server - Database Roles . . . . . 186
8-7 Create user with read access only . . . . . 187
8-8 Create AFW User - insert permission . . . . . 188
8-9 Create new Local Accounting Feature for the Workstation Database . . . . . 189
8-10 Create SQL Master Database . . . . . 190
8-11 Create SQL Master Database - Connection . . . . . 190
8-12 Successful creation of new SQL Master database . . . . . 191
8-13 Create new Local Accounting Feature for the Workstation Database . . . . . 192
8-14 Open existing SQL Master Database . . . . . 192
8-15 Connect to existing SQL Master Database . . . . . 193
8-16 Define PR service category . . . . . 196
8-17 Rates definition for service category PR . . . . . 197
8-18 Lookup_CUSTOMER_AC1 data . . . . . 202
8-19 Lookup_AC2_APPLICAT data . . . . . 203
8-20 lookup_Service_SV_GL data . . . . . 204
8-21 Factor_MVS_SYS data . . . . . 205
8-22 Allocation table alloc_OverheadIT . . . . . 206
8-23 Direct charge table direct_PC_rent . . . . . 207
8-24 Rates table values . . . . . 208
8-25 Budget table data . . . . . 209
8-26 Expense table data . . . . . 210
9-1 AWO zip file . . . . . 214
9-2 Unzipping . . . . . 215
9-3 Unzip extract drop-down list . . . . . 216
9-4 Extract to folder . . . . . 216
9-5 Windows explorer with AWO directory . . . . . 217
9-6 Choose setup language panel . . . . . 217
9-7 Preparing InstallShield wizard . . . . . 217
9-8 Extract panel from WinZip . . . . . 218
9-9 Tivoli AWO icon . . . . . 218
9-10 Confirm file overwrite panel . . . . . 219
9-11 View subdirectory in WinZip panel . . . . . 219
9-12 Error during extract . . . . . 220
9-13 Current billing period just after the first AWO start . . . . . 221
9-14 Misleading period field . . . . . 222
9-15 Enlarged period field . . . . . 223
9-16 Finalize active ledger . . . . . 224
9-17 Finalize ledger panel . . . . . 224
9-18 Create historical ledger (lg200012) . . . . . 225
9-19 Finalize info message . . . . . 225
9-20 New current billing period . . . . . 226
9-21 Operating system version . . . . . 227
9-22 Error iKernel.exe . . . . . 227
9-23 Date format . . . . . 228
9-24 Import with failed function . . . . . 229
9-25 Create Index panel . . . . . 230
9-26 Selecting index fields . . . . . 230
9-27 Post options to prevent duplicate data . . . . . 231
9-28 Error during table creation . . . . . 232
9-29 Selecting the Control Panel on Windows 2000 . . . . . 235
9-30 Windows 2000 Control Panel . . . . . 236
9-31 Windows 9x Control Panel . . . . . 236
9-32 Windows 2000 Add/Remove Programs . . . . . 237
9-33 Windows 9x Add/Remove Programs Properties panel . . . . . 238
9-34 Choose Setup Language . . . . . 238
9-35 Preparing the InstallShield Wizard . . . . . 239
9-36 InstallShield Wizard Welcome panel . . . . . 239
9-37 Confirm File Deletion panel . . . . . 240
9-38 ReadOnly File Detected panel . . . . . 240
9-39 InstallShield Wizard complete on Windows 2000 . . . . . 241
9-40 InstallShield Wizard complete on Windows 9x . . . . . 241
9-41 Add/Remove Programs Properties panel on Windows 9x . . . . . 242
9-42 Add/Remove Programs panel on Windows 2000 . . . . . 243
9-43 AWO folder with installed data files . . . . . 244
9-44 Confirm File Delete panel . . . . . 244
A-1 Data Flow - OMVS resource . . . . . 246
B-1 Lookup Tables RAF feature . . . . . 262
B-2 Lookup Tables RAF feature . . . . . 263
B-3 ACCOUNT Lookup table . . . . . 264
B-4 RAF Feature Collect with ACCOUNT and CUSTOMER lookup . . . . . 265
B-5 Prorate recalculate process . . . . . 266
B-6 ACCT_PRORATE table . . . . . 267
B-7 Job DRLJPROR . . . . . 268
B-8 Customer Table . . . . . 269
B-9 BILLED_DATA table . . . . . 269
B-10 Update definition USE_SUMMARY_D to BILLED_DATA . . . . . 270
B-11 Update definition USE_SUMMARY_D to BILLED_DATA . . . . . 271
B-12 PRICE_LIST lookup table . . . . . 272
B-13 CREDIT_DEBIT table definition . . . . . 272
B-14 CREDIT/DEBIT Recalculate Job . . . . . 273
B-15 Setting the billing period in credit/debit . . . . . 274
B-16 Setting date in credit/debit . . . . . 275
Tables
1-1 Relationship between chargeback and business processes . . . . . 6
1-2 Cost element types . . . . . 8
1-3 Calculate CPU seconds rate . . . . . 10
1-4 Calculate DASD rate . . . . . 11
1-5 Calculate print rate . . . . . 12
2-1 Account Information for JES resources . . . . . 26
2-2 Account information for TSO resources . . . . . 28
2-3 Account information for STC resources . . . . . 30
2-4 Account information for DB2 resources . . . . . 31
2-5 Account information for CICS resources . . . . . 34
2-6 Account information for IMS resources . . . . . 35
2-7 Account Information From DASD Resource . . . . . 37
6-1 Static fields . . . . . 97
8-1 BILLED_DATA import structure . . . . . 194
8-2 Custom fields definition . . . . . 194
A-1 Account Information From OMVS Resource . . . . . 246
Preface
This redbook will help you install, configure, and use the Tivoli Decision Support for OS/390 Accounting Feature for the Workstation. It provides much-needed information on accounting and chargeback in the OS/390 environment, starting with an overview of accounting and chargeback.

The Resource Accounting Feature for Tivoli Decision Support for OS/390 is presented, along with considerations for setting up the Resource Accounting Feature on the host to provide accounting information to the Accounting Feature for the Workstation. The installation and setup of the Accounting Feature for the Workstation is covered in this redbook. Additionally, the components of the accounting console of the Accounting Feature for the Workstation are discussed, including topics on the resources available to you as a user of the accounting console.

In preparation for performing accounting and chargeback, information is presented on moving accounting and billing data from the host to the workstation. To provide an overview of how the accounting and chargeback functions are performed using the Accounting Feature for the Workstation, a sample chargeback implementation is presented. You can follow the steps discussed in various parts of this redbook as they are used in an implementation of loading data into an active ledger for processing on the workstation.

Tivoli Performance Reporter for OS/390 was renamed Tivoli Decision Support for OS/390 following the original shipment of Version 1.4. The product functions remain unchanged. For this reason, you may see the names used interchangeably.
Mike Foster is an IT Specialist at the ITSO, Austin Center, and holds a Bachelor of Science degree in Electrical Engineering from the University of Kansas. He writes extensively and teaches classes worldwide on a variety of topics, including Tivoli Decision Support for OS/390. Before joining the ITSO in 1995, he held both management and technical positions in IBM marketing and development divisions worldwide for over 25 years.

Budi Darmawan is a Tivoli Specialist at the International Technical Support Organization, Austin Center. He writes extensively and teaches IBM classes worldwide on Tivoli, DB2 databases, and OS/390. Before joining the ITSO in February 1999, he worked in IBM Global Services, Indonesia as the lead solution architect for Tivoli system management and business intelligence services. Budi is also a Tivoli Certified Instructor and a Tivoli Certified Enterprise Consultant.

Joachim Fischer is a team leader at IBM Global Services Germany. He holds an electronic engineering degree from the Gerhard-Mercator University Duisburg and an economic engineering degree from the University Bochum. He joined IBM Global Services in 1994, where he has worked on accounting, chargeback, service level management, and performance management using several IBM and Tivoli products. His areas of expertise include project management and consulting.

Ralf Krohn is an IT Specialist with IBM Global Services in Hamburg, Germany. He has worked for IBM for 28 years, with 23 years of experience in the area of performance, accounting, and chargeback. His areas of expertise include the OS/390 system and subsystems, Service Level Reporter, Performance Reporter for OS/390 (renamed to Tivoli Decision Support for OS/390), accounting, and performance and capacity management. During his career, he has installed and customized reporting and accounting systems, performed migrations from Service Level Reporter to Performance Reporter for OS/390, and performed release-to-release installations. Additionally, he has developed several user-defined components for use with Tivoli Decision Support for OS/390.

Wolfgang von Brand is a Consultant for accounting and performance measurement projects. He joined IBM in 1963, and his career has included hardware CE and software CE for VSE, VM, CICS, and VTAM/NCP. In 1980, he became a systems engineer focusing on migrations from VSE to MVS. In 1987/88, he installed one of the first accounting systems based on SLR, tested the dpAM accounting system, and did some implementations of EPDM (later named Performance Reporter for OS/390 and now known as Tivoli Decision Support for OS/390). He teaches and implements dpAM and Tivoli Decision Support for OS/390 in several countries in Europe.
Thanks to the following people for their contributions to this project:

International Technical Support Organization, Austin Center: Wade Wallace
International Technical Support Organization, Poughkeepsie Center: Robert Haimowitz
Tivoli Systems: Fausto Nigioni
IBM System Management Project Office Performance Team: Sharon Brower
Tivoli Decision Support for OS/390 Software Conversion: Page Hite
Special notice
This publication is intended to help enterprise performance administrators and IT financial analysts install, configure, and use Tivoli Decision Support for OS/390 Accounting Feature for the Workstation. The information in this publication is not intended as the specification of any programming interfaces that are provided by Tivoli Decision Support for OS/390. See the PUBLICATIONS section of the IBM Programming Announcement for Tivoli Decision Support for OS/390 for more information about what publications are considered to be product documentation.
IBM Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States and/or other countries:
AFS, AIX, CICS, DB2, DFS, DRDA, e (logo), FAA, IBM, IBM.COM, IMS, Lotus, MORE, MVS, MVS/ESA, MVS/XA, Notes, OS/390, PC 300, Perform, RACF, Redbooks, Redbooks Logo, RMF, S/390, Sequent, SP, System/370, VTAM
Comments welcome
Your comments are important to us! We want our Redbooks to be as helpful as possible. Please send us your comments about this or other Redbooks in one of the following ways: Use the online Contact us review redbook form found at
ibm.com/redbooks
Chapter 1. Overview of accounting and chargeback
The most common way to chargeback to the consumers of IT services is to create a resource accounting structure and calculate the prices for each service, as shown in level 2 of the pyramid in Figure 1-1. The examples shown in this redbook are based on this level of chargeback. In refinements to your chargeback model, you might want to utilize the higher levels of chargeback accounting: Application Accounting, Functional Accounting, and Business Transaction.
Figure 1-1 shows the chargeback pyramid. From bottom to top, the levels are:

IT Department Cost: IT costs charged directly to the enterprise with no distinction as to usage.

Resource Accounting: shared resource usage (CPU seconds, lines or pages printed, DASD allocation, consultants and analysis) and dedicated resource usage (networks, PCs and printers).

Application Accounting: track application usage and track transaction volumes.

Functional Accounting: associate transactions and user activities to functional areas within the enterprise.

Business Transaction: charging via business activity, for example, per invoice or per order.

Moving up the pyramid, increasing effort is required to achieve results, but there is a closer relationship between business processes and costs.
The relationships between the chargeback system and the business processes shown in Figure 1-2 are listed in Table 1-1.
Table 1-1 Relationship between chargeback and business processes
Business process: Business Planning
Relationship to chargeback:
Departmental understanding of the IT component of business costs
IT knowledge of cost production
Help for make-or-buy decisions
Ability to evaluate IT alternatives, such as outsourcing or out-tasking
Cost recovery information
Tracking and understanding cost increases
Department budget and variance information
Improved understanding of IT overhead

Business process: Management Information
Relationship to chargeback:
Low-cost provider of IT services
Improved analysis of market opportunities
Running IT as a business
Provide different service quality with different prices
Provides negotiating leverage to IT
Promotes realistic user selection of adequate service levels
Provides indicative or actual workload forecasts
Efficient and effective use of IT resources
Influences user behavior with different prices for day and night CPU second usage
The result of the definition is the cost element layout. Now all costs are identified, and each can be assigned to a primary cost center or a service cost center.
Table 1-2 Cost element types. Each row carries a dollar amount that is assigned to the appropriate cost-center column; rows 36 to 38 distribute the indirect cost centers across the service cost centers (60%, 30%, and 10% of the amount from line 35 in this example).

Row  Cost element
1    1. Cost of materials
2    1.2 OS/390 hardware costs
3    1.2.1 CPU
4    1.2.2 I/O and storage
5    1.2.2.1 DASD
6    1.2.2.2 Tape
7    1.2.2.3 Tape robotic
8    1.2.3 Printer
9    1.3 OS/390 software costs
10   1.3.1 Operating system
11   1.3.2 Database
12   1.3.3 Subsystem
13   1.3.4 Print software
14   1.3.5 DASD/tape software
15   1.3.6 Software development tools
16   1.3.7 Standard application software
17   1.1 Network costs
18   1.1.1 Hardware
19   1.1.2 Software
20   1.1.3 Network services
21   1.4 External cost
22   1.4.1 Desktop PCs
23   1.4.2 Phone systems
24   1.4.3 Copy services
25   1.5 Other cost
26   1.5.1 Training cost
27   1.5.2 Other material costs
28   Sum cost of materials (lines 1 to 27)
29   2. Personnel cost
30   2.1 Management
31   2.2 System operating
32   2.3 Administrator
33   2.4 Programmer
34   Sum personnel cost
35   Sum total cost (lines 29 and 34)
36   Cost distribution from the indirect cost center (line 35), corresponding to each center's portion of the personnel cost (line 34): $ x 60%, $ x 30%, $ x 10%
37   Sum indirect cost center multiplied by factor
38   Overall costs (lines 35 and 37)
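The indirect cost distribution in rows 36 to 38 can be sketched as follows. This is a minimal illustration; the cost-center names, dollar amounts, and the 60/30/10 split are assumed for the example, not figures from this redbook.

```python
# Distribute an indirect cost-center total across primary cost centers
# in proportion to their personnel cost (rows 36-38 of Table 1-2).
# All figures below are illustrative.

def distribute_indirect(indirect_cost, personnel_costs):
    """Split indirect_cost across cost centers by personnel-cost share."""
    total = sum(personnel_costs.values())
    return {cc: indirect_cost * pc / total for cc, pc in personnel_costs.items()}

personnel = {"CPU": 60000.0, "DASD": 30000.0, "Print": 10000.0}  # line 34 per center
indirect = 50000.0                                               # indirect total (line 35)

shares = distribute_indirect(indirect, personnel)                # 60% / 30% / 10% split
overall = {cc: personnel[cc] + shares[cc] for cc in personnel}   # line 38 per center
```

Because the personnel costs here are in a 60/30/10 ratio, the CPU center absorbs 60 percent of the indirect total, matching the factors shown in the table.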
CPU rate
Table 1-3 gives an example of calculating the monthly rate for CPU seconds. The costs in the far right-hand column come from the information identified in Table 1-2 on page 8. The CPU seconds per month are derived from the raw seconds in a month (24 hours x 60 minutes x 60 seconds x 365 days / 12) times the number of processors times the utilization percentage, or 1,971,000 seconds times the number of processors in this example. Finally, the CPU rate is determined by dividing the CPU cost by the CPU capacity (utilized seconds per month).
Table 1-3 Calculate CPU seconds rate
CPU capacity:
Number of processors: n
Hours per day: 24
Minutes per hour: 60
Seconds per minute: 60
Days per month: 365 / 12
Utilization (%): 75
= Seconds per month

Contributing cost elements (Cost $/month):
Hardware CPU: $$
Operating system: $$
Database software: $$
Personnel costs: $$
Apportionment of indirect costs: $$
Note: The cost rate for CPU seconds is: Cost per month ($) / CPU seconds per month.
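The capacity and rate calculation from Table 1-3 can be sketched as follows. The cost figure passed in is illustrative; the formula follows the text: 24 x 60 x 60 x (365 / 12) x utilization, per processor.

```python
# Monthly CPU-seconds capacity and CPU-second rate, following Table 1-3.

def cpu_seconds_per_month(processors, utilization=0.75):
    """Utilized CPU seconds available per month across all processors."""
    return 24 * 60 * 60 * (365 / 12) * utilization * processors

def cpu_rate(monthly_cpu_cost, processors, utilization=0.75):
    """Cost per utilized CPU second: cost per month / CPU seconds per month."""
    return monthly_cpu_cost / cpu_seconds_per_month(processors, utilization)
```

With one processor at 75 percent utilization, the capacity works out to the 1,971,000 seconds per month quoted in the text.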
DASD rate
The example given in Table 1-4 calculates the monthly rate for DASD usage. The costs in the far right-hand column come from the information identified in Table 1-2 on page 8. The DASD capacity per month is derived from the installed DASD times the utilization percentage (80 percent in this example). Finally, the DASD rate is determined by dividing the DASD cost by the DASD capacity (GB per month).
Table 1-4 Calculate DASD rate
DASD capacity:
Installed DASD GBs: 100 GB
Utilization (%): 80
= DASD capacity per month (100 * 80%): 80 GB

Contributing cost elements (Cost $/month):
Hardware DASD: $$
Operating system: $$
DASD management software: $$
Personnel costs: $$
Apportionment of indirect costs: $$
= Cost per month: $$
Note: The cost rate for DASD is: Cost per month ($) / DASD GBs.
Print rate
The example given in Table 1-5 calculates the monthly rate for Print usage. The costs in the far right hand column come from the information identified in Table 1-2 on page 8. The calculation of the print capacity per month is derived from the number of installed host printers times the print capacity per hour times the number of work hours times the number of work days per month times the utilization percentage (2,112,000 print pages per month). Finally, the Print rate is determined by Print cost divided by Print capacity (pages per month).
Table 1-5 Calculate print rate
Print page capacity:
Number of system laser printers (PRT1, PRT2): 2
Print capacity (pages/hour): 4,400
Work days per month: 20
Work hours per day: 16
Utilization (%): 75
= Print capacity (2 * 4,400 * 20 * 16 * 75%): 2,112,000 pages per month

Contributing cost elements (Cost $/month):
Hardware: $$
Print software: $$
Personnel costs: $$
Apportionment of indirect costs: $$
= Cost per month: $$
Note: The cost rate for Print pages is: Cost per month ($) / Print capacity.
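The print capacity and rate calculation from Table 1-5 can be sketched in the same way. The defaults match the example figures in the text; any cost passed to the rate function is illustrative.

```python
# Monthly print-page capacity and per-page rate, following Table 1-5:
# printers * pages/hour * work hours/day * work days/month * utilization.

def print_capacity(printers=2, pages_per_hour=4400, work_days=20,
                   work_hours=16, utilization=0.75):
    """Utilized print pages available per month."""
    return printers * pages_per_hour * work_days * work_hours * utilization

def print_rate(monthly_print_cost, capacity):
    """Cost per printed page: cost per month / pages per month."""
    return monthly_print_cost / capacity
```

With the example values, the capacity is the 2,112,000 pages per month quoted in the text.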
Additional considerations
The examples in this section may need to be expanded to reflect the requirements of your environment. Some additional areas you may want to consider include:

Further differentiation can be made between workloads on your system, such as distinguishing between day and night shifts or between job classes (such as batch and hot batch).

A differentiation between DASD and tape is only necessary if the customer can choose whether to store data on DASD or tape. Many IT departments use storage management software to control storage usage on DASD and tape; in this case, it makes no sense to have different prices, because the customer cannot choose the storage medium. If the customer works with special DASD or tape units, then these resources can be billed as recurring charges.
Most remote workstation printers are charged directly to the using department and are not normally included in the cost of host printing. Line printer usage or microfiche creation may be considered an additional service.

The example in this section uses head count to determine overhead distribution. This distribution method may be simpler than what is needed in your environment, but a more detailed discussion of overhead loading techniques is beyond the scope of this redbook.
An accounting/chargeback system will only be as accurate as the input data you receive from all sources.
Normalization

Some resources may need to be normalized, for example, CPU usage. A faster CPU accomplishes a job more quickly, which means less measured CPU time. For example, a job might run for 10 seconds on CPU-1 but only five seconds on the faster CPU-2. Without normalization, the job would be charged half price for running on CPU-2, even though the faster machine should, if anything, be accounted as more expensive. Instead, a normalization factor should be applied; for example, CPU-2 might be given a normalization factor of 2.1, so a job running on CPU-2 for five seconds is charged 10.5 seconds of CPU time.
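The normalization step can be sketched as follows. The CPU model names and factors are illustrative; in practice the factors come from your own benchmark of each machine against a chosen reference CPU.

```python
# CPU-time normalization: convert measured seconds on any CPU model to
# equivalent seconds on a reference CPU, so that faster machines are not
# billed less for the same work. Factors below are illustrative.

NORMALIZATION = {"CPU-1": 1.0, "CPU-2": 2.1}  # speed relative to reference CPU-1

def normalized_seconds(cpu_model, measured_seconds):
    """Chargeable CPU seconds = measured seconds * normalization factor."""
    return measured_seconds * NORMALIZATION[cpu_model]
```

A five-second job on CPU-2 is charged as 10.5 reference seconds, while the same work measured as 10 seconds on CPU-1 is charged as 10 seconds, so both are billed equally.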
Chapter 2. Resource accounting feature
During the collection process, accounting information is processed from the various logs, represented by the product event logs on the left side of the data flow shown in Figure 2-1 on page 18. The accounting information is placed into the resource accounting subsystem data tables RAFTSO, RAFBD2, RAFCICS, RAFIMS, RAFDMVS, RAFDASD, RAFBATCH, and RAFSTC. The lookup tables RAFADASD, RAFAOMVS, RAFAIMVS, RAFACICS, RAFADB2, and RAFATSO are referenced to correctly associate the collected resource usage with the users and groups consuming the services.

All of the subsystem data tables (RAFTSO, RAFBD2, RAFCICS, RAFIMS, RAFDMVS, RAFDASD, RAFBATCH, and RAFSTC) are used to populate the daily resource usage table USE_SUMMARY_D. The USE_SUMMARY_D table in turn is used to populate the BILLED_DATA table. The information collected in the BILLED_DATA table is transferred to the workstation where the Accounting Feature for the Workstation is installed.

Note: You can set rates and define customers and accounts on either the host or the workstation, depending on how you want to manage accounting and chargeback in your environment. It is important that you select one implementation or the other, because this control information is not passed between the host and the workstation. If you plan to use the Accounting Feature for the Workstation for accounting and chargeback, it is recommended that you set rates and define customers and accounts on the workstation to focus control in one tool.

A reference to using RACF information to provide the association of user IDs to cost centers in the various accounting lookup tables (RAFADASD, RAFAOMVS, RAFAIMVS, RAFACICS, RAFADB2, and RAFATSO) is also shown in Figure 2-1 on page 18. Refer to Appendix C, Importing information from RACF on page 277 for more information on performing this association.
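The account translation performed through the lookup tables can be sketched as follows. This is only an illustration of the idea of matching usage records against translation rows; the pattern syntax and column names are assumptions for the sketch and do not reflect the actual RAFA* table layouts.

```python
# Sketch of account translation: each collected usage record is matched
# against lookup rows to find the cost center that should be charged.
# Patterns and cost-center names are illustrative, not product definitions.

lookup_tso = [  # (userid pattern, cost center); % acts as a trailing wildcard
    ("DEV%", "CC-100"),
    ("OPS%", "CC-200"),
    ("%",    "CC-999"),  # catch-all for unmatched user IDs
]

def translate(userid, lookup):
    """Return the cost center of the first matching pattern, or None."""
    for pattern, cost_center in lookup:
        prefix = pattern.rstrip("%")
        if userid.startswith(prefix):
            return cost_center
    return None
```

A catch-all row like the last one above ensures every record lands somewhere, so unassigned usage can be reviewed rather than silently dropped.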
To install any of the Resource Accounting Feature subcomponents, first select the Resource Accounting Component from the Tivoli Decision Support for OS/390 component list panel, as shown in Figure 2-2, and press the PF6 - Install key.
Next, select the subcomponent accounting items you want installed on the RAF Component Parts panel shown in Figure 2-3 on page 21 and press Enter.
You can choose to install the subcomponents you selected on the RAF Component Parts panel, shown in Figure 2-3, either online or in batch, using the install components dialog box shown in Figure 2-4.
Note: You may find installing components in batch advantageous, because the output of the installation job can be kept for future reference. If you do install the subcomponents online, you can still save the installation output by immediately copying it from your userid.DRLOUT TSO data set.

The RAF components are provided as members in the Tivoli Decision Support for OS/390 component library, with a default data set name of DRL150.SDRLDEFS. The members for each component and their data tables can be viewed using the Tivoli Decision Support for OS/390 Components dialog box, shown in Figure 2-2 on page 20.

Each subcomponent in the accounting component is made up of the following set of object definitions:

Log definitions
Log definitions specify the input source for subcomponents. Some subcomponents, for example, IMS, use more than one log definition to accommodate multiple release levels.

Record definitions
Record definitions define the field layouts for each record in a log. Each log contains one or more record types.

Log procedures and record procedure definitions
Programs (C or Assembler) that handle more complex record layouts. The CICS, DASD, and IMS subcomponents provide record procedure definitions, and the IMS subcomponent provides a log procedure definition.

Lookup tables
Lookup tables provide value substitution and additions for subcomponent processing. One lookup table is used for account translation in each subcomponent. Additionally, lookup tables are used throughout RAF for resource name translation. More information about lookup tables is given in Section 2.1.3, RAF lookup and control tables on page 23.

Data tables
Data tables are used to store collected data. The BATCH, STC, and TSO component parts each have a second data table to hold intermediate data during collection.

Update definitions
Update definitions define the process for updating the data tables with information found in the records.
Purge definitions
Purge definitions specify retention criteria for data stored in the data tables.

In addition to the definitions provided with the different Resource Accounting Feature subcomponents, log definitions, record definitions, log procedures, and record procedures from the Tivoli Decision Support for OS/390 components are used. For this reason, the Tivoli Decision Support for OS/390 components need to be installed before the Resource Accounting Feature subcomponents. The record definitions used by the Resource Accounting Feature and the Tivoli Decision Support for OS/390 components are listed in Appendix D, Record definitions supplied with Performance Reporter, in the Tivoli Decision Support for OS/390 Release 5.1 Administration Guide, SH19-6816. For a description of the records, refer to the product manuals noted in the appendix.

The Resource Accounting Feature subcomponents use the same control tables as the Tivoli Decision Support for OS/390 components. These control tables include: DAY_OF_WEEK, PERIOD_PLAN, SCHEDULE, and SPECIAL_DAY.

CPU utilization figures are not normalized in the data tables belonging to the Tivoli Decision Support for OS/390 components, nor in any of the data tables used by the individual subcomponents of the Resource Accounting Feature. When the Accounting Feature for the Workstation is used, CPU normalization is performed on the workstation and should not be done inside the host Resource Accounting Feature.

The account translation lookup tables are maintained within Tivoli Decision Support for OS/390. See Appendix C, Importing information from RACF on page 277 for more information on maintaining the lookup tables using RACF information.
The network resource translation table (NW_RESOURCE) for NPM and NETV

The account translation lookup tables for each subsystem:
RAFABATCH
RAFACICS
RAFADASD
RAFADB2
RAFAHSMBKUP
RAFAHSMMIGR
RAFAIMS
RAFANETSESM
RFAANETV
RAFASTC
RAFATSO

See Appendix C, Importing information from RACF on page 277 for more information on importing.

Additionally, the following lookup tables are part of the Accounting Feature, but are not used when the Accounting Feature for the Workstation is used for accounting and chargeback:
The billing period table (BILLING_PERIOD)
The customer table (CUSTOMER)
The account table (ACCOUNT)
The account proration table (ACCT_PRORATE)
The credit/debit table (CREDIT_DEBIT)
The CPU normalization table (CPU_NORMAL_DATA)
The price list table (PRICE_LIST)

The PERIOD_PLAN and SPECIAL_DAY tables are shipped as part of the Tivoli Decision Support for OS/390 base code. If both tables were customized during the initial Tivoli Decision Support for OS/390 installation, then the Resource Accounting Feature will use the values from these tables, and no additional updating of these tables is required.
When installing and using the DFSMS component, it is important to populate the DFSMS_LAST_RUN lookup table before performing any collections. After each DCOLLECT, the collection process automatically updates this table with record type and time stamp information. The Resource Accounting Feature DFSMS subcomponent uses the DFSMS_LAST_RUN lookup table as well, so if you updated this lookup table as part of the DFSMS component installation, you do not need to perform any action on it at this time.
During the collection process, Tivoli Decision Support for OS/390 stores JES resource usage information in the RAFJOBLOG and RAFBATCH tables. Table 2-1 shows the list of columns and the corresponding record fields that are used for batch job accounting purposes.
Table 2-1 Account Information for JES resources
Field name in   Column name    Column name
SMF records     in RAFJOBLOG   in RAFBATCH   Description
SMF30JBN        J_JOBNAME      JOBNAME       Job name from the job card
SMF6JBN                                      Job name from the job card
SMF30CLS                                     Job class
SMF30SID                                     MVS System ID
SMF30RUD                                     RACF User-ID
SMF30GRP                                     RACF group
SMF30USR                                     Program name
SMF6OUT                                      Printer name
SMF30ACT                                     Details obtained from the ACCT parameter of the job card
SMF30ACT        J_ACCT2        ACCT2         Details obtained from the ACCT parameter of the job card
SMF30ACT        J_ACCT3        ACCT3         Details obtained from the ACCT parameter of the job card
SMF30ACT                                     Details obtained from the ACCT parameter of the job card
SMF30ACT        J_ACCT5        ACCT5         Details obtained from the ACCT parameter of the job card
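As the table shows, the SMF30ACT field carries the job card's ACCT parameter, which is split into up to five account fields. Below is a hedged sketch of such a split, assuming comma-separated subfields; the product's actual parsing rules may differ:

```python
def split_account_string(acct: str, max_fields: int = 5):
    """Split a job-card ACCT string such as 'DEPT01,PROJ42' into up to five
    subfields, padding missing ones with empty strings. Comma-separated
    subfields are an assumption made for illustration."""
    parts = [p.strip() for p in acct.split(",")][:max_fields]
    parts += [""] * (max_fields - len(parts))
    return parts  # [ACCT1, ACCT2, ACCT3, ACCT4, ACCT5]

print(split_account_string("DEPT01,PROJ42"))
# → ['DEPT01', 'PROJ42', '', '', '']
```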
TSO resources can be associated with a customer using the user ID (USERID) or the account fields (ACCT1, ACCT2, ACCT3, ACCT4, and ACCT5). Using the TSO logon panel, it is possible to insert 40 characters of information into the account fields.
Tip: For TSO users who work on several projects during a day, it is not practical to log off and log on with new account information each time they begin work on a different project. In such situations, use the employee's default personal account number to record usage, then distribute usage and costs over the different projects by other means, such as project time registration.
TSO usage is best represented using a metric of CPU time. To calculate the total time in seconds, sum the SRB time, TCB time, IIP time, HCT time, and RCT time. If you use the RACF user ID to associate TSO usage with a cost center, it is possible to fill the lookup table RAFATSO automatically with the cost center information from RACF. Refer to Appendix C, Importing information from RACF on page 277 for more information. Figure 2-6 gives an overview of the possible account information in the TSO accounting component. Table 2-2 on page 28 lists all the possibilities for accounting for these resources. You can use the account information from the job card and, if there is no valid information, use the user ID for the account connection from the lookup table DRL.RAFATSO.
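The CPU-time metric described for TSO can be sketched as a simple sum of the five component times; the session figures used here are hypothetical:

```python
def tso_cpu_seconds(srb: float, tcb: float, iip: float, hct: float, rct: float) -> float:
    """Total TSO CPU time in seconds: the sum of the five component times
    named in the text (SRB, TCB, IIP, HCT, and RCT time)."""
    return srb + tcb + iip + hct + rct

# Hypothetical component times, in seconds, for one TSO session:
total = tso_cpu_seconds(srb=1.2, tcb=8.5, iip=0.3, hct=0.1, rct=0.4)
print(round(total, 2))  # 10.5
```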
STC resource usage is recorded in the tables RAFADDRLOG and RAFSTC. Table 2-3 shows the columns and related record fields that are used for STC resource accounting.
Table 2-3 Account information for STC resources
Column name   Record name   Description
STCNAME       SMF30JBN      Job name from the job card
SYSID         SMF30SID      MVS System ID
PGMNAME       SMF30PGM      Program name
USERID        SMF30RUD      RACF User-ID
RACFGRP       SMF30GRP      RACF group
ACCT1         SMF30ACT      Details obtained from the ACCT parameter of the job
ACCT2         SMF30ACT      Details obtained from the ACCT parameter of the job
ACCT3         SMF30ACT      Details obtained from the ACCT parameter of the job
ACCT4         SMF30ACT      Details obtained from the ACCT parameter of the job
ACCT5         SMF30ACT      Details obtained from the ACCT parameter of the job
STC resource usage can only be accounted for using the account fields (ACCT1, ACCT2, ACCT3, ACCT4, and ACCT5). For this reason, it is necessary to supply account information in all EXEC statements of the started task procedures. You can find more information in Section 3.3, Interval recording and long running started tasks on page 47. The best accounting metric to use for started tasks is CPU time. To calculate the total time taken by the jobs in seconds, sum the SRB time, TCB time, IIP time, HCT time, and RCT time.
There are three different methods for associating STC resources with a cost center:
System STCs, such as RMF, VTAM, JES2, or TSO, cannot be assigned directly to a cost center. Their usage can be accounted for by using a dummy account that is internal to the IT department. The various cost centers can then be charged for these STCs as a system overhead, either by distributing the cost over the cost centers based on another usage value, or by charging each cost center a fixed amount.
STC resources that provide their own accounting information. For example, DB2 records resource accounting information in SMF type 101 records, CICS in SMF type 110 records, and IMS in the IMF logs. These STCs can be accounted for by using a dummy account for the internal IT department.
For all other STCs, assign the STC directly to a cost center based on the cost center account information in the EXEC statements.
Figure 2-7 gives an overview of the possible account information in the Started Task accounting component. Table 2-3 on page 30 lists all the possibilities for accounting for these resources. You can use the account information from the job card, so it is not necessary to fill the lookup table DRL.RAFASTC.
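The overhead-distribution choice described for system STCs, proportional to another usage value versus a fixed amount per cost center, can be sketched as follows, with hypothetical cost-center names and figures:

```python
def distribute_overhead(total_cost: float, usage_by_center: dict) -> dict:
    """Spread a system-overhead cost across cost centers in proportion to
    another usage value, such as each center's own CPU seconds."""
    total_usage = sum(usage_by_center.values())
    return {c: total_cost * u / total_usage for c, u in usage_by_center.items()}

def fixed_overhead(total_cost: float, centers: list) -> dict:
    """Alternative method: charge each cost center an equal fixed amount."""
    return {c: total_cost / len(centers) for c in centers}

usage = {"CC100": 600.0, "CC200": 300.0, "CC300": 100.0}  # hypothetical CPU seconds
print(distribute_overhead(1000.0, usage))  # {'CC100': 600.0, 'CC200': 300.0, 'CC300': 100.0}
print(fixed_overhead(900.0, list(usage)))  # each center pays 300.0
```

Proportional distribution rewards light users but makes the overhead charge vary month to month; a fixed amount is predictable but insensitive to actual consumption.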
Column name   Record name   Description
                            RACF User-ID used for the DB2 connection.
                            Information about the connection is used. It contains
                            information for BATCH, TSO, DB2CALL, SERVER, UTILITY,
                            or CICS / IMS systems, or user-defined information.
CORRELAT      QWHCCV        Information from DB2.
PLANNAME      QWHCPLAN      DB2 plan name.
When an application accesses DB2, the CPU time used by DB2 is reported back to the calling application for BATCH, TSO, STC, and IMS connections. However, DB2 does not report the CPU resource usage back for all applications accessing it. For this reason, it is important to understand the following when doing accounting for DB2 resources:
When a CICS program runs an SQL statement, the CICS task goes into a wait state and DB2 takes over. The CICS accounting data is in SMF record 110 and the DB2 data is in SMF record 101; there is no overlap. For this reason, it is necessary to account for the DB2 resource usage separately from the CICS usage.
When an IMS program runs an SQL statement, the accounting data is recorded both in DB2 and in IMS (record type 07 in the IMS log).
When a TSO program runs an SQL statement, the accounting data is recorded both in DB2 and in TSO (SMF record type 30).
When a batch program runs an SQL statement, the accounting data is recorded both in DB2 and in the batch job (SMF record type 30).
When an STC program runs an SQL statement, the accounting data is recorded both in DB2 and in the STC (SMF record type 30).
When connecting from a workstation, using, for example, the Tivoli Decision Support for OS/390 Viewer, the resource consumption is recorded only in SMF 101. These resources can be distinguished by a connect type equal to SERVER, and this usage has to be accounted for separately.
Because of the double recording for IMS, TSO, STC, and batch, to completely account for DB2 resource usage you only need to account for the DB2 resource usage of the CICS and SERVER connections.
The best accounting metric for DB2 usage is CPU time, which provides the total time taken by the transaction in seconds, calculated as:
The sum of SRB time and TCB time (for DB2 up to Version 5.1).
TCB time only (for DB2 Version 6.1 and later); the DB2 SRB time is included in the carrier system, such as TSO, IMS, or CICS.
Figure 2-8 gives an overview of the possible account information in the DB2 accounting component. Table 2-4 on page 31 lists all the possibilities for accounting for these resources. To generate the account information, use the lookup table DRL.RAFADB2.
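The version-dependent DB2 CPU metric can be expressed as a small sketch (the times used are hypothetical figures):

```python
def db2_cpu_seconds(tcb: float, srb: float, db2_version: tuple) -> float:
    """CPU metric for DB2 accounting per the rule in the text: SRB + TCB time
    through Version 5.1; TCB time only from Version 6.1 on, where the DB2 SRB
    time is attributed to the carrier system (TSO, IMS, or CICS)."""
    if db2_version < (6, 1):
        return tcb + srb
    return tcb

print(db2_cpu_seconds(tcb=2.0, srb=0.5, db2_version=(5, 1)))  # 2.5
print(db2_cpu_seconds(tcb=2.0, srb=0.5, db2_version=(6, 1)))  # 2.0
```

Applying the wrong rule for the installed DB2 version would either drop SRB time from the charge or double-charge it through the carrier system.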
CICS transaction accounting can be based on the RACF user ID, transaction ID, terminal ID, or operator ID. It is recommended that you use the RACF user ID as the base for performing CICS resource accounting. The best metric for CICS resource usage is the CPU time associated with transactions. If you use the RACF user ID to associate CICS usage with a cost center, it is possible to fill the RAFACICS lookup table automatically with the cost center information from RACF. Refer to Appendix C, Importing information from RACF on page 277 for more information. Figure 2-9 gives an overview of the possible account information in the CICS accounting component. Table 2-5 lists all the possibilities for accounting for these resources. To generate the account information, use the DRL.RAFACICS lookup table.
IMS transactions can be associated with the RACF user ID, the transaction name, or the logical terminal name. In earlier releases of IMS, many transactions were associated only with a logical terminal name, but since IMS Version 4, the RACF user ID has become the more accepted way of associating IMS resource usage. The best accounting metric for IMS transactions is CPU time.
Note: The RAFIMS table contains summarized information for different types of transactions. For batch message program (BMP) transactions, the CPU time is also collected in the table RAFBATCH for SMF. This can result in the value in USE_SUMMARY_D containing a high number. This is documented in APAR PQ35089. The comments from APAR PQ35089 and a possible solution are shown in Example 2-1.
Example 2-1 Change Update RAFIMS_USSM - APAR PQ35089
NOTE: Please, consider that in some cases the USE_SUMMARY_D
| table may contain some high invalid value. This happens
| when the table is populated with both IMS and SMF data
| generated by IMS BMP transactions. The BMP is not
| started by the IMS control region, but is started by
| submitting a batch job, for example by a user via TSO,
| or via a job scheduler such as OPC/ESA. This kind of
| transactions generates both IMS and SMF 30 records
| with the same CPU_SECONDS value. In this case, if both
| IMS and MVS Components are installed, the USE_SUMMARY_D
| column CPU_SECONDS could be incremented twice for the
| same job causing a too high value.
| In order to avoid this problem, You should modify, for
| examples, the RAFIMS_USSM Update definition in Performance
| Reporter member DRLUIMS filtering the BMP transactions
| as follows:
|
| FROM:
|
| DEFINE UPDATE RAFIMS_USSM
| VERSION 'IBM.130'
| FROM &PREFIX.RAFIMS
| WHERE ACCOUNT <> 'NULLACCT'
|
| TO:
|
| DEFINE UPDATE RAFIMS_USSM
| VERSION 'IBM.130'
| FROM &PREFIX.RAFIMS
| WHERE ACCOUNT <> 'NULLACCT' AND TRANS <> '$BMP '
|
| This is just an example of how to solve this kind of problem
If you use the RACF user ID for associating IMS accounting information with a cost center, it is possible to fill the RAFAIMS lookup table automatically with the cost center information from RACF. Refer to Appendix C, Importing information from RACF on page 277 for more information. Figure 2-10 gives an overview of the possible account information in the IMS accounting component. Table 2-6 on page 35 lists all the possibilities for accounting for these resources. To generate the account information, use the DRL.RAFAIMS lookup table.
Figure 2-11 gives an overview of the possible account information in the DASD accounting component. Table 2-7 lists all the possibilities for accounting for these resources. To generate the account information, use the DRL.RAFADASD lookup table.
Chapter 3.
Example 3-1 SMF parameter modification for TDS/390
INTVAL (15)
SYNCVAL (0)
SUBSYSTEMS (STC, INTERVAL (SMF, SYNC))
For more information on SMF recording parameters, see OS/390 Version 2 Release 10.0 MVS System Management Facilities, GC28-1783.
Tivoli Decision Support for OS/390 Release 5.1 System Performance Feature Reference Volume 1, SH19-6819
Tivoli Decision Support for OS/390 Release 5.1 System Performance Feature Reference Volume 2, SH19-4494
Tivoli Decision Support for OS/390 Release 5.1 Administration Guide, SH19-6816
Tivoli Decision Support Release 5.1 IMS Performance Feature Guide and Reference, SH19-6825
It is necessary to collect all SMF logs from all operating system images and IMS logs from all IMS sub-systems you want to charge for, including any test and development systems. The resources from these test and development systems may not be used for charging back, but they may be used for performance measurements.
Figure 3-1 shows the process to be followed for the collection and archiving of SMF and IMS log data. Key points in the log collection process are:
When a log switch occurs, a dump job should be performed, started either by an automatic operations program or manually by the operator.
The SMF log from the SYS1.MANx data set can be dumped using the IFASMFDP program. This dumped log is shown as DRL Log File in Figure 3-1 on page 42.
The IMS log is archived into an IMS system log data set (SLDS) and processed using the DRL2LOGP program in your IMS log collection procedure.
The generated SMF and IMS logs from all systems need to be made available to the Tivoli Decision Support for OS/390 system for collection. The Tivoli Decision Support for OS/390 collection job (DRLJCOLL) should be executed as soon as possible after the SMF or IMS log file becomes available. Logs can be stored in a daily Generation Data Group (GDG) for archiving and then aggregated into a monthly GDG.
Note: To ensure DCOLLECT is aware of all volumes, all disk volumes should be online when the DCOLLECT is executed.
The IDCAMS DCOLLECT job should be run against each volume table of contents (VTOC) every day. It is necessary that IDCAMS DCOLLECT runs before the Data Facility Storage Management Subsystem (DFSMS) housekeeping. To collect the volume usage information, you should run the Tivoli Decision Support for OS/390 collection job on the same day that the IDCAMS DCOLLECT flat file is generated. To analyze the DASD and tape information, you can use either the average or the maximum allocated space for each month. To minimize the effect of swings or spikes in the data, we recommend that you use the average allocated space.
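The recommendation to charge on the average rather than the maximum can be illustrated with a hypothetical month of daily allocation figures:

```python
def monthly_allocated(daily_mb):
    """Return (average, maximum) of daily allocated DASD space for a month.
    The text recommends charging on the average to smooth out spikes."""
    return sum(daily_mb) / len(daily_mb), max(daily_mb)

# Hypothetical daily allocation figures (MB) with one temporary spike:
days = [500.0, 500.0, 520.0, 2000.0, 510.0]
avg, peak = monthly_allocated(days)
print(avg, peak)  # 806.0 2000.0 — the spike dominates the maximum but only nudges the average
```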
(Figure 3-3: Day 1 / Day 2 timeline — create the RACF flat file and load the lookup tables, run the TDS collect of the IDCAMS DCOLLECT output log file, switch SMF / IMS at 00:10, create the SMF / IMS dump, and start the TDS collect. At this specific point in time, all of the previous day's performance data is available for all systems.)
Figure 3-3 shows the sequence of events to be performed in the daily collection process, including:
Populating the lookup tables from RACF or another mechanism using the RACF flat file collection (discussed in Appendix C, Importing information from RACF on page 277).
Running IDCAMS DCOLLECT and its corresponding Tivoli Decision Support for OS/390 collection. It is important that these jobs be run on the same day.
Forcing an SMF and IMS log switch at midnight.
Triggering the automatic collection jobs.
Following the midnight switch of logs and the completion of all Tivoli Decision Support for OS/390 collecting, the accounting data from the previous day is consolidated in the host database (DB2). The Tivoli Decision Support for OS/390 administrator should check the daily processes using the Tivoli Decision Support for OS/390 administration dialogs and address any problems that may have occurred. From the Log panel, it is possible to choose the log definition to import, such as SMF, DCOLLECT, or user defined. Select the logs that were processed and press the F6 function key to show the log collection history, as shown in Figure 3-4. The collection history provides information about the collections. If the status is OK, the collection process ended with a return code of zero. If the status contains a warning, you can open the record for more information. Normally, in situations where a warning is displayed, you will need to check the job output to gain a full understanding of the problem.
For SMF record type 30 subtypes 2 and 3, the accounting codes are taken from the EXEC statements, so all STCs must have an accounting code in the EXEC statement of the startup procedure. By default, the Resource Accounting Feature Started Task sub-component (RAFSTC) works with SMF30 subtype 4 records. The subtype 4 record contains the total resources used from the time a step started until the time the step completes; that is, it generally contains the accumulated totals for the step, in contrast to the data in the subtype 2 and 3 interval records. To be able to process interval accounting information, you will need to modify the STC update definitions. The installation of the shipped Resource Accounting Feature Started Task sub-component creates three update definitions: RAFADDR_SMF30, RAFADDR_SMF30_A, and RAFADDR_SMF30_E. The WHERE condition of the provided update definitions refers to subtype 4 records. To work with subtype 2 and 3 records, these update definitions need to be changed. Also, the accounting time stamp, which originally used the reader start time (SMF30RSD and SMF30RST), needs to be changed to use the SMF30ISS value.
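The relationship between the subtype 4 step totals and the subtype 2 and 3 interval records can be sketched as follows; the point is that the interval figures accumulate to the step total, so a collector must use one source or the other, never both (the figures are hypothetical):

```python
def step_total_from_intervals(interval_cpu_seconds):
    """Subtype 2/3 interval records each carry the CPU time used during one
    interval; summing them reproduces the accumulated total that a subtype 4
    record reports at step end. Charging both would double-count the step."""
    return sum(interval_cpu_seconds)

intervals = [12.0, 15.0, 9.0]  # hypothetical per-interval CPU seconds
print(step_total_from_intervals(intervals))  # 36.0 — matches the subtype 4 step total
```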
Changing the SMF30 update definitions can be accomplished either before or after the Resource Accounting Feature Started Task sub-component is installed. Section 3.3.1, Before installing RAF STC component on page 48 describes the changes to the update definitions to be made before installing the sub-component. Section 3.3.2, After installing RAF STC component on page 50 shows how to modify the update definitions after the sub-component has been installed.
SMF30RVN >= 05
AND (SMF30TYP = 2 OR SMF30TYP = 3)   <== CHANGE
TO &PREFIX.RAFADDRLOG
. . .
LET (
. . .
ID_TIME = TIMESTAMP (SMF30ISS),   <== CHANGE
USER_FLD = SUBSTR(SMF30JBN,1,1))
GROUP BY (A_TIMESTAMP = ID_TIME,
. . .
Example 3-3 shows the modification to the RAFADDR_SMF30_E update definition to support the use of interval accounting information from type 2 or 3 SMF30 records and time stamp information from SMF30ISS.
Example 3-3 STC interval accounting changes for RAFADDR_SMF30_E
DEFINE UPDATE RAFADDR_SMF30_E
VERSION 'ITSO.150'
FROM SMF_030 SECTION EXCP
WHERE SMF30WID = 'STC'
AND SMF30RVN >= '05'
AND (SMF30TYP = 2 OR SMF30TYP = 3)   <== CHANGE
TO &PREFIX.RAFADDRLOG
LET (ID_TIME = TIMESTAMP (SMF30ISS))   <== CHANGE
GROUP BY (A_TIMESTAMP = ID_TIME,
A_STCNAME = SMF30JBN,
. . .
Example 3-4 shows the modification to the RAFADDR_SMF30_A update definition to support the use of interval accounting information from type 2 or 3 SMF30 records and time stamp information from SMF30ISS.
Example 3-4 STC interval accounting changes for RAFADDR_SMF30_A
DEFINE UPDATE RAFADDR_SMF30_A
VERSION 'ITSO.150'
FROM SMF_030 SECTION ACCOUNTING
WHERE SMF30WID = 'STC'
AND SMF30RVN >= '05'
AND (SMF30TYP = 2 OR SMF30TYP = 3)   <== CHANGE
TO &PREFIX.RAFADDRLOG
LET (S1 = SECTNUM(ACCOUNTING),
ID_TIME = TIMESTAMP (SMF30ISS))   <== CHANGE
GROUP BY (A_TIMESTAMP = ID_TIME,
A_STCNAME = SMF30JBN,
. . .
Follow these steps: 1. From the Tivoli Decision Support for OS/390 main panel, as shown in Figure 3-5, select Administration (2) and press Enter.
2. From the Administration panel, as shown in Figure 3-6, select Tables (4) and press Enter.
3. From the table list panel, as shown in Figure 3-7, page down (F8=Fwd) until you find the RAFADDRLOG table. Select the RAFADDRLOG table by placing a character to the left of the table name, as shown in Figure 3-7, and press the F5 key for Updates.
4. From the list of update definitions for the RAFADDRLOG table, as shown in Figure 3-8 on page 53, select the RAFADDR_SMF30 definition from the list by placing a character to the left of the definition name and press Enter to display the definition shown in Figure 3-9.
5. On the RAFADDR_SMF30 Update Definition panel, as shown in Figure 3-9, update the Condition statement. Because the length of the new condition statement will be longer than the input on the Update Definition panel, you will need to use the show field function by placing the cursor on the Condition input area and pressing the F10-Show fld key to display the Show Field panel, as shown in Figure 3-10 on page 55.
6. On the Show Field panel, as shown in Figure 3-10, update the condition statement to work with either the type 2 or 3 SMF30 record sub types, as shown in Figure 3-11.
7. When you have completed the updating of the condition statement in the Show Field panel, press Enter to save your update and return to the Update Definition panel, as shown in Figure 3-12.
8. To update the time stamp information, you need to change the abbreviation values for ID_TIME. To see the abbreviations, place the cursor in the expression value for ID_TIME, as shown in Figure 3-12, and press the F5=Abbrev key.
9. On the Abbreviations panel, as shown in Figure 3-13, change the Expression for the ID_TIME abbreviation from TIMESTAMP (SMF30RSD,SMF30RST) to TIMESTAMP (SMF30ISS). When you have completed the change to the ID_TIME value, press the F3=Exit key to save the change and return to the Update Definition panel, as shown in Figure 3-14.
10. To save all the changes you have made to the update definition, press the F3=Exit key to save and exit the update definition function. You will be returned to the Update Definition list panel, as shown in Figure 3-15.
To complete the updating of all the definitions, follow the process outlined above to make the following changes to the two other update definitions:
For RAFADDR_SMF30_E:
Change the Condition to SMF30WID = 'STC' AND (SMF30TYP = 2 OR SMF30TYP = 3)
Change the Expression for A_TIMESTAMP to TIMESTAMP (SMF30ISS)
For RAFADDR_SMF30_A:
Change the Condition to SMF30WID = 'STC' AND (SMF30TYP = 2 OR SMF30TYP = 3)
Change the Expression for A_TIMESTAMP to TIMESTAMP (SMF30ISS)
Note: The update definitions for RAFADDR_SMF30_E and RAFADDR_SMF30_A do not use abbreviations, so the expression for A_TIMESTAMP can be changed directly on the Update Definition panel.
An option for managing this issue is to have the download timed for the least busy time of the month when the highest number of jobs from the previous month have completed. It may also be necessary to force a clean up of the print spool a few days before the download date or work with a print output management program.
Chapter 4.
Microsoft Windows 95, Windows 98, or Windows NT Workstation 4.0 (Windows 2000 and Windows ME are not officially supported)
IBM-compatible workstation with a Pentium II processor (equivalent or higher)
100 MHz I/O bus (recommended)
64 MB RAM
100 MB disk storage
The installation starts and extracts the installation files on your workstation, as shown in Figure 4-2.
When the extract finishes, the InstallShield Wizard panel is displayed, as shown in Figure 4-3.
Click on the Next > button of the InstallShield Wizard panel, shown in Figure 4-3, and the Choose Destination Location panel is displayed, as shown in Figure 4-4.
Select the Browse button and choose the destination folder that you want the Accounting Feature for the Workstation to be installed to, or you can choose to accept the default directory chosen by the install procedure, as shown in the Destination Folder area of the InstallShield Wizard Choose Destination Location panel. When you are ready to proceed with the installation, either in the default destination folder or the one you selected using the Browse button, select the Next button. The Setup Type panel is displayed, as shown in Figure 4-5 on page 66.
On the Setup Type panel, shown in Figure 4-5, always select the Custom option for your installation. This will allow you to select the demo database to be installed along with the Accounting Feature for the Workstation. The installation of the demo database is advantageous, and is covered more in Section 5.3, Directory structure of the demo database on page 79. Select the Next button after you have clicked the Custom button. The Select Components panel is displayed, as shown in Figure 4-6 on page 67.
On the Select Components panel, as shown in Figure 4-6, select all available components and click on the Next > button. The confirmation of the Start Copying Files panel with all your choices is displayed, as shown in Figure 4-7.
After you have verified your input, and corrected the input (if necessary), select the Next > button to start the install process of placing all needed files on the workstation. A setup status panel is displayed, as shown in Figure 4-8, so you can track the progress of the installation.
When all the files are copied, including the demo files, and the database is installed, you will be asked to specify the language to be used on the workstation where the Accounting Feature for the Workstation is being installed, as shown in Figure 4-9.
Select your language from the drop-down list and click OK to proceed with the installation. When the installation is finished, the InstallShield Wizard Complete panel is displayed, as shown in Figure 4-10 on page 69.
Select the Finish button of the InstallShield Wizard Complete panel, shown in Figure 4-10, to exit the installation process. The installation program has completed the installation, but before you run the accounting console on the workstation, as described in Section 4.6, Starting the accounting console on page 74, you need to verify and correctly set the date and currency formats for the Windows operating system, as outlined in Section 4.4, Setting date format on page 69 and Section 4.5, Setting currency format on page 72.
The Regional Options panel is displayed. Select the Date tab to display the Calendar panel, as shown in Figure 4-12 on page 71 for the Windows NT and Windows 2000 environment or as shown in Figure 4-13 on page 71 for the Windows 9x environments.
On the Regional Settings Properties panel, use the drop-down list for the Short date format to select the date format that matches the format of the Tivoli Decision Support for OS/390 data exported from the host system you will be downloading accounting data from. Additionally, use the Date separator drop-down list to select the date separator character used in your host exported data. Note: Most users will find that the date format of their Tivoli Decision Support for OS/390 data is MMDDYYYY and that the separator is the dash (-). After choosing the required date format and date separator, click the Apply and then the OK buttons. You have now set the date format to be used by the Accounting Feature for the Workstation and can check and set, as needed, the currency symbol, as described in the next section.
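The date-format matching described above can be sanity-checked with a small sketch; the format string below corresponds to the note's common case (month-day-year with a dash separator) and should be adjusted to whatever your host export actually uses:

```python
from datetime import datetime

def parse_host_date(text: str, fmt: str = "%m-%d-%Y"):
    """Parse a date string from a host export. The default format matches the
    note's common case (month-day-year separated by dashes); change fmt to
    match the format your Tivoli Decision Support for OS/390 export uses."""
    return datetime.strptime(text, fmt).date()

print(parse_host_date("03-15-2002"))  # 2002-03-15
```

If the workstation's short date format does not match the export, dates like this one fail to parse or, worse, silently swap month and day.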
Note: For Windows 9x, the Regional Options setting panel will have a slightly different look, as shown in Figure 4-14.
From the Currency symbol drop-down list of the Regional Options panel (shown in Figure 4-14), select the currency symbol you want the accounting and chargeback reports to use for your accounting console. When you have selected the values you want, use the Apply and then the OK buttons to activate them. Note: While working with the Accounting Feature for the Workstation to prepare this redbook, we found that using the period (.) as the decimal symbol and the comma (,) as the digit grouping symbol caused fewer problems when performing accounting and chargeback activity for all currency symbols. You have now set the currency symbols that will be used by the Accounting Feature for the Workstation and can close the Regional Options panel.
To start the Accounting Feature for the Workstation, select Start -> Programs -> Tivoli -> Accounting Workstation Option 2.0, as shown in Figure 4-15. The Accounting Feature for the Workstation program will start and, when the accounting console is active, displays an empty panel, as shown in Figure 4-16 on page 75, from which you can start to perform accounting and chargeback activity. Information about using the accounting console is presented in the remaining chapters of this redbook.
Figure 4-16 Start panel of Accounting Feature for the Workstation 2.0
The Accounting Feature for the Workstation deals extensively with time and currency data. This requires you to set up your machine's local settings for the date format and currency. Setting these values on your workstation is discussed in Section 4.4, Setting date format on page 69 and Section 4.5, Setting currency format on page 72.
Chapter 5.
When the master database is installed on the same machine as the Accounting Feature for the Workstation product (for example, when you have the Accounting Feature for the Workstation installed and are not sharing the accounting data with any other workstation), the master database is a Microsoft Jet database. Note: A Microsoft Jet database has an extension of .mdb and is accessible using Microsoft Access.
Figure 5-1 Overview of the AWO program files and demo directory structure
In Figure 5-1, all the code and the demo are shown in the directory folder named AWO-DEMO. The default installation directory for the Accounting Feature for the Workstation is C:\Program Files\Tivoli\AWO20. The executable code for the accounting console resides in the main directory (AWO-DEMO in Figure 5-1). The sub-directories under this main directory represent the different database instances that the Accounting Feature for the Workstation can work with. In Figure 5-1, the Demo database is the only database present, because the directory structure is for a newly installed Accounting Feature for the Workstation.
Each database directory contains the following: The local.mdb file as the local database. Either the link to the master database or the master database itself. The demo master database is called Master.mdb. The Data folder, which holds the data transferred into and out of AWO. The System folder, which contains the system definitions and resources.
Figure 5-3 Sample data files shipped by the AWO demo database
The information contained in the import directory is used to load data into the Accounting Feature for the Workstation database. Detailed information on performing an import is discussed in Chapter 7, Moving data from host to workstation on page 141. The contents of the import directory are:
Data files, which can generally be:
A text file, either fixed or delimited
A CSV (comma separated value) file
A Microsoft Excel file (.xls)
The file drlsblda.txt, a sample file that holds TDS/390 sample data. It is an extract from the table DRL.BILLED_DATA.
The file schema.ini, a configuration file that describes the format of the data files in this directory. This file is important for creating import definition files and for the import function of the AWO.
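As a hedged illustration of reading one of these delimited import files: the column names and rows below are invented for illustration and are not the real layout of drlsblda.txt (schema.ini is what documents each file's actual format):

```python
import csv
import io

# Hypothetical delimited import file content; not the real DRL.BILLED_DATA
# extract layout, which schema.ini describes for the actual files.
sample = io.StringIO(
    "ACCOUNT,RESOURCE,UNITS\n"
    "DEPT01,CPU_SECONDS,120.5\n"
    "DEPT02,CPU_SECONDS,80.0\n"
)

rows = list(csv.DictReader(sample))
total_units = sum(float(r["UNITS"]) for r in rows)
print(len(rows), total_units)  # 2 200.5
```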
Batch Scripts
Charts
Export Definitions
Import Definitions
Queries
Reports
SQL
For a newly installed Accounting Feature for the Workstation and Demo database, the Charts, Queries, and Reports folders are empty. The folders containing data are discussed in the following sections.
Figure 5-4 Sample batch script file shipped with AWO demo database
The batch script file contains the Accounting Feature for the Workstation commands, such as Apply Lookup, Apply Rates, Summarize, plus others. All the commands can be performed using the Accounting Feature for the Workstation desktop, but the batch script allows you to automate certain operation sequences, making it easier and quicker to perform repetitive accounting operations on the workstation.
Figure 5-5 Sample export definition shipped with AWO demo database
Figure 5-7 Sample SQL code files shipped with AWO demo database
Figure 5-8 Start panel of Accounting Feature for the Workstation 2.0
When you initially open the Accounting Feature for the Workstation, there is no database assigned. Therefore, the Open Local Database panel is displayed, as shown in Figure 5-9 on page 88. You use the Open Local Database panel to specify the database you want to use. Because this is the first execution of the Accounting Feature for the Workstation following the installation, you are asked to specify the name of the local database you wish to use. On subsequent executions of the Accounting Feature for the Workstation, you will not be asked to specify a local database to work with; instead, the Accounting Feature for the Workstation will open the last database you were using.
From the Open Local Database dialog box shown in Figure 5-9, select the More... line and click the Open button. Then, using the browse panel shown in Figure 5-10, locate the Demo local database (Local.mdb) in the AWO20\DEMO directory.
Select the Local.mdb in the Demo directory and click the Open button. The accounting console panel shown in Figure 5-11 is displayed with the name of the local database currently open displayed in the title of the panel. The accounting console panel is blank, but the function icons are active in the panel action bar.
Figure 5-11 Accounting workstation option main panel with active icons
Using the mouse, move slowly over the active icons. As you pause over each icon, a text description of the icon's function is displayed below it. Click the second icon from the left (the Data Explorer icon), as shown in Figure 5-11, to open the Accounting Feature for the Workstation explorer, as shown in Figure 5-12 on page 90.
The data explorer provides a single point of access to all data, tables, and functions of Accounting Feature for the Workstation. The explorer navigation is documented in Chapter 4 of the Tivoli Decision Support for OS/390 Release 5.1 Accounting Feature for the Workstation, SH19-4516. When you select a table name from the explorer tree in the left frame of the explorer panel, for example, ledger_active, the contents of the table are shown in tabular form in the right frame of the explorer panel. Figure 5-13 on page 91 shows the contents of the ledger_active table of the demo sample.
Additionally, the explorer panel shown in Figure 5-13 displays several values, highlighted at the bottom of the panel. The highlighted values are:
Number of records: Shows the selected record number. When no record is selected, this value shows the number of records in the currently selected table; in the example shown in Figure 5-13, this is the active ledger table.
Access: Shows whether you have read and/or write access to the displayed ledger or table.
Total: The total associated with the currently displayed ledger or table.
Current billing period: The billing period that this ledger or table represents. In the example shown in Figure 5-13, which uses the sample demo data and tables provided with the Accounting Feature for the Workstation, this value is 199901, the first period of year 1999, representing the latest billing period stored in the sample database. When you have established your accounting system and are working
with current accounting and chargeback data in your environment, this value should be the current month.
Current date: The date from the workstation.
Current time: The time from the workstation.
Because the Definitions folder, shown in Figure 5-15, and the Systems directory are aligned, the contents of the Definitions folder in the data explorer closely match the contents of the Systems directory. See Section 5.3.3, The System directory on page 82 for a discussion of the System directory.
The exploded view of the local database for the sample Demo database is shown in Figure 5-16 on page 93. It contains the temporary import tables and query results.
Note: Additionally, you can expand the All Tables folder to access any table directly. The master database folder is expanded in Figure 5-17.
There are several folders in the master database folder. The ledger (Revenue) folder provides access to revenue ledger tables. There are three types of ledger:
- Active ledgers: Most of the work you perform will be done in the active ledger.
- Historical ledgers: Data from the active ledger is copied to the historical ledger for future reference.
- Interim ledgers: Temporarily store the data that you are still processing in the ledger and have not finalized.
These ledger tables represent revenue for the IT accounting area, in the form of chargeback to users, other departments, and organizations for IT services. Expense tables provide expense information for matching with the ledgers for cost analysis and profit/loss analysis. Expense tables are also stored in Active, Historical, and Interim tables corresponding to the ledgers. Budget tables provide budget forecasting. Values in the budget tables are loaded using the Accounting Feature for the Workstation for planning and reporting purposes. The Maintain folder contains tables that need to be maintained manually or loaded using the import function. The tables in the Maintain folder provide data manipulation capabilities for the ledgers. The Accounting Feature for the Workstation functions that utilize these tables include:
- Allocations
- Direct Charges
- Factors
- Lookups
- Rates
These functions are discussed in Section 6.1, Accounting console resources on page 96. The Results Tables folder is where the output of various reports is placed. Many of the reports are generated automatically by functions of the Accounting Feature for the Workstation.
Chapter 6.
When we work with a new Accounting Feature for the Workstation database, the items in this menu are empty. Therefore, we need to set up the necessary resources for Accounting Feature for the Workstation operation. The following sections discuss the functions and usage of these Accounting Feature for the Workstation resources. The following resources will be discussed:
- Section 6.1.1, Custom fields on page 97
- Section 6.1.2, Calendar on page 103
- Section 6.1.3, Service Category table on page 104
- Section 6.1.4, Allocations on page 104
- Section 6.1.5, Direct charges on page 110
- Section 6.1.6, Factors on page 116
- Section 6.1.7, Lookups on page 120
- Section 6.1.8, Rates on page 129
- Section 6.1.9, Budget table on page 135
Important: For information regarding the use of custom field definitions in conjunction with the import function, see Section 7.3, Importing data into the workstation on page 146. To define a custom field, you use the Accounting Feature for the Workstation custom field process. From the Accounting Feature for the Workstation main panel menu bar, select Maintain -> Custom Field, as shown in Figure 6-2. Note: Remember that some custom fields are already defined in the sample Demo database.
This opens the Custom Field definition panel, as shown in Figure 6-3 on page 99. When using the demo database, you see the definition for use in conjunction with the Tivoli Decision Support for OS/390 import definition, which is stored in drlsblda.imd. The custom fields and the predefined static fields will be implemented in the ledger tables, such as the ledger_active table.
In the Custom Fields panel, shown in Figure 6-3, you can define new fields (Add), delete fields (Remove), change field definitions (Edit), or order the fields in another sequence (Move up or Move down).
For example, in Figure 6-4, the APPL field has been selected to be moved down in the order of fields. Figure 6-5 on page 100 shows the results of the reordering of the fields with GLACCT being before APPL.
To add a field, select the field you want the new field to be added after and then press the Add button. The Create Custom Field panel shown in Figure 6-6 is displayed. In the Create Custom Field panel, enter the name of the new field to be added. In the example shown in Figure 6-6, the field KOSTID is being added. After entering the new field name, press the OK button.
You will next be asked to provide the characteristics (type, size, affected tables, indexed and description) for the new field in a dialog box similar to the Edit field dialog box shown in Figure 6-7 on page 101.
To edit a field, select it in the Custom Field panel and click the Edit button. The Edit Field dialog box is displayed, as shown in Figure 6-7. In the areas of the Edit Field dialog box specify the changes you want to make to the field, including type, size, tables, indexed, and description.
Figure 6-8 shows the field types available for custom fields in the drop-down list. You can select from the list of field types for the field you are editing.
When you have completed editing the field information, select the OK button on the Edit Field dialog box shown in Figure 6-9. If you decide not to save the changes you made, select the Cancel button to exit the edit function without saving. The Accounting Feature for the Workstation displays true or false values for the table and index settings selected for each field, as shown in Figure 6-10 on page 103. A true value means the field will be included in the Revenue, Expense, or Budget table, or will be indexed; a false value means it will not.
Figure 6-10 Custom field definitions for index and ledger tables
The example shown in Figure 6-10 has the GLACCT field set to be in the Revenue, Expense, and Budget tables but not in the index, based on the Edit Field dialog box shown in Figure 6-7 on page 101. Important: At first view, the Include in table categories selection looks harmless, but these selections influence the creation of the Expense and Budget tables. Each custom field with a selection of true will appear in the corresponding table. Additionally, these settings affect the grouping and calculations performed using these fields and the corresponding budget, expense, and revenue tables.
6.1.2 Calendar
The calendar has minimal use in the Accounting Feature for the Workstation; it relates to the setting of the current billing period, which is based on the value set in the active ledger or historical ledger. Note: The supported billing period in the Accounting Feature for the Workstation is one month equals one period, with 12 billing periods in the year. The only time the calendar values have any impact on the establishment of dates in the ledger is when no billing period is set in the active ledger and no historical ledger exists. In normal processing, the Accounting Feature for the Workstation establishes the date for a new ledger you create by adding one period to the billing period of the latest historical ledger.
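As a minimal illustration of this period arithmetic, the following Python sketch derives the next billing period from a YYYYMM value such as the 199901 shown in the demo data. The function name is invented for the example; it is not part of the product.

```python
# Sketch only: advance a YYYYMM billing period by one period (one month),
# following the 12-periods-per-year rule stated in the note above.
def next_billing_period(period: str) -> str:
    """Return the billing period one month after the given YYYYMM period."""
    year, month = int(period[:4]), int(period[4:])
    if month == 12:              # last period of the year rolls into January
        year, month = year + 1, 1
    else:
        month += 1
    return f"{year:04d}{month:02d}"

print(next_billing_period("199901"))  # -> 199902
print(next_billing_period("199912"))  # -> 200001
```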
6.1.4 Allocations
The allocation table provides a mechanism for manually allocating a percentage of charges to different departments or customers. An allocation table looks similar to the alloc_abc table shown in Figure 6-11.
When you apply an allocation table to the ledger_active table, processing the QUANTITY and CHARGES columns of the ledger_active table, the following sequence takes place: 1. A new temporary table is created as a join of the ledger_active and alloc_abc tables. This is a left outer join, meaning it includes all the rows from ledger_active and puts any entries from alloc_abc that match the condition ledger_active.ACTIVITY equal to alloc_abc.FROM-ACTIVITY into the additional columns. The following SQL shows how this processing is performed:
SELECT * INTO _tmp_20010829_164232 FROM ledger_active LEFT JOIN alloc_abc ON ledger_active.ACTIVITY = alloc_abc.FROM-ACTIVITY ;
Note: You can view the SQL behind the apply processing by selecting the Save SQL button in the Apply panel after you have filled in its fields, for example, the Apply Allocation panel shown in Figure 6-16 on page 107 or the Apply Direct Charges panel shown in Figure 6-28 on page 114. When you select the Save SQL button, you are asked to specify the text file and directory to save the SQL into. You can view the saved SQL using your favorite text editor. 2. Update the APPL and ACTIVITY fields of the temporary table to match the TO-APPL and TO-ACTIVITY columns if the FROM-ACTIVITY column is not null (meaning that there is a matching entry in the alloc_abc table). The following SQL shows how this processing is performed:
UPDATE _tmp_20010829_164232 SET APPL = TO-APPL, ACTIVITY = TO-ACTIVITY WHERE FROM-ACTIVITY Is Not Null ;
3. Change the QUANTITY and CHARGES columns according to the percentage from the allocation table. The following SQL shows how this processing is performed:
UPDATE _tmp_20010829_164232 SET QUANTITY = QUANTITY * (PERCENT/100), CHARGES = CHARGES * (PERCENT/100) WHERE PERCENT Is Not NULL AND PERCENT > 0;
4. Reapply the rate if necessary (if the rate is not empty). The following SQL shows how this processing is performed:
UPDATE _tmp_20010829_164232 SET CHARGES = QUANTITY * RATE WHERE RATE Is Not Null;
5. Replace all the data in ledger_active with the data from the temporary table and delete the temporary table. Since the allocation table splits up charges, the sum of the split percentages is always checked to ensure that it equals 100 percent. This ensures that no charges fall through the cracks. Allocation tables are used as input to the Apply Allocation Table function to distribute numerical data across records based on a percentage, for example, to distribute overhead costs. The Apply Allocation Table function takes each record of the input table and matches it to a record in the allocation table based on common fields. If the fields match, new records are placed in the input table and the original values are no longer in the table. To create an allocation table, either
select from the accounting console explorer the Allocation folder in the left frame and right-click on the mouse, or select Maintain -> Allocation Tables from the accounting console main panel. The Select Allocation Table panel opens, as shown in Figure 6-12.
This panel lets you define a new table, edit an existing table, delete an existing table, or verify the percent input values of an existing table. The sum of the percent values must be 100; if not, an error message is displayed, as shown in Figure 6-13.
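The 100-percent verification can be illustrated with a short sketch. The function name and the sample row layout are invented; the real check is performed by the Select Allocation Table panel itself.

```python
# Sketch only: verify that the PERCENT values for each FROM-ACTIVITY key
# in an allocation table sum to 100, as the panel's Verify function does.
from collections import defaultdict

def verify_allocation(rows):
    """rows: list of (from_activity, percent) tuples.
    Returns the keys whose percentages do not sum to 100."""
    totals = defaultdict(float)
    for from_activity, percent in rows:
        totals[from_activity] += percent
    return [key for key, total in totals.items() if abs(total - 100.0) > 1e-9]

# Account11111 is split 50/50 (valid); a second account only reaches 90.
rows = [("Account11111", 50.0), ("Account11111", 50.0), ("Account22222", 90.0)]
print(verify_allocation(rows))  # -> ['Account22222']
```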
In Figure 6-15, the allocation table distributes 50 percent of the generated resource usage data from Account11111 to customer1 and 50 percent to customer2.
To apply the allocation table data to the active ledger table, select the required table, right-click, and select Apply Allocation or, from the accounting console main panel, select Functions -> Apply Allocations. The Apply Allocation panel is displayed, as shown in Figure 6-16.
Select the Input table, the Allocation table and the Allocation field. The input and allocation tables are selected from the drop-down lists. To select the allocation field values, click on the icon on the right side; the Select Fields panel is displayed, as shown in Figure 6-17.
Use the Add, All, and Remove buttons in the middle of the Select Fields dialog box shown in Figure 6-17 to select the field or fields you want processed in the allocation. When the list of fields in the Selected Fields area is as you want it, click the OK button. You are returned to the Apply Allocation panel shown in Figure 6-16 on page 107. You can also save the SQL code (Save SQL button) and choose selection criteria (Criteria button) using the buttons on the Apply Allocation panel before applying the data. Click the Execute button to apply the data.
You will be asked in the pop-up box shown in Figure 6-18 if you want an interim table created. Click Yes to save the current state of the active ledger table into an interim table.
The interim table is created and its name is shown in the dialog box in Figure 6-19. Select the OK button to proceed.
When the apply function ends successfully, you will be shown the successful completion dialog box shown in Figure 6-20. Select the OK button.
The active ledger table shown in Figure 6-21 shows the results of the apply allocation function.
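The five-step apply-allocation sequence described above can be approximated outside the product. The workstation database is a Microsoft Access file (Local.mdb), so the SQL shown earlier is Jet syntax; this sketch uses Python's sqlite3 instead, with underscores in place of the hyphenated column names and invented sample values.

```python
# Sketch only: re-create the apply-allocation steps (left join, redirect
# APPL/ACTIVITY, scale QUANTITY/CHARGES, replace the ledger) with sqlite3.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE ledger_active (APPL TEXT, ACTIVITY TEXT, QUANTITY REAL, CHARGES REAL)")
cur.execute("CREATE TABLE alloc_abc (FROM_ACTIVITY TEXT, TO_APPL TEXT, TO_ACTIVITY TEXT, PERCENT REAL)")
cur.execute("INSERT INTO ledger_active VALUES ('PAYROLL', 'Account11111', 100.0, 200.0)")
cur.executemany("INSERT INTO alloc_abc VALUES (?,?,?,?)",
                [("Account11111", "PAYROLL", "customer1", 50.0),
                 ("Account11111", "PAYROLL", "customer2", 50.0)])

# Step 1: left outer join into a temporary table (every ledger row is kept).
cur.execute("""CREATE TEMP TABLE tmp AS
               SELECT l.*, a.FROM_ACTIVITY, a.TO_APPL, a.TO_ACTIVITY, a.PERCENT
               FROM ledger_active l LEFT JOIN alloc_abc a
               ON l.ACTIVITY = a.FROM_ACTIVITY""")
# Step 2: redirect matched rows to the target application and activity.
cur.execute("UPDATE tmp SET APPL = TO_APPL, ACTIVITY = TO_ACTIVITY WHERE FROM_ACTIVITY IS NOT NULL")
# Step 3: scale quantity and charges by the allocation percentage.
cur.execute("""UPDATE tmp SET QUANTITY = QUANTITY * (PERCENT/100),
               CHARGES = CHARGES * (PERCENT/100)
               WHERE PERCENT IS NOT NULL AND PERCENT > 0""")
# (Step 4, reapplying rates, is skipped: this sample has no RATE column.)
# Step 5: replace the active ledger contents with the allocated rows.
cur.execute("DELETE FROM ledger_active")
cur.execute("INSERT INTO ledger_active SELECT APPL, ACTIVITY, QUANTITY, CHARGES FROM tmp")
print(cur.execute("SELECT * FROM ledger_active ORDER BY ACTIVITY").fetchall())
# -> [('PAYROLL', 'customer1', 50.0, 100.0), ('PAYROLL', 'customer2', 50.0, 100.0)]
```

The 50/50 split mirrors the Account11111 example from Figure 6-15: one ledger row becomes two, each carrying half the quantity and charges.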
6.1.5 Direct charges

The application of direct charges to ledger_active is implemented in the following SQL script:
INSERT INTO ledger_active ( DEPT, SERVICE, QUANTITY, CHARGES ) SELECT DEPT, SERVICE, QUANTITY, CHARGES
FROM direct_adjust WHERE ( direct_adjust.DC_START <= '199901' AND direct_adjust.DC_END >= '199901' ) ;
The value 199901 is the year and month of the current period. All the necessary columns of direct_adjust are inserted directly into ledger_active if the current billing period is between DC_START and DC_END. Direct charge tables are used as input to the Apply Direct Charge function. Direct charge tables contain entries for one-time or recurring charges. To create a direct charge table, select either Maintain -> Direct Charge Tables from the Accounting Feature for the Workstation main panel menu or, from the AWO explorer, select the Direct Charges folder and right-click. The Select Direct Charge Table panel shown in Figure 6-23 opens.
Select either an existing table to Edit or choose New to create a new table. Clicking the New button opens the Create New Direct Charge Table panel, as shown in Figure 6-24.
In the left frame (Fields:) of the Create New Direct Charge Table dialog box shown in Figure 6-24, select the columns needed and move them to the right frame (Required?). Then click the Execute button.
Next, specify the name of the new direct charge table you are creating in the dialog box shown in Figure 6-25. Enter a new table name and click the OK button.
The new table is created and the successfully created message shown in Figure 6-26 on page 112 is displayed. Click the OK button. The Select Direct Charge Table panel is redisplayed, as shown in Figure 6-27. Click the Edit button to enter values into the direct charge table.
Note: Take care with the entered charge and quantity values. These values are inserted for each period in the range defined by the DC_START and DC_END fields. Applying the direct charge data to the ledger tables is done by the Apply Direct Charge function. Close the Edit and Direct Charge dialog boxes shown in Figure 6-27 if they are open. Then select Functions -> Apply Direct Charge from the accounting console's main panel menu. The Apply Direct Charge panel opens, as shown in Figure 6-28 on page 114.
Select the ledger and direct charge tables from the drop-down lists and click Execute. You can also save the SQL code (Save SQL button) and choose selection criteria (Criteria button) using the buttons on the Apply Direct Charges panel shown in Figure 6-28 before applying the data with the Execute button.
Answer Yes to the create interim question shown in Figure 6-29 to save your current data to an interim table.
Note: Creating an interim table creates an image copy of the current status of a certain table. You can use an interim copy to recover a table to a previous status if needed. For example, if a table gets corrupted during the apply rate or apply cost process, you can recover the table, correct the rate or cost table problem, and reapply using the interim. Click the OK button, as shown in Figure 6-30 on page 114.
Click the OK button on the successful completion message dialog box shown in Figure 6-31.
The active ledger table shown in Figure 6-32 shows the results of the apply direct charge function.
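The period-range test in the apply-direct-charge INSERT can be sketched the same way as the allocation example, again with sqlite3 standing in for the Access database and with invented sample charges.

```python
# Sketch only: rows from direct_adjust are copied into ledger_active when
# the current billing period falls between DC_START and DC_END.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE ledger_active (DEPT TEXT, SERVICE TEXT, QUANTITY REAL, CHARGES REAL)")
cur.execute("""CREATE TABLE direct_adjust
               (DEPT TEXT, SERVICE TEXT, QUANTITY REAL, CHARGES REAL,
                DC_START TEXT, DC_END TEXT)""")
cur.executemany("INSERT INTO direct_adjust VALUES (?,?,?,?,?,?)",
                [("SALES", "SETUP", 1.0, 500.0, "199901", "199901"),   # one-time charge
                 ("SALES", "LEASE", 1.0, 250.0, "199801", "199912"),   # recurring charge
                 ("HR",    "MOVE",  1.0, 900.0, "199905", "199905")])  # outside the period

period = "199901"  # current billing period
cur.execute("""INSERT INTO ledger_active (DEPT, SERVICE, QUANTITY, CHARGES)
               SELECT DEPT, SERVICE, QUANTITY, CHARGES FROM direct_adjust
               WHERE DC_START <= ? AND DC_END >= ?""", (period, period))
print(cur.execute("SELECT SERVICE FROM ledger_active ORDER BY SERVICE").fetchall())
# -> [('LEASE',), ('SETUP',)]
```

Only the one-time SETUP charge and the recurring LEASE charge fall inside 199901; the MOVE charge waits for its own period.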
6.1.6 Factors
Factors provide a means to apply a factor to charges or quantity based on search criteria. Factors are usually used for CPU normalization or for applying taxes to charges. An example of a factor table is the factor_cpu table shown in Figure 6-33.
When you apply factors, you can choose either to update existing data or to append data.
Update existing data: This option modifies the existing values in the table directly. In this situation, it is important to select the update criteria carefully. The criteria function enables selecting a subset of records from the ledger, such as all CPU-related services (criteria Service=CPU).
Append data: This option adds a row for each record that matches the criteria. The row shows the difference that resulted when the factor was applied. With this function, it is possible to individually identify the rows where the factors were applied. The new rows are identified by the TYPE field: appended rows have a value of F, and the DATASRC field contains the factor table name.
Note: The factor function restricts the name of the factor table to fewer than eight characters, as this is the capacity of the DATASRC field.
An example of updating existing data with the factor function is applying factor_cpu to ledger_active for the QUANTITY and CHARGES columns. When the apply function is used, rows in ledger_active with a SYSTEM value of PROD or SYSA are found and the factor from factor_cpu is applied to those rows. The following steps outline the process: 1. Create a temporary table with a left outer join, similar to the allocation table processing, with the SYSTEM columns as the joining criteria:
SELECT ledger_active.*, factor_cpu.FACTOR INTO _tmp_20010829_164709 FROM ledger_active LEFT JOIN factor_cpu
ON ledger_active.SYSTEM=factor_cpu.SYSTEM;
2. Apply the factor to the target numeric columns of the temporary table for the rows where a factor value was found. 3. Replace the contents of the ledger_active table with the contents of the temporary table. You can define one or more factor tables. Each factor table is used as input to the Apply Factor function to look up and apply a factor value to one or more numeric fields in the target table. The structure of a factor table is defined when you create it. To create a factor table, select either Maintain -> Factor Tables from the Accounting Feature for the Workstation main panel or, from the Accounting Feature for the Workstation data explorer panel, right-click the Factors folder. The Create New Factor Table panel shown in Figure 6-34 opens.
Select the input table and the factor field from the drop-down lists of the Create New Factor Table dialog box shown in Figure 6-34. When you have selected the table and field, click the Execute button.
Then enter a name for the new factor table in the dialog box shown in Figure 6-35 on page 117. When you have entered the new factor table name, click the OK button.
The successfully created factor table dialog box shown in Figure 6-36 is displayed. Click the OK button. Now open the Accounting Feature for the Workstation data explorer and select the Factors folder; you will see the newly created table. Select the factor table to open it and insert the field values and factors. Figure 6-37 shows an example where four values for SMF_ID have various factor values specified.
To apply the factor data values to the ledger tables, either click the right mouse button on the factor table in the data explorer, as shown in Figure 6-37, or select Functions -> Apply Factors from the Accounting Feature for the Workstation main panel. The Apply Factor panel (Figure 6-38) is displayed.
Select the input and factor tables from the drop-down lists of the Apply Factor panel shown in Figure 6-38. Then click the icon for the target fields to choose the field to update. In the example shown in Figure 6-38, the QUANTITY target field has been specified. Select one of the Results options, either Update existing data or Append data. Selecting the Append data option activates the Service drop-down list. In the example shown in Figure 6-38, CPU has been selected, because only this value makes sense for this factor table. When all values are correctly set, click the Execute button. You can also save the SQL code (Save SQL button) and choose selection criteria (Criteria button) using the buttons on the Apply Factor panel shown in Figure 6-38 before applying the data with the Execute button.
Answer Yes to the create interim message shown in Figure 6-39 to save the current data to an interim table.
Click the OK button on the interim table created message dialog box shown in Figure 6-40.
When the apply factor has completed successfully and the message shown in Figure 6-41 is displayed, click the OK button. You have now applied the factor table to your ledger table.
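The join-and-multiply steps behind the apply-factor function (update mode) can be sketched with sqlite3 standing in for the Access database; the system names and the 1.5 factor are invented for the example.

```python
# Sketch only: join the ledger to the factor table on SYSTEM, multiply the
# target columns by FACTOR where a match exists, and copy the result back.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE ledger_active (SYSTEM TEXT, SERVICE TEXT, QUANTITY REAL, CHARGES REAL)")
cur.execute("CREATE TABLE factor_cpu (SYSTEM TEXT, FACTOR REAL)")
cur.executemany("INSERT INTO ledger_active VALUES (?,?,?,?)",
                [("PROD", "CPU", 100.0, 10.0), ("TEST", "CPU", 100.0, 10.0)])
cur.execute("INSERT INTO factor_cpu VALUES ('PROD', 1.5)")  # normalize PROD CPU seconds

# Step 1: left outer join, keeping every ledger row.
cur.execute("""CREATE TEMP TABLE tmp AS
               SELECT l.*, f.FACTOR FROM ledger_active l
               LEFT JOIN factor_cpu f ON l.SYSTEM = f.SYSTEM""")
# Step 2 (update mode): apply the factor only where one was found.
cur.execute("""UPDATE tmp SET QUANTITY = QUANTITY * FACTOR,
               CHARGES = CHARGES * FACTOR WHERE FACTOR IS NOT NULL""")
# Step 3: replace the ledger contents with the factored rows.
cur.execute("DELETE FROM ledger_active")
cur.execute("INSERT INTO ledger_active SELECT SYSTEM, SERVICE, QUANTITY, CHARGES FROM tmp")
print(cur.execute("SELECT SYSTEM, QUANTITY FROM ledger_active ORDER BY SYSTEM").fetchall())
# -> [('PROD', 150.0), ('TEST', 100.0)]
```

The PROD row is scaled by 1.5 while the unmatched TEST row passes through unchanged, which is exactly what the left outer join guarantees.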
6.1.7 Lookups
The lookup table provides a way of filling in data that is not available from the imported information in your ledger tables.
Lookup tables are used as input for the Apply Lookup function to map values in one input field to one or more other fields; for example, to convert an account ID to a customer name or to group a user under a department name. You can use static fields, custom fields, or both. You can define multiple lookup tables if needed; the use of lookup tables is flexible enough to allow you to define as many as needed to meet your accounting and chargeback requirements. You should perform lookup table operations as early as possible after importing data from your host system into the Accounting Feature for the Workstation database, so that the search criteria used by other processes can take advantage of the information provided by the lookup function. An example of a lookup table (lookup_glacct) can be seen in Figure 6-42.
The lookup_glacct lookup table shown in Figure 6-42 will update the GLACCT and SERVCAT columns in the ledger_active table, based on the value in the SERVICE column of the ledger_active table. The process is implemented using the following SQL command:
UPDATE ledger_active, lookup_glacct SET ledger_active.GLACCT = lookup_glacct.GLACCT, ledger_active.SERVCAT = lookup_glacct.SERVCAT WHERE (lookup_glacct.SERVICE = ledger_active.SERVICE)
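The multi-table UPDATE above is Access (Jet) syntax. As an illustration, the same lookup can be expressed in sqlite3 with correlated subqueries; the sample account and category values are invented.

```python
# Sketch only: fill GLACCT and SERVCAT in the ledger from a lookup table
# keyed on SERVICE, mimicking the Jet-style multi-table UPDATE above.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE ledger_active (SERVICE TEXT, GLACCT TEXT, SERVCAT TEXT)")
cur.execute("CREATE TABLE lookup_glacct (SERVICE TEXT, GLACCT TEXT, SERVCAT TEXT)")
cur.execute("INSERT INTO ledger_active VALUES ('CPU', NULL, NULL)")
cur.execute("INSERT INTO lookup_glacct VALUES ('CPU', '4711', 'Processor')")

cur.execute("""UPDATE ledger_active SET
                 GLACCT  = (SELECT GLACCT  FROM lookup_glacct lg
                            WHERE lg.SERVICE = ledger_active.SERVICE),
                 SERVCAT = (SELECT SERVCAT FROM lookup_glacct lg
                            WHERE lg.SERVICE = ledger_active.SERVICE)
               WHERE SERVICE IN (SELECT SERVICE FROM lookup_glacct)""")
print(cur.execute("SELECT * FROM ledger_active").fetchall())
# -> [('CPU', '4711', 'Processor')]
```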
Note: It is not possible to create a lookup table where multiple fields are used to update one or more fields in a table, for example, using the values in AC1 and AC2 to determine the lookup value to be placed into GLACCT. To create a lookup table, select Maintain -> Lookup Tables from the accounting console main menu, as shown in Figure 6-43.
Next, using the Select Lookup Table panel, shown in Figure 6-44 on page 123, click the New button.
The Create New Lookup Table panel shown in Figure 6-45 opens.
Select your input table from the drop-down list of the Create New Lookup Table dialog box shown in Figure 6-46 on page 124.
Select the input field from the input field drop-down list of the Create New Lookup Table dialog box shown in Figure 6-47. This field data will be used during the lookup processing. For example, if the AC1 field has an account ID, you can map a department name into the GLACCT field using a lookup function.
For this example, for the output field select the GLACCT field and click the > button to move the GLACCT field to the right frame (Required?).
Additionally, you can specify whether the field is required or not by checking the box to the left of the field name in the Required frame, as shown in Figure 6-49. Normally, if a field is mapped, it should be required.
Optionally, you can save the SQL definition for this lookup table by using the Save SQL button. If you select the Save SQL button, an Open SQL program panel opens, as shown in Figure 6-50. Enter the name of the file that the SQL code will be saved in. After entering the file name, click the Save button.
A message panel, shown in Figure 6-51, is displayed, telling you the file has been saved.
Click the OK button of the message dialog box shown in Figure 6-51 to return to the Create New Lookup Table panel shown in Figure 6-49 on page 125. To complete the creation of the lookup table, click the Execute button of the Create New Lookup Table panel.
The Create New Lookup Table panel shown in Figure 6-52 opens. Enter a name for the new lookup table and click the OK button.
When the lookup table function has successfully completed, you will receive the information dialog box shown in Figure 6-53 stating that the table was created successfully. Click the OK button. Select the newly created lookup table name, as shown in Figure 6-54, and click the Edit button.
An empty row of the new lookup table is displayed, as shown in Figure 6-55 on page 128.
To populate the lookup table, close all the panels of the accounting console and open the Accounting Feature for the Workstation data explorer. Expand the Lookups folder, select the lookup table, as shown in Figure 6-56 on page 129, and enter the values into the lookup table rows. The example shown in Figure 6-56 on page 129 shows the partial entry of data into the table.
After all the data is inserted into the lookup table, you can use the apply lookup function using the lookup table. To start the apply lookup function, select Functions -> Apply Lookup from the main panel of the accounting console.
6.1.8 Rates
Rate tables are important for the operations performed by the Accounting Feature for the Workstation, as the rate table is used to map services to rates and service categories. In the Apply Rate Table function, rate tables are used to assign rates to services and calculate charges for a ledger. The apply rate function calculates and fills the CHARGES column of the ledger_active table by multiplying the contents of the QUANTITY column by the corresponding value in the RATE column of the rates_active table. An example of a rate table (rates_active) is shown in Figure 6-57 on page 130.
Important: This rate calculation should not be applied to those columns where rate information already exists. For example, data imported from the Tivoli Decision Support for OS/390 DRL.BILLED_DATA table using the supplied import definition drlsblda.imd will have rates applied during the import using the rates_active table. This example will be discussed more later.
The rates_active table updates the ledger_active table based on the SERVICE column of both tables. The apply rates function updates the SERVCAT, RATE, and CHARGES columns in the ledger_active table. It uses the following SQL command:
UPDATE ledger_active, rates_active SET ledger_active.SERVCAT = rates_active.SERVCAT, ledger_active.RATE = rates_active.RATE, ledger_active.CHARGES = ledger_active.QUANTITY * rates_active.RATE WHERE (ledger_active.SERVICE = rates_active.SERVICE) AND (rates_active.APPLY <> 0)
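As with the lookup example, this Jet-style UPDATE can be approximated in sqlite3 with correlated subqueries; the service name, quantity, and 0.5 rate are invented for the sketch.

```python
# Sketch only: assign SERVCAT and RATE from the rate table and compute
# CHARGES = QUANTITY * RATE for services whose APPLY flag is nonzero.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("""CREATE TABLE ledger_active
               (SERVICE TEXT, QUANTITY REAL, SERVCAT TEXT, RATE REAL, CHARGES REAL)""")
cur.execute("CREATE TABLE rates_active (SERVICE TEXT, SERVCAT TEXT, RATE REAL, APPLY INTEGER)")
cur.execute("INSERT INTO ledger_active VALUES ('CPU', 250.0, NULL, NULL, NULL)")
cur.execute("INSERT INTO rates_active VALUES ('CPU', 'Processor', 0.5, 1)")

cur.execute("""UPDATE ledger_active SET
                 SERVCAT = (SELECT SERVCAT FROM rates_active r
                            WHERE r.SERVICE = ledger_active.SERVICE),
                 RATE    = (SELECT RATE FROM rates_active r
                            WHERE r.SERVICE = ledger_active.SERVICE),
                 CHARGES = QUANTITY * (SELECT RATE FROM rates_active r
                                       WHERE r.SERVICE = ledger_active.SERVICE)
               WHERE SERVICE IN (SELECT SERVICE FROM rates_active WHERE APPLY <> 0)""")
print(cur.execute("SELECT CHARGES FROM ledger_active").fetchone())
# -> (125.0,)
```

A quantity of 250 units at a rate of 0.5 yields charges of 125.0, and the APPLY flag lets individual rate entries be switched off without deleting them.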
You can create multiple rate tables. The active rate table is called rates_active. All other rate tables are user defined and can be used in place of the active rate table in functions for setting future rates or doing rate simulations. When using an import definition like drlsblda.imd for importing OS/390 data, you must have the rates_active table filled with data, as shown in the example in Figure 6-58 on page 131.
The information in the rates_active table, shown in Figure 6-58, is used during the import of Tivoli Decision Support for OS/390 data. It is used to map the resource usage fields, as shown in Figure 6-59 on page 132, from the import definition (the supplied definition is drlsblda.imd, but the import definition used in this example is billeddat.imd), matching the input in the SVMAP field of the rates_active table to the SERVICE field. The input data from the SERVICE field of the rates_active table is mapped to the import table (import_billeddat in this example).
Figure 6-60 on page 133 shows the relationship between the import, rate, and service category tables.
Some of the rules that apply to the use of the rate table are:
- Each rate table entry defines a service, as described in the SERVICE field.
- The service field (SERVICE) name can contain up to 10 letters or numbers.
- A service rate must be associated with each service.
- The service must also be associated with a SERVCAT service category from the Service Category table and, optionally, with a general ledger account number (SC_GL).
- Every service in the rate table (rates_active) must have a unique service name (SERVICE).
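As an illustration, these rules could be checked programmatically before an import. The function name and the dictionary layout are invented for the sketch; the real validation is performed by the import itself.

```python
# Sketch only: flag rate-table entries that break the rules listed above
# (service name of up to 10 letters or numbers, unique, with a rate and
# a service category for every service).
def validate_rate_table(entries):
    """entries: list of dicts with SERVICE, RATE, SERVCAT keys.
    Returns a list of human-readable error strings."""
    errors, seen = [], set()
    for entry in entries:
        svc = entry.get("SERVICE", "")
        if not svc.isalnum() or len(svc) > 10:
            errors.append(f"bad service name: {svc!r}")
        if svc in seen:
            errors.append(f"duplicate service: {svc!r}")
        seen.add(svc)
        if entry.get("RATE") is None:
            errors.append(f"missing rate for {svc!r}")
        if not entry.get("SERVCAT"):
            errors.append(f"missing service category for {svc!r}")
    return errors

good = [{"SERVICE": "CPU", "RATE": 0.04, "SERVCAT": "Processor"}]
bad = good + [{"SERVICE": "CPU", "RATE": None, "SERVCAT": ""}]
print(validate_rate_table(good))       # -> []
print(len(validate_rate_table(bad)))   # -> 3 (duplicate, no rate, no category)
```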
Important: If one of the rules listed above is not followed, the import may fail. Normally, an import failure is indicated by no data being imported. During the import operation, the Apply Rate Table function matches the service value in the selected rate table to the service value in the input table (normally a ledger table) and populates the corresponding rate field in the input table. The CHARGES field in the ledger_active table is then calculated by multiplying the quantity and rate fields. Rate tables are used with the Apply Rate Table function to map service values to rates and calculate charges. There are two ways to initiate the apply rate function, as shown in Figure 6-61 and Figure 6-62 on page 135.
In Figure 6-61, the apply rates function is activated by right-clicking on the rates_active entry of the Accounting Feature for the Workstation data explorer and then clicking on the Apply Rates entry in the pop-up dialog box.
Figure 6-62 Apply rates - Accounting Feature for the Workstation drop-down
In Figure 6-62, the apply rates function is activated by selecting Functions-->Apply Rates from the Accounting Feature for the Workstation drop-down menu.
The Select Budget Table panel shown in Figure 6-64 opens. Click the New button.
In the Create New Budget Table dialog box shown in Figure 6-65, select either the Create new empty budget or Copy from existing budget table option and then click the Execute button. In this example, the Create new empty budget option is selected.
In the Create New Budget Table dialog box shown in Figure 6-66, enter the name for your new budget table. In this example, the table name for 2001 is used. After specifying the name of the new budget table, click the OK button.
A message indicating that the budget table was successfully created is presented, as shown in Figure 6-67. Click the OK button.
The Select Budget Table dialog box shown in Figure 6-68 is redisplayed. Select the newly created table and click the Edit button.
Deleting or creating an additional table is also possible. The created budget table is the result of the definitions made in the custom fields; see Figure 6-69.
The monthly.mbs script shown in Figure 6-70 performs the following steps (script line numbers in parentheses):
1. Verifies itself (line 1)
2. Creates interim revenue and expense tables (lines 2-3)
3. Runs deletes.sql in the Master database to clean up the active ledgers (line 4)
4. Imports the additional billing sources, such as ledger and labor data (lines 5-6)
5. Applies allocation for non-divisible costs (lines 7-8)
6. Performs a lookup on the GLACCT table for each service name (line 9)
7. Applies the rates; rates do not need to be applied to the TDS/390 data (line 10)
8. Imports the mainframe TDS/390 data (line 11)
9. Updates the values in the DATASRC field to TDS/390, replacing the value of PR in this example (line 12)
10. Imports budget and expense tables from an external source (lines 13-14)
11. Adds service categories from GLACCT to the expense table (line 15)
12. Applies factors for GST and CPU normalization (lines 16-17)
13. Adds some overhead and fixed charges (line 18)
14. Reorders the content of the active ledger (line 19)
15. Creates control information manually (lines 20-23)
4. Using the Accounting Feature for the Workstation, you are now ready to generate the monthly reports and export the data into invoices.
Chapter 7.
CHAR(DECIMAL(PRINT_LINES_V, 16, 4)),
CHAR(DECIMAL(PRINT_LINES_P, 16, 4)),
CHAR(DECIMAL(PRINT_LINES_A, 16, 4)),
CHAR(DECIMAL(TAPE_MOUNTS_V, 16, 4)),
CHAR(DECIMAL(TAPE_MOUNTS_P, 16, 4)),
CHAR(DECIMAL(TAPE_MOUNTS_A, 16, 4)),
CHAR(DECIMAL(PAGES_PRINTED_V, 16, 4)),
CHAR(DECIMAL(PAGES_PRINTED_P, 16, 4)),
CHAR(DECIMAL(PAGES_PRINTED_A, 16, 4)),
CHAR(DECIMAL(DASD_MBYTES_V, 16, 4)),
CHAR(DECIMAL(DASD_MBYTES_P, 16, 4)),
CHAR(DECIMAL(DASD_MBYTES_A, 16, 4)),
CHAR(DECIMAL(SESSION_KBYTES_V, 16, 4)),
CHAR(DECIMAL(SESSION_KBYTES_P, 16, 4)),
CHAR(DECIMAL(SESSION_KBYTES_A, 16, 4)),
CHAR(DECIMAL(MISC_A, 16, 4))
FROM DRL.BILLED_DATA
WHERE BP_ID = 'BPyyyymm';
//*
You will need to modify the JCL shown in Example 7-1 to conform to your environment by specifying the correct values for your system for the values noted with the change arrows. The following items need to be changed:
DB2 load library: This needs to be changed to your SDSNLOAD dataset.
Output dataset: This DD statement needs to conform to your dataset naming convention.
DB2 subsystem name: This needs to be matched with the DB2 subsystem that contains the TDS database.
DB2 library name: The dataset name for the DSNTIAUL load module.
Billing Period: The year and month in which the billing process will be performed.
Important: Do not modify the SELECT statements of the DSNTIAUL without also modifying the import definition on the workstation side. If the export and import definitions do not match exactly, the import will fail. See Section 7.3, Importing data into the workstation on page 146 for additional information on making changes to the import definition.
You need to navigate to the flat file dataset to be downloaded using the dataset name as qualifiers in a directory structure. In the JCL example shown in Example 7-1 on page 142, the flat file is named DRL150.LOCAL.OUTPUT.DRLSBLDA. This could be viewed as being in the DRL150\LOCAL\ directory and a file name of OUTPUT.DRLSBLDA.
Use the following FTP commands to navigate to the OS/390 file:
cd ..: Go up the directory structure.
cd new_path: Go to the new directory path.
Example 7-3 shows navigating the FTP session to the DRL150.LOCAL directory from the TI5208A directory, which was the user directory where the FTP session was started. Also shown in Example 7-3 is setting the transfer mode to ASCII to perform EBCDIC-to-ASCII character translation (instead of using the default binary mode).
Example 7-3 FTP host logon - change directory
230 TI5208A is logged on. Working directory is "TI5208A.".
ftp> cd ..
250 "" is the working directory name prefix.
ftp> cd drl150.local
250 "DRL150.LOCAL" is the working directory name prefix.
ftp> ascii
200 Representation type is Ascii NonPrint
ftp>
To start the file transfer, issue the get command, as shown in Example 7-4, with the from data set name OUTPUT.DRLSBLDA on the host and the target file name DRLSBLDA.TXT on the workstation.
Example 7-4 FTP host logon - download flat file
ftp> get output.drlsblda drlsblda.txt
200 Port request OK.
125 Sending data set DRL150.LOCAL.OUTPUT.DRLSBLDA FIXrecfm 570
250 Transfer completed successfully.
ftp: 97240 bytes received in 0.64Seconds 151.70Kbytes/sec.
ftp>
When the data transfer is complete, as indicated by the Transfer completed successfully message, you can now end the FTP session by entering the quit command, as shown in Example 7-5.
Example 7-5 Ending the FTP session
ftp> quit
221 Quit command received. Goodbye.
C:>
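The interactive FTP session above can also be scripted. The following Python sketch automates the same steps with the standard ftplib module; the host name, credentials, and dataset names are placeholders taken from the example, so adjust them for your environment.

```python
# Hedged sketch: automate the manual FTP download shown above.
# Host, user, and dataset names are placeholders from the example session.
from ftplib import FTP

def download_billing_file(host, user, password,
                          hlq="DRL150.LOCAL",
                          member="OUTPUT.DRLSBLDA",
                          local_name="drlsblda.txt"):
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.cwd("..")   # move up to the high-level qualifier level
        ftp.cwd(hlq)    # working directory name prefix DRL150.LOCAL
        # Line-mode (ASCII) retrieval triggers the host's EBCDIC-to-ASCII
        # translation, matching the 'ascii' command in Example 7-3.
        with open(local_name, "w", encoding="ascii") as f:
            ftp.retrlines(f"RETR {member}", lambda line: f.write(line + "\n"))
    return local_name
```

Running this against an OS/390 FTP server performs the equivalent of the cd, ascii, and get commands in Examples 7-3 and 7-4.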
Figure 7-1 Data flow of the import process (figure labels): billing input, populating into the local database, posting into the master database.
The data flow of the import process in Figure 7-1 shows a two-stage process, the first phase being populating and the second being posting.
Populating: Accounting Feature for the Workstation reads the input file and uses the import definition file to populate a result table in the local database. Because the data is not populated directly to the ledger database, we can evaluate the result of the import before actually applying the data in the next stage.
Posting: Accounting Feature for the Workstation posts the data into the ledger table. This posting process externalizes the changes caused by our import process.
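The two-stage flow can be sketched with a pair of SQL statements against a staging table. The following Python example uses sqlite3 to stand in for the local (staging) and master (ledger) databases; the table and column names are simplified placeholders, not the product's schema.

```python
# Minimal sketch of the populate/post two-stage flow described above,
# with sqlite3 standing in for the local and master databases.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE import_result (service TEXT, quantity REAL)")
conn.execute("CREATE TABLE ledger_active (service TEXT, quantity REAL)")

def populate(rows):
    # Stage 1: load the input into a result table that can be inspected.
    conn.executemany("INSERT INTO import_result VALUES (?, ?)", rows)

def post():
    # Stage 2: only after review, move the staged rows into the ledger.
    conn.execute("INSERT INTO ledger_active SELECT * FROM import_result")
    conn.execute("DELETE FROM import_result")

populate([("CPU", 12.5), ("PAGES", 300.0)])
# ...review SELECTs against import_result would happen here...
post()
```

The point of the staging table is exactly what the text describes: a bad import can be discarded before it ever touches the ledger.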
To start the import wizard from the drop-down function, select Functions -> Import from the Accounting Feature for the Workstation drop-down list, as shown in Figure 7-3. Note: When a function panel is open in the Accounting Feature for the Workstation display area, such as the Data Explorer, as shown in Figure 7-2, the resource-specific menu bar is active and the Functions drop-down is not displayed.
You need to select the import definition to be used for this import from those provided in the Import Definitions drop-down list at the top of the Import Wizard panel. To import the downloaded Tivoli Decision Support for OS/390 billing data file, select the drlsblda.imd import definition. The associated import file type, the input file/table to import, and a preview of the input values of the Import Wizard panel are filled in automatically, as shown in Figure 7-5 on page 150. This happens because these values are known from the control files drlsblda.imd and schema.ini. These will be described later in Section 7.4, An in-depth look at importing files on page 170.
Note: The file schema.ini is not shipped with the demo data files. It is dynamically created the first time you open the Import Wizard and select an import definition. Once the schema.ini file is created, it is referenced and used by the Import Wizard to control import operations.
Other functions available to you on the Import Wizard panel are: The New button allows you to create new import definitions that describe the structured data to be imported into Accounting Feature for the Workstation. Section 8.4.4, Import the OS/390 data on page 198 describes how to build a new import definition. The Browse button allows you to choose another file to import data from.
Verify your data structure in the Preview Table area of the Import Wizard panel by moving the lower scroll bar to the right and the right scroll bar to the bottom, checking that the information is displayed correctly. If the import definition is correct, the values displayed will match the data values in the flat input (import) file. You can also change the input file definition using the Browse button. When you modify the input file name, click the Save Def... button to modify the import definition file and schema file. If your data looks good, click on the Next >> button. The upper half of the Import Wizard panel will change to show you the field definitions (record mapping), as shown in Figure 7-6.
Moving the right scroll bar down in the upper half of the Import Wizard panel shown in Figure 7-6 shows you all the column definitions for the drlsblda.imd import definition. These columns must match the select statements, shown in Example 7-8 on page 170, used to generate the flat file.
If necessary, you can change the field characteristics. But after changing any value in the definition, you must save this value by clicking on the Save Def... button. All the values are saved in the drlsblda.imd file in the <high level directory>\AWO20\Demo\System\Import Definitions directory. You can verify this by viewing the Modified column for the date/time stamp for the import definition file in the subdirectory. When you are ready to continue, click on the Next >> button of the Import Wizard panel shown in Figure 7-6. The Import Wizard now verifies the target table. Using the Target Table (Master) list box, as shown in Figure 7-7, you choose the target table for your import data. In this example, the provided selection of ledger_active table will be used.
In the Import Wizard panel shown in Figure 7-7, click the Options button to display the Import Options panel, as shown in Figure 7-8 on page 153. On the General tab, you can decide to import all data or only a subselection of the data. This selection is saved under the option ImportAll= in the [Import Opts] part of the import definition file.
When you select Import Selected, the Criteria frame activates. In the Criteria frame, enter your selection criteria for the import operation. An easier way to enter the selection criteria is to click the And or Or buttons of the Import Options frame shown in Figure 7-8, depending on the selection condition to be used. Clicking on the And or Or button opens the Criteria panel shown in Figure 7-9. The Criteria panel lets you choose the field name, the operator, and the value to be used in the selection statement for the import.
An example of the result of the multiple And and Or button selections to create a selection criteria is shown in Figure 7-10 on page 154.
The definitions created in the import selection frame will be saved under the [Import Opts] part for the option Criteria= in the import definition file. Clicking on the Error/Trace tab of the Import Options panel displays the error and trace frame shown in Figure 7-11. You can activate the error display and/or the trace function and specify the directory where the trace log is to be saved in this frame. These definitions will be saved under the [Import Opts] part for the options ShowErrors=, TraceOn=, and LogPath= in the import definition file.
For the log path you can either type in the path or click on the open folder icon on the right side to open the Browse for Folder panel shown in Figure 7-12.
Select your log path directory using the dialog box shown in Figure 7-12. When you have highlighted the directory where you want the log written, click the OK button. The Advanced tab of the Import Options panel shown in Figure 7-13 on page 156 allows you to select the correct Custom field mapping. For importing Tivoli Decision Support for OS/390 downloaded data, the correct setting is PR Billed data.
There is a significant difference between PR Billed Data and user-defined mapping. The PR Billed Data mapping is specifically for unloaded OS/390 data. The user-defined mapping requires a one-to-one mapping of columns to be specified under the [Field Mapping] section of the import definition file. This definition will be saved under the [Import Opts] part for the option CustMapping= in the import definition file.
The PR Billed Data mapping is provided because the BILLED_DATA table has the characteristic of one data record containing multiple resource usage values. For example, one record can include resource usage data for CPU seconds, printed pages, DASD EXCPs, tape EXCPs, and so on. During the import from this one record, many Accounting Feature for the Workstation records (or rows in tables) have to be created. This is due to the logic of the Accounting Feature for the Workstation program processing, where each row in the ledger tables reflects one resource usage. The different usage values in the imported record will be separated into usage values identified by their SERVICE and SERVCAT values in the ledger tables. One downloaded import record can create up to nine ledger table entries, depending on how many values are found in the input record. Null values for resource usage data will be ignored during import, and no ledger table entry will be created for them. To learn more about the import definition, see Chapter 6, Special Processing for Import Definition (DRLSBLDA), of Tivoli Decision Support for OS/390 Release 5.1 Accounting Feature for the Workstation, SH19-4516.
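The fan-out described above, where one BILLED_DATA record becomes up to nine ledger rows, can be sketched as follows. The segment-to-service names follow the text and Figure 7-31; the mapping function itself is illustrative only, and for simplicity it treats every segment (including MISC, which the product handles with only two values) as having a usage (_V) field.

```python
# Illustrative fan-out: one BILLED_DATA record yields one ledger row per
# non-null resource usage segment, keyed by SERVICE/SERVCAT.
SEGMENT_SERVICES = {
    "CPU_SECONDS": "CPU", "TAPE_EXCPS": "TAPE EXCPS", "DISK_EXCPS": "DISK EXCPS",
    "PRINT_LINES": "LINES", "TAPE_MOUNTS": "MOUNTS", "PAGES_PRINTED": "PAGES",
    "DASD_MBYTES": "DASD MB", "SESSION_KBYTES": "SESSION", "MISC": "MISC",
}

def fan_out(record, servcat="PR"):
    rows = []
    for segment, service in SEGMENT_SERVICES.items():
        usage = record.get(segment + "_V")
        if usage is None:   # null segments create no ledger row
            continue
        rows.append({"SERVCAT": servcat, "SERVICE": service, "QUANTITY": usage})
    return rows
```

A record with all nine segments filled yields nine rows, matching the behavior the text attributes to the PR Billed Data mapping.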
When you have selected PR Billed Data from the drop-down list of the Advanced tab of the Import Wizard panel shown in Figure 7-13, click the OK button. You will be returned to the Import Wizard panel, as shown in Figure 7-7 on page 152. Click on the Import button to import your data to the import_drlsblda table in the local database. Important: Changing the Import Options dialog box without saving with the Save Def... button may result in incorrect import processing.
When the import is completed, the Import Wizard dialog box, as shown in Figure 7-14 on page 158, is displayed, giving the statistics regarding the import operation just completed.
The Import Wizard panel, shown in Figure 7-14, shows the effect of the multiplied number of records. The number of input records is 170, while the number of imported (created) records is 393. The imported data is shown in Figure 7-15 on page 159. Look especially at the first 15 records with an SMF_ID of SC47. Those are the records created from the billeddat.txt file shown in Example 7-6. Notice that a record is created in the import table (import_billeddat) for each resource usage value in the input data that is not 0 and not null.
As a demonstration of the data flow during the import operation, we modified the input dataset billeddat.txt shown in Example 7-6 on page 157 and edited the first record. For the SESSION_KBYTES_V field, we changed the value 000000000000.0000 to 000000099999.0000, as shown in Example 7-7.
Example 7-7 Modified billeddat.txt SC47200103BPSC47AUSTINCUSTMACCOUNT11111ACCOUNT22222ACCOUNT33333ACCOUNT44444ACCO UNT555552001-03-03PRIMETIMNTSOSUBSY 000000000002.1400 000000000000.0500 000000000001.0700 000000011111.0000 000000000000.0010 000000000023.4520 000000001201.0000 000000000000.0000 000000000252.0000 000000346365.0000 000000000003.0000 000000002134.1000 000000000011.0000 000000000001.5000 000000000016.5000 000000123414.0000 000000000001.0000 000000000123.4140 000000412341.0000 000000000000.0100 000000041234.1000 000000099999.0000 000000000000.0000 000000000000.0000 000000000000.0000 SC47200103 SC47AUSTINCUSTMACCOUNT11111ACCOUNT22222ACCOUNT33333ACCOUNT44444ACCOUNT555552001 -03-03NIGHTTIMNTSOSUBSY 000000000005.3700 000000000000.0500 000000000002.6850 000000222222.0000 000000000000.0010 000000000431.4230 000000002993.0000 000000000000.0000 000000000252.5200 000003465456.0000 000000000003.0000
We imported the data (file billeddat.txt) again. The resulting change in the statistics of the import operation are shown in Figure 7-16. The same number of input records (170) are imported, but now 394 records are created (an additional record is created).
Figure 7-17 on page 161 shows the imported (created) data in the import table (import_billeddat). The highlighted column is the additional record created, because the session value was changed from 0 to 99999 in the input data (billeddat.txt).
There are some options for posting data into the ledger_active table. These options can be modified from the Import Wizard panel by clicking on the Post Options button shown in Figure 7-19 on page 163.
Note: If for any reason you do not want to post the imported data immediately, you can just click on the Close button on the Import Wizard panel shown in Figure 7-19. Then, at a later point in time, you can post the imported data to the ledger_active table. Directions on posting imported data at a later date are covered later in this section.
In the Post Options panel, shown in Figure 7-20 on page 164, you have to decide how to handle your data in the target (ledger_active) table in the master database and your data in the source (import_billeddat) table in the local database.
For the target table (the Post Options area of the General tab of the Post Options dialog box shown in Figure 7-16 on page 160), you can either append your data or replace your data in the ledger_active table in the master database. For the source table (the After Post area of the General tab of the Post Options dialog box shown in Figure 7-16 on page 160), you have to decide to either retain, delete, or archive your input table. Additionally, using the Ledger tab area of the Post Options dialog box, you can set the value of a field in the target table (ledger_active in the master database) to a certain value for later processing. There are three fields that can be updated manually: TYPE, DATASRC, or SERVICE. Use the Update ledger field or post drop-down to select which field is to be updated during the post operation. In the example shown in Figure 7-21 on page 165, the post operation will set the value for the DATASRC column in all records posted to the ledger_active table to TDS.
After setting the values in the Post Options dialog box, as shown in Figure 7-20, click on the OK button. The Import Wizard panel is displayed, as shown in Figure 7-22 on page 166. You can now post your imported data to the master database by selecting the Post button shown in Figure 7-22. The Accounting Feature for the Workstation will now post your imported data into the ledger_active table.
When the posting is finished, a pop-up message, as shown in Figure 7-23, is displayed showing how many records are posted to the ledger_active table.
Click the OK button of the records posted dialog box shown in Figure 7-23. The Import Wizard panel is redisplayed. You have completed the import operation and can close the Import Wizard by clicking the Close button.
From the Accounting Feature for the Workstation data explorer, you can examine the data posted by selecting the ledger_active table. As shown in Figure 7-24, the imported data has the value TDS in the DATASRC column for all the records imported into the ledger_active table. If you did not post the data immediately following the import operation by clicking the Close button on the Import Wizard panel shown in Figure 7-19 on page 163, you can do it at any point in time you desire. To post the data into the ledger_active table, select the Functions -> Post Data menu from the Accounting Feature for the Workstation main panel, as shown in Figure 7-25 on page 168.
Invoking Functions -> Post Data from the accounting console main panel (shown in Figure 7-25) displays the Post Data dialog box, as seen in Figure 7-26 on page 169. The Options button of the Post Data dialog box shown in Figure 7-26 displays the same dialog boxes as shown in Figure 7-20 on page 164 and Figure 7-21 on page 165. The Post Data dialog box shown in Figure 7-26 gives you more flexibility than the Post Data dialog box you invoked directly from the Import Wizard panel. In addition to the options shown in Figure 7-20 on page 164 and Figure 7-21 on page 165, you can choose the target table to be posted with the imported data. You can view the data either in the target or in the input table before posting it by selecting the tables using the Post Data dialog box shown in Figure 7-26. If you want to create a new target table to post the data to, use the New button.
Selecting the New button shows you the New Master Table panel (shown in Figure 7-27), where you enter the name of your new target table in the master database. When you click the OK button, a new table is created in the master database. Note: Selecting the input table and posting the data into this new table will automatically create the table with the correct fields (columns), based on the columns in the input table.
Note: Be aware that there is no delete function for the newly created table. To delete this new table or any table in the accounting console, you need to open the data explorer, expand the All Tables folder, select the table you want to delete, right-click on the selected table, and select Delete to delete the table.
COL2="BP_ID" Text Width 8
COL3="CU_ID" Text Width 15
COL4="AC1" Text Width 12
COL5="AC2" Text Width 12
COL6="AC3" Text Width 12
COL7="AC4" Text Width 12
COL8="AC5" Text Width 12
COL9="DATE" Text Width 10
. . . {lines omitted} . . .
COL55="SESSION_KBYTES_V" Text Width 18
COL56="SESSION_KBYTES_V_NULL" Text Width 1
COL57="SESSION_KBYTES_P" Text Width 18
COL58="SESSION_KBYTES_P_NULL" Text Width 1
COL59="SESSION_KBYTES_A" Text Width 18
COL60="SESSION_KBYTES_A_NULL" Text Width 1
COL61="MISC_A" Text Width 18
COL62="MISC_A_NULL" Text Width 1
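The column names and widths above describe a fixed-width record layout. The following Python sketch shows how such a record could be sliced using a list of (name, width) pairs; the widths here are a short hypothetical subset of the full schema.ini definition, not the complete layout.

```python
# Illustrative fixed-width parsing driven by schema.ini-style definitions.
def parse_fixed_width(line, layout):
    """layout: list of (column_name, width) pairs, in file order."""
    fields, pos = {}, 0
    for name, width in layout:
        fields[name] = line[pos:pos + width].strip()
        pos += width
    return fields

# Hypothetical subset of the layout: SMF_ID (4), BP_ID (8), CU_ID (15).
layout = [("SMF_ID", 4), ("BP_ID", 8), ("CU_ID", 15)]
record = parse_fixed_width("SC47BPSC47  AUSTINCUSTM   ", layout)
```

This is the same slicing the Import Wizard performs using the Width values from the named schema.ini section.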
The import definition drlsblda.imd shown in Example 7-8 consists of five parts. Each part is named with the part name enclosed in square brackets. The five parts of drlsblda.imd are:
[Data]: Holds the source and target definitions for the data.
[Import Data]: Defines options to be used.
[Post Opts]: Defines ledger posting options.
[Ledger Updates]: Holds values for the TYPE and DATASRC fields of the ledger table.
[Schema Text]: Defines the layout of the input (import) data.
The import definitions, including drlsblda.imd, are located in the <high level directory>\AWO20\Demo\System\Import Definitions\ directory. Additional information on the Accounting Feature for the Workstation directory structure can be found in Section 5.3, Directory structure of the demo database on page 79. The following sub-sections describe these parts. Also described is how these parts are influenced by the processing from the graphical interface and import wizard.
The import definition shown on the left part of Figure 7-28 contains all the necessary information for performing the import. The information in the import definition directs Accounting Feature for the Workstation as to where to place the imported data (import table), the target (target table), and the schema (schema.ini), in addition to where the data to import (drlsblda.txt) is located. The directives and table information in the [Data] part are:
DataType: Format of the imported data. Type 3 represents a fixed-width text file.
Results: Name of the temporary table in the local database where the imported data will be initially loaded.
Target: Name of the ledger table in the master database, where the data will subsequently be posted from the local result table.
ImportDB: Path name of the source directory where the import files and definitions are located.
ImportTbl: Table or file name where the input data is located. The period in this value will be substituted with a hash sign (#), so the name drlsblda.txt becomes drlsblda#txt.
ImportInfo: Section name in the Schema.ini file where the data format of the imported file is stored. Note that the definition in the Schema.ini file, shown in Figure 7-29, needs to match the definition name in the [Schema Text] part of the import definition.
ImportAll: Whether to import all rows or filter the rows based on criteria. ImportAll=1 means import all rows.
Criteria: Filtering criteria for the import; when ImportAll=1, Criteria is not used. An example of criteria to include only data with an SMF_ID of SC47 is Criteria="[drlsbldat#txt].[SMF_ID] = 'SC47'"
ShowErrors: Whether to show errors while the import is proceeding.
TraceOn: Whether to generate additional trace information.
LogPath: The directory where the log file is to be placed.
CustMapping: Whether to activate custom mapping instead of user-defined mapping. A value of 1 indicates PR Billed Data mapping is used to import data into the Accounting Feature for the Workstation.
The custom mapping option is only available for data downloaded from the Tivoli Decision Support for OS/390 resource accounting feature DRL.BILLED_DATA table. It is called PR Billed Data mapping. Additional information on this special processing for Tivoli Decision Support for OS/390 data can be found in Chapter 6, Special Processing for Import Definition (DRLSBLDA), in Tivoli Decision Support for OS/390 Release 5.1 Accounting Feature for the Workstation, SH19-4516. The user-defined mapping uses definitions in the [Field Mapping] part of the import definition file. Field mapping is not used, and is therefore not present, in the Tivoli Decision Support for OS/390 import definition file drlsblda.imd. More information on field mapping is in Section 7.4.6, Import definition [Field Mapping] part on page 176. The PR Billed Data mapping, as defined by CustMapping=1, maps unloaded DRL.BILLED_DATA into an Accounting Feature for the Workstation ledger format. The DRL.BILLED_DATA table (and the unloaded data from the table) contains multiple usage metric values in a single row and must be mapped to multiple rows in the Accounting Feature for the Workstation ledger table, where each row provides one usage metric. More discussion on the PR Billed Data mapping can be found in Chapter 6, Special Processing for Import Definition (DRLSBLDA), in Tivoli Decision Support for OS/390 Release 5.1 Accounting Feature for the Workstation, SH19-4516.
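Since the .imd file uses bracketed sections with key=value directives, it can be read with an ordinary INI parser. The following sketch reads the [Import Opts]-style directives described above with Python's configparser; the file content is a hypothetical fragment for illustration, not the shipped drlsblda.imd.

```python
# Sketch: reading [Import Opts]-style directives from an .imd-like file.
import configparser

IMD_TEXT = """\
[Import Opts]
ImportAll=1
Criteria=
ShowErrors=1
TraceOn=0
CustMapping=1
"""

cfg = configparser.ConfigParser()
cfg.read_string(IMD_TEXT)
opts = cfg["Import Opts"]

import_all = opts.getboolean("ImportAll")        # 1 -> import every row
use_pr_mapping = opts.getboolean("CustMapping")  # 1 -> PR Billed Data mapping
criteria = opts.get("Criteria") or None          # unused when ImportAll=1
```

The same approach could be used to inspect or audit the options the Import Wizard saved without opening the GUI.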
COLx
DATASRC=DRLSBLDA: The value DRLSBLDA is inserted in the DATASRC field of the ledger table for every posted record.
The fields GLACCT and QUANTITY (from the imported table (y2k#txt)) are mapped to the fields GLACCT and QUANTITY in the target table (ledger_active). The fields of the import table are defined in the [Schema Text] part while the from and to tables are defined in the [Data] part.
The information in the [Field Mapping] part is only used when CustMapping=0 is specified in the import options part of the import definition file.
Figure 7-31 Mapping of DRL.BILLED_DATA to the ledger (figure labels): the DRL.BILLED_DATA row carries SMF ID, BP ID, CU ID, ACCOUNT 1-5, DATE, PERIOD NAME, CHARGE TYPE, and SUBSYS ID, plus the CPU, TAPE EXCP, DISK EXCP, LINE PRINT, TAPE MOUNT, PAGE PRINT, DISK MB, SESSION, and MISC segments. Through the service_categories and rates_active tables (SERVCAT value PR; SERVICE values CPU, TAPE EXCPS, DISK EXCPS, LINES, MOUNTS, PAGES, DASD MB, SESSION, and MISC), each segment maps to a ledger_active row with the ENTRTIME, DATASRC, TYPE, PERIOD, SERVCAT, SERVICE, QUANTITY, RATE, and CHARGES fields.
At the top of Figure 7-31, the fields of one row of unloaded Tivoli Decision Support for OS/390 resource accounting data (DRL.BILLED_DATA) are shown. This data contains usage information for different resources. The default BILLED_DATA table has nine billing segments, each having three values (except for the MISC segment, which only has two). The three values in a segment are usage (_V), price (_P), and charge (_A). For example, the three values for CPU_SECONDS are CPU_SECONDS_V (sum of CPU seconds used), CPU_SECONDS_P (price of CPU seconds), and CPU_SECONDS_A (amount or charge for CPU seconds used).
When the import is performed, the service_categories table is consulted for the appropriate category of PR and used as the SERVCAT key for the rates_active table. Each segment must have a corresponding entry in the rates_active table to be mapped to a certain service. For example, TAPE_MOUNTS should map to a service called MOUNTS. When all the segments in the BILLED_DATA are filled, a single record will map to nine rows in the ledger_active table, as indicated in Figure 7-31.
Note: The number of custom fields that are imported can be increased or reduced from the supplied drlsblda.imd definition. Refer to Section 8.1, Configuration on page 180 for more information on how and what we did to reduce the number of fields being imported in the example given in Chapter 8, Sample chargeback implementation on page 179. You must use all three value definitions for each resource usage type being imported. For example, when CPU_SECONDS is used, you must also define the field definitions for the CPU price and the CPU amount. These three fields build a segment. If you do not correctly define and use this field mapping, your import will fail.
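The rule that every imported resource type needs all three of its _V/_P/_A fields can be checked mechanically. The following sketch flags segments with an incomplete triple over a list of mapped field names; it is an illustrative check of the documented rule, not the product's validation code.

```python
# Illustrative check: each segment needs its usage (_V), price (_P),
# and amount (_A) field definitions, per the note above.
def incomplete_segments(field_names):
    """Return segment base names that lack any of the _V/_P/_A fields."""
    suffixes = ("_V", "_P", "_A")
    bases = {name.rsplit("_", 1)[0] for name in field_names
             if name.endswith(suffixes)}
    return sorted(base for base in bases
                  if not all(base + s in field_names for s in suffixes))
```

A non-empty result points at exactly the kind of incomplete field mapping that causes the import to fail.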
Chapter 8.
8.1 Configuration
The local Microsoft Jet database on each workstation handles the local database requests of the Accounting Feature for the Workstation running on that workstation. The use of Microsoft SQL Server allows for the sharing of accounting information between the Accounting Feature for the Workstation instances running on each of the local workstations. This somewhat more complex scenario, rather than just running the Accounting Feature for the Workstation on one workstation used by one IT financial analyst, is provided to simulate an implementation of the Accounting Feature for the Workstation in an enterprise environment. Note: Microsoft SQL Server is not a prerequisite for installing and using the Accounting Feature for the Workstation, as it can run in a stand-alone fashion, interfacing directly with the Tivoli Decision Support for OS/390 database. In the sample scenario described in this chapter, in order to perform accounting and chargeback, data is collected using Tivoli Decision Support for OS/390 Version 1.5. The accounting and chargeback data is unloaded to a flat file using a DB2 utility on the OS/390 system (Section 8.4.4, Import the OS/390 data on page 198 has more information on unloading data using the DB2 utility). The unloaded BILLED_DATA data is first downloaded to one or more of the five PC workstations via FTP. The master database, located on the SQL Server, is loaded from the local database of the workstation. The data in the master database is then accessible by all five workstations.
The SQL Server is located on a Windows NT Server Version 4 machine with Service Pack 6. The Accounting Feature for the Workstation is not installed on the SQL Server machine. All the clients run the Accounting Workstation Options feature. They are connected to the SQL Server using an ODBC connection. These clients can connect to the OS/390 environment to download the monthly BILLED_DATA table records using FTP or ODBC. The use of ODBC is available in Accounting Feature for the Workstation revision 18. In our environment, shown in Figure 8-1, we used IBM PC 300 PL workstations with 256 MB RAM, 450 MHz processors, and disk drive capacities of 30 GB. Note: The workstations we used in our environment had more than the minimum required to run the Accounting Feature for the Workstation. We list the hardware here only for your understanding of our setup. See the actual hardware and software prerequisites for the Accounting Feature for the Workstation in Tivoli Decision Support for OS/390 Release 5.1 Accounting Feature for the Workstation, SH19-4516.
We used the following hardware and software in our multiple workstation environment:
- IBM PC 300 PL workstation with Windows 98 Second Edition
- IBM PC 300 PL workstation with Windows ME
- IBM PC 300 PL workstation with Windows NT4, Service Pack 6, and Microsoft SQL Server Version 7 with SQL Server Service Pack 2
- IBM PC 300 PL workstation with Windows NT4 and Service Pack 6
- IBM PC 300 PL workstation with Windows 2000 and Service Pack 1
- IBM PC 300 PL workstation with Windows 95 Second Edition
The database options need to be modified for working with the import data feature. The Select into / bulk copy option needs to be activated; otherwise, a message similar to the one shown in Figure 8-3 will be displayed during the Post function of the Accounting Feature for the Workstation.
The option can be changed from the database property dialog box. Right-click on the awomaster database and select Properties. In the awomaster Properties panel, as shown in Figure 8-4 on page 184, select the Options tab. Ensure that the Select into / Bulk copy option is checked and then click the OK button.
3. Modify the administration rights for this new user so that system administration is set in the Server Roles tab, as shown in Figure 8-5, and access for the awomaster database is set in the Database Roles tab, as shown in Figure 8-6 on page 186. Additionally, the database roles of database owner and public are permitted.
4. When the server roles and database access are set in the SQL Server Login Properties panel shown in Figure 8-6, click the OK button to save the definition.
4. In the database user properties for the new user, grant Insert permission for the Accounting Feature for the Workstation table activity_log, as shown in Figure 8-8 on page 188.
Note: Because of how the product logs every read-only activity, read-only users need to have insert authority for the activity log.
5. When the permissions are set, click on the OK button of the Database Users Properties panel shown in Figure 8-8.
Figure 8-9 Create new Local Accounting Feature for the Workstation Database
2. Select the location (drive and directory) where the new database is to be created. Use the New Folder button if necessary. When completed, click the OK button. 3. After the Local database has been created, you next need to create the new master database. This is accomplished using the New Master Connection panel, as shown in Figure 8-10 on page 190. Because the New Master Connection panel allows you to create a new database or open an existing database, ensure the Create new database radio button is selected, as shown in Figure 8-10. Because the new master is to reside on the SQL server, the SQL Server database type, as shown in Figure 8-10, needs to be selected. With the SQL Server database and Create a new database options selected, click the OK button.
4. The ODBC Logon panel, as shown in Figure 8-11, is displayed. Fill in the ODBC connection properties. In this example, the administrator user ID (created in Defining the administrator on page 184 and connected to the awomaster database created in Section 8.2.1, Creating master SQL database on page 182) is entered into the panel. Note: ODBC support for DB2 databases should be available in revision 18 of Accounting Feature for the Workstation. The Choose database type list, shown in Figure 8-10, should have three options: DB2, Microsoft Access, and SQL Server.
5. When the ODBC logon information is entered in the dialog box shown in Figure 8-11, click the OK button, and the master database will be created. In the example shown in Figure 8-11, if there is no pop-up message, the AWO master database is created in awomaster on the Microsoft SQL Server.
Note: The user ID (awouser in the example shown in Figure 8-11) used to create the Master database (awomaster in this example) must have administrator authority for the database specified in the dialog box. To verify the database creation, open the Data Explorer in the Accounting Feature for the Workstation and expand the All Tables folder, as shown in Figure 8-12. You will be able to see the Microsoft SQL Server system tables and some Accounting Feature for the Workstation tables, for example, calendar and custom_fields.
On each client where you start the Accounting Feature for the Workstation, you can create a new local database by performing the following steps: 1. Create a new local database using the Accounting Feature for the Workstation menu File -> New. This brings up the New Database dialog box, as shown in Figure 8-13.
Figure 8-13 Create new Local Accounting Feature for the Workstation Database
2. Select the location (drive and directory) where the new database is to be created. Use the New Folder button if necessary. When completed, click the OK button. 3. When the New Master Connection panel opens, as shown in Figure 8-14, choose Open existing database by clicking the radio button.
4. With the Open existing database radio button active, select the OK button of the New Master Connection dialog box shown in Figure 8-14. The ODBC Logon dialog box shown in Figure 8-15 is displayed.
5. Fill in the necessary SQL Server connection information of the ODBC Logon dialog box shown in Figure 8-15 and click the OK button. In this example, the user ID awouser1 is used. This user ID is not the Accounting Feature for the Workstation administrator. Again, the connection can be verified using the Accounting Feature for the Workstation Data Explorer, as shown in Figure 8-12 on page 191.
In the BILLED_DATA table structure shown in Table 8-1, only three data segments (CPU_SECONDS, DASD_MBYTES, and PAGES_PRINTED) are used. Based on the BILLED_DATA format shown in Table 8-1, the custom field definitions shown in Table 8-2 need to be defined. For details on using the Custom Field dialog box to create the needed custom fields, see Creating custom fields in the Tivoli Decision Support for OS/390 Release 5.1 Accounting Feature for the Workstation, SH19-4516.
Table 8-2 Custom fields definition
Field name  Field type  Size  Indexed  Description
CUSTOMER    Text        15    Yes      Customer information via lookup table
AC1         Text        12    No       Account Field 1 from BILLED_DATA
AC2         Text        12    No       Account Field 2 from BILLED_DATA
AC3         Text        12    No       Account Field 3 from BILLED_DATA
SV_GL       Text        12    Yes      Global Account information from Service via lookup table
APPLICAT    Text        15    No       Information about used Application via lookup table
SYSID       Text        8     No       MVS System ID from BILLED_DATA
SUBSYS      Text        8     No       MVS subsystem ID from BILLED_DATA
DATE_REC    Text        10    No       Date information from BILLED_DATA
SHIFT       Text        8     No       Shift information from PERIOD field of BILLED_DATA
After the custom fields have been defined, you next need to create the active ledger using the menu Functions -> Data Management -> Create Active Ledgers.
At this point in the definition process, there is no price information for our services, so the services are defined with rates equal to zero. Note: The APPLY column is -1, so the rates will not be used by the Apply Rate function. The rates will be applied later in the processing, as described in Section 8.5.5, Rates table on page 207. The SVMAP column indicates the segment names of the data to be imported from the BILLED_DATA table. In this example, only three services are being accounted for. The three services are:
- CPU seconds from the BILLED_DATA column CPU_SECONDS_V. We use the value CPU_SECONDS in the SVMAP column.
- DASD allocation from the BILLED_DATA column DASD_MBYTES_V. We use the value DASD_MBYTES in the SVMAP column.
- Print pages from the BILLED_DATA column PAGES_PRINTED_V. We use the value PAGES_PRINTED in the SVMAP column.
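As an illustration of the SVMAP mapping just described, this small Python sketch maps each service to its BILLED_DATA usage column and picks the usage values out of one imported record. The record contents shown in the usage are made-up sample values.

```python
# SVMAP value -> BILLED_DATA usage column, as defined for the three services
SVMAP = {
    "CPU_SECONDS": "CPU_SECONDS_V",
    "DASD_MBYTES": "DASD_MBYTES_V",
    "PAGES_PRINTED": "PAGES_PRINTED_V",
}

def usage_for(record):
    # Return service -> usage quantity for one imported BILLED_DATA record
    return {service: record[column] for service, column in SVMAP.items()}
```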
Important: In this example, charges are being applied on the workstation, so the fields of most interest in the import are the usage values (field names ending in _V). Because of how the import data function works, all three values for each segment must be specified (field names ending in _V, _P, and _A). The special custom mapping of PR Billed Data is shown in Section 7.5, BILLED_DATA mapping on page 177.
When the data has been unloaded using the control statements shown in Table 8-1 on page 194, the data can be transmitted using FTP, as described in Section 7.2, Transferring billing data on page 144.
The next step is to build an import definition. The import definition for this example is shown in Example 8-2. More information about the import definition file is in Section 7.4, An in-depth look at importing files on page 170.
Example 8-2 Import Definition from Import Wizard
[Data]
DataType=3
Results=import_drlsblda
Target=ledger_active
ImportDB=<.>\
ImportTbl=drlsblda#txt
ImportInfo=drlsblda.txt

[Import Opts]
ImportAll=1
Criteria=
ShowErrors=1
TraceOn=0
LogPath=
CustMapping=1

[Post Opts]
ReplaceData=0
AfterPost=0

[Ledger Updates]
TYPE=I
DATASRC=DRLSBLDA

[Schema Text]
CharacterSet=OEM
MaxScanRows=0
Format=FixedLength
ColNameHeader=False
COL1="SYSID" Text Width 4
COL2="AC1" Text Width 12
COL3="AC2" Text Width 12
COL4="AC3" Text Width 12
COL5="DATE_REC" Text Width 10
COL6="SHIFT" Text Width 8
COL7="SUBSYS" Text Width 8
COL8="CPU_SECONDS_V" Text Width 18
COL9="CPU_SECONDS_V_NULL" Text Width 1
COL10="CPU_SECONDS_P" Text Width 18
COL11="CPU_SECONDS_P_NULL" Text Width 1
COL12="CPU_SECONDS_A" Text Width 18
COL13="CPU_SECONDS_A_NULL" Text Width 1
COL14="PAGES_PRINTED_V" Text Width 18
COL15="PAGES_PRINTED_V_NULL" Text Width 1
COL16="PAGES_PRINTED_P" Text Width 18
COL17="PAGES_PRINTED_P_NULL" Text Width 1
COL18="PAGES_PRINTED_A" Text Width 18
COL19="PAGES_PRINTED_A_NULL" Text Width 1
COL20="DASD_MBYTES_V" Text Width 18
COL21="DASD_MBYTES_V_NULL" Text Width 1
COL22="DASD_MBYTES_P" Text Width 18
COL23="DASD_MBYTES_P_NULL" Text Width 1
COL24="DASD_MBYTES_A" Text Width 18
COL25="DASD_MBYTES_A_NULL" Text Width 1
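The [Schema Text] section describes a fixed-length record layout. As a minimal sketch (covering only the first columns of the layout for brevity), one record can be split into named fields by those widths:

```python
# Column widths taken from the [Schema Text] section of the import definition
WIDTHS = [
    ("SYSID", 4), ("AC1", 12), ("AC2", 12), ("AC3", 12),
    ("DATE_REC", 10), ("SHIFT", 8), ("SUBSYS", 8),
    ("CPU_SECONDS_V", 18), ("CPU_SECONDS_V_NULL", 1),
]

def parse_record(line):
    # Slice the fixed-length line into named, stripped fields
    record, pos = {}, 0
    for name, width in WIDTHS:
        record[name] = line[pos:pos + width].strip()
        pos += width
    return record
```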
When the import definition is built, you are now ready to import the data using the Import Wizard. Additional information about using the Import Wizard is in Section 7.3, Importing data into the workstation on page 146.
Note: There are two limitations to lookup tables:
- It is not possible to create a lookup table that searches on multiple columns.
- The lookup process does not support wildcards, because it uses an equality comparison.
The Apply_Lookup process is essentially running a set of SQL commands. By coding and running your own SQL commands, you can achieve the same results as the Apply_Lookup process and, in some cases, perform more elaborate processing than is supported by the Apply_Lookup processing of the Accounting Feature for the Workstation. The lookup tables are created using the process described in Section 6.1.7, Lookups on page 120. We created the lookup_CUSTOMER_AC1 table to associate the AC1 field with the CUSTOMER field in the ledger_active table. The content of this table is shown in Figure 8-18 on page 202.
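Because Apply_Lookup boils down to SQL, an equivalent update can be coded directly. The following sketch uses Python's built-in sqlite3 in place of the product's database, with the table names from this example and made-up sample rows, to show the equality-match update that Apply_Lookup effectively performs:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ledger_active (AC1 TEXT, CUSTOMER TEXT)")
con.execute("CREATE TABLE lookup_CUSTOMER_AC1 (AC1 TEXT, CUSTOMER TEXT)")
con.execute("INSERT INTO ledger_active VALUES ('DEPT01', NULL)")
con.execute("INSERT INTO lookup_CUSTOMER_AC1 VALUES ('DEPT01', 'Finance')")

# The equality comparison below is why wildcard matching is not supported
con.execute("""
    UPDATE ledger_active
    SET CUSTOMER = (SELECT l.CUSTOMER
                    FROM lookup_CUSTOMER_AC1 l
                    WHERE l.AC1 = ledger_active.AC1)
    WHERE AC1 IN (SELECT AC1 FROM lookup_CUSTOMER_AC1)
""")
```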
The second lookup table we created is named lookup_AC2_APPLICAT, shown in Figure 8-19 on page 203; it associates the AC2 field with the APPLICAT column in the ledger table.
The third lookup table we created is named lookup_Service_SV_GL, shown in Figure 8-20 on page 204; it associates the SERVICE field with the SV_GL column in the ledger table. The SV_GL column contains the Global Account information for the defined services.
To apply a lookup table to the ledger_active, you can use the apply function from the Accounting Feature for the Workstation data explorer by right-clicking on the lookup table name. You apply each lookup table one at a time to the ledger_active table using the data explorer. Tip: If you reply YES to the question Create interim?, an interim table with the last ledger_active data is created. Then, if you examine the results of the apply function and see the results are not correct, you can do a Restore Interim from the last created interim table.
To apply the factor table, use the Apply Factor function from the main menu or right-click on the factor table name. In the Apply Factor dialog box for this example, the factor is applied to the QUANTITY and CHARGES columns.
Using the apply allocation function from either the main menu or by right-clicking on the allocation table name, the values are distributed to the QUANTITY and CHARGES columns. You may encounter problems when applying the allocation table. Additional information on allocation table problems can be found in Section 9.10, Limited allocation table creation on page 232. Important: The apply allocation table function replaces each record from the input table that matches the allocation table selection, based on common field names. It then inserts new records based on the allocation percentages; the original values are no longer in the table.
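The replace-and-redistribute behavior described in the Important note can be sketched as follows. The field names mirror the allocation table in this example, but the code is an illustration under our reading of the function, not the product's implementation:

```python
def apply_allocation(records, alloc):
    """Replace each matching record with new records split by PERCENT."""
    out = []
    for rec in records:
        rules = [a for a in alloc if a["FROM-CUSTOMER"] == rec["CUSTOMER"]]
        if not rules:
            out.append(rec)          # non-matching records pass through
            continue
        for a in rules:              # the original record is not kept
            out.append({
                "CUSTOMER": a["TO-CUSTOMER"],
                "QUANTITY": rec["QUANTITY"] * a["PERCENT"] / 100.0,
                "CHARGES": rec["CHARGES"] * a["PERCENT"] / 100.0,
            })
    return out
```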
The direct charge table contains information about each customer and the static fields values to be charged. For the direct charge table, you need to manually build the table, giving the quantity (QUANTITY) and the total charge (CHARGES) fields for each customer (or row). You should not apply rate tables for the direct charges, because the QUANTITY and CHARGES should be fixed, reflecting the actual charges. To apply the data from the direct charge table to the ledger_active table, use the apply function from either the main menu or right-click on the direct charges table name in the data explorer.
To apply the data from the rates_active table to the ledger_active table, use the apply rates function from either the main menu or by right-clicking on the rates table name in the data explorer. After applying the rates table data, the active ledger table contains all necessary chargeback information for the current billing period.
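In essence, applying the rates sets CHARGES = QUANTITY * RATE for each ledger record whose service has a rate. A minimal sketch, with a made-up rate value:

```python
def apply_rates(ledger, rates):
    # rates maps SERVICE -> price per unit; charge = quantity * rate
    for rec in ledger:
        rate = rates.get(rec["SERVICE"])
        if rate is not None:
            rec["RATE"] = rate
            rec["CHARGES"] = rec["QUANTITY"] * rate
    return ledger
```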
In this example, the data in the budget table was entered manually, but you can also load the data into the budget table by importing data from other sources, such as spreadsheet applications. Tip: The Demo database (see Section 5.3, Directory structure of the demo database on page 79) has some sample budget data import definitions.
The expense ledger contains expense data collected from external sources. For example, IT expenses are often imported from the corporate general ledger system. In this example, the summarized expense data for our three services has been entered manually into the expense_active table.
The provided reports deliver analysis, trend, and forecasting information. These reporting functions are documented in Chapter 8, Analysis and Reporting, of Tivoli Decision Support for OS/390 Release 5.1 Accounting Feature for the Workstation, SH19-4516. The report and analysis functions perform a predefined set of SQL commands, storing the results in tables in the local database. These result tables can then be exported to other applications for further processing or reporting. Because the accounting and chargeback data in the Accounting Feature for the Workstation is kept in the master database, you can also create your own user-defined queries to be run for any billing period, providing your own analysis functions.
Chapter 9.
9.1 Fixes
Before using the Tivoli Decision Support for OS/390 Accounting Feature for the Workstation, we strongly recommend that you order and install all available PTFs for both the host and workstation products. You can contact the IBM or Tivoli support center to order the latest available level of code. When we started this redbook, Accounting Feature for the Workstation Version 2.0 was at Release Level 13. The latest level at the time of publication is Release Level 17, which is the level this redbook is based upon. We experienced some problems that, to the best of our knowledge, have been corrected in the latest level of the code, which should be release 18 or higher by the time you read this redbook.
9.2 Installation
This section describes issues we encountered with the installation of Accounting Feature for the Workstation in our environment, along with solutions that you can use to resolve these issues in your installation.
Before you can install the downloaded file, you will need to use an unzip tool, such as WinZip or PKZIP, to extract the installation program onto your workstation. In the example shown in Figure 9-2, the file AWO20.ZIP has been opened in a WinZip panel, listing the files contained in the zip file.
To use the WinZip panel shown in Figure 9-3 on page 216, click on Action in the menu bar area and select Extract from the drop-down list.
The Extract panel shown in Figure 9-4 is displayed. Select the All files button in the File area of the Extract panel, select Use folder names, and select the target folder on your workstation from the Folders/drives explorer area in the center of the Extract panel. When you have selected these, click the Extract button to start the unzip operation.
When all files are unzipped you can close the WinZip panel and open a Windows Explorer panel, as shown in Figure 9-5 on page 217. Navigate to the newly unzipped folder and double-click on the Setup icon to start the Accounting Feature for the Workstation installation.
The Accounting Feature for the Workstation installation starts and asks you for your language, as shown in Figure 9-6.
Select your language and click OK to continue with the installation. The InstallShield Wizard panel is prepared and displayed (see Figure 9-7).
From here the installation is as described in Section 4.3, Installing the accounting console on page 63.
If this option is not marked, then when you start the Accounting Feature for the Workstation, the start-up panel will display the Chinese (PRC) language, as shown in Figure 9-9.
This happens because you receive a Confirm File Overwrite dialog box during the install, as shown in Figure 9-10 on page 219, and when you reply Yes several times or Yes to All, all but the Chinese language information is overwritten.
After the Yes or Yes to All reply, the six setup.bmp files are replaced during the extraction and only the last setup.bmp file is saved in your target folder. The WinZip panel shown in Figure 9-11 shows the path definitions information for the files to be unzipped. The last subdirectory named in the folder setupdir is 0804, and this happens to be the Chinese version of setup.bmp.
9.3 Customization
This section gives tips and hints for customizing the Accounting Feature for the Workstation.
The first time the Accounting Feature for the Workstation is started and no customizing is done, the Current Billing Period defaults to the current month and year. For example, if you install in April of 2001, the current billing period is 200104, as shown in Figure 9-13.
Figure 9-13 Current billing period just after the first AWO start
To reset the current billing period in Accounting Feature for the Workstation, you need to create an active ledger for the accounting period just before the period you want to start collecting and reporting on. In this example, this would be December of 2000, as we want to collect and report starting in January.
Note: Inserting values in the calendar has no influence on the period setting. Follow these steps: 1. Before you can create an active ledger, you need to define the custom fields. The custom fields table is mandatory for creating active ledger tables. See Section 8.4.1, Creating custom fields and active ledger on page 194 for information on creating custom fields. 2. Next, edit the new active ledger table you just created and insert some dummy values in the first line. The important value in this first line is the PERIOD field; set it to the billing period immediately before the point when you want to start collecting data. In our example, this would be
December 2000, so the value to put into the PERIOD field is 200012. Close the active ledger table and the Accounting Feature for the Workstation client. 3. Restart the Accounting Feature for the Workstation client. When the client starts, the period value is taken from the period of the active ledger table, in this case, 200012. Make sure the PERIOD field width is large enough to see the whole value. If the field is not wide enough, you may be misled as to the value in the field. For example, in Figure 9-14, the value looks to be 20001, or January of 2000, but by looking at the bottom of the Accounting Feature for the Workstation panel, you can see the current billing period set to 200012.
Figure 9-15 on page 223 shows the same ledger_active data with the PERIOD field widened so that the complete value of 200012 for December 2000 is displayed.
4. Next, finalize the active ledger by right-clicking on ledger_active, as shown in Figure 9-16 on page 224, and select Finalize Ledger from the list. This action will set the last completed ledger period to the period of the ledger being finalized, 200012 for December 2000, in this example. The dummy active ledger that you created will be removed, but the important thing is that the Accounting Feature for the Workstation client will set the last used date to 200012, so when you create the next ledger it will be set to the next period, or 200001 for January 2001, in this example.
5. When the Finalize Ledger panel is displayed, as shown in Figure 9-17, click the Execute button.
6. An Accounting Feature for the Workstation information message is displayed and you are asked to create an historical ledger (lg200012), as shown in Figure 9-18 on page 225.
7. Click the OK button and the successful completion message shown in Figure 9-19 is displayed.
8. Click the OK button of the successful completion dialog box shown in Figure 9-19.
The Accounting Feature for the Workstation data explorer is displayed. Press the F5 program function key to refresh the panel. The refreshed and updated Accounting Feature for the Workstation data explorer panel, shown in Figure 9-20, is displayed. In the left frame, under the Historical folder, a historical ledger (lg200012) is seen. At the bottom of the Accounting Feature for the Workstation panel, the current billing period has changed to 200101. This is the accounting period we want, so now we can import data for periods before the current calendar month.
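The period bookkeeping in the steps above is simple YYYYMM arithmetic: finalizing the 200012 ledger makes the next ledger 200101. A sketch of that roll-over:

```python
def next_period(period):
    # period is an integer of the form YYYYMM, e.g. 200012 for December 2000
    year, month = divmod(period, 100)
    if month == 12:
        return (year + 1) * 100 + 1   # roll over to January of the next year
    return period + 1
```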
If your operating system is not at the proper level, you need to exit the installation and install the proper service level or above for your operating system before you can proceed with the installation of Accounting Feature for the Workstation.
This error normally is associated with you (the user doing the installation) not having write access to the hard disk. Change the access permission for the user performing the installation and restart the installation.
Note: If you import data from a source other than a Tivoli Decision Support for OS/390 host system, the date format setting may need to be different.
A detailed description on importing data from the host is given in Section 7.3, Importing data into the workstation on page 146. Several hints about importing data are given in the notes and guidance of this chapter.
In the Create Index panel, enter the index name in the Index Name field and then select the index attributes (Primary and Unique, in this example). Click on the lower Fields icon to the right of the Fields and choose the required index fields (Figure 9-26).
All fields from the Billed Data definition can be used as key fields, plus SERVCAT, SERVICE, and DATASRC.
SET "QUANTITY" = "QUANTITY" * "FACTOR"
WHERE "FACTOR" IS NOT NULL;
DELETE FROM "ledger_active"
WHERE "ledger_active"."SERVICE" = 'CPU';      <== Change
INSERT INTO "ledger_active"
SELECT "CUSTOMER", "AC1", "AC2", "AC3", "SV_GL", "APPLICAT", "SYSID",
       "SUBSYS", "DATE_REC", "SHIFT", "SEQUENCE", "PERIOD", "ENTRTIME",
       "TYPE", "DATASRC", "SERVCAT", "SERVICE", "QUANTITY", "RATE", "CHARGES"
FROM "_tmp_awouser_20010404_102033";
DROP TABLE "_tmp_awouser_20010404_102033";
Note: Using the Append data option in the Apply Factor Table function can only be done with a factor table name no greater than eight characters in length.
We were able to circumvent the problem using the modified SQL code shown in Example 9-2. In this example, the VARCHAR lengths have been reduced from 255 to sizes matching the data contained in the fields. The from fields are Customer, AC3, and Service, and the to field is Customer. The length definitions for the from fields are 15 characters for Customer, 12 for AC3, and 8 for Service, while the to field Customer is 15 characters (to match the from field length).
Example 9-2 SQL code for create allocation table
CREATE TABLE "alloc_OverheadIT" (
  "FROM-AC3" VARCHAR(12) NOT NULL,
  "FROM-CUSTOMER" VARCHAR(15) NOT NULL,
  "FROM-SERVICE" VARCHAR(8) NOT NULL,
  "TO-CUSTOMER" VARCHAR(15),
  "PERCENT" FLOAT NOT NULL,
  "ERROR" SMALLINT );
CREATE UNIQUE INDEX "allocIDX" ON "alloc_OverheadIT"("FROM-AC3", "FROM-CUSTOMER", "FROM-SERVICE", "TO-CUSTOMER");
To create the allocation table with the correct field sizes, you must save the SQL code of the table (do not execute the create table using the dialog box). After saving the SQL code, you need to edit it to change the VARCHAR(255) values to the correct lengths. The SQL code can then be executed from the SQL editor to create the table correctly, eliminating the error that would occur using the dialog box to create the table.
AND "ledger_active"."SERVICE" = "alloc_OverheadIT"."FROM-SERVICE" ;
UPDATE "_tmp_awouser_20010404_103445"
SET "CUSTOMER" = "TO-CUSTOMER"
WHERE "FROM-CUSTOMER" <> ' ' AND "FROM-AC3" <> ' ' AND "FROM-SERVICE" <> ' ' ;
UPDATE "_tmp_awouser_20010404_103445"
SET "QUANTITY" = "QUANTITY" * ("PERCENT"/100)
WHERE "PERCENT" > 0;
UPDATE "_tmp_awouser_20010404_103445"
SET "TYPE" = 'O'
WHERE "FROM-CUSTOMER" <> ' ';
UPDATE "_tmp_awouser_20010404_103445"
SET CHARGES = QUANTITY * RATE
WHERE RATE <> ' ';
DELETE FROM "ledger_active";
INSERT INTO "ledger_active"
SELECT "_tmp_awouser_20010404_103445"."CUSTOMER",
       "_tmp_awouser_20010404_103445"."AC1",
       "_tmp_awouser_20010404_103445"."AC2",
       "_tmp_awouser_20010404_103445"."AC3",
       "_tmp_awouser_20010404_103445"."SV_GL",
       "_tmp_awouser_20010404_103445"."APPLICAT",
       "_tmp_awouser_20010404_103445"."SYSID",
       "_tmp_awouser_20010404_103445"."SUBSYS",
       "_tmp_awouser_20010404_103445"."DATE_REC",
       "_tmp_awouser_20010404_103445"."SHIFT",
       "_tmp_awouser_20010404_103445"."SEQUENCE",
       "_tmp_awouser_20010404_103445"."PERIOD",
       "_tmp_awouser_20010404_103445"."ENTRTIME",
       "_tmp_awouser_20010404_103445"."TYPE",
       "_tmp_awouser_20010404_103445"."DATASRC",
       "_tmp_awouser_20010404_103445"."SERVCAT",
       "_tmp_awouser_20010404_103445"."SERVICE",
       "_tmp_awouser_20010404_103445"."QUANTITY",
       "_tmp_awouser_20010404_103445"."RATE",
       "_tmp_awouser_20010404_103445"."CHARGES"
FROM "_tmp_awouser_20010404_103445";
DROP TABLE "_tmp_awouser_20010404_103445";
<== Change <== Change <== Change <== Change <== Change
<== Change
The Control Panel for Windows 2000 (Figure 9-30 on page 236) or Windows 9x (Figure 9-31 on page 236) is displayed.
Double-click on the Add/Remove Programs icon in the control panel, as shown in Figure 9-30 for Windows 2000 or Figure 9-31 for Windows 9x, to display the Add/Remove Programs panel, shown in Figure 9-32 on page 237 for Windows 2000 or Figure 9-33 on page 238 for Windows 9x.
You may need to use the scroll bar on the right side of the panel to locate the Accounting Feature for the Workstation program. Select the Accounting Feature for the Workstation program (AWO 2.0) from the list of programs available to be removed from your system.
Note: For Windows 2000, be sure that the Change or Remove Programs push button on the left of the panel is selected, as shown in Figure 9-32.
Select AWO 2.0 from the list of programs. For Windows 2000, click the Change/Remove button or, for Windows 9x, click the Add/Remove button. The Choose Setup Language panel is displayed, as shown in Figure 9-34.
Click the OK button on the Choose Setup Language panel shown in Figure 9-34 and the InstallShield Wizard panel, shown in Figure 9-35, is displayed, showing the progress of starting and setting up the uninstall wizard.
When the install wizard is initialized, the InstallShield Wizard Welcome panel is displayed, as shown in Figure 9-36.
In the InstallShield Wizard panel, select the Remove option and click the Next button. The Confirm File Deletion panel is displayed, as shown in Figure 9-37 on page 240. By clicking the OK button, the uninstall wizard will uninstall the Accounting Feature for the Workstation program.
During the uninstall operation, the wizard will ask you about deleting some files in the ReadOnly File Detected panel, as shown in Figure 9-38.
Read the message carefully in each of these panels you receive and decide for yourself whether the file should be deleted or not. Select the Yes button for all files within the Accounting Feature for the Workstation folders. For all other files, such as Microsoft Windows files, use the No button, because these can be used by other applications and should not be removed. Note: Selecting the Don't display this message again option results in the deletion of all installed files, even those that could be used by other applications or the operating system. We suggest you not use this option. When all files are deleted, the InstallShield Wizard Maintenance Complete panel is displayed, as shown in Figure 9-39 on page 241 for Windows 2000 and in Figure 9-40 on page 241 for Windows 9x.
Click the Finish button and additional uninstall operations take place.
When the uninstall operations complete, the Add/Remove Programs Properties panel will be displayed and the AWO 2.0 program will no longer be shown in the list of programs, as shown in Figure 9-41 for the Windows 9x operating system and Figure 9-42 on page 243 for the Windows 2000 operating system.
Click the OK button for Windows 9x or the Close button for the Windows 2000 operating system. After the program has been uninstalled using the wizard, you will also need to delete any data generated by Accounting Feature for the Workstation on the workstation. We recommend that you delete all the Accounting Feature for the Workstation folders and files. Figure 9-43 on page 244 shows, in a Windows Explorer panel, the directory tree of folders and files remaining after an uninstall of the Accounting Feature for the Workstation program.
To delete the files and directories in the AWO directory structure, select the AWO folder as shown in Figure 9-43, and delete it. During the deletion operation, you may be asked to confirm deletion of some objects in the Confirm File Delete dialog box (shown in Figure 9-44).
Reply with Yes or Yes to All to the Confirm File Delete dialog box to delete all files, as seen in Figure 9-44. Though not required, we recommend you restart the workstation after you have completed all of the uninstall activities for the Accounting Feature for the Workstation program.
Appendix A.
Figure A-1 gives an overview of the possible account information in the OMVS accounting component.
To install the UNIX System Services subcomponent: 1. Place the members listed in the figures of this appendix into your local definition data set. The default data set name for the local definition data set is DRL150.LOCAL.DEFS.
2. Process the DRLIOMVS member using the process PR statements drop-down menu of the Tivoli Decision Support for OS/390 administration dialog box. 3. Install the subcomponent using the Component dialog box of the Tivoli Decision Support for OS/390 administration dialog box. To account for UNIX system resources, it is best to use CPU seconds. The total time taken by a job, in seconds, is calculated by summing the SRB time, TCB time, IIP time, HCT time, and RCT time. Tip: The best way to account for OMVS resources is to use the RACF user ID. If you work with the RACF user ID to assign accounting information to a cost center, it is possible to fill the lookup table RAFAOMVS automatically with the cost center information from RACF. See Appendix C, Importing information from RACF on page 277 for more information.
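The CPU-seconds calculation described above is a straight sum of the five time components; as a trivial sketch:

```python
def total_cpu_seconds(srb, tcb, iip, hct, rct):
    # Total job time in seconds: SRB + TCB + IIP + HCT + RCT
    return srb + tcb + iip + hct + rct
```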
VALUES('RAF','Resource Accounting Component',USER); /**********************************************************************/ /* */ /* OpenMVS Subcomponent */ /* */ /**********************************************************************/ SQL INSERT INTO &SYSPREFIX.DRLCOMP_PARTS (COMPONENT_NAME, PART_NAME, DESCRIPTION, USER_ID) VALUES('RAF','OPENMVS','OpenMVS accounting',USER); /* Table and update definitions */
SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME, PART_NAME) VALUES('RAF','TABSPACE','DRLSOMVL','DRLSOMVS','OPENMVS'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME, PART_NAME) VALUES('RAF','TABLE ','RAFOMVSLOG','DRLTOMVS','OPENMVS'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME, PART_NAME) VALUES('RAF','TABSPACE','DRLSOMVS','DRLSOMVS','OPENMVS'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME, PART_NAME) VALUES('RAF','TABLE ','RAFOMVS','DRLTOMVS','OPENMVS'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME, PART_NAME) VALUES('RAF','TABSPACE','DRLSAOMV','DRLSOMVS','OPENMVS'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME, PART_NAME) VALUES('RAF','LOOKUP ','RAFAOMVS','DRLTOMVS','OPENMVS'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME, PART_NAME) VALUES('RAF','UPDATE ','RAFOMVS_SMF30','DRLUOMVS','OPENMVS'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME, PART_NAME) VALUES('RAF','UPDATE ','RAFOMVS_SMF30_O','DRLUOMVS','OPENMVS'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME, PART_NAME) VALUES('RAF','UPDATE ','RAFOMVS_UP','DRLUOMVS','OPENMVS');
SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME, PART_NAME) VALUES('RAF','UPDATE ','RAFOMVS_USSM','DRLUOMVS','OPENMVS'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME, PART_NAME) VALUES('RAF','PURGE ','RAFOMVSLOG','DRLPOMVS','OPENMVS'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME, PART_NAME) VALUES('RAF','PURGE ','RAFOMVS','DRLPOMVS','OPENMVS');
PRIQTY 7500 SECQTY 3500
SEGSIZE 32
BUFFERPOOL &TSBUFFERPOOL
LOCKSIZE TABLE
COMPRESS YES;
/**********************************************************************/
/* Define DRLSOMVL tablespace for OpenMVS JOBLOG table. */
/**********************************************************************/
SQL CREATE TABLESPACE DRLSOMVL IN &DATABASE
USING STOGROUP &STOGROUP
PRIQTY 3500 SECQTY 1500
SEGSIZE 32
BUFFERPOOL &TSBUFFERPOOL
LOCKSIZE TABLE
COMPRESS YES;
PRIMARY KEY (RACFGRP, USERID, JOBNAME, SYSID, PGMRNAME, JOBCLASS, USERFLD)) IN &DATABASE.DRLSAOMV; SQL CREATE UNIQUE INDEX &PREFIX.RAFAOMVS_IX ON &PREFIX.RAFAOMVS (RACFGRP, USERID, JOBNAME, SYSID, PGMRNAME, JOBCLASS, USERFLD) USING STOGROUP &STOGROUP PRIQTY 712 SECQTY 712 CLUSTER BUFFERPOOL &IXBUFFERPOOL; SQL COMMENT ON TABLE &PREFIX.RAFAOMVS IS 'This table contains the rules for translating account translation elements in the in SMF log records to account IDs in the RAFOMVS table.'; SQL COMMENT ON &PREFIX.RAFAOMVS (RACFGRP IS 'The value to be matched with the RACF group of the job, obtained from SMF30GRP.', USERID IS 'The value to be matched with the RACF user ID of the job, obtained from SMF30RUD.', JOBNAME IS 'The value to be matched with the job name of the job obtained from SMF30JBN, SMF26JBN, or SMF25JBN.', SYSID IS 'The value to be compared with the system identification obtained from SMF30SID.', PGMRNAME IS 'The value to be compared with the programmer name obtained from SMF30USR.', JOBCLASS IS 'The value to be compared with the jobclass of the job obtained from SMF30CLS.', USERFLD IS 'The value to be compared with J_USERFLD, whose default value is the first character of the job name. The user can modify this default calculation.', ACCTID IS 'The account ID, obtained when the key columns match the corresponding log fields.');
SQL CREATE TABLE &PREFIX.RAFOMVSLOG (OMVS_TIMESTAMP TIMESTAMP NOT NULL, OMVS_JOBNAME CHAR(8) NOT NULL, OMVS_SMFID CHAR(4), OMVS_JOBCLASS CHAR(8), OMVS_RACFGRP CHAR(9), OMVS_USERID CHAR(8), OMVS_PGMRNAME CHAR(8), OMVS_USERFLD CHAR(8), OMVS_FUNCTION CHAR(8), OMVS_TCBTIME FLOAT, OMVS_SRBTIME FLOAT, OMVS_HPTTIME FLOAT, OMVS_IIPTIME FLOAT, OMVS_RCTTIME FLOAT, OMVS_PR_ID INTEGER, OMVS_PRGP_ID INTEGER, OMVS_PRUS_ID INTEGER, OMVS_PRUG_ID INTEGER, OMVS_PRSE_ID INTEGER, OMVS_PARENT_ID INTEGER, OMVS_NUM FLOAT, OMVS_CPU_SEC FLOAT, OMVS_DR_READ FLOAT, OMVS_HFS_READ FLOAT, OMVS_HFS_WRITE FLOAT, OMVS_HFS_PIPEREAD FLOAT, OMVS_HFS_PIPEWRITE FLOAT, OMVS_HFS_SPECREAD FLOAT, OMVS_HFS_SPECWRITE FLOAT, OMVS_LCALL_LOGFILE FLOAT, OMVS_LCALL_PHYFILE FLOAT, OMVS_GCALL_LOGFILE FLOAT, OMVS_GCALL_PHYFILE FLOAT, OMVS_READ_NETSOCK FLOAT, OMVS_WRITE_NETSOCK FLOAT, OMVS_BYT_MESSSENT FLOAT, OMVS_BYT_MESSREC FLOAT, OMVS_SYNC_TIMECALL FLOAT, PRIMARY KEY (OMVS_TIMESTAMP, OMVS_JOBNAME)) IN &DATABASE.DRLSOMVL; SQL CREATE UNIQUE INDEX &PREFIX.RAFOMVSLOG_IX ON &PREFIX.RAFOMVSLOG (OMVS_TIMESTAMP, OMVS_JOBNAME) USING STOGROUP &STOGROUP PRIQTY 712
SECQTY 712 CLUSTER BUFFERPOOL &IXBUFFERPOOL; SQL COMMENT ON TABLE &PREFIX.RAFOMVSLOG IS 'This table contains intermediate data for the OpenMVS producing system summarized at day level.'; SQL COMMENT ON &PREFIX.RAFOMVSLOG (OMVS_TIMESTAMP IS 'The date and time when the reader recognized the job cards, obtained from SMF30RSD and SMF30RST.', OMVS_JOBNAME IS 'The name of the job.' ); SQL CREATE TABLE &PREFIX.RAFOMVS (DATE DATE NOT NULL, PERIOD CHAR(8) NOT NULL, SYSID CHAR(8) NOT NULL, ACCOUNT CHAR(8) NOT NULL, JOBNAME CHAR(8) NOT NULL, FUNCTION CHAR(8) NOT NULL, JOBCLASS CHAR(8) NOT NULL, NJOBS FLOAT, CPUSEC FLOAT, CPUNMSECS FLOAT, LAST_COLLECT_TIME TIMESTAMP, LAST_IMPORT_TIME TIMESTAMP, PRIMARY KEY (DATE, PERIOD, SYSID, ACCOUNT, JOBNAME, FUNCTION, JOBCLASS)) IN &DATABASE.DRLSOMVS; SQL CREATE UNIQUE INDEX &PREFIX.RAFOMVS_IX ON &PREFIX.RAFOMVS (DATE, PERIOD, SYSID, ACCOUNT, JOBNAME, FUNCTION, JOBCLASS) USING STOGROUP &STOGROUP PRIQTY 712 SECQTY 712 CLUSTER BUFFERPOOL &IXBUFFERPOOL; SQL COMMENT ON TABLE &PREFIX.RAFOMVS IS 'This table contains summarized data for the OpenMVS producing system.' ;
SQL COMMENT ON &PREFIX.RAFOMVS (DATE IS 'The day the reader recognized the JOB cards, obtained from SMF30RSD.', PERIOD IS 'The name of the period, obtained from the values ''RAF'', SMF30RSD, SMF25RSD, SMF26RSD or SMF6RSD, and SMF30RST, SMF25RST, SMF26RST or SMF6RST as parameters in the PERIOD function.', SYSID IS 'The system ID, obtained from SMF30SID.', ACCOUNT IS 'The account ID obtained using the mapping rules in the table RAFAOMVS.', JOBNAME IS 'The job or session name, obtained from SMF30JBN, SMF26JBN, SMF25JBN, or SMF6JBN.', FUNCTION IS 'This is by default the first character of the job name. This can be changed by the user to give any other value desired.', JOBCLASS IS 'The job class, obtained from SMF30CLS.', NJOBS IS 'This is the total number of jobs, calculated as COUNT(CPUTIME).', CPUSEC IS 'This is the total time taken by the jobs in seconds, calculated as SUM(SRBTIME + TCBTIME + IIPTIME + HCTTIME + RCTTIME), obtained from SMF30CPT, SMF30CPS, SMF30IIP, SMF30HCT, and SMF30RCT.', CPUNMSECS IS 'This is the normalized CPU time, calculated during import as CPUSEC/CP_POWER, where CP_POWER is obtained by looking up SYSID in the CPU normalization table.', LAST_COLLECT_TIME IS 'This is the last time collect was done for this table.', LAST_IMPORT_TIME IS 'This is the last time import was done for the BATCH producing system.'); SQL GRANT SELECT ON &PREFIX.RAFAOMVS TO &USERS; SQL GRANT SELECT ON &PREFIX.RAFOMVSLOG TO &USERS; SQL GRANT SELECT ON &PREFIX.RAFOMVS TO &USERS;
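The CPUNMSECS normalization described in the column comments above can be sketched as follows. This is a hedged illustration only: the table layout, system ID, and CP_POWER value below are made up; the logic simply mirrors the LOOKUP of CP_POWER by SYSID and date range, with FLOAT(1) as the default.

```python
# Illustrative sketch of the CPUNMSECS calculation: raw CPU seconds are
# divided by the CP_POWER factor found for the SYSID in the CPU
# normalization table. Sample data below is invented.
import datetime

cpu_normal_data = [
    # (CP_SMF_ID, CP_START_DATE, CP_END_DATE, CP_POWER)
    ('MVSA', datetime.date(2001, 1, 1), datetime.date(2002, 12, 31), 2.0),
]

def normalized_cpu(cpusec, sysid, date):
    for smf_id, start, end, power in cpu_normal_data:
        if smf_id == sysid and start <= date <= end:
            return cpusec / power
    return cpusec / 1.0   # LOOKUP default, like FLOAT(1) in the listing

print(normalized_cpu(100.0, 'MVSA', datetime.date(2001, 6, 1)))  # 50.0
```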
/* */ /**********************************************************************/ /* */ /* Name: DRLUOMVS */ /* Define update definitions: */ /* RAFOMVS_SMF30 */ /* RAFOMVS_SMF30_O */ /* RAFOMVS_UP */ /* RAFOMVS_USSM */ /* */ /* Change activity: */ /* 00 01-07-1998 IBM Joachim Fischer Created */ /**********************************************************************/ DEFINE UPDATE RAFOMVS_SMF30 VERSION 'IBM.150' FROM SMF_030 WHERE SMF30WID = 'OMVS' AND SMF30RVN >= '05' AND (SMF30TYP = 2 OR SMF30TYP = 3) TO &PREFIX.RAFOMVSLOG LET (TCB_STEP_SEC = VALUE (CASE WHEN SUBSTR(SMF30TFL,1,1)= '0' OR SUBSTR(SMF30TFL,2,1)= '0' THEN SMF30CPT/FLOAT(100) ELSE FLOAT(0) END,FLOAT(0)), SRB_STEP_SEC = VALUE (CASE WHEN SUBSTR(SMF30TFL,1,1)= '0' OR SUBSTR(SMF30TFL,3,1)= '0' THEN SMF30CPS/FLOAT(100) ELSE FLOAT(0) END,FLOAT(0)), IIP_STEP_SEC = VALUE (CASE WHEN SUBSTR(SMF30TFL,1,1)= '0' OR SUBSTR(SMF30TFL,10,1)= '0' THEN SMF30IIP/FLOAT(100) ELSE FLOAT(0) END,FLOAT(0)), RCT_STEP_SEC = VALUE (CASE WHEN SUBSTR(SMF30TFL,1,1)= '0' OR SUBSTR(SMF30TFL,12,1)= '0' THEN SMF30RCT/FLOAT(100) ELSE FLOAT(0) END,FLOAT(0)), HPT_STEP_SEC = VALUE (CASE WHEN SUBSTR(SMF30TFL,1,1)= '0' OR SUBSTR(SMF30TFL,11,1)= '0' THEN SMF30HPT/FLOAT(100) ELSE FLOAT(0)
END,FLOAT(0)),
ID_TIME = TIMESTAMP(SMF30RSD,SMF30RST),
T_FUNCTION = SUBSTR(SMF30JBN,1,1),
USER_FLD = SUBSTR(SMF30JBN,1,1))
GROUP BY
(OMVS_TIMESTAMP = ID_TIME,
OMVS_JOBNAME = SMF30JBN)
SET
(OMVS_SMFID = FIRST(SMF30SID),
OMVS_TCBTIME = SUM(TCB_STEP_SEC),
OMVS_SRBTIME = SUM(SRB_STEP_SEC),
OMVS_IIPTIME = SUM(IIP_STEP_SEC),
OMVS_RCTTIME = SUM(RCT_STEP_SEC),
OMVS_HPTTIME = SUM(HPT_STEP_SEC),
OMVS_JOBCLASS = LAST(VALUE(SMF30CLS,' ')),
OMVS_PGMRNAME = LAST(VALUE(SMF30USR,' ')),
OMVS_RACFGRP = LAST(VALUE(SMF30GRP,' ')),
OMVS_USERID = LAST(VALUE(SMF30RUD,' ')),
OMVS_USERFLD = FIRST(VALUE(USER_FLD,' ')),
OMVS_FUNCTION = FIRST(VALUE(T_FUNCTION,' ')));
DEFINE UPDATE RAFOMVS_SMF30_O VERSION 'IBM.150' FROM SMF_030 SECTION OPENMVS WHERE SMF30WID = 'OMVS' AND SMF30RVN >= '05' AND (SMF30TYP = 2 OR SMF30TYP = 3) TO &PREFIX.RAFOMVSLOG GROUP BY (OMVS_TIMESTAMP = TIMESTAMP(SMF30RSD,SMF30RST), OMVS_JOBNAME = SMF30JBN) SET (OMVS_PR_ID = FIRST(SMF30OPI), OMVS_PRGP_ID = FIRST(SMF30OPG), OMVS_PRUS_ID = FIRST(SMF30OUI), OMVS_PRUG_ID = First(SMF30OUG), OMVS_PRSE_ID = FIRST(SMF30OSI), OMVS_PARENT_ID = FIRST(SMF30OPP), OMVS_NUM = SUM(SMF30OSC), OMVS_CPU_SEC = SUM(SMF30OST), OMVS_DR_READ = SUM(SMF30ODR), OMVS_HFS_READ = SUM(SMF30OFR), OMVS_HFS_WRITE = SUM(SMF30OFW), OMVS_HFS_PIPEREAD = SUM(SMF30OPR), OMVS_HFS_PIPEWRITE = SUM(SMF30OPW),
OMVS_HFS_SPECREAD = SUM(SMF30OSR),
OMVS_HFS_SPECWRITE = SUM(SMF30OSW),
OMVS_LCALL_LOGFILE = SUM(SMF30OLL),
OMVS_LCALL_PHYFILE = SUM(SMF30OLP),
OMVS_GCALL_LOGFILE = SUM(SMF30OGL),
OMVS_GCALL_PHYFILE = SUM(SMF30OGP),
OMVS_READ_NETSOCK = SUM(SMF30OKR),
OMVS_WRITE_NETSOCK = SUM(SMF30OKW),
OMVS_BYT_MESSSENT = SUM(SMF30OMS),
OMVS_BYT_MESSREC = SUM(SMF30OMR),
OMVS_SYNC_TIMECALL = SUM(SMF30OSY)
);
DEFINE UPDATE RAFOMVS_UP VERSION 'IBM.150' FROM &PREFIX.RAFOMVSLOG WHERE OMVS_SMFID IS NOT NULL TO &PREFIX.RAFOMVS LET (D1 = DATE(OMVS_TIMESTAMP), T1 = TIME(OMVS_TIMESTAMP), CPUTIME = (OMVS_TCBTIME + OMVS_SRBTIME + OMVS_HPTTIME + OMVS_IIPTIME + OMVS_RCTTIME), CPUN = VALUE(LOOKUP CP_POWER IN &PREFIX.CPU_NORMAL_DATA WHERE OMVS_SMFID = CP_SMF_ID AND D1 >= CP_START_DATE AND D1 <= CP_END_DATE , FLOAT(1)) ) GROUP BY (DATE = D1, PERIOD = VALUE (PERIOD('RAF',D1,T1),' '), SYSID = OMVS_SMFID, ACCOUNT = VALUE( LOOKUP ACCTID IN &PREFIX.RAFAOMVS WHERE OMVS_RACFGRP LIKE RACFGRP AND OMVS_USERID LIKE USERID AND OMVS_JOBNAME LIKE JOBNAME AND OMVS_SMFID LIKE SYSID AND OMVS_PGMRNAME LIKE PGMRNAME AND OMVS_JOBCLASS LIKE JOBCLASS AND OMVS_USERFLD LIKE USERFLD, 'NULLACCT'), JOBNAME = OMVS_JOBNAME, FUNCTION = OMVS_FUNCTION, JOBCLASS = OMVS_JOBCLASS)
DEFINE UPDATE RAFOMVS_USSM VERSION 'IBM.150' FROM &PREFIX.RAFOMVS WHERE ACCOUNT <> 'NULLACCT' TO &PREFIX.USE_SUMMARY_D LET (CPUN = VALUE(LOOKUP CP_POWER IN &PREFIX.CPU_NORMAL_DATA WHERE SYSID = CP_SMF_ID AND DATE >= CP_START_DATE AND DATE <= CP_END_DATE , FLOAT(1)), SUBSYS = 'OMVS', AC1 = ACCOUNT, AC2 = ' ', AC3 = ' ', AC4 = ' ', AC5 = ' ') GROUP BY (SMF_ID = SYSID, DATE = DATE, BILLING_PERIOD = VALUE(LOOKUP BP_ID IN &PREFIX.BILLING_PERIOD WHERE DATE >= BP_EFF_DT AND DATE <= BP_END_DT, 'NULLBP'), PERIOD_NAME = PERIOD, SUBSYSTEM_ID = SUBSYS, CUSTOMER_ID = VALUE(LOOKUP AC_CUST_ID IN &PREFIX.ACCOUNT WHERE AC1 LIKE AC_AC1 AND AC2 LIKE AC_AC2 AND AC3 LIKE AC_AC3 AND AC4 LIKE AC_AC4 AND AC5 LIKE AC_AC5, 'NULLCUST'), ACCOUNT_ID1 = AC1, ACCOUNT_ID2 = AC2, ACCOUNT_ID3 = AC3, ACCOUNT_ID4 = AC4, ACCOUNT_ID5 = AC5, OV_FLAG = VALUE(LOOKUP OV_FLAG IN
&PREFIX.ACCOUNT WHERE AC1 LIKE AC_AC1 AND AC2 LIKE AC_AC2 AND AC3 LIKE AC_AC3 AND AC4 LIKE AC_AC4 AND AC5 LIKE AC_AC5, 'NULLFLAG'))
SET (CPU_SECONDS = SUM(CPUSEC),
CPU_NMSECS = SUM(CPUSEC/CPUN)
);
Appendix B.
Figure B-1 and Figure B-2 on page 263 show all of the lookup tables defined by the basic, unmodified Resource Accounting Feature.
Select the first table, in this case the ACCOUNT table. You must update the table with values that represent your installation. This can be done using the ISPF interface directly from the EDIT drop-down list of the RAF feature or, if you have DB2 experience, with the DB2 utilities.
Figure B-3 shows the ACCOUNT table. The entry 123456 in field AC_AC1 is associated with ABCD, the value of the AC_CUST_ID column. This means account 123456 belongs to customer ABCD. Each value in the AC_ACx fields can be affiliated with a customer ID. The last column in Figure B-3 is the OV_FLAG. The value Y means that this is a prorate account; any other character in this field, or even a blank, means that this is an account that is fully and directly charged to the corresponding customer. The AC_NAME field can hold customer names of up to 30 characters. Figure B-4 on page 265 shows the normal collection data flow through the RAF feature. Remember that the record definitions are not part of the Resource Accounting Feature; they belong to the corresponding base feature. For example, all the log definitions, record definitions, and record and log procedures for IMS are part of the base IMS feature.
Figure B-4 RAF Feature Collect with ACCOUNT and CUSTOMER lookup
To account for data produced and collected in an IT center, you must associate the measured data with the user of the specific resource. This can be a single person, a department, or even a company. The RAFAxxx lookup tables (where xxx is the subsystem name, for example, BATCH, IMS, or STS) are used for this account translation. During collection, the ACCOUNT field in the RAFxxx table is updated by copying the data from the ACCTID field in RAFAxxx to the ACCOUNT field in the RAFxxx tables. In the update definition, fields from your input data are compared to the key fields in the RAFAxxx table. If they match, the value of the ACCTID field in RAFAxxx is copied to ACCOUNT in RAFxxx. In the next step, this ACCOUNT field is copied from the RAFxxx table to the USE_SUMMARY_D table, and also to the AC1 column in the BILLED_DATA table if OV_FLAG is not Y.
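The account translation just described can be sketched as follows. This is an illustrative sketch, not the shipped update definition: each lookup row holds SQL LIKE patterns, and the first row whose patterns all match the job's attributes supplies the ACCTID. The rule and job values below are made up.

```python
# Hedged sketch of the RAFAxxx account translation (sample data invented).
import re

def like_match(pattern, value):
    """Minimal SQL LIKE semantics: % matches any run, _ matches one char."""
    regex = ''.join('.*' if c == '%' else '.' if c == '_' else re.escape(c)
                    for c in pattern)
    return re.fullmatch(regex, value) is not None

KEYS = ('RACFGRP', 'USERID', 'JOBNAME', 'SYSID',
        'PGMRNAME', 'JOBCLASS', 'USERFLD')

def translate_account(job, lookup_rows, default='NULLACCT'):
    for row in lookup_rows:
        if all(like_match(row[k], job[k]) for k in KEYS):
            return row['ACCTID']
    return default   # unmatched jobs end up in the NULLACCT account

rules = [{'RACFGRP': '%', 'USERID': 'JFI%', 'JOBNAME': '%', 'SYSID': '%',
          'PGMRNAME': '%', 'JOBCLASS': '%', 'USERFLD': '%',
          'ACCTID': '123456'}]
job = {'RACFGRP': 'SYS1', 'USERID': 'JFISCHER', 'JOBNAME': 'PAYROLL',
       'SYSID': 'MVSA', 'PGMRNAME': 'FISCHER', 'JOBCLASS': 'A',
       'USERFLD': 'P'}
print(translate_account(job, rules))   # 123456
```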
In the standard, unmodified RAF feature, only the ACCOUNT_ID1 column in USE_SUMMARY_D is used. The other columns, ACCOUNT_ID2 through ACCOUNT_ID5, are free to be modified and are filled with blanks by default. During the update from RAFxxx to USE_SUMMARY_D (by looking up the ACCOUNT table), the AC_CUST_ID field is copied to CUSTOMER_ID in USE_SUMMARY_D. Tables USE_SUMMARY_D2 and USE_SUMMARY_D3 are updated with all the data from USE_SUMMARY_D. Data rows containing the NULLACCT account are not moved into the USE_SUMMARY_D table. This data must be corrected, and the USE_SUMMARY_D table must be updated in a separate procedure (not provided by the Accounting Feature).
Let us now discuss prorating with the RAF feature (charging customers for only part of a specific usage). The ACCT_PRORATE table (Figure B-6 on page 267) is used to define the prorate accounts and the percentage they are charged. This means that you can specify ACCOUNT_IDs whose usage is distributed to different CUSTOMER_IDs. The distribution can be a fixed percentage or a percentage calculated from the measured CPU usage of each chargeable customer. If the value in
the PRORATE_TYPE field is FIXED or empty, a value must be provided in the PERCENTAGE column of the ACCT_PRORATE table. If the PRORATE_TYPE is CPU, the percentage is calculated from the measured CPU usage of the customers charged by prorating.
Prorating is done by a job that must be run once per billing period. Figure B-5 on page 266 shows how prorating works. During normal collection, all rows with an OV_FLAG of Y are copied from USE_SUMMARY_D to USE_SUMMARY_D4 and not into the BILLED_DATA table; data excluding the rows with Y in the OV_FLAG is summarized in table BILLED_DATA. All data, regardless of the OV_FLAG, is summarized using the keys in tables USE_SUMMARY_D2 and USE_SUMMARY_D3. Using the prorate job DRLJPROR, as shown in Figure B-7 on page 268, the data from USE_SUMMARY_D4 and ACCT_PRORATEV2 is modified and copied to USE_SUMMARY_D. During this process, the value for CALC_PCT is calculated as defined in an update definition from ACCT_PRORATEV2 to USE_SUMMARY_D, as shown in Figure B-10 on page 270 and Figure B-11 on page 271. The calculation of CALC_PCT is done in different ways, depending on the value in the column PRORATE_TYPE (FIXED, blank, or CPU). If the prorate type is FIXED or blank, the calculation for CALC_PCT is PERCENTAGE/100.
If the prorate type value is CPU, then the following formula is used: USE_SUMMARY_D3(CPU_SECONDS) / USE_SUMMARY_D2(CPU_SECONDS). The table USE_SUMMARY_D2 contains the CPU_SECONDS summarized by billing unit, and USE_SUMMARY_D3 contains the data summarized by customer and billing unit. After this process finishes, the prorated data is in USE_SUMMARY_D with its OV_FLAG changed to the value P. The last step in this process is to summarize this data with an OV_FLAG value of P into the BILLED_DATA table.
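The CALC_PCT rules above can be sketched as follows. This is a hedged illustration, not the shipped update definition: the function and parameter names are invented; only the branching between FIXED/blank and CPU prorate types follows the text.

```python
# Illustrative sketch of the CALC_PCT calculation described above.
def calc_pct(prorate_type, percentage=None,
             customer_cpu_seconds=None, unit_cpu_seconds=None):
    if prorate_type in ('FIXED', '', None):
        # FIXED or blank: use the PERCENTAGE column of ACCT_PRORATE
        return percentage / 100.0
    if prorate_type == 'CPU':
        # CPU: USE_SUMMARY_D3 (by customer and billing unit) divided by
        # USE_SUMMARY_D2 (by billing unit)
        return customer_cpu_seconds / unit_cpu_seconds
    raise ValueError('unknown prorate type: %r' % prorate_type)

print(calc_pct('FIXED', percentage=25))                # 0.25
print(calc_pct('CPU', customer_cpu_seconds=300.0,
               unit_cpu_seconds=1200.0))               # 0.25
```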
Another table, called CUSTOMER and shown in Figure B-8 on page 269, is by default not referenced in the RAF feature. This table can be used to produce more meaningful reports on the BILLED_DATA table, as shown in Figure B-9 on page 269. The columns in this lookup table hold information about the customer, keyed by CU_ID.
The BILLED_DATA table holds our IT center data, summarized by subsystem, billing period, customer, and account. This table is used for billing and also to hand the summarized data over to the Accounting Feature for the Workstation for further editing.
Now we will discuss pricing in the RAF feature. A PRICE_LIST lookup table is available in the Resource Accounting Feature, where you can specify prices for all the data you would like to account. This table is referenced during the update from USE_SUMMARY_D to BILLED_DATA. In BILLED_DATA, there are three fields for each item you are accounting. These columns end with _V for volume, _P for price, and _A for amount (see Figure B-12 on page 272). During the update from USE_SUMMARY_D to BILLED_DATA, the amount is calculated from the volume value and the corresponding price, which is looked up in the PRICE_LIST table (see Figure B-10 on page 270, Figure B-11, and Figure B-12 on page 272 for more details). Never change the date after the recalculation has already been done. If the recalculation runs a second time, all columns with the same group in the from and to tables are replaced in the to table, and columns with different groups are inserted. As you can see, all columns are defined twice, once with a 2 appended and once without. This was implemented to allow you to work with two currencies, if required.
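The volume/price/amount relationship above can be sketched as follows. This is a hypothetical illustration: the column names are simplified and the price is made up; only the rule that the _A amount is the _V volume multiplied by the _P price looked up in PRICE_LIST comes from the text.

```python
# Hedged sketch of the USE_SUMMARY_D -> BILLED_DATA amount calculation.
def price_billed_row(volume, price_list, resource):
    """Return the _V, _P, and _A values for one billed resource."""
    price = price_list[resource]
    return {'V': volume, 'P': price, 'A': volume * price}

price_list = {'CPU_SECONDS': 0.25}          # price per CPU second (made up)
row = price_billed_row(1200.0, price_list, 'CPU_SECONDS')
print(row['A'])   # 300.0
```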
Let us now discuss CREDIT_DEBIT (recurrent charges, occasional (one-time) charges, and corrections).
The credit/debit function handles occasional or one-time charges, which a customer has to pay only once, and recurrent charges, which are fixed charges billed in every billing period. The values for both recurrent and occasional charges are defined in the lookup table CREDIT_DEBIT (see Figure B-13 on page 272). All recurrent charges must have a value of R in the Type column. Occasional charges should have a value of O in the Type field; a blank is handled like the value O, as an occasional charge. This is very important, because this flag is used to set the date and billing period in different ways.
The credit/debit job that recalculates the BILLED_DATA table from the CREDIT_DEBIT table could look like Figure B-14. In the update definition from the CREDIT_DEBIT table to the BILLED_DATA table, the date and billing period are set as follows: If the TYPE in the CREDIT_DEBIT table is equal to R, the billing period is set to the value of the SET parameter in the recalculate job (Figure B-14), and the DATE field is set to the current date.
If the TYPE field in the CREDIT_DEBIT table is not equal to R, the billing period is looked up from the BILLING_PERIOD table and the date is set to the value of the DATE field in the CREDIT_DEBIT table (see Figure B-15 and Figure B-16 on page 275). This recalculate job must be run at the end of a billing period so that the correct date is set and the charge lands in the current billing period.
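The date and billing-period rules for credit/debit entries can be sketched as follows. This is a hedged illustration: the table layout and period names are invented; only the branching on R versus other TYPE values follows the text.

```python
# Illustrative sketch of the CREDIT_DEBIT date/billing-period assignment.
import datetime

def assign_billing(charge_type, charge_date, run_billing_period,
                   billing_periods, today):
    """R (recurrent): billing period comes from the recalculate job's SET
    parameter and the date is the current date. O or blank (occasional):
    keep the charge's own date and look its billing period up."""
    if charge_type == 'R':
        return run_billing_period, today
    for bp_id, (start, end) in billing_periods.items():
        if start <= charge_date <= end:
            return bp_id, charge_date
    return 'NULLBP', charge_date       # no matching BILLING_PERIOD row

periods = {'2002-03': (datetime.date(2002, 3, 1), datetime.date(2002, 3, 31))}
run_date = datetime.date(2002, 3, 31)  # job run at the end of the period
print(assign_billing('R', datetime.date(2002, 1, 15), '2002-03',
                     periods, run_date))
print(assign_billing('O', datetime.date(2002, 3, 10), '2002-03',
                     periods, run_date))
```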
The recalculation function works as follows. There are two alternative ways of updating tables with the recalculate statement: Recalculation Recalculation takes place when the from clause is used. This means you specify a table name as the source for the recalculate command. The update definitions are used to copy and/or manipulate the data, as defined in the update definition, on the way to the target table. If you do not specify a table name before the from clause, all dependent tables are updated (see Example B-2 on page 276). If you specify tables before the from clause, only those tables are updated from the table or view specified after the from clause (see Example B-1 on page 276). All tables used for recalculation with the from clause must have an update definition, because recalculate uses only these definitions to update the to table.
Note: RECALCULATE with the FROM clause never deletes any rows in a table. Be careful when you specify WHERE for a RECALCULATE FROM. Suppose you have three tables: TABLE_H, containing hourly data
TABLE_D, containing data from TABLE_H, summarized by day TABLE_M, containing data from TABLE_D, summarized by month Suppose you execute this statement:
RECALCULATE TABLE_D, TABLE_M FROM TABLE_H WHERE DATE=2001-04-02 ;
The data for the specified day in TABLE_D is recalculated correctly. But the rows for other days in TABLE_D are treated as old rows that are left in the table, because of the rule that RECALCULATE does not delete rows. They are not used to recalculate the data for TABLE_M. As a result, the data for April 2001 in TABLE_M is derived from the data for only one day: April 2. To avoid this problem, use a separate statement for each table:
RECALCULATE TABLE_D FROM TABLE_H WHERE DATE=2001-04-02; RECALCULATE TABLE_M FROM TABLE_D WHERE MONTH=4;
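The hazard above can be simulated in miniature. This toy sketch (invented data, not TDS code) shows why summarizing the monthly table from a daily table that was rebuilt under a one-day filter yields a monthly figure covering only the filtered day, while recalculating from the full daily table gives the correct total.

```python
# Toy simulation of the RECALCULATE FROM hazard described above.
from collections import defaultdict

hourly = [('2001-04-01', 10), ('2001-04-01', 10), ('2001-04-02', 5)]

def summarize(rows):
    out = defaultdict(int)
    for key, value in rows:
        out[key] += value
    return dict(out)

# Single filtered statement: the daily input feeding the monthly table
# effectively contains only the April 2 rows.
daily_wrong = summarize(r for r in hourly if r[0] == '2001-04-02')
monthly_wrong = summarize(('2001-04', v) for _, v in daily_wrong.items())
print(monthly_wrong)   # {'2001-04': 5} -- April total is wrong

# Separate statements: the monthly table is rebuilt from the full daily data.
daily = summarize(hourly)
monthly = summarize(('2001-04', v) for _, v in daily.items())
print(monthly)         # {'2001-04': 25}
```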
Propagation of changes Using the recalculate function with the delete, insert into, or update function, you can change data in the TDS tables and propagate these changes to the dependent tables. A detailed description of the recalculation function is in the Tivoli Decision Support for OS/390 Release 5.1 Language Guide and Reference, SH19-6817. Example B-1 updates only the table use_summary_d from the view acct_proratev2.
Example: B-1 Recalculate example 1
recalculate use_summary_d from acct_proratev2
Example B-2 updates the USE_SUMMARY_D, USE_SUMMARY_D2, USE_SUMMARY_D3, USE_SUMMARY_D4, and BILLED_DATA tables:
Example: B-2 Recalculate example 2
recalculate from acct_prorateV2
Note: As you can see, there are many things to do to get an accounting or chargeback system running. You can achieve an accurate accounting and chargeback implementation only if you have an indisputable naming convention set up across your whole IT environment.
Appendix C. Importing information from RACF
USRIUSR
The members listed in this appendix are an example of a Tivoli Decision Support for OS/390 component that can be used to collect RACF information from the RACF flat file. Example C-1 lists the USRIUSR member. This member defines the RACF subcomponent and its objects.
Example: C-1 Collect RACF component - USRIUSR member
/**********************************************************************/
/* NAME : USRIUSR */
/* */
/* FUNCTION: DEFINITION OF THE RACF USERID COLLECT COMPONENT */
/* */
/* Created : 20.03.2000 IBM Ulrich Hinueber / Joachim Fischer */
/* */
/**********************************************************************/
/* RACF USER COLLECT COMPONENT */
/**********************************************************************/
SQL INSERT INTO &SYSPREFIX.DRLCOMPONENTS (COMPONENT_NAME, DESCRIPTION, USER_ID)
VALUES('USR_RACFUSER','USR - RACF USERID/DASD Collect',USER);
/**********************************************************************/ /* COMMON OBJECTS (USED BY MORE THAN ONE PART - ALWAYS INSTALLED) */ /**********************************************************************/ SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','LOG ','USR_DATA','USRLUSR'); /**********************************************************************/ /* RACF USER RECORD */ /**********************************************************************/ SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','RECORD ','RACF_USER','USRRUSR'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','RECORD ','RACF_DSN','USRRUSR'); /**********************************************************************/ /* Tablespace */ /**********************************************************************/ SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','TABSPACE','DRLSCOM','DRLSCOM'); /**********************************************************************/
/* Table and UPDATE DEFINITIONS */ /**********************************************************************/ SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','LOOKUP ','USER_GROUP','DRLTUSGR'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','UPDATE ','RACF_USER_GROUP','USRUUSR1'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','LOOKUP ','RAFATSO','DRLTTSO'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','UPDATE ','RACF_ATSO','USRUUSR2'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','LOOKUP ','RAFADB2','DRLTDB2'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','UPDATE ','RACF_ADB2','USRUUSR3'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','LOOKUP ','RAFACICS','DRLTCICS'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','UPDATE ','RACF_ACICS','USRUUSR4'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','LOOKUP ','RAFAOMVS','DRLTOMVS'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','UPDATE ','RACF_AOMVS','USRUUSR5'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','LOOKUP ','RAFADASD','DRLTDASD'); SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','UPDATE ','RACF_DSN2','USRUDSN2'); SQL INSERT INTO 
&SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','LOOKUP ','DFSMS_DS_OWNER','DRLTDFOW');
SQL INSERT INTO &SYSPREFIX.DRLCOMP_OBJECTS (COMPONENT_NAME, OBJECT_TYPE, OBJECT_NAME, MEMBER_NAME) VALUES('USR_RACFUSER','UPDATE ','RACF_DSN1','USRUDSN1');
USRLUSR
The USRLUSR member shown in Example C-2 defines the RACF log definition.
Example: C-2 Collect RACF component - USRLUSR member
/**********************************************************************/
/* NAME: USRLUSR */
/* */
/* Function: Log definition for RACF Flat File Collect */
/* Record 0200 and 0400 for update RAF Lookup table */
/* */
/* Created */
/* 08.03.2000 IBM Ulrich Hinueber / Joachim Fischer */
/**********************************************************************/
/**********************************************************************/
/* DEFINITION OF THE LOG */
/**********************************************************************/
DEFINE LOG USR_DATA VERSION 'USR.140';
COMMENT ON LOG USR_DATA IS 'USR_RACF Log Definition';
USRRUSR
The USRRUSR member shown in Example C-3 defines the record definitions for the RACF subcomponent.
Example: C-3 Collect RACF component - USRRUSR member /**********************************************************************/ /* NAME: USRRUSR */ /* */ /* FUNCTION: RECORD DEFINITION RACF RECORD 0200 - USERID */ /* RECORD DEFINITION RACF RECORD 0400 - Data set */ /* */ /* Created: */ /* 23.03.2000 IBM Ulrich Hinueber / Joachim Fischer */ /**********************************************************************/ /* User information Record definition */ /**********************************************************************/ DEFINE RECORD RACF_USER VERSION 'USR.140'
IN LOG USR_DATA
IDENTIFIED BY RECORD = '0200'
FIELDS (RECORD OFFSET 4 LENGTH 4 CHAR, /* RECORD TYPE */
USER_ID OFFSET 9 LENGTH 8 CHAR, /* USER ID */
OWNER OFFSET 29 LENGTH 8 CHAR, /* OWNER */
DEF_GROUP OFFSET 99 LENGTH 8 CHAR, /* DEFAULT GROUP */
ACCOUNTID OFFSET 128 LENGTH 8 CHAR /* ACCOUNT ID */
);
COMMENT ON RECORD RACF_USER IS 'RACF Record Type 0200 Definition'; /**********************************************************************/ /* Data Set Record Definition */ /**********************************************************************/ DEFINE RECORD RACF_DSN VERSION 'USR.140' IN LOG USR_DATA IDENTIFIED BY RECORD = '0400' FIELDS (RECORD OFFSET 4 LENGTH 4 CHAR, /* RECORD TYPE */ DSN OFFSET 9 LENGTH 44 CHAR, /* DATASET NAME */ GROUP_ID OFFSET 155 LENGTH 8 CHAR, /* GROUP_ID */ ACCOUNTID OFFSET 195 LENGTH 8 CHAR /* Installation Data */ ); COMMENT ON RECORD RACF_DSN IS 'RACF Record Type 0400 Definition';
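The fixed offsets and lengths in the record definitions above can be illustrated with a small parsing sketch. This is a hypothetical illustration only: the sample record content is made up, and the sketch simply treats the offsets as zero-based slice positions into the flat-file record.

```python
# Illustrative sketch of parsing a flat-file record using the offsets and
# lengths from the RACF_USER record definition above (sample data invented).
FIELDS = {             # name: (offset, length) as in the DEFINE RECORD
    'RECORD':    (4, 4),
    'USER_ID':   (9, 8),
    'OWNER':     (29, 8),
    'DEF_GROUP': (99, 8),
    'ACCOUNTID': (128, 8),
}

def parse_record(raw):
    return {name: raw[off:off + length].rstrip()
            for name, (off, length) in FIELDS.items()}

raw = bytearray(b' ' * 200)        # a blank 200-byte record
raw[4:8] = b'0200'                 # record type
raw[9:17] = b'JFISCHER'            # user ID
rec = parse_record(bytes(raw).decode('ascii'))
print(rec['RECORD'], rec['USER_ID'])   # 0200 JFISCHER
```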
USRUDSN1
The USRUDSN1 member shown in Example C-4 defines the update definition RACF_DSN1 for the RACF subcomponent.
Example: C-4 Collect RACF component - USRUDSN1 member /**********************************************************************/ /* NAME: USRUDSN1 */ /* */ /* FUNCTION: UPDATE DEFINITION TO COLLECT THE PROJECT AND IT'S */ /* DATASETNAMES FROM RECORD RACF_DSN TO DFSMS_DS_OWNER */ /* */ /* Created: 13.03.2000 IBM Ulrich Hinueber / Joachim Fischer */ /**********************************************************************/ /* DEFINE UPDATE FROM RECORD RACF_DSN1 */ /**********************************************************************/ DEFINE UPDATE RACF_DSN1 VERSION 'USR.140' FROM RACF_DSN
TO &PREFIX.DFSMS_DS_OWNER LET (DSNT = TRANSLATE(DSN,'_','%'), DSNT1 = TRANSLATE(DSNT,'*','**'), DSNT2 = TRANSLATE(DSNT1,'%','*'), T_DSQUAL1 = VALUE(WORD(DSNT2,1,'.'),' DSN2 = WORD(DSNT2,2,'.'), T_DSQUAL2 = CASE LENGTH(DSN2) WHEN 0 THEN '%' ELSE DSN2 END, DSN3 = WORD(DSNT2,3,'.'), T_DSQUAL3 = CASE LENGTH(DSN3) WHEN 0 THEN '%' ELSE DSN3 END, SYSID = '%') GROUP BY (MVS_SYSTEM_ID = SYSID, DS_QUALIFIER1 = VALUE(T_DSQUAL1,'%'), DS_QUALIFIER2 = T_DSQUAL2 , DS_QUALIFIER3 = T_DSQUAL3) SET (PROJECT = FIRST(ACCOUNTID));
'),
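In USRUDSN1 (and USRUDSN2 below), the LET chain first maps RACF generic characters onto SQL LIKE wildcards (reading TRANSLATE as TRANSLATE(string, to-chars, from-chars): '%' becomes '_', '*' becomes '%'), then WORD(..., n, '.') picks the n-th data set qualifier, and the CASE LENGTH(...) tests default missing qualifiers to '%'. A rough Python sketch of that derivation (the helper names are invented, and the intermediate double-asterisk step via DSNT1 is simplified into the two substitutions):

```python
def racf_to_like(dsn):
    # RACF '%' matches one character   -> SQL LIKE '_'
    # RACF '*' matches any characters  -> SQL LIKE '%'
    # Order matters: map '%' first so the new '%' from '*' is untouched.
    return dsn.replace("%", "_").replace("*", "%")

def qualifiers(dsn, n=3):
    """First n qualifiers of a data set name; missing or empty qualifiers
    default to '%', mirroring the CASE LENGTH(...) WHEN 0 THEN '%' tests."""
    parts = racf_to_like(dsn).split(".")
    return [parts[i] if i < len(parts) and parts[i] else "%" for i in range(n)]
```

For example, `qualifiers("SYS1.PROC*")` yields the qualifiers `SYS1`, `PROC%`, `%`, which is the shape of row that lands in DS_QUALIFIER1 through DS_QUALIFIER3.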
USRUDSN2
The USRUDSN2 member shown in Example C-5 defines the update definition RACF_DSN2 for the RACF subcomponent.
Example: C-5 Collect RACF component - USRUDSN2 member

/**********************************************************************/
/* NAME: USRUDSN2                                                     */
/*                                                                    */
/* FUNCTION: UPDATE DEFINITION TO COLLECT THE PROJECT AND ITS         */
/*           DATASET NAMES FROM RECORD RACF_DSN TO RAFADASD           */
/*                                                                    */
/* Created:                                                           */
/* 13.03.2000 IBM Ulrich Hinueber / Joachim Fischer                   */
/**********************************************************************/
/* DEFINE UPDATE FROM RECORD RACF_DSN                                 */
/**********************************************************************/
DEFINE UPDATE RACF_DSN2
  VERSION 'USR.140'
  FROM RACF_DSN
  TO &PREFIX.RAFADASD
  LET (DSNT      = TRANSLATE(DSN,'_','%'),
       DSNT1     = TRANSLATE(DSNT,'*','**'),
       DSNT2     = TRANSLATE(DSNT1,'%','*'),
       T_DSQUAL1 = VALUE(WORD(DSNT2,1,'.'),' '),
       DSN2      = WORD(DSNT2,2,'.'),
       T_DSQUAL2 = CASE LENGTH(DSN2) WHEN 0 THEN '%' ELSE DSN2 END,
       DSN3      = WORD(DSNT2,3,'.'),
       T_DSQUAL3 = CASE LENGTH(DSN3) WHEN 0 THEN '%' ELSE DSN3 END,
       T_DSQUAL4 = '%',
       SYSID     = '%')
  GROUP BY (SYSID    = SYSID,
            DSNQUAL1 = VALUE(T_DSQUAL1,'%'),
            DSNQUAL2 = T_DSQUAL2,
            DSNQUAL3 = T_DSQUAL3,
            DSNQUALL = T_DSQUAL4)
  SET (ACCTID = FIRST(ACCOUNTID));
USRUUSR1
The USRUUSR1 member shown in Example C-6 defines the update definition RACF_USER_GROUP for the RACF subcomponent.
Example: C-6 Collect RACF component - USRUUSR1 member

/**********************************************************************/
/* NAME: USRUUSR1                                                     */
/*                                                                    */
/* FUNCTION: UPDATE DEFINITION FOR COLLECT 0200 RECORD FROM           */
/*           RACF FLAT FILE AND FILL THE TABLE DRL.USER_GROUP         */
/*                                                                    */
/* CREATE:                                                            */
/* 20.03.2000 IBM ULRICH HINUEBER / JOACHIM FISCHER                   */
/**********************************************************************/
/* DEFINE UPDATE FROM RECORD RACF_USER                                */
/**********************************************************************/
DEFINE UPDATE RACF_USER_GROUP
  VERSION 'USR.140'
  FROM RACF_USER
  TO &PREFIX.USER_GROUP
  LET (SYSID    = '%',
       SUBSYSID = '%')
  GROUP BY (SYSTEM_ID    = SYSID,
            SUBSYSTEM_ID = SUBSYSID,
            USER_ID      = USER_ID)
  SET (GROUP_NAME = FIRST(ACCOUNTID),
       DIVISION   = FIRST(DEF_GROUP));
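All of these update definitions follow the same collect pattern: incoming records are grouped by the key columns, and each SET column = FIRST(source) keeps the value from the first record that falls into a group. A small Python sketch of those semantics, using invented sample records shaped like the RACF_USER fields:

```python
def collect(records, group_by, set_first):
    """Group records by `group_by` columns; for each SET column keep the
    FIRST() value, i.e. the value from the first record seen per group."""
    table = {}
    for rec in records:
        key = tuple(rec[c] for c in group_by)
        if key not in table:  # FIRST(): later records do not overwrite
            table[key] = {col: rec[src] for col, src in set_first.items()}
    return table

# Equivalent of RACF_USER_GROUP: key on USER_ID,
# GROUP_NAME = FIRST(ACCOUNTID), DIVISION = FIRST(DEF_GROUP).
records = [
    {"USER_ID": "USER01", "ACCOUNTID": "ACCT0042", "DEF_GROUP": "DEPT42"},
    {"USER_ID": "USER01", "ACCOUNTID": "ACCT0099", "DEF_GROUP": "DEPT99"},
    {"USER_ID": "USER02", "ACCOUNTID": "ACCT0007", "DEF_GROUP": "DEPT07"},
]
user_group = collect(records, ["USER_ID"],
                     {"GROUP_NAME": "ACCOUNTID", "DIVISION": "DEF_GROUP"})
```

The second USER01 record is ignored, so each user keeps the account ID and default group from its first record, which is exactly the one-row-per-user lookup table the chargeback reports need.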
USRUUSR2
The USRUUSR2 member shown in Example C-7 defines the update definition RACF_ATSO for the RACF subcomponent.
Example: C-7 Collect RACF component - USRUUSR2 member

/**********************************************************************/
/* NAME: USRUUSR2                                                     */
/*                                                                    */
/* FUNCTION: UPDATE DEFINITION FOR COLLECT 0200 RECORD FROM           */
/*           RACF FLAT FILE AND FILL THE TABLE DRL.RAFATSO            */
/*                                                                    */
/* CREATE:                                                            */
/* 20.03.2000 IBM ULRICH HINUEBER / JOACHIM FISCHER                   */
/**********************************************************************/
/* DEFINE UPDATE FROM RECORD RACF_USER                                */
/**********************************************************************/
DEFINE UPDATE RACF_ATSO
  VERSION 'USR.140'
  FROM RACF_USER
  TO &PREFIX.RAFATSO
  LET (ACCT_1 = '%',
       ACCT_2 = '%',
       ACCT_3 = '%',
       ACCT_4 = '%',
       ACCT_5 = '%',
       RACFG  = '%',
       TERMI  = '%',
       SYS_ID = '%',
       USERF  = '%')
  GROUP BY (ACCT1   = ACCT_1,
            ACCT2   = ACCT_2,
            ACCT3   = ACCT_3,
            ACCT4   = ACCT_4,
            ACCT5   = ACCT_5,
            RACFGRP = RACFG,
            USERID  = USER_ID,
            TERMID  = TERMI,
            SYSID   = SYS_ID,
            USERFLD = USERF)
  SET (ACCTID = FIRST(ACCOUNTID));
USRUUSR3
The USRUUSR3 member shown in Example C-8 defines the update definition RACF_ADB2 for the RACF subcomponent.
Example: C-8 Collect RACF component - USRUUSR3 member

/**********************************************************************/
/* NAME: USRUUSR3                                                     */
/*                                                                    */
/* FUNCTION: UPDATE DEFINITION FOR COLLECT 0200 RECORD FROM           */
/*           RACF FLAT FILE AND FILL THE TABLE DRL.RAFADB2            */
/*                                                                    */
/* CREATE:                                                            */
/* 20.03.2000 IBM ULRICH HINUEBER / JOACHIM FISCHER                   */
/**********************************************************************/
/* DEFINE UPDATE FROM RECORD RACF_USER                                */
/**********************************************************************/
DEFINE UPDATE RACF_ADB2
  VERSION 'USR.140'
  FROM RACF_USER
  TO &PREFIX.RAFADB2
  LET (CORR   = '%',
       CONN   = '%',
       SUBSYS = '%',
       SYS_ID = '%')
  GROUP BY (CORRELAT = CORR,
            CONNTYPE = CONN,
            AUTHORIZ = USER_ID,
            SUBSYSID = SUBSYS,
            SYSID    = SYS_ID)
  SET (ACCTID = FIRST(ACCOUNTID));
USRUUSR4
The USRUUSR4 member shown in Example C-9 defines the update definition RACF_ACICS for the RACF subcomponent.
Example: C-9 Collect RACF component - USRUUSR4 member

/**********************************************************************/
/* NAME: USRUUSR4                                                     */
/*                                                                    */
/* FUNCTION: UPDATE DEFINITION FOR COLLECT 0200 RECORD FROM           */
/*           RACF FLAT FILE AND FILL THE TABLE DRL.RAFACICS           */
/*                                                                    */
/* CREATE:                                                            */
/* 20.03.2000 IBM ULRICH HINUEBER / JOACHIM FISCHER                   */
/**********************************************************************/
/* DEFINE UPDATE FROM RECORD RACF_USER                                */
/**********************************************************************/
DEFINE UPDATE RACF_ACICS
  VERSION 'USR.140'
  FROM RACF_USER
  TO &PREFIX.RAFACICS
  LET (TERM   = '%',
       OPER   = '%',
       APPL   = '%',
       SYS_ID = '%',
       TRANT  = '%',
       TRANS  = '%',
       USERF  = '%')
  GROUP BY (TERMID   = TERM,
            USERID   = USER_ID,
            OPERID   = OPER,
            APPLID   = APPL,
            SYSID    = SYS_ID,
            TRANTYPE = TRANT,
            TRANSID  = TRANS,
            USERFLD  = USERF)
  SET (ACCTID = FIRST(ACCOUNTID));
USRUUSR5
The USRUUSR5 member shown in Example C-10 defines the update definition RACF_AOMVS for the RACF subcomponent.
Example: C-10 Collect RACF component - USRUUSR5 member

/**********************************************************************/
/* NAME: USRUUSR5                                                     */
/*                                                                    */
/* FUNCTION: UPDATE DEFINITION FOR COLLECT 0200 RECORD FROM           */
/*           RACF FLAT FILE AND FILL THE TABLE DRL.RAFAOMVS           */
/*                                                                    */
/* CREATE:                                                            */
/* 20.03.2000 IBM ULRICH HINUEBER / JOACHIM FISCHER                   */
/**********************************************************************/
/* DEFINE UPDATE FROM RECORD RACF_USER                                */
/**********************************************************************/
DEFINE UPDATE RACF_AOMVS
  VERSION 'USR.140'
  FROM RACF_USER
  TO &PREFIX.RAFAOMVS
  LET (RACFG  = '%',
       JOBN   = '%',
       SYS_ID = '%',
       PGMR   = '%',
       JOBCL  = '%',
       USERF  = '%')
  GROUP BY (RACFGRP  = RACFG,
            USERID   = USER_ID,
            JOBNAME  = JOBN,
            SYSID    = SYS_ID,
            PGMRNAME = PGMR,
            JOBCLASS = JOBCL,
            USERFLD  = USERF)
  SET (ACCTID = FIRST(ACCOUNTID));
Appendix D. Additional material
This redbook refers to additional material that can be downloaded from the Internet as described below.
Select the Additional materials and open the directory that corresponds with the redbook form number, SG246044.
RACFSUBC.zip

Description:
The source files that make up the Resource Accounting Feature subcomponent for UNIX System Services, described in Appendix A, "RAF subcomponent for UNIX System Services data" on page 245.
The source files that make up the RACF subcomponent, described in Appendix C, "Importing information from RACF" on page 277.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see "How to get IBM Redbooks" on page 292.
SLR to Tivoli Performance Reporter for OS/390 Migration Cookbook, SG24-5128
Tivoli Decision Support for OS/390 Viewer Guide, SG24-6011
Other resources
These publications are also relevant as further information sources:
Accounting Feature for the Workstation Release Notes (only available with product)
OS/390 Version 2 Release 10.0 MVS System Management Facilities, GC28-1783
Tivoli Decision Support for OS/390 Release 5.1 Accounting Feature for the Host, SH19-4495
Tivoli Decision Support for OS/390 Release 5.1 Accounting Feature for the Workstation, SH19-4516
Tivoli Decision Support for OS/390 Release 5.1 Administration Guide, SH19-6816
Tivoli Decision Support for OS/390 Release 5.1 Language Guide and Reference, SH19-6817
Tivoli Decision Support for OS/390 Release 5.1 System Performance Feature Reference Volume 1, SH19-6819
Tivoli Decision Support for OS/390 Release 5.1 System Performance Feature Reference Volume 2, SH19-4494
Tivoli Decision Support for OS/390 Release 5.1 User's Guide for the Viewer, SH19-4517
Tivoli Decision Support Release 5.1 IMS Performance Feature Guide and Reference, SH19-6825
Tivoli Performance Reporter for OS/390 System Performance Feature Guide, SH19-6818
Page listing all Tivoli product manuals, including Tivoli Decision Support for OS/390:
http://www.tivoli.com/support/public/Prodman/public_manuals/td/TD_PROD_LIST.html
Also download additional materials (code samples or diskette/CD-ROM images) from this Redbooks site. Redpieces are Redbooks in progress; not all Redbooks become redpieces and sometimes just a few chapters will be published this way. The intent is to get the information out much quicker than the formal publishing process allows.
Special notices
References in this publication to IBM products, programs or services do not imply that IBM intends to make these available in all countries in which IBM operates. Any reference to an IBM product, program, or service is not intended to state or imply that only IBM's product, program, or service may be used. Any functionally equivalent program that does not infringe any of IBM's intellectual property rights may be used instead of the IBM product, program or service.

Information in this book was developed in conjunction with use of the equipment specified, and is limited in application to those specific hardware and software products and levels.

IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to the IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact IBM Corporation, Dept. 600A, Mail Drop 1329, Somers, NY 10589 USA. Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.

The information contained in this document has not been submitted to any formal IBM test and is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere.
Customers attempting to adapt these techniques to their own environments do so at their own risk. Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites.
The following terms are trademarks of other companies:

Tivoli, Manage. Anything. Anywhere., The Power To Manage., Anything. Anywhere., TME, NetView, Cross-Site, Tivoli Ready, Tivoli Certified, Planet Tivoli, and Tivoli Enterprise are trademarks or registered trademarks of Tivoli Systems Inc., an IBM company, in the United States, other countries, or both. In Denmark, Tivoli is a trademark licensed from Københavns Sommer-Tivoli A/S.

C-bus is a trademark of Corollary, Inc. in the United States and/or other countries.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and/or other countries.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States and/or other countries.

PC Direct is a trademark of Ziff Communications Company in the United States and/or other countries and is used by IBM Corporation under license.

ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United States and/or other countries.

UNIX is a registered trademark in the United States and other countries licensed exclusively through The Open Group.

SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC.

Other company, product, and service names may be trademarks or service marks of others.
Glossary
Accounting Workstation Option (AWO). Another name for the Tivoli Decision Support for OS/390 Accounting Feature for the Workstation.

Accounting Console. The graphical user interface of the Accounting Feature for the Workstation.

ASCII (American National Standard Code for Information Interchange). The standard code, using a coded character set consisting of 7-bit coded characters (8 bits, including parity check), that is used for information interchange among data processing systems, data communication systems, and associated equipment. The ASCII set consists of control characters and graphic characters.

Command Line Interface (CLI). A type of computer interface in which the input command is a string of text characters. Contrast with Graphical User Interface.

Central Processing Unit (CPU). The part of a computer that includes the circuits that control the interpretation and execution of instructions. A CPU is the circuitry and storage that executes instructions. Traditionally, the complete processing unit was often regarded as the CPU, whereas the CPU today is often a microchip. In either case, the centrality of a processor or processing unit depends on the configuration of the system or network in which it is used.

Direct Access Storage Device (DASD). A mass storage medium on which a computer stores data. Contrast with random access memory.

DB2. An IBM relational database management system that is available as a licensed program on several operating systems. Programmers and users of DB2 can create, access, modify, and delete data in relational tables using a variety of interfaces.

EBCDIC (Extended Binary-Coded Decimal Interchange Code). A coded character set of 256 8-bit characters.

Extended Architecture (XA). An extension to System/370 architecture that takes advantage of continuing high performance enhancements to computer system hardware.
Graphical User Interface (GUI). A type of computer interface consisting of a visual metaphor of a real-world scene, often of a desktop. Within that scene are icons, representing actual objects, that the user can access and manipulate with a pointing device. Contrast with Command Line Interface.

Job Entry Subsystem (JES). An IBM licensed program that receives jobs into the system and processes all output data produced by the jobs.

Megabyte (MB). (1) For processor storage, real and virtual storage, and channel volume, 2^20 or 1,048,576 bytes. (2) For disk storage capacity and communications volume, 1,000,000 bytes.

Megahertz (MHz). Frequency in million cycles per second.

Monitor. (1) A device that observes and records selected activities within a data processing system for analysis. Possible uses are to indicate significant departure from the norm or to determine levels of utilization of particular functional units. (2) Software or hardware that observes, supervises, controls, or verifies operations of a system. (3) The function required to initiate the transmission of a token on the ring and to provide soft-error recovery in case of lost tokens, circulating frames, or other difficulties. The capability is present in all ring stations. (4) Software that monitors specific applications or the systems on which the applications rely. Monitors typically monitor information, such as available disk space or application errors, and compares the information to defined thresholds. When thresholds are exceeded, either system or network administrators can be notified or an automated response can be performed. (5) In the NetView Graphic Monitor Facility, to open a view that can receive status changes from Tivoli NetView for OS/390. Problem determination and correction can be performed directly from the view.

MVS. Multiple Virtual Storage. Implies MVS/390, MVS/XA, MVS/ESA, and the MVS element of the OS/390 operating system.

NLS. National Language Support.

Open Database Connectivity (ODBC). A standard application programming interface (API) for accessing data in both relational and non-relational database management systems. Using this API, database applications can access data stored in database management systems on a variety of computers even if each database management system uses a different data storage format and programming interface. ODBC is based on the call level interface (CLI) specification of the X/Open SQL Access Group and was developed by Digital Equipment Corporation (DEC), Lotus, Microsoft, and Sybase.

OS/390. Pertaining to the IBM operating system that includes and integrates functions previously provided by many IBM software products (including the MVS operating system) and (a) is an open, secure operating system for the IBM S/390 family of enterprise servers, (b) complies with industry standards, (c) is Year 2000 ready and enabled for network computing and e-business, and (d) supports technology advances in networking server capability, parallel processing, and object-oriented programming.
Resource Access Control Facility (RACF). An IBM licensed program that provides for access control by identifying users to the system; verifying users of the system; authorizing access to protected resources; logging detected, unauthorized attempts to enter the system; and logging detected accesses to protected resources. RACF is included in OS/390 Security Server and is also available as a separate program for the MVS and VM environments.

Transmission Control Protocol (TCP). A communications protocol used in the Internet and in any network that follows the Internet Engineering Task Force (IETF) standards for internetwork protocol. TCP provides a reliable host-to-host protocol between hosts in packet-switched communications networks and in interconnected systems of such networks. It uses the Internet Protocol (IP) as the underlying protocol.

Transmission Control Protocol/Internet Protocol (TCP/IP). A set of communications protocols that support peer-to-peer connectivity functions for both local and wide area networks.

UNIX. An operating system developed by Bell Laboratories that features multiprogramming in a multiuser environment. The UNIX operating system was originally developed for use on minicomputers but has been adapted for mainframes and microcomputers. The AIX operating system is IBM's implementation of the UNIX operating system.

Volume Table of Contents (VTOC). (1) A table on a direct access volume that describes each data set on the volume. (2) An area on a disk or diskette that describes the location, size, and other characteristics of each file and library on the disk or diskette.
Abbreviations and acronyms

AWO     Accounting Workstation Option
CICS    Customer Information Control System
CLI     Command Line Interface
CPU     Central Processing Unit
CSV     Comma Separated Value
DASD    Direct Access Storage Device
DB2     Database 2
DFSMS   Data Facility Storage Management Subsystem
DMS
DRDA    Distributed Relational Database Architecture
FA
FTP     File Transfer Protocol
GDG     Generation Data Group
GUI     Graphical User Interface
IBM     International Business Machines Corporation
IMS     Information Management System
IP      Internet Protocol
IT      Information Technology
ITSO    International Technical Support Organization
JDBC    Java Database Connectivity
JES     Job Entry Subsystem
MB      Megabyte
MHz     Megahertz
SIO     Start Input/Output
SLA     Service Level Agreement
SMF     Systems Management Facilities
SPF     Structured Programming Facility
SQL     Structured Query Language
STC     Started Tasks
TCP/IP  Transmission Control Protocol/Internet Protocol
TSO     Time Sharing Option
VBS     Variable Blocked Sequential
VTOC    Volume Table of Contents
XA      Extended Architecture
Index
Symbols
.bmp 219 .exd file 84 .imd file 149, 170, 176 .imd files 157 .ini file 82 .mdb file 81 .txt files 82 .xls files 82 .zip 215 _A segment 178 _P segment 178 _V segment 178 apply allocation function 205 apply direct charge 207 apply factor 204 apply factor table 231 apply function 204 apply lookup table 204 apply rate function 197 apply rate table 207 apply rates function 208 archive input table 164 automatic collect 46 AWO 299 AWO directory structure 244 AWO.exe file 63 awomaster 182 awouser 184 awouser1 193 awouser2 186
A
A_TIMESTAMP 58 abbreviations 57 ACCOUNT 24 account ID 14 account proration table 24 account table 24 accounting 1, 180, 210, 261 accounting component 22 accounting console 62 , 74, 78, 168, 188 accounting data tables 19 accounting period 220, 226 accounting rules 13 ACCT parameter 26 ACCT_PRORATE 24 ACCT1 27 ACCT2 27 ACCT3 27 ACCT4 27 ACCT5 27 active ledger 195, 200, 206, 208, 221, 229 activity_log 187 add/remove programs 236 administration dialog 247 allocation table 146, 205, 232 analysis function 211 APAR PQ35089 35 append data 164, 232 application accounting 5
batch job 3, 13, 59 batch resources usage 26 batch scripts directory 83 BILLED_DATA 19, 82, 156, 170, 177, 180, 194, 198, 229, 265 billeddat.imd 157 billeddat.txt 157 billing data 149 billing period 211, 221, 274 billing period table 24 BILLING_PERIOD 24 binary mode 145 Browse button 65 budget actual 210 budget ledger 208 budget table 146, 209 budgeting 14 business costs 6 business goals 4 business planning 6 business processes 2 business transaction 5
C
calendar 70, 226 capacity planning 7 center costs 3 charge segment 178 chargeback 1, 4, 180, 210, 261 chargeback cycle 13 chargeback methods 4 chargeback model 7 charts directory 83 Chinese language 218 CICS 31, 40 CICS transaction ID 34 CICS transaction resource usage 33 client 181 comma separated value 82 commands 84 complete window 69 component library 22 components 20 condition statement 54 configuration file 82 confirm file overwrite 218 control panel 69 control tables 23 copying files 67 corporate profit 7 cost center 7, 14 cost of services 4 cost recovery 6 CPU 8 CPU normalization table 24 CPU rate 10 CPU seconds 156 CPU time 3, 15 CPU_NORMAL_DATA 24 CPU-normalization 23 create allocation table 232 create database 182 create import definition 199 create index 230 create interim 204 create local database 189 create master database 188 create new database 189 create new user 184 create rate table 196 credit/debit table 24 CREDIT_DEBIT 24
Criteria= option 154 currency format 72 currency symbol 73 current billing period 200, 208, 220, 226 , 274 CustMapping= option 156, 174, 177 custom field 209, 221 custom field mapping 155, 228 custom fields 194 custom install button 66 custom mapping 174, 176 CUSTOMER 24 customer ID 14 customer table 24 customization 220
D
DASD 8 DASD excps 156 DASD rate 11 DASD resources usage 37 DASD resources utilization 37 DASD, HSMBKUP 23 data center 2 data explorer 167, 193, 204, 208, 226 data set name 14 data tables 22 database access 186 database administrator 184 database options 183 database owner 185 database property 182 database roles 185 database structure 78 DATASRC 176 date 70 date format 69, 227 date separator 72 date/time stamp 152 DAY_OF_WEEK 23 DB2 31, 40, 295 DB2 resource usage 31 DB2 SRB time 33 DB2 system ID 31 DCOLLECT 25 DCOLLECT process 43 delete files 244 delete input table 164 demo 67
demo database 79, 146 department budget 6 destination location window. 65 DFSMS 25 DFSMS_LAST_RUN 23 direct charges table 206 direct cost 7 directory structure 79 disk storage 63 DLRSDEFS 48 down loading 59 download 215 download boundary 59 DRLIOMVS 247 DRLJPROR 267 DRLPOMVS 259 DRLSBLDA 145, 176 drlsblda#txt 173 drlsblda.imd 149, 170 drlshlda.txt 172 DRLSOMVS 249 DRLTOMVS 250 DRLUOMVS 254 D-ROM 63 DSNTIAUL 198 dummy active ledger 223 duplicate data 229
F
factor table 204 field mapping 176 field width 222 file server 63 file transfer 145 file transfer program finalize active ledger finalize ledger 223 finance organization financial analyst 3 fixes 214 forecast 14 forecasting 211 functional accounting
181 223 13
G
generation data group 43 get command 145 global interval recording 40 grant 187 Graphical user interface 295
H
hardware costs 8 hardware requirements 62 HCT time 27 head count 13 help desk 7 hints 213 historical ledger 224 host based accounting 261 host exported data 72 host folder 81 host printer 27 HSMMIGR 23
E
read-only access 186 EBCDIC to ASCII character translation 145 economical 13 environ folder 81 examine posted data 167 EXEC statement 47 EXEC statements 31 executives 4 expense ledger 208 expense ledger table 146 expense table 209 explode folder 81 export definition 198 export definitions directory 83 export folder 81 exported data 84 expression 57 external cost 9 extraction error 219
I
I/O 8 ID_TIME 56 IDCAMS DCOLLECT 37, 43 IFASMFDP program 42 IIP time 27 implementation 62 import definition 147, 170, 171, 199, 228 import definitions 84 import definitions directory 83
import directory 82 import fails 228 import folder 81 import function 146 import operation 161, 176 import options 152 import process 147 import results 157 import source 176 import table 158, 172 import target fields 176 import wizard 147, 162, 171, 200, 231 import_billeddat 158 ImportAll= option 152, 174 importing data 146 importing duplicate data 231 IMS 31 IMS logs 41 IMS resources usage 35 income 210 index field 230 index name 230 indirect cost 7 information technology 2 initialize master database 189 insert permission 187 installation 63, 214 installing 226 InstallShield Wizard 64 interim table 204 interval recording 47 interval recording for started tasks 40 interval synchronization 40 INTVAL 40 invoking functions 168 IT accounting 14 IT budget. 7 IT center 13 IT department cost 4 IT Financial Analyst 3 IT measurement system 2 IT organization 8 IT overhead 6 IT processes 2 IT resources 7, 14 IT Services 7
J
JES 40 JES resource usage 26 JES resources usage 26 JES2 30 jet database 79, 180 job card 26 job class 26 job name 26 job name, 14
L
language 68, 217, 218 ledger table 156 175 ledger updates 175 ledger_active table 152 local database 78, 147, 161, 163, 172, 180, 189 local databases 62 local definitions 48 LOCAL.DEFS 48, 246 log collection process 41 log definition 264 log definitions 22 log procedure 264 log procedures 22 log switch 45 login properties 186 LogPath= option 154 long running tasks 40, 47, 59 lookup table 146, 200, 262 lookup tables 19, 22
M
management information 6 mapping process 177 master database 78, 161, 163, 169, 172, 180, 188, 191, 193, 211, 230 master databases 62 monthly rate 10 MVS system ID 26
N
naming conventions 14 NETV 24 network printer 27 new database 189
new local database 192 new master connection 189, 192 new target table 169 normalization 15 NPM 24 null values 156 NW_RESOURCE 24
ODBC 67, 181 ODBC connection properties 190 ODBC logon 190, 193 off-peak periods 13 OMVS 245 operating system 226 operating system customized settings 69 operating system platforms 62 operation planning and control 45 operational costs 2 operator ID 34 output 7 outsourcing 6 outtasking 6 overall costs 10 overhead 7 overhead costs 205 overhead distribution 13 overview 3
P
PARMLIB member 40 peak time 13 performance management 7 PERIOD_PLAN 23 personnel cost 9 personnel expenditures 4 populating 147 post data 167 post function 183 post option 175 post options 162, 231 posting 147 posting data 162 PR billed data 156 PR Billed Data mapping 174 preview table 151 price list table 24 price segment 178
PRICE_LIST 24 primary index 230 print 8 print rate 12 printed lines 27 printed pages 156 Printer name 26 program files 67 Program name 26 progress display 68 prorate job DRLJPROR 267 prorated usage 4 provided reports 211 PTFs 214 purge definitions 23
Q
queries directory 83
R
RACF 19 RACF group 26 RACF user ID 34 RACF user-ID 26 RADASD 37 RAFABATCH 24, 27 RAFACICS 19, 24, 34 RAFADASD 19, 24, 37 RAFADB2 19, 24, 33 RAFADDR_SMF30 47, 54 RAFADDR_SMF30_A 47, 58 RAFADDR_SMF30_E 47, 58 RAFADDRLOG 53 RAFAHSMBKU 24 RAFAHSMMIGR 24 RAFAIMS 24, 36 RAFAIMVS 19 RAFANETSESM 24 RAFAOMVS 19 RAFASTC 24, 31 RAFATSO 19, 24, 29 RAFBATCH 19, 26 RAFBD2 19 RAFCICS 19, 33 RAFDASD 19 RAFDB2 31 RAFDMVS 19 RAFIMS 19, 35
RAFJOBLOG 26 RAFOMVS table 246 RAFSESSLOG 28 RAFSTC 19, 47 RAFTSO 19, 28 RAM 63 rate active table 178 rate table 196, 207 rates_active 195 RCT time 27 recalculate command 275 record definition 264 record definitions 22 record procedure 264 Redbooks Web Site 292 Contact us xx regional options 69, 72, 227 regional setting properties window 72 regional settings 227 Release Level 214 replace data 164, 231 report function 211 reports directory 83 reports provided 211 resource accounting 5 resource accounting feature 18, 245, 261 resource metric 27 resource monitoring facility 40 resource usage 4, 14 resource usage tables 19 resources accounting 28 restore interim 204 retain input table 164 revenue ledger table 146 revenue recovery 210 RFAANETV 24 RMF 30
S
scenario 180 SCHEDULE 23 175 schema.ini 149, 172 SDRLDEFS 22 seasonal fluctuations 14 select components window 66 SERVCAT 156 server roles 185
SERVICE 156 service category 195 service category table 178 service level 226 service level agreement 2 service level management 7 service rates 10 services 7 setting date format 69 setup status window 68 short date 227 show field 54 ShowErrors= option 154 SIO count 3 SMF global recording interval 40 SMF logs 41 SMF06 59 SMF101 31 SMF110 31 SMF30 26, 28, 47, 246 SMF30 update definitions 48 SMF30ISS 47, 58 SMF30RSD 47, 59 SMF30RST 59 SMF30RST) 47 SMF30TYP 58 SMF30WID 58 SMF6 26 software costs 8 software requirements 62 source table 163 space 7 span 59 SPECIAL_DAY 23 specify language 68 SQL 296 SQL code 231, 233 SQL directory 83, 85 SQL error 232, 233 SQL server 78, 180, 182 SQL statement 32 SQL statements 85 SQL syntax error 233 SRB time 27, 33 start up procedure 47 started task 29, 48 started task resource usage 29 started tasks 47 starting accounting consle 74
storage 8 Subtype 2 47 Subtype 3 47 subtype 4 47 synchronization interval 40 SYNCVAL 40 SYS1.MANx data set 42 system folder 82 system programmer 4 system resources 3 system usage 7
V
varchar (255) 232 VTAM 30 VTOC 44
W
window files 67 Windows 2000 63, 182 Windows 95 63, 182 Windows 98 63, 182 windows explorer 243 Windows ME 63, 182 Windows NT 63, 182 workload forecast 7 write access 227
T
tape 8 tape excps 156 target folder 216 target table 152, 163, 172 TCB time 27, 33 TCP 296 TCP/IP 297 terminal ID 34 TIMESTAMP 58 tips 213 total cost 3 trace log 154 TraceOn= option 154 TSO 14, 28, 30 TSO resources usage 28 TSO users sessions 47 TYPE field 233
Y
y2k#txt 176 y2k.imd 176
Z
zip file 214
U
uninstall 235 unique index 229 unique names 14 unit cost 210 units of work 13 UNIX 297 unix system services 245 unzip tool 215 update definition 54 update definitions 22 usage segment 178 USE_SUMMARY_D 19 user defined fields 176 user defined mapping 156, 174 user ID 14, 184 user-defined queries 211
Back cover
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.