MIMIX Reference
Product conventions .... 14
    Menus and commands .... 14
    Accessing online help .... 14
Publication conventions .... 14
    Formatting for displays and commands .... 15
Sources for additional information .... 17
How to contact us .... 19

Chapter 1  MIMIX overview .... 21
    MIMIX concepts .... 23
        System roles and relationships .... 23
        Data groups: the unit of replication .... 24
        Changing directions: switchable data groups .... 24
            Additional switching capability .... 25
        Journaling and object auditing introduction .... 25
        Log spaces .... 26
        Multi-part naming convention .... 27
    The MIMIX environment .... 29
        The product library .... 29
            IFS directories .... 29
        Job descriptions and job classes .... 30
            User profiles .... 31
        The system manager .... 31
        The journal manager .... 33
        The MIMIXQGPL library .... 34
            MIMIXSBS subsystem .... 34
        Data libraries .... 34
        Named definitions .... 34
        Data group entries .... 35
    Journal receiver management .... 37
        Interaction with other products that manage receivers .... 38
        Processing from an earlier journal receiver .... 38
        Considerations when journaling on target .... 39
    Operational overview .... 40
        Support for starting and ending replication .... 40
        Support for checking installation status .... 41
        Support for automatically detecting and resolving problems .... 41
        Support for working with data groups .... 41
        Support for resolving problems .... 42
        Support for switching a data group .... 44
        Support for working with messages .... 44

Chapter 2  Replication process overview .... 46
    Replication job and supporting job names .... 47
    Cooperative processing introduction .... 50
        MIMIX Dynamic Apply .... 50
        Legacy cooperative processing .... 51
        Advanced journaling .... 51
    System journal replication .... 53
        Processing self-contained activity entries .... 54
        Processing data-retrieval activity entries .... 55
        Processes with multiple jobs .... 57
        Tracking object replication .... 57
        Managing object auditing .... 57
    User journal replication .... 61
        What is remote journaling? .... 61
        Benefits of using remote journaling with MIMIX .... 61
        Restrictions of MIMIX Remote Journal support .... 62
        Overview of IBM processing of remote journals .... 63
            Synchronous delivery .... 63
            Asynchronous delivery .... 65
        User journal replication processes .... 66
        The RJ link .... 66
            Sharing RJ links among data groups .... 66
            RJ links within and independently of data groups .... 67
            Differences between ENDDG and ENDRJLNK commands .... 67
        RJ link monitors .... 68
            RJ link monitors - operation .... 68
            RJ link monitors in complex configurations .... 68
        Support for unconfirmed entries during a switch .... 70
        RJ link considerations when switching .... 70
    User journal replication of IFS objects, data areas, data queues .... 72
        Benefits of advanced journaling .... 72
        Replication processes used by advanced journaling .... 73
        Tracking entries .... 74
        IFS object file identifiers (FIDs) .... 75
    Lesser-used processes for user journal replication .... 76
        User journal replication with source-send processing .... 76
        The data area polling process .... 77

Chapter 3  Preparing for MIMIX .... 80
    Checklist: pre-configuration .... 81
    Data that should not be replicated .... 83
    Planning for journaled IFS objects, data areas, and data queues .... 85
        Is user journal replication appropriate for your environment? .... 85
        Serialized transactions with database files .... 85
        Converting existing data groups .... 85
            Conversion examples .... 86
        Database apply session balancing .... 87
        User exit program considerations .... 87
    Starting the MIMIXSBS subsystem .... 90
    Accessing the MIMIX Main Menu .... 91

Chapter 4  Planning choices and details by object class .... 93
    Replication choices by object type .... 96
    Configured object auditing value for data group entries .... 98
    Identifying library-based objects for replication .... 100
        How MIMIX uses object entries to evaluate journal entries for replication .... 101
        Identifying spooled files for replication .... 102
            Additional choices for spooled file replication .... 103
        Replicating user profiles and associated message queues .... 104
    Identifying logical and physical files for replication .... 105
        Considerations for LF and PF files .... 105
            Files with LOBs .... 107
        Configuration requirements for LF and PF files .... 108
        Requirements and limitations of MIMIX Dynamic Apply .... 110
        Requirements and limitations of legacy cooperative processing .... 111
    Identifying data areas and data queues for replication .... 112
        Configuration requirements - data areas and data queues .... 112
        Restrictions - user journal replication of data areas and data queues .... 113
            Supported journal code E and Q entry types .... 114
    Identifying IFS objects for replication .... 118
        Supported IFS file systems and object types .... 118
        Considerations when identifying IFS objects .... 119
            MIMIX processing order for data group IFS entries .... 119
            Long IFS path names .... 119
            Upper and lower case IFS object names .... 119
            Configured object auditing value for IFS objects .... 120
        Configuration requirements - IFS objects .... 120
        Restrictions - user journal replication of IFS objects .... 121
            Supported journal code B entry types .... 122
    Identifying DLOs for replication .... 124
        How MIMIX uses DLO entries to evaluate journal entries for replication .... 124
            Sequence and priority order for documents .... 124
            Sequence and priority order for folders .... 125
    Processing of newly created files and objects .... 127
        Newly created files .... 127
            New file processing - MIMIX Dynamic Apply .... 127
            New file processing - legacy cooperative processing .... 128
        Newly created IFS objects, data areas, and data queues .... 128
            Determining how an activity entry for a create operation was replicated .... 129
    Processing variations for common operations .... 130
        Move/rename operations - system journal replication .... 130
        Move/rename operations - user journaled data areas, data queues, IFS objects .... 131
        Delete operations - files configured for legacy cooperative processing .... 134
        Delete operations - user journaled data areas, data queues, IFS objects .... 134
        Restore operations - user journaled data areas, data queues, IFS objects .... 134

Chapter 5  Configuration checklists .... 137
    Checklist: New remote journal (preferred) configuration .... 139
    Checklist: New MIMIX source-send configuration .... 143
    Checklist: Converting to remote journaling .... 147
    Converting to MIMIX Dynamic Apply .... 150
        Converting using the Convert Data Group command .... 150
        Checklist: manually converting to MIMIX Dynamic Apply .... 151
    Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling .... 154
    Checklist: Converting to legacy cooperative processing .... 157

Chapter 6  System-level communications .... 159
    Configuring for native TCP/IP .... 159
        Port aliases - simple example .... 160
        Port aliases - complex example .... 161
        Creating port aliases .... 162
    Configuring APPC/SNA .... 163
    Configuring OptiConnect .... 163

Chapter 7  Configuring system definitions .... 166
    Tips for system definition parameters .... 167
    Creating system definitions .... 170
    Changing a system definition .... 171
    Multiple network system considerations .... 172

Chapter 8  Configuring transfer definitions .... 174
    Tips for transfer definition parameters .... 176
    Using contextual (*ANY) transfer definitions .... 181
        Search and selection process .... 181
        Considerations for remote journaling .... 182
        Considerations for MIMIX source-send configurations .... 182
        Naming conventions for contextual transfer definitions .... 183
        Additional usage considerations for contextual transfer definitions .... 183
    Creating a transfer definition .... 184
    Changing a transfer definition .... 186
        Changing a transfer definition to support remote journaling .... 186
    Finding the system database name for RDB directory entries .... 188
        Using i5/OS commands to work with RDB directory entries .... 188
    Starting the Lakeview TCP/IP server .... 189
    Using autostart job entries to start the TCP server .... 190
        Adding an autostart job entry .... 190
        Identifying the autostart job entry in the MIMIXSBS subsystem .... 191
        Changing the job description for an autostart job entry .... 191
    Verifying a communications link for system definitions .... 194
    Verifying the communications link for a data group .... 195
        Verifying all communications links .... 195

Chapter 9  Configuring journal definitions .... 197
    Journal definitions created by other processes .... 200
    Tips for journal definition parameters .... 201
    Journal definition considerations .... 205
        Naming convention for remote journaling environments with 2 systems .... 206
            Example journal definitions for a switchable data group .... 207
        Naming convention for multimanagement environments .... 208
            Example journal definitions for three management nodes .... 209
    Journal receiver size for replicating large object data .... 213
        Verifying journal receiver size options .... 213
        Changing journal receiver size options .... 213
    Creating a journal definition .... 215
    Changing a journal definition .... 217
    Building the journaling environment .... 219
    Changing the remote journal environment .... 222
    Adding a remote journal link .... 225
    Changing a remote journal link .... 227
    Temporarily changing from RJ to MIMIX processing .... 228
    Changing from remote journaling to MIMIX processing .... 229
    Removing a remote journaling environment .... 231

Chapter 10  Configuring data group definitions .... 233
    Tips for data group parameters .... 234
        Additional considerations for data groups .... 244
    Creating a data group definition .... 247
    Changing a data group definition .... 251
    Fine-tuning backlog warning thresholds for a data group .... 251

Chapter 11  Additional options: working with definitions .... 255
    Copying a definition .... 255
    Deleting a definition .... 256
    Displaying a definition .... 257
    Printing a definition .... 257
    Renaming definitions .... 258
        Renaming a system definition .... 258
        Renaming a transfer definition .... 261
        Renaming a journal definition with considerations for RJ link .... 262
        Renaming a data group definition .... 263

Chapter 12  Configuring data group entries .... 265
    Creating data group object entries .... 267
        Loading data group object entries .... 267
        Adding or changing a data group object entry .... 268
    Creating data group file entries .... 272
        Loading file entries .... 272
            Loading file entries from a data group's object entries .... 273
            Loading file entries from a library .... 275
            Loading file entries from a journal definition .... 276
            Loading file entries from another data group's file entries .... 277
        Adding a data group file entry .... 278
        Changing a data group file entry .... 279
    Creating data group IFS entries .... 282
        Adding or changing a data group IFS entry .... 282
    Loading tracking entries .... 284
        Loading IFS tracking entries .... 284
        Loading object tracking entries .... 285
    Creating data group DLO entries .... 287
        Loading DLO entries from a folder .... 287
        Adding or changing a data group DLO entry .... 288
    Creating data group data area entries .... 289
        Loading data area entries for a library .... 289
        Adding or changing a data group data area entry .... 290
    Additional options: working with DG entries .... 291
        Copying a data group entry .... 291
        Removing a data group entry .... 292
        Displaying a data group entry .... 293
        Printing a data group entry .... 293

Chapter 13  Additional supporting tasks for configuration .... 294
    Accessing the Configuration Menu .... 295
    Starting the system and journal managers .... 296
    Setting data group auditing values manually .... 297
        Examples of changing an IFS object's auditing value .... 298
    Checking file entry configuration manually .... 303
    Changes to startup programs .... 305
    Checking DDM password validation level in use .... 306
        Option 1. Enable MIMIXOWN user profile for DDM environment .... 306
        Option 2. Allow user profiles without passwords .... 307
    Starting the DDM TCP/IP server .... 308
    Identifying data groups that use an RJ link .... 310
    Using file identifiers (FIDs) for IFS objects .... 312
    Configuring restart times for MIMIX jobs .... 313
        Configurable job restart time operation .... 313
            Considerations for using *NONE .... 315
        Examples: job restart time .... 315
            Restart time examples: system definitions .... 316
            Restart time examples: system and data group definition combinations .... 316
        Configuring the restart time in a system definition .... 319
        Configuring the restart time in a data group definition .... 319

Chapter 14  Starting, ending, and verifying journaling .... 322
    What objects need to be journaled .... 323
        Authority requirements for starting journaling .... 324
    MIMIX commands for starting journaling .... 325
    Journaling for physical files .... 326
        Displaying journaling status for physical files .... 326
        Starting journaling for physical files .... 326
        Ending journaling for physical files .... 327
        Verifying journaling for physical files .... 328
    Journaling for IFS objects .... 330
        Displaying journaling status for IFS objects .... 330
        Starting journaling for IFS objects .... 330
        Ending journaling for IFS objects .... 331
        Verifying journaling for IFS objects .... 332
    Journaling for data areas and data queues .... 334
        Displaying journaling status for data areas and data queues .... 334
        Starting journaling for data areas and data queues .... 334
        Ending journaling for data areas and data queues .... 335
        Verifying journaling for data areas and data queues .... 336

Chapter 15  Configuring for improved performance .... 337
    Minimized journal entry data .... 339
        Restrictions of minimized journal entry data .... 339
        Configuring for minimized journal entry data .... 340
    Configuring for high availability journal performance enhancements .... 341
        Journal standby state .... 341
            Minimizing potential performance impacts of standby state .... 342
        Journal caching .... 342
        MIMIX processing of high availability journal performance enhancements .... 342
        Requirements of high availability journal performance enhancements .... 343
        Restrictions of high availability journal performance enhancements .... 343
    Caching extended attributes of *FILE objects .... 345
    Increasing data returned in journal entry blocks by delaying RCVJRNE calls .... 346
        Understanding the data area format .... 346
        Determining if the data area should be changed .... 347
        Configuring the RCVJRNE call delay and block values .... 347
    Configuring high volume objects for better performance .... 350
    Improving performance of the #MBRRCDCNT audit .... 351

Chapter 16  Configuring advanced replication techniques .... 353
    Keyed replication .... 355
        Keyed vs positional replication .... 355
        Requirements for keyed replication .... 355
        Restrictions of keyed replication .... 356
        Implementing keyed replication .... 356
            Changing a data group configuration to use keyed replication .... 356
            Changing a data group file entry to use keyed replication .... 357
            Verifying key attributes .... 359
    Data distribution and data management scenarios .... 361
        Configuring for bi-directional flow .... 361
            Bi-directional requirements: system journal replication .... 361
            Bi-directional requirements: user journal replication .... 362
        Configuring for file routing and file combining .... 363
        Configuring for cascading distributions .... 365
    Trigger support .... 368
        How MIMIX handles triggers .... 368
        Considerations when using triggers .... 368
        Enabling trigger support .... 369
        Synchronizing files with triggers .... 369
    Constraint support .... 370
        Referential constraints with delete rules .... 370
            Replication of constraint-induced modifications .... 371
    Handling SQL identity columns .... 373
        The identity column problem explained .... 373
        When the SETIDCOLA command is useful .... 374
        SETIDCOLA command limitations .... 374
        Alternative solutions .... 375
        SETIDCOLA command details .... 376
            Usage notes .... 377
            Examples of choosing a value for INCREMENTS .... 377
        Checking for replication of tables with identity columns .... 378
        Setting the identity column attribute for replicated files .... 378
    Collision resolution .... 381
        Additional methods available with CR classes .... 381
        Requirements for using collision resolution .... 382
        Working with collision resolution classes .... 383
            Creating a collision resolution class .... 383
            Changing a collision resolution class .... 384
            Deleting a collision resolution class .... 384
            Displaying a collision resolution class .... 384
            Printing a collision resolution class .... 385
    Omitting T-ZC content from system journal replication .... 387
        Configuration requirements and considerations for omitting T-ZC content .... 388
            Omit content (OMTDTA) and cooperative processing .... 389
            Omit content (OMTDTA) and comparison commands .... 389
    Selecting an object retrieval delay .... 391
        Object retrieval delay considerations and examples .... 391
    Configuring to replicate SQL stored procedures and user-defined functions .... 393
        Requirements for replicating SQL stored procedure operations .... 393
        To replicate SQL stored procedure operations .... 393
    Using Save-While-Active in MIMIX .... 396
        Considerations for save-while-active .... 396
        Types of save-while-active options .... 397
        Example configurations .... 397

Chapter 17  Object selection for Compare and Synchronize commands .... 399
    Object selection process .... 399
        Order precedence ....
401 Parameters for specifying object selectors.............................................. 402 Object selection examples ...................................................................... 407 Processing example with a data group and an object selection parameter ...... 407 Example subtree ............................................................................................... 410 Example Name pattern...................................................................................... 414 Example subtree for IFS objects ....................................................................... 415 Report types and output formats ............................................................................. 418 Spooled files ...................................................................................................... 418 Outfiles .............................................................................................................. 419 Chapter 18 Comparing attributes 420 About the Compare Attributes commands .............................................................. 420 Choices for selecting objects to compare.......................................................... 421 Unique parameters ...................................................................................... 421 Choices for selecting attributes to compare ...................................................... 422 CMPFILA supported object attributes for *FILE objects .............................. 423 CMPOBJA supported object attributes for *FILE objects ............................ 423 Comparing file and member attributes .................................................................... 425 Comparing object attributes .................................................................................... 428 Comparing IFS object attributes.............................................................................. 
431 Comparing DLO attributes....................................................................... 434 Chapter 19 Comparing file record counts and file member data 437 Comparing file record counts .................................................................................. 437 To compare file record counts ........................................................................... 438 Significant features for comparing file member data ............................................... 440 Repairing data ................................................................................................... 440 Active and non-active processing...................................................................... 440 Processing members held due to error ............................................................. 441 Additional features............................................................................................. 441
Considerations for using the CMPFILDTA command ............................................. 441 Recommendations and restrictions ................................................................... 441 Using the CMPFILDTA command with firewalls................................................ 442 Security considerations ..................................................................................... 442 Comparing allocated records to records not yet allocated ................................ 442 Comparing files with unique keys, triggers, and constraints ............................. 443 Avoiding issues with triggers ....................................................................... 444 Referential integrity considerations ............................................................. 444 Job priority .................................................................................................... 444 Specifying CMPFILDTA parameter values.............................................................. 445 Specifying file members to compare ................................................................. 445 Tips for specifying values for unique parameters .............................................. 446 Specifying the report type, output, and type of processing ............................... 449 System to receive output ............................................................................. 449 Interactive and batch processing................................................................. 449 Using the additional parameters........................................................................ 449 Advanced subset options for CMPFILDTA.............................................................. 451 Ending CMPFILDTA requests ................................................................................. 454 Comparing file member data - basic procedure (non-active) .................................. 
455 Comparing and repairing file member data - basic procedure ................................ 458 Comparing and repairing file member data - members on hold (*HLDERR) .......... 461 Comparing file member data using active processing technology .......................... 464 Comparing file member data using subsetting options ........................................... 467 Chapter 20 Synchronizing data between systems 472 Considerations for synchronizing using MIMIX commands..................................... 474 Limiting the maximum sending size .................................................................. 474 Synchronizing user profiles ............................................................................... 474 Synchronizing user profiles with SYNCnnn commands .............................. 475 Synchronizing user profiles with the SNDNETOBJ command ................... 475 Missing system distribution directory entries automatically added .............. 476 Synchronizing large files and objects ................................................................ 476 Status changes caused by synchronizing ......................................................... 476 Synchronizing objects in an independent ASP.................................................. 477 About MIMIX commands for synchronizing objects, IFS objects, and DLOs .......... 478 About synchronizing data group activity entries (SYNCDGACTE).......................... 479 About synchronizing file entries (SYNCDGFE command) ...................................... 480 About synchronizing tracking entries....................................................................... 482 Performing the initial synchronization...................................................................... 483 Establish a synchronization point ...................................................................... 
483 Resources for synchronizing ............................................................................. 483 Using SYNCDG to perform the initial synchronization ............................................ 484 To perform the initial synchronization using the SYNCDG command defaults . 485 Verifying the initial synchronization ......................................................................... 487 Synchronizing database files................................................................................... 489 Synchronizing objects ............................................................................................. 491 To synchronize library-based objects associated with a data group ................. 491 To synchronize library-based objects without a data group .............................. 492 Synchronizing IFS objects....................................................................................... 495 To synchronize IFS objects associated with a data group ................................ 495
To synchronize IFS objects without a data group ............................................. 496 Synchronizing DLOs................................................................................................ 499 To synchronize DLOs associated with a data group ......................................... 499 To synchronize DLOs without a data group ...................................................... 500 Synchronizing data group activity entries................................................................ 503 Synchronizing tracking entries ................................................................................ 505 To synchronize an IFS tracking entry ................................................................ 505 To synchronize an object tracking entry ............................................................ 505 Sending library-based objects ................................................................................. 506 Sending IFS objects ................................................................................................ 508 Sending DLO objects .............................................................................................. 509 Chapter 21 Introduction to programming 510 Support for customizing........................................................................................... 511 User exit points.................................................................................................. 511 Collision resolution ............................................................................................ 511 Completion and escape messages for comparison commands ............................. 514 CMPFILA messages ......................................................................................... 514 CMPOBJA messages........................................................................................ 
515 CMPIFSA messages ......................................................................................... 515 CMPDLOA messages ....................................................................................... 516 CMPRCDCNT messages .................................................................................. 516 CMPFILDTA messages..................................................................................... 517 Adding messages to the MIMIX message log ......................................................... 521 Output and batch guidelines.................................................................................... 523 General output considerations .......................................................................... 523 Output parameter ........................................................................................ 523 Display output.............................................................................................. 524 Print output .................................................................................................. 524 File output.................................................................................................... 526 General batch considerations............................................................................ 527 Batch (BATCH) parameter .......................................................................... 527 Job description (JOBD) parameter .............................................................. 527 Job name (JOB) parameter ......................................................................... 527 Displaying a list of commands in a library ............................................................... 528 Running commands on a remote system................................................................ 529 Benefits - RUNCMD and RUNCMDS commands ............................................. 
529 Procedures for running commands RUNCMD, RUNCMDS.................................... 530 Running commands using a specific protocol ................................................... 530 Running commands using a MIMIX configuration element ............................... 532 Using lists of retrieve commands ............................................................................ 536 Changing command defaults................................................................................... 537 Chapter 22 Customizing with exit point programs 538 Summary of exit points............................................................................................ 538 MIMIX user exit points ....................................................................................... 538 MIMIX Monitor user exit points .......................................................................... 538 MIMIX Promoter user exit points ....................................................................... 539 Requesting customized user exit programs ...................................................... 540 Working with journal receiver management user exit points ................................... 541
Journal receiver management exit points.......................................................... 541 Change management exit points................................................................. 541 Delete management exit points ................................................................... 542 Requirements for journal receiver management exit programs................... 542 Journal receiver management exit program example ................................. 545 Appendix A Supported object types for system journal replication 549
Appendix B Copying configurations 552 Supported scenarios ............................................................................................... 552 Checklist: copy configuration................................................................................... 553 Copying configuration procedure ............................................................................ 558 Appendix C Configuring Intra communications 559 Manually configuring Intra using SNA ..................................................................... 559 Manually configuring Intra using TCP ..................................................................... 561 Appendix D MIMIX support for independent ASPs 563 Benefits of independent ASPs................................................................................. 564 Auxiliary storage pool concepts at a glance ............................................................ 564 Requirements for replicating from independent ASPs ............................................ 567 Limitations and restrictions for independent ASP support....................................... 567 Configuration planning tips for independent ASPs.................................................. 568 Journal and journal receiver considerations for independent ASPs .................. 569 Configuring IFS objects when using independent ASPs ................................... 569 Configuring library-based objects when using independent ASPs .................... 569 Avoiding unexpected changes to the library list ................................................ 570 Detecting independent ASP overflow conditions..................................................... 572 Appendix E Interpreting audit results 573 Interpreting audit results - MIMIX Availability Manager ........................................... 575 Interpreting audit results - 5250 emulator................................................................ 
576 Checking the job log of an audit .............................................................................. 578 Interpreting results for configuration data - #DGFE audit........................................ 580 Interpreting results of audits for record counts and file data ................................... 582 What differences were detected by #FILDTA.................................................... 582 What differences were detected by #MBRRCDCNT ......................................... 583 Interpreting results of audits that compare attributes .............................................. 586 What attribute differences were detected .......................................................... 587 Where was the difference detected................................................................... 589 What attributes were compared ........................................................................ 590 Attributes compared and expected results - #FILATR, #FILATRMBR audits.... 591 Attributes compared and expected results - #OBJATR audit ............................ 596 Attributes compared and expected results - #IFSATR audit ............................. 604 Attributes compared and expected results - #DLOATR audit ........................... 606 Comparison results for journal status and other journal attributes .................... 608 How configured journaling settings are determined .................................... 611 Comparison results for auxiliary storage pool ID (*ASP)................................... 612 Comparison results for user profile status (*USRPRFSTS) .............................. 615 How configured user profile status is determined........................................ 616 Comparison results for user profile password (*PRFPWDIND)......................... 619
Appendix F Outfile formats 621 Outfile support in MIMIX Availability Manager......................................................... 621 Work panels with outfile support ............................................................................. 622 MCAG outfile (WRKAG command) ......................................................................... 623 MCDTACRGE outfile (WRKDTARGE command) ................................................... 626 MCNODE outfile (WRKNODE command)............................................................... 628 MXCDGFE outfile (CHKDGFE command) .............................................................. 630 MXCMPDLOA outfile (CMPDLOA command)......................................................... 632 MXCMPFILA outfile (CMPFILA command) ............................................................. 634 MXCMPFILD outfile (CMPFILDTA command) ........................................................ 636 MXCMPFILR outfile (CMPFILDTA command, RRN report).................................... 639 MXCMPRCDC outfile (CMPRCDCNT command)................................................... 640 MXCMPIFSA outfile (CMPIFSA command) ............................................................ 644 MXCMPOBJA outfile (CMPOBJA command) ......................................................... 647 MXDGACT outfile (WRKDGACT command)........................................................... 649 MXDGACTE outfile (WRKDGACTE command)...................................................... 651 MXDGDAE outfile (WRKDGDAE command) .......................................................... 659 MXDGDFN outfile (WRKDGDFN command) .......................................................... 660 MXDGDLOE outfile (WRKDGDLOE command) ..................................................... 668 MXDGFE outfile (WRKDGFE command)................................................................ 
670 MXDGIFSE outfile (WRKDGIFSE command) ......................................................... 674 MXDGSTS outfile (WRKDG command) .................................................................. 676 WRKDG outfile SELECT statement examples .................................................. 696 WRKDG outfile example 1........................................................................... 696 WRKDG outfile example 2........................................................................... 696 WRKDG outfile example 3........................................................................... 697 WRKDG outfile example 4........................................................................... 697 MXDGOBJE outfile (WRKDGOBJE command) ...................................................... 703 MXDGTSP outfile (WRKDGTSP command) ........................................................... 706 MXJRNDFN outfile (WRKJRNDFN command) ....................................................... 709 MXRJLNK outfile (WRKRJLNK command) ............................................................. 713 MXSYSDFN outfile (WRKSYSDFN command)....................................................... 716 MXTFRDFN outfile (WRKTFRDFN command) ....................................................... 720 MZPRCDFN outfile (WRKPRCDFN command) ...................................................... 722 MZPRCE outfile (WRKPRCE command) ................................................................ 723 MXDGIFSTE outfile (WRKDGIFSTE command)..................................................... 726 MXDGOBJTE outfile (WRKDGOBJTE command).................................................. 728 732
Index
Product conventions
The conventions described here apply to all Lakeview products unless otherwise noted.
Publication conventions
This book uses typography and specialized formatting to help you quickly identify the type of information you are reading. For example, specialized styles and techniques distinguish information you see on a display from information you enter on a display or command line. In text, bold type identifies a new term whereas an underlined word highlights its importance. Notes and Attentions are specialized formatting techniques that are used, respectively, to highlight a fact or to warn you of the potential for damage. The following topics illustrate formatting techniques that may be used in this book.
UPPERCASE monospace font
Text that you enter into a 5250 emulator command line. In instructions, the conventions of italic and UPPERCASE also apply.
monospace font
Examples showing programming code.
The following information may also be helpful if you use advanced journaling:
• DB2 UDB for iSeries SQL Programming Concepts
• DB2 Universal Database for iSeries SQL Reference
• IBM redbook AS/400 Remote Journal Function for High Availability and Data Replication, SG24-5189
How to contact us
For contact information, visit our Contact CustomerCare web page. If you are current on maintenance, support for MIMIX products is also available when you log in to Support Central. It is important to include product and version information whenever you report problems. If you use MIMIX Availability Manager, you should also include the version information provided at the bottom of each MIMIX Availability Manager window.
Chapter 1
MIMIX overview
This book provides concepts, configuration procedures, and reference information for MIMIX ha1 and MIMIX ha Lite. For simplicity, this book uses the term MIMIX to refer to the functionality provided by either product unless a more specific name is necessary.

MIMIX version 5 provides high availability for your critical data in a production environment on IBM Power™ Systems through real-time replication of changes. MIMIX continuously captures changes to critical database files and objects on a production system, sends the changes to a backup system, and applies the changes to the appropriate database file or object on the backup system. The backup system stores exact duplicates of the critical database files and objects from the production system.

MIMIX uses two replication paths to address different pieces of your replication needs. These paths operate with configurable levels of cooperation or can operate independently. The user journal replication path captures changes to critical files and objects configured for replication through a user journal. When configuring this path, shipped defaults use the IBM i remote journaling function to simplify sending data to the remote system. In previous versions, MIMIX DB2 Replicator provided this function. The system journal replication path handles replication of critical system objects (such as user profiles or spooled files), integrated file system (IFS) objects, and document library objects (DLOs) using the IBM i system journal. In previous versions, MIMIX Object Replicator provided this function.
Configuration choices determine the degree of cooperative processing used between the system journal and user journal replication paths when replicating database files, IFS objects, data areas, and data queues.

One common use of MIMIX is to support a hot backup system to which operations can be switched in the event of a planned or unplanned outage. If a production system becomes unavailable, its backup is already prepared for users. In the event of an outage, you can quickly switch users to the backup system where they can continue using their applications. MIMIX captures changes on the backup system for later synchronization with the original production system. When the original production system is brought back online, MIMIX assists you with analysis and synchronization of the database files and other objects.

You can view the replicated data on the backup system at any time without affecting productivity. This allows you to generate reports, submit (read-only) batch jobs, or perform backups to tape from the backup system. In addition to real-time backup capability, replicated databases and objects can be used for distributed processing, allowing you to off-load applications to a backup system.

Typically MIMIX is used among systems in a network. Simple environments have one production system and one backup system. More complex environments have
multiple production systems or backup systems. MIMIX can also be used on a single system.

MIMIX automatically monitors your replication environment to detect and correct potential problems that could be detrimental to maintaining high availability. MIMIX also provides a means of verifying that the files and objects being replicated are what is defined to your configuration. This can help ensure the integrity of your MIMIX configuration.

The topics in this chapter include:
• MIMIX concepts on page 23 describes concepts and terminology that you need to know about MIMIX.
• The MIMIX environment on page 29 describes components of the MIMIX operating environment.
• Journal receiver management on page 37 describes how MIMIX performs change management and delete management for replication processes.
• Operational overview on page 40 provides information about day to day MIMIX operations.
MIMIX concepts
This topic identifies concepts and terminology that are fundamental to how MIMIX performs replication. You should be familiar with the relationships between systems, the concepts of data groups and switching, and the role of the i5/OS journaling function in replication.
The terms management system and network system define the role of a system relative to how the products interact within a MIMIX installation. These roles remain associated with the system within the MIMIX installation to which they are defined. Typically one system in the MIMIX installation is designated as the management system and the remaining one or more systems are designated as network systems. A management system is the system in a MIMIX installation that is designated as the control point for all installations of the product within the MIMIX installation. The management system is the location from which work to be performed by the product is defined and maintained. Often the system defined as the management system also serves as the backup system during normal operations. A network system is any system in a MIMIX installation that is not designated as the management system (control point) of that MIMIX installation. Work definitions are automatically distributed from the management system to a network system. Often a system defined as a network system also serves as the production system during normal operations.
MIMIX provides support for switching due to planned and unplanned events. At the data group level, the Switch Data Group (SWTDG) command switches the direction in which replication occurs between systems. Note: A switchable data group is different from bi-directional data flow. Bi-directional data flow is a data sharing technique described in Configuring advanced replication techniques on page 353.
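As a rough sketch, a planned switch could be requested from a 5250 command line as shown below. The DGDFN parameter name and the three-part data group name are assumptions for illustration only — prompt the SWTDG command (F4) in your installation to verify its actual parameters:

```
/* Switch the direction of replication for one data group.          */
/* DGDFN and the three-part name INVENTORY CHICAGO NEWYORK are      */
/* assumed values shown for illustration only.                      */
SWTDG DGDFN(INVENTORY CHICAGO NEWYORK)
```

After a switch, replication flows in the opposite direction, so changes made on the former backup system are captured for later synchronization with the former production system.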
Journal entries deposited into the system journal (on behalf of an audited object) contain only an indication of a change to an object. Some of these entry types contain enough information for MIMIX to apply the change directly to the replicated object on the target system; however, many entry types require MIMIX to gather additional information about the object from the source system before it can apply the change. Journal entries deposited into a user journal (on behalf of a journaled file, data area, data queue, or IFS object) contain images of the data that was changed. This information is needed by MIMIX in order to apply the change directly to the replicated object on the target system.

When replication is started, the start request (STRDG command) identifies a sequence number within a journal receiver at which MIMIX processing begins. In data groups configured with remote journaling, the specified sequence number and receiver name is the starting point for MIMIX processing from the remote journal. The i5/OS remote journal function controls where it starts sending entries from the source journal receiver to the remote journal receiver.

The i5/OS operating system requires that journaled objects reside in the same auxiliary storage pool (ASP) as the user journal. The journal receivers can be in a different ASP. If the journal is in a primary independent ASP, the journal receivers must reside in the same primary independent ASP or in a secondary independent ASP within the same ASP group.

The i5/OS operating system (V5R4 and higher releases) allows journaling a maximum of 10,000,000 objects to one user journal. MIMIX can use existing journals with this value. Journals created by MIMIX have a maximum of 250,000 objects. User journaling will not start if the number of objects associated with the journal exceeds the journal maximum.
The maximum includes:
• Objects for which changes are currently being journaled
• Objects for which journaling was ended while the current receiver is attached
• Journal receivers that are, or were, associated with the journal while the current journal receiver is attached
Remote journaling requires unique considerations for journaling and journal receiver management. For additional information, see Journal receiver management on page 37.
Log spaces
A log space is a MIMIX object, based on System i5 user space objects, that provides an efficient mechanism for storing and manipulating replicated data that is temporarily held on the target system during the receive and apply processes. All internal structures and objects that make up a log space are created and manipulated by MIMIX.
definition, it reverses the order of the system names and checks again, avoiding the need for redundant transfer definitions. You can also use contextual system support (*ANY) to configure transfer definitions. When you specify *ANY in a transfer definition, MIMIX uses information from the context in which the transfer definition is called to resolve to the correct system. Unlike the conventional configuration case, a specific search order is used if MIMIX is still unable to find an appropriate transfer definition. For more information, see Using contextual (*ANY) transfer definitions on page 181.
IFS directories
A default IFS directory structure is used in conjunction with the library-based objects of the MIMIX family of products. The IFS directory structure is associated with the product library for the MIMIX installation and is created during the installation process for License Manager and MIMIX. Over time, the installation processes for products and fixes will restore objects to the IFS directory structure as well as to the QSYS library.

The directories created when License Manager is installed or upgraded follow these guidelines:

• /LakeviewTech This is the root directory for all IFS-based objects.
• /LakeviewTech/system-based-area This directory structure contains system-based objects that need to exist only once on a system. The system-based-area represents a unique directory for each set of objects. Two structures that you should be aware of are:
  - /LakeviewTech/Service/MIMIX/VvRrMm/ is the recommended location for users to place fixes downloaded from the Lakeview website. The VvRrMm value is the same as the release of License Manager on the system. Multiple VvRrMm directories will exist as the release of License Manager changes.
  - /LakeviewTech/Upgrades/ is where the MIMIX Installation Wizard places software packages that it uploads to the System i5.
• /LakeviewTech/UserData/ is available to users to store product-related data.

The directories created when MIMIX is installed or upgraded follow these guidelines. The requirements of your MIMIX environment determine the structure of these directories:
• /LakeviewTech/MIMIX/product-installation-library There is a unique directory structure for each installation of MIMIX.
• /LakeviewTech/MIMIX/product-installation-library/product-area There is a unique directory structure for each installation of MIMIX. The structure is determined by the set of objects needed by an area of the product and the product installation library.
Table 2. Job descriptions used by MIMIX. Each job description is shipped in the installation library, in the MIMIXQGPL library, or in both.

MXAUDIT: MIMIX Auditing. Used for MIMIX compare commands, such as those called by MIMIX audits, as the default value on the Job description (JOBD) parameter.
MXDFT: MIMIX Default. Used for MIMIX load commands and by other commands that do not have a specific job description as the default value on the JOBD parameter.
MXSYNC: MIMIX Synchronization. Used for MIMIX synchronization commands, such as those called by MIMIX audits, as the default value on the JOBD parameter.
MIMIXAPY: MIMIX Apply. Used for MIMIX apply process jobs.
MIMIXCMN: MIMIX Communications. Used for all target communication jobs.

MIMIX Default. Used for all MIMIX jobs that do not have a specific job description.
MIMIX Manager. Used for MIMIX system manager and journal manager jobs.
MIMIX Monitor. Used for most jobs submitted by the MIMIX Monitor product.
MIMIX Promoter. Used for jobs submitted by the MIMIX Promoter product.
MIMIX Reorganize File. Used for file reorganization jobs submitted by the database apply job.
MIMIX Send. Used for database send, object send, object retrieve, container send, and status send jobs in MIMIX.
MIMIX Synchronization. Used for MIMIX file synchronization. This is valid for synchronize commands that do not have a JOBD parameter on the display.
MIMIXUPS: MIMIX UPS Monitor. Used for the uninterruptible power source (UPS) monitor managed by the MIMIX Monitor product.
MIMIXVFY: MIMIX Verify. Used for MIMIX verify and compare command processes. This is valid for verify and compare commands that do not have a JOBD parameter on the display.
User profiles
All of the MIMIX job descriptions are configured to run jobs using the MIMIXOWN user profile. This profile owns all MIMIX objects, including the objects in the MIMIX product libraries and in the MIMIXQGPL library. The profile is created with sufficient authority to run all MIMIX products and perform all the functions provided by the MIMIX products. The authority of this user profile can be reduced if business practices require, but this is not recommended. Reducing the authority of the MIMIXOWN profile requires significant effort by the user to ensure that the products continue to function properly and to avoid adversely affecting the performance of MIMIX products. See the License and Availability Manager book for additional security information for the MIMIXOWN user profile.
system manager job and a receiver side system manager job. These jobs must be active to enable replication. Once started, the system manager monitors for configuration changes and automatically moves any configuration changes to the network system. Dynamic status changes are also collected and returned to the management system. The system manager also gathers messages and timestamp information from the network system and places them in a message log and timestamp file on the management system. In addition, the system manager performs periodic maintenance tasks, including cleanup of the system and data group history files. Figure 1 shows a MIMIX installation with a management system and two network systems. In this installation, there are four pairs of system manager jobs; two between the first network system and the management system and two between the second network system and the management system. Each arrow represents a pair of system manager jobs. Since each pair has a send side system manager job and a receiver side system manager job, there are eight total system manager jobs in this installation.
Figure 1. System manager jobs in a MIMIX installation with one management system and two network systems.
The System manager delay parameter in the system definition determines how frequently the system manager looks for work. Other parameters in the system definition control other aspects of system manager operation. System manager jobs are included in a group of jobs that MIMIX automatically restarts daily to maintain the MIMIX environment. The default operation of MIMIX is to restart these MIMIX jobs at midnight (12:00 a.m.). MIMIX determines when to restart the system managers based on the value of the Job restart time parameter in the system definitions for the network and management systems. For more information, see the section Configuring restart times for MIMIX jobs on page 313.
have three journal manager jobs, one on each system. For more information, see Journal definition considerations on page 205. By default, MIMIX performs both change management and delete management for journal receivers used by the replication process. Parameters in a journal definition allow you to customize details of how the change and delete operations are performed. The Journal manager delay parameter in the system definition determines how frequently the journal manager looks for work. Journal manager jobs are included in a group of jobs that MIMIX automatically restarts daily to maintain the MIMIX environment. The default operation of MIMIX is to restart these MIMIX jobs at midnight (12:00 a.m.). The Job restart time parameter in the system definition determines when the journal manager for that system restarts. For more information, see the section Configuring restart times for MIMIX jobs on page 313.
MIMIXSBS subsystem
The MIMIXSBS subsystem is the default subsystem used by nearly all MIMIX-related processing. This subsystem is shipped with the proper job queue entries and routing entries for correct operation of the MIMIX jobs.
Data libraries
MIMIX uses the concept of data libraries. Currently there are two series of data libraries:

• MIMIX uses data libraries for storing the contents of the object cache. MIMIX creates the first data library when needed and may create additional data libraries. The names of these data libraries are of the form product-library_n (where n is a number starting at 1).
• For system journal replication, MIMIX creates libraries named product-library_x, where x is derived from the ASP. For example, A for ASP 1, B for ASP 2. These ASP-specific data libraries are created when needed and are not deleted until the product is uninstalled.
Named definitions
MIMIX uses named definitions to identify related user-defined configuration information. You can create named definitions for system information, communication (transfer) information, journal information, and replication (data group) information. Any definitions you create can be used by both user journal and system journal replication processes. One or more of each of the following definitions are required to perform replication:

• A system definition identifies to MIMIX the characteristics of a system that participates in a MIMIX installation.
• A transfer definition identifies to MIMIX the communications path and protocol to be used between two systems. MIMIX supports Systems Network Architecture (SNA), OptiConnect, and Transmission Control Protocol/Internet Protocol (TCP/IP) protocols.
• A journal definition identifies to MIMIX a journal environment on a particular system. MIMIX uses the journal definition to manage the journal receiver environment used by the replication process.
• A data group definition identifies to MIMIX the characteristics of how replication occurs between two systems. A data group definition determines the direction in which replication occurs between the systems, whether that direction can be switched, and the default processing characteristics to use when processing the database and object information associated with the data group.
• A remote journal link (RJ link) is a MIMIX configuration element that identifies an i5/OS remote journaling environment. Newly created data groups use remote journaling as the default configuration. An RJ link identifies journal definitions that define the source and target journals, primary and secondary transfer definitions for the communications path used by MIMIX, and whether the i5/OS remote journal function sends journal entries asynchronously or synchronously. When a data group is added, the ADDRJLNK command is run automatically, using the transfer definition defined in the data group.

The naming conventions used within definitions are described in Multi-part naming convention on page 27.
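To make the relationship between these definitions concrete, a minimal configuration sequence might look like the following sketch. The command names and parameters shown are illustrative assumptions, not exact syntax; only ADDRJLNK is named in the text above. See the configuration chapters for the actual commands:

```
/* Illustrative only: define two systems, a TCP/IP transfer path,  */
/* and a switchable data group between them.                       */
CRTSYSDFN SYSDFN(SYSTEMA) TYPE(*MGT)         /* management system  */
CRTSYSDFN SYSDFN(SYSTEMB) TYPE(*NET)         /* network system     */
CRTTFRDFN TFRDFN(PRIMARY SYSTEMA SYSTEMB) PROTOCOL(*TCP)
CRTDGDFN  DGDFN(INVENTORY SYSTEMA SYSTEMB) ALWSWT(*YES)
/* Adding the data group runs ADDRJLNK automatically, using the    */
/* transfer definition defined in the data group.                  */
```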
• Data group IFS entries This type of entry allows you to identify integrated file system (IFS) objects for replication. IFS objects include directories and stream files. They reside in directories, similar to DOS or Unix files. You can select IFS objects for replication by specific or generic path name.
• Data group DLO entries This type of entry allows you to identify document library objects (DLOs) for replication. DLOs are documents and folders. They are contained in folders (except for first-level folders). To select DLOs for replication, you select individual DLOs by specific or generic folder and DLO name, and owner.
• Data group data area entries This type of entry allows you to define a data area for replication by the data area polling process. However, the preferred way to replicate data areas is to use advanced journaling.
A single data group can contain any combination of these types of data group entries. If your license is for only one of the MIMIX products rather than for MIMIX ha1 or MIMIX ha Lite, only the entries associated with the product to which you are licensed will be processed for replication.
In a remote journaling configuration, MIMIX recognizes remote journals and ignores change management for the remote journals. The remote journal receiver is changed automatically by the i5/OS remote journal function when the receiver on the source system is changed. You can specify in the source journal definition whether to have receiver change management performed by the system or by MIMIX. Any change management values you specify for the target journal definition are ignored. You can also customize how MIMIX performs journal receiver change management through the use of exit programs. For more information, see Working with journal receiver management user exit points on page 541.

Delete management: The Receiver delete management (DLTMGT) parameter controls how the journal receivers used for replication are deleted. It is strongly recommended that you use the value *YES to allow MIMIX to perform delete management. When MIMIX performs delete management, the journal receivers are deleted only after MIMIX is finished with them and all other criteria specified on the journal
definition are met. The criteria include how long to retain unsaved journal receivers (KEEPUNSAV), how many detached journal receivers to keep (KEEPRCVCNT), and how long to keep detached journal receivers (KEEPJRNRCV).

Note: If more than one MIMIX installation uses the same journal, the journal manager for each installation can delete a journal receiver regardless of whether the other installations are finished with it. If you have this scenario, you need to use the journal receiver delete management exit points to control deleting the journal receiver. For more information, see Working with journal receiver management user exit points on page 541.

Delete management of the source and target receivers occurs independently for each. It is highly recommended that you configure the journal definitions to have MIMIX perform journal delete management. The i5/OS remote journal function does not allow a receiver to be deleted until it is replicated from the local journal (source) to the remote journal (target). When MIMIX manages deletion, a target journal receiver cannot be deleted until it is processed by the database reader (DBRDR) process and it meets the other criteria defined in the journal definition. If you choose to manage journal receivers yourself, you need to ensure that journal receivers are not removed before MIMIX has finished processing them. MIMIX operations can be affected if you allow the system to handle delete management. For example, the system may delete a journal receiver before MIMIX has completed its use.
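As a sketch of how these retention criteria might fit together on a journal definition, assuming a change command for journal definitions that accepts the parameter names described above (the command name, definition name, and values shown are illustrative):

```
/* Let MIMIX delete receivers, keeping unsaved receivers for 2 days, */
/* at least 5 detached receivers, and detached receivers for 7 days. */
/* Command name and value units are assumptions for illustration.    */
CHGJRNDFN JRNDFN(INVJRN SYSTEMA)
          DLTMGT(*YES) KEEPUNSAV(2) KEEPRCVCNT(5) KEEPJRNRCV(7)
```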
For example, refer to Figure 2. Replication ended while processing journal entries in target receiver 2. Target journal receiver 1 is deleted through the configured delete management options. If the data group is started (STRDG) with a starting journal sequence number for an entry that is in journal receiver 1, the remote journal function attempts to retransmit source journal receivers 1 through 4, beginning with receiver 1. However, receiver 2 already exists on the target system. When the operating system encounters receiver 2, an error occurs and the transmission to the target system ends. You can prevent this situation before starting that data group by deleting any target journal receivers that follow the receiver that will be used as the starting point. If you encounter the problem, recovery is simply to remove the target journal receivers and let remote journaling resend them. In this example, deleting target receiver 2 would prevent or resolve the problem.
Figure 2. Example of processing from an earlier journal receiver.
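Recovery in this example amounts to deleting the stranded target receiver so the remote journal function can resend it. On i5/OS, deleting a receiver might look like the following, with the library and receiver names purely illustrative:

```
/* Delete target receiver 2 so retransmission from receiver 1 can   */
/* proceed. DLTOPT(*IGNINQMSG) suppresses the confirmation inquiry. */
DLTJRNRCV JRNRCV(TGTLIB/RCV0002) DLTOPT(*IGNINQMSG)
```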
Operational overview
Before replication can begin, the following requirements must be met through the installation and configuration processes:

• MIMIX software must be installed on each system in the MIMIX installation.
• At least one communication link must be in place for each pair of systems between which replication will occur.
• The MIMIX operating environment must be configured and be available on each system.
• Journaling must be active for the database files and objects configured for user journal replication.
• For objects to be replicated from the system journal, the object auditing environment must be set up.
• The files and objects must be initially synchronized between the systems participating in replication.
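For instance, the journaling and object auditing prerequisites can be met with standard i5/OS commands such as these. The library, file, and journal names are illustrative:

```
/* Start journaling a physical file, capturing before and after images */
STRJRNPF FILE(APPLIB/ORDERS) JRN(APPLIB/APPJRN) IMAGES(*BOTH)

/* Audit changes to an object so they are deposited in the system journal */
CHGOBJAUD OBJ(APPLIB/ORDERS) OBJTYPE(*FILE) OBJAUD(*CHANGE)
```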
Once MIMIX is configured and files and objects are synchronized, day-to-day operations for MIMIX can be performed either from the web-based MIMIX Availability Manager or from a 5250 emulator for a System i5. MIMIX Availability Manager is easy to use and preferable for daily operations. Newer MIMIX functions may be available only through this user interface. Through preferences, individuals can customize which systems, installations, and data groups to monitor.
When you choose to display detailed status for a data group from MIMIX Availability Manager, the highest priority problem that exists for the data group determines which of several possible views of the Data Group Details window will be displayed. You can often take action to resolve problems directly from these detailed status windows.

• Data Group Details - Status This window identifies all of the replication jobs and services jobs needed by the data group and provides their status. Similar information is available from the merged view of the Data Group Status display.
• Data Group Details - User Journal This window represents replication performed by user journal replication processes, including journaled files, IFS objects, data areas, and data queues. It includes information about the replication of user journal transactions, including journal progress, performance, and recent activity. Similar information is available from database views of the Data Group Status display.
• Data Group Details - System Journal This window represents replication performed by system journal replication processes, including journal progress, performance, and recent activity. Similar information is available from object views of the Data Group Status display.
• Data Group Details - Activity This window summarizes activity for the selected data group that is experiencing replication problems. Problems are grouped by type of activity: File, Object, IFS Tracking, or Object Tracking. This window displays only one type of problem at a time, based on the activity type selected from the navigation bar. Similar information is available in the 5250 emulator when you use the following options from the Work with Data Groups display: 12=Files not active, 13=Objects in error, 51=IFS trk entries not active, and 53=Obj trk entries not active.
Activity, and Object Activity Details. Default filtering options in MIMIX Availability Manager only display problems with replicating objects from the system journal.

Failed requests: During normal processing, system journal replication processes may encounter object requests that cannot be processed due to an error. Often the error is due to a transient condition, such as when an object is in use by another process at the time the object retrieve process attempts to gather the object data. Although MIMIX will attempt some automatic retries, requests may still result in a Failed status. In many cases, failed entries can be resubmitted and they will succeed. Some errors may require user intervention, such as a never-ending process that holds a lock on the object.

MIMIX is shipped with the MIMIX Retry Monitor (#RTYDGACTE), which runs periodically and automatically resubmits all failed activity entries for all data groups. In order to use this monitor, it must be manually enabled, then started, using options on the Work with Monitors (WRKMON) display. If your environment results in numerous transient failed entries, it is recommended that you use the #RTYDGACTE monitor.

You can manually request that MIMIX retry processing for a data group activity entry that has a status of *FAILED. These entries can be viewed using the Work with Data Group Activity (WRKDGACT) command. From the Work with Data Group Activity or Work with Data Group Activity Entries displays, you can use the retry option to resubmit individual failed entries or all of the entries for an object. This option calls the Retry Data Group Activity Entries (RTYDGACTE) command. From the Work with Data Group Activity display, you can also specify a time at which to start the request, thereby delaying the retry attempt until a time when it is more likely to succeed. MIMIX Availability Manager supports manually retrying activities from appropriate windows by providing Retry as an available action in the Action List.
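A manual retry from a command line might look like the following sketch. The WRKDGACT and RTYDGACTE command names come from the text above; the parameters shown are illustrative assumptions:

```
/* Review failed activity entries for a data group                 */
/* (data group name and STATUS parameter are illustrative)         */
WRKDGACT DGDFN(INVENTORY SYSTEMA SYSTEMB) STATUS(*FAILED)

/* Resubmit the failed entries for that data group                 */
RTYDGACTE DGDFN(INVENTORY SYSTEMA SYSTEMB)
```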
Files on hold: When the database apply process detects a data synchronization problem, it places the file (individual member) on error hold and logs an error. File entries are in held status when an error is preventing them from being applied to the target system. You need to analyze the cause of the problem in order to determine how to correct and release the file and ensure that the problem does not occur again.

An option on the Work with Data Groups display provides quick access to the subset of file entries that are in error for a data group. From the Work with DG File Entries display, you can see the status of an entry and use a number of options to assist in resolving the error. An alternative view shows the database error code and journal code. Available options include access to the Work with DG Files on Hold (WRKDGFEHLD) command. The WRKDGFEHLD command allows you to work with file entries that are in a held status. You can view and work with the entry for which the error was detected and work with all other entries following the entry in error. MIMIX Availability Manager provides similar capabilities to those of WRKDGFEHLD from the following windows: Data Group Details - User Journal, Data Group Details - Activity, and File Activity Details. Default filtering options in MIMIX Availability Manager only display problems with replicating objects from the user journal.

Journal analysis: With user journal replication, when the system that is the source of replicated data fails, it is possible that some of the generated journal entries may not have been transmitted to or received by the target system. However, it is not always possible to determine this until the failed system has been recovered. Even if the
failed system is recovered, damage to a disk unit or to the journal itself may prevent an accurate analysis of any missed data. Once the source system is available again, if there is no damage to the disk unit or journal and its associated journal receivers, you can use the journal analysis function to help determine what journal entries may have been missed and to which files the data belongs. You can only perform journal analysis on the system where a journal resides.
These messages are sent to both the primary and secondary message queues that are specified for the system definition. In addition to these message queues, message entries are recorded in a MIMIX message log file.

The MIMIX message log provides a powerful tool for problem determination. Maintaining a message log file allows you to keep a record of messages issued by MIMIX as an audit trail. In addition, the message log provides robust subset and filter capabilities, the ability to locate and display related job logs, and a powerful debug tool. When messages are issued, they are initially sent to the specified primary and secondary message queues. If these message queues are cleared, the message log file provides a second level of information about MIMIX operations.

The message log on the management system contains messages from the management system and each network system defined within the installation. The system manager is responsible for collecting messages from all network systems. On a network system, the message log contains only those messages generated by MIMIX activity on that system.

MIMIX automatically performs cleanup of the message log on a regular basis. The system manager deletes entries from the message log file based on the value of the Keep system history parameter in the system definition. However, if you process an unusually high volume of replicated data, you may also want to periodically delete unnecessary message log entries, since the file grows in size depending on the number of messages issued in a day.
Chapter 2
In general terms, a replication path is a series of processes that, together, represent the critical path on which data to be replicated moves from its origin to its destination. MIMIX uses two replication paths to accommodate differences in how replication occurs for databases and objects. These paths operate with configurable levels of cooperation or can operate independently. The user journal replication path captures changes to critical files and objects configured for replication through the user journal using the i5/OS remote journaling function. In previous versions, MIMIX DB2 Replicator provided this function. The system journal replication path handles replication of critical system objects (such as user profiles or spooled files), integrated file system (IFS) objects, and document library objects (DLOs) using the i5/OS system journal. In previous versions, MIMIX Object Replicator provided this function.
Configuration choices determine the degree of cooperative processing used between the system journal and user journal replication paths when replicating files, IFS objects, data areas, and data queues. Within each replication path, MIMIX uses a series of processes. This chapter describes the replication paths and the processes used in each. The topics in this chapter include:

• Replication job and supporting job names on page 47 describes the replication paths for database and object information. Included is a table which identifies the replication job names for each of the processes that make up the replication path.
• Cooperative processing introduction on page 50 describes three variations available for performing replication activities using a coordinated effort between user journal processing and system journal processing.
• System journal replication on page 53 describes the system journal replication path which is designed to handle the object-related availability needs of your system through system journal processing.
• User journal replication on page 61 describes remote journaling and the benefits of using remote journaling with MIMIX.
• User journal replication of IFS objects, data areas, data queues on page 72 describes a technique which allows replication of changed data for certain object types through the user journal.
• Lesser-used processes for user journal replication on page 76 describes two lesser-used replication processes, MIMIX source-send processing for database replication and the data area poller process.
The processes listed in Table 3 are identified by these abbreviations: CNRRCV (container receive), CNRSND (container send), DAPOLL (data area poller), DBAPY (database apply), DBRCV (database receive), DBRDR (database reader), DBSND (database send), JRNMGR (journal manager), MXCOMMD, MXOBJSELPR, OBJAPY (object apply), OBJRTV (object retrieve), OBJSND (object send), OBJRCV (object receive), STSSND (status send), and SYSMGR (system manager).
Table 3. MIMIX processes and their corresponding job names (continued)

System manager receive process: runs on network systems; job name SR******** (notes 1, 2)
Status receive: runs on the source system; job name sdn_STSRCV (notes 1, 3)
Tracking entry update process: runs on the source or target system; job name sdn_TEUPD (notes 3, 5)
1. Send and receive processes depend on communication. The job name varies, depending on the transfer protocol. OptiConnect job names start with APIA* in the QSOC subsystem. The SNA job name is derived from the remote location name. TCP/IP uses the port number or an alias as the job name. The alias is defined on the service table entry.
2. The system manager runs on both source and target systems. The ******** in the job name format indicates the name of the system definition.
3. The characters sdn in a job name indicate the short data group name.
4. The character s is the apply session letter.
5. The job is used only for replication with advanced journaling and is started only when needed.
When a data group definition meets the requirements for MIMIX Dynamic Apply, any logical files and physical (source and data) files properly identified for cooperative processing will be processed via MIMIX Dynamic Apply unless a known restriction prevents it. When a data group definition does not meet the requirements for MIMIX Dynamic Apply but still meets legacy cooperative processing requirements, any PF-DTA or PF38-DTA files properly configured for cooperative processing will be replicated using legacy cooperative processing. All other types of files are processed using system journal replication.

IFS objects, data areas, and data queues that can be journaled are not automatically configured for advanced journaling by default. These object types must be manually configured to use advanced journaling.

In all variations of cooperative processing, the system journal is used to replicate the following operations:

• The creation of new objects that do not deposit an entry in a user journal when they are created
• Restores of objects on the source system
• Move and rename operations from a non-replicated library or path into a library or path that is configured for replication
relationships by assigning them to the same or appropriate apply sessions. It is also much better at maintaining data integrity of replicated objects which previously needed legacy cooperative processing in order to replicate some operations such as creates, deletes, moves, and renames. Another benefit of MIMIX Dynamic Apply is more efficient hold log processing by enabling multiple files to be processed through a hold log instead of just one file at a time. New data groups created with the shipped default configuration values are configured to use MIMIX Dynamic Apply. This configuration requires data group object entries and data group file entries. For more information, see Identifying logical and physical files for replication on page 105 and Requirements and limitations of MIMIX Dynamic Apply on page 110.
Advanced journaling
The term advanced journaling refers to journaled IFS objects, data areas, or data queues that are configured for cooperative processing. When these objects are configured for cooperative processing, replication of changed bytes of the journaled object's data occurs through the user journal. This is more efficient than replicating an entire object through the system journal each time changes occur. Such a configuration also allows for the serialization of updates to IFS objects, data areas, and data queues with database journal entries. In addition, processing time for these object types may be reduced, even for equal amounts of data, because user journal replication eliminates the separate save, send, and restore processes necessary for system journal replication. Frequently you will see the phrase user journal replication of IFS objects, data areas, and data queues used interchangeably with the term advanced journaling. These terms are equivalent. For more information, see User journal replication of IFS objects, data areas, data queues on page 72 and Planning for journaled IFS objects, data areas, and data queues on page 85.
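Advanced journaling requires that the objects themselves be journaled to a user journal. On i5/OS this can be done with standard commands such as the following; the object and journal names are illustrative:

```
/* Journal an IFS stream file to a user journal */
STRJRN OBJ(('/orders/daily.dat')) JRN('/QSYS.LIB/APPLIB.LIB/APPJRN.JRN')

/* Journal a data area to the same journal */
STRJRNOBJ OBJ(APPLIB/ORDCTL) OBJTYPE(*DTAARA) JRN(APPLIB/APPJRN)
```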
These system value settings, along with the object audit value of each object, control what journal entries are created in the system journal (QAUDJRN) for an object. If an operation on an object is not represented by an entry in the system journal, MIMIX is not aware of the operation and cannot replicate it. The system objects you want to replicate are defined to a data group through data group object entries, data group DLO entries, and data group IFS entries. The term name space refers to this collection of objects that are identified for replication by MIMIX using the system journal replication processes. An object is replicated when it is created, restored, moved, or renamed into the MIMIX name space. While in the MIMIX name space, changes to the object or to the authority settings of the object are also replicated. Replication through the system journal is event-driven. When a data group is started, each process used in the replication path waits for its predetermined event to occur, then begins its activity. The processes are interdependent and run concurrently. The system journal replication path in MIMIX uses the following processes:
• Object send process: alternates between identifying objects to be replicated and transmitting control information about objects ready for replication to the target system.
• Object receive process: receives control information and waits for notification that additional source system processing, if any, is complete before passing the control information to the object apply process.
• Object retrieve process: if any additional information is needed for replication, obtains it and places it in a holding area. This process is also used when additional processing is required on the source system prior to transmission to the target system.
• Container send process: transmits any additional information from a holding area to the target system and notifies the control process of that action.
• Container receive process: receives any additional information and places it into a holding area on the target system.
• Object apply process: replicates objects according to the control information and any required additional information that is retrieved from the holding area.
• Status send process: notifies the source system of the status of the replication.
• Status receive process: updates the status on the source system and, if necessary, passes control information back to the object send process.
MIMIX uses a collection of structures and customized functions for controlling these structures during replication. Collectively the customized functions and structures are referred to as the work log. The structures in the work log consist of log spaces, work lists (implemented as user queues), and a distribution status file. When a data group is started, MIMIX uses the security audit journal to monitor for activity on objects within the name space. When activity occurs on an object, such as being accessed or changed, a corresponding journal entry is created in the security audit journal. As journal entries are added to the journal receiver on the source system, the object send process reads journal entries and determines whether they represent operations on objects that are within the name space. For each journal entry for an object within the name space, the object send process creates an activity entry in the work log. Creation of an activity entry includes adding the entry to the log space and adding a record to the distribution status file. An activity entry includes a copy of the journal entry and any related information associated with a replication operation for an object, including the status of the entry. User interaction with activity entries is through the Work with Data Group Activity display and the Work with DG Activity Entries display. There are two categories of activity entries: those that are self-contained and those that require the retrieval of additional information. Processing self-contained activity entries on page 54 describes the simplest object replication scenario. Processing data-retrieval activity entries on page 55 describes the object replication scenario in which additional data must be retrieved from the source system and sent to the target system.
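The work-log bookkeeping described above can be sketched as a simplified model. This is an illustrative sketch only, not MIMIX code: names such as `ActivityEntry`, `WorkLog`, and the sample object names are hypothetical, and a real name space is defined by data group entries rather than a hard-coded set.

```python
from dataclasses import dataclass, field

# Hypothetical name space: the objects identified for replication.
NAME_SPACE = {"APP/ORDERS", "APP/CUSTOMER"}

@dataclass
class ActivityEntry:
    """Copy of a journal entry plus replication bookkeeping."""
    journal_entry: dict
    status: str = "PENDING"

@dataclass
class WorkLog:
    """Log space + distribution status file + work lists (simplified)."""
    log_space: list = field(default_factory=list)
    distribution_status: list = field(default_factory=list)
    object_apply_list: list = field(default_factory=list)

    def add_activity_entry(self, journal_entry):
        entry = ActivityEntry(journal_entry)
        self.log_space.append(entry)                  # copy kept in the log space
        self.distribution_status.append(entry.status)  # status record
        return entry

def object_send(journal_entries, work_log):
    """Create an activity entry for each journal entry inside the name space."""
    created = []
    for je in journal_entries:
        if je["object"] in NAME_SPACE:  # entries outside the name space are ignored
            created.append(work_log.add_activity_entry(je))
    return created

wl = WorkLog()
entries = object_send(
    [{"object": "APP/ORDERS", "op": "CHANGE"},
     {"object": "QSYS/SOMEOBJ", "op": "CHANGE"}],  # not in the name space
    wl,
)
```

The key point the model captures is that only journal entries for objects inside the name space produce activity entries and distribution status records.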
Transmits the activity entry to a corresponding object receive process job on the target system.
The object receive process adds the received date and time to the activity entry, writes the activity entry to the log space, adds a record to the distribution status file, and places the activity entry on the object apply work list. Now each system has a copy of the activity entry. The next available object apply process job for the data group retrieves the activity entry from the object apply work list and replicates the operation represented by the entry. The object apply process adds the applied date and time to the activity entry, changes the status of the entry to CP (completed processing), and adds the entry to the status send work list. The status send process retrieves the activity entry from the status send work list and transmits the updated entry to a corresponding status receive process on the source system. The status receive process updates the activity entry in the work log and the distribution status file.
The object receive process adds the received date and time to the activity entry, writes the activity entry to the log space, and adds a record to the distribution status file. Now each system has a copy of the activity entry. The object receive process waits until the source system processing is complete before it adds the activity entry to the object apply work list.
Concurrently, the object send process reads the object send work list. When the object send process finds an activity entry in the object send work list, it performs one or more of the following additional steps on the entry:
• If an object retrieve job packaged the object, the activity entry is routed to the container send work list.
• The activity entry is transmitted to the target system, its status is updated, and a retrieved date and time is added to the activity entry.
On the source system the next available object retrieve process for the data group retrieves the activity entry from the object retrieve work list and processes the referenced object. In addition to retrieving additional information for the activity entry, additional processing may be required on the source system. The object retrieve process may perform some or all of the following steps:
• Retrieve the extended attribute of the object. This may be one step in retrieving the object or it may be the primary function required of the retrieve process.
• If necessary, perform cooperative processing activities, such as adding or removing a data group file entry.
• Package the object identified by the activity entry into a container in the data library.
The object retrieve process adds the retrieved date and time to the activity entry and changes the status of the entry to pending send. The activity entry is added to the object send work list. From there the object send job takes the appropriate action for the activity, which may be to send the entry to the target system, add the entry to the container send work list, or both.
The container send and receive processes are only used when an activity entry requires information in addition to what is contained within the journal entry. The next available job for the container send process for the data group retrieves the activity entry from the container send work list and retrieves the container for the packaged object from the data library. The container send job transmits the container to a corresponding job of the container receive process on the target system. The container receive process places the container in a data library on the target system. The container send process waits for confirmation from the container receive job, then adds the container sent date and time to the activity entry, changes the status of the activity entry to PA (pending apply), and adds the entry to the object send work list. The next available object apply process job for the data group retrieves the activity entry from the object apply work list, locates the container for the object in the data library, and replicates the operation represented by the entry. The object apply process adds the applied date and time to the activity entry, changes the status of the entry to CP (completed processing), and adds the entry to the status send work list. The status send process retrieves the activity entry from the status send work list and transmits the updated entry to a corresponding job of the status receive process on the source system. The status receive process updates the activity entry in the log space and the distribution status file. If the activity entry requires further processing, such as if an updated container is needed on the target system, the status receive job adds the entry to the object send work list.
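The status codes mentioned above (pending send, PA, CP) suggest a simple state machine for a data-retrieval activity entry. The sketch below is an illustration only: the transition table is inferred from the description in this section, not taken from MIMIX documentation, and the status names other than PA and CP are paraphrases.

```python
# Status flow for a data-retrieval activity entry, as described above.
# PA (pending apply) and CP (completed processing) come from the text;
# the other status names and the transition table are illustrative.
TRANSITIONS = {
    "RETRIEVED": "PENDING_SEND",   # object retrieve finished packaging
    "PENDING_SEND": "PA",          # container sent -> pending apply
    "PA": "CP",                    # object apply finished -> completed processing
}

def advance(status):
    """Move an activity entry to its next status, or fail if it is final."""
    if status not in TRANSITIONS:
        raise ValueError(f"no further processing from status {status!r}")
    return TRANSITIONS[status]

status = "RETRIEVED"
history = [status]
while status != "CP":
    status = advance(status)
    history.append(status)
```

A self-contained activity entry would skip the retrieval and container stages and move more directly to CP.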
The system journal replication path within MIMIX relies on entries placed in the system journal by i5/OS object auditing functions. To ensure that objects configured for this replication path retain an object auditing value that supports replication, MIMIX evaluates and changes an object's auditing value when necessary. To do this, MIMIX employs a configuration value that is specified on the Object auditing value (OBJAUD) parameter of data group entries (object, IFS, DLO) configured for the system journal replication path. When MIMIX determines that an object's auditing value is lower than the configured value, it changes the object to have the higher configured value specified in the data group entry that is the closest match to the object. The OBJAUD parameter supports object audit values of *ALL, *CHANGE, or *NONE. MIMIX evaluates and may change an object's auditing value when specific conditions exist during object replication or during processing of a Start Data Group (STRDG) request. This evaluation process can also be invoked manually for all objects identified for replication by a data group.
During replication - MIMIX may change the auditing value during replication when an object is replicated because it was created, restored, moved, or renamed into the MIMIX name space (the group of objects defined to MIMIX).
While starting a data group - MIMIX may change the auditing value while processing a STRDG request if the request specified processes that cause object send (OBJSND) jobs to start and the request occurred after a data group switch or after a configuration change to one or more data group entries (object, IFS, or DLO). Shipped command defaults for the STRDG command allow MIMIX to set object auditing if necessary. If you would rather set the auditing level for replicated objects yourself, you can specify *NO for the Set object auditing level (SETAUD) parameter when you start data groups.
Invoking manually - The Set Data Group Auditing (SETDGAUD) command provides the ability to manually set the object auditing level of existing objects identified for replication by a data group. When the command is invoked, MIMIX checks the audit value of existing objects identified for system journal replication. Shipped default values on the command cause MIMIX to change the object auditing value of objects to match the configured value when an object's actual value is lower than the configured value. The SETDGAUD command is used during initial configuration of a data group. Otherwise, it is not necessary for normal operations and should only be used under the direction of a trained MIMIX support representative. The SETDGAUD command also supports optionally forcing a change to a configured value that is lower than the existing value through its Force audit value (FORCE) parameter.
Evaluation processing - Regardless of how the object auditing evaluation is invoked, MIMIX may find that an object is identified by more than one data group entry within the same class of object (IFS, DLO, or library-based). It is important to understand the order of precedence for processing data group entries. Data group entries are processed in order from most generic to most specific. IFS entries are processed using the unicode character set; object entries and DLO entries
are processed using the EBCDIC character set. The first entry (more generic) found that matches the object is used until a more specific match is found. The entry that most specifically matches the object is used to process the object. If the object has a lower audit value, it is set to the configured auditing value specified in the data group entry that most specifically matches the object. When MIMIX processes a data group IFS entry and changes the auditing level of objects which match the entry, all of the directories in the object's directory path are checked and, if necessary, changed to the new auditing value. In the case of an IFS entry with a generic name, all descendants of the IFS object may also have their auditing value changed. When you change a data group entry, MIMIX updates all objects identified by the same type of data group entry to ensure that auditing is set properly for objects identified by multiple entries with different configured auditing values. For example, if a new DLO entry is added to a data group, MIMIX sets object auditing for all objects identified by the data group's DLO entries, but not for its object entries or IFS entries. For more information and examples of setting auditing values with the SETDGAUD command, see Setting data group auditing values manually on page 297.
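The precedence and audit-raising rules described above can be modeled in a few lines. This is a hedged sketch: the ranking *NONE < *CHANGE < *ALL and the "most specific match wins" rule follow the text, but the pattern-matching implementation and function names are invented for illustration (MIMIX's actual generic-name matching is more involved).

```python
AUDIT_RANK = {"*NONE": 0, "*CHANGE": 1, "*ALL": 2}

def most_specific_match(object_path, entries):
    """entries: list of (pattern, configured_audit). A trailing '*' makes a
    pattern generic; the longest (most specific) matching pattern wins."""
    best = None
    for pattern, audit in entries:
        if pattern.endswith("*"):
            matched = object_path.startswith(pattern[:-1])
        else:
            matched = object_path == pattern
        if matched and (best is None or len(pattern) > len(best[0])):
            best = (pattern, audit)
    return best

def new_audit_value(current, configured, force=False):
    """Raise a lower audit value to the configured one; only lower an audit
    value when a force (cf. the SETDGAUD FORCE parameter) is requested."""
    if AUDIT_RANK[current] < AUDIT_RANK[configured]:
        return configured
    if force and AUDIT_RANK[current] > AUDIT_RANK[configured]:
        return configured
    return current

entries = [("/home/*", "*CHANGE"), ("/home/app/*", "*ALL")]
match = most_specific_match("/home/app/data.txt", entries)
```

Here `/home/app/data.txt` matches both entries, but the more specific `/home/app/*` entry determines the configured audit value.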
replication performance of journal entries and allows database images to be sent to the target system in real time. This real-time operation is called the synchronous delivery mode. If the synchronous delivery mode is used, the journal entries are guaranteed to be in main storage on the target system prior to control being returned to the application on the source machine. It also allows the journal receiver save and restore operations to be moved to the target system, reducing resource utilization on the source machine.
Synchronous delivery
In synchronous delivery mode the target system is updated in real time with journal entries as they are generated by the source applications. The source applications do not continue processing until the journal entries are sent to the target journal. Each journal entry is first replicated to the target journal receiver in main memory on the target system (1 in Figure 3). When the source system receives notification of the delivery to the target journal receiver, the journal entry is placed in the source journal receiver (2) and the source database is updated (3). With synchronous delivery, journal entries that have been written to memory on the target system are considered unconfirmed entries until they have been written to
auxiliary storage on the source system and confirmation of this is received on the target system (4).
Figure 3. Synchronous mode sequence of activity in the IBM remote journal feature.
[Figure: the source system (applications, source journal receiver (local), production database) and the target system; the numbered steps correspond to the sequence described above.]
Unconfirmed journal entries are entries replicated to a target system but the state of the I/O to auxiliary storage for the same journal entries on the source system is not known. Unconfirmed entries only pertain to remote journals that are maintained synchronously. They are held in the data portion of the target journal receiver. These entries are not processed with other journal entries unless specifically requested or until confirmation of the I/O for the same entries is received from the source system. Confirmation typically is not immediately sent to the target system for performance reasons. Once the confirmation is received, the entries are considered confirmed journal entries. Confirmed journal entries are entries that have been replicated to the target system and the I/O to auxiliary storage for the same journal entries on the source system is known to have completed. With synchronous delivery, the most recent copy of the data is on the target system. If the source system becomes unavailable, you can recover using data from the target system. Since delivery is synchronous to the application layer, there are application performance and communications bandwidth considerations. There is some performance impact to the application when it is moved from asynchronous mode to synchronous mode for high availability purposes. This impact can be minimized by ensuring efficient data movement. In general, a minimum of a dedicated 100 megabit Ethernet connection is recommended for synchronous remote journaling.
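The unconfirmed/confirmed distinction can be illustrated with a small model. The class below is hypothetical; it only mirrors the sequence described in the text: entries arrive in target main memory unconfirmed, and confirmations from the source side are batched for performance.

```python
from dataclasses import dataclass

@dataclass
class RemoteEntry:
    seq: int
    confirmed: bool = False  # unconfirmed until source-side disk I/O is acknowledged

class TargetReceiver:
    """Illustrative model of a synchronously maintained remote journal receiver."""
    def __init__(self):
        self.entries = []

    def receive(self, seq):
        # Step 1: entry lands in target main memory, still unconfirmed.
        self.entries.append(RemoteEntry(seq))

    def confirm_through(self, seq):
        # Step 4: source reports auxiliary-storage I/O completed through seq.
        for e in self.entries:
            if e.seq <= seq:
                e.confirmed = True

    def confirmed_entries(self):
        return [e.seq for e in self.entries if e.confirmed]

rcv = TargetReceiver()
for s in (1, 2, 3):
    rcv.receive(s)
rcv.confirm_through(2)  # confirmation is batched, so entry 3 stays unconfirmed
```

After a source failure, the unconfirmed tail (entry 3 here) is exactly the set of entries needing the special switch processing described later in this section.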
MIMIX includes special switch processing for unconfirmed entries to ensure that the most recent transactions are preserved in the event of a source system failure. For more information, see Support for unconfirmed entries during a switch on page 70.
Asynchronous delivery
In asynchronous delivery mode, the journal entries are placed in the source journal first (A in Figure 4) and then applied to the source database (B). An independent job sends the journal entries from a buffer (C) to the target system journal receiver (D) at some time after control is returned to the source applications that generated the journal entries. Because the journal entries on the target system may lag behind the source system's database, in the event of a source system failure, entries may become trapped on the source system.
Figure 4. Asynchronous mode sequence of activity in the IBM remote journal feature.
[Figure: the source system (applications, source journal receiver (local), production database, and a buffer) and the target system (target journal message queue and target journal receiver (remote)); the lettered steps correspond to the sequence described above.]
With asynchronous delivery, the most recent copy of the data is on the source system. Performance critical applications frequently use asynchronous delivery. Default values used in configuring MIMIX for remote journaling use asynchronous delivery. This delivery mode is most similar to the MIMIX database send and receive processes.
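The asynchronous sequence (A through D) and the possibility of trapped entries can be sketched as follows. This is an illustrative model, not a detailed rendering of MIMIX behavior; the class name and the `drain` batching are assumptions.

```python
class AsyncDelivery:
    """Source journal is updated first; an independent job drains a buffer."""
    def __init__(self):
        self.source_receiver = []
        self.buffer = []
        self.target_receiver = []

    def write(self, entry):
        # A, B: the entry hits the source journal/database immediately...
        self.source_receiver.append(entry)
        self.buffer.append(entry)  # ...and is queued for later transmission (C)

    def drain(self, n):
        # D: the independent job ships up to n buffered entries to the target.
        for _ in range(min(n, len(self.buffer))):
            self.target_receiver.append(self.buffer.pop(0))

    def trapped_on_failure(self):
        # Entries still buffered when the source fails never reach the target.
        return list(self.buffer)

d = AsyncDelivery()
for e in ("E1", "E2", "E3"):
    d.write(e)
d.drain(2)
```

The model shows why the most recent copy of the data stays on the source system in asynchronous mode: `E3` exists in the source receiver but would be trapped if the source failed now.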
The RJ link
To simplify tasks associated with remote journaling, MIMIX implements the concept of a remote journal link. A remote journal link (RJ link) is a configuration element that identifies an i5/OS remote journaling environment. An RJ link identifies:
• A source journal definition that identifies the system and journal which are the source of journal entries being replicated from the source system.
• A target journal definition that defines a remote journal.
• Primary and secondary transfer definitions for the communications path for use by MIMIX.
• Whether the i5/OS remote journal function sends journal entries asynchronously or synchronously.
Once an RJ link is defined and other configuration elements are properly set, user journal replication processes will use the i5/OS remote journaling environment within their replication path. The concept of an RJ link is integrated into existing commands. The Work with RJ Links display makes it easy to identify the state of the i5/OS remote journaling environment defined by the RJ link.
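As a mental model, an RJ link bundles the four pieces of configuration listed above. The dataclass below is purely illustrative; its field names are not actual MIMIX configuration keywords, though the *ASYNC/*SYNC values mirror the delivery modes discussed in this section.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RJLink:
    """Illustrative model of a remote journal link configuration element."""
    source_journal_def: str      # identifies the source system and journal
    target_journal_def: str      # defines the remote journal
    primary_transfer_def: str    # primary communications path
    secondary_transfer_def: str  # backup communications path
    delivery: str                # "*ASYNC" or "*SYNC"

    def __post_init__(self):
        if self.delivery not in ("*ASYNC", "*SYNC"):
            raise ValueError("delivery must be *ASYNC or *SYNC")

link = RJLink("SRCJRN", "TGTJRN", "PRIMARY", "SECONDARY", "*ASYNC")
```

Making the object immutable reflects that an RJ link is configuration data: replication processes consume it, they do not mutate it.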
journal entries for database operations to be routed back to their originating system. See Support for unconfirmed entries during a switch on page 70 and RJ link considerations when switching on page 70 for more details.
*CNTRLD
The ENDRJLNK command's ENDOPT parameter is ignored and an immediate end is performed when either of the following conditions is true:
• The remote journal function is running in synchronous mode (DELIVERY(*SYNC)).
• The remote journal function is performing catch-up processing.
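The rule above reduces to a small decision function. This sketch assumes only what the text states: a requested controlled end becomes an immediate end under synchronous delivery or catch-up processing; the function name is illustrative.

```python
def effective_end_option(requested, delivery, catching_up):
    """ENDOPT on ENDRJLNK is ignored (an immediate end is performed) when the
    remote journal runs synchronously or is in catch-up processing."""
    if delivery == "*SYNC" or catching_up:
        return "*IMMED"
    return requested

end_opt = effective_end_option("*CNTRLD", "*SYNC", catching_up=False)
```

With asynchronous delivery and no catch-up in progress, the requested option is honored.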
RJ link monitors
User journal replication processes monitor the journal message queues of the journals identified by the RJ link. Two RJ link monitors are created automatically, one on the source system and one on the target system. These monitors provide added value by allowing MIMIX to automatically monitor the state of the remote journal link, to notify the user of problems, and to automatically recover the link when possible.
originated the replication and holds the source journal definition for the next system in the cascade. For more information about configuring for these environments, see Data distribution and data management scenarios on page 361.
used during a planned switch cause the RJ link to remain active. You may need to end the RJ link after a planned switch.
the hotel risks reserving too many or too few rooms. Without advanced journaling, serialization of these transactions cannot be guaranteed on the target system due to inherent differences in MIMIX processing from the user journal (database file) and the system journal (default for objects). With advanced journaling, MIMIX serializes these transactions on the target system by updating both the file and the data area through user journal processing. Thus, as long as the database file and data area are configured to be processed by the same apply session, updates occur on the target system in the same order they were originally made on the source system. Additional benefits of replicating IFS objects, data areas, and data queues from the user journal include:
• Replication is less intrusive. In traditional object replication, the save/restore process places locks on the replicated object on the source system. Database replication touches the user journal only, leaving the source object alone.
• Changes to objects replicated from the user journal may be replicated to the target system in a more timely manner. In traditional object replication, system journal replication processes must contend with potential locks placed on the objects by user applications.
• Processing time may be reduced, even for equal amounts of data. Database replication eliminates the separate save, send, and restore processes necessary for object replication.
• Objects replicated from the user journal can reduce the burden on object replication processes when there is a lot of activity being replicated through the system journal.
• Commitment control is supported for B journal entry types for IFS objects journaled to a user journal.
• Advanced journaling can be used in configurations that use either remote journaling or MIMIX source-send processes for user journal replication.
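The serialization guarantee described above (same apply session implies source order on the target) can be illustrated with a single FIFO queue standing in for one apply session. The object names below are hypothetical; this is a conceptual sketch, not MIMIX internals.

```python
from queue import Queue

# One apply session modeled as one FIFO queue, so a file update and a related
# data-area update replay on the target in source order.
apply_session = Queue()
target_state = {}

# Source-order transactions: reserve a room (file), then bump the count (data area).
apply_session.put(("RESERVATIONS_FILE", "add reservation 1001"))
apply_session.put(("ROOMCOUNT_DTAARA", "rooms_left=41"))

applied_order = []
while not apply_session.empty():
    obj, op = apply_session.get()
    target_state[obj] = op
    applied_order.append(obj)
```

If the file and the data area were replicated over two independent paths (as with mixed user-journal and system-journal replication), nothing would enforce this ordering, which is the hazard the hotel example describes.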
Restrictions and configuration requirements vary for IFS objects and data area or data queue objects. If one or more of the configuration requirements are not met, the system journal replication path is used. For detailed information, including supported journal entry types, see Identifying data areas and data queues for replication on page 112 and Identifying IFS objects for replication on page 118.
1. Data groups can also be configured for MIMIX source-send processing instead of MIMIX RJ support.
Tracking entries
A unique tracking entry is associated with each IFS object, data area, and data queue that is replicated using advanced journaling. The collection of data group IFS entries for a data group determines the subset of existing IFS objects on the source system that are eligible for replication using advanced journaling techniques. Similarly, the collection of data group object entries determines the subset of existing data areas and data queues on the source system that are eligible for replication using advanced journaling techniques. MIMIX requires a tracking entry for each of the eligible objects to identify how it is defined for replication and to assist with tracking status when it is replicated. IFS tracking entries identify IFS stream files, including the source and target file ID (FID), while object tracking entries identify data areas or data queues. When you initially configure a data group you must load tracking entries, start journaling for the objects which they identify, and synchronize the objects with the target system. The same is true when you add new or change existing data group IFS entries or object entries. It is also possible for tracking entries to be created automatically. After creating or changing data group IFS entries or object entries that are configured for advanced journaling, tracking entries are created the next time the data group is started. However, this method has disadvantages: it can significantly increase the amount of time needed to start a data group. Also, if the objects you intend to replicate with advanced journaling are not journaled before the start request is made, MIMIX places the tracking entries in *HLDERR state, and error messages indicate that journaling must be started and the objects must be synchronized between systems.
Once a tracking entry exists, it remains until one of the following occurs:
• The object identified by the tracking entry is deleted from the source system and replication of the delete action completes on the target system.
• The data group configuration changes so that an object is no longer identified for replication using advanced journaling.
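The load-time behavior described earlier, where tracking entries for objects that are not yet journaled are placed in *HLDERR state, can be sketched as follows. The `TrackingEntry` type, its field names, and the "ACTIVE" status are illustrative assumptions; only *HLDERR comes from the text.

```python
from dataclasses import dataclass

@dataclass
class TrackingEntry:
    """One tracking entry per replicated IFS object / data area / data queue."""
    object_path: str
    journaled: bool
    status: str = "ACTIVE"  # placeholder status; not a documented MIMIX value

def load_tracking_entry(object_path, journaled):
    entry = TrackingEntry(object_path, journaled)
    if not journaled:
        # Objects not journaled before the start request go to *HLDERR:
        # journaling must be started and the object synchronized.
        entry.status = "*HLDERR"
    return entry

ok = load_tracking_entry("/app/data/config.dat", journaled=True)
err = load_tracking_entry("/app/data/cache.dat", journaled=False)
```

This is why starting journaling before the start request, rather than relying on automatic creation, avoids a pile of *HLDERR entries to clean up.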
Figure 5 shows an IFS user directory structure, the include and exclude processing selected for objects within that structure, and the resultant list of tracking entries created by MIMIX.
Figure 5. IFS tracking entries produced by MIMIX
Viewing tracking entries is supported in both 5250 emulator and MIMIX Availability Manager interfaces. Their status is included with other data group status. You also can see what objects they identify, whether the objects are journaled, and their replication status. You can also perform operations on tracking entries, such as holding and releasing, to address replication problems.
and begins reading entries from the next journal receiver. This eliminates excessive use of disk storage and allows valuable system resources to be available for other processing. Besides indicating the mapping between source and target file names, data group file entries identify additional information used by database processes. The data group file entry can also specify a particular apply session to use for processing on the target system. A status code in the data group file entry also stores the status of the file or member in the MIMIX process. If a replication problem is detected, MIMIX puts the member in hold error (*HLDERR) status so that no further transactions are applied. Files can also be put on hold (*HLD) manually. Putting a file on hold causes MIMIX to retain all journal entries for the file in log spaces on the target system. If you expect to synchronize files at a later time, it is better to put the file in an ignored state. By setting files to an ignored state, journal entries for the file in the log spaces are deleted and additional entries received from the target system are discarded. This keeps the log spaces to a minimal size and improves efficiency for the apply process. The file entry option Lock member during apply indicates whether or not to allow only restricted access (read-only) to the file on the backup system. This file entry option can be specified on the data group definition or on individual data group entries.
You define a data group data area entry for each data area that you want MIMIX to manage. The data group definition determines how frequently the polling programs check for changes to data areas. The data area polling process runs on the source system. This process retrieves each data area defined to a data group at the interval you specify and determines whether or not a data area has changed. MIMIX checks for changes to the data area type and length as well as to the contents of the data area. If a data area has changed, the data area polling process retrieves the data area and converts it into a journal entry. This
journal entry is sent through the normal user journal replication processing and is used to update the data area on the target system. For example, if a data area that is defined to MIMIX is deleted and recreated with new attributes, the data area polling process will capture the new attributes and recreate the data area on the target system.
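The polling cycle described above compares each data area's type, length, and contents against the previous poll and emits a journal-entry-like record for any difference. The sketch below models that comparison; function and object names are illustrative, and the real process builds actual journal entries rather than dictionaries.

```python
def poll_data_areas(snapshot, previous):
    """Compare current data-area attributes/contents against the last poll and
    emit a pseudo journal entry for each changed or new data area."""
    entries = []
    for name, (dtype, length, value) in snapshot.items():
        if previous.get(name) != (dtype, length, value):
            entries.append({"object": name, "type": dtype,
                            "length": length, "value": value})
        previous[name] = (dtype, length, value)  # remember state for next poll
    return entries

prev = {"APP/COUNTER": ("*DEC", 5, "00042")}
current = {"APP/COUNTER": ("*DEC", 5, "00043"),  # contents changed
           "APP/FLAG": ("*CHAR", 1, "Y")}        # newly created
changes = poll_data_areas(current, prev)
```

A second poll against an unchanged snapshot produces no entries, which is the point of polling at a configured interval rather than journaling every read.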
Chapter 3
This chapter outlines what you need to do to prepare for using MIMIX. Preparing for the installation and use of MIMIX is a very important step toward meeting your availability management requirements. Because of their shared functions and their interaction with other MIMIX products, it is best to determine System i5 requirements for user journal and system journal processing in the context of your total MIMIX environment. Give special attention to planning and implementing security for MIMIX. General security considerations for all MIMIX products can be found in the License and Availability Manager book. In addition, you can make your systems more secure with MIMIX product-level and command-level security. Each product has its own product-level security, but you must also consider the security implications of common functions used by each product. Information about setting security for common functions is also found in the License and Availability Manager book. The topics in this chapter include:
• Checklist: pre-configuration on page 81 provides a procedure to follow to prepare to configure MIMIX on each system that participates in a MIMIX installation.
• Data that should not be replicated on page 83 describes how to consider what data should not be replicated.
• Planning for journaled IFS objects, data areas, and data queues on page 85 describes considerations when planning to use advanced journaling for IFS objects, data areas, or data queues.
• Starting the MIMIXSBS subsystem on page 90 describes how to start the MIMIXSBS subsystem, in which all MIMIX products run.
• Accessing the MIMIX Main Menu on page 91 describes the MIMIX Main Menu and its two assistance levels, basic and intermediate, which provide options to help simplify daily interactions with MIMIX.
Checklist: pre-configuration
You need to configure MIMIX on each system that participates in a MIMIX installation. Do the following:
1. By now, you should have completed the following tasks:
• The checklist for installing MIMIX software in the License and Availability Manager book.
• Turning on product-level security and granting authority to user profiles to control access to the MIMIX products.
2. At this time, you should review the information in Data that should not be replicated on page 83.
3. Decide what replication choices are appropriate for your environment. For detailed information see the chapter Planning choices and details by object class on page 93.
4. If it is not already active, start the MIMIXSBS subsystem using topic Starting the MIMIXSBS subsystem on page 90.
5. Configure each system in the MIMIX installation, beginning with the management system. The chapter Configuration checklists on page 137 identifies the primary options you have for configuring MIMIX.
6. Once you complete the configuration process you choose, you may also need to do one or more of the following:
• If you plan to use MIMIX Monitor in conjunction with MIMIX, you may need to write exit programs for monitoring activity and you may want to ensure that your monitor definitions are replicated. See the Using MIMIX book for more information.
• Verify the configuration.
• Verify any exit programs that are called by MIMIX.
• Update any automation programs you use with MIMIX and verify their operation.
• If you plan to use switching support, you or your Certified MIMIX Consultant may need to take additional action to set up and test switching. In order to use MIMIX Switch Assistant, a default model switch framework must be configured and identified in MIMIX policies. For more information about MIMIX Model Switch Framework, see the Using MIMIX Monitor book. For more information about switching and policies, see the Using MIMIX book.
Checklist: pre-configuration
Data that should not be replicated

You should not replicate the following:
• LAKEVIEW, MIMIXQGPL, or any MIMIX installation libraries.
• The LAKEVIEW or MIMIXOWN user profiles.
• System user profiles from one system to another. For example, QSYSOPR and QSECOFR should not be replicated.
• IBM i5/OS objects from one system to another. IBM-supplied libraries, files, and other objects for i5/OS typically begin with the prefix Q.
Planning for journaled IFS objects, data areas, and data queues
You can choose to use the cooperative processing support within MIMIX to replicate any combination of journaled IFS objects, data area objects, or data queue objects using user journal replication processes. In addition to configuration and journaling requirements and the restrictions that apply, you need to address several other considerations when planning to replicate journaled IFS objects, data areas, or data queues. These considerations affect whether journals should be shared, whether objects should be replicated in a data group shared with database files, whether configuration changes are needed to change apply sessions for database files, and whether exit programs need to be updated.
The benefits of user journal replication are described in Benefits of advanced journaling on page 72. For restrictions and limitations, see Identifying data areas and data queues for replication on page 112 and Identifying IFS objects for replication on page 118.
You may have previously used data groups with a Data group type (TYPE) value of *OBJ to separate replication of IFS, data area, or data queue objects from other activity. Converting these data groups to use advanced journaling will not cause problems with the data group. The data group definition and existing data group entries must be changed to the values required for advanced journaling.

When converting an existing data group to use advanced journaling, all objects in the IFS path or the library specified that match the selection criteria are selected. You may need to create additional data group IFS or object entries in order to achieve the desired results. This may include creating entries that exclude objects from replication.

Adding IFS, data area, or data queue objects configured for advanced journaling to an existing database replication environment may increase replication activity and affect performance. If a large amount of data is to be replicated, consider the overall replication performance and throughput requirements when choosing a configuration.

Changing the replication mechanism of IFS objects, data areas, or data queues from system journal replication to user journal replication generally reduces bandwidth consumption, improves replication latency, and eliminates the locking contention associated with the save and restore process. However, if these objects have never been replicated, the addition of IFS byte stream files, data areas, or data queues to the replication environment will increase bandwidth consumption and processing workload.
Conversion examples
To illustrate a simple conversion, assume that the systems defined to data group KEYAPP are running on IBM i V5R4. You use this data group for system journal replication of the objects in library PRODLIB. The data group has one data group object entry, which has the following values:

  LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD) COOPDB(*YES) COOPTYPE(*FILE)

Example 1 - You decide to use advanced journaling for all *DTAARA and *DTAQ objects replicated with data group KEYAPP. You have confirmed that the data group definition specifies TYPE(*ALL) and does not need to change. After performing a controlled end of the data group, you change the data group object entry to have the following values:

  LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD) COOPDB(*YES) COOPTYPE(*FILE *DTAARA *DTAQ)

When the data group is started, object tracking entries are loaded and journaling is started for the data area and data queue objects in PRODLIB. Those objects will now be replicated from a user journal. Any other object types in PRODLIB continue to be replicated from the system journal.

Example 2 - You want to use advanced journaling for data group KEYAPP, but one data area, XYZ, must remain replicated from the system journal. You will need the data group object entry described in Example 1:
  LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD) COOPDB(*YES) COOPTYPE(*FILE *DTAARA *DTAQ)

You will also need a new data group object entry that specifies the following so that data area XYZ can be replicated from the system journal:

  LIB1(PRODLIB) OBJ1(XYZ) OBJTYPE(*DTAARA) PRCTYPE(*INCLD) COOPDB(*NO)
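The effect of Example 2 can be sketched as a small selection routine: the exact-name entry for XYZ is more specific than the generic *ALL entry, so it decides how the object is replicated. The entry structure and function below are illustrative assumptions, not MIMIX internals:

```python
# Hypothetical sketch: pick the most specific matching object entry and
# decide whether an object is cooperatively processed (user journal) or
# replicated from the system journal. Field names are assumptions.
ENTRIES = [
    {"obj": "*ALL", "objtype": "*ALL", "coopdb": True,
     "cooptype": {"*FILE", "*DTAARA", "*DTAQ"}},   # generic include entry
    {"obj": "XYZ", "objtype": "*DTAARA", "coopdb": False,
     "cooptype": set()},                            # exact-name override
]

def journal_for(name, objtype, entries=ENTRIES):
    matches = [e for e in entries
               if e["obj"] in ("*ALL", name)
               and e["objtype"] in ("*ALL", objtype)]
    # an exact name beats *ALL; an exact type beats *ALL
    best = max(matches, key=lambda e: (e["obj"] != "*ALL",
                                       e["objtype"] != "*ALL"))
    if best["coopdb"] and objtype in best["cooptype"]:
        return "user journal"
    return "system journal"

print(journal_for("XYZ", "*DTAARA"))     # system journal
print(journal_for("PAYBAL", "*DTAARA"))  # user journal
```

Here PAYBAL is a hypothetical data area name used only to show that every other *DTAARA in the library follows the generic entry.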
For incomplete journal entries, MIMIX provides two or more journal entries with duplicate journal entry sequence numbers, journal codes, and types to the user exit program when the data for the incomplete entry is retrieved. Programs need to correctly handle these duplicate entries, which represent the single, original journal entry. Journal entries for journaled IFS objects, data areas, and data queues will be routed to the user exit program. This may be a performance consideration relative to user exit program design.
Contact your Certified MIMIX Consultant for assistance with user exit programs.
We recommend that you use the MIMIX Basic Main Menu unless you must access the MIMIX Intermediate Main Menu.
Figure 6. MIMIX Basic Main Menu
 MIMIX Basic Main Menu
                                                        System:   SYSTEM1
 Select one of the following:

      1. Availability status                            WRKMMXSTS
      2. Start MIMIX
      3. End MIMIX
      5. Start or complete switch

     11. Configuration menu
     12. Work with monitors
     13. Work with messages

     31. Product management menu

 Selection or command
 ===>

 F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel   F21=Assistance level
 (C) Copyright Lakeview Technology Inc., 1990, 2007.
Figure 7. MIMIX Intermediate Main Menu
 MIMIX Intermediate Main Menu

 Select one of the following:

      1. Work with data groups                          WRKDG
      2. Work with systems                              WRKSYS
      3. Work with messages                             WRKMSGLOG
      4. Work with monitors                             WRKMON

     11. Configuration menu
     12. Compare, verify, and synchronize menu
     13. Utilities menu

     31. Product management menu                        LAKEVIEW/PRDMGT
 Selection or command
 ===>

 F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel   F21=Assistance level
 (C) Copyright Lakeview Technology Inc., 1990, 2007.
Chapter 4 Planning choices and details by object class
• Identifying DLOs for replication on page 124 describes how MIMIX interprets the data group DLO entries defined for a data group and includes examples for documents and folders.
• Processing of newly created files and objects on page 127 describes how new IFS objects, data areas, data queues, and files that have journaling implicitly started are replicated from the user journal.
• Processing variations for common operations on page 130 describes configuration-related variations in how MIMIX replicates move/rename, delete, and restore operations.
Replication choices by object class:

• Objects of type *FILE, extended attributes PF (data, source) and LF
  Configured by: Object entries (1)
  See: Identifying library-based objects for replication on page 100

• Objects of type *DTAARA and *DTAQ
  Configured by: Object entries (system journal), Object entries and Object tracking entries (user journal) (2), or Data area entries
  See: Identifying library-based objects for replication on page 100; Identifying data areas and data queues for replication on page 112

• IFS objects
  Configured by: IFS entries (system journal) or IFS entries and IFS tracking entries (user journal) (2)
  See: Identifying IFS objects for replication on page 118

• DLOs
  Configured by: DLO entries
  See: Identifying DLOs for replication on page 124

Notes:
1. New data groups are created to use remote journaling and to cooperatively process files using MIMIX Dynamic Apply. Existing data groups can be converted to this method of cooperative processing.
2. User journal replication can be configured for either remote journaling or MIMIX source-send processes.
The values *CHANGE and *ALL result in replication of T-ZC and T-YC journal entries. The value *NONE prevents replication of attribute and data changes for the identified object or DLO because T-ZC and T-YC entries are not recorded in the system journal. For files configured for MIMIX Dynamic Apply and any IFS objects, data areas, or data queues configured for user journal replication, the value *NONE can improve MIMIX performance by preventing unneeded entries from being written to the system journal.
When a compare request includes an object with a configured object auditing value of *NONE, any differences found for attributes that could generate T-ZC or T-YC journal entries are reported as *EC (equal configuration). You may also want to read the following:
• For more information about when MIMIX sets an object's auditing value, see Managing object auditing on page 57.
• For more information about manually setting values and examples, see Setting data group auditing values manually on page 297.
• To see what attributes can be compared and replicated, see the following topics:
  • Attributes compared and expected results - #FILATR, #FILATRMBR audits on page 591
  • Attributes compared and expected results - #OBJATR audit on page 596
  • Attributes compared and expected results - #IFSATR audit on page 604
  • Attributes compared and expected results - #DLOATR audit on page 606
For *FILE objects configured for replication through the system journal, MIMIX caches extended file attribute information for a fixed set of *FILE objects. Also, the Omit content (OMTDTA) parameter provides the ability to omit a subset of data-changing operations from replication. For more information, see Caching extended attributes of *FILE objects on page 345 and Omitting T-ZC content from system journal replication on page 387. For *DTAARA and *DTAQ object types, MIMIX supports replication using either system journal or user journal replication processes. A configuration that uses the user journal is also called an advanced journaling configuration. Additional information, including configuration requirements, is described in Identifying data areas and data queues for replication on page 112.
How MIMIX uses object entries to evaluate journal entries for replication
The following information and example can help you determine whether the objects you specify in data group object entries will be selected for replication. MIMIX determines which replication process will be used only after it determines whether the library-based object will be replicated. When determining whether to process a journal entry for a library-based object, MIMIX looks for a match between the object information in the journal entry and one of the data group object entries. The data group object entries are checked from the most specific to the least specific. The library name is the first search element, followed by the object type, the attribute (for files and device descriptions), and the object name. The most significant match found (if any) is checked to determine whether to include or exclude the journal entry in replication. Table 7 shows how MIMIX checks a journal entry for a match with a data group object entry. The columns are arranged to show the priority of the elements within the object entry, with the most significant (library name) at left and the least significant (object name) at right.
Table 7. Matching order for library-based object names

Search Order   Library Name   Object Type   Attribute (1)   Object Name
     1         Exact          Exact         Exact           Exact
     2         Exact          Exact         Exact           Generic*
     3         Exact          Exact         Exact           *ALL
     4         Exact          Exact         *ALL            Exact
     5         Exact          Exact         *ALL            Generic*
     6         Exact          Exact         *ALL            *ALL
     7         Exact          *ALL          Exact           Exact
     8         Exact          *ALL          Exact           Generic*
     9         Exact          *ALL          Exact           *ALL
    10         Exact          *ALL          *ALL            Exact
    11         Exact          *ALL          *ALL            Generic*
    12         Exact          *ALL          *ALL            *ALL
    13         Generic*       Exact         Exact           Exact
    14         Generic*       Exact         Exact           Generic*
    15         Generic*       Exact         Exact           *ALL
    16         Generic*       Exact         *ALL            Exact
    17         Generic*       Exact         *ALL            Generic*
    18         Generic*       Exact         *ALL            *ALL
    19         Generic*       *ALL          Exact           Exact
    20         Generic*       *ALL          Exact           Generic*
    21         Generic*       *ALL          Exact           *ALL
    22         Generic*       *ALL          *ALL            Exact
    23         Generic*       *ALL          *ALL            Generic*
    24         Generic*       *ALL          *ALL            *ALL
1. The extended object attribute is only checked for objects of type *FILE and *DEVD.
When configuring data group object entries, the flexibility of the generic support allows a variety of include and exclude combinations for a given library or set of libraries. But generic name support can also cause unexpected results if it is not well planned. Consider the search order shown in Table 7 when configuring data group object entries to ensure that objects are not unexpectedly included in or excluded from replication. Example - Say that you have a data group configured with data group object entries like those shown in Table 9. The journal entries MIMIX is evaluating for replication are shown in Table 8.
Table 8. Sample journal transactions for objects in the system journal

Library     Object
FINANCE     BOOKKEEP
FINANCE     ACCOUNTG
FINANCE     BALANCE
FINANCE     ACCOUNT1
A transaction is received from the system journal for program BOOKKEEP in library FINANCE. MIMIX will replicate this object since it fits the criteria of the first data group object entry shown in Table 9. A transaction for file ACCOUNTG in library FINANCE would also be replicated since it fits the third entry. A transaction for data area BALANCE in library FINANCE would not be replicated since it fits the second entry, an Exclude entry.
Table 9. Sample data group object entries, arranged in order from most to least specific

Entry   Source Library   Object Type   Attribute   Object Name   Process Type
  1     FINANCE          *PGM          *ALL        *ALL          *INCLD
  2     FINANCE          *DTAARA      *ALL        *ALL          *EXCLD
  3     FINANCE          *ALL          *ALL        ACC*          *INCLD
Likewise, a transaction for data area ACCOUNT1 in library FINANCE would not be replicated. Although the transaction fits both the second and third entries shown in Table 9, the second entry determines whether to replicate because it provides a more significant match in the second criteria checked (object type). The second entry provides an exact match for the library name, an exact match for the object type, and an object name match to *ALL. In order for MIMIX to process the data area ACCOUNT1, an additional data group object entry with process type *INCLD could be added for an object type of *DTAARA with an exact name of ACCOUNT1 or a generic name ACC*.
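The Table 7 search order amounts to a lexicographic "most specific first" comparison across the four elements. The following sketch models that comparison against the Table 8 transactions and Table 9 entries; the data shapes are illustrative, not MIMIX internals:

```python
# Sketch of the Table 7 matching order: rank each element of an entry
# (0 = exact, 1 = generic*, 2 = *ALL) and pick the matching entry whose
# rank tuple -- library, type, attribute, name -- is lexicographically
# lowest, i.e. the most specific match.
def rank(pattern, value):
    if pattern == "*ALL":
        return 2
    if pattern.endswith("*"):                       # generic name
        return 1 if value.upper().startswith(pattern[:-1].upper()) else None
    return 0 if pattern.upper() == value.upper() else None

def select_entry(entries, lib, otype, attr, name):
    candidates = []
    for e in entries:
        key = (rank(e["lib"], lib), rank(e["type"], otype),
               rank(e["attr"], attr), rank(e["obj"], name))
        if None not in key:                         # entry matches at all
            candidates.append((key, e))
    if not candidates:
        return None
    return min(candidates, key=lambda c: c[0])[1]   # most specific wins

# The Table 9 entries:
ENTRIES = [
    {"lib": "FINANCE", "type": "*PGM",    "attr": "*ALL", "obj": "*ALL", "prc": "*INCLD"},
    {"lib": "FINANCE", "type": "*DTAARA", "attr": "*ALL", "obj": "*ALL", "prc": "*EXCLD"},
    {"lib": "FINANCE", "type": "*ALL",    "attr": "*ALL", "obj": "ACC*", "prc": "*INCLD"},
]

# ACCOUNT1 matches entries 2 and 3; the exact object type is checked
# before the object name, so the data area is excluded, as explained.
print(select_entry(ENTRIES, "FINANCE", "*DTAARA", "", "ACCOUNT1")["prc"])  # *EXCLD
```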
For an output queue that is identified by an object entry with the appropriate settings, all spooled files for the output queue (*OUTQ) are replicated by system journal replication processes.
Table 10. Data group object entry parameter values for spooled file replication

Parameter                   Value
Object type (OBJTYPE)       *ALL or *OUTQ
Replicate spooled files     *YES
It is important to consider which spooled files must be replicated and which should not. Some output queues contain a large number of non-critical spooled files and probably should not be replicated. Most likely, you want to limit the spooled files that you replicate to mission-critical information. It may be useful to direct important spooled files that should be replicated to specific output queues instead of defining a large number of output queues for replication. When an output queue is selected for replication and the data group object entry specifies *YES for Replicate spooled files, MIMIX ensures that the values *SPLFDTA and *PRTDTA are included in the system value for the security auditing level (QAUDLVL). This causes the system to generate spooled file (T-SF) entries in the system journal. When a spooled file is created, moved, deleted, or its attributes are changed, the resulting entries in the system journal are processed by a MIMIX object send job and are replicated.
program that automatically prints spooled files, you may want to use one of these values to control what is printed after replication when printer writers are active. If you move a spooled file between output queues that have different configured values for the SPLFOPT parameter, consider the following:
• Spooled files moved from an output queue configured with SPLFOPT(*NONE) to an output queue configured with SPLFOPT(*HLD) are placed in a held state on the target system.
• Spooled files moved from an output queue configured with SPLFOPT(*HLD) to an output queue configured with SPLFOPT(*NONE) or SPLFOPT(*HLDONSAV) remain in a held state on the target system until you take action to release them.
You should be aware of common characteristics of replicating library-based objects, such as when the configured object auditing value is used and how MIMIX interprets data group entries to identify objects eligible for replication. For this information, see Configured object auditing value for data group entries on page 98 and How MIMIX uses object entries to evaluate journal entries for replication on page 101. Some advanced techniques may require specific configurations. See Configuring advanced replication techniques on page 353 for additional information. For detailed procedures, see Creating data group object entries on page 267.
defaults are used. With this configuration, logical and physical files are processed primarily from the user journal.

Cooperative journal - The value specified for the Cooperative journal (COOPJRN) parameter in the data group definition is critical to determining how files are cooperatively processed. When creating a new data group, you can explicitly specify a value or you can allow MIMIX to automatically change the default value (*DFT) to either *USRJRN or *SYSJRN based on whether operating system and configuration requirements for MIMIX Dynamic Apply are met. When requirements are met, MIMIX changes the value *DFT to *USRJRN. When the MIMIX Dynamic Apply requirements are not met, MIMIX changes *DFT to *SYSJRN.

Note: Data groups created prior to upgrading to version 5 continue to use their existing configuration. The installation process sets the value of COOPJRN to *SYSJRN, and this value remains in effect until you take action as described in Converting to MIMIX Dynamic Apply on page 150.

When a data group definition meets the requirements for MIMIX Dynamic Apply, any logical files and physical (source and data) files properly identified for cooperative processing will be processed via MIMIX Dynamic Apply unless a known restriction prevents it. When a data group definition does not meet the requirements for MIMIX Dynamic Apply but still meets legacy cooperative processing requirements, any PF-DTA or PF38-DTA files properly configured for cooperative processing will be replicated using legacy cooperative processing. All other types of files are processed using system journal replication.
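The *DFT resolution described above can be sketched as a tiny helper (a hypothetical function, not a MIMIX command; the requirements check is a placeholder boolean):

```python
# Sketch: an explicit COOPJRN value is kept as-is; *DFT becomes *USRJRN
# when MIMIX Dynamic Apply requirements are met, otherwise *SYSJRN.
def resolve_coopjrn(coopjrn, dynamic_apply_ok):
    if coopjrn != "*DFT":
        return coopjrn                   # explicit value is honored
    return "*USRJRN" if dynamic_apply_ok else "*SYSJRN"

print(resolve_coopjrn("*DFT", True))     # *USRJRN
print(resolve_coopjrn("*DFT", False))    # *SYSJRN
```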
Logical file considerations - Consider the following for logical files:
• Logical files are replicated through the user journal when MIMIX Dynamic Apply requirements are met. Otherwise, they are replicated through the system journal.
• It is strongly recommended that logical files reside in the same data group as all of their associated physical files.
Physical file considerations - Consider the following for physical files:
• Physical files (source and data) are replicated through the user journal when MIMIX Dynamic Apply requirements are met. Otherwise, data files are replicated using legacy cooperative processing if those requirements are met, and source files are replicated through the system journal.
• If a data group definition specifies TYPE(*DB) and the configuration meets other MIMIX Dynamic Apply requirements, source files need to be identified by both data group object entries and data group file entries.
• If a data group is configured for only user journal replication (TYPE is *DB) and does not meet other configuration requirements for MIMIX Dynamic Apply, source files should be identified by only data group file entries.
• If a data group is configured for only system journal replication (TYPE is *OBJ), any source files should be identified by only data group object entries. Any data group object entries configured for cooperative processing will be replicated through the
system journal and should not have any corresponding data group file entries.
• Physical files with referential constraints require a field in another physical file to be valid. All physical files in a referential constraint structure must be in the same database apply session. See Requirements and limitations of MIMIX Dynamic Apply on page 110 and Requirements and limitations of legacy cooperative processing on page 111 for additional information.

For more information about load balancing apply sessions, see Database apply session balancing on page 87.
Commitment control - This database technique allows multiple updates to one or more files to be considered a single transaction. When used, commitment control maintains database integrity by not exposing a part of a database transaction until the whole transaction completes. This ensures that there are no partial updates when the process is interrupted prior to the completion of the transaction. This technique is also useful in the event that a partially updated transaction must be removed, or rolled back, from the files or when updates identified as erroneous need to be removed. MIMIX fully simulates commitment control on the target system. When commitment control is used on a source system in a MIMIX environment, MIMIX maintains the integrity of the database on the target system by preventing partial transactions from being applied until the whole transaction completes. If the source system becomes unavailable, MIMIX will not have applied incomplete transactions on the target system. In the event of an incomplete (or uncommitted) commitment cycle, the integrity of the database is maintained. If your application dynamically creates database files that are subsequently used in a commitment control environment, use MIMIX Dynamic Apply for replication. Without MIMIX Dynamic Apply, replication of the create operation may fail if a commit cycle is open when MIMIX tries to save the file. The save operation will be delayed and may fail if the file being saved has uncommitted transactions.
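The hold-until-commit behavior described above can be pictured as a small apply loop: operations inside an open commit cycle are buffered and applied only when the commit arrives, and a rollback discards them. The entry shapes below are assumptions for the sketch, not the journal entry format MIMIX actually processes:

```python
# Illustrative sketch of hold-until-commit apply logic on a target system.
def apply_committed(entries):
    applied, open_cycles = [], {}
    for e in entries:
        cid = e.get("commit_id")
        if cid is None:
            applied.append(e["op"])            # not under commitment control
        elif e["op"] == "COMMIT":
            applied.extend(open_cycles.pop(cid, []))
        elif e["op"] == "ROLLBACK":
            open_cycles.pop(cid, None)         # partial cycle never applied
        else:
            open_cycles.setdefault(cid, []).append(e["op"])
    return applied

journal = [
    {"op": "UPDATE A", "commit_id": 1},
    {"op": "UPDATE B", "commit_id": 1},
    {"op": "COMMIT",   "commit_id": 1},
    {"op": "UPDATE C", "commit_id": 2},        # cycle 2 never commits
]
print(apply_committed(journal))  # ['UPDATE A', 'UPDATE B']
```

Note how "UPDATE C" is never applied: its cycle is still open when the stream ends, matching the guarantee that incomplete transactions are not exposed on the target.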
User exit programs may be affected when journaled LOB data is added to an existing data group. Non-minimized LOB data produces incomplete entries. For incomplete journal entries, two or more entries with duplicate journal sequence numbers, journal codes, and types will be provided to the user exit program when the data for the incomplete entry is retrieved and segmented. Programs need to correctly handle these duplicate entries, which represent the single, original journal entry.

You should also be aware of the following restrictions:
• Copy Active File (CPYACTF) and Reorganize Active File (RGZACTF) do not work against database files with LOB fields.
• There is no collision detection for LOB data. Most collision detection classes compare the journal entries with the content of the record on the target system. Although you can compare the actual content of the record, you cannot compare the content of the LOBs.
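One way an exit program can handle the duplicate entries described above is to treat consecutive entries that share a sequence number, journal code, and type as segments of one logical entry and merge them. The entry shape below is an assumption for illustration, not the actual exit-program interface:

```python
# Sketch: coalesce segmented journal entries back into logical entries.
def coalesce_segments(entries):
    merged = []
    for e in entries:
        key = (e["seq"], e["code"], e["type"])
        if merged and merged[-1][0] == key:
            merged[-1] = (key, merged[-1][1] + e["data"])  # append segment
        else:
            merged.append((key, e["data"]))
    return merged

segments = [
    {"seq": 10, "code": "R", "type": "UP", "data": b"first-"},
    {"seq": 10, "code": "R", "type": "UP", "data": b"second"},  # duplicate seq
    {"seq": 11, "code": "R", "type": "PT", "data": b"row"},
]
print(coalesce_segments(segments))
```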
Table 12. Key configuration values required for MIMIX Dynamic Apply and legacy cooperative processing

Critical Parameters                        MIMIX Dynamic Apply    Legacy Cooperative       Configuration Notes
                                           Required Values        Processing Required
                                                                  Values

Data Group Definition
  Data group type (TYPE)                   *ALL or *DB            *ALL                     See Requirements and limitations of MIMIX Dynamic Apply on page 110.
  Use remote journal link (RJLNK)          *YES                   any value
  Cooperative journal (COOPJRN)            *DFT or *USRJRN        *DFT or *SYSJRN          The value *DFT resolves to *USRJRN or *SYSJRN based on whether MIMIX Dynamic Apply requirements are met. See Requirements and limitations of MIMIX Dynamic Apply on page 110.
  File and tracking ent. opts (FEOPT),     *POSITION              any value
    Replication type

Data Group Object Entries
  Object type (OBJTYPE)                    *ALL or *FILE          *ALL or *FILE
  Attribute                                *ALL or one of: LF,    PF-DTA, PF38-DTA
                                           LF38, PF-DTA, PF-SRC,
                                           PF38-DTA, PF38-SRC
  Cooperate with database (COOPDB)         *YES                   *YES
  Cooperating object types (COOPTYPE)      *FILE                  *FILE
  File and tracking ent. opts (FEOPT),     *POSITION              any value
    Replication type
Corresponding data group file entries - Both MIMIX Dynamic Apply and legacy cooperative processing require that existing files identified by a data group object entry which specifies *YES for the Cooperate with DB (COOPDB) parameter must also be identified by data group file entries. When a file is identified by both a data group object entry and a data group file entry, the following are also required: The object entry must enable the cooperative processing of files by specifying
COOPDB(*YES) and COOPTYPE(*FILE).
• If name mapping is used between systems, the data group object entry and file entry must have the same name mapping defined.
• If the data group object entry and file entry specify different values for the File and tracking ent. opts (FEOPT) parameter, the values specified in the data group file entry take precedence.
• Files defined by data group file entries must have journaling started and must be synchronized. If journaling is not started, MIMIX cannot replicate activity for the file.
Typically, data group object entries are created during initial configuration and are then used as the source for loading the data group file entries. The #DGFE audit can be used to determine whether corresponding data group file entries exist for the files identified by data group object entries.
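The correspondence the #DGFE audit verifies can be pictured as a simple set check: every file covered by a cooperating object entry should also have a file entry. The structures below are simplified illustrations (real object entries may use generic names), not the audit's actual implementation:

```python
# Sketch: find files covered by cooperating object entries that lack a
# corresponding data group file entry.
def missing_file_entries(object_entries, file_entries):
    covered = {(e["lib"], e["file"]) for e in object_entries if e["coopdb"]}
    defined = {(f["lib"], f["file"]) for f in file_entries}
    return sorted(covered - defined)

# Hypothetical configuration: HISTORY cooperates but has no file entry.
object_entries = [
    {"lib": "PRODLIB", "file": "ORDERS",   "coopdb": True},
    {"lib": "PRODLIB", "file": "HISTORY",  "coopdb": True},
    {"lib": "PRODLIB", "file": "WORKFILE", "coopdb": False},
]
file_entries = [{"lib": "PRODLIB", "file": "ORDERS"}]
print(missing_file_entries(object_entries, file_entries))
# [('PRODLIB', 'HISTORY')]
```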
configured for replication. Files created by these actions can be added to the MIMIX configuration by running the #DGFE audit. The audit recovery will synchronize the file as part of adding the file entry to the configuration. In data groups that specify TYPE(*ALL), the above actions are fully supported.

Referential constraints - The following restrictions apply:
• If using referential constraints with *CASCADE or *SETNULL actions, you must specify *YES for the Journal on target (JRNTGT) parameter in the data group definition.
• Physical files with referential constraints require a field in another physical file to be valid. All physical files in a referential constraint structure must be in the same database apply session. If a particular preferred apply session has been specified in file entry options (FEOPT), MIMIX may ignore the specification in order to satisfy this restriction.
Positional replication only - Keyed replication is not supported by MIMIX Dynamic Apply. Data group definitions, data group object entries, and data group file entries must specify *POSITION for the Replication type element of the file and tracking entry options (FEOPT) parameter. The value *KEYED cannot be used.
Additional requirements for user journal replication - The following additional requirements must be met before data areas or data queues identified by data group object entries can be replicated with user journal processes:
• The data group definition and data group object entries must specify the values indicated in Table 13 for critical parameters.
• Object tracking entries must exist for the objects identified by properly configured object entries. Typically these are created automatically when the data group is started.
• Journaling must be started on both the source and target systems for the objects identified by object tracking entries.
Table 13. Critical configuration parameters for user journal replication of data areas and data queues

Critical Parameters                        Required Value      Notes

Data Group Definition
  Data group type (TYPE)                   *ALL

Data Group Object Entry
  Cooperate with database (COOPDB)         *YES
  Cooperating object types (COOPTYPE)      *DTAARA *DTAQ       The appropriate object types must be specified to enable advanced journaling. Otherwise, system journal replication results.
Additionally, see Planning for journaled IFS objects, data areas, and data queues on page 85 for additional details if any of the following apply:
• Converting existing configurations - When converting an existing data group to use or add advanced journaling, you must consider whether journals should be shared and whether data area or data queue objects should be replicated in a data group that also replicates database files.
• Serialized transactions - If you need to serialize transactions for database files and data area or data queue objects replicated from a user journal, you may need to adjust the configuration for the replicated files.
• Apply session load balancing - One database apply session, session A, is used for all data area and data queue objects that are replicated from a user journal. Other replication activity can use this apply session and may cause it to become overloaded. You may need to adjust the configuration accordingly.
• User exit programs - If you use user exit programs that process user journal entries, you may need to modify your programs.
differences on the target objects. These functions are supported in environments using V5R4 or higher operating systems.
• MIMIX does not support before-images for data updates to data areas, and cannot perform data integrity checks on the target system to ensure that data being replaced on the target system is an exact match to the data replaced on the source system. Furthermore, MIMIX does not provide a mechanism to prevent users or applications from accidentally updating replicated data areas on the target system. To guarantee the data integrity of replicated data areas between the source and target systems, you should run MIMIX AutoGuard on a regular basis.
• The apply of data area and data queue objects is restricted to a single database apply job (DBAPYA). If a data group has too much replication activity, this job may fall behind in processing journal entries. If this occurs, you should load-level the apply sessions by moving some or all of the database files to another database apply job.
• Pre-existing data areas and data queues selected for replication must have journaling started on both the source and target systems before the data group is started.
• Replication of Distributed Data Management (DDM) data areas and data queues is not supported. If you need to replicate DDM data areas and data queues, use standard system journal replication methods.
Notes: 1. The indicated journal entry type is only supported for i5/OS V5R4 and higher.
Table 14. Data area journal entry types supported by MIMIX

Journal Code   Type   Description
E              EM     Data area moved
E              EN     Data area renamed
E              ES     Data area saved
E              EW     Start of save for data area
Z              ZA     Change authority
Z              ZB     Change object attribute
Z              ZO     Ownership change
Z              ZP     Change primary group
Z              ZT     Auditing change
Notes: 1. The indicated journal entry type is only supported for i5/OS V5R4 and higher.
Table 15 shows the currently supported journal entry types for data queues.
Table 15. Data queue journal entry types supported by MIMIX

Journal Code   Type   Description
Q              QA     Create data queue
Q              QB     Start data queue journaling
Q              QC     Data queue cleared, no key
Q              QD     Data queue deleted
Q              QE     End data queue journaling
Q              QG     Data queue attribute changed
Q              QJ     Data queue cleared, has key
Q              QK     Send data queue entry, has key
Q              QL     Receive data queue entry, has key
Q              QM     Data queue moved
Q              QN     Data queue renamed
Q              QR     Receive data queue entry, no key
Notes: 1. The indicated journal entry type is only supported for i5/OS V5R4 and higher.
Table 15. Data queue journal entry types supported by MIMIX (continued)

Journal Code   Type   Description
Q              QS     Send data queue entry, no key
Q              QX     Start of save for data queue
Q              QY     Data queue saved
Q              QZ     Data queue restored
Z              ZA     Change authority
Z              ZB     Change object attribute
Z              ZO     Ownership change
Z              ZP     Change primary group
Z              ZT     Auditing change
Notes: 1. The indicated journal entry type is only supported for i5/OS V5R4 and higher.
For more information about journal entries, see Journal Entry Information (Appendix D) in the iSeries Backup and Recovery guide in the IBM eServer iSeries Information Center.
Table 16 identifies the IFS file systems that are not supported by MIMIX and cannot be specified for either the System 1 object prompt or the System 2 object prompt in the Add Data Group IFS Entry (ADDDGIFSE) command.
Table 16. IFS file systems that are not supported by MIMIX

/QDLS            /QLANSrv        /QOPT
/QFileSvr.400    /QNetWare       /QSYS.LIB
/QFPNWSSTG       /QNTC           /QSR
Journaling is not supported for files in network server storage spaces (NWSS), which are used as virtual disks by IXS and IXA technology. Therefore, IFS objects configured to be replicated from a user journal must be in the Root (/) or QOpenSys file systems. Refer to the IBM book OS/400 Integrated File System Introduction for more information about IFS.
During replication, MIMIX preserves the character case of IFS object names. For example, the creation of /AbCd on the source system will be replicated as /AbCd on the target system. Replication will not alter the character case of objects that already exist on the target system (unless the object is deleted and recreated). In the root file system, /AbCd and /ABCD are equivalent names. If /ABCD exists as such on the target system, changes to /AbCd will be replicated to /ABCD, but the object name will not be changed to /AbCd on the target system. When character case is not a concern (root file system), MIMIX may present path names as all upper case or all lower case. For example, the WRKDGACTE display shows all lower case, while the WRKDGIFSE display shows all upper case. Names can be entered in either case. For example, subsetting WRKDGACTE by /AbCd and /ABCD will produce the same result.
When character case does matter (QOpenSys file system), MIMIX presents path names in the appropriate case. For example, the WRKDGACTE display and the WRKDGIFSE display would show /QOpenSys/AbCd, if that is the actual object path. Names must be entered in the appropriate character case. For example, subsetting the WRKDGACTE display by /QOpenSys/ABCD will not find /QOpenSys/AbCd.
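The case-sensitivity rules above can be summarized in a short Python sketch. This is an illustration only, not MIMIX code: the function name and logic are invented here to show how a root file system comparison ignores case while a /QOpenSys comparison does not.

```python
# Illustrative sketch (not MIMIX code): path comparison is case-insensitive
# in the root (/) file system but case-sensitive in /QOpenSys.
def paths_match(requested: str, actual: str) -> bool:
    if actual.startswith("/QOpenSys"):
        return requested == actual                  # case matters in /QOpenSys
    return requested.lower() == actual.lower()      # case ignored in root

# Root file system: /AbCd and /ABCD are equivalent names.
assert paths_match("/ABCD", "/AbCd")
# /QOpenSys: subsetting by /QOpenSys/ABCD will not find /QOpenSys/AbCd.
assert not paths_match("/QOpenSys/ABCD", "/QOpenSys/AbCd")
assert paths_match("/QOpenSys/AbCd", "/QOpenSys/AbCd")
```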
Additional requirements for user journal replication - The following additional requirements must be met before IFS objects identified by data group IFS entries can be replicated with user journal processes:
- The data group definition and data group IFS entries must specify the values indicated in Table 17 for critical parameters.
- IFS tracking entries must exist for the objects identified by properly configured IFS entries. Typically these are created automatically when the data group is started.
- Journaling must be started on both the source and target systems for the objects identified by IFS tracking entries.
Table 17. Critical configuration parameters for replicating IFS objects from a user journal

  Critical Parameters               Required Values  Configuration Notes
  Data Group Definition:
    Data group type (TYPE)          *ALL
  Data Group IFS Entry:
    Cooperate with database         *YES             The default, *NO, results in system journal replication.
Additionally, see Planning for journaled IFS objects, data areas, and data queues on page 85 for additional details if any of the following apply:
- Converting existing configurations - When converting an existing data group to use or add advanced journaling, you must consider whether journals should be shared and whether IFS objects should be replicated in a data group that also replicates database files.
- Serialized transactions - If you need to serialize transactions for database files and IFS objects replicated from a user journal, you may need to adjust the configuration for the replicated files.
- Apply session load balancing - One database apply session, session A, is used for all IFS objects that are replicated from a user journal. Other replication activity can use this apply session and may cause it to become overloaded. You may need to adjust the configuration accordingly.
- User exit programs - If you use user exit programs that process user journal entries, you may need to modify your programs.
- The ability to lock IFS objects on apply, in order to prevent unauthorized updates from occurring on the target system, is not supported when advanced journaling is configured.
- The ability to use the Remove Journaled Changes (RMVJRNCHG) command for removing journaled changes for IFS tracking entries is not supported.
- It is recommended that option 14 (Remove related) on the Work with Data Group Activity (WRKDGACT) display not be used for failed activity entries representing actions against cooperatively processed IFS objects. Because this option does not remove the associated tracking entries, orphan tracking entries can accumulate on the system.
Note: 1. The actions identified in these entries are replicated cooperatively through the security audit journal.
How MIMIX uses DLO entries to evaluate journal entries for replication
How items are specified within a data group DLO entry determines whether MIMIX selects or omits them from processing. This information can help you understand what is included or omitted.

When determining whether to process a journal entry for a DLO, MIMIX looks for a match between the DLO information in the journal entry and one of the data group DLO entries. The data group DLO entries are checked from the most specific to the least specific. The folder path is the most significant search element, followed by the document name, then the owner. The most significant match found (if any) is checked to determine whether to process the entry.

An exact or generic folder path name in a data group DLO entry applies to folder paths that match the entry as well as to any unnamed child folders of that path which are not covered by a more explicit entry. For example, a data group DLO entry with a folder path of ACCOUNT would also apply to a transaction for a document in folder path ACCOUNT/JANUARY. If a second data group DLO entry with a folder path of ACCOUNT/J* were added, it would take precedence because it is more specific.

For a folder path with multiple elements (for example, A/B/C/D), the exact and generic checks against data group DLO entries are performed on the path. If no match is found, the lowest path element is removed and the process is repeated. For example, A/B/C/D is reduced to A/B/C and is rechecked. This process continues until a match is found or until all elements of the path have been removed. If there is still no match, checks for folder path *ALL are performed.
Table 19. Matching order for document names

  Search Order  Folder Path  Document Name  Owner
   1            Exact        Exact          Exact
   2            Exact        Exact          *ALL
   3            Exact        Generic*       Exact
   4            Exact        Generic*       *ALL
   5            Exact        *ALL           Exact
   6            Exact        *ALL           *ALL
   7            Generic*     Exact          Exact
   8            Generic*     Exact          *ALL
   9            Generic*     Generic*       Exact
  10            Generic*     Generic*       *ALL
  11            Generic*     *ALL           Exact
  12            Generic*     *ALL           *ALL
  13            *ALL         Exact          Exact
  14            *ALL         Exact          *ALL
  15            *ALL         Generic*       Exact
  16            *ALL         Generic*       *ALL
  17            *ALL         *ALL           Exact
  18            *ALL         *ALL           *ALL
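The path-reduction search described above can be sketched in a few lines of Python. This is an illustration only, not MIMIX code: it simply shows the order in which folder paths are considered before the final *ALL check.

```python
# Illustrative sketch (not MIMIX code) of the folder-path search: check the
# full path, then strip the lowest element and retry, ending with *ALL.
def candidate_paths(folder_path: str):
    parts = folder_path.split("/")
    while parts:
        yield "/".join(parts)
        parts.pop()          # drop the lowest path element, e.g. A/B/C/D -> A/B/C
    yield "*ALL"             # finally, check entries with folder path *ALL

assert list(candidate_paths("A/B/C/D")) == ["A/B/C/D", "A/B/C", "A/B", "A", "*ALL"]
```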
Document example - Table 20 illustrates some sample data group DLO entries. For example, a transaction for any document in a folder named FINANCE would be blocked from replication because it matches entry 6. A transaction for document ACCOUNTS in FINANCE1 owned by JONESB would be replicated because it matches entry 4. If SMITHA owned ACCOUNTS in FINANCE1, the transaction would be blocked by entry 3. Likewise, documents LEDGER.JUL and LEDGER.AUG in FINANCE1 would be blocked by entry 2 and document PAYROLL in FINANCE1 would be blocked by entry 1. A transaction for any document in FINANCE2 would be blocked by entry 6. However, transactions for documents in FINANCE2/Q1, or in a child folder of that path, such as FINANCE2/Q1/FEB, would be replicated because of entry 5.
Table 20. Sample data group DLO entries, arranged in order from most to least specific

  Entry  Folder Path  Document  Owner   Process Type
  1      FINANCE1     PAYROLL   *ALL    *EXCLD
  2      FINANCE1     LEDGER*   *ALL    *EXCLD
  3      FINANCE1     *ALL      SMITHA  *EXCLD
  4      FINANCE1     *ALL      *ALL    *INCLD
  5      FINANCE2/Q1  *ALL      *ALL    *INCLD
  6      FIN*         *ALL      *ALL    *EXCLD
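The document example can be exercised with a simplified Python matcher. This is an illustration only, not MIMIX code: it assumes the entries are already sorted most to least specific (as they are in Table 20) and scans them in order, whereas MIMIX applies the full search order of Table 19.

```python
# Illustrative sketch (not MIMIX code): match a document transaction against
# sample DLO entries like those in Table 20, checked most specific first.
from fnmatch import fnmatchcase

ENTRIES = [  # (folder path, document, owner, process type)
    ("FINANCE1",    "PAYROLL", "*ALL",   "*EXCLD"),
    ("FINANCE1",    "LEDGER*", "*ALL",   "*EXCLD"),
    ("FINANCE1",    "*ALL",    "SMITHA", "*EXCLD"),
    ("FINANCE1",    "*ALL",    "*ALL",   "*INCLD"),
    ("FINANCE2/Q1", "*ALL",    "*ALL",   "*INCLD"),
    ("FIN*",        "*ALL",    "*ALL",   "*EXCLD"),
]

def part_matches(pattern, value):
    return pattern == "*ALL" or fnmatchcase(value, pattern)

def folder_matches(pattern, path):
    # An exact or generic folder entry also covers child folders of that path.
    parts = path.split("/")
    while parts:
        if part_matches(pattern, "/".join(parts)):
            return True
        parts.pop()
    return False

def process_type(folder, document, owner):
    for f, d, o, ptype in ENTRIES:
        if folder_matches(f, folder) and part_matches(d, document) and part_matches(o, owner):
            return ptype
    return None  # no match: not replicated

assert process_type("FINANCE1", "ACCOUNTS", "JONESB") == "*INCLD"   # entry 4
assert process_type("FINANCE1", "ACCOUNTS", "SMITHA") == "*EXCLD"   # entry 3
assert process_type("FINANCE1", "PAYROLL",  "ANYONE") == "*EXCLD"   # entry 1
assert process_type("FINANCE2/Q1/FEB", "BUDGET", "X") == "*INCLD"   # entry 5
```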
There is one exception to the requirement of replicating folders to satisfy the folder path for an include entry. A folder will not be replicated when the only include entry that would cause its replication specifies *ALL for its folder path and the folder matches an exclude entry with an exact or a generic folder path name, a document value of *ALL, and an owner of *ALL. Table 20 and Table 21 illustrate the differences in matching folders to be replicated.

In Table 20, above, a transaction for a folder named FINANCE would be blocked from replication because it matches entry 6. This would also affect all folders within FINANCE. A transaction for folder FINANCE1 would be replicated because of entry 4. Likewise, a transaction for folder FINANCE2 would be replicated because of entry 5. Note that any transactions for documents in FINANCE2 or any child folders other than those in the path that includes Q1 would be blocked by entry 6; only FINANCE2 itself must exist to satisfy entry 5.

In Table 21, although entry 5 is an include entry, a transaction for folder ACCOUNT would be blocked from replication because it matches entry 2. This is because of the exception described above. ACCOUNT matches an exclude entry with an exact folder path, document value of *ALL, and an owner of *ALL, and the only include entry that would cause it to be replicated specifies folder path *ALL. The exception also affects all child folders in the ACCOUNT folder path. Note that the exception holds true even if ACCOUNT is owned by user profile JONESB (entry 4) because the more specific folder name match takes precedence.
Table 21. Sample data group DLO entries, folder example

  Entry  Folder Path  Document  Owner   Process Type
  1      ACCOUNT2     LEDGER*   *ALL    *EXCLD
  2      ACCOUNT      *ALL      *ALL    *EXCLD
  3      *ALL         ABC*      *ALL    *INCLD
  4      *ALL         *ALL      JONESB  *INCLD
  5      *ALL         *ALL      *ALL    *INCLD
A transaction for folder ACCOUNT2 would be replicated even though it is an exact path name match for exclude entry 1. The exception does not apply because entry 1 does not specify document *ALL. Entry 5 requires that ACCOUNT2 exist on the target system to satisfy the folder path requirements for document names other than LEDGER* and for child folders of ACCOUNT2.
proceeds normally after the file has been created. All subsequent changes to the file, including moves or renames, member operations (adds, changes, and removes), member data updates, authority changes, and file deletes, are replicated through the user journal.
For more information about requirements for implicit starting of journaling, see What objects need to be journaled on page 323.

If the object is journaled to the user journal, MIMIX user journal replication processes can fully replicate the create operation. The user journal entries contain all the information necessary for replication without needing to retrieve information from the object on the source system. MIMIX creates a tracking entry for the newly created object and an activity entry representing the T-CO (create) journal entry.

If the object is not journaled to the user journal, then the create of the object is processed with system journal processing. If the values specified in the data group entry that identified the object as eligible for replication do not allow the object type to be cooperatively processed, the create of the object and subsequent operations are replicated through system journal processes.

When MIMIX replicates a create operation through the user journal, the create timestamp (*CRTTSP) attribute may differ between the source and target systems.
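The routing of a create operation described above can be reduced to a small decision function. This is an illustration only, not MIMIX code: the function name, parameters, and return strings are invented for the sketch.

```python
# Illustrative sketch (not MIMIX code) of how a newly created object's
# create operation is routed, per the description above.
def route_create(journaled_to_user_journal: bool, coop_allowed: bool) -> str:
    if journaled_to_user_journal and coop_allowed:
        # User journal entries carry all needed information; MIMIX also creates
        # a tracking entry and an activity entry for the T-CO journal entry.
        return "user journal replication"
    # Not journaled, or the data group entry disallows cooperative processing:
    # the create and subsequent operations use system journal processes.
    return "system journal replication"

assert route_create(True, True) == "user journal replication"
assert route_create(False, True) == "system journal replication"
assert route_create(True, False) == "system journal replication"
```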
Original Source Object:
- Excluded from or not identified for replication
- Identified for replication
- Identified for replication
- Excluded from or not identified for replication
1. If the source system object is not defined to MIMIX or if it is defined by an Exclude entry, it is not guaranteed that an object with the same name exists on the backup system or that it is really the same object as on the source system. To ensure the integrity of the target (backup) system, a copy of the source object must be brought over from the source system.
2. If the target object is not defined to MIMIX or if it is defined by an Exclude entry, there is no guarantee that the target library exists on the target system. Further, the customer is assumed not to care whether the target object is replicated, since it is not defined with an Include entry, so deleting the object is the most straightforward approach.
Move/rename operations - user journaled data areas, data queues, IFS objects
IFS, data area, and data queue objects replicated by user journal replication processes can be moved or renamed while maintaining the integrity of the data. If the new location or new name on the source system remains within the set of objects identified as eligible for replication, MIMIX will perform the move or rename operation on the object on the target system. When a move or rename operation starts with or results in an object that is not within the name space for user journal replication, MIMIX may need to perform additional operations in order to replicate the operation. MIMIX may use a create or delete operation and may need to add or remove tracking entries. Each row in Table 23 summarizes a move/rename scenario and identifies the action taken by MIMIX.
Table 23. MIMIX actions when processing moves or renames of objects when user journal replication processes are involved

  Source object: Identified for replication with user journal processing
  New name or location: Within name space of objects to be replicated with user journal processing
  MIMIX action: Moves or renames the object on the target system and renames the associated tracking entry. See example 1.

  Source object: Not identified for replication
  New name or location: Not identified for replication
  MIMIX action: None. See example 2.

  Source object: Identified for replication with user journal processing
  New name or location: Not identified for replication
  MIMIX action: Deletes the target object and deletes the associated tracking entry. The object will no longer be replicated. See example 3.

  Source object: Identified for replication with user journal processing
  New name or location: Within name space of objects to be replicated with system journal processing
  MIMIX action: Moves or renames the object using system journal processes and removes the associated tracking entry. See example 4.

  Source object: Identified for replication with system journal processing
  New name or location: Within name space of objects to be replicated with user journal processing
  MIMIX action: Creates a tracking entry for the object using the new name or location and moves or renames the object using user journal processes. If the object is a library or directory, MIMIX creates tracking entries for those objects within the library or directory that are also within the name space for user journal replication and synchronizes those objects. See example 5.

  Source object: Not identified for replication
  New name or location: Within name space of objects to be replicated with user journal processing
  MIMIX action: Creates a tracking entry for the object using the new name or location. If the object is a library or directory, MIMIX creates tracking entries for those objects within the library or directory that are also within the name space for user journal replication. Synchronizes all of the objects identified by these new tracking entries. See example 6.
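The move/rename decisions can be compressed into a lookup on the old and new name spaces. This is an illustration only, not MIMIX code: the name-space labels and action strings are invented for the sketch.

```python
# Illustrative sketch (not MIMIX code) of the move/rename decisions.
# Name spaces: "user" (user journal), "system" (system journal), or
# "none" (not identified for replication).
def move_rename_action(source_ns: str, new_ns: str) -> str:
    if source_ns == "user" and new_ns == "user":
        return "rename target object and tracking entry"          # example 1
    if source_ns == "none" and new_ns == "none":
        return "none"                                             # example 2
    if source_ns == "user" and new_ns == "none":
        return "delete target object and tracking entry"          # example 3
    if source_ns == "user" and new_ns == "system":
        return "rename via system journal, remove tracking entry" # example 4
    if new_ns == "user":  # from system journal space or from outside
        return "create tracking entries and synchronize"          # examples 5, 6
    return "handled by system journal replication"

assert move_rename_action("user", "none") == "delete target object and tracking entry"
assert move_rename_action("none", "user") == "create tracking entries and synchronize"
```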
The following examples use IFS objects and directories to illustrate the MIMIX operations in move/rename scenarios that involve user journal replication (advanced journaling). The MIMIX behavior described is the same as that for data areas and data queues that are within the configured name space for advanced journaling. Table 24 identifies the initial set of source system objects, data group IFS entries, and IFS tracking entries before the move/rename operation occurs.
Table 24. Initial data group IFS entries, IFS tracking entries, and source IFS objects for examples

  Data Group IFS Entries        Source System IFS Objects    Associated Data Group
                                in Name Space                IFS Tracking Entries
  /TEST/STMF*                   /TEST/stmf1                  /TEST/stmf1
  /TEST/DIR*                    /TEST/dir1/doc1              /TEST/dir1
                                                             /TEST/dir1/doc1
  /TEST/NOTAJ*                  /TEST/notajstmf1
  (system journal replication)  /TEST/notajdir1/doc1
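As an illustration of how the generic IFS entries in Table 24 partition source paths into name spaces, here is a Python sketch. This is not MIMIX code: the entry lists and return labels are invented for the example, and root file system names are compared without regard to case, as described earlier.

```python
# Illustrative sketch (not MIMIX code): decide which name space a source IFS
# path falls into, given the generic data group IFS entries of Table 24.
from fnmatch import fnmatchcase

USER_JOURNAL_ENTRIES = ["/TEST/STMF*", "/TEST/DIR*"]   # advanced journaling
SYSTEM_JOURNAL_ENTRIES = ["/TEST/NOTAJ*"]              # system journal

def name_space(path: str) -> str:
    parts = path.upper().split("/")                    # root file system: ignore case
    while len(parts) > 1:
        candidate = "/".join(parts)
        if any(fnmatchcase(candidate, p) for p in USER_JOURNAL_ENTRIES):
            return "user journal"
        if any(fnmatchcase(candidate, p) for p in SYSTEM_JOURNAL_ENTRIES):
            return "system journal"
        parts.pop()                                    # retry against the parent path
    return "not identified for replication"

assert name_space("/TEST/stmf1") == "user journal"
assert name_space("/TEST/dir1/doc1") == "user journal"
assert name_space("/TEST/notajdir1/doc1") == "system journal"
assert name_space("/TEST/other") == "not identified for replication"
```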
Example 1, moves/renames within advanced journaling name space: The most common move and rename operations occur within advanced journaling name space. For example, MIMIX encounters user journal entries indicating that the source system IFS directory /TEST/dir1 was renamed to /TEST/dir2, and that the IFS stream file /TEST/stmf1 was renamed to /TEST/stmf2. In both cases, the old and new names fall within advanced journaling name space, as indicated in Table 23. The rename operations are replicated and names are changed on the target system objects. The tracking entries for these objects are also renamed. The resulting changes on the target system objects and MIMIX configuration are shown in Table 25.
Table 25. Results of move/rename operations within name space for advanced journaling

  Resulting data group IFS tracking entries:
  /TEST/stmf2
  /TEST/dir2
  /TEST/dir2/doc1
Example 2, moves/renames outside name space: When MIMIX encounters a journal entry for a source system object outside of the name space that has been renamed or moved to another location also outside of the name space, MIMIX ignores the transaction. The object is not eligible for replication.

Example 3, moves/renames from advanced journaling name space to outside name space: In this example, MIMIX encounters user journal entries indicating that the source system IFS directory /TEST/dir1 was renamed to /TEST/xdir1 and IFS stream file /TEST/stmf1 was renamed to /TEST/xstmf1. MIMIX is aware of only the original names, as indicated in Table 23. Thus, the old name is eligible for replication,
but the new name is not. MIMIX treats this as a delete operation during replication processing. MIMIX deletes the IFS directory and IFS stream file from the target system. MIMIX also deletes the associated IFS tracking entries.

Example 4, moves/renames from advanced journaling to system journal name space: In this example, MIMIX encounters user journal entries indicating that the source system IFS directory /TEST/dir1 was renamed to /TEST/notajdir1 and that IFS stream file /TEST/stmf1 was renamed to /TEST/notajstmf1. MIMIX is aware that both the old names and new names are eligible for replication, as indicated in Table 23. However, the new names fall within the name space for replication through the system journal. As a result, MIMIX removes the tracking entries associated with the original names and performs the rename operations on the objects on the target system. Table 26 shows these results.
Table 26. Results of move/rename operations from advanced journaling to system journal name space

  Resulting data group IFS tracking entries:
  (removed)
  (removed)
Example 5, moves/renames from system journal to advanced journaling name space: In this example, MIMIX encounters journal entries indicating that source system IFS directory /TEST/notajdir1 was renamed to /TEST/dir1 and that IFS stream file /TEST/notajstmf1 was renamed to /TEST/stmf1. MIMIX is aware that the old names are within the system journal name space and that the new names are within the advanced journaling name space. MIMIX creates tracking entries for the names and then performs the rename operation on the target system using advanced journaling. MIMIX also creates tracking entries for any objects that reside within the moved or renamed IFS directory (or library in the case of data areas or data queues). The objects identified by these tracking entries are individually synchronized from the source to the target system. Table 27 illustrates the results on the target system.
Table 27. Results of move/rename operations from system journal to advanced journaling name space

  Resulting data group IFS tracking entries:
  /TEST/stmf1
  /TEST/dir1
  /TEST/dir1/doc1
Example 6, moves/renames from outside to within advanced journaling name space: In this example MIMIX encounters journal entries indicating that the source system IFS directory /TEST/xdir1 was renamed to /TEST/dir1 and that IFS stream file /TEST/xstmf1 was renamed to /TEST/stmf1. The original names are outside of the name space and are not eligible for replication. However, the new names are within
the name space for advanced journaling as indicated in Table 23. Because the objects were not previously replicated, MIMIX processes the operations as creates during replication. See Newly created files on page 127. MIMIX also creates tracking entries for any objects that reside within the moved or renamed IFS directory (or library in the case of data areas or data queues). The objects identified by these tracking entries are individually synchronized from the source to the target system. Table 28 illustrates the results.
Table 28. Results of move/rename operations from outside to within advanced journaling name space

  Resulting data group IFS tracking entries:
  /TEST/stmf1
  /TEST/dir1
  /TEST/dir1/doc1
Delete operations - user journaled data areas, data queues, IFS objects
When a T-DO (delete) journal entry for an IFS, data area, or data queue object is encountered in the system journal, MIMIX system journal replication processes generate an activity entry representing the delete operation and handle the delete of the object from the target system. The user journal replication processes remove the corresponding tracking entry.
Restore operations - user journaled data areas, data queues, IFS objects
When an IFS, data area, or data queue object is restored, the pre-existing object is replaced by a backup copy on the source system. With user journal replication, restores of IFS, data area, and data queue objects on the source system are
supported through cooperative processing between MIMIX system journal and user journal replication processes. Provided the object was journaled when it was saved, a restored IFS, data area, or data queue object is also journaled. During cooperative processing, system journal replication processes generate an activity entry representing the T-OR (restore) journal entry from the system journal and perform a save and restore operation on the IFS, data area, or data queue object. Meanwhile, user journal replication processes handle the management of the corresponding IFS or object tracking entry. MIMIX may also start journaling, or end and restart journaling on the object, so that the journaling characteristics of the IFS, data area, or data queue object match the data group definition.
Chapter 5
Configuration checklists
MIMIX can be configured in a variety of ways to support your replication needs. Each configuration requires a combination of definitions and data group entries. Definitions identify systems, journals, communications, and data groups that make up the replication environment. Data group entries identify what to replicate and the replication option to be used. For available options, see Replication choices by object type on page 96. Also, advanced techniques, such as keyed replication, have additional configuration requirements. For additional information, see Configuring advanced replication techniques on page 353.

New installations: Before you start configuring MIMIX, system-level configuration for communications (lines, controllers, IP interfaces) must already exist between the systems that you plan to include in the MIMIX installation. Choose one of the following checklists to configure a new installation of MIMIX:
- Checklist: New remote journal (preferred) configuration on page 139 uses shipped default values to create a new installation. Unless you explicitly configure them otherwise, new data groups will use the i5/OS remote journal function as part of user journal replication processes.
- Checklist: New MIMIX source-send configuration on page 143 configures a new installation and is appropriate when your environment cannot use remote journaling. New data groups will use MIMIX source-send processes in user journal replication.
- To configure a new installation that is to use the integrated MIMIX support for IBM WebSphere MQ (MIMIX for MQ), refer to the MIMIX for IBM WebSphere MQ book.
Upgrades and conversions: You can use any of the following topics, as appropriate, to change a configuration:
- Checklist: Converting to remote journaling on page 147 changes an existing data group to use remote journaling within user journal replication processes.
- Converting to MIMIX Dynamic Apply on page 150 provides checklists for two methods of changing the configuration of an existing data group to use MIMIX Dynamic Apply for logical and physical file replication. Data groups that existed prior to installing version 5 must use this information in order to use MIMIX Dynamic Apply.
- Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling on page 154 changes the configuration of an existing data group to use user journal replication processes for these objects.
- To add integrated MIMIX support for IBM WebSphere MQ (MIMIX for MQ) to an existing installation, use topic Choosing the correct checklist for MIMIX for MQ in the MIMIX for IBM WebSphere MQ book.
- Checklist: Converting to legacy cooperative processing on page 157 changes the configuration of an existing data group so that logical and physical source files are processed from the system journal and physical data files use legacy
cooperative processing.

Other checklists: The following configuration checklist employs less frequently used configuration tools and is not included in this chapter. Use Checklist: copy configuration on page 553 if you need to copy configuration data from an existing product library into another MIMIX installation.
11. Use Table 29 to create data group entries for this configuration. This configuration requires object entries and file entries for LF and PF files. For other object types or classes, any replication options identified in planning topic Replication choices by object type on page 96 are supported.
Table 29. How to configure data group entries for the remote journal (preferred) configuration

  Class: Library-based objects
  Do the following:
  1. Create object entries using Creating data group object entries on page 267.
  2. After creating object entries, load file entries for LF and PF (source and data) *FILE objects using Loading file entries from a data group's object entries on page 273.
     Note: If you cannot use MIMIX Dynamic Apply for logical files or PF data files, you should still create file entries for PF data files to ensure that legacy cooperative processing can be used.
  3. After creating object entries, load object tracking entries for any *DTAARA and *DTAQ objects to be replicated from a user journal. Use Loading object tracking entries on page 285.
  Planning and requirements information: Identifying library-based objects for replication on page 100; Identifying logical and physical files for replication on page 105; Identifying data areas and data queues for replication on page 112.

  Class: IFS objects
  Do the following:
  1. Create IFS entries using Creating data group IFS entries on page 282.
  2. After creating IFS entries, load IFS tracking entries for IFS objects to be replicated from a user journal. Use Loading IFS tracking entries on page 284.

  Class: DLOs
  Do the following:
  1. Create DLO entries using Creating data group DLO entries on page 287.
12. Use the #DGFE audit to confirm and automatically correct any problems found in file entries associated with data group object entries. Do the following:
    a. Type WRKAUD RULE(#DGFE) and press Enter.
    b. Next to the data group you want to confirm, type 9 (Run rule) and press Enter.
    c. The results are placed in an outfile. For additional information, see Interpreting results for configuration data - #DGFE audit on page 580.
13. If you anticipate a delay between configuring data group entries (object, DLO, or IFS) and starting the data group, you should use the SETDGAUD command before synchronizing data between systems. Doing so will ensure that replicated objects will be properly audited and that any transactions for the objects that occur between configuration and starting the data group will be replicated. Use the procedure Setting data group auditing values manually on page 297.
14. Ensure that there are no batch jobs or users on the system that will be the source for replication for the rest of this procedure. Do not allow users onto the source
system or batch processing until you have successfully completed Step 18.
15. Start journaling using the following procedures as needed for your configuration:
    - For user journal replication, use Journaling for physical files on page 326 to start journaling on both source and target systems.
    - For IFS objects configured for advanced journaling, use Journaling for IFS objects on page 330.
    - For data areas or data queues configured for advanced journaling, use Journaling for data areas and data queues on page 334.
16. Synchronize the database files and objects on the systems between which replication occurs. Topic Performing the initial synchronization on page 483 includes instructions for how to establish a synchronization point and identifies the options available for synchronizing.
17. Verify your configuration. Topic Verifying the initial synchronization on page 487 identifies the additional aspects of your configuration that are necessary for successful replication.
18. Start the data groups. You should use the procedure Starting Selected Data Group Processes in the Using MIMIX book.
11. Use the #DGFE audit to confirm and automatically correct any problems found in file entries associated with data group object entries. Do the following:
    a. Type WRKAUD RULE(#DGFE) and press Enter.
    b. Next to the data group you want to confirm, type 9 (Run rule) and press Enter.
    c. The results are placed in an outfile. For additional information, see Interpreting results for configuration data - #DGFE audit on page 580.
12. If you anticipate a delay between configuring data group entries (object, DLO, or IFS) and starting the data group, you should use the SETDGAUD command before synchronizing data between systems. Doing so will ensure that replicated objects will be properly audited and that any transactions for the objects that occur between configuration and starting the data group will be replicated. Use the procedure Setting data group auditing values manually on page 297.
13. Ensure that there are no batch jobs or users on the system that will be the source for replication for the rest of this procedure. Do not allow users onto the source system or batch processing until you have successfully completed Step 17.
14. Start journaling using the following procedures as needed for your configuration:
    - For user journal replication, use Journaling for physical files on page 326 to start journaling on both source and target systems.
    - For IFS objects configured for advanced journaling, use Journaling for IFS objects on page 330.
    - For data areas or data queues configured for advanced journaling, use Journaling for data areas and data queues on page 334.
15. Synchronize the database files and objects on the systems between which replication occurs. Topic Performing the initial synchronization on page 483 includes instructions for how to establish a synchronization point and identifies the options available for synchronizing.
16. Verify your configuration. Topic Verifying the initial synchronization on page 487 identifies the additional aspects of your configuration that are necessary for successful replication.
17. Start the data groups. You should use the procedure Starting Selected Data Group Processes in the Using MIMIX book.
b. Start data group replication using the procedure Starting selected data group processes in the Using MIMIX book. Be sure to specify *ALL for the Start processes prompt (PRC parameter) and *LASTPROC as the value for the Database journal receiver and Database sequence number prompts.
It is recommended that you contact your Certified MIMIX Consultant for assistance before performing this procedure.

Requirements: Before starting, consider the following:
- Any data group that existed prior to installing version 5 must use one of these procedures in order to use MIMIX Dynamic Apply. As of version 5, newly created data groups are automatically configured to use MIMIX Dynamic Apply when its requirements and restrictions are met and shipped command defaults are used.
- Any data group to be converted must already be configured to use remote journaling.
- Any data group to be converted must have *SYSJRN specified as the value of Cooperative journal (COOPJRN).
- Keyed replication cannot be present in the data group configuration.
- A minimum level of i5/OS PTFs is required on both systems. For a complete list of required and recommended IBM PTFs, log in to Support Central and refer to the Technical Documents page.
- The conversion must be performed from the management system.
- The data group must be active when starting the conversion.
For additional information about configuration requirements and limitations of MIMIX Dynamic Apply, see Identifying logical and physical files for replication on page 105.
For additional information about loading file entries, see Loading file entries from a data group's object entries on page 273.
12. Start journaling for all files not previously journaled. See Starting journaling for physical files on page 326.
13. Start the data group, specifying the command as follows: STRDG DGDFN(name system1 system2) CRLPND(*YES)
14. Verify that data groups are synchronized by running the MIMIX audits. See Verifying the initial synchronization on page 487.
2. Perform a controlled end of the data groups that will include objects to be replicated using advanced journaling. See the Using MIMIX book for how to end a data group in a controlled manner (ENDOPT(*CNTRLD)).
3. Ensure that all pending activity for objects and IFS objects has completed. Use the command WRKDGACTE STATUS(*ACTIVE) to display any pending activity entries. Any activities that are still in progress will be listed.
4. The data group definitions used for user journal replication of IFS objects, data areas, and data queues must specify *ALL as the value for Data group type (TYPE). Verify the value in the data group definition is correct. If necessary, change the value.
5. Add or change data group IFS entries for the IFS objects you want to replicate. Be sure to specify *YES for the Cooperate with database prompt in procedure Adding or changing a data group IFS entry on page 282. For additional information, see Restrictions - user journal replication of IFS objects on page 121.
6. Add or change data group object entries for the data areas and data queues you want to replicate using the procedure Adding or changing a data group object entry on page 268. For additional information, see Restrictions - user journal replication of data areas and data queues on page 113.
7. Load the tracking entries associated with the data group IFS entries and data group object entries you configured. Use the procedures in Loading tracking entries on page 284.
8. Start journaling using the following procedures as needed for your configuration. If you ever plan to switch the data groups, you must also start journaling on the target system.
- For IFS objects, use Starting journaling for IFS objects on page 330.
- For data areas or data queues, use Starting journaling for data areas and data queues on page 334.
9. Verify that journaling is started correctly. This step is important to ensure the IFS objects, data areas, and data queues are actually replicated.
- For IFS objects, see Verifying journaling for IFS objects on page 332.
- For data areas and data queues, see Verifying journaling for data areas and data queues on page 336.
10. If you anticipate a delay between configuring data group IFS, object, or file entries and starting the data group, use the SETDGAUD command before synchronizing data between systems. Doing so will ensure that replicated objects are properly audited and that any transactions for the objects that occur between configuration and starting the data group are replicated. Use the procedure Setting data group auditing values manually on page 297.
11. Synchronize the IFS objects, data areas, and data queues between the source and target systems. For IFS objects, follow the Synchronize IFS Object (SYNCIFS) procedures. For data areas and data queues, follow the Synchronize Object (SYNCOBJ) procedures. Refer to chapter Synchronizing data between systems on page 472 for additional information.
12. If you are replicating large amounts of data, you should specify i5/OS journal receiver size options that provide large journal receivers and large journal entries. Journals created by MIMIX are configured to allow maximum amounts of data. Journals that already exist may need to be changed.
a. After IFS objects are configured, perform the steps in Verifying journal receiver size options on page 213 to ensure journaling is configured appropriately.
b. Change any journal receiver size options necessary using Changing journal receiver size options on page 213.
13. If you have database replication user exit programs, changes may need to be made. See User exit program considerations on page 87.
14. Once you have completed the preceding steps, start the data groups. For more information about starting data groups, see the Using MIMIX book.
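The synchronization step in this procedure can be sketched as two commands. This is only an illustration: the data group name is a placeholder, and the assumption that SYNCIFS and SYNCOBJ accept a DGDFN parameter with default object selection is not confirmed by this procedure — see the SYNCIFS and SYNCOBJ procedures in chapter Synchronizing data between systems for the exact prompts:

```
SYNCIFS DGDFN(INVENTORY CHICAGO LONDON)
SYNCOBJ DGDFN(INVENTORY CHICAGO LONDON)
```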
Perform the following steps to enable legacy cooperative processing and system journal replication:
1. Verify that the data group is synchronized by running the MIMIX audits. See Verifying the initial synchronization on page 487.
2. Use the Work with Data Groups display to ensure that there are no files on hold and no failed or delayed activity entries. Refer to topic Preparing for a controlled end of a data group in the Using MIMIX book.
Note: Topic Ending a data group in a controlled manner in the Using MIMIX book includes subtask Preparing for a controlled end of a data group and the subtask needed for Step 3.
3. End the data group you are converting by performing a controlled end. Follow the procedure for Performing the controlled end in the Using MIMIX book.
4. From the management system, change the data group definition so that the Cooperative journal (COOPJRN) parameter specifies *SYSJRN. Use the command: CHGDGDFN DGDFN(name system1 system2) COOPJRN(*SYSJRN)
5. From the management system, use the following command to load the data group file entries from the target system. Ensure that the value you specify (*SYS1 or *SYS2) for the LODSYS parameter identifies the target system. LODDGFE DGDFN(name system1 system2) CFGSRC(*DGOBJE) UPDOPT(*REPLACE) LODSYS(value) SELECT(*NO)
For additional information about loading file entries, see Loading file entries from a data group's object entries on page 273.
6. Optional step: Delete the QDFTJRN data areas. These data areas automatically start journaling for newly created files. This may not be desired because the
journal image (JRNIMG) value for these files may be different than the value specified in the MIMIX configuration. Such a difference will be detected by the file attributes (#FILATR) audit. To delete these data areas, run the following command from each system: DLTDTAARA DTAARA(library/QDFTJRN)
7. Start the data group, specifying the command as follows: STRDG DGDFN(name system1 system2) CRLPND(*YES)
Chapter 6
System-level communications
This information is provided to assist you with configuring the System i5 communications that are necessary before you can configure MIMIX. MIMIX supports the following communications protocols:
- Transmission Control Protocol/Internet Protocol (TCP/IP)
- Systems Network Architecture (SNA)
- OptiConnect
MIMIX should have a dedicated communications line that is not shared with other applications, jobs, or users on the production system. A dedicated path will make it easier to fine-tune your MIMIX environment and to determine the cause of problems. For TCP/IP, it is recommended that the TCP/IP host name or interface used be in its own subnet. For SNA, it is recommended that MIMIX have its own communication line instead of sharing an existing SNA device. Your Certified MIMIX Consultant can assist you in determining your communications requirements and ensuring that communications can efficiently handle peak volumes of journal transactions.
If you plan to use system journal replication processes, you need to consider additional aspects that may affect the communications speed. These aspects include the type of objects being transferred and the size of data queues, user spaces, and files defined to cooperate with user journal replication processes. MIMIX IntelliStart can help you determine your communications requirements.
The topics in this chapter include:
- Configuring for native TCP/IP on page 159 describes using native TCP/IP communications and provides steps to prepare and configure your system for it.
- Configuring APPC/SNA on page 163 describes basic requirements for SNA communications.
- Configuring OptiConnect on page 163 describes basic requirements for OptiConnect communications and identifies MIMIX limitations when this communications protocol is used.
MIMIX users can also continue to use IBM ANYNET support to run SNA protocols over TCP networks. Preparing your system to use TCP/IP communications with MIMIX requires the following:
1. Configure both systems to use TCP/IP. The procedure for configuring a system to use TCP/IP is documented in the information included with the i5/OS software. Refer to the IBM TCP/IP Fastpath Setup book, SC41-5430, and follow the instructions to configure the system to use TCP/IP communications.
2. If you need to use port aliases, do the following:
a. Refer to the examples Port aliases-simple example on page 160 and Port aliases-complex example on page 161.
b. Create the port aliases for each system using the procedure in topic Creating port aliases on page 162.
3. Once the system-level communication is configured, you can begin the MIMIX configuration process.
Figure 9. Creating Ports. In this example, the MIMIX installation consists of three systems,
In both Figure 8 and Figure 9, if you need to use port aliases for port 50410, you need to have a service table entry on each system that equates the port number to the port alias. For example, you might have a service table entry on system LONDON that defines an alias of MXMGT for port number 50410. Similarly, you might have service table entries on systems HONGKONG and CHICAGO that define an alias of MXNET for port 50410. You would use these aliases in the PORT1 and PORT2 parameters in the transfer definition.
MIMIX installations and uses a separate port for each MIMIX installation.
If you need to use port aliases in an environment such as Figure 10, you need to have a service table entry on each system that equates the port number to the port alias. In this example, CHICAGO would require two port aliases and two service table entries. For example, you might use a port alias of LIBAMGT for port 50410 on LONDON and an alias of LIBANET for port 50410 on both HONGKONG and CHICAGO. You might use an alias of LIBBMGT for port 50411 on CHICAGO and an alias of LIBBNET for port 50411 on both CAIRO and MEXICITY. You would use these port aliases in the PORT1 and PORT2 parameters on the transfer definitions.
Do the following to create a port alias on a system:
1. From a command line, type the command CFGTCP and press Enter.
2. The Configure TCP/IP menu appears. Select option 21 (Configure related tables) and press Enter.
3. The Configure Related Tables display appears. Select option 1 (Work with service table entries) and press Enter.
4. The Work with Service Table Entries display appears. Do the following:
a. Type a 1 in the Opt column next to the blank lines at the top of the list.
b. In the blank at the top of the Service column, use uppercase characters to specify the alias that the System i5 will use to identify this port as a MIMIX native TCP port.
Attention: MIMIX requires that you restrict the length of port aliases to 14 or fewer characters and suggests that you specify the alias in uppercase characters.
Note: Port alias names are case sensitive and must be unique to the system on which they are defined. For environments that have only one MIMIX installation, Lakeview Technology recommends that you use the same port number or same port alias on each system in the MIMIX installation.
c. In the blank at the top of the Port column, specify the number of an unused port ID to be associated with the alias. The port ID can be any number greater than 1024 and less than 55534 that is not being used by another application. You can page down through the list to ensure that the number is not being used by the system.
d. In the blank at the top of the Protocol column, type TCP to identify this entry as using TCP/IP communications.
e. Press Enter.
5. The Add Service Table Entry (ADDSRVTBLE) display appears. Verify that the information shown for the alias and port is what you want. At the Text 'description' prompt, type a description of the port alias, enclosed in apostrophes, and then press Enter.
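The same service table entry can also be added directly with the i5/OS Add Service Table Entry (ADDSRVTBLE) command instead of stepping through the menus. This sketch uses the MXMGT alias and port 50410 from the earlier example; substitute your own alias, port, and description:

```
ADDSRVTBLE SERVICE('MXMGT') PORT(50410) PROTOCOL('TCP')
           TEXT('MIMIX native TCP port alias')
```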
Configuring APPC/SNA
Before you create a transfer definition that uses the SNA protocol, a functioning SNA (APPN or APPC) line, controller, and device must exist between the systems that will be identified by the transfer definition. If a line, controller, and device do not exist, consult your network administrator before continuing.
Configuring OptiConnect
If you plan to use the OptiConnect protocol, a functioning OptiConnect line must exist between the two systems that you identify in the transfer definition. You can use the OptiConnect product from IBM for all communication for most1 MIMIX processes. Use the IBM book OptiConnect for OS/400 to install and verify OptiConnect communications. Then you can do the following:
Ensure that the QSOC library is in the system portion of the library list. Use the command DSPSYSVAL SYSVAL(QSYSLIBL) to verify whether the QSOC library is in the system portion of the library list. If it is not, use the CHGSYSVAL command to add this library to the system library list. When you create the transfer definition, specify *OPTI for the transfer protocol.
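Note that CHGSYSVAL replaces the entire QSYSLIBL value, so the existing system libraries must be repeated along with QSOC. The library list below is only an illustration of a typical default list — display your own list first and re-enter it with QSOC appended:

```
DSPSYSVAL SYSVAL(QSYSLIBL)
CHGSYSVAL SYSVAL(QSYSLIBL) VALUE('QSYS QSYS2 QHLPSYS QUSRSYS QSOC')
```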
1. The #FILDTA audit and the Compare File Data (CMPFILDTA) command require TCP/IP communications.
Chapter 7
Output queue values (OUTQ, HOLD, SAVE) These parameters identify an output queue used by this system definition and define characteristics of how the queue is handled. Any MIMIX functions that generate reports use this output queue. You can hold spooled files on the queue and save spooled files after they are printed.

Keep history (KEEPSYSHST, KEEPDGHST) Two parameters specify the number of days to retain MIMIX system history and data group history. MIMIX system history includes the system message log. Data group history includes time stamps and distribution history. You can keep both types of history information on the system for up to a year.

Keep notifications (KEEPNEWNFY, KEEPACKNFY) Two parameters specify the number of days to retain new and acknowledged notifications. The Keep new notifications (days) parameter specifies the number of days to retain new notifications in the MIMIX data library. The Keep acknowledged notifications (days) parameter specifies the number of days to retain acknowledged notifications in the MIMIX data library.

MIMIX data library, storage limit (KEEPMMXDTA, DTALIBASP, DSKSTGLMT) Three parameters define information about MIMIX data libraries on the system. The Keep MIMIX data (days) parameter specifies the number of days to retain objects in the MIMIX data library, including the container cache used by system journal replication processes. The MIMIX data library ASP parameter identifies the auxiliary storage pool (ASP) from which the system allocates storage for the MIMIX data library. For libraries created in a user ASP, all objects in the library must be in the same ASP as the library. The Disk storage limit (GB) parameter specifies the maximum amount of disk storage that may be used for the MIMIX data libraries.

User profile and job descriptions (SBMUSR, MGRJOBD, DFTJOBD) MIMIX runs under the MIMIXOWN user profile and uses several job descriptions to optimize MIMIX processes. The default job descriptions are stored in the MIMIXQGPL library.

Job restart time (RSTARTTIME) System-level MIMIX jobs, including the system manager and journal manager, restart daily to maintain the MIMIX environment. You can change the time at which these jobs restart. The management or network role of the system affects the results of the time you specify on a system definition. Changing the job restart time is considered an advanced technique.

Printing (CPI, LPI, FORMLEN, OVRFLW, COPIES) These parameters control characteristics of printed output.

Product library (PRDLIB) This parameter is used for installing MIMIX into a switchable independent ASP, and allows you to specify a MIMIX installation library that does not match the library name of the other system definitions. The only time this parameter should be used is in the case of an INTRA system (which is handled by the default value) or in replication environments where it is necessary to have extra MIMIX system definitions that will switch locations along with the switchable independent ASP. Due to its complexity, changing the product library is considered an advanced technique and should not be attempted without the assistance of a Certified MIMIX Consultant.

ASP group (ASPGRP) This parameter is used for installing MIMIX into a switchable independent ASP, and defines the ASP group (independent ASP) in which the product library exists. Again, this parameter should only be used in replication
environments involving a switchable independent ASP. Due to its complexity, changing the ASP group is considered an advanced technique and should not be attempted without the assistance of a Certified MIMIX Consultant.
Figure 12. Example of a contextual (*ANY) transfer definition in use for a multiple network system environment. (The figure shows the Work with Transfer Definitions display on system LONDON, with options 1=Create, 2=Change, 3=Copy, 4=Delete, 5=Display, 6=Print, 7=Rename, and 11=Verify communications link, listing a transfer definition with protocol *TCP.)
Chapter 8
Verifying the communications link for a data group on page 195 provides a procedure to verify the primary transfer definition used by the data group.
with a range from 1000 through 55534. Lakeview Technology recommends using values between 40000 and 55500 to avoid potential conflicts with designations made by the operating system. By default, the PORT1 parameter uses the port 50410. For the PORT2 parameter, the default special value *PORT1 indicates that the value specified on the System 1 port number or alias (PORT1) parameter is used. If you configured TCP using port aliases in the service table, specify the alias name instead of the port number.

For the *SNA protocol the following parameters apply:

System x location name (LOCNAME1, LOCNAME2) These two parameters specify the location name or address of system 1 and system 2, respectively. The value of each parameter is the unique location name that identifies the system to remote devices. For the LOCNAME1 parameter, the special value *SYS1 indicates that the location name is the same as the name specified for System 1 on the Transfer definition (TFRDFN) parameter. Similarly, for the LOCNAME2 parameter, the special value *SYS2 indicates that the location name is the same as the name specified for System 2 on the Transfer definition (TFRDFN) parameter.

System x network identifier (NETID1, NETID2) These two parameters specify the name of the network for system 1 and system 2, respectively. The default value *LOC indicates that the network identifier for the location name associated with the system is used. The special value *NETATR indicates that the value specified in the system network attributes is used. The special value *NONE indicates that the network has no name. For the NETID2 parameter, the special value *NETID1 indicates that the network identifier specified on the System 1 network identifier (NETID1) parameter is used.

SNA mode (MODE) This parameter specifies the name of the mode description used for communication. The default name is MIMIX. The special value *NETATR indicates that the value specified in the system network attributes is used.
The following parameters apply for the *OPTI protocol: System x location name (LOCNAME1, LOCNAME2) These two parameters specify the location name or address of system 1 and system 2, respectively. The value of each parameter is the unique location name that identifies the system to remote devices. For the LOCNAME1 parameter, the special value *SYS1 indicates that the location name is the same as the name specified for System 1 on the Transfer definition (TFRDFN) parameter. Similarly, for the LOCNAME2 parameter, the special value *SYS2 indicates that the location name is the same as the name specified for System 2 on the Transfer definition (TFRDFN) parameter.
Threshold size (THLDSIZE) This parameter is accessible when you press F10 (Additional parameters). It specifies the maximum size of files and objects that are sent; if a file or object exceeds the threshold, it is not sent. Valid values range from 1 through 9999999. The special value *NOMAX indicates that no maximum value is set. Transmitting large files and objects can consume excessive communications bandwidth and negatively impact communications performance, especially for slow communication lines.
Relational database (RDB) This parameter is accessible when you press F10 (Additional parameters) and is valid when the default remote journaling configuration is used. The parameter consists of four relational database values, which identify the communications path used by the i5/OS remote journal function to transport journal entries: a relational database directory entry name, two system database names, and a management indicator for directory entries. This parameter creates two RDB directory entries, one on each system identified in the transfer definition. Each entry identifies the other system's relational database.

Note: If you use the value *ANY for both system 1 and system 2 on the transfer definition, *NONE is used for the directory entry name, and no directory entry is generated. If MIMIX is managing your RDB directory entries, a directory entry is generated if you use the value *ANY for only one of the systems on the transfer definition. This directory entry is generated for the system that is specified as something other than *ANY. For more information about the use of the value *ANY on transfer definitions, see Using contextual (*ANY) transfer definitions on page 181.

The four elements of the relational database parameter are:

Directory entry This element specifies the name of the relational database entry. The default value *GEN causes MIMIX to create an RDB entry and add it to the relational database. The generated name is in the format MX_nnnnnnnnnn_ssss, where nnnnnnnnnn is the 10-character installation name, and ssss is the transfer definition short name. If you specify a value for the RDB parameter, it is recommended that you limit its length to 18 characters. When you specify the special value *NONE, the directory entry is not added or changed by MIMIX.

System 1 relational database This element specifies the name of the relational database for System 1. The default value *SYSDB specifies that MIMIX will determine the relational database name.
If you are managing the RDB directory entries and you need to determine the system database name, refer to Finding the system database name for RDB directory entries on page 188. Note: For remote journaling that uses an independent ASP, specify the database name for the independent ASP.

System 2 relational database This element specifies the name of the relational database for System 2. The default value *SYSDB specifies that MIMIX will determine the relational database name. If you are managing the RDB directory entries and you need to determine the system database name, refer to Finding the system database name for RDB directory entries on page 188. Note: For remote journaling that uses an independent ASP, specify the database name for the independent ASP.

Manage directory entries This element specifies that MIMIX will manage the relational database directory entries associated with the transfer definition whether the directory entry name is specified or whether the directory entry name is generated by MIMIX. Management of the relational database directory entries consists of adding, changing, and deleting the directory entries on both systems, as needed, when the transfer definition is created, changed, or deleted. The
special value *DFT indicates that MIMIX manages the relational database directory entries only when the name is generated using the special value *GEN on the Directory entry element of this parameter. The special value *YES indicates that the directory entries on each system are managed by MIMIX. If the relational database directory entries do not exist, MIMIX adds them. If they do exist, MIMIX changes them to match the values specified by the Relational database (RDB) parameter. When any of the transfer definition relational database values change, the directory entry is also changed. When the transfer definition is deleted, the directory entries are also deleted.
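As a worked example of the MX_nnnnnnnnnn_ssss naming format for generated directory entries (the installation and short names here are hypothetical), a 10-character installation name combined with a 4-character short name produces a name at exactly the recommended 18-character limit:

```
MX_nnnnnnnnnn_ssss      format: prefix + installation name + separator + short name
MX_MIMIXPROD1_TD01      3 + 10 + 1 + 4 = 18 characters
```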
transfer definition that matches the transfer definition that you specified, for example, (PRIMARY SYSA SYSB).
2. The Work with Transfer Definitions display appears. Type 1 (Create) next to the blank line at the top of the list area and press Enter.
3. The Create Transfer Definition display appears. Do the following:
a. At the Transfer definition prompts, specify a name and the two system definitions between which communications will occur.
b. At the Short transfer definition name prompt, accept the default value *GEN to generate a short transfer definition name. This short transfer definition name is used in generating relational database directory entry names if you specify to have MIMIX manage your RDB directory entries.
c. At the Transfer protocol prompt, specify the communications protocol you want, then press Enter. If you are creating a transfer definition for a cluster environment, you must accept the default of *TCP for the Transfer protocol prompt.
4. Additional parameters for the protocol you selected appear on the display. Verify that the values shown are what you want. Make any necessary changes.
5. At the Description prompt, type a text description of the transfer definition, enclosed in apostrophes.
6. Optional step: If you need to set a maximum size for files and objects to be transferred, press F10 (Additional parameters). At the Threshold size (MB) prompt, specify a valid value.
7. Optional step: If you need to change the relational database information, press F10 (Additional parameters). See Tips for transfer definition parameters on page 176 for details about the Relational database (RDB) parameter. If MIMIX is not managing the RDB directory entries, it may be necessary to change the RDB values.
8. To create the transfer definition, press Enter.
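The prompts above map onto a single command. This sketch assumes the Create Transfer Definition command is named CRTTFRDFN and that the prompts correspond to keywords as shown; the definition name (PRIMARY SYSA SYSB) and description are placeholders:

```
CRTTFRDFN TFRDFN(PRIMARY SYSA SYSB) PROTOCOL(*TCP)
          PORT1(50410) PORT2(*PORT1)
          TEXT('Primary TCP transfer definition')
```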
2. The Work with Transfer Definitions display appears. Type 2 (Change) next to the definition you want and press Enter.
3. The Change Transfer Definition (CHGTFRDFN) display appears. If you want to change which protocol is used between the specified systems, specify the value you want for the Transfer protocol prompt.
4. Press Enter to display the parameters for the specified transfer protocol. Locate the prompt for the parameter you need to change and specify the value you want. Press F1 (Help) for more information about the values for each parameter.
5. If you need to set a maximum size for files and objects to be transferred, press F10 (Additional parameters). At the Threshold size (MB) prompt, specify a valid value.
6. If you need to change your relational database information, press F10 (Additional parameters). At the Relational database (RDB) prompt, specify the desired values for each of the four elements and press Enter. For special considerations when changing your transfer definitions that are configured to use RDB directory entries see Tips for transfer definition parameters on page 176.
7. To save changes to the transfer definition, press Enter.
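From the command line, the CHGTFRDFN command named in the display title above can make the same kind of change. This is a sketch: the definition name and threshold value are placeholders, and the THLDSIZE keyword is taken from the parameter description earlier in this chapter:

```
CHGTFRDFN TFRDFN(PRIMARY SYSA SYSB) THLDSIZE(500)
```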
page 188 for special considerations when changing your transfer definitions that are configured to use RDB directory entries.
where yyyy is either the port number in the form PORTnnnnn or the port alias.
b. Press Enter. The job description is changed.
7. Type the command ADDAJE and press Enter.
8. The Add Autostart Job Entry (ADDAJE) display appears. Specify the following values to configure the job description to start each time the MIMIXSBS subsystem is started:
a. At the Subsystem description prompt specify MIMIXSBS.
b. At the Library prompt, specify MIMIXQGPL.
c. At the Job name prompt specify a name to describe the job being processed. Lakeview Technology suggests that you use the value you specified in Step 4.
d. At the Job description prompt specify the name of the job description you just changed in Step 4.
e. At the Library prompt specify MIMIXQGPL.
f. Press Enter. The job description is added to the automatic start procedures within the MIMIXSBS subsystem. Each time the MIMIXSBS subsystem is started, this TCP server is also started.
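The prompted values above map onto a single Add Autostart Job Entry (ADDAJE) command; the job and job description names (TCPSVR1) are hypothetical placeholders for the names chosen in Step 4:

```
ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(TCPSVR1)
       JOBD(MIMIXQGPL/TCPSVR1)
```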
parameter in the job description determines which program or command is run when the MIMIXSBS subsystem is started. Use the following command to change the job description to call the new system definition name or port number used for the autostart job entry which calls the STRSVR command when the MIMIXSBS subsystem is started:
CHGJOBD JOBD(MIMIXLIB/STRMXSVR) RQSDTA('MIMIXLIB/STRSVR HOST(System name) PORT(nnnnn) JOBD(MIMIXQGPL/MIMIXCMN)')
where System name is the system host name for the system where the autostart job entry is defined in the MIMIX transfer definition. where nnnnn is either the port number in the form PORTnnnnn or the port alias of the system where the autostart job entry is defined in the MIMIX transfer definition.
Chapter 9
how to change a data group that uses remote journaling so that it uses MIMIX send processing. Remote journaling is preferred. Removing a remote journaling environment on page 231 describes how to remove a remote journaling environment that you no longer need.
Journal receiver prefix (JRNRCVPFX) This parameter specifies the prefix to be used in the name of journal receivers associated with the journal used in the replication process and the library in which the journal receivers are located. The prefix must be unique to the journal definition and cannot end in a numeric character. The default value *GEN for the name prefix indicates that MIMIX will generate a unique prefix, which usually is the first six characters of the journal definition name with any trailing numeric characters removed. If that prefix is already used in another journal definition, a unique six-character prefix name is derived from the definition name. If the journal definition will be used in a configuration which broadcasts data to multiple systems, there are additional considerations. See Journal definition considerations on page 205. The value *DFT for the journal receiver library allows MIMIX to determine the library name based on the ASP in which the journal receiver is allocated, as specified in the Journal receiver library ASP parameter. If that parameter specifies *ASPDEV, MIMIX uses #MXJRNIASP for the default journal receiver library name. Otherwise, the default library name is #MXJRN. You can specify a different name or specify the value *JRNLIB to use the same library that is used for the associated journal.

Journal receiver library ASP (RCVLIBASP) This parameter specifies the auxiliary storage pool (ASP) from which the system allocates storage for the journal receiver library. You can use the default value *CRTDFT or you can specify the number of an ASP in the range 1 through 32. The value *CRTDFT indicates that the command default value for the i5/OS Create Library (CRTLIB) command is used to determine the auxiliary storage pool (ASP) from which the system allocates storage for the library. For libraries that are created in a user ASP, all objects in the library must be in the same ASP as the library.
Target journal state (TGTSTATE) This parameter specifies the requested status of the target journal, and can be used with active journaling support or journal standby state. Use the default value *ACTIVE to set the target journal state to active when the data group associated with the journal definition is journaling on the target system (JRNTGT(*YES)). Use the value *STANDBY to journal objects on the target system while preventing most journal entries from being deposited into the target journal. For more information about journal standby state, see Configuring for high availability journal performance enhancements on page 341.

Journal caching (JRNCACHE) This parameter specifies whether the system should cache journal entries in main storage before writing them to disk. Use the recommended default value *BOTH to perform journal caching on both the source and the target systems. You can also specify the values *SRC, *TGT, or *NONE.

Receiver change management (CHGMGT, THRESHOLD, TIME, RESETTHLD) Four parameters control how journal receivers associated with the replication process are changed. The Receiver change management (CHGMGT) parameter controls whether MIMIX performs change management operations for the journal receivers used in the replication process. The recommended value is the shipped default of *TIMESIZE, where MIMIX changes journal receivers by both threshold size and time of day.
The following parameters specify conditions that must be met before change management can occur.
Receiver threshold size (MB) (THRESHOLD)
You can specify the size, in megabytes, at which the journal receiver is changed. The default value is 6600 MB. This value is used when MIMIX or the system changes the receivers. If you decide to decrease the Receiver threshold size, you need to manually change your journal receiver to reflect this change. If you change the journal receiver threshold size in the journal definition, the change is effective with the next receiver change.
Time of day to change receiver (TIME)
You can specify the time of day at which MIMIX changes the journal receiver. The time is based on a 24-hour clock and must be specified in HHMMSS format.
Reset sequence threshold (RESETTHLD)
You can specify the sequence number (in millions) at which to reset the receiver sequence number. When the threshold is reached, the next receiver change resets the sequence number to 1.
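Combining these change management conditions, a sketch of the prompts might look like this. The definition name and the RESETTHLD value are illustrative assumptions; the THRESHOLD value is the documented default, and the TIME value shows the HHMMSS format.

```
CHGJRNDFN JRNDFN(MYJRN CHICAGO)  /* hypothetical journal definition       */
          CHGMGT(*TIMESIZE)      /* change by both size and time of day   */
          THRESHOLD(6600)        /* change the receiver at 6600 MB        */
          TIME(020000)           /* also change daily at 02:00:00         */
          RESETTHLD(9900)        /* illustrative reset point, in millions */
```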
For information about how change management occurs in a remote journal environment and about using other change management choices, see Journal receiver management on page 37.
Receiver delete management (DLTMGT, KEEPUNSAV, KEEPRCVCNT, KEEPJRNRCV)
Four parameters control how MIMIX handles deleting the journal receivers associated with the replication process. The Receiver delete management (DLTMGT) parameter specifies whether MIMIX performs delete management for the journal receivers. By default, MIMIX performs the delete management operations. MIMIX operations can be adversely affected if you allow the system or another process to handle delete management. For example, if another process deletes a journal receiver before MIMIX is finished with it, replication can be adversely affected. All of the requirements that you specify in the following parameters must be met before MIMIX deletes a journal receiver:
Keep unsaved journal receivers (KEEPUNSAV)
You can specify whether MIMIX retains any unsaved journal receivers. Retaining unsaved receivers allows you to back out (roll back) changes in the event that you need to recover from a disaster. The default value *YES causes MIMIX to keep unsaved journal receivers until they are saved.
Keep journal receiver count (KEEPRCVCNT)
You can specify the number of detached journal receivers to retain. For example, if you specify 2 and there are 10 journal receivers, including the attached receiver (which is number 10), MIMIX retains two detached receivers (8 and 9) and deletes receivers 1 through 7.
Keep journal receivers (days) (KEEPJRNRCV)
You can specify the number of days to retain detached journal receivers. For example, if you specify to keep the journal receiver for 7 days and the journal receiver is eligible for deletion, it is deleted after 7 days have passed from the time of its creation. The exact time of the deletion may vary.
For example, the deletion may occur within a few hours after the 7 days have passed.
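Using the retention values from the examples above (two detached receivers, seven days), a hedged sketch of the delete management prompts might be:

```
CHGJRNDFN JRNDFN(MYJRN CHICAGO)  /* hypothetical journal definition     */
          DLTMGT(*YES)           /* MIMIX performs delete management    */
          KEEPUNSAV(*YES)        /* keep receivers until they are saved */
          KEEPRCVCNT(2)          /* keep two detached receivers         */
          KEEPJRNRCV(7)          /* keep detached receivers for 7 days  */
```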
For information, see Journal receiver management on page 37.
Journal receiver ASP (JRNRCVASP)
This parameter specifies the auxiliary storage pool (ASP) from which the system allocates storage for the journal receivers. The default value *LIBASP indicates that the storage space for the journal receivers is allocated from the same ASP that is used for the journal receiver library.
Threshold message queue (MSGQ)
This parameter specifies the qualified name of the threshold message queue to which the system sends journal-related messages such as threshold messages. The default value *JRNDFN for the queue name indicates that the message queue uses the same name as the journal definition. The value *JRNLIB for the library name indicates that the message queue uses the library of the associated journal.
Exit program (EXITPGM)
This parameter allows you to specify the qualified name of an exit program to use when journal receiver management is performed by MIMIX. The exit program is called when a journal receiver is changed or deleted by the MIMIX journal manager. For example, you might want to use an exit program to save journal receivers as soon as MIMIX finishes with them so that they can be removed from the system immediately.
Minimize entry specific data (MINENTDTA)
This parameter specifies which object types allow journal entries to have minimized entry-specific data. For additional information about improving journaling performance with this capability, see Minimized journal entry data on page 339.
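As a sketch, the message queue and exit program prompts might be filled in as follows. The library and program names are hypothetical; only the parameter keywords and the *JRNDFN and *JRNLIB values come from the descriptions above.

```
CHGJRNDFN JRNDFN(MYJRN CHICAGO)   /* hypothetical journal definition */
          MSGQ(*JRNDFN *JRNLIB)   /* queue named after the definition,
                                     in the journal's library        */
          EXITPGM(MYLIB/SAVRCV)   /* hypothetical program that saves
                                     receivers when MIMIX is done    */
```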
Updated for 5.0.02.00.
that reside in the same library and ASP, attempts to start the remote journals will fail with message CPF699A (Unexpected journal receiver found). When you create a target journal definition instead of having it generated by the Add Remote Journal Link (ADDRJLNK) command, use the default value *GEN for the JRNRCVPFX prefix name on the target journal definition. The receiver name for the source and target journals will be the same on the systems, but will not be the same in the journal definitions. In the target journal, the prefix will be the same as that specified in the source journal definition.
For example, if the source journal definition name is MYJRN and you specified TGTJRNDFN(*GEN CHICAGO), the target journal definition will be named MYJRN@R CHICAGO. The target journal definition will have the following characteristics and associated new objects:
- The Journal name will be the same as the source journal.
- The Journal library will use the first eight characters of the name of the source journal library followed by the characters @R.
- The Journal library ASP will be copied from the source journal definition.
- The Journal receiver prefix will be copied from the source journal definition.
- The Journal receiver library will use the first eight characters of the name of the source journal receiver library followed by the characters @R.
- The Message queue library will use the first eight characters of the name of the source message queue library followed by the characters @R.
- The value for the Receiver change management (CHGMGT) parameter will be *NONE.
Identifying the correct journal definition on the Work with Journal Definition display can be confusing. Fortunately, the Work with RJ Links display (Figure 14) shows the association between journal definitions much more clearly.
Figure 14. Example of RJ links for a switchable data group.
[Work with RJ Links display. Options: 1=Add, 2=Change, 4=Remove, 5=Display, 6=Print, 9=Start, 14=Build, 15=Remove RJ connection, 17=Work with jrn attributes, 24=Delete target jrn environment. The example lists an RJ link for journal definition PAYABLES between systems CHICAGO and NEWYORK.]
Manually create the journal definitions (CRTJRNDFN command) using the library name-mapping convention. Journal definitions created automatically when a data group is created may not have unique names, and that process does not create all of the necessary target journal definitions. Once the appropriately named journal definitions are created for the source and target systems, manually create the remote journal links between them (ADDRJLNK command).
Figure 15. Library-mapped journal definitions - three-node environment. All nodes are management systems.
a. Lakeview recommends that you accept the default value *YES for the Receiver delete management prompt to allow MIMIX to perform delete management.
b. Press Enter.
c. One or more additional prompts related to receiver delete management appear on the display. If necessary, change the values:
   Keep unsaved journal receivers
   Keep journal receiver count
   Keep journal receivers (days)
9. At the Description prompt, type a brief text description of the journal definition.
10. This step is optional. If you want to access additional parameters that are considered advanced functions, press F10 (Additional parameters). Make any changes you need to the additional prompts that appear on the display.
11. To create the journal definition, press Enter.
2. The Work with Journal Definitions display appears. Type 2 (Change) next to the definition you want and press Enter.
3. The Change Journal Definition (CHGJRNDFN) display appears. Press Enter twice to see all prompts for the display.
4. Make any changes you need to the prompts. Press F1 (Help) for more information about the values for each parameter.
5. If you need to access advanced functions, press F10 (Additional parameters). When the additional parameters appear on the display, make the changes you need.
6. To accept the changes, press Enter.
Note: Changes to the Receiver threshold size (MB) (THRESHOLD) are effective with the next receiver change. Before a change to any other parameter is effective, you must rebuild the journal environment. Rebuilding the journal environment ensures that it matches the journal definition and prevents problems starting the data group.
to build and press Enter. Option 14 calls the Build Journal Environment (BLDJRNENV) command. For environments using remote journaling, the command is called twice (first for the source journal definition and then for the target journal definition). A status message is issued indicating that the journal environment was created for each system.
4. If you plan to journal access paths, you need to change the value of the receiver size options. To do this, do the following:
   a. Type the command CHGJRN and press F4 (Prompt).
   b. For the JRN parameter, specify the name of the journal from the journal definition.
   c. Specify *GEN for the JRNRCV parameter.
   d. Specify *NONE for the RCVSIZOPT parameter.
   e. Press Enter.
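Steps 4a through 4e correspond to an invocation like the following sketch; the journal and library names are placeholders.

```
CHGJRN JRN(MYLIB/MYJRN)   /* journal name from the journal definition */
       JRNRCV(*GEN)       /* let the system generate the new receiver */
       RCVSIZOPT(*NONE)   /* remove receiver size options so access
                             paths can be journaled                   */
```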
3. Verify that the remote journal link is not in use on both systems. Use topic Displaying status of a remote journal link in the Using MIMIX book. The remote journal link should have a state value of *INACTIVE before you continue.
4. Remove the connection to the remote journal as follows:
   a. Access the journal definitions for the data group whose environment you want to change. From the Work with Data Groups display, type a 45 (Journal definitions) next to the data group that you want and press Enter.
   b. Type a 12 (Work with RJ links) next to either journal definition you want and press Enter. You can select either the source or target journal definition.
      Note: The target journal definition will end with @R.
   c. From the Work with RJ Links display, choose the link based on the name in the Target Jrn Def column. Type a 15 (Remove RJ connection) next to the link with the target journal definition you want and press Enter.
   d. A confirmation display appears. To continue removing the connections for the selected links, press Enter.
5. From the Work with RJ Links display, do the following to delete the target system objects associated with the RJ link:
   Note: The target journal definition will end with @R.
   a. Type a 24 (Delete target jrn environment) next to the link that you want and press Enter.
   b. A confirmation display appears. To continue deleting the journal, its associated message queue, and the journal receiver, press Enter.
6. Make the changes you need for the target journal. For example, to change the target (remote) journal definition to a new receiver library, do the following:
   a. Press F12 to return to the Work with Journal Definitions display.
   b. Type option 2 (Change) next to the journal definition for the target system you want and press Enter.
7. From the Work with Journal Definitions display, type a 14 (Build) next to the target journal definition and press Enter.
   Note: The target journal definition will end with @R.
8. Return to the Work with Data Groups display. Then do the following:
   a. Type an 8 (Display status) next to the data group you want and press Enter.
   b. Locate the name of the receiver in the Last Read field for the Database process.
9. Do the following to start the RJ link:
   a. From the Work with Data Groups display, type a 44 (RJ links) next to the data group you want and press Enter.
   b. Locate the link you want based on the name in the Target Jrn Def column. Type a 9 (Start) next to the link with the target journal definition and press F4 (Prompt).
   c. The Start Remote Journal Link (STRRJLNK) display appears. Specify the receiver name from Step 8b as the value for the Starting journal receiver (STRRCV) prompt and press Enter.
10. Start the data group using default values. Refer to topic Starting selected data group processes in the Using MIMIX book.
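Step 9 might prompt to something like the following sketch. The receiver name is a placeholder for the value read in Step 8b, and the parameters identifying the link are elided because option 9 fills them in from the display.

```
STRRJLNK ...                /* link identification supplied by option 9 */
         STRRCV(MYJRN0123)  /* placeholder: receiver name from the
                               Last Read field located in Step 8b       */
```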
6. At the Description prompt, type a text description of the link, enclosed in apostrophes. 7. To create the link between journal definitions, press Enter.
4. When you are ready to accept the changes, press Enter. 5. To make the changes effective, do the following: a. If you removed the RJ connection in Step 1, you need to use topic Building the journaling environment on page 219. b. Start the data group which uses the RJ link.
2. Verify that the process is ended. On the Work with Data Groups display, the data group should change to show a red L in the Source DB column.
3. Modify the data group definition as follows:
   a. From the Work with DG Definitions display, type a 2 (Change) next to the data group you want and press Enter.
   b. The Change Data Group Definition (CHGDGDFN) display appears. Press Enter to see additional prompts.
   c. Specify *NO for the Use remote journal link prompt.
   d. To accept the change, press Enter.
4. Use the procedure Starting selected data group processes in the Using MIMIX book, specifying *ALL for the Start Process prompt.
b. A confirmation display appears. To continue deleting the journal, its associated message queue, the journal receiver, and to remove the connection to the source journal receiver, press Enter. 5. Delete the target journal definition using topic Deleting a Definition in the Using MIMIX book. When you delete the target journal definition, its link to the source journal definition is removed. 6. Use option 4 (Delete) on the Work with Monitors display to delete the RJLNK monitors which have the same name as the RJ link.
Chapter 10
One of the system definitions specified must represent a management system. Although you can specify the system definitions in any order, you may find it helpful to specify them in the order in which replication occurs during normal operations. For many users, normal replication occurs from a production system to a backup system, where the backup system is defined as the management system for MIMIX. For example, if you normally replicate data for an application from a production system (MEXICITY) to a backup system (CHICAGO) and the backup system is the management system for the MIMIX cluster, you might name your data group SUPERAPP MEXICITY CHICAGO.
The Short data group name (DGSHORTNAM) parameter indicates an abbreviated name used as a prefix to identify jobs associated with a data group. MIMIX generates this prefix for you when the default *GEN is used. The short name must be unique to the MIMIX cluster and cannot be changed after the data group is created.
Data source (DTASRC)
This parameter indicates which of the systems in the data group definition is used as the source of data for replication.
Allow to be switched (ALWSWT)
This parameter determines whether the direction in which data is replicated between systems can be switched. If you plan to use the data group for high availability purposes, use the default value *YES. This allows you to use one data group for replicating data in either direction between the two systems. If you do not allow switching directions, you need to have a second data group with
similar attributes in which the roles of source and target are reversed in order to support high availability.
Data group type (TYPE)
The default value *ALL indicates that the data group can be used by both user journal and system journal replication processes. This enables you to use the same data group for all of the replicated data for an application. The value *ALL is required for user journal replication of IFS objects, data areas, and data queues. MIMIX Dynamic Apply also supports the value *DB. For additional information, see Requirements and limitations of MIMIX Dynamic Apply on page 110.
Note: In clustering environments only, the data group value *PEER is available. This provides you with support for system values and other system attributes that MIMIX currently does not support.
Transfer definitions (PRITFRDFN, SECTFRDFN)
These parameters identify the transfer definitions used to communicate between the systems defined by the data group. The name you specify in these parameters must match the first part of a transfer definition name. By default, MIMIX uses the name PRIMARY as the value of the Primary transfer definition (PRITFRDFN) parameter and for the first part of the name of a transfer definition. If you specify a secondary transfer definition (SECTFRDFN), it is used if the communications path specified in the primary transfer definition is not available. Once MIMIX starts using the secondary transfer definition, it continues to use it even after the primary communications path becomes available again.
Reader wait time (seconds) (RDRWAIT)
You can specify the maximum number of seconds that the send process waits when there are no entries available to process. Jobs go into a delay state when there are no entries to process. Jobs wait for the time you specify even when new entries arrive in the journal. A value of 0 uses more system resources.
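A hedged sketch of the transfer definition and switching prompts, reusing the SUPERAPP MEXICITY CHICAGO name from the example above; the secondary transfer definition name and the command invocation itself are illustrative assumptions.

```
CHGDGDFN DGDFN(SUPERAPP MEXICITY CHICAGO)  /* three-part data group name */
         PRITFRDFN(PRIMARY)   /* default primary transfer definition    */
         SECTFRDFN(BACKUP)    /* hypothetical secondary definition, used
                                 if the primary path is unavailable     */
         ALWSWT(*YES)         /* allow the direction to be switched     */
```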
Common database parameters (JRNTGT, JRNDFN1, JRNDFN2, ASPGRP1, ASPGRP2, RJLNK, COOPJRN, NBRDBAPY, DBJRNPRC)
These parameters apply to data groups that can include database files or tracking entries. Data group types of *ALL or *DB include database files. Data group types of *ALL may also include tracking entries.
Journal on target (JRNTGT)
The default value *YES enables journaling on the target system, which allows you to switch the direction of a data group more quickly. Replication of files with some types of referential constraint actions may require a value of *YES. For more information, see Considerations for LF and PF files on page 105. If you specify *NO, you must ensure that, in the event of a switch to the direction of replication, you manually start journaling on the target system before allowing users to access the files. Otherwise, activity against those files may not be properly recorded for replication.
The System 1 journal definition (JRNDFN1) and System 2 journal definition (JRNDFN2) parameters identify the user journal definitions associated with the systems defined as System 1 and System 2, respectively, of the data group. The value *DGDFN indicates that the journal definition has the same name as the data
235
group definition. The DTASRC, ALWSWT, JRNTGT, JRNDFN1, and JRNDFN2 parameters interact to automatically create as much of the journaling environment as possible. The DTASRC parameter determines whether system 1 or system 2 is the source system for the data group. When you create the data group definition, if the journal definition for the source system does not exist, a journal definition is created. If you specify to journal on the target system and the journal definition for the target system does not exist, that journal definition is also created. The names of journal definitions created in this way are taken from the values of the JRNDFN1 and JRNDFN2 parameters according to which system is considered the source system at the time they are created. You may need to build the journaling environment for these journal definitions.
The System 1 ASP group (ASPGRP1) and System 2 ASP group (ASPGRP2) parameters identify the name of the primary auxiliary storage pool (ASP) device within an ASP group on each system. The value *NONE allows replication from libraries in the system ASP and basic user ASPs 2-32. Specify a value when you want to replicate IFS objects from a user journal or when you want to replicate objects from ASPs 33 or higher. For more information, see Benefits of independent ASPs on page 564.
Use remote journal link (RJLNK)
This parameter identifies how journal entries are moved to the target system. The default value, *YES, uses remote journaling to transfer data to the target system. This value results in the automatic creation of the journal definitions (CRTJRNDFN command) and the RJ link (ADDRJLNK command), if needed. The RJ link defines the source and target journal definitions and the connection between them. When ADDRJLNK is run during the creation of a data group, the data group transfer definition names are used for the ADDRJLNK transfer definition parameters. MIMIX Dynamic Apply requires the value *YES.
The value *NO is appropriate when MIMIX source-send processes must be used.
Cooperative journal (COOPJRN)
This parameter determines whether cooperatively processed operations for journaled objects are performed primarily by user (database) journal replication processes or system (audit) journal replication processes. Cooperative processing through the user journal is recommended and is called MIMIX Dynamic Apply. For data groups created on version 5, the shipped default value *DFT resolves to *USRJRN (user journal) when configuration requirements for MIMIX Dynamic Apply are met. If those requirements are not met, *DFT resolves to *SYSJRN and cooperative processing is performed through system journal replication processes.
Number of DB apply sessions (NBRDBAPY)
You can specify the number of apply sessions allowed to process the data for the data group.
DB journal entry processing (DBJRNPRC)
This parameter allows you to specify several criteria that MIMIX uses to filter user journal entries before they reach the database apply (DBAPY) process. Each element of the parameter identifies a criterion that can be set to either *SEND or *IGNORE. The value *SEND causes the journal entries meeting the criterion to be processed and sent to the database apply process. For data groups configured to use
236
MIMIX source-send processes, *SEND can minimize the amount of data that is sent over a communications path. The value *IGNORE prevents the entries from being sent to the database apply process. Certain database techniques, such as keyed replication, may require that an element be set to a specific value. The following elements describe how journal entries are handled by the database reader (DBRDR) or the database send (DBSND) processes.
Before images
This criterion determines whether before-image journal entries are filtered out before reaching the database apply process. If you use keyed replication, the before-images are often required and you should specify *SEND. *SEND is also required for the IBM RMVJRNCHG (Remove Journal Change) command. See Additional considerations for data groups on page 244 for more information.
For files not in data group
This criterion determines whether journal entries for files not defined to the data group are filtered out.
Generated by MIMIX activity
This criterion determines whether journal entries resulting from the MIMIX database apply process are filtered out.
Not used by MIMIX
This criterion determines whether journal entries not used by MIMIX are filtered out.
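A sketch of how the four criteria might be set is shown below. The order of the elements within DBJRNPRC is an assumption; prompting with F4 shows the actual order, and the values here simply illustrate one plausible combination.

```
CHGDGDFN DGDFN(SUPERAPP MEXICITY CHICAGO)     /* hypothetical name */
         DBJRNPRC(*SEND *IGNORE *IGNORE *IGNORE)
         /* Before images: *SEND (needed for keyed replication  */
         /*   and for the RMVJRNCHG command)                    */
         /* Files not in data group: *IGNORE (filter them out)  */
         /* Generated by MIMIX activity: *IGNORE                */
         /* Not used by MIMIX: *IGNORE                          */
```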
Additional parameters: Use F10 (Additional parameters) to access the following parameters. These parameters are considered advanced configuration topics.
Remote journaling threshold (RJLNKTHLD)
This parameter specifies the backlog threshold criteria for the remote journal function. When the backlog reaches any of the specified criteria, the threshold exceeded condition is indicated in the status of the RJ link. The threshold can be specified as a time difference, a number of journal entries, or both. When a time difference is specified, the value is the amount of time, in minutes, between the timestamp of the last source journal entry and the timestamp of the last remote journal entry. When a number of journal entries is specified, the value is the number of journal entries that have not been sent from the local journal to the remote journal. If *NONE is specified for a criterion, that criterion is not considered when determining whether the backlog has reached the threshold.
Synchronization check interval (SYNCCHKITV)
This parameter, which is only valid for database processing, allows you to specify how many before-image entries to process between synchronization checks. For MIMIX to use this feature, the journal image file entry option (FEOPT parameter) must allow before-image journaling (*BOTH). When you specify a value for the interval, a synchronization check entry is sent to the apply process on the target system. The apply process compares the before-image to the image in the file (the entire record, byte for byte). If there is a synchronization problem, MIMIX puts the data group file entry on hold and stops applying journal entries. The synchronization check transactions still occur even if you specify to ignore before-images in the DB journal entry processing (DBJRNPRC) parameter.
Time stamp interval (TSPITV)
This parameter, which is only valid for database processing, allows you to specify the number of entries to process before MIMIX creates a time stamp entry. Time stamps are used to evaluate performance.
Note: The TSPITV parameter does not apply for remote journaling (RJ) data groups.
Verify interval (VFYITV)
This parameter allows you to specify the number of journal transactions (entries) to process before MIMIX performs additional processing. When the specified value is reached, MIMIX verifies that the communications path between the source system and the target system is still active and that the send and receive processes are successfully processing transactions. A higher value uses less system resources; a lower value provides more timely reaction to error conditions. Larger, high-volume systems should have higher values. This value also affects how often the status is updated with the "Last read" entries. A lower value results in more accurate status information.
Data area polling interval (DTAARAITV)
This parameter specifies the number of seconds that the data area poller waits between checks for changes to data areas. The poller process is only used when configured data group data area entries exist. The preferred methods of replicating data areas require that data group object entries be used to identify data areas. When object entries identify data areas, the value specified in them for cooperative processing (COOPDB) determines whether the data areas are processed through the user journal with advanced journaling or through the system journal.
Journal at creation (JRNATCRT)
This parameter allows you to specify whether to start journaling when objects are created in the libraries replicated by the data group. This applies to new objects of type *FILE, *DTAARA, and *DTAQ that are cooperatively processed. All new objects of the same type are journaled, including those not replicated by the data group. If multiple data groups include the same library in their configurations, only allow one data group to use journal at object creation (*YES or *DFT). The default for this parameter is *DFT, which allows MIMIX to determine the objects to journal at creation.
For example, suppose a data group is configured to cooperatively process only file ABC from library APPDTA. The library also contains data areas and temporary files that are not configured for replication. Specifying a value that permits journaling of newly created objects (*YES or *DFT) will result in all newly created files in library APPDTA being journaled. Newly created data areas in this library would not be journaled.
Note: There are operating system restrictions and some IBM library restrictions. For more information, see the requirements for implicit starting of journaling in What objects need to be journaled on page 323. For additional information, see Processing of newly created files and objects on page 127.
Parameters for automatic retry processing: MIMIX may use delay retry cycles when performing system journal replication to automatically retry processing an object that failed due to a locking condition or an in-use condition. It is normal for some pending activity entries to undergo delay retry processing; for example, when a conflict occurs between replicated objects in MIMIX and another job on the system. The following parameters define the scope of two retry cycles:
Number of times to retry (RTYNBR)
This parameter specifies the number of attempts to make during a delay retry cycle.
First retry delay interval (RTYDLYITV1)
This parameter specifies the amount of time, in seconds, to wait before retrying a process in the first (short) delay retry cycle.
Second retry delay interval (RTYDLYITV2)
This parameter specifies the amount of time, in
seconds, to wait before retrying a process in the second (long) delay retry cycle. This cycle is only used after all the retries for the RTYDLYITV1 parameter have been attempted.
After the initial failed save attempt, MIMIX delays for the number of seconds specified for the First retry delay interval (RTYDLYITV1) before retrying the save operation. This is repeated for the specified number of times (RTYNBR). If the object cannot be saved after all attempts in the first cycle, MIMIX enters the second retry cycle. In the second retry cycle, MIMIX uses the number of seconds specified in the Second retry delay interval (RTYDLYITV2) parameter and repeats the save attempt for the specified number of times (RTYNBR). If the object identified by the entry is in use (*INUSE) after the first and second retry cycle attempts have been exhausted, a third retry cycle is attempted if the Automatic object recovery policy is enabled. The values in effect for the Number of third delay/retries policy and the Third retry interval (min.) policy determine the scope of the third retry cycle. After all attempts have been performed, if the object still cannot be processed because of contention with other jobs, the status of the entry is changed to *FAILED.
Adaptive cache (ADPCHE)
This parameter enables adaptive caching for a data group. Adaptive caching is a technique by which MIMIX caches data into memory before it is needed by user journal replication processes. Using adaptive caching provides greater elapsed-time performance by using additional memory.
File and tracking entry options (FEOPT)
This parameter specifies default options that determine how MIMIX handles file entries and tracking entries for the data group. All database file entries, object tracking entries, and IFS tracking entries defined to the data group use these options unless they are explicitly overridden by values specified in data group file or object entries.
File entry options in data group object entries enable you to set values for files and tracking entries that are cooperatively processed. The options are as follows:
Journal image
This option allows you to control the kinds of record images that are written to the journal when data updates are made to database file records, IFS stream files, data areas, or data queues. The default value *AFTER causes only after-images to be written to the journal. The value *BOTH causes both before-images and after-images to be written to the journal. Some database techniques, such as keyed replication, may require the use of both before-images and after-images. *BOTH is also required for the IBM RMVJRNCHG (Remove Journal Change) command. See Additional considerations for data groups on page 244 for more information.
Omit open/close entries
This option allows you to specify whether open and close entries are omitted from the journal. The default value *YES indicates that open and close operations on file members or IFS tracking entries defined to the data group do not create open and close journal entries and are therefore omitted from the journal. If you specify *NO, journal entries are created for open and close operations and are placed in the journal.
Replication type
This option allows you to specify the type of replication to use for
database files defined to the data group. The default value *POSITION indicates that each file is replicated based on the position of the record within the file. Positional replication uses the value of the relative record number (RRN) found in the journal entry header to locate a database record that is being updated or deleted. MIMIX Dynamic Apply requires the value *POSITION. The value *KEYED indicates that each file is replicated based on the value of the primary key defined to the database file. The value of the key is used to locate a database record that is being deleted or updated. MIMIX strongly recommends that any file configured for keyed replication also be enabled for both before-image and after-image journaling. Files defined using keyed replication must have at least one unique access path defined. For additional information, see Keyed replication on page 355.

Lock member during apply This option allows you to choose whether you want the database apply process to lock file members while they are being updated during the apply process. This prevents inadvertent updates on the target system that can cause synchronization errors. Members are locked only when the apply process is active.

Apply session With this option, you can assign a specific apply session for processing files defined to the data group. The default value *ANY indicates that MIMIX determines which apply session to use and performs load balancing. Notes: Any changes made to the apply session option are not effective until the data group is started with *YES specified for the clear pending and clear error parameters. For IFS and object tracking entries, only apply session A is valid. For additional information see Database apply session balancing on page 87.
Collision resolution This option determines how data collisions are resolved. The default value *HLDERR indicates that a file is put on hold if a collision is detected. The value *AUTOSYNC indicates that MIMIX will attempt to automatically synchronize the source and target file. You can also specify the name of the collision resolution class (CRCLS) to use. A collision resolution class allows you to specify how to handle a variety of collision types, including calling exit programs to handle them. See the online help for the Create Collision Resolution Class (CRTCRCLS) command for more information. Note: The *AUTOSYNC value should not be used if the Automatic database recovery policy is enabled.
Disable triggers during apply This option determines if MIMIX should disable any triggers on physical files during the database apply process. The default value *YES indicates that triggers should be disabled by the database apply process while the file is opened.

Process trigger entries This option determines if MIMIX should process any journal entries that are generated by triggers. The default value *YES indicates that journal entries generated by triggers should be processed.
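As an illustration, the FEOPT defaults can be set when changing a data group definition. In the following sketch, the data group name INVDG and the system names SYS1 and SYS2 are placeholders, and it is assumed that Journal image is the first element of FEOPT (it is the first option described above); prompt the CHGDGDFN command (F4) to confirm element positions before use:

    CHGDGDFN DGDFN(INVDG SYS1 SYS2) FEOPT(*BOTH)

When prompted, elements you do not specify retain their current values.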
Database reader/send threshold (DBRDRTHLD) This parameter specifies the backlog threshold criteria for the database reader (DBRDR) process. When the backlog reaches any of the specified criteria, the threshold exceeded condition is indicated in the status of the DBRDR process. If the data group is configured for MIMIX source-send processing instead of remote journaling, this threshold applies to the database send (DBSND) process. The threshold can be specified as time, journal entries, or both. When time is specified, the value is the amount of time, in minutes, between the timestamp of the last journal entry read by the process and the timestamp of the last journal entry in the journal. When a journal entry quantity is specified, the value is the number of journal entries that have not been read from the journal. If *NONE is specified for a criterion, that criterion is not considered when determining whether the backlog has reached the threshold.

Database apply processing (DBAPYPRC) This parameter allows you to specify defaults for operations associated with the database apply processes. Each configured apply session uses the values specified in this parameter. The areas for which you can specify defaults are as follows:

Force data interval You can specify the number of records that are processed before MIMIX forces the apply process information to disk from cache memory. A lower value provides easier recovery after major system failures. A higher value provides more efficient processing.

Maximum open members You can specify the maximum number of members (with journal transactions to be applied) that the apply process can have open at one time. Once the specified limit is reached, the apply process selectively closes one file before opening a new file. A lower value reduces disk usage by the apply process. A higher value provides more efficient processing because MIMIX does not open and close files as often.
Threshold warning You can specify the number of entries the apply process can have waiting to be applied before a warning message is sent. When the threshold is reached, the threshold exceeded condition is indicated in the status of the database apply process and a message is sent to the primary and secondary message queues.

Apply history log spaces You can specify the maximum number of history log spaces that are kept after the journal entries are applied. Any value other than zero (0) affects performance of the apply processes.

Keep journal log user spaces You can specify the maximum number of journal log spaces to retain after the journal entries are applied. Log user spaces are automatically deleted by MIMIX; only the number of user spaces you specify are kept.

Size of log user spaces (MB) You can specify the size of each log space (in megabytes) in the log space chain. Log spaces are used as a staging area for journal entries before they are applied. Larger log spaces provide better performance.
Object processing (OBJPRC) This parameter allows you to specify defaults for object replication. The areas for which you can specify defaults are as follows: Object default owner You can specify the name of the default owner for objects
whose owning user profile does not exist on the target system. The product default uses QDFTOWN for the owner user profile.

DLO transmission method You can specify the method used to transmit the DLO content and attributes to the target system. The value *OPTIMIZED uses i5/OS APIs. The value *SAVRST uses i5/OS save and restore commands.

IFS transmission method You can specify the method used to transmit IFS object content to the target system. The value *SAVRST uses i5/OS save and restore commands. The value *OPTIMIZED uses i5/OS APIs. Note: It is recommended that you use the *OPTIMIZED method of IFS transmission only in environments in which the high volume of IFS activity results in persistent replication backlogs. The i5/OS save and restore method guarantees that all attributes of an IFS object are replicated. The IFS optimization method does not currently replicate digital signatures or other attributes that have been added in i5/OS V5R2 or later.

User profile status You can specify the user profile Status value for user profiles when they are replicated. This allows you to replicate user profiles with the same status as the source system, in either an enabled or disabled status, for normal operations. If operations are switched to the backup system, user profiles can then be enabled or disabled as needed as part of the switching process.

Keep deleted spooled files You can specify whether to retain replicated spooled files on the target system after they have been deleted from the source system. When you specify *YES, the replicated spooled files are retained on the target system after they are deleted from the source system. MIMIX does not perform any clean-up of these spooled files. You must delete them manually when they are no longer needed. If you specify *NO, the replicated spooled files are deleted from the target system when they are deleted from the source system.
Keep DLO system object name You can specify whether the DLO on the target system is created with the same system object name as the DLO on the source system. The system object name is preserved only if the DLO is not being redirected during the replication process. If the DLO from the source system is being directed to a different name or folder on the target system, the system object name is not preserved.

Object retrieval delay You can specify the amount of time, in seconds, to wait after an object is created or updated before MIMIX packages the object. This delay provides time for your applications to complete their access of the object before MIMIX begins packaging the object.
Object send threshold (OBJSNDTHLD) This parameter specifies the backlog threshold criteria for the object send (OBJSND) process. When the backlog reaches any of the specified criteria, the threshold exceeded condition is indicated in the status of the OBJSND process. The threshold can be specified as time, journal entries, or both. When time is specified, the value is the amount of time, in minutes, between the timestamp of the last journal entry read by the process and the timestamp of the last journal entry in the journal. When a journal entry quantity is specified, the value is the number of journal entries that have not been read from the journal. If *NONE is specified for a criterion, that criterion is not considered when determining whether the backlog has reached the threshold.
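The threshold criteria can be expressed as time, entries, or both. For example, the following sketch sets the object send threshold to 15 minutes or 100,000 unread journal entries, whichever is reached first. The data group name INVDG is a placeholder and the element order within OBJSNDTHLD is an assumption; prompt the command (F4) to confirm before use:

    CHGDGDFN DGDFN(INVDG SYS1 SYS2) OBJSNDTHLD(15 100000)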
Object retrieve processing (OBJRTVPRC) This parameter allows you to specify the minimum and maximum number of jobs allowed to handle object retrieve requests and the threshold at which the number of pending requests queued for processing causes additional temporary jobs to be started. The specified minimum number of jobs will be started when the data group is started. During periods of peak activity, if the number of pending requests exceeds the backlog jobs threshold, additional jobs, up to the maximum, are started to handle the extra work. When the backlog is handled and activity returns to normal, the extra jobs will automatically end. If the backlog reaches the warning message threshold, the threshold exceeded condition is indicated in the status of the object retrieve (OBJRTV) process. If *NONE is specified for the warning message threshold, the process status will not indicate that a backlog exists.

Container send processing (CNRSNDPRC) This parameter allows you to specify the minimum and maximum number of jobs allowed to handle container send requests and the threshold at which the number of pending requests queued for processing causes additional temporary jobs to be started. The specified minimum number of jobs will be started when the data group is started. During periods of peak activity, if the number of pending requests exceeds the backlog jobs threshold, additional jobs, up to the maximum, are started to handle the extra work. When the backlog is handled and activity returns to normal, the extra jobs will automatically end. If the backlog reaches the warning message threshold, the threshold exceeded condition is indicated in the status of the container send (CNRSND) process. If *NONE is specified for the warning message threshold, the process status will not indicate that a backlog exists.
Object apply processing (OBJAPYPRC) This parameter allows you to specify the minimum and maximum number of jobs allowed to handle object apply requests and the threshold at which the number of pending requests queued for processing triggers additional temporary jobs to be started. The specified minimum number of jobs will be started when the data group is started. During periods of peak activity, if the number of pending requests exceeds the backlog threshold, additional jobs, up to the maximum, are started to handle the extra work. When the backlog is handled and activity returns to normal, the extra jobs will automatically end. You can also specify a warning message threshold that indicates the number of pending requests that can be waiting in the queue for processing before a warning message is sent. When the threshold is reached, the threshold exceeded condition is indicated in the status of the object apply process and a message is sent to the primary and secondary message queues.

User profile for submit job (SBMUSR) This parameter allows you to specify the name of the user profile used to submit jobs. The default value *JOBD indicates that the user profile named in the specified job description is used for the job being submitted. The value *CURRENT indicates that the same user profile used by the job that is currently running is used for the submitted job.

Send job description (SNDJOBD) This parameter allows you to specify the name and library of the job description used to submit send jobs. The product default uses MIMIXSND in library MIMIXQGPL for the send job description.
Apply job description (APYJOBD) This parameter allows you to specify the name and library of the job description used to submit apply requests. The product default uses MIMIXAPY in library MIMIXQGPL for the apply job description.

Reorganize job description (RGZJOBD) This parameter, used by database processing, allows you to specify the name and library of the job description used to submit reorganize jobs. The product default uses MIMIXRGZ in library MIMIXQGPL for the reorganize job description.

Synchronize job description (SYNCJOBD) This parameter, used by database processing, allows you to specify the name and library of the job description used to submit synchronize jobs. The product default uses MIMIXSYNC in library MIMIXQGPL for the synchronize job description. This is valid for any synchronize command that does not have a JOBD parameter on the display.

Job restart time (RSTARTTIME) MIMIX data group jobs restart daily to maintain the MIMIX environment. You can change the time at which these jobs restart. The source or target role of the system affects the results of the time you specify on a data group definition. Results may also be affected if you specify a value that uses the job restart time in a system definition defined to the data group. Changing the job restart time is considered an advanced technique.

Recovery window (RCYWIN) Configuring a recovery window for a data group specifies the minimum amount of time, in minutes, that a recovery window is available and identifies the replication processes that permit a recovery window. A recovery window introduces a delay in the specified processes to create a minimum time during which you can set a recovery point. Once a recovery point is set, you can react to anticipated problems and take action to prevent a corrupted object from reaching the target system. When the processes reach the recovery point, they are suspended so that any corruption in the transactions after that point will not automatically be processed.
By its nature, a recovery window can affect the data group's recovery time objective (RTO). Consider the effect of the duration you specify on the data group's ability to meet your required RTO. You should also disable auditing for any data group that has a configured recovery window. For more information, see Preventing audits from running in the Using MIMIX book.
For each data group file entry, the Journal image element of the File entry options must be *DGDFT or *BOTH. Finally, if you are changing an existing data group to use these values, you must end and restart the data group. Once these values are specified, you will be able to use the RMVJRNCHG command if needed.
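The change-and-restart sequence above can be sketched with commands. In this hedged example, the data group name INVDG and the system names are placeholders, the assumption that Journal image is the first FEOPT element is unverified, and the keywords for the clear pending and clear error prompts on the Start Data Group command are not shown; prompt each command (F4) to confirm:

    CHGDGDFN DGDFN(INVDG SYS1 SYS2) FEOPT(*BOTH)
    ENDDG DGDFN(INVDG SYS1 SYS2)
    STRDG DGDFN(INVDG SYS1 SYS2)  /* specify *YES for clear pending and clear error */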
Updated for 5.0.08.00 and 5.0.13.00.
Note: If you specify *YES and you require that the status of journaling on the target system is accurate, you should perform a save and restore operation on the target system prior to loading the data group file entries. If you are performing your initial configuration, however, it is not necessary to perform a save and restore operation; you will synchronize as part of the configuration checklist.
6. More prompts appear on the display that identify journaling information for the data group. You may need to use the Page Down key to see the prompts. Do the following:
a. Ensure that the values of System 1 journal definition and System 2 journal definition identify the journal definitions you need. Notes: If you have not journaled before, the value *DGDFN is appropriate. If you have an existing journaling environment that you have identified to MIMIX in a journal definition, specify the name of the journal definition. If you see only one of the journal definition prompts, you have specified *NO for both the Allow to be switched prompt and the Journal on target prompt. The journal definition prompt that appears is for the source system as specified in the Data source prompt.
b. If any objects to replicate are located in an auxiliary storage pool (ASP) group on either system, specify values for System 1 ASP group and System 2 ASP group as needed. The ASP group name is the name of the primary ASP device within the ASP group.
c. The default for the Use remote journal link prompt is *YES, which is required for MIMIX Dynamic Apply and preferred for other configurations. MIMIX creates a transfer definition and an RJ link, if needed. To create a data group definition for a source-send configuration, change the value to *NO.
d. At the Cooperative journal (COOPJRN) prompt, specify the journal for cooperative operations. For new data groups, the value *DFT automatically resolves to *USRJRN when Data group type is *ALL or *DB and Remote journal link is *YES.
The value *USRJRN processes through the user (database) journal while the value *SYSJRN processes through the system (audit) journal.
7. At the Number of DB apply sessions prompt, specify the number of apply sessions you want to use.
8. Verify that the values shown for the DB journal entry processing prompts are what you want. Note: *SEND is required for the IBM RMVJRNCHG (Remove Journal Change) command. See Additional considerations for data groups on page 244 for more information.
9. At the Description prompt, type a text description of the data group definition, enclosed in apostrophes.
10. Do one of the following:
To accept the basic data group configuration, press Enter. Most users can accept the default values for the remaining parameters. The data group is created when you press Enter.
To access prompts for advanced configuration, press F10 (Additional Parameters) and continue with the next step.
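For reference, a minimal create request entered from the command line might look like the following hedged sketch. The data group name INVDG and the system names SYS1 and SYS2 are examples; COOPJRN is documented above, but verify any other keyword by prompting CRTDGDFN (F4):

    CRTDGDFN DGDFN(INVDG SYS1 SYS2) COOPJRN(*DFT) +
             TEXT('Inventory data, SYS1 to SYS2')

Pressing Enter with only these values accepts the defaults for the remaining parameters, as in step 10.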
Advanced Data Group Options: The remaining steps of this procedure are only necessary if you need to access options for advanced configuration topics. The prompts are listed in the order they appear on the display. Because i5/OS does not allow additional parameters to be prompt-controlled, you will see all parameters regardless of the value specified for the Data group type prompt.
11. Specify the values you need for the following prompts associated with user journal replication:
Remote journaling threshold
Synchronization check interval
Time stamp interval
Verify interval
Data area polling interval
Journal at creation
12. Specify the values you need for the following prompts associated with system journal replication:
Number of times to retry
First retry delay interval
Second retry delay interval
13. Accept the value *YES for the Adaptive cache prompt unless the system is memory constrained.
14. Specify the values you need for each of the prompts on the File and tracking ent. opts (FEOPT) parameter. Notes:
Replication type must be *POSITION for MIMIX Dynamic Apply.
Apply session A is used for IFS objects, data areas, and data queues that are configured for user journal replication. For more information see Database apply session balancing on page 87.
The journal image value *BOTH is required for the IBM RMVJRNCHG (Remove Journal Change) command. See Additional considerations for data groups on page 244 for more information.
15. Specify the values you need for each element of the following parameters:
Database reader/send threshold
Database apply processing
Object processing
Object send threshold
Object retrieve processing
Container send processing
Object apply processing
16. If necessary, change the values for the following prompts:
User profile for submit job
Send job description and its Library
Apply job description and its Library
Reorganize job description and its Library
Synchronize job description and its Library
Job restart time
17. When you are sure that you have defined all of the values that you need, press Enter to create the data group definition.
Updated for 5.0.13.00.
threshold conditions would have on RTO and your tolerance for data loss in the event of a failure. Table 31 lists the shipped values for thresholds available in a data group definition, identifies the risk associated with a backlog for each replication process, and identifies available options to address a persistent threshold condition. For each data group, you may need to use multiple options or adjust one or more threshold values multiple times before finding an appropriate setting.
Table 31. Shipped threshold values for replication processes, the risk associated with a backlog, and options for resolving persistent threshold conditions
All journal entries in the backlog for the remote journaling function exist only in the source system journal and are waiting to be transmitted to the remote journal. These entries cannot be processed by MIMIX user journal replication processes and are at risk of being lost if the source system fails. After the source system becomes available again, journal analysis may be required.

For data groups that use remote journaling, all journal entries in the database reader backlog are physically located on the target system but MIMIX has not started to replicate them. If the source system fails, these entries need to be read and applied before switching.

For data groups that use MIMIX source-send processing, all journal entries in the database send backlog are waiting to be read and to be transmitted to the target system. The backlogged journal entries exist only in the source system and are at risk of being lost if the source system fails. After the source system becomes available again, journal analysis may be required.

All of the entries in the database apply backlog are waiting to be applied to the target system. If the source system fails, these entries need to be applied before switching. A large backlog can also affect performance.
Object send threshold (shipped default: 10 minutes)
All of the journal entries in the object send backlog exist only in the system journal on the source system and are at risk of being lost if the source system fails. MIMIX may not have determined all of the information necessary to replicate the objects associated with the journal entries. As this backlog clears, subsequent processes may have backlogs as replication progresses.

All of the objects associated with journal entries in the object retrieve backlog are waiting to be packaged so they can be sent to the target system. The latest changes to these objects exist only in the source system and are at risk of being lost if the source system fails. As this backlog clears, subsequent processes may have backlogs as replication progresses.

All of the packaged objects associated with journal entries in the container send backlog are waiting to be sent to the target system. The latest changes to these objects exist only in the source system and are at risk of being lost if the source system fails. As this backlog clears, subsequent processes may have backlogs as replication progresses.

All of the entries in the object apply backlog are waiting to be applied to the target system. If the source system fails, these entries need to be applied before switching. Any related objects for which an automatic recovery action was collecting data may be lost.
The following options are available, listed in order of preference. Some options are not available for all thresholds.

Option 1 - Adjust the number of available jobs. This option is available only for the object retrieve, container send, and object apply processes. Each of these processes has a configurable minimum and maximum number of jobs, a threshold at which more jobs are started, and a warning message threshold. If the number of entries in a backlog divided by the number of active jobs exceeds the job threshold, extra jobs are automatically started in an attempt to address the backlog. If the backlog reaches the higher value specified in the warning message threshold, the process status reflects the threshold condition. If the process frequently shows a threshold status, the
maximum number of jobs may be too low or the job threshold value may be too high. Adjusting either value in the data group configuration can result in more throughput.

Option 2 - Temporarily increase job performance. This option is available for all processes except the RJ link. Use work management functions to increase the resources available to a job by increasing its run priority or its timeslice (CHGJOB command). These changes are effective only for the current instance of the job. The changes do not persist if the job is ended manually or by nightly cleanup operations resulting from the configured job restart time (RSTARTTIME) on the data group definition.

Option 3 - Change threshold values or add criteria. All processes support changing the threshold value. In addition, if the quantity of entries is more of a concern than time, some processes support specifying additional threshold criteria not used by shipped default settings. For the remote journal, database reader (or database send), and object send processes, you can adjust the threshold so that a number of journal entries is used as criteria instead of, or in conjunction with, a time value. If both time and entries are specified, the first criterion reached will trigger the threshold condition. Changes to threshold values are effective the next time the process status is requested.

Option 4 - Get assistance. If you have tried the other options and threshold conditions persist, contact your Certified MIMIX Consultant for assistance. It may be necessary to change configurations to adjust what is defined to each data group or to make permanent work management changes for specific jobs.
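As an illustration of Option 2, the IBM i CHGJOB command can raise a job's run priority and timeslice. The qualified job name below is a placeholder; identify the actual replication job first, for example with the WRKACTJOB command:

    CHGJOB JOB(123456/MIMIXOWN/OBJAPY01) RUNPTY(15) TIMESLICE(5000)

Remember that these changes last only for the current instance of the job.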
Updated for 5.0.13.00.
Chapter 11
Copying a definition
Use this procedure on a management system to copy a system definition, transfer definition, journal definition, or a data group definition. Notes for data group definitions: The data group entries associated with a data group definition are not copied. Before you copy a data group definition, ensure that activity is ended for the definition to which you are copying.
Notes for journal definitions: The journal definition identified in the From journal definition prompt must exist before it can be copied. The journal definition identified in the To journal definition prompt cannot exist when you specify *NO for the Replace definition prompt. If you specify *YES for the Replace definition prompt, the journal definition identified in the To journal definition prompt must exist. It is possible to introduce conflicts in your configuration when replacing an existing journal definition. These conflicts are automatically resolved or an error message is sent when the journal environment for the definition is built.
To copy a definition, do the following: Note: The following procedure includes using MIMIX menus. See Accessing the
MIMIX Main Menu on page 91 for information about using these.
1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, select the option for the type of definition you want and press Enter.
3. The "Work with" display for the definition type appears. Type a 3 (Copy) next to the definition you want and press Enter.
4. The Copy display for the definition type you selected appears. At the To definition prompt, specify a name for the definition to which you are copying information.
5. If you are copying a journal definition or a data group definition, the display has additional prompts. Verify that the values of the prompts are what you want.
6. The value *NO for the Replace definition prompt prevents you from replacing an existing definition. If you want to replace an existing definition, specify *YES.
7. To copy the definition, press Enter.
Deleting a definition
Use this procedure on a management system to delete a system definition, transfer definition, journal definition, or a data group definition.
Attention: When you delete a system or data group definition, information associated with the definition is also deleted. Ensure that the definition you delete is not being used for replication and be aware of the following:
If you delete a system definition, all other configuration elements associated with that definition are deleted. This includes journal definitions, transfer definitions, and data group definitions with all associated data group entries.
If you delete a data group definition, all of its associated data group entries are also deleted. The delete function does not clean up any records for files in the error/hold file.
When you delete a journal definition, only the definition is deleted. The files being journaled, the journal, and the journal receivers are not deleted.
To delete a definition, do the following:
Note: The following procedure includes using MIMIX menus. See Accessing the MIMIX Main Menu on page 91 for information about using these.
1. Ensure that the definition you want to delete is not being used for replication. Do the following:
a. From the MIMIX Main Menu, select option 2 (Work with systems) and press Enter.
b. Type an 8 (Work with data groups) next to the system you want and press Enter.
c. The result is a list of data groups for the system you selected. Type a 17 (File entries) next to the data group you want and press Enter.
d. On the Work with DG File Entries display, verify that the status of the file entries is *INACTIVE. If necessary, use option 10 (End journaling).
e. On the Work with Data Groups display, use option 10 (End data group).
f. Before deleting a system definition, on the Work with Systems display, use option 10 (End managers).
2. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.
3. From the MIMIX Configuration Menu, select the option for the type of definition you want and press Enter.
4. The "Work with" display for the definition type appears. Type a 4 (Delete) next to the definition you want and press Enter.
5. A confirmation display appears with a list of definitions to be deleted. To delete the definitions, press Enter.
Displaying a definition
Use this procedure to display a system definition, transfer definition, journal definition, or a data group definition. To display a definition, do the following:
Note: The following procedure includes using MIMIX menus. See Accessing the MIMIX Main Menu on page 91 for information about using these.
1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, select the option for the type of definition you want and press Enter.
3. The "Work with" display for the definition type appears. Type a 5 (Display) next to the definition you want and press Enter.
4. The definition display appears. Page Down to see all of the values.
Printing a definition
Use this procedure to create a spooled file, which you can print, that identifies a system definition, transfer definition, journal definition, or data group definition. To print a definition, do the following:
Note: The following procedure includes using MIMIX menus. See Accessing the MIMIX Main Menu on page 91 for information about using these.
1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, select the option for the type of definition you want and press Enter.
3. The "Work with" display for the definition type appears. Type a 6 (Print) next to the definition you want and press Enter.
4. A spooled file is created with a name of MX***DFN, where *** indicates the type of definition. You can print the spooled file according to your standard print procedures.
Renaming definitions
The procedures for renaming a system definition, transfer definition, journal definition, or data group definition must be run from a management system.
Attention: Before you rename any definition, ensure that all other configuration elements related to it are not active.
This section includes the following procedures:
Renaming a system definition on page 258
Renaming a transfer definition on page 261
Renaming a journal definition with considerations for RJ link on page 262
Renaming a data group definition on page 263
names, a temporary system definition name must be used because there cannot be two system definitions with the same name.
Attention: Before you rename a system definition, ensure that MIMIX activity is ended by using the End Data Group (ENDDG) and End MIMIX Manager (ENDMMXMGR) commands.
To rename system definitions, do the following for each system whose definition you are renaming, from the management system unless noted otherwise:
Note: The following procedure includes using MIMIX menus. See Accessing the MIMIX Main Menu on page 91 for information about using these.
1. Perform a controlled end of the MIMIX installation. See the Using MIMIX book for procedures for ending MIMIX.
2. End the MIMIXSBS subsystem on all systems. See the Using MIMIX book for procedures for ending the MIMIXSBS subsystem.
3. From the MIMIX Intermediate Main Menu, select option 2 (Work with systems) and press Enter.
4. From the Work with Systems display, select option 8 (Work with data groups) on the system whose definition you are renaming, and press Enter.
5. For each data group listed, do the following:
a. From the Work with Data Groups display, select option 8 (Display status) and press Enter.
b. Record the Last Read Receiver name and Sequence # for both database and object.
6. If changing the host name or IP address, do the following steps. Otherwise, continue with Step 7.
a. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu) and press Enter.
b. From the MIMIX Configuration Menu, select option 2 (Work with transfer definitions) and press Enter.
c. The Work with Transfer Definitions display appears. Select option 2 (Change) for each transfer definition that includes the system whose definition you are renaming and press Enter.
d. The Change Transfer Definition (CHGTFRDFN) display appears. Press F10 to access additional parameters.
e. Specify the new host name or IP address for the System 1 host name or address and System 2 host name or address prompts and press Enter.
Note: Many installations will have an autostart entry for the STRSVR command. Autostart entries must be reviewed for possible updates of a new system name or IP address. For more information, see Identifying the autostart job entry in the MIMIXSBS subsystem on page 191 and Changing the job description for an autostart job entry on page 191.
7. Start the MIMIXSBS subsystem and the port jobs on all systems. If you changed the host names or IP addresses, use the values specified in Step 6.
8. For all systems, ensure communications before continuing. Follow the steps in topic Verifying all communications links on page 195.
9. From the Work with System Definitions (WRKSYSDFN) display, type a 7 (Rename) next to the system whose definition is being renamed and press Enter.
10. The Rename System Definition (RNMSYSDFN) display appears. At the To system definition prompt, specify the new name for the system whose definition is being renamed and press Enter.
11. The Confirm Rename System Definition display appears. Press Enter.
12. From the MIMIX Intermediate Main Menu, select option 2 (Work with systems) and press Enter.
13. The Work with Systems display appears. Type a 9 (Start) next to the management system you want and press Enter.
14. The Start MIMIX Managers (STRMMXMGR) display appears. Do the following:
a. At the Manager prompt, specify *ALL.
b. Press F10 to access additional parameters.
c. At the Reset configuration prompt, specify *YES.
d. Press Enter.
15. The Work with Systems display appears. For each network system, do the following:
a. Type a 9 (Start) next to each network system you want and press Enter.
b. The Start MIMIX Managers (STRMMXMGR) display appears. Press Enter. Wait for the MIMIX managers to start before continuing.
16. From the Work with Systems display, select option 8 (Work with data groups) on the system whose definitions you have renamed and press Enter.
17. For each data group listed, do the following:
a. From the Work with Data Groups display, select option 9 (Start DG) and press Enter.
b. The Start Data Group (STRDG) display appears. Press F10 to display additional parameters.
c. Type the receiver names and sequence #s, adding 1 to the sequence #s, that were recorded in Step 5b for both database and object. Press Enter.
18. From the Work with Systems display, select option 8 (Work with data groups) on the system whose definition you have renamed and ensure all data groups are active. You should see the letter A, highlighted blue, in the database source column. Refer to the Using MIMIX book for more information.
19. Press F3 to return to the Work with Systems display.
20. From the Work with Systems display, select option 8 (Work with data groups) on the management system and press Enter.
21. From the Work with Data Groups display, select option 9 (Start DG) for data groups (highlighted red) that are not active and press Enter.
22. The Start Data Group (STRDG) display appears. Press Enter. Additional parameters are displayed. Press Enter again to start the data groups.
23. The Work with Data Groups display appears. Ensure all data groups are active. You should see the letter A, highlighted blue, in the database source column. Refer to the Using MIMIX book for more information. Press F5 to refresh the data.
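The rename itself (steps 9 through 14) corresponds to the Rename System Definition (RNMSYSDFN) and Start MIMIX Managers (STRMMXMGR) commands named above; a sketch, assuming an old name of SYSA, a new name of SYSB, and the parameter keywords shown, which are inferred from the prompt text rather than confirmed here:

RNMSYSDFN SYSDFN(SYSA) TOSYSDFN(SYSB)
STRMMXMGR MGR(*ALL) RESETCFG(*YES)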
9. Press F12 to return to the MIMIX Configuration Menu.
10. From the MIMIX Configuration Menu, select option 4 (Work with data group definitions) and press Enter.
11. From the Work with DG Definitions menu, type a 2 (Change) next to the data group name whose transfer definition needs to be changed and press Enter.
12. From the Change Data Group Definition display, specify the new name for the transfer definition and press Enter until the Work with DG Definitions display appears.
13. Press F12 to return to the MIMIX Configuration Menu.
14. From the MIMIX Configuration Menu, select option 8 (Work with remote journal links) and press Enter.
15. From the Work with RJ Links menu, press F11 to display the transfer definitions.
16. Type a 2 (Change) next to the RJ link where you changed the transfer definition and press Enter.
17. From the Change Remote Journal Link display, specify the new name for the transfer definition and press Enter.
f. Press F12 to return to the MIMIX Configuration Menu.
3. From the MIMIX Configuration Menu, select option 3 (Work with journal definitions) and press Enter.
4. From the Work with Journal Definitions menu, type a 7 (Rename) next to the journal definition names you want to rename and press Enter.
5. The Rename Journal Definition display for the definition you selected appears. At the To journal definition prompts, specify the values you want for the new name.
a. If the journal name is *JRNDFN, ensure that there are no journal receivers in the specified library whose names start with the journal receiver prefix. See Building the journaling environment on page 219 for more information.
6. Press Enter. The Work with Journal Definitions display appears.
7. If using remote journaling, do the following to change the corresponding definition for the remote journal. Otherwise, continue with Step 8:
a. Type a 2 (Change) next to the corresponding remote journal definition name you changed and press Enter.
b. Specify the values entered in Step 5 and press Enter.
8. From the Work with Journal Definitions menu, type a 14 (Build) next to the journal definition names you changed and press F4.
9. The Build Journaling Environment display appears. At the Source for values prompt, specify *JRNDFN.
10. Press Enter. You should see a message that indicates the journal environment was created.
11. Press F12 to return to the MIMIX Configuration Menu. From the MIMIX Configuration Menu, select option 4 (Work with data group definitions) and press Enter.
12. From the Work with DG Definitions menu, type a 2 (Change) next to the data group name that uses the journal definition you changed and press Enter.
13. Press F10 to access additional parameters.
14. From the Change Data Group Definition display, specify the new name for the System 1 journal definition and System 2 journal definition prompts and press Enter twice.
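For reference, steps 4 through 10 amount to a rename followed by a rebuild of the journaling environment; a sketch that assumes the command names RNMJRNDFN and BLDJRNENV (inferred from the display titles, not confirmed here) and assumes the keywords shown, for a journal definition OLDDFN on system SYSA renamed to NEWDFN:

RNMJRNDFN JRNDFN(OLDDFN SYSA) TOJRNDFN(NEWDFN)
BLDJRNENV JRNDFN(NEWDFN SYSA) SOURCE(*JRNDFN)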
procedure Ending a data group in a controlled manner in the Using MIMIX book.
2. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu) and press Enter.
3. From the MIMIX Configuration Menu, select option 4 (Work with data group definitions) and press Enter.
4. From the Work with DG Definitions menu, type a 7 (Rename) next to the data group name you want to rename and press Enter.
5. From the Rename Data Group Definition display, specify the new name for the data group definition and press Enter.
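As with the other definition types, the rename can be sketched as a single command; the command name RNMDGDFN and the keywords shown are assumptions inferred from the display title, for a data group OLDDG between systems SYSA and SYSB renamed to NEWDG:

RNMDGDFN DGDFN(OLDDG SYSA SYSB) TODGDFN(NEWDG)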
Chapter 12
The appendix Supported object types for system journal replication on page 549 lists i5/OS object types and indicates whether each object type is replicated by MIMIX.
When you configure MIMIX, you can create data group object entries by adding individual object entries or by using the custom load function for library-based objects. The custom load function can simplify creating data group entries. This function generates a list of objects that match your specified criteria, from which you can selectively create data group object entries. For example, if you want to replicate all but a few of the data areas in a specific library, you could use the Add Data Group Object Entry (ADDDGOBJE) command to create a single data group object entry that includes all data areas in the library. Then, using the same object selection criteria with the custom load function, you can select from a list of data areas in the library to create exclude entries for the objects you do not want replicated. Once you have created data group object entries, you can tailor them to meet your requirements. You can also use the #DGFE audit or the Check Data Group File Entries (CHKDGFE) command to ensure that the correct file entries exist for the object entries configured for the specified data group.
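For example, a single include entry for all data areas in a library could be created as follows; a minimal sketch, assuming a data group named DGDFN1, a library named MYLIB, and the keyword names shown (inferred from the prompt text in the procedures that follow, not confirmed here):

ADDDGOBJE DGDFN(DGDFN1) LIB1(MYLIB) OBJ1(*ALL) OBJTYPE(*DTAARA) PRCTYPE(*INCLD)

After the entries exist, a check such as CHKDGFE DGDFN(DGDFN1) can be used to verify that the corresponding file entries are in place.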
4. The Load DG Object Entries (LODDGOBJE) display appears. Do the following to specify the selection criteria:
a. Identify the library and objects to be considered. Specify values for the System 1 library and System 1 object prompts.
b. If necessary, specify values for the Object type, Attribute, System 2 library, and System 2 object prompts.
c. At the Process type prompt, specify whether resulting data group object entries should include or exclude the identified objects.
d. Specify appropriate values for the Cooperate with database and Cooperating object types prompts. To ensure that journaled files, data areas, and data queues will be replicated from the user journal, you must specify the object types.
e. Ensure that the remaining prompts contain the values you want for the data group object entries that will be created. Press Page Down to see all of the prompts.
5. To specify file entry options that will override those set in the data group definition, do the following:
a. Press F9 (All parameters).
b. Press Page Down until you locate the File entry options prompt.
c. Specify the values you need on the elements of the File entry options prompt.
6. To generate the list of objects, press Enter.
Note: If you skipped Step 5, you may need to press Enter multiple times.
7. The Load DG Object Entries display appears with the list of objects that matched your selection criteria. Either type a 1 (Select) next to the objects you want or press F21 (Select all). Then press Enter.
8. If necessary, you can use Adding or changing a data group object entry on page 268 to customize values for any of the data group object entries.
Synchronize the objects identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted. This includes after the nightly restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an audit runs.
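The same load can be started from a command line; a sketch, assuming data group DGDFN1 and library MYLIB, with the keyword names inferred from the display prompts (an assumption):

LODDGOBJE DGDFN(DGDFN1) LIB1(MYLIB) OBJ1(*ALL)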
2. From the Work with Data Groups display, type a 20 (Object entries) next to the data group you want and press Enter.
3. The Work with DG Object Entries display appears. Do one of the following:
To add a new entry, type a 1 (Add) next to the blank line at the top of the list and press Enter.
To change an existing entry, type a 2 (Change) next to the entry you want and press Enter.
4. The appropriate Data Group Object Entry display appears. When adding an entry, you must specify values for the System 1 library and System 1 object prompts.
Note: When changing an existing object entry to enable replication of data areas or data queues from a user journal (COOPDB(*YES)), make sure that you specify only the objects you want to enable for the System 1 object prompt. Otherwise, all objects in the library specified for System 1 library will be enabled.
5. If necessary, specify a value for the Object type prompt.
6. Press F9 (All parameters).
7. If necessary, specify values for the Attribute, System 2 library, System 2 object, and Object auditing value prompts.
8. At the Process type prompt, specify whether resulting data group object entries should include (*INCLD) or exclude (*EXCLD) the identified objects.
9. Specify appropriate values for the Cooperate with database and Cooperating object types prompts.
Note: To ensure that journaled files, data areas, or data queues will be replicated from the user journal, you must specify *YES for Cooperate with database and you must specify the appropriate object types for Cooperating object types.
10. Ensure that the remaining prompts contain the values you want for the data group object entries that will be created. Press Page Down to see more prompts.
11. To specify file entry options that will override those set in the data group definition, do the following:
a. If necessary, press Page Down to locate the File entry options prompt.
b. Specify the values you need on the elements of the File entry options prompt.
12. Press Enter.
13. For object entries configured for user journal replication of data areas or data queues, return to Step 7 in procedure Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling on page 154 to complete additional steps necessary to complete the conversion.
Synchronize the objects identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted. This includes after the nightly
restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an audit runs.
When loading from a data group, you can also specify the source from which file entry options are loaded, and override elements if needed. The Default FE options source (FEOPTSRC) parameter determines whether file entry options are loaded from the specified configuration source (*CFGSRC) or from the data group definition (*DGDFT). Any file entry option with a value of *DFT is loaded from the specified source. Any values specified on elements of the File entry options (FEOPT)
parameter override the values loaded from the FEOPTSRC parameter for all data group file entries created by a load request. Regardless of where the configuration source and file entry option source are located, the Load Data Group File Entries (LODDGFE) command must be used from a system designated as a management system. Note: The Load Data Group File Entries (LODDGFE) command performs a journal verification check on the file entries using the Verify Journal File Entries (VFYJRNFE) command. In order to accurately determine whether files are being journaled to the target system, you should first perform a save and restore operation to synchronize the files to the target system before loading the data group file entries.
Since no value was specified for FROMDGDFN, its default value *DGDFN causes the file entries to load from existing object entries for DGDFN1. The value *SYS2 for LODSYS causes this example configuration to load from its target system. Entries are added (UPDOPT(*ADD)) to the existing configuration. Since all files identified by object entries are wanted, SELECT(*NO) bypasses the selection list. The data group file entries created for DGDFN1 have file entry options which match those found in the object entries because no values were specified for the FEOPTSRC or FEOPT parameters.
Example - Load from another data group with mixed sources for file entry options: The file entries for data group DGDFN1 are created by loading from the object entries for data group DGDFN2, with file entry options loaded from multiple sources.
LODDGFE DGDFN(DGDFN1) CFGSRC(*DGOBJE) FROMDGDFN(DGDFN2) FEOPT(*CFGSRC *DGDFT *CFGSRC *DGDFT)
The data group file entries created for DGDFN1 are loaded from the configuration information in the object entries for DGDFN2, with file entry options coming from multiple sources. Because the command specified the first element (Journal image) and third element (Replication type) of the file entry options (FEOPT) as *CFGSRC, the resulting file entries have the same values for those elements as the data group object entries for DGDFN2. Because the command specified the second element (Omit open/close entries) and the fourth element (Lock member during apply) as *DGDFT, these elements are loaded from the data group definition. The rest of the file entry options are loaded from the configuration source (object entries for DGDFN2).
Procedure: Use this procedure to create data group file entries from the object entries defined to a data group.
Note: The data group must be ended before using this procedure. Configuration changes resulting from loading file entries are not effective until the data group is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears. The name of the data group for which you are creating file entries and the Configuration source value of *DGOBJE are pre-selected. Press Enter.
5. The following prompts appear on the display. Specify appropriate values.
a. From data group definition - To load from entries defined to a different data group, specify the three-part name of the data group.
b. Load from system - Ensure that the value specified is appropriate. For most environments, files should be loaded from the source system of the data group you are loading. (This value should be the same as the value specified for Data source in the data group definition.)
c. Update option - If necessary, specify the value you want.
d. Default FE options source - Specify the source for loading values for default file entry options. Each element in the file entry options is loaded from the specified location unless you explicitly specify a different value for an element in Step 6.
6. Optionally, you can specify a file entry option value to override those loaded from the configuration source. Do the following:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.
7. Press Enter. The LODDGFE Entry Selection List display appears with a list of the files identified by the specified configuration source.
8. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
9. To create the file entries, press Enter. All selected files identified from the configuration source are represented in the resulting file entries. Each generated file entry includes all members of the file. If necessary, you can use Changing a data group file entry on page 279 to customize values for any of the data group file entries.
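The procedure above corresponds to running the load from a command line; a minimal sketch, assuming a data group named DGDFN1 and accepting the defaults for the remaining parameters:

LODDGFE DGDFN(DGDFN1) CFGSRC(*DGOBJE)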
Since the FEOPT parameter was not specified, the resulting data group file entries are created with a value of *DFT for all of the file entry options. Because there is no MIMIX configuration source specified, the value *DFT results in the file entry options specified in the data group definition being used.
Procedure: Use this procedure to create data group file entries from a library on either the source system or the target system.
Note: The data group must be ended before using this procedure. Configuration changes resulting from loading file entries are not effective until the data group is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears with the name of the data group for which you are creating file entries. At the Configuration source prompt, specify *NONE and press Enter.
5. Identify the location of the files to be used for loading. For common configurations, you can accomplish this by specifying a library name at the System 1 library prompt and accepting the default values for the System 2 library, Load from system, and File prompts. If you are using system 2 as the data source for replication or if you want the library name to be different on each system, then you need to modify these values to appropriately reflect your data group defaults.
6. If necessary, specify the values you want for the following:
Update option prompt
Add entry for each member prompt
7. The value of the Default FE options source prompt is ignored when loading from a library. To optionally specify file entry options, do the following:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.
8. Press Enter. The LODDGFE Entry Selection List display appears with a list of the files identified by the specified configuration source.
9. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
10. To create the file entries, press Enter. All selected files identified from the configuration source are represented in the resulting file entries. If necessary, you can use Changing a data group file entry on page 279 to customize values for any of the data group file entries.
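From a command line, the equivalent load from a library can be sketched as follows; LIB1 as the keyword for the System 1 library prompt is an assumption:

LODDGFE DGDFN(DGDFN1) CFGSRC(*NONE) LIB1(MYLIB)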
Since the FEOPT parameter was not specified, the resulting data group file entries are created with a value of *DFT for all of the file entry options. Because there is no MIMIX configuration source specified, the value *DFT results in the file entry options specified in the data group definition being used.
Procedure: Use this procedure to create data group file entries from the journal associated with a journal definition specified for the data group.
Note: The data group must be ended before using this procedure. Configuration changes resulting from loading file entries are not effective until the data group is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears with the name of the data group for which you are creating file entries. At the Configuration source prompt, specify *JRNDFN and press Enter. File and library names on the source and target systems are set to the same names for the load operation.
5. At the Load from system prompt, ensure that the value specified represents the appropriate system. The journal definition associated with the specified system is used for loading. For common configurations, the value that corresponds to the source system of the data group you are loading should be used. (This value should match the value specified for Data source in the data group definition.)
6. If necessary, specify the value you want for the Update option prompt.
7. The value of the Default FE options source prompt is ignored when loading from a journal definition. To optionally specify file entry options, do the following:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.
8. Press Enter. The LODDGFE Entry Selection List display appears with a list of the files identified by the specified configuration source.
9. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
10. To create the file entries, press Enter. All selected files identified from the configuration source are represented in the resulting file entries. Each generated file entry includes all members of the file. If necessary, you can use Changing a data group file entry on page 279 to customize values for any of the data group file entries.
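The journal-based load corresponds to the following command form; a sketch, assuming data group DGDFN1 loading from its source system (the LODSYS value shown is an assumption for a data group whose data source is system 1):

LODDGFE DGDFN(DGDFN1) CFGSRC(*JRNDFN) LODSYS(*SYS1)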
Since the FEOPT parameter was not specified, the resulting data group file entries for DGDFN1 are created with a value of *DFT for all of the file entry options. Because the configuration source is another data group, the value *DFT results in file entry options which match those specified in DGDFN2.
Example 2: The data group file entries are created by loading from the file entries for another data group, DGDFN2, in another installation, MXTEST.
LODDGFE DGDFN(DGDFN1) CFGSRC(*DGFE) PRDLIB(MXTEST) FROMDGDFN(DGDFN2)
Since the FEOPT parameter was not specified, the resulting data group file entries for DGDFN1 are created with a value of *DFT for all of the file entry options. Because the configuration source is another data group in another installation, the value *DFT results in file entry options which match those specified in DGDFN2 in installation MXTEST.
Procedure: Use this procedure to create data group file entries from the file entries defined to another data group.
Note: The data group must be ended before using this procedure. Configuration changes resulting from loading file entries are not effective until the data group is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears with the name of the data group for which you are creating file entries. At the Configuration source prompt, specify *DGFE and press Enter.
5. At the Production library prompt, either accept *CURRENT or specify the name of the installation library in which the data group you are copying is located.
6. At the From data group definition prompts, specify the three-part name of the data group from which you are loading.
7. If necessary, specify the value you want for the Update option prompt.
8. At the Default FE options source prompt, specify the source for loading values for default file entry options. Each element in the file entry options is loaded from the specified location unless you explicitly specify a different value for an element in Step 9.
9. If necessary, do the following to specify a file entry option value to override those loaded from the configuration source:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.
10. Press Enter. The LODDGFE Entry Selection List display appears with a list of the files identified by the specified configuration source.
11. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
12. To create the file entries, press Enter. All selected files identified from the configuration source are represented in the resulting file entries. Each generated file entry includes all members of the file. If necessary, you can use Changing a data group file entry on page 279 to customize values for any of the data group file entries.
Updated for 5.0.08.00.
at the top of the list and press Enter.
4. The Add Data Group File Entry (ADDDGFE) display appears. At the System 1 File and Library prompts, specify the file that you want to replicate.
5. By default, all members in the file are replicated. If you want to replicate only a specific member, specify its name at the Member prompt.
Note: All replicated members of a file must be in the same database apply session. For data groups configured for multiple apply sessions, specify the apply session on the File entry options prompt. See Step 7.
6. Verify that the values of the remaining prompts on the display are what you want. If necessary, change the values as needed.
Notes: If you change the value of the Dynamically update prompt to *NO, you need to end and restart the data group before the addition is recognized. If you change the value of the Start journaling of file prompt to *NO and the file is not already journaled, MIMIX will not be able to replicate changes until you start journaling the file.
7. Optionally, you can specify file entry options that will override those defined for the data group. Do the following:
a. Press F10 (Additional parameters), then press Page Down.
b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.
8. Press Enter to create the data group file entry.
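The same entry can be added with the Add Data Group File Entry (ADDDGFE) command named in step 4; a minimal sketch, assuming data group DGDFN1 and a file MYFILE in library MYLIB (the FILE1 keyword and its qualified form are assumptions based on the System 1 File and Library prompts):

ADDDGFE DGDFN(DGDFN1) FILE1(MYLIB/MYFILE)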
All replicated members of a file must be in the same database apply session. For data groups configured for multiple apply sessions, specify the apply session on the File entry options prompt.
5. To accept your changes, press Enter. The replication processes do not recognize the change until the data group has been ended and restarted.
From the management system, do the following to add a new data group IFS entry or change an existing IFS entry:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 22 (IFS entries) next to the data group you want and press Enter.
3. The Work with Data Group IFS Entries display appears. Do one of the following:
To add a new entry, type a 1 (Add) next to the blank line at the top of the display and press Enter.
To change an existing entry, type a 2 (Change) next to the entry you want and press Enter.
4. The appropriate Data Group IFS Entry display appears. When adding an entry, you must specify a value for the System 1 object prompt. Notes: The object name must begin with the '/' character and can be up to 512 characters in total length. The object name can be a simple name, a name that is qualified with the name of the directory in which the object is located, or a generic name that contains one or more characters followed by an asterisk (*), such as /ABC*. Any component of the object name contained between two '/' characters cannot exceed 255 characters in length. All objects in the specified path are selected. When changing an existing IFS entry to enable replication from a user journal (COOPDB(*YES)), make sure that you specify only the IFS objects you want to enable.
5. If necessary, specify values for the System 2 object and Object auditing value prompts. 6. At the Process type prompt, specify whether resulting data group object entries should include (*INCLD) or exclude (*EXCLD) the identified objects. 7. Specify the appropriate value for the Cooperate with database prompt. To ensure that journaled IFS objects can be replicated from the user journal, specify *YES. To replicate from the system journal, specify *NO. 8. If necessary, specify a value for the Object retrieval delay prompt. 9. Ensure that the remaining prompts contain the values you want for the data group object entries that will be created. Press Page Down to see more prompts. 10. Press Enter to create the IFS entry. 11. For IFS entries configured for user journal replication, return to Step 7 in procedure Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling on page 154 to complete the remaining steps of the conversion. Synchronize the objects identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted. This includes after the nightly restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an audit runs.
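A command-line sketch of the entry this procedure creates follows. The command name ADDDGIFSE, the data group name, and the path are assumptions based on the display names above; the OBJ1, PRCTYPE, and COOPDB keywords mirror the prompts described in this procedure:

    ADDDGIFSE DGDFN(APP1 SYSTEMA SYSTEMB) OBJ1('/home/payroll/*') PRCTYPE(*INCLD) COOPDB(*YES)

As with the display, the object name must begin with '/' and a generic name such as '/home/payroll/*' selects all objects in the specified path.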
9. You should receive message LVI3E2B indicating the number of tracking entries loaded for the data group. Note: The command used in this procedure does not start journaling on the tracking entries. Start journaling for the tracking entries when indicated by your configuration checklist.
press F21 (Select all). Then press Enter. 7. If necessary, you can use Adding or changing a data group DLO entry on page 288 to customize values for any of the data group DLO entries. Synchronize the DLOs identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted. This includes after the nightly restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an audit runs.
4. If you are adding a new DLO entry, the Add Data Group DLO Entry display appears. Identify the library and objects to be considered. Specify values for the System 1 folder and System 1 document prompts. 5. Do the following: a. If necessary, specify values for the Owner, System 2 folder, System 2 object, and Object auditing value prompts. b. At the Process type prompt, specify whether resulting data group DLO entries should include or exclude the identified documents. c. If necessary, specify a value for the Object retrieval delay prompt. 6. Press Enter. Synchronize the DLOs identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted. This includes after the nightly restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an audit runs.
You can load all data group data area entries from a library or you can add individual data area entries. Once the data group data area entries are created, you can tailor them to meet your requirements by adding, changing, or deleting entries. You must define data group data area entries from the management system. The data area entries can be created from libraries on either system. If the system manager is configured and running, all created and changed data group data area entries are sent to the network systems automatically.
finished.
4. Specify the values you want at the prompts for System 1 data area and Library and System 2 data area and Library. 5. Press Enter to create the data area entry or accept the change.
Table 32. Values to specify for each type of data group entry.
5. The value *NO for the Replace definition prompt prevents you from replacing an existing entry in the definition to which you are copying. If you want to replace an existing entry, specify *YES. 6. To copy the entry, press Enter. 7. For file entries, end and restart the data group being copied.
3. For data group file entries, a display with additional prompts appears. Specify the values you want and press Enter. 4. A confirmation display appears with a list of entries to be deleted. To delete the entries, press Enter.
Chapter 13
For IFS objects, it is particularly important that you understand the ramifications of the value specified for the FORCE parameter. For more information, see Examples of changing an IFS object's auditing value on page 298. Procedure: To set the object auditing value for a data group, do the following on each system defined to the data group: 1. Type the command SETDGAUD and press F4 (Prompt). 2. The Set Data Group Auditing (SETDGAUD) display appears. Specify the name of the data group you want.
3. At the Object type prompt, specify the type of objects for which you want to set auditing values. 4. If you want to allow MIMIX to force a change to a configured value that is lower than the object's existing value, specify *YES for the Force audit value prompt. Note: This may affect the operation of your replicated applications. Lakeview recommends that you force auditing value changes only when you have specified *ALLIFS for the Object type. 5. Press Enter.
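For example, a hypothetical data group APP1 between SYSTEMA and SYSTEMB could have its configured auditing values applied to all IFS objects, forcing changes to lower values, with a command like the following; the OBJTYPE keyword is an assumption based on the Object type prompt:

    SETDGAUD DGDFN(APP1 SYSTEMA SYSTEMB) OBJTYPE(*ALLIFS) FORCE(*YES)

Remember to run the command on each system defined to the data group, as described in the procedure above.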
Simply ending and restarting the data group will not cause these configuration changes to be effective. Because the change is to a lower auditing level, the change must be forced with the SETDGAUD command. Similarly, running the SETDGAUD command with FORCE(*NO) does not change the auditing values for this scenario.
Table 34 shows the intermediate and final results as each data group IFS entry is processed by the force request.
Table 34. Intermediate audit values which occur during FORCE(*YES) processing for example 1. Existing value Auditing values while processing SETDGAUD FORCE(*YES) Changed by 1st entry Note 1 Note 1 Note 1 Note 1 Note 1 *CHANGE *CHANGE Changed by 2nd entry *CHANGE Changed by 3rd entry Note 2 *CHANGE Final results of FORCE(*YES) *CHANGE *CHANGE *ALL *CHANGE *CHANGE
Existing objects
Notes: 1. Because the first data group IFS entry excludes objects from replication, object auditing processing does not apply. 2. This object's auditing value is evaluated when the third data group IFS entry is processed, but the entry does not cause the value to change. The existing value is the same as the configured value of the third entry at the time it is processed.
Example 2: Table 35 identifies a set of data group IFS entries and their configured auditing values. The entries are listed in the order in which they are processed by the SETDGAUD command. In this scenario there are multiple configured values.
Table 35. Example 2: configuration of data group IFS entries

Order processed   Specified object   Object auditing value   Process type
1                 /DIR1/*            OBJAUD(*CHANGE)         PRCTYPE(*INCLD)
2                 /DIR1/DIR2/*       OBJAUD(*NONE)           PRCTYPE(*INCLD)
3                 /DIR1/STMF         OBJAUD(*ALL)            PRCTYPE(*INCLD)
For this scenario, running the SETDGAUD command with FORCE(*NO) does not change the auditing values on any existing IFS objects because the configured values from the data group IFS entries are the same or lower than the existing values. Running the command with FORCE(*YES) does change the existing objects' values. Table 36 shows the intermediate values as each entry is processed by the force request and the final results of the change. Data group IFS entry #3 in Table 35 prevents directory /DIR1 from having an auditing value of *CHANGE or *NONE because it is the last entry processed and it is the most specific entry.
Table 36. Intermediate audit values which occur during FORCE(*YES) processing for example 2. A dash indicates the entry did not change the object's value.

Existing object     Changed by    Changed by    Changed by    Final results of
                    1st entry     2nd entry     3rd entry     FORCE(*YES)
/DIR1               *CHANGE       -             *ALL          *ALL
/DIR1/STMF          *CHANGE       -             *ALL          *ALL
/DIR1/STMF2         *CHANGE       -             -             *CHANGE
/DIR1/DIR2          *CHANGE       *NONE         -             *NONE
/DIR1/DIR2/STMF     *CHANGE       *NONE         -             *NONE
Example 3: This scenario illustrates why you may need to force the configured values to take effect after changing the existing data group IFS entries from *ALL to lower values. Table 37 identifies a set of data group IFS entries and their configured auditing values. The entries are listed in the order in which they are processed by the SETDGAUD command.
Table 37. Example 3: configuration of data group IFS entries

Order processed   Specified object   Object auditing value   Process type
1                 /DIR1/*            OBJAUD(*CHANGE)         PRCTYPE(*INCLD)
2                 /DIR1/DIR2/*       OBJAUD(*NONE)           PRCTYPE(*INCLD)
3                 /DIR1/STMF         OBJAUD(*NONE)           PRCTYPE(*INCLD)
For this scenario, running the SETDGAUD command with FORCE(*NO) does not change the auditing values on any existing IFS objects because the configured values from the data group IFS entries are lower than the existing values. In this scenario, SETDGAUD FORCE(*YES) must be run to have the configured auditing values take effect. Table 38 shows the intermediate values as each entry is processed by the force request and the final results of the change.
Table 38. Intermediate audit values which occur during FORCE(*YES) processing for example 3. A dash indicates the entry did not change the object's value.

Existing object     Existing   Changed by    Changed by    Changed by    Final results of
                    value      1st entry     2nd entry     3rd entry     FORCE(*YES)
/DIR1               *ALL       *CHANGE       -             *NONE         *NONE
/DIR1/STMF          *ALL       *CHANGE       -             *NONE         *NONE
/DIR1/STMF2         *ALL       *CHANGE       -             -             *CHANGE
/DIR1/DIR2          *ALL       *CHANGE       *NONE         -             *NONE
/DIR1/DIR2/STMF     *ALL       *CHANGE       *NONE         -             *NONE
Example 4: This example begins with the same set of data group IFS entries used in example 3 (Table 37) and uses the results of the forced change in example 3 as the auditing values for the existing objects in Table 39. Table 39 shows how running the SETDGAUD command with FORCE(*NO) causes changes to auditing values. This scenario is quite possible as a result of a normal STRDG request. Complex data group IFS entries and multiple configured values cause these potentially undesirable results. Note: Any addition or change to the data group IFS entries can cause these results to occur.
Table 39. Example 4: comparison of objects' actual auditing values

Existing object     Existing   After SETDGAUD   After SETDGAUD
                    value      FORCE(*NO)       FORCE(*YES)
/DIR1               *NONE      *CHANGE          *NONE
/DIR1/STMF          *NONE      *CHANGE          *NONE
/DIR1/STMF2         *CHANGE    *CHANGE          *CHANGE
/DIR1/DIR2          *NONE      *CHANGE          *NONE
/DIR1/DIR2/STMF     *NONE      *CHANGE          *NONE
There is no way to maintain the existing values in Table 39 without ensuring that a forced change occurs every time SETDGAUD is run, which may be undesirable. In this example, the next time data groups are started, the objects' auditing values will be set to those shown in Table 39 for FORCE(*NO). Any addition or change to the data group IFS entries can potentially cause similar results the next time the data group is started. To avoid this situation, we recommend that you configure a consistent auditing value of *CHANGE across data group IFS entries which identify objects with common parent directories.
Example 5: This scenario illustrates the results of the SETDGAUD command when an object's auditing value is determined by the user profile which accesses the object (value *USRPRF). Table 40 shows the configured data group IFS entry.
Table 40. Example 5: configuration of data group IFS entries

Order processed   Specified object   Object auditing value   Process type
1                 /DIR1/STMF         OBJAUD(*NONE)           PRCTYPE(*INCLD)
Table 41 compares the results of running the SETDGAUD command with FORCE(*NO) and FORCE(*YES). Running the command with FORCE(*NO) does not change the value. The value *USRPRF is not in the range of valid values for MIMIX. Therefore, an object with an auditing value of *USRPRF is not considered for change. Running the command with FORCE(*YES) does force a change because the existing value and the configured value are not equal.
Table 41. Example 5: comparison of the object's actual auditing values

Existing object   Existing value   After SETDGAUD FORCE(*NO)   After SETDGAUD FORCE(*YES)
/DIR1/STMF        *USRPRF          *USRPRF                     *NONE
To submit the job for batch processing, accept *YES. Press Enter and continue with the next step.
9. At the Job description prompts, specify the name and library of the job description used to submit the batch request. Accept MXAUDIT to submit the request using Lakeview's default job description, MXAUDIT. 10. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name. 11. To start the data group file entry check, press Enter.
3. You have two options for changing your environment to enable MIMIX RJ support to function. Each option has security implications. You must decide which option is best for your environment. The options are: Option 1. Enable MIMIXOWN user profile for DDM environment on page 306. MIMIX must be installed and transfer definitions must exist before you can make the necessary changes. For new installations, this should be automatically configured for you. Option 2. Allow user profiles without passwords on page 307. You can use this option before or after MIMIX is installed. However, this option should be performed before configuring MIMIX RJ support.
c. If you selected multiple transfer definitions, press Enter to advance to the next selection and record its RDB value. Ensure that you record the values for all transfer definitions you selected. Note: If the RDB value was generated by MIMIX, it is in the form of the characters MX followed by the system 1 name, the system 2 name, and the name of the transfer definition, up to 18 characters in total.
2. On the source system, change the MIMIXOWN user profile to have a password and to prevent signing on with the profile. To do this, enter the following command:
    CHGUSRPRF USRPRF(MIMIXOWN) PASSWORD(user-defined-password) INLMNU(*SIGNOFF)
Note: The password is case sensitive and must be the same on all systems in the MIMIX network. If the password does not match on all systems, some MIMIX functions will fail with security error message LVE0127.
3. You need a server authentication entry for the MIMIXOWN user profile for each RDB entry you recorded in Step 1. To add a server authentication entry, type the following command, using the password you specified in Step 2 and the RDB value from Step 1. Then press Enter.
    ADDSVRAUTE USRPRF(MIMIXOWN) SERVER(recorded-RDB-value) PASSWORD(user-defined-password)
4. Repeat Step 2 and Step 3 on the target system.
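Assuming a recorded RDB value of MXSYSASYSBDEF1 and a chosen password (both hypothetical), Steps 2 and 3 amount to:

    CHGUSRPRF USRPRF(MIMIXOWN) PASSWORD(MYPASSW0RD) INLMNU(*SIGNOFF)
    ADDSVRAUTE USRPRF(MIMIXOWN) SERVER(MXSYSASYSBDEF1) PASSWORD(MYPASSW0RD)

Note that on IBM i the ADDSVRAUTE command can store the password only when the QRETSVRSEC system value is set to 1 (retain server security data).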
5. Press F10 (View RJ links). Consider the following and contact your MIMIX administrator before taking action that will end the RJ link or remove the remote journaling environment. When *NO appears in the Use RJ Link column, the data group will not be affected by a request to end the RJ link or to remove the remote journaling environment. Note: If you allow applications other than MIMIX to use the RJ link, they will be affected if you end the RJ link or remove the remote journaling environment. When *YES appears in the Use RJ Link column, the data group may be affected by a request to end the RJ link. If you use the procedure for ending a remote journal link independently in the Using MIMIX book, ensure that any data groups that use the RJ link are inactive before ending the RJ link.
MIMIX system-level jobs affected by the Job restart time value specified in a system definition are: system manager (SYSMGR), system manager receive (SYSMGRRCV), and journal manager (JRNMGR). MIMIX data group-level jobs affected by the Job restart time value specified in a data group definition are: object send (OBJSND), object receive (OBJRCV), database send (DBSND), database receive (DBRCV), database reader (DBRDR), object retrieve (OBJRTV), container send (CNRSND), container receive (CNRRCV), status send (STSSND), status receive (STSRCV), and object apply (OBJAPY). Also, the role of the system on which you change the restart time affects the results. For system definitions, the value you specify for the restart time and the role of the system (management or network) determines which MIMIX system-level jobs will restart and when. For data group definitions, the value you specify for the restart time and the role of the system (source or target) determines which data group-level jobs will restart and when. Time zone differences between systems also influence the results you obtain. MIMIX system-level jobs restart when they detect that the time specified in the system definition has passed.
The system manager jobs are a pair of jobs that run between a network system and the management system. The management and network systems both have journal manager jobs, but the jobs operate independently. The job restart time specified in the management system's system definition determines when to restart the journal manager on the management system. The job restart time specified in the network system's system definition determines when to restart the journal manager job on the network system, when to restart the system manager jobs on both systems, and also affects when cleanup jobs on both systems are submitted. Table 42 shows how the role of the system affects the results of the specified job restart time.
Table 42. Effect of the system's role on changing the job restart time in a system definition.

Role: Management system
  System managers, cleanup jobs, collector services - The specified value is not used to determine restart time. Restart is determined by the value specified for the network system.
  Journal managers - Time specified: Job on the management system restarts at the time specified. *NONE: Job on the management system is not restarted.

Role: Network system
  System managers - Time specified: Jobs on both systems restart when the time on the management system reaches the time specified. *NONE: Jobs are not restarted on either system.
  Cleanup jobs - Time specified: Jobs are submitted on both systems by the system manager jobs after they restart. *NONE: Jobs are submitted on both systems when midnight occurs on the management system.
  Journal managers - Time specified: Job on the network system restarts at the time specified. *NONE: Job on the network system is not restarted.
For MIMIX data group-level jobs, a delay of 2 to 35 minutes from the specified time is built into the job restart processing. The actual delay is unique to each job. By distributing the jobs within this range, the load on systems and communications is more evenly distributed, reducing bottlenecks caused by many jobs simultaneously attempting to end, start, and establish communications. MIMIX determines the actual restart time for the object apply (OBJAPY) jobs based on the timestamp of the system on which the jobs run. For all other affected jobs, MIMIX determines the actual start time for object or database jobs based on the timestamp of the system on which the OBJSND or the DBSND job runs. Table 43 shows how these key jobs determine when the other jobs restart.
Table 43. In each row, the job marked (*) determines the restart time for all jobs in the row.

Source system jobs                Target system jobs
Object send (OBJSND) (*)          Object receive (OBJRCV)
Object retrieve (OBJRTV)          Container receive (CNRRCV)
Container send (CNRSND)           Status send (STSSND)
Status receive (STSRCV)

Database send (DBSND) (*) (1)     Database receive (DBRCV) (1)
                                  Database reader (DBRDR) (1)

                                  Object apply (OBJAPY) (*)

Note: 1. When MIMIX is configured for remote journaling, the DBSND and DBRCV jobs are replaced by the DBRDR job. The DBRDR job restarts when the specified time occurs on the target system.
For more information about MIMIX jobs see Replication job and supporting job names on page 47.
Because the management system's system definition uses the default value of midnight, the journal manager on the management system restarts when midnight occurs on that system. Example 3: Friday afternoon you change system definition HONGKONG to have a job restart time value of *NONE. HONGKONG is the management system. LONDON is the associated network system and its system definition uses the default setting 000000 (midnight). You end and restart the MIMIX jobs to make the change effective. The journal manager on HONGKONG is no longer restarted. At midnight (00:00 Saturday and thereafter) HONGKONG time, the system manager jobs on both systems restart and submit cleanup jobs on both systems. In your runbook you document the new procedures to manually restart the journal manager on HONGKONG. Example 4: Wednesday evening you change the system definitions for LONDON and HONGKONG to both have a job restart time of *NONE. HONGKONG is the management system. You restart the MIMIX jobs to make the change effective. At midnight HONGKONG time, only the cleanup jobs on both systems are submitted. In your runbook you document the new procedures to manually restart the journal managers and system managers.
Example 5: You have a data group that operates between SYSTEMA and SYSTEMB, which are both in the same time zone. Both the system definitions and the data group definition use the default value 000000 (midnight) for the job restart time. For both systems, the MIMIX system-level jobs restart at midnight. The data group jobs on both systems restart between 2 and 35 minutes after midnight. Example 6: 10:30 Tuesday morning you change data group definition APP1 to have a job restart time value of 013500. The data group operates between SYSTEMA and SYSTEMB, which are both in the same time zone. Both system definitions use the default restart time of midnight. MIMIX jobs remain up and running. At midnight, the system-level jobs on both systems restart using the values from the preexisting configuration; the data group-level jobs restart on both systems between 0:02 and 0:35 a.m. On Wednesday and thereafter, APP1 data group-level jobs restart between 1:37 and 2:10 a.m. while the MIMIX system-level jobs and jobs for other data groups restart at midnight. Example 7: You have a data group that operates between SYSTEMA and SYSTEMB which are both in the same time zone and are defined as the values of System 1 and System 2, respectively. The data group definition specifies a job restart time value of *SYSDFN2. The system definition for SYSTEMA specifies the default job restart time of 000000 (midnight). SYSTEMB is the management system and its system definition specifies the value *NONE for the job restart time. The journal manager on SYSTEMB does not restart and the data group jobs do not restart on either system because of the *NONE value specified for SYSTEMB. The journal manager on SYSTEMA restarts at midnight. System manager jobs on both systems restart and submit cleanup jobs at midnight as a result of the value in the network system and the fact that the systems are in the same time zone. 
Example 8A: You have a data group defined between CHICAGO and NEWYORK (System 1 and System 2, respectively) and the data group's job restart time is set to 030000 (3 a.m.). CHICAGO is the source system as well as a network system; its system definition uses the default job restart time of midnight. NEWYORK is the target system as well as the management system; its system definition uses a job restart time of 020000 (2 a.m.). There is a one hour time difference between the two systems; said another way, NEWYORK is an hour ahead of CHICAGO. Figure 17 shows the effect of the time zone difference on this configuration. The journal manager on CHICAGO restarts at midnight Chicago time and the journal manager on NEWYORK restarts at 2 a.m. New York time. The system manager jobs on both systems restart when the management system (NEWYORK) reaches the restart time specified for the network system (CHICAGO). The cleanup jobs are submitted by the system manager jobs when they restart. With the exception of the object apply jobs (OBJAPY), the data group jobs restart during the same 2 to 35 minute timeframe based on Chicago time (between 2 and 35 minutes after 3 a.m. in Chicago; after 4 a.m. in New York). Because the OBJAPY jobs are based on the time on the target system, which is an hour ahead of the source
system time used for the other jobs, the OBJAPY jobs restart between 3:02 and 3:35 a.m. New York time.
Figure 17. Results of Example 8A. This is configured as a standard MIMIX environment.
Example 8B: This scenario is the same as example 8A with one exception. In this scenario, the MIMIX environment is configured to use MIMIX Remote Journal support. Figure 18 shows that the database reader (DBRDR) job restarts based on the time on the target system. Because the database send (DBSND) and database receive (DBRCV) jobs are not used in a remote journaling environment, those jobs do not restart.
Figure 18. Results of example 8B. This environment is configured to use MIMIX Remote Journal support.
4. To accept the change, press Enter. The change has no effect on jobs that are currently running. The value for the Job restart time is retrieved from the system definition at the time the jobs are started. The change is effective the next time the jobs are started.
4. To accept the change, press Enter. Changes have no effect on jobs that are currently running. The value for the Job restart time is retrieved at the time the jobs are started. The change is effective the next time the jobs are started.
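As a sketch, a restart time like the one used in example 6 above (013500) could be set for a hypothetical data group APP1 by prompting the Change Data Group Definition command; the JOBRSTTIME keyword shown for the Job restart time prompt is illustrative, not verified syntax:

    CHGDGDFN DGDFN(APP1 SYSTEMA SYSTEMB) JOBRSTTIME(013500)

As noted above, the change has no effect on jobs that are currently running; it takes effect the next time the jobs are started.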
TABLE statement is automatically journaled if the library in which it is created contains a journal named QSQJRN. New *FILE, *DTAARA, *DTAQ objects - A new object is automatically journaled if it is created in a library that contains a QDFTJRN data area and the data area has enabled automatic journaling for the object type. The Journal at creation (JRNATCRT) parameter in the data group definition enables MIMIX to create the QDFTJRN data area and enable automatic journaling for an object type. When a data group is started, MIMIX may automatically create the QDFTJRN data area. If the data group configuration meets the requirements for MIMIX Dynamic Apply, MIMIX evaluates all data group entries for each object type to determine whether to create the QDFTJRN data area. MIMIX uses the data group entry with the most specific match to the object type and library that also specifies *ALL for its System 1 object (OBJ1) and Attribute (OBJATR) prompts. Note: MIMIX prevents the QDFTJRN data area from being created in the following libraries: QSYS*, QRECOVERY, QRCY*, QUSR*, QSPL*, QRPL*, QRCL*, QGPL, QTEMP, and SYSIB*. Automatic journaling of new *DTAARA or *DTAQ objects is only supported in IBM i V5R4 and higher. For example, if MIMIX finds only the following data group object entries for library MYLIB, it would use the first entry when determining whether to create the QDFTJRN data area because it is the most specific entry that also meets the OBJ1(*ALL) and OBJATR(*ALL) requirements. The second entry is not considered in the determination because its OBJ1 and OBJATR values do not meet these requirements.
    LIB1(MYLIB) OBJ1(*ALL) OBJTYPE(*FILE) OBJATR(*ALL) COOPDB(*YES) PRCTYPE(*INCLD)
    LIB1(MYLIB) OBJ1(MYAPP) OBJTYPE(*FILE) OBJATR(DSPF) COOPDB(*YES) PRCTYPE(*INCLD)
Updated for 5.0.02.00.
If you use the IBM commands (STRJRNPF, STRJRN, STRJRNOBJ) to start journaling, the user ID that performs the start journaling request must have the appropriate authority requirements.
For journaling to be successfully started on an object, one of the following authority requirements must be satisfied:
- The user profile of the user attempting to start journaling for an object has *ALLOBJ special authority.
- The user profile of the user attempting to start journaling for an object has explicit *ALL object authority for the journal to which the object is to be journaled.
- Public authority (*PUBLIC) has *OBJALTER, *OBJMGT, and *OBJOPR object authorities for the journal to which the object is to be journaled.
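For example, the *PUBLIC authority requirement could be granted with the IBM i GRTOBJAUT command; the journal name and library shown here are hypothetical:

    GRTOBJAUT OBJ(MYLIB/MYJRN) OBJTYPE(*JRN) USER(*PUBLIC) AUT(*OBJALTER *OBJMGT *OBJOPR)

Granting the required authorities on the journal avoids giving the requesting user profile *ALLOBJ special authority solely to start journaling.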
If you attempt to start journaling for a data group file entry, IFS tracking entry, or object tracking entry and the files or objects associated with the entry are already journaled, MIMIX checks that the physical file, IFS object, data area, or data queue is journaled to the journal associated with the data group. If the file or object is journaled to the correct journal, the journaling status of the data group file entry, IFS tracking or object tracking entry is changed to *YES. If the file or object is not journaled to the correct journal or the attempt to start journaling fails, an error occurs and the journaling status is changed to *NO.
3. The Start Journaling File Entry (STRJRNFE) display appears. The Data group definition prompts and the System 1 file prompts identify your selection. Accept these values or specify the values you want.
4. Specify the value you want for the Start journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and starts or prevents journaling from starting as required. 5. If you want to use batch processing, specify *YES for the Submit to batch prompt. 6. To start journaling for the physical file associated with the selected data group, press Enter. The system returns a message to confirm the operation was successful.
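A command-line sketch of this request follows; the data group, library, and file names are hypothetical, and the FILE1 and JRNSYS keywords are illustrative of the prompts described above rather than verified syntax:

    STRJRNFE DGDFN(APP1 SYSTEMA SYSTEMB) FILE1(MYLIB/MYFILE) JRNSYS(*DGDFN)

With *DGDFN, MIMIX considers whether the data group is configured for journaling on the target system, as described in the procedure.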
3. The End Journal File Entry (ENDJRNFE) display appears. If you want to end journaling for all files in the library, specify *ALL at the System 1 file prompt. 4. Specify the value you want for the End journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and ends or prevents journaling from ending as required. 5. If you want to use batch processing, specify *YES for the Submit to batch prompt. 6. To end journaling, press Enter.
3. The Verify Journaling File Entry (VFYJRNFE) display appears. The Data group definition prompts and the System 1 file prompts identify your selection. Accept these values or specify the values you want. 4. Specify the value you want for the Verify journaling on system prompt. When *DGDFN is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) when determining where to verify journaling. 5. If you want to use batch processing, specify *YES for the Submit to batch prompt 6. Press Enter.
4. The Start Journaling IFS Entries (STRJRNIFSE) display appears. The Data group definition and IFS objects prompts identify the IFS object associated with the tracking entry you selected. You cannot change the values shown for the IFS objects prompts1. 5. Specify the value you want for the Start journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and starts or prevents journaling from starting as required. 6. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values. 7. The System 1 file identifier and System 2 file identifier prompts identify the file identifier (FID) of the IFS object on each system. You cannot change the values2. 8. To start journaling on the IFS objects specified, press Enter.
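A command-line sketch for starting journaling on IFS tracking entries follows; the data group name and path are hypothetical, and the OBJ and JRNSYS keywords are illustrative of the prompts rather than verified syntax:

    STRJRNIFSE DGDFN(APP1 SYSTEMA SYSTEMB) OBJ('/home/payroll/*') JRNSYS(*DGDFN)

When the command is invoked from a command line, the IFS objects prompts can be changed and up to 300 object selectors can be specified, as described in the notes below.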
3. The End Journaling IFS Entries (ENDJRNIFSE) display appears. The Data group definition and IFS objects prompts identify the IFS object associated with the tracking entry you selected. You cannot change the values shown for the IFS objects prompts (Note 1).
4. Specify the value you want for the End journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and ends or prevents journaling from ending as required.
Note 1: When the command is invoked from a command line, you can change values specified for the IFS objects prompts. Also, you can specify as many as 300 object selectors by using the + for more values prompt.
Note 2: When the command is invoked from a command line, use F10 to see the FID prompts. Then you can optionally specify the unique FID for the IFS object on either system. The FID values can be used alone or in combination with the IFS object path name.
5. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.
6. The System 1 file identifier and System 2 file identifier prompts identify the file identifier (FID) of the IFS object on each system. You cannot change the values shown (Note 2).
7. To end journaling on the IFS objects specified, press Enter.
3. The Verify Journaling IFS Entries (VFYJRNIFSE) display appears. The Data group definition and IFS objects prompts identify the IFS object associated with the tracking entry you selected. You cannot change the values shown for the IFS objects prompts (Note 1).
4. Specify the value you want for the Verify journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and verifies journaling on the appropriate systems as required.
5. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.
6. The System 1 file identifier and System 2 file identifier prompts identify the file identifier (FID) of the IFS object on each system. You cannot change the values shown (Note 2).
7. To verify journaling on the IFS objects specified, press Enter. For more information, see Using file identifiers (FIDs) for IFS objects on page 312.
4. The Start Journaling Obj Entries (STRJRNOBJE) display appears. The Data group definition and Objects prompts identify the object associated with the tracking entry you selected. Although you can change the values shown for these prompts, it is not recommended unless the command was invoked from a command line.
5. Specify the value you want for the Start journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and starts or prevents journaling from starting as required.
6. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.
7. To start journaling on the objects specified, press Enter.
3. The End Journaling Obj Entries (ENDJRNOBJE) display appears. The Data group definition and Objects prompts identify the object associated with the tracking entry you selected. Although you can change the values shown for these prompts, it is not recommended unless the command was invoked from a command line.
4. Specify the value you want for the End journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and ends or prevents journaling from ending as required.
5. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.
6. To end journaling on the objects specified, press Enter.
3. The Verify Journaling Obj Entries (VFYJRNOBJE) display appears. The Data group definition and Objects prompts identify the object associated with the tracking entry you selected. Although you can change the values shown for these prompts, it is not recommended unless the command was invoked from a command line.
4. Specify the value you want for the Verify journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and verifies journaling on the appropriate systems as required.
5. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.
6. To verify journaling on the objects specified, press Enter.
Chapter 15
MIMIX performance: The following topics describe how to improve MIMIX performance:
- Caching extended attributes of *FILE objects on page 345 describes how to change the maximum size of the cache used to store extended attributes of *FILE objects replicated from the system journal.
- Increasing data returned in journal entry blocks by delaying RCVJRNE calls on page 346 describes how you can improve object send performance by changing the size of the block of data from a receive journal entry (RCVJRNE) call and delaying the next call based on a percentage of the requested block size.
- Configuring high volume objects for better performance on page 350 describes how to change your configuration to improve system journal performance.
- Improving performance of the #MBRRCDCNT audit on page 351 describes how to use the CMPRCDCNT commit threshold policy to limit comparisons and thereby improve performance of this audit in environments which use commitment control.
Your environment may impose additional restrictions:
- If you rely on full image captures in the receiver as part of your auditing rules, do not configure for minimized entry data.
- Even if you do not rely on full image captures for auditing purposes, consider the effect of how data is minimized. The minimizing that results from specifying *FILE does not occur on field boundaries. Therefore, the entry-specific data may not be viewable and may not be used for auditing purposes. When *FLDBDY is specified, file data for modified fields is minimized on field boundaries. With *FLDBDY, entry-specific data is viewable and may be used for auditing purposes.
Configuring for minimized journal entry data may affect your ability to use the Work with Data Group File Entries on Hold (WRKDGFEHLD) command. For example, using option 2 (Change) on WRKDGFEHLD to convert a minimized record update (RUP) to a record put (RPT) will result in a failure when the entry is applied. An RPT requires the presence of a full, non-minimized record.
See the IBM book, Backup and Recovery, for restrictions on and usage of journal entries with minimized entry-specific data.
Updated for 5.0.02.00.
restrictions, journal caching can be used as an alternative. See Journal caching on page 342.
2. At the Include access paths prompt, specify *ELIGIBLE to include only eligible access paths in the recovery time specification.
Journal caching
Journal caching is an attribute of the journal that is defined. When journal caching is enabled, the system caches journal entries and their corresponding database records into main storage. This means that neither the journal entries nor their corresponding database records are written to disk until an efficient disk write can be scheduled. This usually occurs when the buffer is full, or at the first commit, close, or file end of data. Because most database transactions no longer have to wait for a synchronous write of the journal entries to disk, the performance gain can be significant. For example, batch operations must usually wait for each new journal entry to be written to disk. Journal caching can be helpful during batch operations when large numbers of add, update, and delete operations against journaled objects are performed. The default value for journal caching is *BOTH. It is recommended that you use the default value of *BOTH to perform journal caching on both the source and the target systems. For more information about journal caching, see the IBM Redbooks Technote, Journal Caching: Understanding the Risk of Data Loss.
To enable journal standby state or journal caching in a MIMIX environment, two parameters have been added to the Create Journal Definition (CRTJRNDFN) and Change Journal Definition (CHGJRNDFN) commands: Target journal state (TGTSTATE) and Journal caching (JRNCACHE). See Creating a journal definition on page 215 and Changing a journal definition on page 217. When journaling is used on the target system, the TGTSTATE parameter specifies the requested status of the target journal. Valid values for the TGTSTATE parameter are *ACTIVE and *STANDBY. When *ACTIVE is specified and the data group associated with the journal definition is journaling on the target system (JRNTGT(*YES)), the target journal state is set to active when the data group is started. When *STANDBY is specified, objects are journaled on the target system, but most journal entries are prevented from being deposited into the target journal. An additional value, *SAME, is valid for the CHGJRNDFN command, which indicates the TGTSTATE value should remain unchanged. The JRNCACHE parameter specifies whether the system should cache journal entries in main storage before writing them to disk. Valid values for the JRNCACHE parameter are *TGT, *BOTH, *NONE, or *SRC. Although journal caching can be configured on the target system, source system, or both, it is recommended to be performed on both (*BOTH) the target system and source system. The recommended value of *BOTH is the default. An additional value, *SAME, is valid for the CHGJRNDFN command, which indicates the JRNCACHE value should remain unchanged.
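The valid-value rules above can be summarized in a short sketch. This is illustrative Python only, not a MIMIX interface; the command and parameter names are taken from the text:

```python
# Valid values from the text; *SAME (keep the current value) is accepted
# only on the Change Journal Definition (CHGJRNDFN) command.
VALID_TGTSTATE = {"*ACTIVE", "*STANDBY", "*SAME"}
VALID_JRNCACHE = {"*TGT", "*BOTH", "*NONE", "*SRC", "*SAME"}

def check_jrndfn_params(command, tgtstate, jrncache):
    """Return True if the values are valid for the given command."""
    if tgtstate not in VALID_TGTSTATE or jrncache not in VALID_JRNCACHE:
        return False
    if command != "CHGJRNDFN" and "*SAME" in (tgtstate, jrncache):
        return False
    return True
```

For example, `check_jrndfn_params("CRTJRNDFN", "*SAME", "*BOTH")` is rejected because *SAME is meaningless when creating a new journal definition.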
commitment control. For journals in standby mode, commitment control entries are not sent to or deposited in the journal. Note: MIMIX does not use commitment control on the target system. As such, MIMIX support of IBM's high availability performance enhancements can be configured on the target system even if commitment control is being used on the source system. Do not use these high availability performance enhancements in conjunction with referential constraints, with the exception of referential constraint types of *RESTRICT.
Also be aware of the following additional restrictions:
- Do not change journal standby state or journal caching on IBM-supplied journals. These journal names begin with Q and reside in libraries whose names also begin with Q (other than QGPL). Attempting to change these journals results in an error message.
- Do not place a remote journal in journal standby state. Journal caching is also not allowed on remote journals.
- Do not use MIMIX support of IBM's high availability performance enhancements in a cascading environment.
Note: Delays are not applied to blocks larger than the specified medium block percentage. In the previous example, no delays will be applied to blocks larger than 30 percent of the RCVJRNE block size, or 60,000 bytes.
LEN(20)
Note: Although you will see improvements from the file attribute cache with the default character value (LEN(2)), enhancements are maximized by recreating the MXOBJSEND data area as a LEN(20) to use the RCVJRNE call delays.
2. Specify the RCVJRNE block size, percentages, and multipliers to be used for the delay. Valid values for the RCVJRNE block size are 32Kb to 4000Kb. Valid values for the percentages and multipliers are numbers 01 through 99. Lakeview recommends typing the following as a starting point, where cache_size is the two-character number for the size of the file attribute cache:
CHGDTAARA DTAARA(installation_library/MXOBJSND) VALUE(cache_size,10,02,30,01,0100)
Note: For information about the cache size, see Caching extended attributes of *FILE objects on page 345.
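The note above implies a simple rule: the smaller the returned block relative to the requested RCVJRNE block size, the longer the wait before the next call, and blocks above the medium percentage are processed with no delay. A minimal sketch of that selection logic, assuming the percentage/multiplier roles shown in the CHGDTAARA example and a hypothetical base delay unit (not a documented MIMIX value):

```python
def rcvjrne_delay_ms(returned_bytes, block_size_bytes,
                     small_pct=10, small_mult=2,
                     medium_pct=30, medium_mult=1,
                     base_delay_ms=500):
    """Pick a delay before the next RCVJRNE call based on how full the
    returned block is, per the percentages/multipliers described above."""
    pct = returned_bytes * 100 / block_size_bytes
    if pct <= small_pct:      # small block: wait longest before the next call
        return base_delay_ms * small_mult
    if pct <= medium_pct:     # medium block: shorter wait
        return base_delay_ms * medium_mult
    return 0                  # large block: processed immediately, no delay
```

With a 200,000-byte block size as in the note's example, a 70,000-byte block (35 percent) is above the medium percentage and gets no delay.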
Example: This example shows the result of setting the policy for a data group to a value of 10,000. Table 45 shows the files replicated by each of the apply sessions used by the data group and the result of the comparison. Because of the number of uncommitted record operations present at the time of the request, files processed by apply sessions A and C are not compared.
Table 45. Sample results with a policy threshold value of 10,000.

Apply     File   Uncommitted Record    Apply Session   Result
Session          Operations Per File   Total
A         A01    11,000                > 10,000        Not compared, *CMT
A         A02    0                                     Not compared, *CMT
B         B01    5,000                 < 10,000        Compared
B         B02    0                                     Compared
C         C01    7,000                 > 10,000        Not compared, *CMT
C         C02    6,000                                 Not compared, *CMT
D         D01    50                    < 10,000        Compared
D         D02    500                                   Compared
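The threshold behavior can be sketched as follows (illustrative Python, not MIMIX code). The decision is made per apply session, so a file with zero uncommitted operations is still skipped when its session's total exceeds the threshold:

```python
def audit_decisions(files_by_session, threshold):
    """Decide, per apply session, whether the #MBRRCDCNT audit compares
    each file: skip the whole session when its total uncommitted record
    operations exceed the CMPRCDCNT commit threshold."""
    results = {}
    for session, files in files_by_session.items():
        over = sum(files.values()) > threshold
        for name in files:
            results[name] = "Not compared, *CMT" if over else "Compared"
    return results

# Uncommitted record operations per file, from Table 45.
table45 = {
    "A": {"A01": 11_000, "A02": 0},
    "B": {"B01": 5_000, "B02": 0},
    "C": {"C01": 7_000, "C02": 6_000},
    "D": {"D01": 50, "D02": 500},
}
results = audit_decisions(table45, 10_000)
```

Running this reproduces the table: sessions A (11,000) and C (13,000) exceed the threshold, so all four of their files are reported as *CMT, while the files in sessions B and D are compared.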
Chapter 16
System journal replication: The following topics describe advanced techniques for system journal replication:
- Omitting T-ZC content from system journal replication on page 387 describes considerations and requirements for omitting content of T-ZC journal entries from replicated transactions for logical and physical files.
- Selecting an object retrieval delay on page 391 describes how to set an object retrieval delay value so that a MIMIX lock on an object does not interfere with your applications. This topic includes several examples.
- Configuring to replicate SQL stored procedures and user-defined functions on page 393 describes the requirements for replicating these constructs and how to configure MIMIX to replicate them.
Using Save-While-Active in MIMIX on page 396 describes how to change the type of save-while-active option used when saving objects. You can view and change these configuration values for a data group through an interface such as SQL or DFU.
Keyed replication
By default, MIMIX user journal replication processes use positional replication. You can change from positional replication to keyed replication for database files.
You can use the Verify Key Attributes (VFYKEYATR) command to determine whether a physical file is eligible for keyed replication. See Verifying key attributes on page 359.
- DB journal entry processing must have Before images set to *SEND for source send configurations. When using remote journaling, all journal entries are sent.
- Verify that you have the value you need specified for the Journal image element of the File and tracking ent. options. *BOTH is recommended.
- File and tracking ent. options must specify *KEYED for the Replication type element.
3. The files identified by the data group file entries for the data group must be eligible for keyed replication. See topic Verifying Key Attributes in the Using MIMIX book.
4. If you have modified file entry options on individual data group file entries, you need to ensure that the values used are compatible with keyed replication.
5. Start journaling for the file entries using Starting journaling for physical files on page 326.
Note: You can use any of the following ways to configure data group file entries for keyed replication:
- Use either procedure in topic Loading file entries on page 272 to add or modify a group of data group file entries. If you are modifying existing file
entries in this way, you should specify *UPDADD for the Update option parameter.
- Use topic Adding a data group file entry on page 278 to create a new file entry.
- Use topic Changing a data group file entry on page 279 to modify an existing file entry.
5. The files identified by the data group file entries for the data group must be eligible for keyed replication. See topic Verifying Key Attributes in the Using MIMIX book.
6. After you have changed individual data group file entries, you need to start journaling for the file entries using Starting journaling for physical files on page 326.
3. Press Enter.
4. A spooled file is created that indicates whether you can use keyed replication for the files in the library or data group you specified. Display the spooled file (WRKSPLF command) or use your standard process for printing. You can use keyed replication for the file if *BOTH appears in the Replication Type Allowed column. If a value appears in the Replication Type Defined column, the file is already defined to the data group with the replication type shown.
MIMIX user journal replication provides filtering options within the data group definition. Also, MIMIX provides options within the data group definition and for individual data group file entries for resolving most collision points. Additionally, collision resolution classes allow you to specify different resolution methods for each collision point.
File sharing is a scenario in which a file can be shared among a group of systems and can be updated from any of the systems in the group. MIMIX implements file sharing among systems defined to the same MIMIX installation. To enable file sharing, MIMIX must be configured to allow bi-directional data flow. An example of file sharing is when an enterprise maintains a single database file that must be updated from any of several systems.
Configure two data group definitions between the two systems. In one data group, specify *SYS1 for the Data source (DTASRC) parameter. In the other data group, specify *SYS2 for this parameter. Each data group definition should specify *NO for the Allow to be switched (ALWSWT) parameter.
Note: In system journal replication, MIMIX does not support simultaneous updates to the same object on multiple systems and does not support conflict resolution for objects. Once an object is replicated to a target system, system journal replication processes prevent looping by not allowing the same object, regardless of name mapping, to be replicated back to its original source system.
File combining is a scenario in which all or partial information from files on multiple systems can be sent to and combined in a single file on a target system. In its user journal replication processes, MIMIX implements file combining between multiple source systems and a target system that are defined to the same MIMIX installation. MIMIX determines what data from the multiple source files is sent to the target system based on the contents of a journal transaction. An example of file combining is when many locations within an enterprise update a local file and the updates from all local files are sent to one location to update a composite file. The example in Figure 20
shows file combining from multiple source systems onto a composite file on the management system.
Figure 20. Example of file combining
To enable file combining between two systems, MIMIX user journal replication must be configured as follows:
- Configure the data group definition for keyed replication. See topic Keyed replication on page 355.
- If only part of the information from the source system is to be sent to the target system, you need an exit program to filter out transactions that should not be sent to the target system.
- If you allow the data group to be switched (by specifying *YES for the Allow to be switched (ALWSWT) parameter) and a switch occurs, the file combining operation effectively becomes a file routing operation. To ensure that the data group will perform file combining operations after a switch, you need an exit program that allows the appropriate transactions to be processed regardless of which system is acting as the source for replication.
- After the combining operation is complete, if the combined data will be replicated or distributed again, you need to prevent it from returning to the system on which it originated.
File routing is a scenario in which information from a single file can be split and sent to files on multiple target systems. In user journal replication processes, MIMIX implements file routing between a source system and multiple target systems that are defined to the same MIMIX installation. To enable file routing, MIMIX calls a user exit program that makes the file routing decision. The user exit program determines what data from the source file is sent to each of the target systems based on the contents
of a journal transaction. An example of file routing is when one location within an enterprise performs updates to a file for all other locations, but only updated information relevant to a location is sent back to that location. The example in Figure 21 shows the management system routing only the information relevant to each network system to that system.
Figure 21. Example of file routing
To enable file routing, MIMIX user journal replication processes must be configured as follows:
- Configure the data group definition for keyed replication. See topic Keyed replication on page 355.
- The data group definition must call an exit program that filters transactions so that only those transactions which are relevant to the target system are sent to it.
- If you allow the data group to be switched (by specifying *YES for the Allow to be switched (ALWSWT) parameter) and a switch occurs, the file routing operation effectively becomes a file combining operation. To ensure that the data group will perform file routing operations after a switch, you need an exit program that allows the appropriate transactions to be processed regardless of which system is acting as the source for replication.
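A routing exit decision can be pictured as follows. This is a conceptual sketch only: the entry layout and the location field are invented for illustration and are not the actual MIMIX exit-program interface.

```python
def relevant_to_target(journal_entry, target_system):
    """File routing: send a transaction only to the target system whose
    location code the record concerns (hypothetical field names)."""
    return journal_entry["location"] == target_system

entries = [
    {"file": "ORDERS", "location": "MEXICO",   "op": "UPDATE"},
    {"file": "ORDERS", "location": "HONGKONG", "op": "UPDATE"},
]

# Only the Mexico City record is routed to the Mexico City system.
mexico_feed = [e for e in entries if relevant_to_target(e, "MEXICO")]
```

The same predicate, inverted to compare against the entry's system of origin, is the shape of the filter that prevents combined or cascaded data from flowing back to where it originated.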
Data can pass through one intermediate system within a MIMIX installation. Additional MIMIX installations will allow you to support cascading in scenarios that require data to flow through two or more intermediate systems before reaching its destination. Figure 22 shows the basic cascading configuration that is possible within one MIMIX installation.
Figure 22. Example of a simple cascading scenario
To enable cascading you must have the following:
- Within a MIMIX installation, the management system must be the intermediate system.
- Configure a data group between the originating system (a network system) and the intermediate (management) system.
- Configure another data group for the flow from the intermediate (management) system to the destination system.
For user journal replication, you also need the following:
- The data groups should be configured to send journal entries that are generated by MIMIX. To do this, specify *SEND for the Generated by MIMIX element of the DB journal entry processing (DBJRNPRC) parameter. When this is the case, MIMIX performs the database updates.
- If it is possible for the data to be routed back to the originating or any intermediate systems, you need to use keyed replication.
Note: Once an object is replicated to a target system, MIMIX system journal replication processes prevent looping by not allowing the same object, regardless of name mapping, to be replicated back to its original source system.
Cascading may be used with other data management techniques to accomplish a specific goal. Figure 23 shows an example where the Chicago system is a management system in a MIMIX installation that collects data from the network systems and broadcasts the updates to the other participating systems. The network systems send unfiltered data to the management system. Figure 23 is a cascading scenario because changes that originate on the Hong Kong system pass through an intermediate system (Chicago) before being distributed to the Mexico City system and other network systems in the MIMIX installation. Exit programs are required for the
data groups acting between the management system and the destination systems and need to prevent updates from flowing back to their system of origin.
Figure 23. Bi-directional example that implements cascading for file distribution.
Trigger support
A trigger program is a user exit program that is called by the database when a database modification occurs. Trigger programs can be used to make other database modifications, which are called trigger-induced database modifications.
This is because the database apply process checks each transaction before processing to see if filtering is required, and firing the trigger adds additional overhead to database processing.
Constraint support
A constraint is a restriction or limitation placed on a file. There are four types of constraints: referential, unique, primary key, and check. Unique, primary key, and check constraints are single-file operations transparent to MIMIX. If a constraint is met for a database operation on the source system, the same constraint will be met for the replicated database operation on the target. Referential constraints, however, ensure the integrity between multiple files. For example, you could use a referential constraint to:
- Ensure that when an employee record is added to a personnel file, it has an associated department from a company organization file.
- Empty a shopping cart and remove the order records if an internet shopper exits without placing an order.
When constraints are added, removed, or changed on files replicated by MIMIX, these constraint changes will be replicated to the target system. With the exception of files that have been placed on hold, MIMIX always enables constraints and applies constraint entries. MIMIX tolerates mismatched before images or minimized journal entry data CRC failures when applying constraint-generated activity. Because the parent record was already applied, entries with mismatched before images are applied and entries with minimized journal entry data CRC failures are ignored. To use this support:
- Ensure that your target system is at the same or a greater release level than the source system, so that the target system is able to use all of the i5/OS function that is available on the source system. If an earlier i5/OS level is installed on the target system, the operation will be ignored.
- You must have your MIMIX environment configured for either MIMIX Dynamic Apply or legacy cooperative processing.
Referential constraint handling for these dependent files is supported through the replication of constraint-induced modifications. MIMIX does not provide the ability to disable constraints because i5/OS would check every record in the file to ensure constraints are met once the constraint is reenabled. This would cause a significant performance impact on large files and could impact switch performance. If the need exists, this can be done through automation.
Nothing prevents identity column values from being generated more than once. However, in typical usage, the identity column is also a primary, unique key and set to not cycle. The value generator for the identity column is stored internally with the table. Following certain actions which transfer table data from one system to another, the next identity column value generated on the receiving system may not be as expected. This can occur after a MIMIX switch and after other actions such as certain save/restore operations on the backup system. Similarly, other actions such as applying journaled changes (APYJRNCHG), also do not keep the value generator synchronized. Any SQL table with an identity column that is replicated by a switchable data group can potentially experience this problem. Journal entries used to replicate inserted rows on the production system do not contain information that would allow the value generator to remain synchronized. The result is that after a switch to the backup system, rows can be inserted on the backup system using identity column values
other than the next expected value. The starting value for the value generator on the backup system is used instead of the next expected value based on the table's content. This can result in the reuse of identity column values which in turn can cause a duplicate key exception. Detailed technical descriptions of all attributes are available in the IBM eServer iSeries Information Center. Look in the Database section for the SQL Reference for CREATE TABLE and ALTER TABLE statements.
Also, the SETIDCOLA command is needed in any environment in which you are attempting to restore from a save that was created while replication processes were running.
chosen must be valid for all tables in the data group. See Examples of choosing a value for INCREMENTS on page 377.
Not supported - The following scenarios are known to be problematic and are not supported. If you cannot use the SETIDCOLA command in your environment, consider the Alternative solutions on page 375.
- Columns that have cycled - If an identity column allows cycling and adding a row increments its value beyond the maximum range, the restart value is reset to the beginning of the range. Because cycles are allowed, the assumption is that duplicate keys will not be a problem. However, unexpected behavior may occur when cycles are allowed and old rows are removed from the table with a frequency such that the identity column values never actually complete a cycle. In this scenario, the ideal starting point would be wherever there is the largest gap between existing values. The SETIDCOLA command cannot address this scenario; it must be handled manually.
- Rows deleted on production table - An application may require that an identity column value never be generated twice. For example, the value may be stored in a different table, data area, or data queue, given to another application, or given to a customer. The application may also require that the value always locate either the original row or, if the row is deleted, no row at all. If rows with values at the end of the range are deleted and you perform a switch followed by the SETIDCOLA command, the identity column values of the deleted rows will be re-generated for newly inserted rows. The SETIDCOLA command is not recommended for this environment. This must be handled manually.
- No rows in backup table - If there are no rows in the table on the backup system, the restart value will be set to the initial start value. Running the SETIDCOLA command on the backup system may result in re-generating values that were previously used. The SETIDCOLA command cannot address this scenario; it must be handled manually.
- Application-generated values - Optionally, applications can supply identity column values at the time they insert rows into a table. These application-generated identity values may be outside the minimum and maximum values set for the identity column. For example, a table's identity column range may be from 1 through 100,000,000 but an application occasionally supplies values in the range of 200,000,000 through 500,000,000. If cycling is permitted and the SETIDCOLA command is run, the command would recognize the higher values from the application and would cycle back to the minimum value of 1. Because the result would be problematic, the SETIDCOLA command is not recommended for tables which allow application-generated identity values. This must be handled manually.
Alternative solutions
If you cannot use the SETIDCOLA command because of its known limitations, you have these options. Manually reset the identity column starting point: Following a switch to the backup system, you can manually reset the restart value for tables with identity
columns. The SQL statement ALTER TABLE name ALTER COLUMN can be used for this purpose. Convert to SQL sequence objects: To overcome the limitations of identity column switching and to avoid the need to use the SETIDCOLA command, SQL sequence objects can be used instead of identity columns. Sequence objects are implemented using a data area which can be replicated by MIMIX. The data area for the sequence object must be configured for replication through the user journal (cooperatively processed).
Following a planned switch where tables are synchronized, you can usually use *DFT.

number-of-increments-to-skip: Specify the number of increments to skip. Valid values are 1 through 2,147,483,647. Following an unplanned switch, use a larger value to ensure that you skip any values used on the production system that may not have been replicated to the backup system.
Usage notes
The reason you are using this command determines which system you should run it from. See When the SETIDCOLA command is useful on page 374 for details. The command can be invoked manually or as part of a MIMIX Model Switch Framework custom switching program. Evaluating your environment to determine an appropriate increment value is highly recommended before using the command.

This command can be long running when many files defined for replication by the specified data group contain identity columns. This is especially true when affected identity columns do not have indexes over them or when they are referenced by constraints. Specifying a higher number of jobs (JOBS) can reduce this time.

This command creates a work library named SETIDCOLA which is used by the command. The SETIDCOLA library is not deleted so that it can be used for any error analysis.

Internally, the SETIDCOLA command builds RUNSQLSTM scripts (one for each job specified) and uses RUNSQLSTM in spawned jobs to execute the scripts. RUNSQLSTM produces spooled files showing the ALTER TABLE statements executed, along with any error messages received. If any statement fails, the RUNSQLSTM command also fails, returning the failing status to the job where SETIDCOLA is running, and an escape message is issued.
For example, data group ORDERS contains tables A and B. Each row added to table A increases the identity value by 1 and each row added to table B increases the identity value by 1,000. Rows are inserted into table A at a rate of approximately 600 rows per hour. Rows are inserted into table B at a rate of approximately 20 rows per hour. Prior to a switch, on the production system the latest value for table A was 75 and the latest value for table B was 30,000. Consider the following scenarios:

Scenario 1. You performed a planned switch for test purposes. Because replication of all transactions completed before the switch and no users have been allowed on the backup system, the backup system has the same values as the production system. Before starting replication in the reverse direction you run the SETIDCOLA command with an INCREMENTS value of 1. The next rows added to table A and B will have values of 76 and 31,000, respectively.

Scenario 2. You performed an unplanned switch. From previous experience, you know that the latency of changes being transferred to the backup system is approximately 15 minutes. Rows are inserted into Table A at the highest rate. In 15 minutes, approximately 150 rows will have been inserted into Table A (600 rows/hour * 0.25 hours). This suggests an INCREMENTS value of 150. However, since all measurements are approximations or based on historical data, this amount should be increased by at least 100%, to 300, to ensure that duplicate identity column values are not generated on the backup system. The next rows added to table A and B will have values of 75+(300*1) = 375 and 30,000 + (300*1000) = 330,000 respectively.
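The arithmetic in Scenario 2 can be sketched as follows. The function names are illustrative, not part of the product; the safety factor of 2.0 corresponds to the "at least 100%" adjustment described above.

```python
def increments_to_skip(rows_per_hour, latency_hours, safety_factor=2.0):
    """Estimate INCREMENTS after an unplanned switch: rows that may
    have been inserted on production but not yet replicated, padded
    by a safety factor (2.0 = a 100% adjustment)."""
    return int(rows_per_hour * latency_hours * safety_factor)

def next_identity(last_value, increments, increment_size):
    """Identity value generated for the next inserted row after skipping."""
    return last_value + increments * increment_size

skip = increments_to_skip(600, 0.25)     # 600 rows/hour, ~15 minutes latency
print(skip)                              # prints 300
print(next_identity(75, skip, 1))        # table A: prints 375
print(next_identity(30000, skip, 1000))  # table B: prints 330000
```

Note that the same INCREMENTS value applies to every table in the data group, so it must be acceptable for the table with the fastest insert rate.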
2. Check the job log for the following messages. Message LVE3E2C identifies the number of tables found with identity columns. Message LVI3E26 indicates that no tables were found with identity columns. 3. If the results found tables with identity columns, you need to evaluate the tables and determine whether you can use the SETIDCOLA command to set values.
limitations on page 374. 3. Determine what increment value is appropriate for use for all tables replicated by the data group. Consider the needs of each table. Also consider the MIMIX backlog at the time you plan to use the command. See Examples of choosing a value for INCREMENTS on page 377. 4. From the appropriate system, as defined in When the SETIDCOLA command is useful on page 374, specify a data group and the number of increments to skip in the command:
SETIDCOLA DGDFN(name system1 system2) ACTION(*SET) INCREMENTS(number)
Collision resolution
Collision resolution is a function within MIMIX user journal replication that automatically resolves detected collisions without user intervention. MIMIX supports the following choices for collision resolution, which you can specify in the file entry options (FEOPT) parameter in either a data group definition or in an individual data group file entry:

Held due to error (*HLDERR): This is the default value for collision resolution in the data group definition and data group file entries. MIMIX flags file collisions as errors and places the file entry on hold. Any data group file entry for which a collision is detected is placed in a "held due to error" state (*HLDERR). As a result, the journal entries are replicated to the target system but are not applied to the target database. If the file entry specifies member *ALL, a temporary file entry is created for the member in error and only that file entry is held. Normal processing continues for all other members in the file. You must take action to apply the changes and return the file entry to an active state. When held due to error is specified in the data group definition or the data group file entry, it is used for all 12 of the collision points.

Automatic synchronization (*AUTOSYNC): MIMIX attempts to automatically synchronize file members when an error is detected. The member is put on hold while the database apply process continues with the next transaction. The file member is synchronized using copy active file processing, unless the collision occurred at the compare attributes collision point; in that case, the file is synchronized using save and restore processing. When automatic synchronization is specified in the data group definition or data group file entry, it is used for all 12 of the collision points.

Collision resolution class: A collision resolution class is a named definition which provides more granular control of collision resolution.
Some collision points also provide additional methods of resolution that can only be accessed by using a collision resolution class. With a defined collision resolution class, you can specify how to handle collision resolution at each of the 12 collision points. You can specify multiple methods of collision resolution to attempt at each collision point. If the first method specified does not resolve the problem, MIMIX uses the next method specified for that collision point.
data collision. This method is available for all collision points. The MXCCUSREXT service program, which is shipped with MIMIX and runs on the target system, dynamically links your exit program. The exit program is called on three occasions. The first is when the data group is started; this call allows the exit program to handle any initialization or setup you need to perform. The second is when a collision occurs at a collision point for which you have indicated that an exit program should perform collision resolution actions. Finally, the exit program is called when the data group is ended.

Field merge (*FLDMRG): This method is only available for the update collision point 3, used with keyed replication. If certain rules are met, fields from the after-image are merged with the current image of the file to create a merged record that is written to the file. Each field within the record is checked using the series of algorithms below. In the following algorithms, these abbreviations are used: RUB = before-image of the source file; RUP = after-image of the source file; RCD = current record image of the target file.

a. If the RUB equals the RUP and the RUB equals the RCD, do not change the RUP field data.
b. If the RUB equals the RUP and the RUB does not equal the RCD, copy the RCD field data into the RUP record.
c. If the RUB does not equal the RUP and the RUB equals the RCD, do not change the RUP field data.
d. If the RUB does not equal the RUP and the RUB does not equal the RCD, fail the field-level merge.

Applied (*APPLIED): This method is only available for the update collision point 3 and the delete collision point 1. For update collision point 3, the transaction is ignored if the record to be updated already equals the data in the updated record. For delete collision point 1, the transaction is ignored because the record does not exist.
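The four field merge rules can be expressed compactly. This is an illustrative sketch of the logic only, not MIMIX code; the function names are hypothetical.

```python
def merge_field(rub, rup, rcd):
    """Apply the *FLDMRG rules (a-d) to one field.
    rub = before-image of the source file
    rup = after-image of the source file
    rcd = current record image of the target file"""
    if rub == rup:
        # Rules a and b: the source did not change the field,
        # so keep the target's current value (for rule a they are equal).
        return rcd
    if rub == rcd:
        # Rule c: the source changed the field and the target did not,
        # so keep the source's change.
        return rup
    # Rule d: both sides changed the field; the field-level merge fails.
    raise ValueError("field-level merge failed")

def merge_record(before, after, current):
    """Merge every field; any failed field fails the whole merge."""
    return {f: merge_field(before[f], after[f], current[f]) for f in after}

# Field "a" changed only on the target, field "b" only on the source:
print(merge_record({"a": 1, "b": 5}, {"a": 1, "b": 7}, {"a": 3, "b": 5}))
# prints {'a': 3, 'b': 7}
```

In words: whichever side changed a field wins; if both sides changed it, the merge fails and another collision resolution method (or *HLDERR) takes over.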
If multiple collision resolution methods are specified and do not resolve the problem, MIMIX will always use *HLDERR as the last resort, placing the file on hold.
You must specify either *AUTOSYNC or the name of a collision resolution class for the Collision resolution element of the File entry option (FEOPT) parameter. Specify the value as follows: If you want to implement collision resolution for all files processed by a data group, specify a value in the parameter within the data group definition. If you want to implement collision resolution for only specific files, specify a value in the parameter within an individual data group file entry. Note: Ensure that data group activity is ended before you change a data group definition or a data group file entry.
If you plan to use an exit program for collision resolution, you must first create a named collision resolution class. In the collision resolution class, specify *EXITPGM for each of the collision points that you want to be handled by the exit program and specify the name of the exit program.
7. At the Number of retry attempts prompt, specify the number of times to try to automatically synchronize a file. If this number is exceeded within the time specified in the Retry time limit, the file will be placed on hold due to error.
8. At the Retry time limit prompt, specify the maximum number of hours to retry a process if a failure occurs due to a locking condition or an in-use condition. Note: If a file encounters repeated failures, an error condition that requires manual intervention is likely to exist. Allowing excessive synchronization requests can cause communications bandwidth degradation and negatively impact communications performance.
9. To create the collision resolution class, press Enter.
Access type   Operation
10            Clear
25            Initialize
30            Open
36            Reorganize
37            Remove
38            Rename
62            Add constraint
63            Change constraint
64            Remove constraint
These T-ZC journal entries may or may not have a member name associated with them. If a member name is associated with the journal entry, the T-ZC is a member operation. If no member name is associated with the journal entry, the T-ZC is assumed to be a file operation.
By default, MIMIX replicates file attributes and file member data for all T-ZC entries generated for logical and physical files configured for system journal replication. While MIMIX recreates attribute changes on the target system, member additions and data changes require MIMIX to replicate the entire object using save, send, and restore processes. This can cause unnecessary replication of data and can impact processing time, especially in environments where the replication of file data transactions is not necessary.

Omitting T-ZC entries: Through the Omit content (OMTDTA) parameter on data group object entry commands, you can specify a predetermined set of access types for *FILE objects to be omitted from system journal replication. T-ZC journal entries with access types within the specified set are omitted from processing by MIMIX. The OMTDTA parameter is useful when a file's or member's data does not need to be replicated. For example, when replicating work files and temporary files, it may be desirable to replicate the file layout but not the file members or data. The OMTDTA parameter can also help you reduce the number of transactions that require substantial processing time to replicate, such as T-ZC journal entries with access type 30 (Open).

Each of the following values for the OMTDTA parameter defines a set of access types that can be omitted from replication:

*NONE - No T-ZCs are omitted from replication. All file, member, and data operations in transactions for the access types listed in Table 46 are replicated. This is the default value.

*MBR - Data operations are omitted from replication. File and member operations in transactions for the access types listed in Table 46 are replicated. Access type 7 (Change) for both file and member operations is replicated.

*FILE - Member and data operations are omitted from replication. Only file operations in transactions for the access types listed in Table 46 are replicated.
Only file operations in transactions with access type 7 (Change) are replicated.
continue to be journaled and replicated, the data group object entry should also specify *CHANGE or *ALL for the Object auditing value (OBJAUD) parameter. For all library-based objects, MIMIX evaluates the object auditing level when starting a data group after a configuration change. If the configured value specified for the OBJAUD parameter is higher than the object's actual value, MIMIX will change the object to use the higher value. If you use the SETDGAUD command to force the object to have an auditing level of *NONE and the data group object entry also specifies *NONE, any changes to the file will no longer generate T-ZC entries in the system journal. For more information about object auditing, see Managing object auditing on page 57.

Object attribute considerations - When MIMIX evaluates a system journal entry and finds a possible match to a data group object entry which specifies an attribute in its Attribute (OBJATR) parameter, MIMIX must retrieve the attribute from the object in order to determine which object entry is the most specific match. If the object attribute is not needed to determine the most specific match to a data group object entry, it is not retrieved. After determining which data group object entry has the most specific match, MIMIX evaluates that entry to determine how to proceed with the journal entry. When the matching object entry specifies *FILE or *MBR for OMTDTA, MIMIX does not need to consider the object attribute in any other evaluations. As a result, the performance of the object send job may improve.
Updated for 5.0.03.00.
replication. This may affect whether replicated files on the source and target systems are identical. For example, recall how a file with an object auditing attribute value of *NONE is processed. After MIMIX replicates the initial creation of the file through the system journal, the file on the target system reflects the original state of the file on the source system when it was retrieved for replication. However, any subsequent changes to file data are not replicated to the target system. According to the configuration information, the files are synchronized between source and target systems, but the files are not the same. A similar situation can occur when OMTDTA is used to prevent replication of predetermined types of changes. For example, if *MBR is specified for OMTDTA, the file and member attributes are replicated to the target system but the member data is not. The file is not identical between source and target systems, but it is synchronized according to configuration. Comparison commands will report these attributes as *EC (equal configuration) even though member data is different. MIMIX audits, which call comparison commands with a data group specified, will have the same results. Running a comparison command without specifying a data group will report all the synchronized-but-not-identical attributes as *NE (not equal) because no configuration information is considered. Consider how the following comparison commands behave when faced with nonidentical files that are synchronized according to the configuration. The Compare File Attributes (CMPFILA) command has access to configuration information from data group object entries for files configured for system journal replication. When a data group is specified on the command, files that are configured to omit data will report those omitted attributes as *EC (equal configuration). 
When CMPFILA is run without specifying a data group, the synchronized-but-not-identical attributes are reported as *NE (not equal).

The Compare File Data (CMPFILDTA) command uses data group file entries for configuration information. As a result, when a data group is specified on the command, any file objects configured for OMTDTA will not be compared. When CMPFILDTA is run without specifying a data group, the synchronized-but-not-identical file member attributes are reported as *NE (not equal).

The Compare Object Attributes (CMPOBJA) command can be used to check for the existence of a file on both systems and to compare its basic attributes (those which are common to all object types). This command never compares file-specific attributes or member attributes and should not be used to determine whether a file is synchronized.
Example 2 - The object retrieval delay value is configured to be 2 seconds: Object A is created or changed at 10:45:51.
The Object Retrieve job encounters the create/change journal entry at 10:45:52. It retrieves the last change date/time attribute from the object and determines that the delay time (object last changed date/time of 10:45:51 + configured delay value of :02 = 10:45:53) exceeds the current date/time (10:45:52). Because the object retrieval delay value has not been met or exceeded, the object retrieve job delays for 1 second to satisfy the configured delay value. After the delay (at time 10:45:53), the Object Retrieve job again retrieves the last change date/time attribute from the object and determines that the delay time (object last changed date/time of 10:45:51 + configured delay value of :02 = 10:45:53) is equal to the current date/time (10:45:53). Because the object retrieval delay value has been met, the object retrieve job continues with normal processing and attempts to package the object.
Example 3 - The object retrieval delay value is configured to be 4 seconds: Object A is created or changed at 13:20:26. The Object Retrieve job encounters the create/change journal entry at 13:20:27. It retrieves the last change date/time attribute from the object and determines that the delay time (object last changed date/time of 13:20:26 + configured delay value of :04 = 13:20:30) exceeds the current date/time (13:20:27) and delays for 3 seconds to satisfy the configured delay value. While the object retrieve job is waiting to satisfy the configured delay value, the object is changed again at 13:20:28. After the delay (at time 13:20:30), the Object Retrieve job again retrieves the last change date/time attribute from the object and determines that the delay time (object last changed date/time of 13:20:28 + configured delay value of :04 = 13:20:32) again exceeds the current date/time (13:20:30) and delays for 2 seconds to satisfy the configured delay value. After the delay (at time 13:20:32), the Object Retrieve job again retrieves the last change date/time attribute from the object and determines that the delay time (object last changed date/time of 13:20:28 + configured delay value of :04 = 13:20:32) is equal to the current date/time (13:20:32). Because the object retrieval delay value has now been met, the object retrieve job continues with normal processing and attempts to package the object.
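The retry loop in these examples can be sketched as follows. This is an illustrative model, not MIMIX code; the function names and the simulated clock are hypothetical. Each pass re-reads the object's last-changed timestamp, so a change made during the wait pushes the target time further out, exactly as in Example 3.

```python
def wait_for_stability(get_last_changed, now, sleep, delay_seconds):
    """Delay packaging until the object has been unchanged for
    delay_seconds, re-checking the last-changed time after each wait."""
    while True:
        target = get_last_changed() + delay_seconds
        current = now()
        if target <= current:
            return current          # delay satisfied; package the object
        sleep(target - current)     # wait out the remaining delay

# Replay Example 3 with a simulated clock (seconds after 13:20:00):
t = {"now": 27.0}                   # job encounters the entry at 13:20:27
last = {"v": 26.0}                  # object changed at 13:20:26

def fake_sleep(seconds):
    t["now"] += seconds
    last["v"] = 28.0                # object changed again at 13:20:28

done = wait_for_stability(lambda: last["v"], lambda: t["now"],
                          fake_sleep, 4.0)
print(done)                         # packaging proceeds at 13:20:32; prints 32.0
```

The first wait runs from 27 to 30, discovers the 13:20:28 change, waits again until 32, and only then proceeds.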
2. Ensure that you have a data group object entry that includes the associated program object. For example: ADDDGOBJE DGDFN(name system1 system2) LIB1(library) OBJ1(*ALL) OBJTYPE(*PGM)
value will also use save-while-active. All other attempts to save the object will use a normal save. Note: Although MIMIX has the capability to replicate DLOs using save/restore techniques, it is recommended that DLOs be replicated using optimized techniques, which can be configured using the DLO transmission method under Object processing in the data group definition.
Example configurations
The following examples describe the SQL statements that could be used to view or set the configuration settings for a data group definition (data group name, system 1 name, system 2 name) of MYDGDFN, SYS1, SYS2.

Example - Viewing: Use this SQL statement to view the values for the data group definition:

SELECT DGDGN, DGSYS, DGSYS2, DGSWAT FROM MIMIX/DM0200P WHERE DGDGN='MYDGDFN' AND DGSYS='SYS1' AND DGSYS2='SYS2'

Example - Disabling: If you want to modify the values for a data group definition to disable use of save-while-active for a data group and use a normal save, you could use the following statement:

UPDATE MIMIX/DM0200P SET DGSWAT=-1 WHERE DGDGN='MYDGDFN' AND DGSYS='SYS1' AND DGSYS2='SYS2'

Example - Modifying: If you want to modify a data group definition to enable use of save-while-active with a wait time of 30 seconds for files, DLOs, and IFS objects, you could use the following statement:

UPDATE MIMIX/DM0200P SET DGSWAT=30 WHERE DGDGN='MYDGDFN' AND DGSYS='SYS1' AND DGSYS2='SYS2'

Note: You only have to make this change on the management system; the network system will be automatically updated by MIMIX.
Chapter 17
The topics in this chapter include:

Object selection process on page 399 describes object selection, which interacts with your input from a command so that the objects you expect are selected for processing.

Parameters for specifying object selectors on page 402 describes object selectors and elements, which allow you to work with classes of objects.

Object selection examples on page 407 provides examples and graphics with detailed information about object selection processing, object order precedence, and subtree rules.

Report types and output formats on page 418 describes the output of compare commands: spooled files and output files (outfiles).
The object selection process takes a candidate group of objects, subsets them as defined by a list of object selectors, and produces a list of objects to be processed. Figure 24 illustrates the process flow for object selection.
Figure 24. Object selection process flow
Candidate objects are those objects eligible for selection. They are input to the object selection process. Initially, candidate objects consist of all objects on the
system. Based on the command, the set of candidate objects may be narrowed down to objects of a particular class (such as IFS objects). The values specified on the command determine the object selectors used to further refine the list of candidate objects in the class. An object selector identifies an object or group of objects. Object selectors can come from the configuration information for a specified data group, from items specified in the object selector parameter, or both.

MIMIX processing for object selection consists of two distinct steps. Depending on what is specified on the command, one or both steps may occur.

The first major selection step is optional and is performed only if a data group definition is entered on the command. In that case, data group entries are the source for object selectors. Data group entries represent one of four classes of objects: files, library-based objects, IFS objects, and DLOs. Only those entries that correspond to the class associated with the command are used. The data group entries subset the list of candidate objects for the class to only those objects that are eligible for replication by the data group. If the command specifies a data group and items on the object selection parameter, the data group entries are processed first to determine an intermediate set of candidate objects that are eligible for replication by the data group. That intermediate set is input to the second major selection step. The second step then uses the input specified on the object selection parameter to further subset the objects selected by the data group entries. If no data group is specified on the data group definition parameter, the object selection parameter can be used independently to select from all objects on the system.

The second major object selection step subsets the candidate objects based on object selectors from the command's object selector parameter (file, object, IFS object, or DLO).
Up to 300 object selectors may be specified on the parameter. If none are specified, the default is to select all candidate objects. Note: A single object selector can select multiple objects through the use of generic names and special values such as *ALL, so the resulting object list can easily exceed the limit of 300 object selectors that can be entered on a command. The selection parameter is separate and distinct from the data group configuration entries. If a data group is specified, the possible object selectors are 1 to N, where N is defined by the number of data group entries. The remaining candidate objects make up the resultant list of objects to be processed. Each object selector consists of multiple object selector elements, which serve as filters on the object selector. The object selector elements vary by object class. Elements provide information about the object such as its name, an indicator of whether the objects should be included in or omitted from processing, and name mapping for dual-system and single-system environments. See Table 47 for a list of object selector elements by object class.
Order precedence
Object selectors are always processed in a well-defined sequence, which is important when an object matches more than one selector.
Selectors from a data group follow data group rules and are processed in most- to least-specific order. Selectors from the object selection parameter are always processed last to first. If a candidate object matches more than one object selector, the last matching selector in the list is used. As a general rule when specifying items on an object selection parameter, first specify selectors that have a broad scope and then gradually narrow the scope in subsequent selectors. In an IFS-based command, for example, include /A/B* and then omit /A/B1. Object selection examples on page 407 illustrates the precedence of object selection. For each object selector, the elements are checked according to a priority defined for the object class. The most specific element is checked for a match first, then the subsequent elements are checked according to their priority. For additional, detailed information about order precedence and priority of elements, see the following topics: How MIMIX uses object entries to evaluate journal entries for replication on page 101 Identifying IFS objects for replication on page 118 How MIMIX uses DLO entries to evaluate journal entries for replication on page 124 Processing variations for common operations on page 130
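The last-matching-selector rule for the object selection parameter can be sketched as follows. This is an illustrative model only: the function names are hypothetical, and Python's fnmatch-style wildcards stand in for MIMIX generic names.

```python
from fnmatch import fnmatch

def select(candidates, selectors):
    """Subset candidate objects with ordered (pattern, action) selectors,
    where action is "include" or "omit". When a candidate matches more
    than one selector, the last matching selector in the list decides;
    candidates matching no selector are not selected."""
    chosen = []
    for obj in candidates:
        decision = None
        for pattern, action in selectors:   # later matches override earlier ones
            if fnmatch(obj, pattern):
                decision = action
        if decision == "include":
            chosen.append(obj)
    return chosen

# Broad include first, then a narrower omit, as the text recommends:
print(select(["/A/B1", "/A/B2", "/A/C"],
             [("/A/B*", "include"), ("/A/B1", "omit")]))
# prints ['/A/B2']
```

Reversing the two selectors would re-include /A/B1, because the broad include would then be the last match.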
generic name specifications. Filtering elements provide additional filtering capability for candidate objects. Name mapping elements are required primarily for environments where objects exist in different libraries or paths. Include or omit elements identify whether the object should be processed or explicitly excluded from processing.
Table 47 lists object selection elements by function and identifies which elements are available on the commands.
Table 47. Object selection parameters and parameter elements by class

Class: File
  Commands: CMPFILA, CMPFILDTA, CMPRCDCNT(1)
  Parameter: FILE
  Elements: File, Library, Member, Attribute(1), Include/Omit, System 2 file(1), System 2 library(1)

Class: Library-based object
  Commands: CMPOBJA, SYNCOBJ
  Parameter: OBJ
  Elements: Object, Library, Type, Attribute, Include/Omit, System 2 object, System 2 library

Class: IFS
  Commands: CMPIFSA, SYNCIFS
  Parameter: OBJ
  Elements: Path, Subtree, Name pattern, Type, Include/Omit, System 2 path, System 2 name pattern

Class: DLO
  Commands: CMPDLOA, SYNCDLO
  Parameter: DLO
  Elements: Path, Subtree, Name pattern, Type, Owner, Include/Omit, System 2 path, System 2 name pattern

(1) The Compare Record Count (CMPRCDCNT) command does not support elements for attributes or name mapping.
File name and object name elements: The File name and Object name elements allow you to identify a file or object by name. These elements allow you to choose a specific name, a generic name, or the special value *ALL. Using a generic name, you can select a group of files or objects based on a common character string. If you want to work with all objects beginning with the letter A, for example, you would specify A* for the object name. To process all files within the related selection criteria, select *ALL for the file or object name. When a data group is also specified on the command, a value of *ALL results in the selection of files and objects defined to that data group by the respective data group file entries or data group object entries. When no data group is specified on the command, specifying *ALL and a library name selects only the objects that reside within the given library.

Library name element: The library name element specifies the name of the library that contains the files or objects to be included or omitted from the resultant list of
objects. Like the file or object name, this element allows you to specify the library as a specific name, a generic name, or the special value *ALL. Note: The library value *ALL is supported only when a data group is specified.

Member element: For commands that support the ability to work with file members, the Member element provides a means to select specific members. The Member element can be a specific name, a generic name, or the special value *ALL. Refer to the individual commands for detailed information on member processing.

Object path name (IFS) and DLO path name elements: The Object path name (IFS) and DLO path name elements identify an object or DLO by path name. They allow a specific path, a generic path, or the special value *ALL. Traditionally, DLOs are identified by a folder path and a DLO name. Object selection uses an element called DLO path, which combines the folder path and the DLO name. If you specify a data group, only those objects defined to that data group by the respective data group IFS entries or data group DLO entries are selected.

Directory subtree and folder subtree elements: The Directory subtree and Folder subtree elements allow you to expand the scope of selected objects and include the descendants of objects identified by the given object or DLO path name. By default, the subtree element is *NONE, and only the named objects are selected. However, if *ALL is used, all descendants of the named objects are also selected. Figure 25 illustrates the hierarchical structure of folders and directories prior to processing, and is used as the basis for the path, pattern, and subtree examples shown later in this document. For more information, see the graphics and examples beginning with Example subtree on page 410.
Figure 25. Directory or folder hierarchy
Directory subtree elements for IFS objects: When selecting IFS objects, only the objects in the specified file system are included. Object selection does not cross file system boundaries when processing subtrees with IFS objects. Objects from other file systems do not need to be explicitly excluded; however, you must explicitly specify any other file systems whose objects you want to include. For more information, see the graphic and examples beginning with Example subtree for IFS objects on page 415.

Name pattern element: The Name pattern element provides a filter on the last component of the object path name. The Name pattern element can be a specific name, a generic name, or the special value *ALL. If you specify a pattern of $*, for example, only those candidate objects with names beginning with $ that reside in the named DLO path or IFS object path are selected. Keep in mind that improper use of the Name pattern element can have undesirable results. Assume you specified a path name of /corporate, a subtree of *NONE, and a pattern of $*. Since the path name, /corporate, does not match the pattern of $*, the object selector will identify no objects. Thus, the Name pattern element is generally most useful when subtree is *ALL. For more information, see Example Name pattern on page 414.

Object type element: The Object type element provides the ability to filter objects based on an object type. The object type is valid for library-based objects, IFS objects, or DLOs, and can be a specific value or *ALL. The list of allowable values varies by object class. When you specify *ALL, only those object types which MIMIX supports for replication are included. For a list of replicated object types, see Supported object types for system journal replication on page 549. Supported object types for CMPIFSA and SYNCIFS are listed in Table 48.
Table 48. Supported object types for CMPIFSA and SYNCIFS

Object type  Description
*ALL         All directories, stream files, and symbolic links are selected
*DIR         Directories
*STMF        Stream files
*SYMLNK      Symbolic links
Supported object types for CMPDLOA and SYNCDLO are listed in Table 49.
Table 49. Supported DLO types for CMPDLOA and SYNCDLO

DLO type  Description
*ALL      All documents and folders are selected
*DOC      Documents
*FLR      Folders
405
For unique object types supported by a specific command, see the individual commands.
Object attribute element: The Object attribute element provides the ability to filter based on extended object attribute. For example, file attributes include PF, LF, SAVF, and DSPF, and program attributes include CLP and RPG. The attribute can be a specific value, a generic value, or *ALL. Although any value can be entered on the Object attribute element, a list of supported attributes is available on the command. Refer to the individual commands for the list of supported attributes.
Owner element: The Owner element allows you to filter DLOs based on DLO owner. The Owner element can be a specific name or the special value *ALL. Only candidate DLOs owned by the designated user profile are selected.
Include or omit element: The Include or omit element determines whether candidate objects are included in or omitted from the resultant list of objects to be processed by the command. Included entries are added to the resultant list and become candidate objects for further processing. Omitted entries are not added to the list and are excluded from further processing.
System 2 file and system 2 object elements: The System 2 file and System 2 object elements provide support for name mapping. Name mapping is useful when working with multiple sets of files or objects in a dual-system or single-system environment. This element may be a specific name or the special value *FILE1 for files or *OBJ1 for objects. If the File or Object element is not a specific name, then you must use the default value of *FILE1 or *OBJ1. This specification indicates that the name of the file or object on system 2 is the same as on system 1 and that no name mapping occurs. Generic values are not supported for the system 2 value if a generic value was specified on the File or Object parameter.
System 2 library element: The System 2 library element allows you to specify a system 2 library name that differs from the system 1 library name, providing name mapping between files or objects in different libraries. This element may be a specific name or the special value *LIB1. If the System 2 library element is not a specific name, then you must use the default value of *LIB1. This specification indicates that the name of the library on system 2 is the same as on system 1 and that no name mapping occurs. Generic values are not supported for the system 2 value if a generic value was specified on the Library element.
System 2 object path name and system 2 DLO path name elements: The System 2 object path name and System 2 DLO path name elements support name mapping for the path specified in the Object path name or DLO path name element. Name mapping is useful when working with two sets of IFS objects or DLOs in different paths in either a dual-system or single-system environment. Generic values are not supported for the system 2 value if you specified a generic value for the IFS Object or DLO element. Instead, you must choose the default values of *OBJ1 for IFS objects or *DLO1 for DLOs. These values indicate that the name of
406
the file or object on system 2 is the same as that value on system 1. The default provides support for a two-system environment without name mapping.
System 2 name pattern element: The System 2 name pattern element provides support for name mapping for the descendants of the path specified for the Object path name or DLO path name element. The System 2 name pattern element may be a specific name or the special value *PATTERN1. If the Object path name or DLO path name element is not a specific name, then you must use the default value of *PATTERN1. This specification indicates that no name mapping occurs. Generic values are not supported for the System 2 name pattern element if you specified a generic value for the Name pattern element.
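The name-matching rules these elements share — a specific name, a generic name ending in an asterisk, or the special value *ALL — can be sketched as follows. This is an illustrative Python sketch, not the product's actual implementation:

```python
def matches(selector: str, name: str) -> bool:
    """Apply one selector value to one candidate name."""
    if selector == "*ALL":
        return True                            # *ALL matches every name
    if selector.endswith("*"):
        return name.startswith(selector[:-1])  # generic name: prefix match
    return name == selector                    # specific name: exact match

# The /corporate example from the Name pattern discussion: the pattern is
# applied only to the last component of the path, so with subtree *NONE
# the named path /corporate itself is the only candidate, and it fails $*.
last = lambda path: path.rsplit("/", 1)[-1]
assert not matches("$*", last("/corporate"))             # nothing selected
assert matches("$*", last("/corporate/accounting/$123")) # selected with subtree *ALL
```

The same helper applies whether the selector value comes from a File, Object, Library, or Name pattern element.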
Next, Table 51 represents the object selectors based on the data group object entry configuration for data group DG1. Objects are evaluated against data group entries in the same order of precedence used by replication processes.
Table 51. Object selectors from data group entries for data group DG1

Object  Library  Object type  Include or omit  Order Processed
A*      LIBX     *ALL         *INCLUDE         3
ABC*    LIBX     *FILE        *OMIT            2
DEF     LIBX     *JOBQ        *INCLUDE         1

407
The object selectors from the data group subset the candidate object list, resulting in the list of objects defined to the data group shown in Table 52. This list is internal to MIMIX and not visible to users.
Table 52. Objects selected by data group DG1

Object  Library  Object type
A       LIBX     *OUTQ
AB      LIBX     *SBSD
DEF     LIBX     *JOBQ
Note: Although job queue DEF in library LIBX did not appear in Table 50, it would be added to the list of candidate objects when you specify a data group for some commands that support object selection. These commands must be able to identify or report candidate objects that do not exist.
Perhaps you now want to include or omit specific objects from the filtered candidate objects listed in Table 52. Table 53 shows the object selectors to be processed based on the values specified on the object selection parameter. These object selectors serve as an additional filter on the candidate objects.
Table 53. Object selectors for CMPOBJA object selection parameter

Object  Library  Object type  Include or omit  Order Processed
*ALL    LIBX     *OUTQ        *INCLUDE         1
*ALL    LIBX     *SBSD        *INCLUDE         2
*ALL    LIBX     *JOBQ        *OMIT            3
The objects compared by the CMPOBJA command are shown in Table 54. These are the result of the candidate objects selected by the data group (Table 52) that were subsequently filtered by the object selectors specified for the Object parameter on the CMPOBJA command (Table 53).
Table 54. Resultant list of objects to be processed

Object  Library  Object type
A       LIBX     *OUTQ
AB      LIBX     *SBSD
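As a sketch of this second filtering stage (our own Python model, not product code), the objects defined to the data group (Table 52) can be run through the selectors specified on the CMPOBJA Object parameter (Table 53) to reproduce the resultant list in Table 54:

```python
def matches(sel: str, name: str) -> bool:
    # Specific, generic (trailing *), or *ALL name matching.
    if sel == "*ALL":
        return True
    if sel.endswith("*"):
        return name.startswith(sel[:-1])
    return name == sel

def select(candidates, selectors):
    """First selector (in processing order) that matches a candidate
    decides include/omit; unmatched candidates are dropped."""
    result = []
    for obj in candidates:
        name, lib, otype = obj
        for s_name, s_lib, s_type, action in selectors:
            if (matches(s_name, name) and matches(s_lib, lib)
                    and s_type in ("*ALL", otype)):
                if action == "*INCLUDE":
                    result.append(obj)
                break
    return result

# Table 52: objects defined to data group DG1.
defined = [("A", "LIBX", "*OUTQ"), ("AB", "LIBX", "*SBSD"),
           ("DEF", "LIBX", "*JOBQ")]
# Table 53: selectors on the CMPOBJA object selection parameter,
# listed in the order processed.
cmpobja = [("*ALL", "LIBX", "*OUTQ", "*INCLUDE"),
           ("*ALL", "LIBX", "*SBSD", "*INCLUDE"),
           ("*ALL", "LIBX", "*JOBQ", "*OMIT")]

compared = select(defined, cmpobja)
assert compared == [("A", "LIBX", "*OUTQ"), ("AB", "LIBX", "*SBSD")]  # Table 54
```

The job queue DEF is matched by the *JOBQ selector and omitted, leaving only the output queue and subsystem description to be compared.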
In this example, the CMPOBJA command is used to compare a set of objects. The input source is a selection parameter. No data group is specified.
408
The data in the following tables show how candidate objects would be processed in order to achieve a resultant list of objects. Table 55 lists all the candidate objects on your system.
Table 55. Candidate objects on system

Object  Library  Object type
ABC     LIBX     *FILE
AB      LIBX     *SBSD
A       LIBX     *OUTQ
DEFG    LIBX     *PGM
DEF     LIBX     *PGM
DE      LIBX     *DTAARA
D       LIBX     *CMD
Table 56 represents the object selectors chosen on the object selection parameter. The sequence column identifies the order in which object selectors were entered. The object selectors serve as filters to the candidate objects listed in Table 55. The last object selector entered on the command is the first one used when determining whether or not an object matches a selector. Thus, generic object selectors with the broadest scope, such as A*, should be specified ahead of more specific generic entries, such as ABC*. Specific entries should be specified last.
Table 56. Object selectors entered on CMPOBJA selection parameter

Sequence Entered  Object  Library  Object type  Include or omit  Sequence Processed
1                 A*      LIBX     *ALL         *INCLUDE         5
2                 D*      LIBX     *ALL         *INCLUDE         4
3                 ABC*    LIBX     *ALL         *OMIT            3
4                 *ALL    LIBX     *PGM         *OMIT            2
5                 DEFG    LIBX     *PGM         *INCLUDE         1
409
Table 57. Candidate objects selected by object selectors

Object  Library  Object type  Include or omit  Selected candidate objects  Sequence Processed
ABC*    LIBX     *ALL         *OMIT            ABC                         3
D*      LIBX     *ALL         *INCLUDE         D, DE                       2
A*      LIBX     *ALL         *INCLUDE         A, AB                       1
Table 58 represents the included objects from Table 57. This filtered set of candidate objects is the resultant list of objects to be processed by the CMPOBJA command.
Table 58. Resultant list of objects to be processed

Object  Library  Object type
A       LIBX     *OUTQ
AB      LIBX     *SBSD
D       LIBX     *CMD
DE      LIBX     *DTAARA
DEFG    LIBX     *PGM
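The precedence walk-through in Tables 55 through 58 can be sketched as follows (an illustrative Python model, not the actual MIMIX code): selectors are checked in reverse entry order, and the first matching selector decides whether a candidate is included or omitted.

```python
def matches(sel: str, name: str) -> bool:
    # Specific, generic (trailing *), or *ALL name matching.
    if sel == "*ALL":
        return True
    if sel.endswith("*"):
        return name.startswith(sel[:-1])
    return name == sel

candidates = [  # Table 55: (object, object type), all in library LIBX
    ("ABC", "*FILE"), ("AB", "*SBSD"), ("A", "*OUTQ"), ("DEFG", "*PGM"),
    ("DEF", "*PGM"), ("DE", "*DTAARA"), ("D", "*CMD"),
]
selectors = [  # Table 56: (object, object type, action), in sequence entered
    ("A*", "*ALL", "*INCLUDE"), ("D*", "*ALL", "*INCLUDE"),
    ("ABC*", "*ALL", "*OMIT"), ("*ALL", "*PGM", "*OMIT"),
    ("DEFG", "*PGM", "*INCLUDE"),
]

result = []
for name, otype in candidates:
    for s_name, s_type, action in reversed(selectors):  # last entered, first used
        if matches(s_name, name) and s_type in ("*ALL", otype):
            if action == "*INCLUDE":
                result.append((name, otype))
            break  # the first matching selector decides; stop searching

# Matches the resultant list in Table 58: A, AB, D, DE, DEFG.
assert sorted(n for n, _ in result) == ["A", "AB", "D", "DE", "DEFG"]
```

Note how DEF is caught by the broad *ALL/*PGM omit before the D* include is ever consulted, while DEFG escapes that omit because the more specific DEFG selector, entered last, is checked first.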
Example subtree
In the following graphics, the shaded area shows the objects identified by the combination of the Object path name and Subtree elements of the Object parameter for an IFS command. Circled objects represent the final list of objects selected for processing.
410
Figure 26 illustrates a path name value of /corporate/accounting, a subtree specification of *ALL, a pattern value of *ALL, and an object type of *ALL. The candidate objects selected include /corporate/accounting and all descendants.
Figure 26. Directory of /corporate/accounting/
Figure 27 shows a path name of /corporate/accounting/*, a subtree specification of *NONE, a pattern value of *ALL, and an object type of *ALL. In this case, no
411
additional filtering is performed on the objects identified by the path and subtree. The candidate objects selected consist of the specified objects only.
Figure 27. Subtree *NONE for /corporate/accounting/*
412
Figure 28 displays a path name of /corporate/accounting/*, a subtree specification of *ALL, a pattern value of *ALL, and an object type of *ALL. All descendants of /corporate/accounting/* are selected.
Figure 28. Subtree *ALL for /corporate/accounting/*
413
Figure 29 is a subset of Figure 28. Figure 29 shows a path name of /corporate/accounting, a subtree specification of *NONE, a pattern value of *ALL, and an object type of *ALL, where only the specified directory is selected.
Figure 29. Subtree *NONE for /corporate/accounting
414
In the scenario shown in Figure 30, only those candidate objects which match the generic pattern value $* ($123, $236, and $895) are selected for processing.
Figure 30. Pattern $* for /corporate/accounting
415
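The subtree and pattern behavior illustrated in Figures 26 through 30 can be modeled as a short sketch. The directory tree below is hypothetical: only the $-named objects ($123, $236, $895) come from the text; the other names are assumed for illustration.

```python
# Hypothetical directory tree loosely modeled on Figures 26-30.
tree = {
    "/corporate": ["/corporate/accounting"],
    "/corporate/accounting": ["/corporate/accounting/$123",
                              "/corporate/accounting/$236",
                              "/corporate/accounting/$895",
                              "/corporate/accounting/reports"],
    "/corporate/accounting/reports": ["/corporate/accounting/reports/q1"],
}

def descendants(path):
    """All paths below `path`, depth-first."""
    out = []
    for child in tree.get(path, []):
        out.append(child)
        out.extend(descendants(child))
    return out

def select(path, subtree, pattern):
    # Subtree *ALL expands the named path to include every descendant;
    # the pattern then filters on the last component of each path.
    pool = [path] + (descendants(path) if subtree == "*ALL" else [])
    def last(p):
        return p.rsplit("/", 1)[-1]
    def hit(p):
        if pattern == "*ALL":
            return True
        if pattern.endswith("*"):
            return last(p).startswith(pattern[:-1])
        return last(p) == pattern
    return [p for p in pool if hit(p)]

# Figure 29: subtree *NONE selects only the named directory.
assert select("/corporate/accounting", "*NONE", "*ALL") == ["/corporate/accounting"]
# Figure 30: pattern $* with subtree *ALL selects only the $-named objects.
assert select("/corporate/accounting", "*ALL", "$*") == [
    "/corporate/accounting/$123", "/corporate/accounting/$236",
    "/corporate/accounting/$895"]
```

The sketch also shows why a pattern of $* with subtree *NONE selects nothing: the only candidate is the named path itself, whose last component does not begin with $.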
Figure 31 illustrates a directory with a subtree that contains IFS objects. The shaded areas are the file systems. Table 59 contains examples showing which objects would be selected with the path names specified and a subtree specification of *ALL.
Figure 31. Directory with a subtree containing IFS objects.
Table 59. Examples of specified paths and objects selected for Figure 31

File system                                Objects selected
Root file system                           /qsyabc
Root file system in independent ASP PARIS  /PARIS/qsyabc
Root file system                           None
416
417
Spooled files
The spooled output is generated when a value of *PRINT is specified on the Output parameter. The spooled output consists of four main sections: the input or header section, the object selection list section, the differences section, and the summary section.
First, the header section of the spooled report includes all of the input values specified on the command, including the data group value (DGDFN), comparison level (CMPLVL), report type (RPTTYPE), attributes to compare (CMPATR), actual attributes compared, number of files, objects, IFS objects, or DLOs compared, and number of detected differences. It also includes a legend that describes the special values used throughout the report.
418
The second section of the report is the object selection list. This section lists all of the object selection entries specified on the comparison command. Similar to the header section, it provides details on the input values specified on the command.
The detail section is the third section of the report, and provides details on the objects and attributes compared. The level of detail in this section is determined by the report type specified on the command. A report type value of *ALL lists all objects compared, beginning with a summary status that indicates whether differences were detected. The summary row indicates the overall status of the object compared. Following the summary row, each attribute compared is listed, along with the status of the attribute and the attribute value. In the event the attribute compared is an indicator, the special value *INDONLY is displayed in the value columns. A report type value of *DIF lists details only for those objects with detected attribute differences. A value of *SUMMARY does not include the detail section for any object.
The fourth section of the report is the summary, which provides a one-row summary for each object compared. Each row includes an indicator of whether attribute differences were detected.
Outfiles
The output file is generated when a value of *OUTFILE is specified on the Output parameter. Similar to the spooled output, the level of output in the output file is dependent on the report type value specified on the Report type parameter. Each command is shipped with an outfile template that uses a normalized database to deliver a self-defined record, or row, for every attribute you compare. Key information, including the attribute type, data group name, timestamp, command name, and system 1 and system 2 values, helps define each row. A summary row precedes the attribute rows. The normalized database feature ensures that new object attributes can be added to the audit capabilities without disruption to current automation processing. The template files for the various commands are located in the MIMIX product library.
419
Chapter 18
Comparing attributes
This chapter describes the commands that compare attributes: Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA). These commands are designed to audit the attributes, or characteristics, of the objects within your environment and report on the status of replicated objects. Together, these commands are collectively referred to as the compare attributes commands.
You may already be using the compare attributes commands when they are called by audit functions within MIMIX AutoGuard. When used in combination with the automatic recovery features in MIMIX AutoGuard, the compare attributes commands provide robust functionality to help you determine whether your system is in a state to ensure a successful rollover for planned events or failover for unplanned events.
The topics in this chapter include:
About the Compare Attributes commands on page 420 describes the unique features of the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA, and CMPDLOA).
Comparing file and member attributes on page 425 includes the procedure to compare the attributes of files and members.
Comparing object attributes on page 428 includes the procedure to compare object attributes.
Comparing IFS object attributes on page 431 includes the procedure to compare IFS object attributes.
Comparing DLO attributes on page 434 includes the procedure to compare DLO attributes.
420
provides you with assurance that files are most likely synchronized. The CMPOBJA command supports many attributes important to other library-based objects, including extended attributes. Extended attributes are attributes unique to given objects, such as auto-start job entries for subsystems. The CMPIFSA and CMPDLOA commands provide enhanced audit capability for IFS objects and DLOs, respectively.
Unique parameters
The following parameters for object selection are unique to the compare attributes commands and allow you to specify an additional level of detail when comparing objects or files.
Unique File and Object elements: The following are unique elements on the File parameter (CMPFILA command) and Objects parameter (CMPOBJA command):
Member: On the CMPFILA command, the value specified on the Member element is only used when *MBR is also specified on the Comparison level parameter.
Object attribute: The Object attribute element enables you to select particular characteristics of an object or file, and provides a level of filtering. For details, see CMPFILA supported object attributes for *FILE objects on page 423 and CMPOBJA supported object attributes for *FILE objects on page 423.
System 2: The System 2 parameter identifies the remote system name, and represents the system to which objects on the local system are compared. This parameter is ignored when a data group is specified, since the system 2
421
information is derived from the data group. A value is required if no data group is specified.
Comparison level (CMPFILA only): The Comparison level parameter indicates whether attributes are compared at the file level or at the member level.
System 1 ASP group and System 2 ASP group (CMPFILA and CMPOBJA only): The System 1 ASP group and System 2 ASP group parameters identify the name of the auxiliary storage pool (ASP) group where objects configured for replication may reside. The ASP group name is the name of the primary ASP device within the ASP group. These parameters are ignored when a data group is specified.
All comparison attributes supported by a specific compare attribute command may not be applicable to all object types supported by the command. For example, CMPOBJA supports a large number of object types and related comparison attributes. In many cases, a specific comparison attribute is only valid for a particular object type. Comparison attributes not supported by a given object type are ignored. For example, auto-start job entries is a valid comparison attribute only for object types of subsystem description (*SBSD). For all other object types selected as a result of running the
422
report, the auto-start job entry attribute is ignored. If a data group is specified on a compare request, configuration data is used when comparing objects that are identified for replication through the system journal. If an object's configured object auditing value (OBJAUD) is *NONE, its attribute changes are not replicated. When differences are detected on attributes of such an object, they are reported as *EC (equal configuration) instead of being reported as *NE (not equal). For *FILE objects configured for replication through the system journal and configured to omit T-ZC journal entries, also see Omit content (OMTDTA) and comparison commands on page 389.
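The reporting rule in the preceding paragraph can be restated as a tiny sketch (our own restatement, not product code). The OBJAUD values other than *NONE are standard IBM i object auditing values, shown only for illustration:

```python
def difference_status(objaud: str) -> str:
    """Status reported when a difference is detected on an attribute of an
    object identified for replication through the system journal: if the
    configured object auditing value is *NONE, attribute changes are not
    replicated, so the difference is reported as *EC (equal configuration)
    rather than *NE (not equal)."""
    return "*EC" if objaud == "*NONE" else "*NE"

assert difference_status("*NONE") == "*EC"
assert difference_status("*CHANGE") == "*NE"
```

This distinction keeps audits from flagging, as errors, differences that the configuration itself makes inevitable.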
423
424
4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the file and library to which files on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data group is specified on the Data group definition prompts.
425
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to a data group. If necessary, specify the name of the remote system to which files on the local system are compared.
6. At the Comparison level prompt, accept the default to compare files at a file level only. Otherwise, specify *MBR to compare files at a member level.
Note: If *FILE is specified, the Member prompt is ignored (see Step 4b).
7. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined set of attributes based on whether the comparison is at a file or member level or press F4 to see a valid list of attributes.
8. At the Attributes to omit prompt, accept *NONE to compare all attributes specified in Step 7, or enter the attributes to exclude from the comparison. Press F4 to see a valid list of attributes.
9. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
10. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
11. At the Report type prompt, specify the level of detail for the output report.
12. At the Output prompt, do one of the following:
To generate print output, accept *PRINT and press Enter.
To generate both print output and an outfile, specify *BOTH and press Enter. Skip to Step 14.
To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 14.
13. The User data prompt appears if you selected *PRINT or *BOTH in Step 12. Accept the default to use the command name to identify the spooled output or specify a unique name. Skip to Step 18.
14. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)
15. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.
16. At the Maximum replication lag prompt, specify the maximum amount of time between when a file in the data group changes and when replication of the change is expected to be complete, or accept *DFT to use the default maximum
426
time of 300 seconds (5 minutes). You can also specify *NONE, which indicates that comparisons should occur without consideration for replication in progress.
Note: This parameter is only valid when a data group is specified in Step 3.
17. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.
18. At the Submit to batch prompt, do one of the following:
If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
To submit the job for batch processing, accept the default. Press Enter to continue with the next step.
19. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
20. At the Job name prompt, specify *CMD to use the command name to identify the job or specify a simple name.
21. To start the comparison, press Enter.
427
4. At the Object prompts, you can specify elements for one or more object selectors that either identify objects to compare or that act as filters to the objects defined to the data group indicated in Step 3. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt. For each selector, do the following:
a. At the Object and library prompts, specify the name or the generic value you want.
b. At the Object type prompt, accept *ALL or specify a specific object type to compare.
c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the object and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the object and library to which objects on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data
428
group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing objects not defined to a data group. If necessary, specify the name of the remote system to which objects on the local system are compared.
6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined set of attributes or press F4 to see a valid list of attributes.
7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see a valid list of attributes.
8. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
9. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
10. At the Report type prompt, specify the level of detail for the output report.
11. At the Output prompt, do one of the following:
To generate print output, accept *PRINT and press Enter.
To generate both print output and an outfile, specify *BOTH and press Enter. Skip to Step 13.
To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 13.
12. The User data prompt appears if you selected *PRINT or *BOTH in Step 11. Accept the default to use the command name to identify the spooled output or specify a unique name. Skip to Step 17.
13. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)
14. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.
15. At the Maximum replication lag prompt, specify the maximum amount of time between when an object in the data group changes and when replication of the change is expected to be complete, or accept *DFT to use the default maximum time of 300 seconds (5 minutes). You can also specify *NONE, which indicates that comparisons should occur without consideration for replication in progress.
Note: This parameter is only valid when a data group is specified in Step 3.
429
16. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.
17. At the Submit to batch prompt, do one of the following:
If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
To submit the job for batch processing, accept the default. Press Enter and continue with the next step.
18. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
19. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
20. To start the comparison, press Enter.
430
4. At the IFS objects prompts, you can specify elements for one or more object selectors that either identify IFS objects to compare or that act as filters to the IFS objects defined to the data group indicated in Step 3. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt. For each selector, do the following:
a. At the Object path name prompt, accept *ALL or specify the name or the generic value you want.
b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the scope of IFS objects to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the IFS object path name.
Note: The *ALL default is not valid if a data group is specified on the Data group definition prompts.
d. At the Object type prompt, accept *ALL or specify a specific IFS object type to compare.
e. At the Include or omit prompt, specify the value you want.
431
f. At the System 2 object path name and System 2 name pattern prompts, if the IFS object path name and name pattern on system 2 are equal to system 1, accept the defaults. Otherwise, specify the path name and pattern to which IFS objects on the local system are compared.
Note: The System 2 object path name and System 2 name pattern values are ignored if a data group is specified on the Data group definition prompts.
g. Press Enter.
5. The System 2 parameter prompt appears if you are comparing IFS objects not defined to a data group. If necessary, specify the name of the remote system to which IFS objects on the local system are compared.
6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined set of attributes or press F4 to see a valid list of attributes.
7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see a valid list of attributes.
8. At the Report type prompt, specify the level of detail for the output report.
9. At the Output prompt, do one of the following:
To generate print output, accept *PRINT and press Enter.
To generate both print output and an outfile, specify *BOTH and press Enter. Skip to Step 11.
To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 11.
10. The User data prompt appears if you selected *PRINT or *BOTH in Step 9. Accept the default to use the command name to identify the spooled output or specify a unique name. Skip to Step 15.
11. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)
12. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.
13. At the Maximum replication lag prompt, specify the maximum amount of time between when an IFS object in the data group changes and when replication of the change is expected to be complete, or accept *DFT to use the default maximum time of 300 seconds (5 minutes). You can also specify *NONE, which indicates that comparisons should occur without consideration for replication in progress.
Note: This parameter is only valid when a data group is specified in Step 3.
14. At the Object difference messages prompt, specify whether you want detail messages placed in
432
the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile. 15. At the Submit to batch prompt, do one of the following: If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison. To submit the job for batch processing, accept the default. Press Enter continue with the next step.
16. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
17. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
18. To start the comparison, press Enter.
4. At the Document library objects prompts, you can specify elements for one or more object selectors that either identify DLOs to compare or that act as filters to the DLOs defined to the data group indicated in Step 3. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt. For each selector, do the following:
a. At the DLO path name prompt, accept *ALL or specify the name or the generic value you want.
b. At the Folder subtree prompt, accept *NONE or specify *ALL to define the scope of DLOs to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the DLO path name.
Note: The *ALL default is not valid if a data group is specified on the Data group definition prompts.
d. At the DLO type prompt, accept *ALL or specify a specific DLO type to compare.
e. At the Owner prompt, accept *ALL or specify the owner of the DLO.
f. At the Include or omit prompt, specify the value you want.
g. At the System 2 DLO path name and System 2 DLO name pattern prompts, if the DLO path name and name pattern on system 2 are equal to system 1, accept the defaults. Otherwise, specify the path name and name pattern to which DLOs on the local system are compared.
Note: The System 2 DLO path name and System 2 DLO name pattern values are ignored if a data group is specified on the Data group definition prompts.
h. Press Enter.
5. The System 2 parameter prompt appears if you are comparing DLOs not defined to a data group. If necessary, specify the name of the remote system to which DLOs on the local system are compared.
6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined set of attributes or press F4 to see a valid list of attributes.
7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see a valid list of attributes.
8. At the Report type prompt, specify the level of detail for the output report.
9. At the Output prompt, do one of the following:
• To generate print output, accept *PRINT and press Enter.
• To generate both print output and an outfile, specify *BOTH and press Enter. Skip to Step 11.
• To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 11.
10. The User data prompt appears if you selected *PRINT or *BOTH in Step 9. Accept the default to use the command name to identify the spooled output or specify a unique name. Skip to Step 15.
11. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)
12. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.
13. At the Maximum replication lag prompt, specify the maximum amount of time between when a DLO in the data group changes and when replication of the change is expected to be complete, or accept *DFT to use the default maximum time of 300 seconds (5 minutes). You can also specify *NONE, which indicates that comparisons should occur without consideration for replication in progress.
Note: This parameter is only valid when a data group is specified in Step 3.
14. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.
15. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter to continue with the next step.
16. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
17. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
18. To start the comparison, press Enter.
Chapter 19
command compares the number of current records (*CURRDS) and the number of deleted records (*NBRDLTRCDS) for members of physical files that are defined for replication by an active data group. In resource-constrained environments, this capability provides a less-intensive means to gauge whether files are likely to be synchronized.
Note: Equal record counts suggest but do not guarantee that members are synchronized. To check for file data differences, use the Compare File Data (CMPFILDTA) command. To check for attribute differences, use the Compare File Attributes (CMPFILA) command.
Members to be processed must be defined to a data group that permits replication from a user journal. Journaling is required on the source system. User journal replication processes must be active when this command is used. Members on both systems can be actively modified by applications and by MIMIX apply processes while this command is running.
For information about the results of a comparison, see What differences were detected by #MBRRCDCNT on page 583.
The #MBRRCDCNT audit calls the CMPRCDCNT command during its compare phase. Unlike other audits, the #MBRRCDCNT audit does not have an associated recovery phase. Differences detected by this audit appear as not recovered in the Audit Summary user interfaces. Any repairs must be undertaken manually, in the following ways:
• In MIMIX Availability Manager, repair actions are available for specific errors when viewing the output file for the audit.
• Run the #FILDTA audit for the data group to detect and correct problems.
• Run the Synchronize DG File Entry (SYNCDGFE) command to correct problems.
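The record-count check that CMPRCDCNT performs can be modeled as a simple per-member comparison. The function and data below are an illustrative sketch, not the MIMIX implementation, which reads the counts from the file descriptions on each system.

```python
def compare_record_counts(members_sys1, members_sys2):
    """Model of a record-count audit: for each member, compare the
    (current records, deleted records) pair on the two systems.
    Equal counts suggest, but do not guarantee, synchronization."""
    differences = []
    for name, counts1 in members_sys1.items():
        if members_sys2.get(name) != counts1:
            differences.append(name)
    return differences

# Hypothetical data: member name -> (current records, deleted records)
sys1 = {"MBRA": (1000, 5), "MBRB": (250, 0)}
sys2 = {"MBRA": (1000, 5), "MBRB": (249, 1)}
print(compare_record_counts(sys1, sys2))  # ['MBRB']
```

A member flagged here would then be examined with the #FILDTA audit or SYNCDGFE, as described above.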
3. At the File prompts, you can specify elements for one or more object selectors to act as filters to the files defined to the data group indicated in Step 2. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.
c. At the Include or omit prompt, specify the value you want.
4. At the Report type prompt, do one of the following:
• If you want all compared objects to be included in the report, accept the default.
• If you only want objects with detected differences to be included in the report, specify *DIF.
5. At the Output prompt, do one of the following:
• To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to Step 9.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.
6. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)
7. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.
8. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.
9. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter to continue with the next step.
10. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
11. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
12. To start the comparison, press Enter.
Repairing data
You can optionally choose to have the CMPFILDTA command repair differences it detects in member data between systems. When files are not synchronized, the CMPFILDTA command provides the ability to resynchronize the file at the record level by sending only the data for the incorrect member to the target system. (In contrast, the Synchronize DG File Entry (SYNCDGFE) command would resynchronize the file by transferring all data for the file from the source system to the target system.)
When a member held due to error is being processed by the CMPFILDTA command, the entry transitions from *HLDERR status to *CMPRLS to *CMPACT. The member then changes to *ACTIVE status if compare and repair processing is successful. In the event that compare and repair processing is unsuccessful, the member-level entry is set back to *HLDERR.
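The status flow described above can be sketched as a small state machine. The transition table below is a simplified model of the documented statuses, not MIMIX internals; the event names are illustrative.

```python
# Simplified model of the member status flow during CMPFILDTA
# compare-and-repair processing of a member held due to error.
TRANSITIONS = {
    ("*HLDERR", "start_compare"): "*CMPRLS",   # released for compare
    ("*CMPRLS", "compare_active"): "*CMPACT",  # compare/repair in progress
    ("*CMPACT", "repair_ok"): "*ACTIVE",       # success: member active again
    ("*CMPACT", "repair_failed"): "*HLDERR",   # failure: held again
}

def next_status(status, event):
    return TRANSITIONS[(status, event)]

status = "*HLDERR"
for event in ("start_compare", "compare_active", "repair_ok"):
    status = next_status(status, event)
print(status)  # *ACTIVE
```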
Additional features
The CMPFILDTA command incorporates many other features to increase performance and efficiency. Subsetting and advanced subsetting options provide a significant degree of flexibility for performing periodic checks of a portion of the data within a file. Parallel processing uses multi-threaded jobs to break up file processing into smaller groups for increased throughput. Rather than having a single-threaded job on each system, multiple thread groups break up the file into smaller units of work. This technology can benefit environments with multiple processors as well as systems with a single processor.
Keyed replication - Although you can run the CMPFILDTA command on keyed files, the command only supports files configured for *POSITIONAL replication. The CMPFILDTA command cannot compare files configured for *KEYED replication.
SNA environments - CMPFILDTA requires a TCP/IP transfer definition; you cannot use SNA. Your environment can be configured for SNA, but then you must override the transfer definition on the CMPFILDTA command to one that uses TCP/IP. For more information, see System-level communications on page 159.
Apply threshold and apply backlog - Do not compare data using active processing technology if the apply process is 180 seconds or more behind, or has exceeded a threshold limit.
Security considerations
You should take extra precautions when using CMPFILDTA's repair function, as it is capable of accessing and modifying data on your system. To compare file data, you must have read access on both systems. When using the repair function, write access on the system to be repaired may also be necessary when active technology is not used.
CMPFILDTA builds upon the RUNCMD support in MIMIX. CMPFILDTA starts a remote process using RUNCMD, which requires two conditions to be true. First, the user profile of the job that is invoking CMPFILDTA must exist on the remote system and have the same password on the remote system as it does on the local system. Second, the user profile must have appropriate read or update access to the members to be compared or repaired. If active processing and repair is requested, only read access is needed. In this case, the repair processing would be done by the database apply process.
If one or more members differ in the manner described above, a distinct escape message is issued. If you use CMPFILDTA in a CL program, you may wish to monitor these escape messages specifically.
Table 61. CMPFILDTA and trigger support

Trigger type                Trigger activation  CMPFILDTA Repair on         CMPFILDTA Process      CMPFILDTA
                            group (ACTGRP)      system (REPAIR)             while active (ACTIVE)  support
Read                        *NEW                Any value                   Any value              Not supported
Read                        NAMED or *CALLER    Any value                   Any value              Supported
Update, insert, and delete  *NEW                *NONE                       Any value              Supported
Update, insert, and delete  *NEW                Any value other than *NONE  *NO                    Not supported
Update, insert, and delete  *NEW                Any value other than *NONE  *YES                   Supported
Update, insert, and delete  NAMED or *CALLER    Any value                   Any value              Supported
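Table 61 can be read as a small decision function. The sketch below encodes the table for checking whether a given trigger configuration is supported; the function and argument names are illustrative, not part of any MIMIX API.

```python
def cmpfildta_trigger_supported(trigger_type, actgrp, repair, active):
    """Encode Table 61: is CMPFILDTA supported for this trigger setup?
    trigger_type: 'read' or 'update/insert/delete'
    actgrp:       '*NEW', or a named/*CALLER activation group
    repair:       REPAIR parameter value, e.g. '*NONE' or a system
    active:       ACTIVE parameter value, '*YES' or '*NO'"""
    if actgrp != "*NEW":
        return True               # NAMED or *CALLER: always supported
    if trigger_type == "read":
        return False              # read trigger in *NEW: not supported
    if repair == "*NONE":
        return True               # compare only: supported
    return active == "*YES"       # repair in *NEW requires active processing

print(cmpfildta_trigger_supported("read", "*NEW", "*NONE", "*NO"))  # False
```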
Job priority
When run, the remote CMPFILDTA job uses the run priority of the local CMPFILDTA job. However, the run priority of either CMPFILDTA job is superseded if a
CMPFILDTA class object (*CLS) exists in the installation library of the system on which the job is running. Note: Use the Change Job (CHGJOB) command on the local system to modify the run priority of the local job. CMPFILDTA uses the priority of the local job to set the priority of the remote job, so that both jobs have the same run priority. To set the remote job to run at a different priority than the local job, use the Create Class (CRTCLS) command to create a *CLS object for the job you want to change.
Detailed information about object selection is available in Object selection for Compare and Synchronize commands on page 399.
Table 62. CMPFILDTA supported extended attributes for *FILE objects

Attribute   Description
PF          Physical file types, including PF, PF-SRC, and PF-DTA
PF-DTA      Files of type PF-DTA
PF-SRC      Files of type PF-SRC
PF38        Files of type PF38, including PF38, PF38-SRC, and PF38-DTA
PF38-DTA    Files of type PF38-DTA
PF38-SRC    Files of type PF38-SRC
When members in *HLDERR status are processed, the CMPFILDTA command works cooperatively with the database apply (DBAPY) process to compare and repair members held due to error and, when possible, restore them to an active state.
Valid values for the File entry status parameter are *ALL, *ACTIVE, and *HLDERR. A data group must also be specified on the command or the parameter is ignored. The default value, *ALL, indicates that all supported entry statuses (*ACTIVE and *HLDERR) are included in compare and repair processing. The value *ACTIVE processes only those members that are active¹. When *HLDERR is specified, only member-level entries being held due to error are selected for processing. To repair members held due to error using *ALL or *HLDERR, you must also specify that the repair be performed on the target system and request that active processing be used.
System 1 ASP group and System 2 ASP group: The System 1 ASP group and System 2 ASP group parameters identify the name of the auxiliary storage pool (ASP) group where objects configured for replication may reside. The ASP group name is the name of the primary ASP device within the ASP group. This parameter is ignored when a data group is specified. You must be running on OS V5R2 or greater to use these parameters.
Subsetting option: The Subsetting option parameter provides a robust means by which to compare a subset of the data within members. In some instances, the value you select will determine which additional elements are used when comparing data. Several options are available on this parameter: *ALL, *ADVANCED, *ENDDTA, or *RANGE. If *ALL is specified, all data within all selected files is compared, and no additional subsetting is performed. The other options compare only a subset of the data. The following are common scenarios in which comparing a subset of your data is preferable:
• If you only need to check a specific range of records, use *RANGE.
• When a member, such as a history file, is primarily modified with insert operations, only recently inserted data needs to be compared. In this situation, use *ENDDTA.
• If time does not permit a full comparison, you can compare a random sample using *ADVANCED.
• If you do not have time to perform a full comparison all at once but you want all data to be compared over a number of days, use *ADVANCED.
*RANGE indicates that the Subset range parameter will be used to specify the subset of records to be compared. For more information, see the Subset range section. If you select *ENDDTA, the Records at end of file parameter specifies how many trailing records are compared. This value allows you to compare a selected number of records at the end of all selected members. For more information, see the section titled Records at end of file. Advanced subsetting can be used to audit your entire database over a number of days or to request that a random subset of records be compared. To specify
¹ The File entry status parameter was introduced in V4R4 SPC05SP2. If you want to preserve previous behavior, specify STATUS(*ACTIVE).
advanced subsetting, select *ADVANCED. For more information, see Advanced subset options for CMPFILDTA on page 451.
Subset range: Subset range is enabled when *RANGE is specified on the Subsetting option parameter, as described in the Subsetting option section. Two elements are included, First record and Last record. These elements allow you to specify a range of records to compare. If more than one member is selected for processing, all members are compared using the same relative record number range. Thus, using the range specification is usually only useful for a single member or a set of members with related records. The First record element can be specified as *FIRST or as a relative record number. In the case of *FIRST, records in the member are compared beginning with the first record. The Last record element can be specified as *LAST or as a relative record number. In the case of *LAST, records in the member are compared up to, and including, the last record.
Advanced subset options: The Advanced subset options (ADVSUBSET) parameter provides the ability to use sophisticated comparison techniques. For detailed information and examples, see Advanced subset options for CMPFILDTA on page 451.
Records at end of file: The Records at end of file (ENDDTA) parameter allows you to compare recently inserted data without affecting the other subsetting criteria. If you specified *ENDDTA in the Subsetting option parameter, as indicated in the Subsetting option section, only those records specified in the Records at end of file parameter will be processed. This parameter is also valid if values other than *ENDDTA were specified in the Subsetting option. In this case, both records at the end of the file as well as any additional subsetting options factor into the compare. If some records are selected both by the ENDDTA parameter and by another subsetting option, those records are only processed once. The Records at end of file parameter can be specified as *NONE or number-of-records.
When *NONE is specified, records at the end of the members are not compared unless they are selected by other subset criteria. To compare particular records at the end of each member, you must specify the number of records. The ENDDTA value is always applied to the smaller of the System 1 and System 2 members, and continues through the end of the larger member. Let us assume that you specify 200 for the ENDDTA value. If one system has 1000 records while the other has 1100, relative records 801-1100 would be checked. The relative record numbers of the last 200 records of the smaller file are compared as well as the additional 100 relative record numbers due to the difference in member size. Using the Records at end of file parameter in daily processing can keep you from missing records that were inserted recently.
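The range arithmetic in the ENDDTA example above can be sketched as follows; the helper is illustrative, not a MIMIX API.

```python
def enddta_range(records_sys1, records_sys2, enddta):
    """Compute the relative record number range compared for ENDDTA(n):
    the last n records of the smaller member, continuing through the
    end of the larger member."""
    smaller = min(records_sys1, records_sys2)
    larger = max(records_sys1, records_sys2)
    first = max(smaller - enddta + 1, 1)
    return first, larger

print(enddta_range(1000, 1100, 200))  # (801, 1100)
```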
Transfer definition: The default for the Transfer definition parameter is *DFT. If a data group was specified, the default uses the transfer definition associated with the data group. If no data group was specified, the transfer definition associated with system 2 is used. The CMPFILDTA command requires that you have a TCP/IP transfer definition for communication with the remote system. If your data group is configured for SNA, override the SNA configuration by specifying the name of the transfer definition on the command.
Number of thread groups: The Number of thread groups parameter indicates how many thread groups should be used to perform the comparison. You can specify from 1 to 100 thread groups. When using this parameter, it is important to balance the time required for processing against the available resources. If you increase the number of thread groups in order to reduce processing time, for example, you also increase processor and memory use. The default, *CALC, will determine the number of thread groups automatically. To maximize processing efficiency, the value *CALC does not calculate more than 25 thread groups. The actual number of threads used in the comparison is based on the result of the formula 2x + 1, where x is the value specified or the value calculated internally as the result of specifying *CALC. When *CALC is specified, the CMPFILDTA command displays a message showing the value calculated as the number of thread groups.
Note: Thread groups are created for primary compare processing only. During setup, multiple threads may be utilized to improve performance, depending on the number of members selected for processing. The number of threads used during setup will not exceed the total number of threads used for primary compare processing. During active processing, only one thread will be used.
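The thread-count formula can be expressed directly. The cap of 25 thread groups for *CALC comes from the text above; the helper name is illustrative.

```python
def compare_thread_count(thread_groups):
    """Total threads used for primary compare processing: 2x + 1,
    where x is the number of thread groups (1 to 100)."""
    if not 1 <= thread_groups <= 100:
        raise ValueError("thread groups must be 1 to 100")
    return 2 * thread_groups + 1

print(compare_thread_count(25))   # 51: the most threads *CALC will produce
print(compare_thread_count(100))  # 201: maximum with an explicit value
```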
Wait time (seconds): The Wait time (seconds) value is only valid when active processing is in effect and specifies the amount of time to wait for active processing to complete. You can specify from 0 to 3600 seconds, or the default *NOMAX. If active processing is enabled and a wait time is specified, CMPFILDTA processing waits the specified time for all pending compare operations processed through the MIMIX replication path to complete. In most cases, the *NOMAX default is highly recommended.
DB apply threshold: The DB apply threshold parameter is only valid during active processing and requires that a data group be specified. The parameter specifies what action CMPFILDTA should take if the database apply session backlog exceeds the threshold warning value configured for the database apply process. The default value *END stops the requested compare and repair action when the database apply threshold is reached; any repair actions that have not been completed are lost. The value *NOMAX allows the compare and repair action to continue even when the database apply threshold has been reached. Continuing processing when the apply process has a large backlog may adversely affect performance of the CMPFILDTA job and its ability to compare a file with an excessive number of outstanding entries. Therefore, *NOMAX should only be used in exceptional circumstances.
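The DB apply threshold behavior amounts to a simple policy check. This sketch models the documented *END and *NOMAX semantics with illustrative names; the real decision is made inside CMPFILDTA.

```python
def db_apply_threshold_action(backlog, threshold, policy="*END"):
    """Decide whether compare/repair continues when the database apply
    backlog is checked. policy is the DB apply threshold value."""
    if backlog <= threshold:
        return "continue"
    if policy == "*END":
        return "end"        # stop; uncompleted repair actions are lost
    return "continue"       # *NOMAX: continue despite the backlog

print(db_apply_threshold_action(5000, 1000))            # end
print(db_apply_threshold_action(5000, 1000, "*NOMAX"))  # continue
```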
Number of subsets: The first issue to consider when performing advanced subsetting is how many subsets or bins to establish. The Number of subsets element is the number of approximately equal-sized bins to define. These bins are numbered from 1 up to the number specified (N). You must specify at least one bin. Each record is assigned to one of these bins. The Interleave element specifies the manner in which records are assigned to a bin.
Interleave: The Interleave factor specifies the mapping between the relative record number and the bin number. There are two approaches that can be used.
If you specify *NONE, records in each member are divided on a percentage basis. For example:
Table 63. Interleave *NONE

                             Member A on Monday   Member A on Tuesday
Total records in member:     30                   45
Number of subsets (bins):    3                    3
Interleave:                  *NONE                *NONE
Records assigned to bin 1:   1-10                 1-15
Records assigned to bin 2:   11-20                16-30
Records assigned to bin 3:   21-30                31-45
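The percentage-based division in Table 63 can be sketched as follows; the helper is illustrative.

```python
def none_interleave_bins(total_records, nbins):
    """Divide relative record numbers 1..total_records into nbins
    contiguous, approximately equal ranges (Interleave *NONE)."""
    size = total_records // nbins
    bins = []
    start = 1
    for i in range(nbins):
        end = start + size - 1 if i < nbins - 1 else total_records
        bins.append((start, end))
        start = end + 1
    return bins

print(none_interleave_bins(30, 3))  # [(1, 10), (11, 20), (21, 30)]
print(none_interleave_bins(45, 3))  # [(1, 15), (16, 30), (31, 45)]
```

Note that the same relative record maps to a different bin as the member grows, which is exactly the drawback the text describes next.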
Note that when the total number of records in a member changes, the mapping also changes. Records that were once assigned to bin 2 may in the future be assigned to bin 1. If you wish to compare all records over the course of a few days, the changing mapping may cause you to miss records. A specific Interleave value is preferable in this case. Using bytes, the Interleave value specifies a number of contiguous records that should be assigned to each bin before moving to the next bin. Once the last bin is filled, assignment restarts at the first bin. Let us assume you have specified an interleave value of 20 bytes. The following example is based on the one provided in Table 63:
Table 64. Interleave(20)

                             Member A on Monday      Member A on Tuesday
Total records in member:     30                      45
Record length:               10 bytes                10 bytes
Number of subsets (bins):    3                       3
Interleave (bytes):          20                      20
Interleave (records):        2                       2
Records assigned to bin 1:   1-2, 7-8, 13-14,        1-2, 7-8, 13-14, 19-20,
                             19-20, 25-26            25-26, 31-32, 37-38, 43-44
Records assigned to bin 2:   3-4, 9-10, 15-16,       3-4, 9-10, 15-16, 21-22,
                             21-22, 27-28            27-28, 33-34, 39-40, 45
Records assigned to bin 3:   5-6, 11-12, 17-18,      5-6, 11-12, 17-18, 23-24,
                             23-24, 29-30            29-30, 35-36, 41-42
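The fixed byte-interleave mapping shown in Table 64 can be modeled as follows; the helper is illustrative, and MIMIX computes the equivalent internally.

```python
def interleave_bin(rrn, record_length, interleave_bytes, nbins):
    """Map a relative record number to a bin when a byte interleave is
    used: interleave_bytes // record_length contiguous records go to
    each bin in turn, then assignment wraps back to bin 1."""
    records_per_chunk = interleave_bytes // record_length  # 20 // 10 = 2
    chunk = (rrn - 1) // records_per_chunk
    return chunk % nbins + 1

# Matches Table 64: records 1-2 -> bin 1, 3-4 -> bin 2, 5-6 -> bin 3, 7-8 -> bin 1 ...
print([interleave_bin(r, 10, 20, 3) for r in range(1, 9)])  # [1, 1, 2, 2, 3, 3, 1, 1]
```

Because the mapping depends only on the relative record number, it is stable as the member grows, which is why a fixed interleave is preferred for multi-day audits.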
If the Interleave and Number of subsets values are constant, the mapping of relative record numbers to bins is maintained, despite the growth of member size. Because every bin is eventually selected, comparisons made over several days will compare every record that existed on the first day. In most circumstances, *CALC is recommended for the interleave specification. When you select *CALC, the system determines how many contiguous bytes are assigned to each bin before subsequent bytes are placed in the next bin. This calculated value will not change due to member size changes. Specifying *NONE or a very large interleave factor maximizes processing efficiency, since data in each bin is processed sequentially. Specifying a very small interleave factor can greatly reduce efficiency, as little sequential processing can be done before the file must be repositioned. If you wish to compare a random sample, a smaller interleave factor provides a more random, or scattered, sample to compare. The next parameters, the First subset and the Last subset, allow you to specify which bin to process.
First and last subset: The First subset and Last subset values work in combination to determine a range of bins to compare. For the First subset, the possible values are *FIRST and subset-number. If you select *FIRST, the range to compare will start with bin 1. Last subset has similar values, *LAST and subset-number. When you specify *LAST, the highest numbered bin is the last one processed. To compare a random sample of your data, specify a range of subsets that represent the size of the sample. For example, suppose you wish to compare seven percent of your data. If the number of subsets is 100, the first subset is 1, and the last subset is 7, seven percent of the data is compared. A first subset value of 21 and a last subset value of 27 would also compare seven percent of your data, but it would compare a different seven percent than the first example.
To compare all your data over the course of several days, specify the number of subsets and interleave factor that allows you to size each day's workload as your needs require. For example, you would keep the number of subsets and interleave factor constant, but vary the First and Last subset values each day. The following settings could be used over the course of a week to compare all of your data:
Table 65. Using First and last subset to compare data

Day of week   Number of subsets (bins)   First subset   Last subset   Percentage compared
Day 1         100                        1              10            10
Day 2         100                        11             20            10
Day 3         100                        21             30            10
Day 4         100                        31             40            10
Day 5         100                        41             50            10
Day 6         100                        51             65            15
Day 7         100                        66             100           35
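The sizing in Table 65 follows from a simple proportion. This illustrative helper computes the share of data a subset range covers, assuming approximately equal-sized bins.

```python
def percent_compared(first_subset, last_subset, nbins):
    """Percentage of data compared when bins first..last of nbins
    are selected."""
    return (last_subset - first_subset + 1) * 100 / nbins

print(percent_compared(1, 10, 100))    # 10.0
print(percent_compared(51, 65, 100))   # 15.0
print(percent_compared(66, 100, 100))  # 35.0
```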
Note: You can automate these tasks using MIMIX Monitor. Refer to the MIMIX Monitor documentation for more information.
4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the file and library to which files on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to a data group. If necessary, specify the name of the remote system to which files on the local system are compared.
6. At the Repair on system prompt, accept *NONE to indicate that no repair action is done.
7. At the Process while active prompt, specify *NO to indicate that active processing technology should not be used in the comparison.
8. At the File entry status prompt, specify *ACTIVE to process only those file members that are active.
9. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
10. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
11. At the Subsetting option prompt, specify *ALL to select all data and to indicate that no subsetting is performed.
12. At the Report type prompt, do one of the following:
• If you want all compared objects to be included in the report, accept the default.
• If you only want objects with detected differences to be included in the report, specify *DIF.
• If you want to include the member details and relative record number (RRN) of the first 1,000 objects that have differences, specify *RRN.
Notes: The *RRN value can only be used when *NONE is specified for the Repair on system prompt and *OUTFILE is specified for the Output prompt. The *RRN value outputs to a unique outfile (MXCMPFILR).
Specifying *RRN can help resolve situations where a discrepancy is known to exist but you are unsure which system contains the correct data. This value provides the information that enables you to display the specific records on the two systems and determine the system on which the file should be repaired.
13. At the Output prompt, do one of the following:
• To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to Step 18.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.
14. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)
15. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.
16. At the System to receive output prompt, specify the system on which the output should be created.
Note: If *YES is specified on the Process while active prompt and *OUTFILE was specified on the Outfile prompt, you must select *SYS2 for the System to receive output prompt.
17. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.
18. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter to continue with the next step.
19. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
20. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
21. To start the comparison, press Enter.
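For reference, the prompted steps above correspond to a single command invocation that can also be entered directly. The following is a minimal sketch, assuming these displays prompt the MIMIX Compare File Data (CMPFILDTA) command; the data group name MYAPP, the outfile MYLIB/CMPOUT, and the parameter keywords shown are illustrative, so prompt the command with F4 to confirm the exact keywords in your installation.

```cl
/* Compare file data for data group MYAPP, write results to an     */
/* outfile, and submit the request to batch.                       */
/* Keyword names are illustrative - verify them by prompting (F4). */
CMPFILDTA DGDFN(MYAPP)
          OUTPUT(*OUTFILE)
          OUTFILE(MYLIB/CMPOUT)
          OUTMBR(RESULTS *REPLACE)
          BATCH(*YES)
```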
457
4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the file and library to which files on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to a data group. If necessary, specify the name of the remote system to which files on the local system are compared.
6. At the Repair on system prompt, specify *SYS1, *SYS2, *LOCAL, *TGT, *SRC, or the system definition name to indicate the system on which repair action should be performed.
Note: *TGT and *SRC are only valid if you are comparing files defined to a data group. *SRC is not valid if active processing is in effect.
7. At the Process while active prompt, specify *NO to indicate that active processing technology should not be used in the comparison.
8. At the File entry status prompt, specify *ACTIVE to process only those file members that are active.
9. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
10. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
11. At the Subsetting option prompt, specify *ALL to select all data and to indicate that no subsetting is performed.
12. At the Report type prompt, do one of the following:
- If you want all compared objects to be included in the report, accept the default.
- If you only want objects with detected differences to be included in the report, specify *DIF.
13. At the Output prompt, do one of the following:
- To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step.
- To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.
- If you do not want to generate output, specify *NONE. Press Enter and skip to Step 18.
- To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.
14. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)
15. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.
16. At the System to receive output prompt, specify the system on which the output should be created.
Note: If *YES is specified on the Process while active prompt and *OUTFILE was specified on the Outfile prompt, you must select *SYS2 for the System to receive output prompt.
17. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.
18. At the Submit to batch prompt, do one of the following:
- If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
- To submit the job for batch processing, accept the default. Press Enter.
19. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
20. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
21. To start the comparison, press Enter.
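The compare-with-repair variant of the procedure can likewise be sketched as one command. This assumes the MIMIX Compare File Data (CMPFILDTA) command; the object selector syntax, keyword names, and the names MYAPP, MYLIB, and MYFILE are illustrative, so prompt the command with F4 to confirm the actual parameters.

```cl
/* Compare the data of one file defined to data group MYAPP and    */
/* repair differences on system 2, without active processing.      */
/* Keyword and selector syntax shown is illustrative only.         */
CMPFILDTA DGDFN(MYAPP)
          FILE((MYLIB/MYFILE *ALL))
          REPAIR(*SYS2)
          ACTIVE(*NO)
          OUTPUT(*PRINT)
```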
group is specified on the Data group definition prompts.
5. At the Repair on system prompt, specify *TGT to indicate that repair action be performed on the target system.
6. At the Process while active prompt, specify *YES to indicate that active processing technology should be used in the comparison.
7. At the File entry status prompt, specify *HLDERR to process only members being held due to error.
8. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
9. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
10. At the Output prompt, do one of the following:
- To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step.
- To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.
- If you do not want to generate output, specify *NONE. Press Enter and skip to Step 15.
- To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.
11. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)
12. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.
13. At the System to receive output prompt, specify the system on which the output should be created.
14. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.
15. At the Submit to batch prompt, do one of the following:
- If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
- To submit the job for batch processing, accept the default. Press Enter.
16. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
17. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
18. To compare and repair the file, press Enter.
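The held-member repair scenario above (repair on the target, active processing, members in *HLDERR status) reduces to an invocation like the following. This assumes the MIMIX Compare File Data (CMPFILDTA) command and illustrative keyword names; the values *TGT, *YES, and *HLDERR come from steps 5 through 7 of the procedure. Prompt the command with F4 to confirm the exact keywords.

```cl
/* Repair members held due to error (*HLDERR) on the target        */
/* system, using active processing technology.                     */
/* Keyword names are illustrative - verify them by prompting (F4). */
CMPFILDTA DGDFN(MYAPP)
          REPAIR(*TGT)
          ACTIVE(*YES)
          STATUS(*HLDERR)
```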
4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, accept the defaults.
f. Press Enter.
5. At the Repair on system prompt, specify *TGT to indicate that repair action be performed on the target system of the data group.
6. At the Process while active prompt, specify *YES or *DFT to indicate that active processing technology be used in the comparison. Since a data group is specified on the Data group definition prompts, *DFT renders the same results as *YES.
7. At the File entry status prompt, specify *ACTIVE to process only those file members that are active.
8. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
9. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
10. At the Subsetting option prompt, specify *ALL to select all data and to indicate that no subsetting is performed.
11. At the Report type prompt, do one of the following:
- If you want all compared objects to be included in the report, accept the default.
- If you only want objects with detected differences to be included in the report, specify *DIF.
12. At the Output prompt, do one of the following:
- To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step.
- To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.
- If you do not want to generate output, specify *NONE. Press Enter and skip to Step 17.
- To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.
13. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)
14. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.
15. At the System to receive output prompt, specify the system on which the output should be created.
Note: If *OUTFILE was specified on the Outfile prompt, it is recommended that you select *SYS2 for the System to receive output prompt.
16. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log and is the default used when the command is invoked from outside of shipped audits. When used as part of shipped audits, the default value is *OMIT since the results are already placed in an outfile.
17. At the Submit to batch prompt, do one of the following:
- If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
- To submit the job for batch processing, accept the default. Press Enter and continue with the next step.
18. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
19. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
20. To start the comparison, press Enter.
4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the file and library to which files on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to a data group. If necessary, specify the name of the remote system to which files on the local system are compared.
6. At the Repair on system prompt, specify a value if you want repair action performed.
Note: To process members in *HLDERR status, you must specify *TGT. See Step 8.
7. At the Process while active prompt, specify whether active processing technology should be used in the comparison.
Notes:
- To process members in *HLDERR status, you must specify *YES. See Step 8.
- If you are comparing files associated with a data group, *DFT uses active processing. If you are comparing files not associated with a data group, *DFT does not use active processing.
- Do not compare data using active processing technology if the apply process is 180 seconds or more behind, or has exceeded a threshold limit.
8. At the File entry status prompt, you can select files with specific statuses for compare and repair processing. Do one of the following:
a. To process active members only, specify *ACTIVE.
b. To process both active members and members being held due to error (*ACTIVE and *HLDERR), specify the default value *ALL.
c. To process members being held due to error only, specify *HLDERR.
Note: When *ALL or *HLDERR is specified for the File entry status prompt, *TGT must also be specified for the Repair on system prompt (Step 6) and *YES must be specified for the Process while active prompt (Step 7).
9. At the Subsetting option prompt, you must specify a value other than *ALL to use additional subsetting. Do one of the following:
- To compare a fixed range of data, specify *RANGE then press Enter to see additional prompts. Skip to Step 10.
- To define how many subsets should be established, how member data is assigned to the subsets, and which range of subsets to compare, specify *ADVANCED and press Enter to see additional prompts. Skip to Step 11.
- To indicate that only data specified on the Records at end of file prompt is compared, specify *ENDDTA and press Enter to see additional prompts. Skip to Step 12.
a. At the First record prompt, specify the relative record number of the first record to compare in the range.
b. At the Last record prompt, specify the relative record number of the last record to compare in the range.
c. Skip to Step 12.
11. At the Advanced subset options prompts, do the following:
a. At the Number of subsets prompt, specify the number of approximately equal-sized subsets to establish. Subsets are numbered beginning with 1.
b. At the Interleave prompt, specify the interleave factor. In most cases, the default *CALC is highly recommended.
c. At the First subset prompt, specify the first subset in the sequence of subsets to compare.
d. At the Last subset prompt, specify the last subset in the sequence of subsets to compare.
12. At the Records at end of file prompt, specify the number of records at the end of the member to compare. These records are compared regardless of other subsetting criteria.
Note: If *ENDDTA is specified on the Subsetting option prompt, you must specify a value other than *NONE.
13. At the Report type prompt, do one of the following:
- If you want all compared objects to be included in the report, accept the default.
- If you only want objects with detected differences to be included in the report, specify *DIF.
- If you want to include the member details and relative record number (RRN) of the first 1,000 objects that have differences, specify *RRN.
Notes: The *RRN value can only be used when *NONE is specified for the Repair on system prompt and *OUTFILE is specified for the Output prompt. The *RRN value outputs to a unique outfile (MXCMPFILR). Specifying *RRN can help resolve situations where a discrepancy is known to exist but you are unsure which system contains the correct data. This value provides the information that enables you to display the specific records on the two systems and determine the system on which the file should be repaired.
14. At the Output prompt, do one of the following:
- To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step.
- To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.
- If you do not want to generate output, specify *NONE. Press Enter and skip to Step 19.
- To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.
15. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)
16. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.
17. At the System to receive output prompt, specify the system on which the output should be created.
Note: If *YES is specified on the Process while active prompt and *OUTFILE was specified on the Outfile prompt, you must select *SYS2 for the System to receive output prompt.
18. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.
19. At the Submit to batch prompt, do one of the following:
- If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
- To submit the job for batch processing, accept the default. Press Enter and continue with the next step.
20. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
21. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
22. To start the comparison, press Enter.
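A common use of this procedure is locating which records differ when you are unsure which system holds the correct data. The sketch below assumes the MIMIX Compare File Data (CMPFILDTA) command with illustrative keyword names; as noted in the Report type step, *RRN requires that no repair be requested and that an outfile be produced, and results go to an outfile based on the supplied MXCMPFILR file. Prompt the command with F4 to confirm the exact keywords.

```cl
/* Report the relative record numbers (RRN) of differing records.  */
/* *RRN requires no repair action and outfile output; results are  */
/* written to an outfile based on MXCMPFILR.                       */
/* Keyword names are illustrative - verify them by prompting (F4). */
CMPFILDTA DGDFN(MYAPP)
          REPAIR(*NONE)
          RPTTYPE(*RRN)
          OUTPUT(*OUTFILE)
          OUTFILE(MYLIB/RRNOUT)
```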
Chapter 20 Synchronizing data between systems
The Lakeview-provided synchronize commands can be loosely grouped by common characteristics and the level of function they provide. Topic Considerations for synchronizing using MIMIX commands on page 474 describes subjects that apply to more than one group of commands, such as the maximum size of an object that can be synchronized, how large objects are handled, and how user profiles are addressed.
Initial synchronization: Initial synchronization can be performed manually with a variety of MIMIX and IBM commands, or by using the Synchronize Data Group (SYNCDG) command. The SYNCDG command is intended especially for performing the initial synchronization of one or more data groups and uses the auditing and automatic recovery support provided by MIMIX AutoGuard. The command can be long-running. For information about initial synchronization, see these topics:
- Performing the initial synchronization on page 483 describes how to establish a synchronization point and identifies other key information.
- Environments using MIMIX support for IBM WebSphere MQ have additional requirements for the initial synchronization of replicated queue managers. For more information, see the MIMIX for IBM WebSphere MQ book.
Synchronize commands: The commands Synchronize Object (SYNCOBJ), Synchronize IFS Object (SYNCIFS), and Synchronize DLO (SYNCDLO) provide robust support in MIMIX environments for synchronizing library-based objects, IFS objects, and DLOs, as well as their associated object authorities. Each command has considerable flexibility for selecting objects associated with or independent of a data group. Additionally, these commands are often called by other functions, such as by the automatic recovery features of MIMIX AutoGuard and by options to synchronize objects identified in tracking entries used with advanced journaling. For additional information, see: About MIMIX commands for synchronizing objects, IFS objects, and DLOs on
Synchronize Data Group Activity Entry: The Synchronize DG Activity Entry (SYNCDGACTE) command provides the ability to synchronize library-based objects, IFS objects, and DLOs that are associated with data group activity entries which have specific status values. The contents of the object and its attributes and authorities are synchronized. For additional information, see About synchronizing data group activity entries (SYNCDGACTE) on page 479.
Synchronize Data Group File Entry: The Synchronize DG File Entry (SYNCDGFE) command provides the means to synchronize database files associated with a data group by data group file entries. Additional options provide the means to address triggers, referential constraints, logical files, and related files. For more information about this command, see About synchronizing file entries (SYNCDGFE command) on page 480.
Send Network commands: The Send Network Object (SNDNETOBJ), Send Network IFS Object (SNDNETIFS), and Send Network DLO (SNDNETDLO) commands support fewer usage options and usability benefits than the Synchronize commands. These commands may require multiple invocations per library, path, or directory, respectively. These commands do not support synchronizing based on a data group name.
Procedures: The procedures in this chapter are for commands that are accessible from the MIMIX Compare, Verify, and Synchronize menu. Typically, when you need to synchronize individual items in your configuration, the best approach is to use the options provided on the displays where they are appropriate to use. The options call the appropriate command and, in many cases, pre-select some of the fields.
The following procedures are included:
- Synchronizing database files on page 489
- Synchronizing objects on page 491
- Synchronizing IFS objects on page 495
- Synchronizing DLOs on page 499
- Synchronizing data group activity entries on page 503
- Synchronizing tracking entries on page 505
- Sending library-based objects on page 506
- Sending IFS objects on page 508
- Sending DLO objects on page 509
The following subtopics apply to more than one group of commands. Before you synchronize, you should be aware of information in the following topics:
- Limiting the maximum sending size on page 474
- Synchronizing user profiles on page 474
- Synchronizing large files and objects on page 476
- Status changes caused by synchronizing on page 476
- Synchronizing objects in an independent ASP on page 477
1. To preserve behavior prior to changes made in V4R4 service pack SPC05SP4, specify *TFRDFN.
When synchronizing other object types, this command implicitly synchronizes user profiles associated with the object if they do not exist on the target system. Although only the requested object type, such as *PGM, is specified on the command, the owning user profile, the primary group profile, and user profiles that have private authorities to an object are implicitly synchronized. The object and associated user profiles are synchronized. The status of the user profile on the target system is set to *DISABLED.
The Synchronize commands (SYNCOBJ, SYNCIFS and SYNCDLO) do not change the status of activity entries associated with the objects being synchronized. Activity entries retain the same status after the command completes. Note: The SYNCIFS command will change the status of an activity entry for an IFS object configured for advanced journaling. When advanced journaling is configured, each replicated activity has associated tracking entries. When you use the SYNCOBJ or SYNCIFS commands to synchronize an object that has a corresponding tracking entry, the status of the tracking entry will change to *ACTIVE upon successful completion of the synchronization request. If the synchronization is not successful, the status of the tracking entry will remain in its original status or have a status of *HLD. If the data group is not active, the status of the tracking entry will be updated once the data group is restarted.
About MIMIX commands for synchronizing objects, IFS objects, and DLOs
The Synchronize Object (SYNCOBJ), Synchronize IFS (SYNCIFS), and Synchronize DLO (SYNCDLO) commands provide versatility for synchronizing objects and their authority attributes.
Where to run: The synchronize commands can be run from either system. However, if you run these commands from a target system, you must specify the name of a data group to avoid overwriting the objects on the source system.
Identifying what to synchronize: On each command, you can identify objects to synchronize by specifying a data group, a subset of a data group, or by specifying objects independently of a data group. When you specify a data group, its source system determines the objects to synchronize. The objects to be synchronized by the command are the same as those identified for replication by the data group. For example, specifying a data group on the SYNCOBJ command will synchronize the same library-based objects as those configured for replication by the data group. If you specify a data group as well as specify additional object information in command parameters, the additional parameter information is used to filter the list of objects identified for the data group. When no data group is specified, the local system becomes the source system and a target system must be identified. The list of objects to synchronize is generated on the local system. For more information about the object selection criteria used when no data group is specified on these commands, see Object selection for Compare and Synchronize commands on page 399.
Each command has a Synchronize authorities parameter to indicate whether authority attributes are synchronized. By default, the object and all authority-related attributes are synchronized. You can also synchronize only the object or only the authority attributes of an object. Authority attributes include ownership, authorization list, primary group, and public and private authorities. When you use the SYNCOBJ command to synchronize only the authorities for an object and no data group name is specified, the command can fail if any files it processes are cooperatively processed by an active data group and the database apply job holds a lock on those files.
When to run: Each command can be run whether the data group is active or inactive. Using the SYNCOBJ, SYNCIFS, and SYNCDLO commands during off-peak usage, or when the objects being synchronized are in a quiesced state, reduces contention for object locks. When using the SYNCIFS command for a data group configured for advanced journaling, the data group can be active but it should not have a backlog of unprocessed entries.
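The two ways of identifying what to synchronize can be sketched as command invocations. This assumes the SYNCOBJ command described above; the data group name MYAPP, system name BACKUP, and the keyword and selector syntax for the no-data-group form are illustrative, so prompt the command with F4 to confirm the actual parameters.

```cl
/* Synchronize the library-based objects defined to data group     */
/* MYAPP; by default the objects and their authority attributes    */
/* are both synchronized.                                          */
SYNCOBJ DGDFN(MYAPP)

/* Without a data group, the local system is the source and a      */
/* target system must be named (keywords illustrative).            */
SYNCOBJ OBJ((MYLIB/*ALL *ALL)) SYS2(BACKUP)
```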
Additional parameters: On each command, the following parameters provide additional control of the synchronization process. The Save active parameter provides the ability to save the object in an active environment using IBM's save while active support. Values supported are the same as those used in related IBM commands. The Save active wait time parameter specifies the amount of time to wait for a commit boundary or for a lock on an object. If a lock is not obtained in the specified time, the object is not saved. If a commit boundary is not reached in the specified time, the save operation ends and the synchronization attempt fails. The Maximum sending size (MB) parameter specifies the maximum size that an object can be in order to be synchronized. For more information, see Limiting the maximum sending size on page 474.
Status changes during synchronization: During synchronization processing, if the data group is active, the status of each activity entry being synchronized is set to pending synchronization (PZ) and then to pending completion (PC). When the synchronization request completes, the status of the activity entries is set to either completed by synchronization (CZ) or failed synchronization (FZ). If the data group is inactive, the status of the activity entries remains either pending synchronization (PZ) or pending completion (PC) when the synchronization request completes. When the data group is restarted, the status of the activity entries is set to either completed by synchronization (CZ) or failed synchronization (FZ).
Files with triggers: The SYNCDGFE command provides the ability to optionally disable triggers during synchronization processing and enable them again when processing is complete. The Disable triggers on file (DSBTRG) parameter specifies whether the database apply process (used for synchronization) disables triggers when processing a file. The default value *DGFE uses the data group file entry to determine whether triggers should be disabled. The value *YES disables triggers on the target system during synchronization.
If configuration options for the data group, or optionally for a data group file entry, allow MIMIX to replicate trigger-generated entries and disable the triggers, you must specify *DATA as the sending mode when synchronizing a file with triggers.
Including logical files: The Include logical files (INCLF) parameter allows you to include any attached logical files in the synchronization request. This parameter is only valid when *SAVRST is specified for the Sending mode prompt.
Physical files with referential constraints: A physical file with referential constraints requires field values in another physical file to be valid. When synchronizing physical files with referential constraints, ensure all files in the referential constraint structure are synchronized concurrently during a time of minimal activity on the source system. Doing so ensures the integrity of synchronization points.
Including related files: You can optionally include files related to the specified file by specifying *YES for the Include related (RELATED) parameter. Related files are those physical files which have a relationship with the selected physical file by means of one or more join logical files. Join logical files are logical files attached to fields in two or more physical files. The Include related (RELATED) parameter defaults to *NO. In some environments, specifying *YES could result in a high number of files being synchronized, which could strain available communications and take a significant amount of time to complete.
A physical file being synchronized cannot be name mapped if it is not in the same library as the logical file associated with it. Logical files may be mapped by using object entries.
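The trigger and related-file options above combine on a single SYNCDGFE request. The sketch below uses the DSBTRG, INCLF, and RELATED parameters documented in this topic; the data group name MYAPP, the file name MYLIB/MYFILE, and the remaining keyword names are illustrative, so prompt the command with F4 to confirm them.

```cl
/* Synchronize one database file defined to data group MYAPP,      */
/* disabling its triggers on the target during the copy, and       */
/* include attached logical files (requires *SAVRST sending mode). */
SYNCDGFE DGDFN(MYAPP)
         FILE(MYLIB/MYFILE)
         DSBTRG(*YES)
         INCLF(*YES)
         RELATED(*NO)
```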
Tracking entries may not exist for existing IFS objects, data areas, or data queues that have been configured for replication with advanced journaling since the last start of the data group. For status changes to be effective for a tracking entry that is being synchronized, the data group must be active. When the apply session receives notification that the object represented by the tracking entry is synchronized successfully, the tracking entry status changes to *ACTIVE.
more flexibility in object selection and also provide the ability to synchronize object authorities. By specifying a data group on any of these commands, you can synchronize the data defined by its data group entries.

You can also use the Synchronize Data Group File Entry (SYNCDGFE) command to synchronize database files and members. This command provides the ability to choose between MIMIX copy active file processing and save/restore processing, and it provides choices for handling trigger programs during synchronization.

If you have configured or migrated to integrated advanced journaling, follow the SYNCIFS procedures for IFS objects, the SYNCOBJ procedures for data areas and data queues, and the SYNCDGFE procedures for files containing LOB data. You can also use options to synchronize objects associated with tracking entries from the Work with DG IFS Trk. Entries display and the Work with DG Obj. Trk. Entries display.

SNDNET commands: The Send Network commands (SNDNETIFS, SNDNETDLO, SNDNETOBJ) support fewer options for selecting and specifying multiple objects and do not provide a way to specify objects by data group. These commands may require multiple invocations per path, folder, or library, respectively.
This chapter (Synchronizing data between systems on page 472) includes additional information about the MIMIX SYNC and SNDNET commands.
Before requesting the synchronization, ensure that the following prerequisites are met:
- Any IBM PTFs (or superseding PTFs) associated with IBM i releases have been applied as they pertain to your environment. Log in to Support Central and access the Technical Documents page for a list of required and recommended IBM PTFs.
- Journaling is started on the source system for everything defined to the data group.
- All replication processes are active.
- The user ID submitting the SYNCDG has *MGT authority in product level security, if it is enabled for the installation.
- No other audits (comparisons or recoveries) are in progress when the SYNCDG is requested.
- Collector services has been started.
While the synchronization is in progress, other audits for the data group are prevented from running. MIMIX Availability Manager displays initialization mode on the Audit Summary and Compliance interfaces while running this command if the data group definition (DGDFN) specifies *ALL.
6. The Synchronize Data Group (SYNCDG) command prompt opens. Click Advanced and specify the following values, pressing F4 for valid options on each parameter or using the drop-down menu: Data group definition (DGDFN), Job description (JOBD).
7. Click OK to perform the initial synchronization.
8. Verify that your configuration is using MIMIX AutoGuard. This step includes performing audits to verify that journaling and other aspects of your environment are ready to use. Audits automatically check for and attempt to correct differences found between the source system and the target system. Use Verifying the initial synchronization on page 487.

From a 5250 emulator, do the following:
1. Use the command STRDG DGDFN(*ALL).
2. Type the command SYNCDG and press Enter. Specify the following values, pressing F4 for valid options on each parameter: Data group definition (DGDFN), Job description (JOBD).
3. Press Enter to perform the initial synchronization.
4. Verify that your configuration is using MIMIX AutoGuard. This step includes performing audits to verify that journaling and other aspects of your environment are ready to use. Audits automatically check for and attempt to correct differences found between the source system and the target system. Use Verifying the initial synchronization on page 487.
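Entered from a command line, the two commands in the 5250 procedure might look like the following; the data group and job description names are placeholders. DGDFN and JOBD are the parameter names given in the procedure.

STRDG DGDFN(*ALL)
SYNCDG DGDFN(data-group-name) JOBD(job-description-name)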
Do the following: 1. Check whether all necessary journaling is started for each data group. Enter the following command:
(installation-library-name)/DSPDGSTS DGDFN(data-group-name) VIEW(*DBFETE)
On the File and Tracking Entry Status display, the File Entries column identifies how many file entries were configured from your replication patterns and indicates whether any file entries are not journaled on the source and target systems. If you are configured for advanced journaling, the Tracking Entries columns provide similar information. 2. Use MIMIX AutoGuard to audit your environment. To access the audits, enter the following command:
(installation-library-name)/WRKAUD
3. Each audit listed on the Work with Audits display is a unique combination of data group and MIMIX rule. When verifying an initial configuration, you need to perform a subset of the available audits for each data group in a specific order, shown in Table 67. Do the following: a. To change the number of active audits at any one time, enter the following command:
CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY) MAXACT(*NOMAX)
b. Use F18 (Subset) to subset the audits by the name of the rule you want to run.
c. Type a 9 (Run rule) next to the audit for each data group and press Enter.
Repeat Step 3b and Step 3c for each rule in Table 67 until you have started all the listed audits for all data groups.
Table 67. Rules for initial validation, listed in the order to be performed.
d. Reset the number of active audit jobs to values consistent with regular auditing:
CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY) MAXACT(5)
4. Wait for all audits to complete; some audits may take time. Then check the results and resolve any problems. You may need to change subsetting values again so that you can view all rule and data group combinations at once. On the Work with Audits display, check the Audit Status column for the following value:
*NOTRCVD - The comparison performed by the rule detected differences, and some of the differences were not automatically recovered. Action is required. View notifications for more information and resolve the problem.
Note: See the MIMIX AutoGuard document for more information about viewing audit results.
To synchronize a database file between two systems using the SYNCDGFE command defaults, do the following, or use the alternative process described below:
1. From the Work with DG Definitions display, type 17 (File entries) next to the data group to which the file you want to synchronize is defined and press Enter.
2. The Work with DG File Entries display appears. Type 16 (Sync DG file entry) next to the file entry for the file you want to synchronize and press Enter.
Note: If you are synchronizing file entries as part of your initial configuration, you can type 16 next to the first file entry and then press F13 (Repeat). When you press Enter, all file entries will be synchronized.

Alternative Process: You will need to identify the data group and data group file entry in this procedure. In Step 8 and Step 9, you will need to make choices about the sending mode and trigger support. For additional information, see About synchronizing file entries (SYNCDGFE command) on page 480.
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 41 (Synchronize DG File Entry) and press Enter.
3. The Synchronize DG File Entry (SYNCDGFE) display appears. At the Data group definition prompts, specify the name of the data group with which the file is associated.
4. At the System 1 file and Library prompts, specify the name of the database file you want to synchronize and the library in which it is located on system 1.
5. If you want to synchronize only one member of a file, specify its name at the Member prompt.
6. At the Data source prompt, ensure that the value matches the system that you want to use as the source for the synchronization.
7. The default value *YES for the Release wait prompt indicates that MIMIX will hold the file entry in a release-wait state until a synchronization point is reached, then change the status to active. If you want to hold the file entry for your intervention, specify *NO.
8. At the Sending mode prompt, specify the value for the type of data to be synchronized.
9. At the Disable triggers on file prompt, specify whether the database apply process should disable triggers when processing the file. Accept *DGFE to use the value specified in the data group file entry, or specify another value. Skip to Step 14.
10. At the Save active prompt, accept *NO so that objects in use are not saved, or specify another value.
11. At the Save active wait time prompt, specify the number of seconds to wait for a commit boundary or a lock on the object before continuing the save.
12. At the Allow object differences prompt, accept the default or specify *YES to indicate whether certain differences encountered during the restore of the object on the target system should be allowed.
13. At the Include logical files prompt, accept the default or specify *NO to indicate whether you want to include attached logical files when sending the file.
14. To change any of the additional parameters, press F10 (Additional parameters). Verify that the values shown for Include related files, Maximum sending file size (MB), and Submit to batch are what you want.
15. To synchronize the file, press Enter.
Synchronizing objects
The procedures in this topic use the Synchronize Object (SYNCOBJ) command to synchronize library-based objects between two systems. The objects to be synchronized can be defined to a data group or can be independent of a data group. You should be aware of the information in the following topics:
- Considerations for synchronizing using MIMIX commands on page 474
- About MIMIX commands for synchronizing objects, IFS objects, and DLOs on page 478
authorities and objects, or specify another value.
6. At the Save active prompt, accept *NO to specify that objects in use are not saved, or specify another value.
7. At the Save active wait time prompt, specify the number of seconds to wait for a commit boundary or a lock on the object before continuing the save.
8. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.
Note: When a data group is specified, the following parameters are ignored: System 1 ASP group or device, System 2 ASP device number, and System 2 ASP device name.
9. Determine how the synchronize request will be processed. Choose one of the following:
- To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.
- To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.
10. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
11. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
12. To start the synchronization, press Enter.
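As a sketch, a batch synchronization of the objects defined to a data group might be requested as follows. The SYNCOBJ command name and the Data group definition, System 2, and Synchronize authorities prompts are documented above; the SYS2 and SYNCAUT keywords shown for the latter two prompts are assumptions, so prompt the command with F4 to confirm them.

SYNCOBJ DGDFN(data-group-name) SYS2(remote-system-name) SYNCAUT(*YES)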
c. At the Object attribute prompt, accept *ALL to synchronize the entire list of supported attributes, or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization.
e. At the System 2 object and System 2 library prompts, if the object and library names on system 2 are equal to the system 1 names, accept the defaults. Otherwise, specify the name of the object and library on system 2 to which you want to synchronize the objects.
f. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system to which to synchronize the objects.
6. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.
Note: When you specify *ONLY and a data group name is not specified, if any files processed by this command are cooperatively processed and the data group that contains these files is active, the command could fail if the database apply job has a lock on these files.
7. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value.
8. At the Save active wait time prompt, specify the number of seconds to wait for a commit boundary or a lock on the object before continuing the save.
9. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.
10. At the System 1 ASP group or device prompt, specify the name of the auxiliary storage pool (ASP) group or device where objects configured for replication may reside on system 1. Otherwise, accept the default to use the current job's ASP group name.
11. At the System 2 ASP device number prompt, specify the number of the auxiliary storage pool (ASP) where objects configured for replication may reside on system 2. Otherwise, accept the default to use the same ASP number from which the object was saved (*SAVASP). Only the libraries in the system ASP and any basic user ASPs from system 2 will be in the library name space.
12. At the System 2 ASP device name prompt, specify the name of the auxiliary storage pool (ASP) device where objects configured for replication may reside on system 2. Otherwise, accept the default to use the value specified for the system 1 ASP group or device (*ASPGRP1).
13. Determine how the synchronize request will be processed. Choose one of the following:
- To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter.
- To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.
14. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
15. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
16. To start the synchronization, press Enter.
e. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization.
Note: The System 2 object path name and System 2 name pattern values are ignored when a data group is specified.
f. Press Enter.
5. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.
6. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value.
7. If you chose values in Step 6 to save active objects, you can optionally specify additional options at the Save active option prompt. Press F1 (Help) for additional information.
8. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.
9. Determine how the synchronize request will be processed. Choose one of the following:
- To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.
- To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. Continue with Step 12.
10. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
11. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
12. To optionally specify a file identifier (FID) for the object on either system, do the following:
a. At the System 1 file identifier prompt, specify the file identifier (FID) of the IFS object on system 1. Values for the System 1 file identifier prompt can be used alone or in combination with the IFS object path name.
b. At the System 2 file identifier prompt, specify the file identifier (FID) of the IFS object on system 2. Values for the System 2 file identifier prompt can be used alone or in combination with the IFS object path name.
Note: For more information, see Using file identifiers (FIDs) for IFS objects on page 312.
13. To start the synchronization, press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 43 (Synchronize IFS object) and press Enter. The Synchronize IFS Object (SYNCIFS) command appears.
3. At the Data group definition prompts, specify *NONE.
4. At the IFS objects prompts, specify elements for one or more object selectors that identify IFS objects to synchronize. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For more information, see the topic on object selection in the MIMIX Reference book. For each selector, do the following:
a. At the Object path name prompt, you can optionally accept *ALL or specify the name or generic value you want.
Note: The IFS object path name can be used alone or in combination with FID values. See Step 13.
b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the scope of IFS objects to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the IFS object path name.
d. At the Object type prompt, accept *ALL or specify a specific IFS object type to synchronize.
e. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization.
f. At the System 2 object path name and System 2 name pattern prompts, if the IFS object path name and name pattern on system 2 are equal to the system 1 names, accept the defaults. Otherwise, specify the path name and pattern on system 2 to which you want to synchronize the IFS objects.
g. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system on which to synchronize the IFS objects.
6. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.
7. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value.
8. If you chose values in Step 7 to save active objects, you can optionally specify additional options at the Save active option prompt. Press F1 (Help) for additional information.
9. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.
10. Determine how the synchronize request will be processed. Choose one of the following:
- To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.
- To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. Continue with Step 13.
11. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
12. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
13. To optionally specify a file identifier (FID) for the object on either system, do the following:
a. At the System 1 file identifier prompt, specify the file identifier (FID) of the IFS object on system 1. Values for the System 1 file identifier prompt can be used alone or in combination with the IFS object path name.
b. At the System 2 file identifier prompt, specify the file identifier (FID) of the IFS object on system 2. Values for the System 2 file identifier prompt can be used alone or in combination with the IFS object path name.
Note: For more information, see Using file identifiers (FIDs) for IFS objects on page 312.
14. To start the synchronization, press Enter.
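A command-line sketch of a request to synchronize IFS objects independently of a data group might look like the following. The path and system names are placeholders, and the OBJ and SYS2 keywords shown for the IFS objects and System 2 prompts, as well as the element structure of the object selector, are assumptions; prompt SYNCIFS with F4 to confirm them.

SYNCIFS DGDFN(*NONE) OBJ((object-path-name *INCLUDE)) SYS2(remote-system-name)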
Synchronizing DLOs
The procedures in this topic use the Synchronize DLO (SYNCDLO) command to synchronize document library objects (DLOs) between two systems. The DLOs to be synchronized can be defined to a data group or can be independent of a data group. You should be aware of the information in the following topics:
- Considerations for synchronizing using MIMIX commands on page 474
- About MIMIX commands for synchronizing objects, IFS objects, and DLOs on page 478
are ignored when a data group is specified.
g. Press Enter.
5. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.
6. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value.
7. At the Save active wait time prompt, specify the number of seconds to wait for a lock on the object before continuing the save.
8. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.
9. Determine how the synchronize request will be processed. Choose one of the following:
- To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.
- To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.
10. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
11. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
12. To start the synchronization, press Enter.
c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the DLO path name.
d. At the DLO type prompt, accept *ALL or specify a specific DLO type to synchronize.
e. At the Owner prompt, accept *ALL or specify the owner of the DLO.
f. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization.
g. At the System 2 DLO path name and System 2 DLO name pattern prompts, if the DLO path name and name pattern on system 2 are equal to the system 1 names, accept the defaults. Otherwise, specify the path name and pattern on system 2 to which you want to synchronize the DLOs.
h. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system on which to synchronize the DLOs.
6. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.
7. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value.
8. At the Save active wait time prompt, specify the number of seconds to wait for a lock on the object before continuing the save.
9. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.
10. Determine how the synchronize request will be processed. Choose one of the following:
- To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.
- To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.
11. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
12. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
13. To start the synchronization, press Enter.
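As with the other SYNC commands, a DLO synchronization can be sketched as a single command. The folder path and system names here are placeholders, and the DLO and SYS2 keywords shown for the DLO path name and System 2 prompts are assumptions to be confirmed by prompting SYNCDLO with F4.

SYNCDLO DGDFN(*NONE) DLO(dlo-path-name) SYS2(remote-system-name)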
To synchronize an object identified by a data group activity entry, do the following:
1. From the Work with Data Group Activity Entry display, type 16 (Synchronize) next to the activity entry that identifies the object you want to synchronize and press Enter.
2. The Confirm Synchronize of Object display appears. Press Enter to confirm the synchronization.

Alternative Process: You will need to identify the data group and data group activity entry in this procedure.
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 45 (Synchronize DG File Entry) and press Enter.
3. At the Data group definition prompts, specify the data group name.
4. At the Object type prompt, specify a specific object type to synchronize or press F4 to see a valid list.
5. Additional parameters appear based on the object type selected. Do one of the following:
- For files, you will see the Object, Library, and Member prompts. Specify the object, library, and member that you want to synchronize.
- For objects, you will see the Object and Library prompts. Specify the object and library of the object you want to synchronize.
- For IFS objects, you will see the IFS object prompt. Specify the IFS object that you want to synchronize.
- For DLOs, you will see the Document library object and Folder prompts. Specify the folder path and DLO name of the DLO you want to synchronize.
6. Determine how the synchronize request will be processed. Choose one of the following:
- To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.
- To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.
7. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
8. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
9. To start the synchronization, press Enter.
To send library-based objects between two systems, do the following:
1. If the objects you are sending are located in an independent auxiliary storage pool (ASP) on the source system, you must use the IBM command Set ASP Group (SETASPGRP) on the local system to change the ASP group for your job. This allows MIMIX to access the objects.
2. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and press Enter.
3. The MIMIX Utilities Menu appears. Select option 11 (Send object) and press Enter.
4. The Send Network Object (SNDNETOBJ) display appears. At the Object prompt, specify either *ALL, the name of an object, or a generic name.
Note: You can specify as many as 50 objects. To expand this prompt for multiple entries, type a plus sign (+) at the prompt and press Enter.
5. Specify the name of the library that contains the objects at the Library prompt.
6. Specify the type of objects to be sent from the specified library at the Object type prompt.
Notes: If you specify *ALL, all object types supported by the i5/OS Save Object (SAVOBJ) command are selected. The single values listed for this parameter are not included when *ALL is specified because they are not supported by the i5/OS SAVOBJ command. To expand this field for multiple entries, type a plus sign (+) at the prompt and press Enter.
7. Press Enter.
8. Additional prompts appear on the display. Do the following:
a. Specify the name of the system to which you are sending objects at the Remote system prompt.
b. If the library on the remote system has a different name, specify its name at the Remote library prompt.
c. The remaining prompts on the display are used for objects synchronized via a save and restore operation. Verify that the values shown are what you want. To see a description of each prompt and its available values, place the cursor on the prompt and press F1 (Help).
9. By default, objects are restored to the same ASP device or number from which they were saved. To change the location where objects are restored, press F10 (Additional parameters), then specify a value for either the Restore to ASP device prompt or the Restore to ASP number prompt.
Note: Object types *JRN, *JRNRCV, *LIB, and *SAVF can be restored to any ASP. IBM restricts which object types are allowed in user ASPs, so some object types may not be restored to user ASPs. Specifying a value of 1 restores objects to the system ASP. Specifying 2 through 32 restores objects to the basic user ASP specified. If the specified ASP number does not exist on the target system, or if it has overflowed, the objects are placed in the system ASP on the target system.
10. By default, authority to the object on the remote system is determined by that system. To have the authorities on the remote system determined by the settings of the local system, press F10 (Additional parameters), then specify *SRC at the Target authority prompt.
11. To start sending the specified objects, press Enter.
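A command-line sketch of this procedure might look like the following; the object, library, and system names are placeholders, and the keywords shown for the Object, Library, Object type, and Remote system prompts (OBJ, LIB, OBJTYPE, RMTSYS) are assumptions to confirm by prompting SNDNETOBJ with F4.

SNDNETOBJ OBJ(object-name) LIB(library-name) OBJTYPE(*ALL) RMTSYS(remote-system-name)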
Chapter 21
Introduction to programming
MIMIX includes a variety of functions that you can use to extend MIMIX capabilities through automation and customization. The topics in this chapter include:
- Support for customizing on page 511 describes several functions you can use to customize your replication environment.
- Completion and escape messages for comparison commands on page 514 lists completion, diagnostic, and escape messages generated by comparison commands.
- The MIMIX message log provides a common location to see messages from all MIMIX products. Adding messages to the MIMIX message log on page 521 describes how you can include your own messaging from automation programs in the MIMIX message log.
- MIMIX supports batch output jobs on numerous commands and provides several forms of output, including outfiles. For more information, see Output and batch guidelines on page 523.
- Displaying a list of commands in a library on page 528 describes how to display the super set of all Lakeview commands known to License Manager or subset the list by a particular library.
- Running commands on a remote system on page 529 describes how to run a single command or multiple commands on a remote system.
- Procedures for running commands RUNCMD, RUNCMDS on page 530 provides procedures for using run commands with a specific protocol or by specifying a protocol through existing MIMIX configuration elements.
- Using lists of retrieve commands on page 536 identifies how to use MIMIX list commands to include retrieve commands in automation.
- Commands are typically set with default values that reflect the recommendations of Lakeview Technology. Changing command defaults on page 537 provides a method for customizing default values should your business needs require it.
Collision resolution
In the context of high availability, a collision is a clash of data that occurs when a target object and a source object are both updated at the same time. When the change to the source object is replicated to the target object, the data does not match and the collision is detected. With MIMIX user journal replication, the definition of a collision is expanded to include any condition where the status of a file or a record is not what MIMIX determines it should be when MIMIX applies a journal transaction. Examples of these detected conditions include the following:
- Updating a record that does not exist
- Deleting a record that does not exist
- Writing to a record that already exists
- Updating a record for which the current record information does not match the before image
The database apply process contains 12 collision points at which MIMIX can attempt to resolve a collision. When a collision is detected, by default the file is placed on hold due to an error (*HLDERR) and user action is needed to synchronize the files. MIMIX provides additional ways to automatically resolve detected collisions without user intervention. This process is called collision resolution. With collision resolution, you can specify different resolution methods to handle these different types of collisions. If a collision does occur, MIMIX attempts the specified collision resolution methods until either the collision is resolved or the file is placed on hold. You can specify collision resolution methods for a data group or for individual data group file entries. If you specify *AUTOSYNC for the collision resolution element of the file entry options, MIMIX attempts to fix any problems it detects by synchronizing the file. You can also specify a named collision resolution class. A collision resolution class allows you to define what type of resolution to use at each of the collision points. Collision resolution classes allow you to specify several methods of resolution to try
for each collision point and support the use of an exit program. These additional choices allow customized solutions for resolving collisions without requiring user action. For more information, see Collision resolution on page 381.
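The try-until-resolved flow described above can be sketched as follows. This is an illustrative model only; the function and field names (resolve_collision, autosync, repairable) are assumptions made for the sketch, not MIMIX internals.

```python
# Illustrative sketch of collision resolution: try each configured
# resolution method in order until one resolves the collision, or
# place the file on hold (*HLDERR) when all methods fail.

def resolve_collision(collision, methods):
    """Return the name of the method that resolved the collision,
    or "*HLDERR" if every method failed and user action is needed."""
    for method in methods:
        if method(collision):          # a method returns True when resolved
            return method.__name__
    return "*HLDERR"                   # hold the file; user must synchronize

# Hypothetical resolution methods for one collision point:
def autosync(collision):
    # *AUTOSYNC: attempt to fix the problem by synchronizing the file
    return collision.get("repairable", False)

def exit_program(collision):
    # A named collision resolution class may invoke a user exit program
    return collision.get("exit_handled", False)

print(resolve_collision({"repairable": True}, [autosync, exit_program]))  # autosync
print(resolve_collision({}, [autosync, exit_program]))                    # *HLDERR
```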
CMPFILA messages
The following are the messages for CMPFILA with a comparison level specification of *FILE:
- Completion LVI3E01: All files were compared successfully.
- Diagnostic LVE3E0D: A particular attribute compared differently.
- Diagnostic LVE3385: Differences were detected for an active file.
- Diagnostic LVE3E12: A file was not compared. The reason the file was not compared is included in the message.
- Escape LVE3E05: Files were compared with differences detected. If the cumulative differences include files that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.
- Escape LVE3381: Compared files were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter.
- Escape LVE3E09: The CMPFILA command ended abnormally.
- Escape LVE3E17: No object matched the specified selection criteria.
- Informational LVI3E06: No object was selected to be processed.
The following are the messages for CMPFILA with a comparison level specification of *MBR:
- Completion LVI3E05: All members compared successfully.
- Diagnostic LVE3388: Differences were detected for an active member.
- Escape LVE3E16: Members were compared with differences detected. If the cumulative differences include members that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.
CMPOBJA messages
The following are the messages for CMPOBJA:
- Completion LVI3E02: Objects were compared and no differences were detected.
- Diagnostic LVE3384: Differences were detected for an active object.
- Escape LVE3E06: Objects were compared and differences were detected. If the cumulative differences include objects that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.
- Escape LVE3380: Compared objects were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter.
- Escape LVE3E17: No object matched the specified selection criteria.
- Informational LVI3E06: No object was selected to be processed.
The LVI3E02 message includes message data containing the number of objects compared, the system 1 name, and the system 2 name. The LVE3E06 message includes the same message data as LVI3E02, as well as the number of differences detected.
CMPIFSA messages
The following are the messages for CMPIFSA:
- Completion LVI3E03: All IFS objects were compared successfully.
- Diagnostic LVE3E0F: A particular attribute compared differently.
- Diagnostic LVE3386: Differences were detected for an active IFS object.
- Diagnostic LVE3E14: An IFS object was not compared. The reason the IFS object was not compared is included in the message.
- Escape LVE3E07: IFS objects were compared with differences detected. If the cumulative differences include IFS objects that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.
- Escape LVE3382: Compared IFS objects were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter.
- Escape LVE3E17: No object matched the specified selection criteria.
- Escape LVE3E0B: The CMPIFSA command ended abnormally.
- Informational LVI3E06: No object was selected to be processed.
CMPDLOA messages
The following are the messages for CMPDLOA:
- Completion LVI3E04: All DLOs were compared successfully.
- Diagnostic LVE3E11: A particular attribute compared differently.
- Diagnostic LVE3387: Differences were detected for an active DLO.
- Diagnostic LVE3E15: A DLO was not compared. The reason the DLO was not compared is included in the message.
- Escape LVE3E08: DLOs were compared and differences were detected. If the cumulative differences include DLOs that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.
- Escape LVE3383: Compared objects were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter.
- Escape LVE3E17: No object matched the specified selection criteria.
- Escape LVE3E0C: The CMPDLOA command ended abnormally.
- Informational LVI3E06: No object was selected to be processed.
CMPRCDCNT messages
The following are the messages for CMPRCDCNT:
- Escape LVE3D4D: ACTIVE(*YES) outfile processing failed; the message identifies the reason code.
- Escape LVE3D5A: System journal replication is not active.
- Escape LVE3D5F: An apply session exceeded the unprocessed entry threshold.
- Escape LVE3D6D: User journal replication is not active.
- Escape LVE3D6F: Identifies the number of members compared and how many compared members had differences.
- Escape LVE3D72: Identifies a child process that ended unexpectedly.
- Escape LVE3E17: No object was found for the specified selection criteria.
- Informational LVI306B: Identifies a child process that started successfully.
- Informational LVI306D: Identifies a child process that completed successfully.
- Informational LVI3D45: Active processing completed.
- Informational LVI3D50: Work files are not deleted.
- Informational LVI3D5A: System journal replication is not active.
- Informational LVI3D5F: Identifies an apply session that has exceeded the unprocessed entry threshold.
- Informational LVI3D6D: User journal replication is not active.
- Informational LVI3E05: Identifies the number of members compared; no differences were detected.
- Informational LVI3E06: No object was selected for processing.
CMPFILDTA messages
The following are the messages for CMPFILDTA:
- Completion LVI3D59: All members compared were identical, or one or more members differed but were then completely repaired.
- Diagnostic LVE3031: The name of the local system was entered on the System 2 (SYS2) prompt. Using the name of the local system on the SYS2 prompt is not valid.
- Diagnostic LVE3D40: A record in one of the members cannot be processed. In this case, another job is holding an update lock on the record and the wait time has expired.
- Diagnostic LVE3D42: A selected member cannot be processed; a reason code is provided.
- Diagnostic LVE3D46: A file member contains one or more field types that are not supported for comparison. These fields are excluded from the data compared.
- Diagnostic LVE3D50: A file member contains one or more large object (LOB) fields and a value other than *NONE was specified on the Repair on system (REPAIR) prompt. Files containing LOB fields cannot be repaired. In this case, the request to process the file member is ignored. Specify REPAIR(*NONE) to process the file member.
- Diagnostic LVE3D64: The compare detected minor differences in a file member. In this case, one member has more records allocated. Excess allocated records are deleted. This difference does not affect replication processing.
- Diagnostic LVE3D65: Processing failed for the selected member. The member cannot be compared. Error message LVE0101 is returned.
- Escape LVE3358: The compare ended abnormally. This message is shown only when the conditions of messages LVI3D59, LVE3D5D, and LVE3D59 do not apply.
- Escape LVE3D5D: Insignificant differences were found or remain after repair. The message provides a statistical summary of the differences found. Insignificant differences may occur when a member has deleted records while the corresponding member has no records yet allocated at the corresponding positions. It is also possible that one or more selected members contains excluded fields, such as large objects (LOBs).
- Escape LVE3D5E: The compare request ended because the data group was not fully active. The request included active processing (ACTIVE), which requires a fully active data group. Output may not be complete or accurate.
- Escape LVE3D5F: The apply session exceeded the specified threshold for unprocessed entries. The DB apply threshold (DBAPYTHLD) parameter determines what action should be taken when the threshold is exceeded. In this case, the value *END was specified for DBAPYTHLD, thereby ending the requested compare and repair action.
- Escape LVE3D59: Significant differences were found or remain after repair, or one or more selected members could not be compared. The message provides a statistical summary of the differences found.
- Escape LVE3D56: No member was selected by the object selection criteria.
- Escape LVE3D60: The status of the data group could not be determined. The WRKDG (MXDGSTS) outfile returned a value of *UNKNOWN for one or more fields used in determining the overall status of the data group.
- Escape LVE3D62: Identifies the number of mismatches that will not be fully processed for a file because of the large number of mismatches found for this request. The compare stops processing the affected file and continues to process any other files specified on the same request.
- Escape LVE3D67: The value specified for the File entry status (STATUS) parameter is not valid. To process members in *HLDERR status, a data group must be specified on the command and *YES must be specified for the Process while active parameter.
- Escape LVE3D68: A switch cannot be performed because members are undergoing compare and repair processing.
- Escape LVE3D69: The data group is not configured for database. Data groups used with the CMPFILDTA command must be configured for database, and all processes for that data group must be active.
- Escape LVE3D6C: The CMPFILDTA command ended before it could complete the requested action. The processing step in progress when the end was received is indicated. The message provides a statistical summary of the differences found.
- Escape LVE3E41: A database apply job cannot process a journal entry with the indicated code, type, and sequence number because a supporting function failed. The journal information and the apply session for the data group are indicated. See the database apply job log for details of the failed function.
- Informational LVI3727: The database apply process (DBAPY) is currently processing a repair request for a specific member. The member was previously held due to error (*HLDERR) and is now in *CMPRLS state.
- Informational LVI3728: The database apply process (DBAPY) is currently processing a repair request for a specific member. The member was previously held due to error (*HLDERR) and has been changed from *CMPRLS to *CMPACT state.
- Informational LVI3729: The repair request for a specific member was not successful. As a result, the CMPFILDTA command has changed the data group file entry for the member back to *HLDERR status.
- Informational LVI372C: The CMPFILDTA command is ending in a controlled manner because of a user request. The command did not complete the requested compare or repair. Its output may be incomplete or incorrect.
- Informational LVI372D: The CMPFILDTA command exceeded the maximum rule recovery time policy and is ending. The command did not complete the requested compare or repair. Its output may be incomplete or incorrect.
- Informational LVI372E: The CMPFILDTA command is ending unexpectedly. It received an unexpected request from the remote CMPFILDTA job to shut down. The command did not complete the requested compare or repair. Its output may be incomplete or incorrect.
- Informational LVI3D4B: Work files are not automatically deleted because the time specified on the Wait time (seconds) (ACTWAIT) prompt expired or an internal error occurred.
- Informational LVI3D59: The CMPFILDTA command completed successfully. The message also provides a statistical summary of compare processing.
- Informational LVI3D5E: The compare request ended because the request required active processing and the data group was not active. Results of the comparison may not be complete or accurate.
- Informational LVI3D5F: The apply session exceeded the specified threshold for unprocessed entries, thereby ending the requested compare and repair action. In this case, the value *END was specified for the DB apply threshold (DBAPYTHLD) parameter, which determines what action should be taken when the threshold is exceeded.
- Informational LVI3D60: The status of the data group could not be determined. The MXDGSTS outfile returned a value of *UNKNOWN for one or more status fields associated with systems, journals, system managers, journal managers, system communications, the remote journal link, and database send and apply processes.
- Informational LVI3E06: The data group specified contains no data group file entries.
When active processing is requested with ACTWAIT(*NONE), or when the active wait time expires, some members will have unconfirmed differences if none of the differences initially found were verified by the MIMIX database apply process. The CMPFILDTA outfile contains more detail on the results of each member compare, including information on the types of differences that are found and the number of differences found in each member. Messages LVI3D59, LVE3D5D, and LVE3D59 include message data containing the number of members selected, the number of members compared, the number of members with confirmed differences, the number of members with unconfirmed differences, the number of members successfully repaired, and the number of members for which repair was unsuccessful.
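The outcome classes reported by messages LVI3D59, LVE3D5D, and LVE3D59 can be modeled as a simple classification of the summary counts. The field names below are illustrative assumptions; the actual message data layout is defined by the product.

```python
# Sketch: classifying a CMPFILDTA result summary into the outcome
# classes described above. Field names are illustrative assumptions.

def summarize(stats):
    """Map difference counts to a one-line compare outcome."""
    if stats["confirmed_diffs"] == 0 and stats["unconfirmed_diffs"] == 0:
        return "identical or fully repaired"            # LVI3D59 territory
    if stats["confirmed_diffs"] == 0:
        return "insignificant or unconfirmed differences"  # LVE3D5D territory
    return "significant differences remain"                # LVE3D59 territory

print(summarize({"confirmed_diffs": 0, "unconfirmed_diffs": 0}))
print(summarize({"confirmed_diffs": 2, "unconfirmed_diffs": 0}))
```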
Updated for 5.0.02.00.
Output parameter
Some commands can produce output of more than one type: display, print, or output file. In these cases, the selection is made with the Output parameter. Table 68 lists the values supported by the Output parameter.
Note: Not all values are supported for all commands. For some commands, a combination of values is supported.
Table 68. Values supported by the Output parameter

*         Display only
*NONE     No output is generated
*PRINT    Spooled output is generated
*OUTFILE  An output file is generated
*BOTH     Both spooled output and an output file are generated
Commands that support OUTPUT(*) and can also run in batch are required to support the other forms of output as well. Commands called from a program or submitted to batch with a specification of OUTPUT(*) default to OUTPUT(*PRINT); displaying a panel during batch processing, or when called from another program, would otherwise fail. With the exception of messages generated as a result of running a command, commands that support OUTPUT(*NONE) generate no other forms of output. Commands that support combinations of output values do not support OUTPUT(*) in combination with other output values.
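The batch defaulting rule above can be sketched as a small resolution function; the function name and the interactive flag are illustrative, not part of the product.

```python
# Sketch of the OUTPUT defaulting rule: OUTPUT(*) is only meaningful
# interactively, so in batch (or when called from a program) it falls
# back to OUTPUT(*PRINT). Names are illustrative assumptions.

def effective_output(requested, interactive):
    """Resolve the OUTPUT value a command would actually use."""
    if requested == "*" and not interactive:
        return "*PRINT"      # a display panel cannot be shown in batch
    return requested

print(effective_output("*", interactive=False))         # *PRINT
print(effective_output("*", interactive=True))          # *
print(effective_output("*OUTFILE", interactive=False))  # *OUTFILE
```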
Display output
Commands that support OUTPUT(*) provide the ability to display information interactively. Display (DSP) and Work (WRK) commands commonly use display support. Display commands typically display detailed information for a specific entity, such as a data group definition. Work commands display a list of entries and provide a summary view of each entry. Display support is required to work interactively with the MIMIX product. Work commands often provide subsetting capabilities that allow you to select a subset of information. For example, rather than viewing all configuration entries for all data groups, subsetting allows you to view the configuration entries for a specific data group. This ability allows you to easily view the data that is important or relevant to you at a given time.
Print output
Spooled output is generated by specifying OUTPUT(*PRINT) and is intended to provide a readable form of output for print or distribution purposes. Most Display (DSP) and Work (WRK) commands support this form of output. Other commands, such as the Compare (CMP) and Verify (VFY) commands, also support spooled output in most cases.
The Work (WRK) and Display (DSP) commands support different categories of reports. The following are standard categories of reports available from these commands:
- The detail report contains information for one item, such as an object, definition, or entry. A detail report is usually obtained by using option 6 (Print) on a Work (WRK) display, or by specifying *PRINT on the Output parameter of a Display (DSP) command.
- The list summary report contains summary information for multiple objects, definitions, or entries. A list summary is usually obtained by pressing F21 (Print) on a Work (WRK) display. You can also get this report by specifying *BASIC on the Detail parameter of a Work (WRK) command.
- The list detail report contains detailed information for multiple objects, definitions, or entries. A list detail report is usually obtained by specifying *PRINT on the Output parameter of a Work (WRK) command.
Certain parameters, which vary from command to command, can affect the contents of spooled output. The following parameters directly impact spooled output:
- EXPAND(*YES or *NO) - The expand parameter is available on the Work with Data Group Object Entries (WRKDGOBJE), Work with Data Group IFS Entries (WRKDGIFSE), and Work with Data Group DLO Entries (WRKDGDLOE) commands. Configuration for objects, IFS objects, and DLOs can be accomplished using generic entries, which represent one or more actual objects on the system. The object entry ABC*, for example, can represent many entries on a system. Expand support provides a means to determine which actual objects on a system are represented by a MIMIX configuration. Specifying *NO on the EXPAND parameter prints the configured data group entries.
- DETAIL(*FULL or *BASIC) - Available on the Work (WRK) commands, the detail option determines the level of detail in the generated spooled file. Specifying DETAIL(*BASIC) prints a summary list of entries. For example, this specification on the Work with Data Group Definitions (WRKDGDFN) command prints a summary list of data group definitions. Specifying DETAIL(*FULL) prints each data group definition in detail, including all attributes of the data group definition. Note: This parameter is ignored when OUTPUT(*) or OUTPUT(*OUTFILE) is specified.
- RPTTYPE(*DIF, *ALL, *SUMMARY, or *RRN, depending on the command) - The Report Type (RPTTYPE) parameter controls the amount of information in the spooled file. The values available for this parameter vary, depending on the command. The values *DIF, *ALL, and *SUMMARY are available on the Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA) commands. Specifying *DIF reports only detected differences. A value of *SUMMARY reports a summary of objects compared, including an indication of differences detected. *ALL provides a comprehensive listing of objects compared as well as difference detail.
The Compare File Data (CMPFILDTA) command supports the *DIF and *ALL values, as well as the value *RRN. Specifying *RRN outputs the relative record numbers of the first 1,000 records that failed to compare. Using the *RRN value can help resolve situations where a discrepancy is known to exist but you are unsure which system contains the correct data. In this case, *RRN provides the information that enables you to display the specific records on the two systems and determine the system on which the file should be repaired.
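The effect of the three attribute-compare RPTTYPE values can be sketched as a filter over compare results; the record layout shown is an illustrative assumption, not the product's outfile format.

```python
# Sketch of how the RPTTYPE values shape report content:
# *DIF lists only differences, *ALL lists everything, *SUMMARY
# reports counts plus a difference indicator. Illustrative only.

results = [
    {"object": "FILEA", "different": True},
    {"object": "FILEB", "different": False},
]

def report(results, rpttype):
    if rpttype == "*DIF":        # only detected differences
        return [r for r in results if r["different"]]
    if rpttype == "*ALL":        # every compared object plus difference detail
        return results
    if rpttype == "*SUMMARY":    # counts only, with a difference indication
        return {"compared": len(results),
                "differences": sum(r["different"] for r in results)}
    raise ValueError(rpttype)

print(report(results, "*DIF"))       # [{'object': 'FILEA', 'different': True}]
print(report(results, "*SUMMARY"))   # {'compared': 2, 'differences': 1}
```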
File output
Output files can be generated by specifying OUTPUT(*OUTFILE). Full outfile support across the MIMIX product is a key enabler for advanced automation. It also allows MIMIX customers and qualified MIMIX consultants to develop and deliver solutions tailored to the individual needs of the user.

As with the other forms of output, output files are commonly supported across certain classes of commands. The Work (WRK) commands commonly support output files and provide access to the majority of MIMIX configuration and status-related data. Many audit-based commands, such as the Compare (CMP) commands, also provide output file support as a key enabler for automatic error detection and correction capabilities.

When you specify OUTPUT(*OUTFILE), you must also specify the OUTFILE and OUTMBR parameters. The OUTFILE parameter requires a qualified file and library name. As a result of running the command, the specified output file is used. If the file does not exist, it is automatically created. Note: If a new file is created for CMPFILA, for example, the record format used is from the Lakeview-supplied model database file MXCMPFILA, found in the installation library. The text description of the created file is Output file for CMPFILA. The file cannot reside in the product library.

The Outmember (OUTMBR) parameter allows you to specify which member to use in the output file. If no member exists, the default value of *FIRST creates a member with the same name as the file. A second element on the Outmember parameter indicates how information is stored for an existing member. A value of *REPLACE clears the current contents of the member and adds the new records. A value of *ADD appends the new records to the existing data.
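The *REPLACE and *ADD member semantics described above can be sketched with a dictionary of lists standing in for an output file's members; all names here are illustrative assumptions.

```python
# Sketch of the OUTMBR second-element semantics: *REPLACE clears the
# member before writing, *ADD appends to existing data. A dict of lists
# stands in for a physical file's members. Illustrative only.

def write_member(outfile, member, records, option="*REPLACE"):
    if member == "*FIRST":
        member = "OUTPUT"        # stand-in for the created member name
    if option == "*REPLACE":
        outfile[member] = list(records)                  # clear, then add
    elif option == "*ADD":
        outfile.setdefault(member, []).extend(records)   # append
    return outfile

f = {}
write_member(f, "M1", ["r1", "r2"], "*REPLACE")
write_member(f, "M1", ["r3"], "*ADD")
print(f["M1"])   # ['r1', 'r2', 'r3']
```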
Expand support: Expand support was developed specifically for data group configuration entries that support generic specifications. Data group object entries, IFS entries, and DLO entries can all be configured using generic name values. For example, if you specify an object entry with an object name of ABC* in library XYZ and accept the default values for all other fields, all objects in library XYZ whose names begin with ABC are replicated. Specifying EXPAND(*NO) writes the specific configuration entries to the output files. Specifying EXPAND(*YES) lists all objects from the local system that match the configuration specified. Thus, if object name ABC* for library XYZ represented 1000 actual objects on the system, EXPAND(*YES) would add 1000 rows to the output file, while EXPAND(*NO) would add a single generic entry. Note: EXPAND(*YES) support locates all objects on the local system.
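The difference between EXPAND(*NO) and EXPAND(*YES) can be sketched with a simple generic-name match; fnmatch is used here only to approximate generic-name (trailing *) semantics, and the function names are illustrative.

```python
# Sketch of EXPAND semantics: a generic entry such as ABC* is expanded
# to every matching object on the local system with EXPAND(*YES), while
# EXPAND(*NO) emits the single configured entry. Illustrative only.
import fnmatch

def expand_entry(entry, objects_on_system, expand):
    if not expand:
        return [entry]     # EXPAND(*NO): one row for the generic entry
    # EXPAND(*YES): every actual object matched by the generic name
    return [o for o in objects_on_system if fnmatch.fnmatch(o, entry)]

objs = ["ABC1", "ABC2", "XYZ9"]
print(expand_entry("ABC*", objs, expand=False))   # ['ABC*']
print(expand_entry("ABC*", objs, expand=True))    # ['ABC1', 'ABC2']
```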
- Supports sending and receiving local data area (LDA) data.
- Allows commands to be run under other user profiles as long as the user ID and password are the same on both systems. The password is validated before the command is run on the remote system; the user must have authority to the user profile being used.
4. Specify the commands to run or messages to monitor for the command as follows:
   a. At the Command prompt, specify the command to run on the remote system. When using the RUNCMDS command, you can specify up to 300 commands.
   b. If you are using the RUNCMDS command, you can specify as many as ten escape, notify, or status messages to be monitored for each command. Specify these at the Monitor for messages prompt.
5. Specify the protocol and protocol-specific implementation using Table 69.
Table 69. Specific protocols and specifications used for RUNCMD and RUNCMDS

*LOCAL - At the Protocol prompt, specify *LOCAL.
*TCP - Do the following:
1. At the Protocol prompt, specify *TCP to run the commands using Transmission Control Protocol/Internet Protocol (TCP/IP) communications. Press Enter for additional prompts.
2. At the Host name or address prompt, specify the host alias or address for the TCP protocol.
3. At the Port number or alias prompt, specify the port number or port alias on the local system used to communicate with the remote system. This value is a 14-character mixed-case TCP port alias or port number.

*SNA - Do the following:
1. At the Protocol prompt, specify *SNA to run the commands using Systems Network Architecture (SNA) communications. Press Enter for additional prompts.
2. At the Remote location prompt, specify the name or address of the remote location.
3. At the Local location prompt, specify the unique location name that identifies the system to remote devices.
4. At the Remote network identifier prompt, specify the remote network identifier.
5. At the Mode prompt, specify the name of the mode description used for communications. The product default for this parameter is MIMIX.

*OPTI - Do the following:
1. At the Protocol prompt, specify *OPTI to run the commands using OptiConnect fiber optic network communications. Press Enter for additional prompts.
2. At the Remote location prompt, specify the name or address of the remote location.
6. Do one of the following:
   - To access additional options, skip to Step 7.
   - To run the commands or monitor for messages, press Enter.
7. Press F10 (Additional parameters).
8. At the Check syntax prompt, specify whether to check the syntax of the command only. If *YES is specified, the syntax is checked but the command is not run.
9. At the Local data area length prompt, specify the amount of the current local data area (LDA) to copy. This is useful for automating application processing that is dependent on the local data area and for passing binary information to command programs.
10. At the Return LDA prompt, specify whether to return the contents of the local data area (LDA) from the remote system after the commands are run. The value specified at the Local data area length prompt in Step 9 determines how much data is returned.
11. At the User prompt, specify the user profile to use when the command is run on the remote system.
12. To run the commands or monitor for messages, press Enter.
4. Specify the commands to run or messages to monitor for the command as follows:
   a. At the Command prompt, specify the command to run on the remote system. When using the RUNCMDS command, you can specify up to 300 commands.
   b. If you are using the RUNCMDS command, you can specify as many as ten escape, notify, or status messages to be monitored for each command. Specify these at the Monitor for messages prompt.
5. Specify the MIMIX configuration element using Table 70.
Table 70. MIMIX configuration protocols and specifications

*SYSDFN - Run on the system defined by the default transfer definition. Specify the name of the system definition or press F4 for a list of valid definitions. Press Enter for additional prompts.

*TFRDFN - Run on the system specified in the transfer definition (TFRDFN parameter) that is not the local system. Press F1 (Help) for assistance in specifying the three-part qualified name of the transfer definition. Press Enter for additional prompts.

*DGDFN - Run on the system specified in the data group definition that is not the local system. Press F1 (Help) for assistance in specifying the three-part qualified name of the data group definition.

*DGSRC - Run on the current source system defined for the data group. Press F1 (Help) for assistance in specifying the three-part qualified name of the data group definition.

*DGTGT - Run on the current target system defined for the data group. Press F1 (Help) for assistance in specifying the three-part qualified name of the data group definition.

*DGJRN - Run by the database apply process when the journal entry is processed. Press F1 (Help) for assistance in specifying the three-part qualified name of the data group definition.

*DGSYS1 - Run on the system defined as System 1 for the data group. Press F1 (Help) for assistance in specifying the three-part qualified name of the data group definition.

*DGSYS2 - Run on the system defined as System 2 for the data group. Press F1 (Help) for assistance in specifying the three-part qualified name of the data group definition.
6. Do one of the following:
   - To access additional options, skip to Step 7.
   - To run the commands or monitor for messages, press Enter.
7. Press F10 (Additional parameters).
8. At the Check syntax only prompt, specify whether to check the syntax of the command only. If *YES is specified, the syntax is checked but the command is not run.
9. At the Local data area length prompt, specify the amount of the current local data area (LDA) to copy. This is useful for automating application processing that is dependent on the local data area and for passing binary information to command programs.
10. At the Return LDA prompt, specify whether to return the contents of the local data area (LDA) from the remote system after the commands are run. The value specified at the Local data area length prompt in Step 9 determines how much data is returned.
11. At the User prompt, specify the user profile to use when the command is run on the remote system.
12. If you specified *DGJRN for the Protocol prompt, you will see the File prompts. Do the following:
   a. At the File name prompt, specify the name of the file to use when the journal entry generated by the commands is sent. Note: Use these prompts if you want the command to run in the database apply job associated with the named file. If a file is not specified, database apply (DBAPY) session A is selected.
   b. At the Library prompt, specify the name of the library associated with the file.
13. If you specified a file name for the File prompt, you will see the When to run prompt. Using Table 71, specify when the journal entry associated with the command is processed by the target system for the specified data group.
14. To run the commands or monitor for messages, press Enter.
Table 71. Options for processing journal entries with MIMIX *DGJRN protocol

To run the command when the database apply job for the specified file receives the journal entry:
1. At the Protocol prompt, specify *DGJRN.
2. At the When to run prompt, specify *RCV.

To run the command in sequence with all other entries for the file:
1. At the Protocol prompt, specify *DGJRN.
2. At the When to run prompt, specify *APY.
Chapter 22
MIMIX also supports a generic interface to existing database and object replication process exit points that provides enhanced filtering capability on the source system. This generic user exit capability is only available through a Certified MIMIX Consultant.
The Using MIMIX Monitor book documents the user exit points, the API, and MIMIX Model Switch Framework.
Table 73. MIMIX Monitor exit points

Interface exit points:
Pre-create, Post-create, Pre-change, Post-change, Pre-copy, Post-copy, Pre-delete, Post-delete, Pre-display, Post-display, Pre-print, Post-print, Pre-rename, Post-rename, Pre-start, Post-start, Pre-end, Post-end, Pre-work with information, Post-work with information, Pre-hold, Post-hold, Pre-release, Post-release, Pre-status, Post-status, Pre-change status, Post-change status, Pre-run, Post-run, Pre-export, Post-export, Pre-import, Post-import

Condition check exit points:
After pre-defined condition check, After condition check (pre-defined and user-defined)

Data exit points (The data exit service program supports these exit points.)
the name of the first entry in the currently attached journal receiver.)
Restrictions for Change Management Exit Points: The following restrictions apply when the exit program is called from either of the change management exit points:
- Do not include the Change Data Group Receiver (CHGDGRCV) command in your exit program.
- Do not submit batch jobs for journal receiver change or delete management from the exit program. Submitting a batch job would allow the in-line exit point processing to continue and potentially return to normal MIMIX journal management processing, conflicting with journal manager operations. By not submitting journal receiver change management to a batch job, you prevent a potential problem where the journal receiver is locked when it is accessed by a batch program.
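Work that is unrelated to journal management can still be submitted to batch from the exit program so that control returns to MIMIX promptly. The program, job, and job queue names in this sketch are hypothetical, and the submitted work must not perform receiver change or delete management, per the restrictions above:

   SBMJOB CMD(CALL PGM(MYLIB/LONGWORK)) JOB(EXITWORK) JOBQ(QGPL/QBATCH)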
program fails and signals an exception to MIMIX, MIMIX processing continues as if the exit program had not been specified.
Attention: Using this exit program can introduce long, undesirable delays in MIMIX processing. When the exit program is called, MIMIX passes control to the exit program and does not continue change management or delete management processing until the exit program returns. Consider placing long-running processes that will not affect journal management in a batch job that is called by the exit program.
Return Code
OUTPUT; CHAR (1)
This value indicates how to continue processing the journal receiver when the exit program returns control to the MIMIX process. This parameter must be set. When the exit program is called from Function C2, the value of the return code is ignored. Possible values are:
0   Do not continue with MIMIX journal management processing for this journal receiver.
1   Continue with MIMIX journal management processing.
Function
INPUT; CHAR (2)
The exit point from which this exit program is called. Possible values are:
C1   Pre-change exit point for receiver change management.
C2   Post-change exit point for receiver change management.
D0   Pre-check exit point for receiver delete management.
D1   Pre-delete exit point for receiver delete management.
D2   Post-delete exit point for receiver delete management.
Note: Restrictions for exit programs called from the C1 and C2 exit points are described within topic Change management exit points on page 541. Journal Definition
INPUT; CHAR (10)
The name of the journal definition.
System
INPUT; CHAR (8)
The name of the system defined to MIMIX on which the journal is defined.
Reserved1
INPUT; CHAR (10)
Journal Name
INPUT; CHAR (10)
The name of the journal.
Journal Library
INPUT; CHAR (10)
The name of the library in which the journal is located. Receiver Name
INPUT; CHAR (10)
The name of the journal receiver associated with the specified journal. This is the journal receiver on which journal management functions will operate. For receiver change management functions, this always refers to the currently attached journal receiver. For receiver delete management functions, this always refers to the same journal receiver. Receiver Library
INPUT; CHAR (10)
The name of the library in which the journal receiver is located.
Sequence Option
INPUT; CHAR (6)
The value of the Sequence option (SEQOPT) parameter on the CHGJRN command that MIMIX processing would have used to change the journal receiver. Lakeview Technology recommends that you specify this parameter to prevent synchronization problems if you change the journal receiver. This parameter is only used when the exit program is called at the C1 (pre-change) exit point. Possible values are:
*CONT    The journal sequence number of the next journal entry created is 1 greater than the sequence number of the last journal entry in the currently attached journal receiver.
*RESET   The journal sequence number of the first journal entry in the newly attached journal receiver is reset to 1. The exit program should either reset the sequence number or set the return code to 0 to allow MIMIX to change the journal receiver and reset the sequence number.
Threshold Value
INPUT; DECIMAL(15, 5)
The value to use for the THRESHOLD parameter on the CRTJRNRCV command. This parameter is only used when the exit program is called at the C1 (pre-change) exit point. Possible values are:
0       Do not change the threshold value. The exit program must not change the threshold size for the journal receiver.
value   The exit program must create a journal receiver with this threshold value, specified in kilobytes. The exit program must also change the journal to use that receiver, or send a return code value of 0 so that MIMIX processing can change the journal receiver.
Reserved2
INPUT; CHAR (1)
Reserved3
INPUT; CHAR (1)
/*--------------------------------------------------------------*/
/* Program....: DMJREXIT                                        */
/* Description: Example user exit program using CL              */
/*--------------------------------------------------------------*/
PGM PARM(&RETURN &FUNCTION &JRNDEF &SYSTEM +
         &RESERVED1 &JRNNAME &JRNLIB &RCVNAME +
         &RCVLIB &SEQOPT &THRESHOLD &RESERVED2 +
         &RESERVED3)
DCL VAR(&RETURN)    TYPE(*CHAR) LEN(1)
DCL VAR(&FUNCTION)  TYPE(*CHAR) LEN(2)
DCL VAR(&JRNDEF)    TYPE(*CHAR) LEN(10)
DCL VAR(&SYSTEM)    TYPE(*CHAR) LEN(8)
DCL VAR(&RESERVED1) TYPE(*CHAR) LEN(10)
DCL VAR(&JRNNAME)   TYPE(*CHAR) LEN(10)
DCL VAR(&JRNLIB)    TYPE(*CHAR) LEN(10)
DCL VAR(&RCVNAME)   TYPE(*CHAR) LEN(10)
DCL VAR(&RCVLIB)    TYPE(*CHAR) LEN(10)
DCL VAR(&SEQOPT)    TYPE(*CHAR) LEN(6)
DCL VAR(&THRESHOLD) TYPE(*DEC)  LEN(15 5)
DCL VAR(&RESERVED2) TYPE(*CHAR) LEN(1)
DCL VAR(&RESERVED3) TYPE(*CHAR) LEN(1)
Table 75.
/*--------------------------------------------------------------*/
/* Constants and misc. variables                                */
/*--------------------------------------------------------------*/
DCL VAR(&STOP)     TYPE(*CHAR) LEN(1) VALUE('0')
DCL VAR(&CONTINUE) TYPE(*CHAR) LEN(1) VALUE('1')
DCL VAR(&PRECHG)   TYPE(*CHAR) LEN(2) VALUE('C1')
DCL VAR(&POSTCHG)  TYPE(*CHAR) LEN(2) VALUE('C2')
DCL VAR(&PRECHK)   TYPE(*CHAR) LEN(2) VALUE('D0')
DCL VAR(&PREDLT)   TYPE(*CHAR) LEN(2) VALUE('D1')
DCL VAR(&POSTDLT)  TYPE(*CHAR) LEN(2) VALUE('D2')
DCL VAR(&RTNJRNE)  TYPE(*CHAR) LEN(165)
DCL VAR(&PRVRCV)   TYPE(*CHAR) LEN(10)
DCL VAR(&PRVRLIB)  TYPE(*CHAR) LEN(10)
/*--------------------------------------------------------------*/
/* MAIN                                                         */
/*--------------------------------------------------------------*/
CHGVAR &RETURN &CONTINUE /* Continue processing receiver */
/*--------------------------------------------------------------*/
/* Handle processing for the pre-change exit point.             */
/*--------------------------------------------------------------*/
IF (&FUNCTION *EQ &PRECHG) THEN(DO)
/*--------------------------------------------------------------*/
/* If the journal library is my library (MYLIB), exit program   */
/* will do the changing of the receivers.                       */
/*--------------------------------------------------------------*/
IF (&JRNLIB *EQ 'MYLIB') THEN(DO)
IF (&THRESHOLD *GT 0) THEN(DO)
CRTJRNRCV JRNRCV(&RCVLIB/NEWRCV0000) +
          THRESHOLD(&THRESHOLD)
CHGJRN JRN(&JRNLIB/&JRNNAME) +
       JRNRCV(&RCVLIB/NEWRCV0000) SEQOPT(&SEQOPT)
ENDDO /* There has been a threshold change */
ELSE (CHGJRN JRN(&JRNLIB/&JRNNAME) JRNRCV(*GEN) +
      SEQOPT(&SEQOPT)) /* No threshold change */
CHGVAR &RETURN &STOP /* Stop processing entry */
ENDDO /* &JRNLIB is MYLIB */
ENDDO /* &FUNCTION *EQ &PRECHG */
/*--------------------------------------------------------------*/
/* At the post-change user exit point if the journal library is */
/* ABCLIB, save the just detached journal receiver.             */
/*--------------------------------------------------------------*/
ELSE IF (&FUNCTION *EQ &POSTCHG) THEN(DO)
IF COND(&JRNLIB *EQ 'ABCLIB') THEN(DO)
RTVJRNE JRN(&JRNLIB/&JRNNAME) +
        RCVRNG(&RCVLIB/&RCVNAME) FROMENT(*FIRST) +
        RTNJRNE(&RTNJRNE)
/*----------------------------------------------------------*/
/* Retrieve the journal entry, extract the previous receiver*/
/* name and library to do the save with.                    */
/*----------------------------------------------------------*/
CHGVAR &PRVRCV (%SUBSTRING(&RTNJRNE 126 10))
CHGVAR &PRVRLIB (%SUBSTRING(&RTNJRNE 136 10))
SAVOBJ OBJ(&PRVRCV) LIB(&PRVRLIB) DEV(TAP02) +
       OBJTYPE(*JRNRCV) /* Save detached receiver */
ENDDO /* &JRNLIB is ABCLIB */
ENDDO /* &FUNCTION is &POSTCHG */
/*--------------------------------------------------------------*/
/* Handle processing for the pre-check exit point.              */
/*--------------------------------------------------------------*/
ELSE IF (&FUNCTION *EQ &PRECHK) THEN(DO)
IF (&JRNLIB *EQ 'TEAMLIB') THEN( +
   SAVOBJ OBJ(&RCVNAME) LIB(&RCVLIB) DEV(TAP01) +
   OBJTYPE(*JRNRCV))
ENDDO /* &FUNCTION is &PRECHK */
ENDPGM
Appendix A
Object Type   Description                                    Replicated
*JOBSCD       Job schedule                                   Yes
*JRN          Journal                                        No (7)
*JRNRCV       Journal receiver                               No (7)
*LIB          Library                                        Yes (4)
*LIND         Line description                               Yes (1)
*LOCALE       Locale space                                   Yes
*M36          AS/400 Advanced 36 machine                     No (8)
*M36CFG       AS/400 Advanced 36 machine configuration       No (8)
*MEDDFN       Media definition                               Yes
*MENU         Menu                                           Yes
*MGTCOL       Management collection                          Yes
*MODD         Mode description                               Yes
*MODULE       Module                                         Yes
*MSGF         Message file                                   Yes
*MSGQ         Message queue                                  Yes (4)
*NODGRP       Node group                                     No (9)
*NODL         Node list                                      Yes
*NTBD         NetBIOS description                            Yes
*NWID         Network interface description                  Yes (1)
*NWSD         Network server description                     Yes
*OOPOOL       Persistent pool (for OO objects)               No
*OUTQ         Output queue                                   Yes (4, 5)
*OVL          Overlay                                        Yes
*PAGDFN       Page definition                                Yes
*PAGSEG       Page segment                                   Yes
*PDG          Print descriptor group                         Yes
*PGM          Program                                        Yes (12)
*PNLGRP       Panel group                                    Yes
*PRDAVL       Product availability                           No (6)
*PRDDFN       Product definition                             No (6)
*PRDLOD       Product load                                   No (6)
*PSFCFG       Print Services Facility (PSF) configuration    Yes
*QMFORM       Query management form                          Yes
*QMQRY        Query management query                         Yes
*QRYDFN       Query definition                               Yes
*RCT          Reference code translate table                 No (9)
*S36          System/36 machine description                  No (9)
*SBSD         Subsystem description                          Yes
*SCHIDX       Search index                                   Yes
*SOCKET       Local socket                                   No
*SOMOBJ       System Object Model (SOM) object               No
*SPADCT       Spelling aid dictionary                        Yes
*SPLF         Spool file                                     Yes
*SQLPKG       Structured query language package              Yes
*SQLUDT       User-defined SQL type                          Yes
*SRVPGM       Service program                                Yes
*SSND         Session description                            Yes
*STMF         Bytestream file                                Yes (2)
*SVRSTG       Server storage space                           No (8)
*SYMLNK       Symbolic link                                  Yes (2)
*TBL          Table                                          Yes
Object Type   Description                       Replicated
*USRIDX       User index                        Yes
*USRPRF       User profile                      Yes
*USRQ         User queue                        Yes (4)
*USRSPC       User space                        Yes (10)
*VLDL         Validation list                   Yes (13)
*WSCST        Workstation customizing object    Yes

Notes:
1. Replicating configuration objects to a previous version of IBM i may cause unpredictable results.
2. Objects in QDLS, QSYS.LIB, QFileSvr.400, QLANSrv, QOPT, QNetWare, QNTC, QSR, and QFPNWSSTG file systems are not currently supported via Data Group IFS Entries. Objects in QSYS.LIB and QDLS are supported via Data Group Object Entries and Data Group DLO Entries. Excludes stream files associated with a server storage space.
3. File attribute types include: DDMF, DSPF, DSPF36, DSPF38, ICFF, LF, LF38, MXDF38, PF-DTA, PF-SRC, PF38-DTA, PF38-SRC, PRTF, PRTF38, and SAVF.
4. Content is not replicated.
5. Spooled files are replicated separately from the output queue.
6. These objects are system specific. Duplicating them could cause unpredictable results on the target system.
7. Duplicating these objects can potentially cause problems on the target system.
8. These objects are not duplicated due to size and IBM recommendation.
9. These object types can be supported by MIMIX for replication through the system journal, but are not currently included. Contact Lakeview Technology Support if you need support for these object types.
10. Changes made through external interfaces such as APIs and commands are replicated. Direct update of the content through a pointer is not supported.
11. The SQL field type of DATALINK is not supported. Files containing these types of fields must be excluded from replication.
12. To replicate *PGM objects to an earlier release of IBM i you must be able to save them to that earlier release of IBM i.
13. Device description attributes include: APPC, ASC, ASP, BSC, CRP, DKT, DSPLCL, DSPRMT, DSPVRT, FNC, HOST, INTR, MLB, NET, OPT, PRTLAN, PRTLCL, PRTRMT, PRTVRT, RTL, SNPTUP, SNPTDN, SNUF, and TAP.
Appendix B
Copying configurations
This section provides information about how you can copy configuration data between systems. Supported scenarios on page 552 identifies the scenarios supported in version 5 of MIMIX. Checklist: copy configuration on page 553 directs you through the correct order of steps for copying a configuration and completing the configuration. Copying configuration procedure on page 558 documents how to use the Copy Configuration Data (CPYCFGDTA) command.
Supported scenarios
The Copy Configuration Data (CPYCFGDTA) command supports copying configuration data from one library to another library on the same system. After MIMIX is installed, you can use the CPYCFGDTA command. The supported scenarios are as follows:
Table 76.
From: MIMIX version 5; MIMIX version 4 (see note 2)
Notes:
1. The installation you are copying to must be at the same or a higher level service pack.
2. V4R4 service pack SPC070.00.0 or higher must be installed.
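For example, a copy from an existing version 4 library into the current version 5 library might be entered as follows. The FROMLIB and TOLIB keyword names shown here are assumptions, not confirmed parameter names; prompt CPYCFGDTA with F4 to see its actual parameters:

   CPYCFGDTA FROMLIB(MIMIX4) TOLIB(MIMIX)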
7. Verify the data group definitions created have the correct job descriptions. Verify that the values of parameters for job descriptions are what you want to use. MIMIX provides default job descriptions that are tailored for their specific tasks.
   Note: You may have multiple data groups created that you no longer need. Consider whether you can combine information from multiple data groups into one data group. For example, it may be simpler to have both database files and objects for an application be controlled by one data group.
8. Verify that the options which control data group file entries are set appropriately.
   a. For data group definitions, ensure that the values for file entry options (FEOPT) are what you want as defaults for the data group.
   b. Check the file entry options specified in each data group file entry. Any file entry options (FEOPT) specified in a data group file entry will override the default FEOPT values specified in the data group definition. You may need to modify individual data group file entries.
9. Check the data group entries for each data group. Ensure that all of the files and objects that you need to replicate are represented by entries for the data group. Be certain that you have checked the data group entries for your critical files and objects. Use the procedures in the Using MIMIX book to verify your configuration.
10. Check how the apply sessions are mapped for data group file entries. You may need to adjust the apply sessions.
11. Use Table 78 to configure entries for any additional database files or objects that you need to add to the data group.
Table 78. How to configure data group entries for the preferred configuration

Library-based objects
Planning and requirements information:
- Identifying library-based objects for replication on page 100
- Identifying logical and physical files for replication on page 105
- Identifying data areas and data queues for replication on page 112
Do the following:
1. Create object entries using Creating data group object entries on page 267.
2. After creating object entries, load file entries for LF and PF (source and data) *FILE objects using Loading file entries from a data group's object entries on page 273.
   Note: If you cannot use MIMIX Dynamic Apply for logical files or PF data files, you should still create file entries for PF source files to ensure that legacy cooperative processing can be used.
3. After creating object entries, load object tracking entries for *DTAARA and *DTAQ objects that are journaled to a user journal. Use Loading object tracking entries on page 285.
IFS objects
Planning and requirements information: Identifying IFS objects for replication on page 118
Do the following:
1. Create IFS entries using Creating data group IFS entries on page 282.
2. After creating IFS entries, load IFS tracking entries for IFS objects that are journaled to a user journal. Use Loading IFS tracking entries on page 284.

DLOs
Do the following:
Create DLO entries using Creating data group DLO entries on page 287.
12. Use the #DGFE audit to confirm and automatically correct any problems found in file entries associated with data group object entries. Do the following:
    a. Type WRKAUD RULE(#DGFE) and press Enter.
    b. Next to the data group you want to confirm, type 9 (Run rule) and press Enter.
    c. The results are placed in an outfile. For additional information, see Interpreting results for configuration data - #DGFE audit on page 580.
13. If you anticipate a delay between configuring and starting the data group and the data group contains object information, set object auditing to ensure that any transactions that occur during the delay will be replicated. Use the procedure Setting data group auditing values manually on page 297.
14. Verify that system-level communications are configured correctly.
    a. If you are using SNA as a transfer protocol, verify that the MIMIX mode exists and that the communications entries are added to the MIMIXSBS subsystem.
    b. If you are using TCP as a transfer protocol, verify that the MIMIX TCP server is started on each system (on each "side" of the transfer definition). You can use the WRKACTJOB command for this. Look for a job under the MIMIXSBS subsystem with a function of LV-SERVER.
    c. Use the Verify Communications Link (VFYCMNLNK) command to ensure that a MIMIX installation on one system can communicate with a MIMIX installation on another system. Refer to topic Verifying the communications link for a data group on page 195.
15. Ensure that there are no users on the system that will be the source for replication for the rest of this procedure. Do not allow users onto the source system until you have successfully completed the last step of this procedure.
16. Start journaling using the following procedures as needed for your configuration.
    - For user journal replication, use Journaling for physical files on page 326 to start journaling on both source and target systems.
    - For IFS objects configured for advanced journaling, use Journaling for IFS objects on page 330.
    - For data areas or data queues configured for advanced journaling, use Journaling for data areas and data queues on page 334.
17. Synchronize the database files and objects on the systems between which replication occurs. Topic Performing the initial synchronization on page 483 includes instructions for how to establish a synchronization point and identifies the options available for synchronizing.
18. Start the system managers using topic Starting the system and journal managers on page 296.
19. Clear pending entries when you start the data groups. Use topic Starting Selected Data Group Processes in the Using MIMIX book.
Appendix C
configure the communications necessary for Intra, consider the default product library (MIMIX) to be the local system and the second product library (in this example, MIMIXI) to be the remote system. If you need to manually configure SNA communications for an Intra environment, do the following:
1. Create the system definitions for the product libraries used for Intra as follows:
   a. For the MIMIX library (local system), use the local location name in the following command:
      CRTSYSDFN SYSDFN(local-location-name) TYPE(*MGT) TEXT(Manual creation)
   b. For the MIMIXI library (remote system), use the following command:
      CRTSYSDFN SYSDFN(INTRA) TYPE(*NET) TEXT(Manual creation)
2. Create the transfer definition between the two product libraries with the following command:
   CRTTFRDFN TFRDFN(PRIMARY INTRA local-location-name) PROTOCOL(*SNA) LOCNAME1(INTRA1) LOCNAME2(INTRA2) NETID1(*LOC) TEXT(Manual creation)
3. Create the MIMIX mode description using the following command:
   CRTMODD MODD(MIMIX) MAXSSN(100) MAXCNV(100) LCLCTLSSN(12) TEXT('MIMIX INTRA MODE DESCRIPTION Manual creation.')
4. Create a controller description for MIMIX Intra using the following command:
   CRTCTLAPPC CTLD(MIMIXINTRA) LINKTYPE(*LOCAL) TEXT('MIMIX INTRA Manual creation.')
5. Create a local device description for MIMIX using the following command:
   CRTDEVAPPC DEVD(MIMIX) RMTLOCNAME(INTRA1) LCLLOCNAME(INTRA2) CTL(MIMIXINTRA) MODE(MIMIX) APPN(*NO) SECURELOC(*YES) TEXT('MIMIX INTRA Manual creation.')
6. Create a remote device description for MIMIX using the following command:
   CRTDEVAPPC DEVD(MIMIXI) RMTLOCNAME(INTRA2) LCLLOCNAME(INTRA1) CTL(MIMIXINTRA) MODE(MIMIX) APPN(*NO) SECURELOC(*YES) TEXT('MIMIX REMOTE INTRA SUPPORT.')
7. Add a communication entry to the MIMIXSBS subsystem for the local location using the following command:
   ADDCMNE SBSD(MIMIXQGPL/MIMIXSBS) RMTLOCNAME(INTRA2) JOBD(MIMIXQGPL/MIMIXCMN) DFTUSR(MIMIXOWN) MODE(MIMIX)
8. Add a communication entry to the MIMIXSBS subsystem for the remote location using the following command:
   ADDCMNE SBSD(MIMIXQGPL/MIMIXSBS) RMTLOCNAME(INTRA1) JOBD(MIMIXQGPL/MIMIXCMN) DFTUSR(MIMIXOWN) MODE(MIMIX)
9. Vary on the controller, local device, and remote device using the following
commands:
   VRYCFG CFGOBJ(MIMIXINTRA) CFGTYPE(*CTL) STATUS(*ON)
   VRYCFG CFGOBJ(MIMIX) CFGTYPE(*DEV) STATUS(*ON)
   VRYCFG CFGOBJ(MIMIXI) CFGTYPE(*DEV) STATUS(*ON)
10. Start the MIMIX system manager in both product libraries using the following commands:
   MIMIX/STRMMXMGR SYSDFN(*INTRA) MGR(*ALL)
   MIMIX/STRMMXMGR SYSDFN(*LOCAL) MGR(*JRN)
Note: You still need to configure journal definitions and data group definitions.
2. Create the transfer definition between the two product libraries with the following command. Note that the values for PORT1 and PORT2 must be unique.
   MIMIX/CRTTFRDFN TFRDFN(PRIMARY SOURCE INTRA) HOST1(SOURCE) HOST2(INTRA) PORT1(55501) PORT2(55502)
3. Create auto-start jobs in the MIMIX subsystem for the port associated with each library so that the MIMIX TCP server is started automatically when the subsystem is started.
   a. Within the MIMIX library, use the commands:
      CRTDUPOBJ OBJ(MIMIXCMN) FROMLIB(MIMIXQGPL) OBJTYPE(*JOBD) TOLIB(MIMIX) NEWOBJ(PORT55501)
      CHGJOBD JOBD(MIMIX/PORT55501) RQSDTA('MIMIX/STRSVR HOST(SOURCE) PORT(55501) JOBD(MIMIX/PORT55501)')
      ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(PORT55501) JOBD(MIMIX/PORT55501)
   b. Within the MIMIXI library, use the commands:
      CRTDUPOBJ OBJ(MIMIXCMN) FROMLIB(MIMIXQGPL) OBJTYPE(*JOBD) TOLIB(MIMIXI) NEWOBJ(PORT55502)
      CHGJOBD JOBD(MIMIXI/PORT55502) RQSDTA('MIMIXI/STRSVR HOST(INTRA) PORT(55502) JOBD(MIMIXI/PORT55502)')
      ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(PORT55502) JOBD(MIMIXI/PORT55502)
4. Start the server for the management system (source) by entering the following command:
   MIMIX/STRSVR HOST(SOURCE) PORT(55501) JOBD(MIMIX/PORT55501)
5. Start the server for the network system (Intra) by entering the following command:
   MIMIXI/STRSVR HOST(INTRA) PORT(55502) JOBD(MIMIXI/PORT55502)
6. Start the system managers from the management system by entering the following command:
   MIMIX/STRMMXMGR SYSDFN(INTRA) MGR(*ALL) RESET(*YES)
   Start the remaining managers normally.
Note: You will still need to configure journal definitions and data group definitions on the management system. You may want to add service table entries for ports 55501 and 55502 to ensure that other applications do not try to use these ports.
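The service table entries mentioned in the note can be added with the IBM Add Service Table Entry (ADDSRVTBLE) command. The service names shown here are arbitrary:

   ADDSRVTBLE SERVICE('mimix_intra_mgt') PORT(55501) PROTOCOL('tcp')
   ADDSRVTBLE SERVICE('mimix_intra_net') PORT(55502) PROTOCOL('tcp')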
Appendix D
ASPs
MIMIX has always supported replication of library-based objects and IFS objects to and from the system auxiliary storage pool (ASP 1) and basic storage pools (ASPs 2-32). Now, MIMIX also supports replication of library-based objects and IFS objects, including journaled IFS objects, data areas, and data queues, located in independent ASPs (33-255) (see note 1). The system ASP and basic ASPs are collectively known as SYSBAS. Figure 32 shows that MIMIX supports replication to and from SYSBAS and to and from independent ASPs. Figure 33 shows that MIMIX also supports replication from SYSBAS to an independent ASP and from an independent ASP to SYSBAS.
Figure 32. MIMIX supports replication to and from an independent ASP as well as standard replication to and from SYSBAS (the system ASP and basic ASPs).
Figure 33. MIMIX also supports replication between SYSBAS and an independent ASP.
1. An independent ASP is an iSeries construct introduced by IBM in V5R1 and extended in V5R2 of i5/OS.
Restrictions: There are several permanent and temporary restrictions that pertain to replication when an independent ASP is included in the MIMIX configuration. See Requirements for replicating from independent ASPs on page 567 and Limitations and restrictions for independent ASP support on page 567.
MIMIX provides a robust solution for high availability and disaster recovery for data stored in independent ASPs.
User ASPs are additional ASPs defined by the user. A user ASP can be either a basic ASP or an independent ASP.

One type of user ASP is the basic ASP. Data that resides in a basic ASP is always accessible whenever the server is running. Basic ASPs are identified as ASPs 2 through 32. Attributes, such as those for spooled files, authorization, and ownership of an object, stored in a basic ASP reside in the system ASP. When storage for a basic ASP is filled, the data overflows into the system ASP. Collectively, the system ASP and the basic ASPs are called SYSBAS.

Another type of user ASP is the independent ASP. Identified by device name and numbered 33 through 255, an independent ASP can be made available or unavailable to the server without restarting the system. Unlike basic ASPs, data in an independent ASP cannot overflow into the system ASP. Independent ASPs are configured using iSeries Navigator.
Figure 34. Types of auxiliary storage pools.
Subtypes of independent ASPs consist of primary, secondary, and user-defined file system (UDFS) independent ASPs (see note 1). Subtypes can be grouped together to function as a single entity known as an ASP group. An ASP group consists of a primary independent ASP and zero or more secondary independent ASPs. For example, if you make one independent ASP unavailable, the others in the ASP group are made unavailable at the same time. A primary independent ASP defines a collection of directories and libraries and may have associated secondary independent ASPs. A primary independent ASP defines a database for itself and other independent ASPs belonging to its ASP group. The primary independent ASP name is always the name of the ASP group in which it resides. A secondary independent ASP defines a collection of directories and libraries and must be associated with a primary independent ASP. One common use for a secondary independent ASP is to store the journal receivers for the objects being journaled in the primary independent ASP.
1. MIMIX does not support UDFS independent ASPs. UDFS independent ASPs contain only userdefined file systems and cannot be a member of an ASP group unless they are converted to a primary or secondary independent ASP.
Before an independent ASP is made available (varied on), all primary and secondary independent ASPs in the ASP group undergo a process similar to a server restart. While this processing occurs, the ASP group is in an active state and recovery steps are performed. The primary independent ASP is synchronized with any secondary independent ASPs in the ASP group, and journaled objects are synchronized with their associated journal. While being varied on, several server jobs are started in the QSYSWRK subsystem to support the independent ASP. To ensure that their names remain unique on the server, server jobs that service the independent ASP are given their own job name when the independent ASP is made available. Once the independent ASP is made available, it is ready to use. Completion message CPC2605 (vary on completed for device name) is sent to the history log.
Restrictions in MIMIX support for independent ASPs include the following:
- MIMIX supports the replication of objects in primary and secondary independent ASPs only. Replication of IFS objects that reside in user-defined file system (UDFS) independent ASPs is not supported.
- You should not place libraries in independent ASPs within the system portion of a library list. MIMIX commands automatically call the IBM command SETASPGRP, which can result in significant changes to the library list for the associated user job. See Avoiding unexpected changes to the library list on page 570.
- MIMIX product libraries, the LAKEVIEW library, and the MIMIXQGPL library must be installed into SYSBAS. These libraries cannot exist in an independent ASP.
- Any *MSGQ libraries, *JOBD libraries, and *OUTFILE libraries specified on MIMIX commands must reside in SYSBAS.
- For successful replication, ASP devices in ASP groups that are configured in data group definitions must be made available (varied on). Objects in independent ASPs attached to the source system cannot be journaled if the device is not available. Objects cannot be applied to an independent ASP on the target system if the device is not available.
- Planned switchovers of data groups that include an ASP group must take place while the ASP devices on both the source and target systems are available. If the ASP device for the data group on either the source or target system is unavailable at the time the planned switchover is attempted, the switchover will not complete.
- To support an unplanned switch (failover), the independent ASP device on the backup system (which will become the temporary production system) must be available in order for the failover to complete successfully.
- You must run the Set ASP Group (SETASPGRP) command on the local system before running the Send Network Object (SNDNETOBJ) command if the object you are attempting to send to a remote system is located in an independent ASP.
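For example, before sending an object that resides in an independent ASP named WILLOW (a hypothetical device name), make the ASP group current for the job with the IBM SETASPGRP command and then run the send:

   SETASPGRP ASPGRP(WILLOW)
   SNDNETOBJ ... (remaining parameters as appropriate for the object being sent)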
Also be aware of the following temporary restrictions:
- MIMIX does not perform validity checking to determine if the ASP group specified in the data group definition actually exists on the systems. This may cause error conditions when running commands.
- Any monitors configured for use with MIMIX must specify the ASP group. Monitors of type *JRN or *MSGQ that watch for events in an independent ASP must specify the name of the ASP group where the journal or message queue exists. This is done with the ASPGRP parameter of the CRTMONOBJ command.
- Information regarding independent ASPs is not provided on the following displays: Display Data Group File Entry (DSPDGFE), Display Data Group Data Area Entry (DSPDGDAE), Display Data Group Object Entry (DSPDGOBJE), and Display Data Group Activity Entry (DSPDGACTE). To determine the independent ASP in which the object referenced in these displays resides, see the data group definition.
For object replication of library-based objects through the system journal, you should configure related objects in SYSBAS and an ASP group to be replicated by the same data group. Objects in SYSBAS and an ASP group that are not related should be separated into different data groups. This precaution ensures that the data group will start and that objects residing in SYSBAS will be replicated when the independent ASP is not available. Note: To avoid replicating an object by more than one data group, carefully plan what generic library names you use when configuring data group object entries in an environment that includes independent ASPs. Make every attempt to avoid replicating both SYSBAS data and independent ASP data for objects within the same data group. See the example in Configuring library-based objects when using independent ASPs on page 569.
For example, data group APP1 defines replication between ASP groups named WILLOW on each system. Similarly, data group APP2 defines replication between ASP groups named OAK on each system. Both data groups have a generic data group object entry that includes object XYZ from library names beginning with LIB*. If object LIBASP/XYZ exists in both independent ASPs and matches the generic data group object entry defined in each data group, both data groups replicate the corresponding object. This is considered normal behavior for replication between independent ASPs, as shown in Figure 35. However, if SYSBAS contains an object that matches the generic data group object entry defined for each data group, the same object is replicated by both data groups. Figure 35 shows that object LIBBAS/XYZ meets the criteria for replication by both data groups, which is not desirable.
Figure 35. Object XYZ in library LIBBAS is replicated by both data groups APP1 and APP2 because the data groups contain the same generic data group object entry. As a result, this presents a problem if you need to perform a switch.
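One way to avoid this overlap is to replace the generic LIB* entry with object entries that name each ASP group's libraries explicitly, so each data group selects only its own libraries. The sketch below uses MIMIX's Add Data Group Object Entry (ADDDGOBJE) command; the parameter names and values shown are illustrative assumptions, not confirmed syntax:

```cl
/* Sketch: scope data group APP1 to the library that resides in its own    */
/* ASP group instead of using a generic LIB* entry that also matches       */
/* SYSBAS libraries. Parameter names here are illustrative assumptions.    */
ADDDGOBJE DGDFN(APP1) LIB1(LIBASP) OBJ1(*ALL)
```

Prompting the command (F4) shows the actual parameter names in your MIMIX release.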
Running the SETASPGRP command changes the library list. This can affect the system and user portions of the library list as well as the current library. When a MIMIX command runs the SETASPGRP command during processing, MIMIX resets the user portion of the library list and the current library to their initial values. The system portion of the library list is not restored to its initial value. Figure 36, Figure 37, and Figure 38 show how the system portion of the library list is affected on the Display Library List (DSPLIBL) display when the SETASPGRP command is run.
Figure 36. Before a MIMIX command runs. The library list contains three independent ASP libraries, including a library in independent ASP WILLOW in the system portion of the library list.
[Display Library List screen for system CHICAGO. The ASP device column shows WILLOW for a library in the system portion of the list; the current library and user libraries show ASP device OAK.]
Figure 37. During the running of a MIMIX command. The independent ASP libraries are removed from the library list.
[Display Library List screen for system CHICAGO. The ASP device column is empty; the independent ASP libraries have been removed from the library list.]
Figure 38. After the MIMIX command runs. The library in independent ASP WILLOW in the system portion of the library list is removed. The libraries in independent ASP OAK in the user portion of the library list and the current library are restored.
[Display Library List screen for system CHICAGO. The ASP device column again shows OAK for the current library and user libraries; the WILLOW library no longer appears in the system portion.]
The SETASPGRP command can return escape message LVE3786 if licensed program 5722-SS1 option 12 (Host Servers) is not installed.
Appendix E
Audits use commands that compare and synchronize data. The results of the audits are placed in output files associated with the commands. The following topics provide supporting information for interpreting data returned in the output files:
- Interpreting audit results - MIMIX Availability Manager on page 575 describes how to check the status of an audit and resolve any problems that occur from within MIMIX Availability Manager.
- Interpreting audit results - 5250 emulator on page 576 describes how to check the status of an audit and resolve any problems that occur from a 5250 emulator.
- Checking the job log of an audit on page 578 describes how to use an audit's job log to determine why an audit failed.
- Interpreting results for configuration data - #DGFE audit on page 580 describes the #DGFE audit, which verifies the configuration data defined to your configuration using the Check Data Group File Entries (CHKDGFE) command.
- Interpreting results of audits for record counts and file data on page 582 describes the audits and commands that compare file data or record counts.
- Interpreting results of audits that compare attributes on page 586 describes the Compare Attributes commands and their results.
Rule Failed
Completed Successfully
Addressing audit problems - MIMIX Availability Manager

Status: Completed Successfully
Results: Differences detected, some objects not recovered
Action: The remaining detected differences must be manually resolved.
Note: For audits using the #MBRRCDCNT rule, automatic recovery is not possible. Other audits, such as #FILDTA, may correct the detected differences.
Do the following:
1. Select Output File from the action list and click the corresponding button.
2. The detected differences are displayed. Look for items with a Difference Indicator value of *NE, *NC, or *RCYFAILED. If automatic audit recovery is disabled, you may see other values as well. For #MBRRCDCNT results, also look for values of *HLD, *LCK, *NF1, *NF2, *SJ, *UE, and *UN. You can display details about the error or attempt the possible recovery action available.
3. Select the action you want and click the corresponding button.
For more information about the values displayed in the audit results, see Interpreting results for configuration data - #DGFE audit on page 580, Interpreting results of audits for record counts and file data on page 582, and Interpreting results of audits that compare attributes on page 586.
2. Check the Audit Status column for values shown in Table 80. Audits with potential problems are at the top of the list. Take the action indicated in Table 80.
Table 80. Addressing audit problems - 5250 emulator

The audit failed for these possible reasons.
Reason 1: The rule called by the audit failed or ended abnormally. To run the rule for the audit again, select option 9 (Run rule). To check the job log, see Checking the job log of an audit on page 578.
Reason 2: The #FILDTA audit or the #MBRRCDCNT audit required replication processes that were not active.
1. From the MIMIX Availability Status display, check whether there are any problems indicated for replication processes.
2. If there are no problems with replication processes, use F20 to access a command line and type WRKAUD. Then skip to Step 6.
3. If there are replication problems, use option 9 (Troubleshoot) next to the Replication activity.
4. On the Work with Data Groups display, if processes for the data group show a red I, L, or P in the Source and Target columns, use option 9 (Start DG).
5. When processes are active, use F7 to view audits.
6. From the Work with Audits display, use option 9 (Run rule) to run the audit.

Audit Status: *DIFNORCY
The comparison performed by the audit detected differences. No recovery actions were attempted because automatic audit recovery is disabled.
1. Use option 7 to view notifications for the audit.
2. A subsetted list of the notifications for the audit appears. Use option 8 to view the results in the output file.
3. Check the Difference Indicator column for values of *NC and *NE. You will need to manually resolve these differences. To have MIMIX recover differences on subsequent audits, change the value of the automatic audit recovery policy.

Audit Status: *NOTRCVD
The comparison performed by the audit detected differences. Some of the differences were not automatically recovered. The remaining detected differences must be manually resolved.
Note: For audits using the #MBRRCDCNT rule, automatic recovery is not possible. Other audits, such as #FILDTA, may correct the detected differences.
Do the following:
1. Use option 7 to view notifications for the audit.
2. A subsetted list of the notifications for the audit appears. Use option 8 to view the results in the output file.
3. Check the Difference Indicator column for values of *NC, *NE, and *RCYFAILED. If automatic audit recovery is disabled, you may see other values as well. For the #MBRRCDCNT results, also look for values of *HLD, *LCK, *NF1, *NF2, *SJ, *UE, and *UN. You will need to manually resolve these differences.
2. The Job Log window opens. Look at the most recent messages to determine the cause of the audit failure.
Note: If you see "No data available" instead, you may still be able to view the job log from the 5250 emulator as described below.
From a 5250 emulator, you must display the notifications from an audit in order to view the job log. Do the following:
1. From the Work with Audits display, type 7 (Notification) next to the audit and press Enter.
2. The notifications associated with the audit are displayed on the Work with Notifications display. Use option 5 (Display) or F22 to view the description in the Notification column.
3. If the notification is not sufficient to determine the problem, use option 12 (Display job) next to the notification.
4. The Display Job menu opens. Select option 4 (Display spooled files). Then use option 5 (Display) from the Display Job Spooled Files display.
5. Look for a completion message from the rule with the text indicated from Step 2. Usually the most recent messages are at the bottom of the display.
When the #DGFE rule is called and a recovery is attempted, the following values can also be indicated in the report:
*RECOVERED - Recovered by automatic recovery actions.
*RCYFAILED - Automatic audit recovery actions were attempted but failed to correct the detected error.
Table 81 provides examples of when various configuration errors might occur. Table 82 provides possible problem resolution actions for these errors:
Table 81. CHKDGFE - possible error conditions

Result       File exists   Member exists   DGFE exists   DGOBJE exists
*NODGFE      Yes           Yes             No            COOPDB(*YES)
*EXTRADGFE   Yes           Yes             Yes           COOPDB(*NO)
*NOFILE      No            No              Yes           Exclude
*NOMBR       Yes           No              Yes           No entry
Table 82. CHKDGFE - possible error resolution actions

*NODGFE - Create the DGFE or change the DGOBJE to COOPDB(*NO); this applies to all objects using the object entry. If you do not want all objects changed to this value, copy the existing DGOBJE to a new, specific DGOBJE with the appropriate COOPDB value.
*EXTRADGFE - Delete the DGFE or change the DGOBJE to COOPDB(*YES); this applies to all objects using the object entry. If you do not want all objects changed to this value, copy the existing DGOBJE to a new, specific DGOBJE with the appropriate COOPDB value.
*NOFILE - Delete the DGFE, re-create the missing file, or restore the missing file.
*NOMBR - Delete the DGFE for the member or add the member to the file.
Each record in the output files for these audits or commands identifies a file member that has been compared and indicates whether a difference was detected for that member. MIMIX Availability Manager displays only the detected differences found by each compare command, using a subset of the fields from the output file. You can see the full set of fields in each output file by viewing it from a 5250 emulator.
The type of data included in the output file is determined by the report type specified on the compare command. When viewed from a 5250 emulator, the data included for each report type is as follows:
- Difference reports return information about detected differences. Difference reports are the default for these compare commands.
- Full reports return information about all objects and attributes compared. Full reports include both differences and objects that are considered synchronized.
- Relative record number reports return the relative record number of the first 1,000 records of a member that fail to compare. Relative record number reports apply only to the Compare File Data command.
Possible values for Compare File Data (CMPFILDTA) output file field Difference Indicator (DIFIND):
- Record counts match. No differences were detected.
- Global difference indicator. No difference was detected. However, fields with unsupported types were omitted.
- The file feature is not supported for comparison. Examples of file features include materialized query tables.
- Matching entry not found in database apply table.
- Unable to process selected member. File formats differ between source and target files. Either the record length or the null capability is different.
- Indicates that a member is held or an inactive state was detected.
- Unable to complete processing on selected member. Messages preceding LVE0101 may be helpful.
- Indicates a difference was detected.
- The file member is being processed for repair by another job running the Compare File Data (CMPFILDTA) command.
- The source file is not journaled, or is journaled to the wrong journal.
- Unable to process selected member. See messages preceding message LVE3D42 in job log.
- The file or member is being processed by the Synchronize DG File Entry (SYNCDGFE) command.
- Unable to process selected member. Reason unknown. Messages preceding message LVE3D42 in job log may be helpful.
- Indicates that the member's synchronization status is unknown.
Table 84. Possible values for Compare Record Count (CMPRCDCNT) output file field Difference Indicator (DIFIND)

*EQ - Record counts match. No difference was detected. Global difference indicator.
*FF - The file feature is not supported for comparison. Examples of file features include materialized query tables.
*HLD - Indicates that a member is held or an inactive state was detected.
*LCK - Lock prevented access to member.
*NE - Indicates a difference was detected.
*NF1 - Member not found on system 1.
*NF2 - Member not found on system 2.
*SJ - The source file is not journaled, or is journaled to the wrong journal.
*UE - Unable to process selected member. Reason unknown. Messages preceding LVE3D42 in job log may be helpful.
*UN - Indicates that the member's synchronization status is unknown.
For difference and full reports of compare attribute commands, several of the attribute selectors return an indicator (*INDONLY) rather than an actual value. Attributes that return indicators are usually variable in length, so an indicator is returned to conserve space. In these instances, the attributes are checked thoroughly, but the report contains only an indication of whether the attribute is synchronized. For example, an authorization list can contain a variable number of entries. When comparing authorization lists, the CMPOBJA command will first determine whether both lists have the same number of entries. If they do, it will then determine whether both lists contain the same entries. If differences in the number of entries are found, or if the entries within the authorization lists are not equal, the report will indicate that differences were detected. The report will not provide the list of entries; it will only indicate that they are not equal in terms of count or content. MIMIX Availability Manager displays only the detected differences found by Compare Attributes commands, using a subset of the fields from the output file. MIMIX Availability Manager displays summary rows in the Summary List window and detail rows in the Details window for the Compare command type. You can see the full set of fields in the output file by viewing it from a 5250 emulator.
1. The Compare Attribute commands are: Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA).
*HLD *IOERR
Table 85. Possible values for output file field Difference Indicator (DIFIND)

*LCK - Lock prevented access to member.
*NA - The values are not compared. The actual values may or may not be equal.
*NC - The values are not equal based on the MIMIX configuration settings. The actual values may or may not be equal.
*NE - Indicates differences were detected.
*NF1 - Member not found on system 1.
*NF2 - Member not found on system 2.
*NS - Indicates that the attribute is not supported on one of the systems. Will not cause a global not equal condition.
*RCYSBM - Indicates that MIMIX AutoGuard submitted an automatic audit recovery action that must be processed through the user journal replication processes. The database apply (DBAPY) will attempt the recovery and send an *ERROR or *INFO notification to indicate the outcome of the recovery attempt.
*RCYFAILED - Used to indicate that automatic recovery attempts via AutoGuard failed to recover the detected difference.
*RECOVERED - Indicates that recovery for this object was successful.
- Unable to process selected member. The source file is not journaled.
- Unable to process selected member. See messages preceding message LVE3D42 in job log.
- Unable to process selected member. The file is being processed by the Synchronize DG File Entry (SYNCDGFE) command.
- Object status is unknown due to object activity. If an object difference is found and the comparison has a value specified on the Maximum replication lag prompt, the difference is seen as unknown due to object activity. This status is only displayed in the summary record.
Note: The Maximum replication lag prompt is only valid when a data group is specified on the command.
*UE - Unable to process selected member. Reason unknown. Messages preceding message LVE3D42 in job log may be helpful.
*UN - Indicates that the object's synchronization status is unknown.
1. Not all values may be possible for every Compare command.
2. Priorities are used to determine the value shown in output files for Compare Attribute commands.
3. The value *RECOVERED can only appear in an output file modified by a recovery action. The object was initially found to be *NE or *NC but MIMIX autonomic functions recovered the object.
For most attributes, when a detailed row contains a blank in either the System 1 Indicator or System 2 Indicator field, MIMIX determines the value of the Difference Indicator field according to Table 86. For example, if the System 1 Indicator is *NOTFOUND and the System 2 Indicator is blank (object found), the resulting Difference Indicator is *NE.
Table 86. Difference Indicator values that are derived from System Indicator values.

When both System Indicator fields are blank (the object was found on both systems), the Difference Indicator reflects the result of the comparison itself: *EQ, *EQ (LOB), *NE, *UA, *EC, or *NC. When one field is blank and the other reports a condition, the Difference Indicator is derived from that condition: *NOTCMPD yields *NA, *NOTFOUND yields *NE (or *UA), *NOTSPT yields *NS, *RTVFAILED yields *UN, and *DAMAGED yields *NE.
For a small number of specific attributes, the comparison is more complex. The results returned vary according to parameters specified on the compare request and MIMIX configuration values. For more information see the following topics:
- Comparison results for journal status and other journal attributes on page 608
- Comparison results for auxiliary storage pool ID (*ASP) on page 612
- Comparison results for user profile status (*USRPRFSTS) on page 615
- Comparison results for user profile password (*PRFPWDIND) on page 619
Possible values for output file fields SYS1IND and SYS2IND:
- Member not found.
- *NOTCMPD - Attribute not compared. Due to MIMIX configuration settings, this attribute cannot be compared.
- *NOTFOUND - Object not found.
- *NOTSPT - Attribute not supported. Not all attributes are supported on all IBM i releases. This is the value that is used to indicate an unsupported attribute has been specified.
- *RTVFAILED - Unable to retrieve the attributes of the object. Reason for failure may be a lock condition.
1. The priority indicates the order of precedence MIMIX uses when setting the system indicator fields in the summary record.
2. This value is not used in determining the priority of summary level records.
For comparisons which include a data group, the Data Source (DTASRC) field identifies which system is configured as the source for replication. In MIMIX Availability Manager Details windows, the direction of the arrow shown in the data group field identifies the flow of replication.
Access path size
Allow delete operation
Allow operations
Allow read operation
Allow update operation
Compare File Attributes (CMPFILA) attributes Description Allow write operation Auxiliary storage pool ID Returned Values (SYS1VAL, SYS2VAL) *YES, *NO 1-16 (pre-V5R2) 1-255 (V5R2) 1 = System ASP See Comparison results for auxiliary storage pool ID (*ASP) on page 612 for details. *NONE, *CHANGE, *ALL Group which checks attributes *AUTL, *PGP, *PRVAUTIND, *PUBAUTIND *NONE, list name Group which checks a pre-determined set of attributes. When *FILE is specified for the Comparison level (CMPLVL), these attributes are compared: *CST (group), *NBRMBR, *OBJATR, *RCDFMT, *TEXT, and *TRIGGER (group). When *MBR is specified for the Comparison level (CMPLVL), these attributes are compared: *CURRCDS, *EXPDATE, *NBRDLTRCD, *OBJATR, *SHARE, and *TEXT. 1-65535 Group which checks attributes *CSTIND, *CSTNBR No value, indicator only4 When this attribute is returned in output, its Difference Indicator value indicates if the number of constraints, constraint names, constraint types, and the check pending attribute are equal. For referential and check constraints, the constraint state as well as whether the constraint status is enabled or disabled is also compared. Numeric value 0-4294967295 *YES, *NO Group which checks *DBRIND, *OBJATR Database relations No value, indicator only4 When this attribute is returned in output, its Difference Indicator value indicates if the number of database relations and the dependent file names are equal. Blank for *NONE or date in CYYMMDD format, where C equals the century. Value 0 is 19nn and 1 is 20nn.
Object audit value File authorities Authority list name Pre-determined set of basic attributes
*EXPDATE1
Compare File Attributes (CMPFILA) attributes Description Pre-determined, extended set Returned Values (SYS1VAL, SYS2VAL) Valid only for Comparison level of *FILE, this group compares the basic set of attributes (*BASIC) plus an extended set of attributes. The following attributes are compared: *ACCPTH, *AUT (group), *CCSID, *CST (group), *CURRCDS, *DBR (group), *MAXKEYL, *MAXMBRS, *MAXRCDL, *NBRMBR, *OBJATR, *OWNER, *PFSIZE (group), *RCDFMT, *REUSEDLT, *SELOMT, *SQLTYP, *TEXT, and *TRIGGER (group). 10 character name *NONE if the file has no members. *YES, *NO *NONE, 1-32767 0-32767 *YES, *NO Add, update, and delete authorities are not checked. Differences in these authorities do not result in an *NE condition. Group which checks *JOURNALED, *JRN, *JRNLIB, *JRNIMG, *JRNOMIT. Results are described in Comparison results for journal status and other journal attributes on page 608. *YES, *NO 10 character name, blank if never journaled *AFTER, *BOTH 10 character name, blank if never journaled *OPNCLO, *NONE 3 character ID 10 character name *NONE if the file has no members. *YES, *NO *IMMED, *REBLD, *DLY 0-32767
*EXTENDED
Name of member *FIRST Force keyed access path Records to force a write Increment number of records Join Logical file
*JOURNAL
Journal attributes
*JOURNALED *JRN *JRNIMG *JRNLIB *JRNOMIT *LANGID1 *LASTMBR1 3 *LVLCHK1 *MAINT1 *MAXINC1
File is currently journaled Current or last journal Record images Current or last journal library Journal entries to be omitted Language ID Name of member *LAST Record format level check Access path maintenance Maximum increments
Compare File Attributes (CMPFILA) attributes Description Maximum key length Maximum members Max % deleted records allowed Maximum record length Current number of deleted records Number of members Initial number of records Object control level File owner File size attributes Primary group Private authority indicator Returned Values (SYS1VAL, SYS2VAL) 1-2000 *NOMAX, 1-32767 *NONE, 1-100 1-32766 0-4294967295 0-32767 *NOMAX, 1-2147483646 8 character user-defined value User profile name Group which checks *CURRCDS, *INCRCDS, *MAXINC, *NBRDLTRCD, *NBRRCDS *NONE, user profile name No value, indicator only4 When this attribute is returned in output, its Difference Indicator value indicates if the number of private authorities and private authority values are equal. No value, indicator only4 When this attribute is returned in output, its Difference Indicator value indicates if public authority values are equal. 1-32 *IPL, *AFTIPL, *NO *YES, *NO *YES, *NO *YES, *NO PF Types - NONE, TABLE, LF Types - INDEX, VIEW, NONE 50 character value
*MAXKEYL1 *MAXMBRS1 *MAXPCT1 *MAXRCDL1 *NBRDLTRCD1 *NBRMBR1 *NBRRCDS1 *OBJCTLLVL1 *OWNER *PFSIZE *PGP *PRVAUTIND
*PUBAUTIND
Public authority indicator Number of record formats Access path recovery Reuse deleted records Select / omit file Share open data path SQL file type Text description
Compare File Attributes (CMPFILA) attributes Description Returned Values (SYS1VAL, SYS2VAL) Group which checks *TRGIND, *TRGNBR, *TRGXSTIND Trigger equal indicator No value, indicator only4 When this attribute is returned in output, its Difference Indicator value indicates whether it is enabled or disabled, and if the number of triggers, trigger names, trigger time, trigger event, and trigger condition with an event type of update are equal. Numeric value No value, indicator only4 When this attribute is returned in output, its Difference Indicator value indicates if a trigger program exists on the system. 10 character user-defined value *IMMED, *CLS, 1-32767 *IMMED, *NOMAX, 1-32767
*TRIGGER *TRGIND 2
*TRGNBR 2 *TRGXSTIND 2
Number of triggers Trigger existence indicator User-defined attribute Maximum file wait time Maximum record wait time
1. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a data group and the object is configured for system journal replication with a configured object auditing value of *NONE.
2. This attribute cannot be specified as input for comparing but it is included in a group attribute. When the group attribute is checked, this value may appear in the output.
3. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a data group and the file is configured for system journal replication with a configured Omit content (OMTDTA) value of *FILE.
4. If *PRINT is specified in the comparison, an indicator appears in the system 1 and system 2 columns. If *OUTFILE is specified, however, these values are blank.
*ASP
*ASPNBR
Number of defined storage pools. Valid for subsystem descriptions only. Attention key handling program Valid for user profiles only. Object audit value Authority attributes Authority to check. Valid for job queues only. Authority list name Pre-determined set of basic attributes
*ATTNPGM2
*NONE, *USRPRF, *CHANGE, *ALL Group which checks *AUTL, *PGP, *PRVAUTIND, *PUBAUTIND *OWNER, *DTAAUT *NONE, list name Group which checks a pre-determined set of attributes. These attributes are compared: *CRTTSP, *DOMAIN, *INFSTS, *OBJATR, *TEXT, and *USRATR.
Compare Object Attributes (CMPOBJA) attributes Description Character identifier control. Valid for user profiles only. Country ID Valid for user profiles only. Communications entries Valid for subsystem descriptions only. Returned Values (SYS1VAL, SYS2VAL) *SYSVAL, ccsid-value
*CNTRYID2
*SYSVAL, country-id
*COMMEIND
No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the number of communication entries, maximum number of active jobs, communication device, communication mode, associated job description and library, and the default user entry values are equal. *SYSVAL, *CHANGE, *ALL, *USE, *EXCLUDE, *SYSVAL, *CHANGE, *ALL, *USE, *EXCLUDE
*CRTAUT2
Authority given to users who do not have specific authority to the object. Valid for libraries only. Auditing value for objects created in this library Valid for libraries only. Profile that owns objects created by user Valid for user profiles only. Object creation date Current library Valid for user profiles only. Data cyclic redundancy check (CRC) Valid for data queues only. DDM conversation Valid for job descriptions only. Decimal positions Valid for data areas only. Object Domain Data area extended attributes
*CRTOBJAUD2
*CRTOBJOWN
*CRTTSP *CURLIB
*DATACRC2
10 character value
*DDMCNV2
*KEEP, *DROP
0-9 *SYSTEM, *USER Group which checks *DECPOS, *LENGTH, *TYPE, *VALUE
Compare Object Attributes (CMPOBJA) attributes Description Pre-determined, extended set Returned Values (SYS1VAL, SYS2VAL) Group which compares the basic set of attributes (*BASIC) plus an extended set of attributes. The following attributes are compared: *AUT, *CRTTSP, *DOMAIN, *INFSTS, *OBJATR, *TEXT, and *USRATR. *NONE, 1 - 32,767 1 - 4294967294
*EXTENDED
*FRCRATIO1 2 *GID
Records to force a write Valid for logical files only. Group profile ID number Valid for user profiles only. Group authority to created objects Valid for user profiles only. Group authority type Valid for user profiles only. Group profile name Valid for user profiles only. Information status
*GRPAUT
*GRPAUTTYP
*PGP, *PRIVATE
*GRPPRF
*NONE, profile-name
*INFSTS
*OK (No errors occurred), *RTVFAILED (No information returned - insufficient authority or object is locked), *DAMAGED (Object is damaged or partially damaged). Menu - *SIGNOFF, menu name Library - *LIBL, library name Program - *NONE, program name Library - *LIBL, library name Group which checks *DDMCNV, *JOBQ, *JOBQLIB, *JOBQPRI, *LIBLIND, *LOGOUTPUT, *OUTQ, *OUTQLIB, *OUTQPRI, *PRTDEV 10 character name
*INLMNU
Initial menu Valid for user profiles only. Initial program Valid for user profiles only. Job description extended attributes Job queue Valid for job descriptions only. Job queue entries Valid for subsystem descriptions only.
*INLPGM
*JOBDEXT
*JOBQ2
*JOBQEIND
No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the number of job queue entries, job queue names, job queue libraries, and order of entries are the same
Compare Object Attributes (CMPOBJA) attributes Description Job queue extended attributes Job queue library Valid for job descriptions only. Job queue priority Valid for job descriptions only. Subsystem that receives jobs from this queue Valid for job queues only. Job queue status Valid for job queues only. Journal attributes Returned Values (SYS1VAL, SYS2VAL) Group which checks *AUTCHK, *JOBQSBS, *JOBQSTS, *OPRCTL 10 character name
*JOBQEXT *JOBQLIB2
*JOBQPRI2
1 (highest) - 9 (lowest)
*JOBQSBS2
Subsystem name
*JOBQSTS2 *JOURNAL
HELD, RELEASED Group which checks *JOURNALED, *JRN, *JRNLIB, *JRNIMG, *JRNOMIT4. Results are described in Comparison results for journal status and other journal attributes on page 608. *YES, *NO 10 character name *AFTER, *BOTH 10 character name *OPNCLO, *NONE *SYSVAL, language-id
Object is currently journaled Current or last journal Record images Current or last journal library Journal entries to be omitted Language ID Valid for user profiles only. Data area length Valid for data areas only Extended library information attributes Initial library list Valid for job descriptions only.
1-2000 (character), 1-24 (decimal), 1 (logical) Group which checks *CRTAUT, *CRTOBJAUD No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the number of library list entries and entry list values are equal. The comparison is order dependent.
Compare Object Attributes (CMPOBJA) attributes Description Limit capabilities Valid for user profiles only. Job log output Valid for job descriptions only. Record format level check Valid for logical files only. Access path maintenance Valid for logical files only. Maximum active jobs Valid for subsystem descriptions only. Maximum members Valid for logical files only. Message queue Valid for user profiles only. Number of logical file members Valid for logical files only. Object attribute Object control level Valid for object types that support this attribute5. Operator controlled Valid for job queues only. Output queue Valid for job descriptions only. Output queue library Valid for job descriptions only. Output queue priority Valid for job descriptions only. Returned Values (SYS1VAL, SYS2VAL) *PARTIAL, *YES, *NO
*LOGOUTPUT2
*LVLCHK1 2
*YES, *NO
*MAINT1 2
*MAXACT 2
*MAXMBRS1 2 *MSGQ2
*NOMAX, 1 - 32,767 Message queue - message queue name Library - *LIBL, library name 0 - 32,767
*NBRMBR1 2
*OBJATR *OBJCTLLVL2
*OPRCTL2 *OUTQ2
*OUTQLIB2
10 character name
*OUTQPRI2
1 (highest) - 9 (lowest)
Object owner Primary group Pre-start job entries Valid for subsystem descriptions only. 10 character name *NONE, user profile name No value, indicator only1 When this attribute is returned in output, its Difference Indicator value indicates if the number of prestart jobs, program, user profile, start job, wait for job, initial jobs, maximum jobs, additional jobs, threshold, maximum users, job name, job description, first and second class, and number of first and second class jobs values are equal. *LIBL/*WRKSTN, *DEV
*PRESTIND
*PRFOUTQ2
Output queue Valid for user profiles only. User profile password indicator Printer device Valid for job descriptions only. Private authority indicator
*PRFPWDIND *PRTDEV2
See Comparison results for user profile password (*PRFPWDIND) on page 619 for details. *USRPRF, *SYSVAL, *WRKSTN, printer device name
*PRVAUTIND
No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the number of private authorities and private authority values are equal No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the public authority values are equal. *SYSVAL, *NOMAX, 1-366 days
*PUBAUTIND
*PWDEXPITV
Password expiration interval Valid for user profiles only. No password indicator Valid for user profiles only. Job queue allocation indicator Valid for subsystem descriptions only.
*PWDIND
*QUEALCIND
No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the job queue entries for a subsystem are in the same order and have the same queue names and queue library names. It also compares the allocation indicator values
Remote location entries Valid for subsystem descriptions only. No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the number of remote location entries, remote location, mode, job description and library, maximum active jobs, and default user entry values are equal. No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the number of routing entries, sequence number, maximum active, steps, compare start, entry program, class, and compare entry values are equal. Group which checks *AJEIND, *ASPNBR, *COMMEIND, *JOBQEIND, *MAXACT, *PRESTIND, *RLOCIND, *RTGEIND, *SBSDSTS *ACTIVE, *INACTIVE
*RLOCIND
*RTGEIND
*SBSDEXT
Subsystem description extended attributes Subsystem status Valid for subsystem descriptions only. Object size Special authorities Valid for user profiles only. SQL stored procedures Valid for programs and service programs only.
*SBSDSTS2
*SIZE *SPCAUTIND
Numeric value No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if special authority values are equal *NONE, or indicator only3 *NONE is returned when there are no stored procedures associated with the program or service program. When the indicator only is returned in output, the Difference Indicator value identifies whether SQL stored procedures associated with the object are equal. *NONE, or indicator only3 *NONE is returned when there are no user defined functions associated with the program or service program. When the indicator only is returned in output, the Difference Indicator value identifies whether SQL user defined functions associated with the object are equal. No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if supplemental group values are equal 50 character description
*SQLSP
*SQLUDF
SQL user defined functions Valid for programs and service programs only.
*SUPGRPIND
*TEXT2
Data area type - data area types of DDM resolved to actual data area types Valid for data areas only. User profile ID number Valid for user profiles only. User-defined attribute User Class Valid for user profiles only. User profile extended attributes *CHAR, *DEC, *LGL
*UID
1 - 4294967294
*USRATR2 *USRCLS
*USRPRFEXT
Group which checks *ATTNPGM, *CCSID, *CNTRYID, *CRTOBJOWN, *CURLIB, *GID, *GRPAUT, *GRPAUTTYP, *GRPPRF, *INLMNU, *INLPGM, *LANGID, *LMTCPB, *MSGQ, *PRFOUTQ, *PWDEXPITV, *PWDIND, *SPCAUTIND, *SUPGRPIND, *USRCLS *ENABLED, *DISABLED6 For details, see Comparison results for user profile status (*USRPRFSTS) on page 615. Character value of data
*USRPRFSTS
*VALUE2
1. This attribute only applies to logical files. Use the Compare File Attributes (CMPFILA) command to compare or omit physical file attributes.
2. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a data group and the object is configured for system journal replication with a configured object auditing value of *NONE.
3. If *PRINT is specified for the output format on the compare request, an indicator appears in the System 1 and System 2 columns. If *OUTFILE is specified, these values are blank.
4. These attributes are compared for object types of *FILE, *DTAQ, and *DTAARA. These are the only objects supported by IBM's user journals.
5. The *OBJCTLLVL attribute is only supported on the following object types: *AUTL, *CNNL, *COSD, *CTLD, *DEVD, *DTAARA, *DTAQ, *FILE, *IPXD, *LIB, *LIND, *MODD, *NTBD, *NWID, *NWSD, and *USRPRF.
6. The profile status is only compared if no data group is specified or the USRPRFSTS has a value of *SRC for the specified data group. If a data group is specified on the CMPOBJA command and the USRPRFSTS value on the object entry has a value of *TGT, *ENABLED, or *DISABLED, the user profile status is not compared.
Object auditing value Authority attributes Authority list name Pre-determined set of basic attributes Coded character set Create timestamp Data cyclic redundancy check (CRC) Data size Pre-determined, extended set
*JOURNAL
Journal information
Compare IFS Attributes (CMPIFSA) attributes Description Current or last journal library Journal optional entries Object type File owner Archived file PC Attributes Hidden file Read only attribute System file Primary group Private authority indicator Returned Values (SYS1VAL, SYS2VAL) 10 character name *YES, *NO *STMF, *DIR, *SYMLNK 10 character name *YES, *NO Group which checks *PCARCHIVE, *PCHIDDEN, *PCREADO, *PCSYSTEM *YES, *NO *YES, *NO *YES, *NO *NONE, user profile name No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the number of private authorities and private authority values are equal. No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the public authority values are equal.
*PUBAUTIND
1. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a data group and the object is configured for system journal replication with a configured object auditing value of *NONE.
2. The *CRTTSP attribute does not compare directories (*DIR) or symbolic links (*SYMLNK). For stream files (*STMF), the #IFSATR audit omits the *CRTTSP attribute from comparison since creation timestamps are not preserved during replication. Running the CMPIFSA command will detect differences in the creation timestamps for stream files.
3. If *PRINT is specified in the comparison, an indicator appears in the System 1 and System 2 columns. If *OUTFILE is specified, these values are blank.
Object audit value Authority attributes Authority list name Pre-determined set of basic attributes
Coded character set Create timestamp Data size Pre-determined, extended set
Modify timestamp Object type File owner Archived file PC Attributes Hidden file Read only attribute
Compare DLO Attributes (CMPDLOA) attributes Description System file Primary group Private authority indicator Returned Values (SYS1VAL, SYS2VAL) *YES, *NO *NONE, user profile name No value, indicator only3 When this attribute is returned in output, its Difference Indicator value indicates if the number of private authorities and private authority values are equal. No value, indicator only1 When this attribute is returned in output, its Difference Indicator value indicates if the public authority values are equal. 50 character description
*PUBAUTIND
*TEXT
Text description
1. This attribute is not supported for DLOs with an object type of *FLR.
2. This attribute is always compared.
3. If *PRINT is specified in the comparison, an indicator appears in the System 1 and System 2 columns. If *OUTFILE is specified, these values are blank.
When specified on the CMPOBJA command, these values apply only to files, data areas, or data queues. When specified on the CMPFILA command, these values apply only to PF-DTA and PF38-DTA files.

*JOURNAL - Object journal information attributes. This value acts as a group selection, causing all other journaling attributes to be selected.
*JOURNALED - Journal Status. Indicates whether the object is currently being journaled. This attribute is always compared when any of the other journaling attributes are selected.
Journal. Indicates the name of the current or last journal. If blank, the object has never been journaled.
Journal Image. Indicates the kinds of images that are written to the journal receiver for changes to objects.
Journal Library. Identifies the library that contains the journal. If blank, the object has never been journaled.
Journal Omit. Indicates whether file open and close journal entries are omitted.
2. When these values are specified on a Compare command, the journal status (*JOURNALED) attribute is always evaluated first. The result of the journal status comparison determines whether the command will compare the specified attribute. Although *JRNIMG can be specified on the CMPIFSA command, it is not compared even when the journal status is as expected. The journal image status is reflected as not supported (*NS) because IBM i only supports after (*AFTER) images.
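The gating behavior described above can be sketched as follows. This is an illustrative sketch only, not MIMIX code; the function name, dictionary layout, and result codes used here are assumptions based on the surrounding text.

```python
def compare_journal_attrs(requested, sys1, sys2, command):
    """Sketch of journal-attribute gating: *JOURNALED is always evaluated
    first, and the remaining journal attributes are compared only when the
    journaled status matches on both systems (names invented)."""
    results = {"*JOURNALED": "*EQ" if sys1["*JOURNALED"] == sys2["*JOURNALED"] else "*NE"}
    if results["*JOURNALED"] == "*NE":
        return results  # remaining journal attributes are not compared
    for attr in requested:
        if attr == "*JOURNALED":
            continue
        if attr == "*JRNIMG" and command == "CMPIFSA":
            # IBM i supports only *AFTER images for IFS objects
            results[attr] = "*NS"
        else:
            results[attr] = "*EQ" if sys1[attr] == sys2[attr] else "*NE"
    return results
```

For example, when the journaled status differs between the systems, only the *JOURNALED result is produced and any other requested journal attributes are skipped.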
Compares that do not specify a data group - When no data group is specified on the compare request, MIMIX compares the journaled status (*JOURNALED attribute). Table 93 shows the result displayed in the Difference Indicator field. If the file or object is not journaled on both systems, the compare ends. If both source and target systems are journaled, MIMIX then compares any other specified journaling attribute.
Table 93. Difference indicator values for *JOURNALED attribute when no data group is specified (source and target journal status: Yes, No, *NOTFOUND) (1)
1. The returned values for journal status found on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.
Compares that specify a data group - When a data group is specified on the compare request, MIMIX compares the journaled status (*JOURNALED attribute) to the configuration values. If both source and target systems are journaled according to the expected configuration settings, then MIMIX compares any other specified journaling attribute against the configuration settings. The Compare commands vary slightly in which configuration settings are checked. For CMPFILA requests, if the journaled status is as configured, any other specified journal attributes are compared. Possible results from comparing the *JOURNALED attribute are shown in Table 94. For CMPOBJA and CMPIFSA requests, if the journaled status is as configured and the configuration specifies *YES for Cooperate with database (COOPDB), then any other specified journal attributes are compared. Possible results from comparing the *JOURNALED attribute are shown in Table 94 and Table 95. If the configuration specifies COOPDB(*NO), only the journaled status is compared; possible results are shown in Table 96.
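The per-command differences described in this paragraph can be summarized in a small decision sketch. This is a simplified illustration under the assumptions stated in the text (COOPDB gating applies to CMPOBJA and CMPIFSA only); the function and parameter names are invented, not MIMIX APIs.

```python
def journal_attrs_to_compare(command, requested, status_as_configured, coopdb):
    """Which journal attributes are compared when a data group is specified.
    status_as_configured: True when the journaled status matches the expected
    configuration settings; coopdb: True for COOPDB(*YES)."""
    if not status_as_configured:
        return ["*JOURNALED"]   # only the status comparison result is reported
    if command == "CMPFILA":
        return requested        # all requested journal attributes are compared
    # CMPOBJA and CMPIFSA additionally require COOPDB(*YES)
    return requested if coopdb else ["*JOURNALED"]
```

Under these assumptions, a CMPOBJA request against an entry configured with COOPDB(*NO) reports only the journaled status, matching the Table 96 scenario.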
Table 94, Table 95, and Table 96 show results for the *JOURNALED attribute that can appear in the Difference Indicator field when the compare request specified a data group and considered the configuration settings.
Table 94 shows results when the configured settings for Journal on target and Cooperate with database are both *YES.
Table 94. Difference indicator values for *JOURNALED attribute when a data group is specified and the configuration specifies *YES for JRNTGT and COOPDB (source and target journal status: Yes, No, *NOTFOUND) (1)
1. The returned values for journal status found on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.
Table 95 shows results when the configured settings are *NO for Journal on target and *YES for Cooperate with database.
Table 95. Difference indicator values for *JOURNALED attribute when a data group is specified and the configuration specifies *NO for JRNTGT and *YES for COOPDB (source and target journal status: Yes, No, *NOTFOUND) (1)
1. The returned values for journal status found on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.
Table 96 shows results when the configured setting for Cooperate with database is *NO. In this scenario, you may want to investigate further. Even though the Difference Indicator shows values marked as configured (*EC), the object may not be journaled on one or both systems. The actual journal status values are returned in the System 1 Value (SYS1VAL) and System 2 Value (SYS2VAL) fields.
Table 96. Difference indicator values for *JOURNALED attribute when a data group is specified and the configuration specifies *NO for COOPDB (source and target journal status: Yes, No, *NOTFOUND) (1)
1. The returned values for journal status found on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.
Because the data group values for Journal image and Omit open/close entries can be overridden by a data group file entry or a data group object entry, the CMPFILA and CMPOBJA commands also retrieve these values from the entries. The values determined after the order of precedence is resolved, sometimes called the overall MIMIX configuration values, are used for the compare. For CMPOBJA and CMPIFSA requests, the value of the Cooperate with database (COOPDB) parameter is retrieved from the data group object entry or data group IFS entry. The default value in object entries is *YES, while the default value in IFS entries is *NO.
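The order of precedence described above can be sketched as a simple fallback rule. This is a hedged illustration only: the function name, the use of a *DGDFT-style placeholder for "not overridden", and the dictionary of COOPDB defaults are assumptions for this sketch, not the MIMIX implementation.

```python
def resolve_overall_value(entry_override, dgdfn_value):
    """Precedence sketch: a value set on a data group file entry or object
    entry overrides the data group definition; otherwise the definition's
    value is used (placeholder handling assumed)."""
    if entry_override not in (None, "*DGDFT"):
        return entry_override
    return dgdfn_value

# Documented COOPDB defaults differ by entry type (per the text above)
COOPDB_DEFAULT = {"object_entry": "*YES", "ifs_entry": "*NO"}
```

For example, a Journal image value of *BOTH on a file entry would win over *AFTER in the data group definition, and that resolved value is what the compare uses.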
The returned values for *ASP attribute on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.
Compares that specify a data group - When a data group is specified on the compare request (CMPFILA, CMPDLOA, CMPIFSA commands), MIMIX does not compare the *ASP attribute. When a data group is specified on a CMPOBJA request that specifies an object type other than library (*LIB), MIMIX does not compare the *ASP attribute. Table 98 shows the possible results in the Difference Indicator field.
Table 98. Difference Indicator values for non-library objects when the request specified a data group (source and target ASP values: ASP1, ASP2, *NOTFOUND) (1)
1. The returned values for *ASP attribute on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.
For CMPOBJA requests which specify a data group and an object type of *LIB, MIMIX considers configuration settings for the library. Values for the System 1 library ASP number (LIB1ASP), System 1 library ASP device (LIB1ASPD), System 2 library ASP number (LIB2ASP), and System 2 library ASP device (LIB2ASPD) are retrieved from the data group object entry and used in the comparison. Table 99, Table 100, and Table 101 show the possible results in the Difference Indicator field. Note: For Table 99, Table 100, and Table 101, the results are the same even if the system roles are switched. Table 99 shows the expected values for the ASP attribute when the request specifies a data group and the configuration specifies *SRCLIB for the System 1 library ASP number and the data source is system 2.
Table 99. Difference Indicator values for libraries when a data group is specified and configured values are LIB1ASP(*SRCLIB) and DTASRC(*SYS2) (source and target ASP values: ASP1, ASP2, *NOTFOUND) (1)
1. The returned values for *ASP attribute on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.
Table 100 shows the expected values for the ASP attribute when the request specifies a data group and the configuration specifies 1 for the System 1 library ASP number and the data source is system 2.
Table 100. Difference Indicator values for libraries when a data group is specified and configured values are LIB1ASP(1) and DTASRC(*SYS2) (source and target ASP values: 1, 2, *NOTFOUND) (1)
1. The returned values for *ASP attribute on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.
Table 101 shows the expected values for the ASP attribute when the request specifies a data group and the configuration specifies *ASPDEV for the System 1 library ASP number, DEVNAME is specified for the System 1 library ASP device, and the data source is system 2.
Table 101. Difference Indicator values for libraries when a data group is specified and configured values are LIB1ASP(*ASPDEV), LIB1ASPD(DEVNAME) and DTASRC(*SYS2) (source and target ASP values: 1, 2, *NOTFOUND) (1)
1. The returned values for *ASP attribute on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.
Compares that do not specify a data group - When the CMPOBJA command does not specify a data group, MIMIX compares the status values between source and target systems. The result is displayed in the Differences Indicator field, according to Table 85 in Interpreting results of audits that compare attributes on page 586. Compares that specify a data group - When the CMPOBJA command specifies a data group, MIMIX checks the configuration settings and the values on one or both systems. (For additional information, see How configured user profile status is determined on page 616.) When the configured value is *SRC, the CMPOBJA command compares the values on both systems. The user profile status on the target system must be the same as the status on the source system, otherwise an error condition is reported. Table 102 shows the possible values.
Table 102. Difference Indicator values when configured user profile status is *SRC

Target status   Source *ENABLED   Source *DISABLED   Source *NOTFOUND
*ENABLED        *EC               *NC                *NE
*DISABLED       *NC               *EC                *NE
*NOTFOUND       *NE               *NE                *UN
When the configured value is *ENABLED or *DISABLED, the CMPOBJA command checks the target system value against the configured value. If the user profile status on the target system does not match the configured value, an error condition is reported. The source system user profile status is not relevant. Table 103 and Table 104 show the possible values when configured values are *ENABLED or *DISABLED, respectively.
Table 103. Difference Indicator values when configured user profile status is *ENABLED

Target status   Source *ENABLED   Source *DISABLED   Source *NOTFOUND
*ENABLED        *EC               *EC                *NE
*DISABLED       *NC               *NC                *NE
*NOTFOUND       *NE               *NE                *UN
Table 104. Difference Indicator values when configured user profile status is *DISABLED

Target status   Source *ENABLED   Source *DISABLED   Source *NOTFOUND
*ENABLED        *NC               *NC                *NE
*DISABLED       *EC               *EC                *NE
*NOTFOUND       *NE               *NE                *UN
When the configured value is *TGT, the CMPOBJA command does not compare the values because the result is indeterminate. Any differences in user profile status between systems are not reported. Table 105 shows possible values.
Table 105. Difference Indicator values when configured user profile status is *TGT

Target status   Source *ENABLED   Source *DISABLED   Source *NOTFOUND
*ENABLED        *NA               *NA                *NE
*DISABLED       *NA               *NA                *NE
*NOTFOUND       *NE               *NE                *UN
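The rules behind Tables 102 through 105 can be condensed into one small function. This is an illustrative sketch derived from the tables and surrounding text, not MIMIX code; the function and parameter names are invented.

```python
def usrprf_status_difference(configured, src, tgt):
    """Difference Indicator for user profile status (*USRPRFSTS), sketching
    the rules of Tables 102-105. configured is the configured value; src and
    tgt are the statuses found on the source and target systems."""
    if src == "*NOTFOUND" and tgt == "*NOTFOUND":
        return "*UN"                    # not found on either system
    if src == "*NOTFOUND" or tgt == "*NOTFOUND":
        return "*NE"                    # missing on one system
    if configured == "*TGT":
        return "*NA"                    # result indeterminate, not compared
    if configured == "*SRC":
        return "*EC" if tgt == src else "*NC"
    # configured *ENABLED or *DISABLED: only the target value matters
    return "*EC" if tgt == configured else "*NC"
```

For example, with a configured value of *ENABLED, a target status of *ENABLED yields *EC regardless of the source status, matching the first row of Table 103.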
in an object entry, the default is to use the value *SRC from the data group definition. Table 106 shows the possible values at both the data group and object entry levels.
Table 106. Configuration values for replicating user profile status (1)

*DGDFT      Only available for data group object entries, this indicates that the value specified in the data group definition is used for the user profile status. This is the default value for object entries.
*DISABLED   The status of the user profile is set to *DISABLED when the user profile is created or changed on the target system.
*ENABLED    The status of the user profile is set to *ENABLED when the user profile is created or changed on the target system.
*SRC        This is the default value in the data group definition. The status of the user profile on the source system is always used when the user profile is created or changed on the target system.
*TGT        If a new user profile is created, the status is set to *DISABLED. If an existing user profile is changed, the status of the user profile on the target system is not altered.

1. Data group definitions use these values. In data group object entries, the values *DISABLED and *ENABLED are used but have the same meaning.
Table 108 shows the possible Difference Indicator values when the user profile passwords are different on the local and remote systems and are not defined as *NONE.
Table 108. Difference Indicator values when user profile passwords are different, but not *NONE

Local System   Remote *ENABLED   Remote *DISABLED   Remote Expired   Remote Not Found
*ENABLED       *NE               *NA                *NA              *NE
*DISABLED      *NE               *NA                *NA              *NE
Expired        *NE               *NA                *NA              *NE
Not Found      *NE               *NE                *NE              *EQ
Table 109 shows the possible Difference Indicator values when the user profile passwords are defined as *NONE on the local and remote systems.
Table 109. Difference Indicator values when user profile passwords are *NONE

Local System   Remote *ENABLED   Remote *DISABLED   Remote Expired   Remote Not Found
*ENABLED       *NA               *NA                *NA              *NE
*DISABLED      *NA               *NA                *NA              *NE
Expired        *NA               *NA                *NA              *NE
Not Found      *NE               *NE                *NE              *EQ
Appendix F
Outfile formats
This section contains the output file (outfile) formats for those MIMIX commands that provide outfile support. Lakeview Technology provides model database files that define the record formats for the outfiles. These database files can be found in the product installation library. Public authority to the created outfile is the same as the create authority of the library in which the file is created. Use the Display Library Description (DSPLIBD) command to see the create authority of the library. You can use the Run Query (RUNQRY) command to display outfiles with column headings and data type formatting if you have the licensed program 5722QU1, Query, installed. Otherwise, you can use the Display File Field Description (DSPFFD) command to see detailed outfile information, such as the field length, type, starting position, and number of bytes.
PARENT
CHAR(10)
*AGDFN, *NONE, *PARENT, userdefined name User-defined name *APPLIB, user-defined name *APP, *JOBD, user-defined name
Application CRG exit program Application CRG exit program library Exit program job name
EXITDTA NBRRESTART
CHAR(256) PACKED(5 0)
Table 111. MCAG outfile (WRKAG command) Field HOST Description Takeover IP address Type, length CHAR(256) Valid values User-defined value Column headings TAKEOVER IP ADDRESS DESCRIPTION UPDATE CLUSTER ENV
TEXT UPDENV
CHAR(50) CHAR(10)
IDA
CHAR(10)
BLANK, Name of the Input Data Area INPUT DATA AREA NAME BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL, *INDOUBT, *RESTORED, *ADDNODPND, *DLTPND, *DLTCMDPND, *CHGPND, *CRTPND, *ENDCRGPND, *RMVNODPND, *STRCRGPND, *SWTPND BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL, *INDOUBT, *RESTORED, *ADDNODPND, *DLTPND, *DLTCMDPND, *CHGPND, *CRTPND, *ENDCRGPND, *RMVNODPND, *STRCRGPND, *SWTPND APP CRG STATUS
AGSTS
CHAR(10)
AGNODS
CHAR(10)
DCSTS
CHAR(10)
Table 111. MCAG outfile (WRKAG command) Field DCNODS Description Data CRG nodes status Type, length CHAR(10) Valid values BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL *NONE, User-defined name Column headings DATA CRG NODES STATUS DG STATUS FAILOVER MSGQ LIBRARY FAILOVER MSGQ NAME FAILOVER WAIT TIME FAILOVER DFT ACTION
REPSTS
CHAR(10)
FMSGQL
Failover message queue library Failover message queue name Failover wait time Failover default action
CHAR(10)
FMSGQN
CHAR(10)
FWTIME FDFTACT
PACKED(5 0) PACKED(5 0)
Object specifier file library Object specifier file member RJ mode Data CRG exit program Data CRG exit program library Data CRGs status
*AGDFN, user-defined name *DTACRG, user-defined name *NONE, *ASYNC, *SYNC MMXDTACRG, user-defined name *MIMIX, user-defined name BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL, *INDOUBT, *RESTORED, *ADDNODPND, *DLTPND, *DLTCMDPND, *CHGPND, *CRTPND, *ENDCRGPND, *RMVNODPND, *STRCRGPND, *SWTPND *NONE, *NOTAVAIL
DCNODS REPSTS
CHAR(10) CHAR(10)
BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, DATA CRG STATUS DG STATUS BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL
Table 112. MCDTACRGE outfile (WRKDTACRGE command) Field DEVCRG ASPGRP DTATYPE Description Device CRG name ASP Group Data resource group type Type, length CHAR(10) CHAR(10) CHAR(10) Valid values User-defined name *NONE, User-defined name *DEV, *DTA, *PEER, *XSM Column headings DEVICE CRG ASP GROUP DATA RESOURCE TYPE FAILOVER MSGQ LIBRARY FAILOVER MSGQ NAME FAILOVER WAIT TIME FAILOVER DFT ACTION CLUSTER ADMINISTRATIVE DOMAIN SYNCHRONIZATION DOMAIN
Failover message queue library Failover message queue name Failover wait time Failover default action Cluster administrative domain
*AGDFN, *NONE, User-defined name *AGDFN, *NONE, User-defined name *AGDFN, *NOMAX, 1-32767 *AGDFN, *CANCEL, *PROCEED *NONE, User-defined value
SYNCOPT
Synchronization option
PACKED(10 5)
*LASTCHG, *ACTDMN
CURDTAPVD
CHAR(10)
PREFROLE PREFSEQ
CHAR(10) PACKED(5 0)
CFGROLE
Configured role
CHAR(10)
CONFIGURED ROLE
Table 113. MCNODE outfile (WRKNODE command) Field CFGSEQ Description Configured sequence Type, length PACKED(5 0) Valid values -2, -1, 0-127 (-2= *UNDEFINED) (-1 = *REPLICATE) (0 = *PRIMARY) (1-127 = *BACKUP sequence) *PRIMARY, *BACKUP, *UNDEFINED, user-defined name *ACTIVE, *INACTIVE, *ATTN, *NONE, *NOTAVAIL, *UNKNOWN Column headings CONFIGURED SEQUENCE
CFGDTAPVD
CHAR(10)
STATUS
CHAR(10)
Option System 2 file name System 2 library name System 2 member name
Table 114. MXCDGFE outfile (CHKDGFE command)

Field    Description         Type, length   Valid values                                                                                                  Column headings
ASPDEV   Source ASP device   CHAR(10)       *UNKNOWN - if object not found or an API error; *SYSBAS - if object in ASP 1-32; user-defined name - if object in ASP 33-255   ASP DEVICE
OBJATR   Object attribute    CHAR(10)       PF-DTA, PF-SRC, LF, PF38-DTA, PF38-SRC, LF38                                                                  OBJECT ATTRIBUTE
Column headings TIMESTAMP COMMAND NAME DGDFN SHORT NAME DGDFN NAME SYSTEM 1 SYSTEM 2 DATA SOURCE SYSTEM 1 DLO SYSTEM 2 DLO CCSID CNTRYID LANGID COMPARED ATTRIBUTE SYSTEM 1 INDICATOR
*SYS1, *SYS2 User-defined name User-defined name User-defined name System-defined name System-defined name See Attributes compared and expected results #DLOATR audit on page 606 See Table 87 in Where was the difference detected on page 589
Table 115. CMPDLOA Output file (MXCMPDLOA) Field SYS2IND DIFIND SYS1VAL SYS1CCSID SYS2VAL SYS2CCSID Description System 2 file indicator Differences indicator System 1 value of the specified attribute System 1 value CCSID System 2 value of the specified attribute System 2 value CCSID Type, length CHAR(10) CHAR(10) VARCHAR(2048) MINLEN(50) BIN(5) VARCHAR(2048) MINLEN(50) BIN(5) Valid values See Table 87 in Where was the difference detected on page 589 See What attribute differences were detected on page 587 See Attributes compared and expected results #DLOATR audit on page 606 1-65535 See Attributes compared and expected results #DLOATR audit on page 606 1-65535 Column headings SYSTEM 2 INDICATOR DIFFERENCE INDICATOR SYSTEM 1 VALUE SYSTEM 1 CCSID SYSTEM 2 VALUE SYSTEM 2 CCSID
Table 116. CMPFILA Output file (MXCMPFILA) Field CMPATR SYS1IND SYS2IND DIFIND SYS1VAL SYS1CCSID SYS2VAL SYS2CCSID ASPDEV1 Description Compared attribute System 1 file indicator System 2 file indicator Differences indicator System 1 value of the specified attribute System 1 value CCSID System 2 value of the specified attribute System 2 value CCSID System 1 ASP device Type, length CHAR(10) CHAR(10) CHAR(10) CHAR(10) VARCHAR(2048) MINLEN(50) BIN(5) VARCHAR(2048) MINLEN(50) BIN(5) CHAR(10) Valid values See Attributes compared and expected results #FILATR, #FILATRMBR audits on page 591. See Table 87 in Where was the difference detected on page 589. See Table 87 in Where was the difference detected on page 589. See What attribute differences were detected on page 587. See Attributes compared and expected results #FILATR, #FILATRMBR audits on page 591. 1-65535 See Attributes compared and expected results #FILATR, #FILATRMBR audits on page 591. 1-65535 *NONE, User-defined name Column headings COMPARED ATTRIBUTE SYSTEM 1 INDICATOR SYSTEM 2 INDICATOR DIFFERENCE INDICATOR SYSTEM 1 VALUE SYSTEM 1 CCSID SYSTEM 2 VALUE SYSTEM 2 CCSID SYSTEM 1 ASP DEVICE SYSTEM 2 ASP DEVICE
ASPDEV2
CHAR(10)
Table 117. Compare File Data (CMPFILDTA) output file (MXCMPFILD) Field TIMESTAMP COMMAND DGSHRTNM DGNAME Description Timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm) Command name Data group short name Data group definition name Type, length TIMESTAMP CHAR(10) CHAR(3) CHAR(10) Valid values SAA timestamp CMPFILDTA Short data group name User-defined data group name *blank if no DG specified on the command User-defined system name *local system name if no DG specified User-defined system name *remote system name if no DG specified *SYS1, *SYS2 Column headings TIMESTAMP COMMAND NAME DGDFN SHORT NAME DGDFN NAME
SYSTEM1
System 1
CHAR(8)
SYSTEM 1
SYSTEM2
System 2
CHAR(8)
SYSTEM 2
DTASRC
Data source
CHAR(10)
DATA SOURCE
Table 117. Compare File Data (CMPFILDTA) output file (MXCMPFILD) Field SYS1OBJ SYS1LIB MBR SYS2OBJ SYS2LIB OBJTYPE DIFIND REPAIRSYS FILEREP TOTRCDS Description System 1 object name System 1 library name Member name System 2 object name System 2 library name Object type Differences indicator Repair system File repair successful Total records compared Type, length CHAR(10) CHAR(10) CHAR(10) CHAR(10) CHAR(10) CHAR(10) CHAR(10) CHAR(10) CHAR(10) DECIMAL(20) Valid values User-defined name User-defined name User-defined name User-defined name User-defined name *FILE What attribute differences were detected on page 587 *SYS1, *SYS2 Blank, *YES, *NO 0 - 99999999999999999999 Column headings SYSTEM 1 OBJECT SYSTEM 1 LIBRARY MEMBER SYSTEM 2 OBJECT SYSTEM 2 LIBRARY OBJECT TYPE DIFFERENCE INDICATOR REPAIR SYSTEM FILE REPAIR SUCCESSFUL TOTAL RECORDS COMPARED MAJOR MISMATCHES BEFORE PROCESSING MAJOR MISMATCHES AFTER PROCESSING MINOR MISMATCHES AFTER PROCESSING
MAJMISMBEF
DECIMAL(20)
0 - 99999999999999999999
MAJMISMAFT
DECIMAL(20)
0 - 99999999999999999999
MINMISMAFT
DECIMAL(20)
0 - 99999999999999999999
637
Table 117. Compare File Data (CMPFILDTA) output file (MXCMPFILD) Field APYPENDING Description Apply pending records Type, length DECIMAL(20) Valid values 0 - 99999999999999999999 Column headings ACTIVE RECORDS PENDING SYSTEM 1 ASP DEVICE SYSTEM 2 ASP DEVICE TEMPORARY TARGET SQL VIEW
System 1 ASP device System 2 ASP device Temporary target system SQL view pathname
*NONE, User-defined name *NONE, User-defined name i5/OS-format path name or blanks
638
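Because the MXCMPFILD outfile is an ordinary database file, its contents can be queried with SQL. The following sketch lists compared members that still show mismatches after CMPFILDTA processing; the name MYLIB.CMPOUT is a placeholder for wherever the outfile was directed, not a name supplied by the product.

```sql
-- Compared members that still have mismatches after processing,
-- worst offenders first. MYLIB.CMPOUT is a hypothetical outfile copy.
SELECT SYS1LIB, SYS1OBJ, MBR, DIFIND,
       TOTRCDS, MAJMISMAFT, MINMISMAFT
  FROM MYLIB.CMPOUT
 WHERE MAJMISMAFT > 0
    OR MINMISMAFT > 0
 ORDER BY MAJMISMAFT DESC, MINMISMAFT DESC
```

A query like this can be run from interactive SQL or embedded in a monitoring job once the outfile has been generated.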
Table 118

  SYSTEM2: System 2. CHAR(8). Heading: SYSTEM 2.
  System 1 object name. Heading: SYSTEM 1 OBJECT.
  System 1 library name. Heading: SYSTEM 1 LIBRARY.
  Member name. Heading: MEMBER.
  System 2 object name. Heading: SYSTEM 2 OBJECT.
  System 2 library name. Heading: SYSTEM 2 LIBRARY.
  Relative record number. Heading: RRN.
  System 1 ASP device. Heading: SYSTEM 1 ASP DEVICE.
  System 2 ASP device. Heading: SYSTEM 2 ASP DEVICE.
Table 119. Compare Record Count (CMPRCDCNT) output file (MXCMPRCDC)

  TIMESTAMP: Timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm). TIMESTAMP; SAA timestamp. Heading: TIMESTAMP.
  COMMAND: Command name. CHAR(10); CMPRCDCNT. Heading: COMMAND NAME.
  DGSHRTNM: Data group short name. CHAR(3); short data group name. Heading: DGDFN SHORT NAME.
  DGNAME: Data group definition name. CHAR(10); user-defined data group name; blank if no data group is specified on the command. Heading: DGDFN NAME.
  SYSTEM1: System 1. CHAR(8); user-defined system name; the local system name is used if no data group is specified. Heading: SYSTEM 1.
  SYSTEM2: System 2. CHAR(8); user-defined system name; the remote system name is used if no data group is specified. Heading: SYSTEM 2.
  Data source; *SYS1, *SYS2. Heading: DATA SOURCE.
  System 1 object name; user-defined name.
  System 1 library name; user-defined name.
  Member name; user-defined name.
  DIFIND: Differences indicator. CHAR(10); refer to the differences indicator table. Heading: DIFFERENCE INDICATOR.
  SYS1CURCNT: System 1 current records. DECIMAL(20); 0 - 99999999999999999999. Heading: SYSTEM 1 CURRENT RECORDS.
  SYS2CURCNT: System 2 current records. DECIMAL(20); 0 - 99999999999999999999. Heading: SYSTEM 2 CURRENT RECORDS.
  SYS1DLTCNT: System 1 deleted records. DECIMAL(20); 0 - 99999999999999999999. Heading: SYSTEM 1 DELETED RECORDS.
  SYS2DLTCNT: System 2 deleted records. DECIMAL(20); 0 - 99999999999999999999. Heading: SYSTEM 2 DELETED RECORDS.
  ASPDEV1: System 1 ASP device. CHAR(10). Heading: SYSTEM 1 ASP DEVICE.
  ASPDEV2: System 2 ASP device. CHAR(10). Heading: SYSTEM 2 ASP DEVICE.
  ACTRCDPND: Active records pending. DECIMAL(20); 0 - 99999999999999999999. Heading: ACTIVE RECORDS PENDING.
Table 120. CMPIFSA output file (MXCMPIFSA)

  SYS2IND: System 2 file indicator. CHAR(10); see Table 87 in "Where was the difference detected" on page 589. Heading: SYSTEM 2 INDICATOR.
  DIFIND: Differences indicator. CHAR(10); see "What attribute differences were detected" on page 587. Heading: DIFFERENCE INDICATOR.
  SYS1VAL: System 1 value of the specified attribute. VARCHAR(2048) MINLEN(50); see "Attributes compared and expected results - #IFSATR audit" on page 604. Heading: SYSTEM 1 VALUE.
  SYS1CCSID: System 1 value CCSID. BIN(5); 1-65535. Heading: SYSTEM 1 CCSID.
  SYS2VAL: System 2 value of the specified attribute. VARCHAR(2048) MINLEN(50); see "Attributes compared and expected results - #IFSATR audit" on page 604. Heading: SYSTEM 2 VALUE.
  SYS2CCSID: System 2 value CCSID. BIN(5); 1-65535. Heading: SYSTEM 2 CCSID.
Table 121. CMPOBJA output file (MXCMPOBJA)

  CMPATR: Compared attribute. CHAR(10); see "Attributes compared and expected results - #OBJATR audit" on page 596. Heading: COMPARED ATTRIBUTE.
  SYS1IND: System 1 file indicator. CHAR(10); see Table 87 in "Where was the difference detected" on page 589. Heading: SYSTEM 1 INDICATOR.
  SYS2IND: System 2 file indicator. CHAR(10); see Table 87 in "Where was the difference detected" on page 589. Heading: SYSTEM 2 INDICATOR.
  DIFIND: Differences indicator. CHAR(10); see "What attribute differences were detected" on page 587. Heading: DIFFERENCE INDICATOR.
  SYS1VAL: System 1 value of the specified attribute. VARCHAR(2048) MINLEN(50); see "Attributes compared and expected results - #OBJATR audit" on page 596. Heading: SYSTEM 1 VALUE.
  SYS1CCSID: System 1 value CCSID. BIN(5); 1-65535. Heading: SYSTEM 1 CCSID.
  SYS2VAL: System 2 value of the specified attribute. VARCHAR(2048) MINLEN(50); see "Attributes compared and expected results - #OBJATR audit" on page 596. Heading: SYSTEM 2 VALUE.
  SYS2CCSID: System 2 value CCSID. BIN(5); 1-65535. Heading: SYSTEM 2 CCSID.
  ASPDEV1: System 1 ASP device. CHAR(10); *NONE, user-defined name. Heading: SYSTEM 1 ASP DEVICE.
  ASPDEV2: System 2 ASP device. CHAR(10); *NONE, user-defined name. Heading: SYSTEM 2 ASP DEVICE.
Table 122. MXDGACT outfile (WRKDGACT command)

  Object status; *COMPLETED, *FAILED, *DELAYED, *ACTIVE. Heading: OBJECT STATUS.
  Object type; refer to the OM5100P file for the list of valid object types. Heading: OBJECT TYPE.
  Object attribute; refer to the OM5200P file for the list of valid object attributes. Heading: OBJECT ATTRIBUTE.
  Failure reason; *INUSE, *RESTRICTED, *NOTFOUND, *OTHER, blank. Heading: FAILURE REASON.
  Entry count; 0-9999 (9999 = maximum value supported). Heading: ENTRY COUNT.
  Object category; *DLO, *IFS, *SPLF, *LIB. Heading: OBJECT CATEGORY.
  Object library; user-defined name, blank. Heading: OBJECT LIBRARY.
  Object name; user-defined name, blank. Heading: OBJECT.
  Member name; user-defined name, blank. Heading: MEMBER.
  DLO name; user-defined name, blank. Heading: DLO.
  FLR: Folder name. CHAR(63); user-defined name, blank. Heading: FOLDER.
  SPLFJOB: Spooled file job name. CHAR(26); three-part spooled file name, blank. Heading: SPLF JOB.
  SPLF: Spooled file name. CHAR(10); user-defined name, blank. Heading: SPLF NAME.
  SPLFNBR: Spooled file number. PACKED(7 0); 1-99999, blank. Heading: SPLF NUMBER.
  OUTQ: Output queue. CHAR(10); user-defined name, *NONE, blank. Heading: OUTQ.
  OUTQLIB: Output queue library. CHAR(10); user-defined name, *NONE, blank. Heading: OUTQ LIBRARY.
  IFS: Object IFS name. CHAR(1024) VARLEN(100); user-defined name, blank. Heading: IFS OBJECT.
  CCSID: Object CCSID. BIN(5 0); defaults to the job CCSID. If the value cannot be converted to the job's CCSID, or the job CCSID is 65535, related fields are written in Unicode. Heading: CCSID.
  IFSUCS: IFS object name (Unicode).
  Journal sequence number.
  JRNCODE: Journal code. CHAR(1).
Table 123. MXDGACTE outfile (WRKDGACTE command)

  JRNTYPE: Journal entry type. CHAR(2); valid journal entry types. Heading: JOURNAL ENTRY TYPE.
  JRNTSP: Journal entry timestamp. TIMESTAMP; YYYY-MM-DD.HH.MM.SS.mmmmmm. Heading: JOURNAL ENTRY TIMESTAMP.
  JRNSNDTSP: Journal entry send timestamp. TIMESTAMP; YYYY-MM-DD.HH.MM.SS.mmmmmm. Heading: JOURNAL ENTRY SEND TIMESTAMP.
  JRNRCVTSP: Journal entry receive timestamp. TIMESTAMP; YYYY-MM-DD.HH.MM.SS.mmmmmm. Heading: JOURNAL ENTRY RCV TIMESTAMP.
  JRNRTVTSP: Journal entry retrieve timestamp. TIMESTAMP; YYYY-MM-DD.HH.MM.SS.mmmmmm. Heading: JOURNAL ENTRY RTV TIMESTAMP.
  CNRSNDTSP: Container send timestamp. TIMESTAMP; YYYY-MM-DD.HH.MM.SS.mmmmmm. Heading: CONTAINER SEND TIMESTAMP.
  JRNAPYTSP: Journal entry apply timestamp. TIMESTAMP; YYYY-MM-DD.HH.MM.SS.mmmmmm. Heading: JOURNAL ENTRY APY TIMESTAMP.
  REQCNRSND: Requires container send. CHAR(10); *YES, *NO. Heading: REQUIRES CONTAINER SEND.
  RTYWAIT: Waiting for retry. CHAR(10). Heading: WAITING FOR RETRY.
  RTYATTEMPT: Number of retries attempted. Heading: NUMBER OF RETRIES ATTEMPTED.
  RTYREMAIN: Number of retries remaining; 0-1998. Heading: NUMBER OF RETRIES REMAINING.
Table 123. MXDGACTE outfile (WRKDGACTE command)

  DLYITV: Delay interval. PACKED(5 0); 1-7200. Heading: DELAY INTERVAL.
  NXTRTYTSP: Next retry timestamp. TIMESTAMP; YYYY-MM-DD.HH.MM.SS.mmmmmm. Heading: NEXT RETRY TIMESTAMP.
  MSGID: Message ID. CHAR(7); valid message ID, blank. Heading: MESSAGE ID.
  MSG: Message data. CHAR(256) VARLEN(50); valid message data, blank. Heading: MESSAGE DATA.
  FAILEDJOB: Failed job name. CHAR(26); job name, blank. Heading: FAILED JOB NAME.
  JRNENT: Journal entry. CHAR(400); journal entry. Heading: JOURNAL ENTRY.
  OBJLIB: Object library. CHAR(10); user-defined name, blank. Heading: OBJECT LIBRARY.
  OBJ: Object name. CHAR(10); user-defined name, blank. Heading: OBJECT.
  OBJMBR: Member name. CHAR(10); user-defined name, blank. Heading: MEMBER.
  DLO: DLO name. CHAR(12); user-defined name, blank. Heading: DLO.
  FLR: Folder name. CHAR(63); user-defined name, blank. Heading: FOLDER.
  SPLFJOB: Spooled file job name. CHAR(26); three-part spooled file name, blank. Heading: SPLF JOB.
  SPLF: Spooled file name. CHAR(10); user-defined name, blank. Heading: SPLF NAME.
  SPLFNBR: Spooled file number. PACKED(7 0); 1-99999, blank. Heading: SPLF NUMBER.
  OUTQ: Output queue. CHAR(10); user-defined name, *NONE, blank. Heading: OUTQ.
  OUTQLIB: Output queue library. CHAR(10); user-defined name, *NONE, blank. Heading: OUTQ LIBRARY.
  IFS: Object IFS name. CHAR(1024) VARLEN(100); user-defined name, blank. Heading: IFS OBJECT.
Table 123. MXDGACTE outfile (WRKDGACTE command)

  CCSID: Object CCSID. BIN(5 0); defaults to the job CCSID. If the value cannot be converted to the job's CCSID, or the job CCSID is 65535, related fields are written in Unicode. Heading: CCSID.
  TGTOBJLIB: Target system object library name. CHAR(10); user-defined name, blank. Heading: TARGET OBJECT LIBRARY.
  Target system object name; user-defined name, blank. Heading: TARGET OBJECT.
  Target system object member name; user-defined name, blank. Heading: TARGET MEMBER.
  Target system DLO name; user-defined name, blank. Heading: TARGET DLO.
  Target system object folder name; user-defined name, blank. Heading: TARGET FOLDER.
  Target system spooled file job name; three-part spooled file name, blank. Heading: TARGET SPLF JOB.
  Target system spooled file name; user-defined name, blank. Heading: TARGET SPLF NAME.
  Target system spooled file job number; 1-999999, blank. Heading: TARGET SPLF NUMBER.
  TGTOUTQ: Target system output queue. CHAR(10). Heading: TARGET OUTQ.
  TGTOUTQLIB: Target system output queue library. CHAR(10). Heading: TARGET OUTQ LIBRARY.
  TGTIFS: Target system IFS name. CHAR(1024) VARLEN(100). Heading: TARGET IFS OBJECT.
Table 123. MXDGACTE outfile (WRKDGACTE command)

  RNMOBJLIB: Renamed object library name. CHAR(10); user-defined name, blank. Heading: RENAMED OBJECT LIBRARY.
  Renamed object name; user-defined name, blank. Heading: RENAMED OBJECT.
  Renamed object member name; user-defined name, blank. Heading: RENAMED MEMBER.
  Renamed DLO name; user-defined name, blank. Heading: RENAMED DLO.
  Renamed object folder name; user-defined name, blank. Heading: RENAMED FOLDER.
  Renamed spooled file job name; three-part spooled file name, blank. Heading: RENAMED SPLF JOB.
  Renamed spooled file name. CHAR(10); user-defined name, blank. Heading: RENAMED SPLF NAME.
  Renamed spooled file number. PACKED(7 0); 1-999999, blank. Heading: RENAMED SPLF NUMBER.
  RNMOUTQ: Renamed output queue. CHAR(10). Heading: RENAMED OUTQ.
  RNMOUTQLIB: Renamed output queue library. CHAR(10). Heading: RENAMED OUTQ LIBRARY.
  RNMIFS: Renamed IFS object name. Heading: RENAMED IFS OBJECT.
  Renamed target object library name. Heading: RENAMED TGT OBJECTS LIBRARY.
Table 123. MXDGACTE outfile (WRKDGACTE command)

  RNMTGTOBJ: Renamed target object name. CHAR(10); user-defined name, blank. Heading: RENAMED TARGET OBJECT.
  RNMTOBJMBR: Renamed target object member name. CHAR(10). Heading: RENAMED TARGET OBJ MEMBER.
  RNMTGTDLO: Renamed target object DLO name. CHAR(12). Heading: RENAMED TARGET DLO.
  RNMTGTFLR: Renamed target object folder name. CHAR(63). Heading: RENAMED TARGET FOLDER.
  RNMTSPLFJ: Renamed target spooled file job name. CHAR(26); three-part spooled file name, blank. Heading: RENAMED TARGET SPLF JOB.
  RNTTGTSPLF: Renamed target spooled file name. CHAR(10); user-defined name, blank. Heading: RENAMED TARGET SPLF NAME.
  RNMTSPLFN: Renamed target spooled file number. PACKED(7 0); 1-999999, blank. Heading: RENAMED TARGET SPLF NUMBER.
  RNMTGTOUTQ: Renamed target output queue. CHAR(10). Heading: RENAMED TARGET OUTQ.
  RNMTOUTQL: Renamed target output queue library. CHAR(10). Heading: RENAMED TARGET OUTQ LIBRARY.
  RNMTGTIFS: Renamed target IFS object name. CHAR(1024) VARLEN(100). Heading: RENAMED TARGET IFS OBJECT.
Table 123. MXDGACTE outfile (WRKDGACTE command)

  COOPDB: Cooperate with DB. CHAR(10); *YES, *NO, blank. Heading: COOPERATE WITH DATABASE.
  IFS object file identifier (binary format); binary representation of the file identifier. Heading: IFS OBJECT FID (Binary).
  IFS object file identifier (character format); character representation of the file identifier. Heading: IFS OBJECT FID (Hex).
  IFS object (Unicode). GRAPHIC(512) VARLEN(75) CCSID(13488); user-defined name (Unicode), blank. Heading: IFS Object (UNICODE).
  TGTIFSUCS: Target IFS object (Unicode). GRAPHIC(512) VARLEN(75) CCSID(13488); user-defined name (Unicode), blank. Heading: TGT IFS Object (UNICODE).
  RNMIFSUCS: Renamed IFS object (Unicode). GRAPHIC(512) VARLEN(75) CCSID(13488); user-defined name (Unicode), blank. Heading: RNM IFS Object (UNICODE).
  RNMTGTIFSU: Renamed target IFS object (Unicode). GRAPHIC(512) VARLEN(75) CCSID(13488); user-defined name (Unicode), blank. Heading: RNM TGT IFS Object (UNICODE).
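The MXDGACTE outfile lends itself to the same kind of SQL selection as the other outfiles. The following sketch surfaces activity entries that recorded an error message, newest first; MYLIB.DGACTE is a hypothetical name for the generated outfile, not a name supplied by the product.

```sql
-- Activity entries carrying an error message ID, newest first.
-- MYLIB.DGACTE is a hypothetical copy of the WRKDGACTE outfile.
SELECT JRNTSP, OBJLIB, OBJ, OBJMBR, MSGID, FAILEDJOB
  FROM MYLIB.DGACTE
 WHERE MSGID <> ' '
 ORDER BY JRNTSP DESC
```

The MSGID and FAILEDJOB columns identify the message and job to investigate for each failed entry.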
  DTAARA2: CHAR(10).
  DTAARALIB2: CHAR(10).
  TEXT: CHAR(50).
  RTVERR: CHAR(10).
  JRNDFN1NM: CHAR(10).
  JRNDFN2SYS: CHAR(8).
Table 125. MXDGDFN outfile (WRKDGDFN command)

  JRNDFN2: Configured system 2 journal definition. CHAR(10); *DGDFN, user-defined name, *NONE. Heading: CONFIGURED SYSTEM 2 JRNDFN.
  JRNDFN2NM: Actual system 2 journal definition. CHAR(10); user-defined name, blank. Heading: ACTUAL SYSTEM 2 JRNDFN.
  System 2 journal definition system name. Heading: JRNDFN SYSTEM 2.
  Use remote journal link. Heading: RJ LINK.
  Number of DB apply sessions. Heading: CURRENT NUMBER OF DB APPLIES.
  RQSDBAPY: Requested number of DB apply sessions. PACKED(3 0); 1-6. Heading: REQUESTED NUMBER OF DB APPLIES.
  DBBFRIMG: Before images (DB journal entry processing). CHAR(10); *IGNORE, *SEND. Heading: DBJRNPRC BEFORE IMAGES.
  DBNOTINDG: For files not in data group (DB journal entry processing). CHAR(10); *IGNORE, *SEND. Heading: DBJRNPRC FILES NOT IN DG.
  DBMMXGEN: Generated by MIMIX activity (DB journal entry processing). CHAR(10); *IGNORE, *SEND. Heading: DBJRNPRC GEND BY MIMIX ACT.
  Not used by MIMIX (DB journal entry processing). Heading: DBJRNPRC NOT USED BY MIMIX.
  Description. Heading: DESCRIPTION.
  Synchronization check interval. Heading: SYNC CHECK INTERVAL.
  Time stamp interval. Heading: TIME STAMP INTERVAL.
Table 125. MXDGDFN outfile (WRKDGDFN command)

  VFYITV: Verify interval. PACKED(5 0); 1000-999999. Heading: VERIFICATION INTERVAL.
  DTAARAITV: Data area polling interval. PACKED(5 0); 1-7200. Heading: DATA AREA POLLING INTERVAL.
  Number of times to retry. Heading: NUMBER OF RETRIES.
  First retry delay interval. Heading: FIRST RETRY INTERVAL.
  Second retry delay interval. Heading: SECOND RETRY INTERVAL.
  Adaptive cache. Heading: USE ADAPTIVE CACHE.
  Data cluster resource group. Heading: DATA CRG.
  Journal image (File entry options). Heading: FEOPT JOURNAL IMAGES.
  DFTOPNCLO: Omit open/close entries (File entry options). CHAR(10). Heading: FEOPT OMIT OPEN CLOSE.
  DFTREPTYPE: Replication type (File entry options). CHAR(10). Heading: FEOPT REPLICATION TYPE.
  Lock member during apply (File entry options). Heading: FEOPT LOCK MBR ON APPLY.
  Configured apply session (File entry options). Heading: FEOPT CFG APPY SESSION.
  Collision resolution (File entry options). Heading: FEOPT COLLISION RESOLUTION.
Table 125. MXDGDFN outfile (WRKDGDFN command)

  DFTSBTRG: Disable triggers during apply (File entry options). CHAR(10); *YES, *NO. Heading: FEOPT DISABLE TRIGGERS.
  DFTPRCCST: Process constraint entries (File entry options). CHAR(10); *YES. Heading: FEOPT PROCESS CONSTRAINT.
  DBFRCITV: Force data interval (Database apply processing). PACKED(5 0); 1-99999. Heading: DBAPYPRC FORCE DATA.
  DBMAXOPN: Maximum open members (Database apply processing). PACKED(5 0); 50-32767. Heading: DBAPYPRC MAX OPEN MEMBERS.
  DBAPYTWRN: Threshold warning (Database apply processing). PACKED(7 0); 0, 100-9999999. Heading: DBAPYPRC THRESHOLD WARNING.
  Apply history log spaces (Database apply processing). Heading: DBAPYPRC HISTORY.
  Keep journal log spaces (Database apply processing). Heading: DBAPYPRC KEEP JRN.
  Size of log spaces, MB (Database apply processing). Heading: DBAPYPRC SIZE OF LOG SPACES.
  OBJDFTOWN: Object default owner (Object processing). CHAR(10); user-defined name. Heading: OBJPRC DEFAULT OWNER.
  OBJDLOMTH: DLO transfer method (Object processing). CHAR(10); *OPTIMIZED, *SAVRST. Heading: OBJPRC DLO TRANSFER METHOD.
  OBJIFSMTH: IFS transfer method (Object processing). CHAR(10); *SAVRST, *OPTIMIZED. Heading: OBJPRC IFS TRANSFER METHOD.
Table 125. MXDGDFN outfile (WRKDGDFN command)

  OBJUSRSTS: User profile status (Object processing). CHAR(10); *SRC, *TGT, *ENABLE, *DISABLE. Heading: OBJPRC USER PROFILE STATUS.
  OBJKEEPSPL: Keep deleted spooled files (Object processing). CHAR(10); *YES, *NO. Heading: OBJPRC KEEP DELETED SPLF.
  OBJKEEPDLO: Keep DLO system name (Object processing). CHAR(10); *YES, *NO. Heading: OBJPRC KEEP DLO SYS NAME.
  OBJRTVDLY: Retrieve delay (Object retrieve processing). PACKED(3 0); 0-999. Heading: OBJRTVPRC DELAY.
  OBJRTVMINJ: Minimum number of jobs (Object retrieve processing). PACKED(3 0); 1-99. Heading: OBJRTVPRC MIN NUMBER OF JOBS.
  OBJRTVMAXJ: Maximum number of jobs (Object retrieve processing). PACKED(3 0); 1-99. Heading: OBJRTVPRC MAX NUMBER OF JOBS.
  OBJRTVTHLD: Threshold for more jobs (Object retrieve processing). PACKED(5 0); 1-99999. Heading: OBJRTVPRC THLD FOR MORE JOBS.
  CNRSNDMINJ: Minimum number of jobs (Container send processing). PACKED(3 0); 1-99. Heading: CNRSNDPRC MIN NUMBER OF JOBS.
  CNRSNDMAXJ: Maximum number of jobs (Container send processing). PACKED(3 0); 1-99. Heading: CNRSNDPRC MAX NUMBER OF JOBS.
  CNRSNDTHLD: Threshold for more jobs (Container send processing). PACKED(5 0); 1-99999. Heading: CNRSNDPRC THLD FOR MORE JOBS.
Table 125. MXDGDFN outfile (WRKDGDFN command)

  OBJAPYMINJ: Minimum number of jobs (Object apply processing); 1-99. Heading: OBJAPYPRC MIN NUMBER OF JOBS.
  OBJAPYMAXJ: Maximum number of jobs (Object apply processing); 1-99. Heading: OBJAPYPRC MAX NUMBER OF JOBS.
  OBJAPYTHLD: Threshold for more jobs (Object apply processing). PACKED(5 0); 1-99999. Heading: OBJAPYPRC THLD FOR MORE JOBS.
  OBJAPYTWRN: Threshold for warning messages (Object apply processing). PACKED(5 0); 0, 50-99999 (0 = *NONE). Heading: OBJAPYPRC THLD FOR WARNING MSGS.
  User profile for submit job; *JOBD, *CURRENT. Heading: USRPRF FOR SUBMIT JOB.
  Send job description; job description name. Heading: SEND JOBD.
  Send job description library; job description library. Heading: SEND JOBD LIBRARY.
  Apply job description; job description name. Heading: APPLY JOBD.
  Apply job description library; job description library. Heading: APPLY JOBD LIBRARY.
  Reorganize job description; job description name. Heading: REORGANIZE JOBD.
  Reorganize job description library; job description library. Heading: REORGANIZE JOBD LIBRARY.
  Synchronize job description; job description name. Heading: SYNC JOBD.
  Synchronize job description library; job description library. Heading: SYNC JOBD LIBRARY.
Table 125. MXDGDFN outfile (WRKDGDFN command)

  SAVACT: Save while active (seconds). PACKED(5 0); -1, 0, 1-999999 (0 = save while active for files only, with a 120-second wait time; -1 = no save while active; 1-999999 = save while active for all object types with the specified wait time). Heading: SAVE WHILE ACTIVE (SEC).
  RSTARTTIME: Restart time. CHAR(8); 000000-235959, *NONE, *SYSDFN1, *SYSDFN2; 000000 = midnight (default). Heading: RESTART TIME.
  System 1 ASP group; *NONE, user-defined name. Heading: SYSTEM 1 ASP GROUP.
  System 2 ASP group; *NONE, user-defined name. Heading: SYSTEM 2 ASP GROUP.
  Cooperative journal; *SYSJRN, *USRJRN. Heading: COOPERATIVE JOURNAL.
  Recovery window; *NONE, *ALLAPY. Heading: RECOVERY WINDOW.
  Process recovery window duration; 0-99999. Heading: PROCESS RECOVERY DURATION.
  Journal at creation; *DFT, *YES, *NO. Heading: JOURNAL AT CREATION.
  RJ link threshold (time in minutes); 0-9999 (0 = *NONE). Heading: RJLNK THRESHOLD (TIME IN MIN).
  RJLNKTHLDE: RJ link threshold (number of journal entries). PACKED(7 0); 0, 1000-9999999 (0 = *NONE). Heading: RJLNK THRESHOLD (NBR OF JRNE).
  DBSNDTHLDM: DB send/reader threshold (time in minutes). PACKED(4 0); 0-9999 (0 = *NONE). Heading: DBSND/DBRDR THRESHOLD (TIME IN MIN).
Table 125. MXDGDFN outfile (WRKDGDFN command)

  DBSNDTHLDE: DB send/reader threshold (number of journal entries). PACKED(7 0); 0, 1000-9999999 (0 = *NONE). Heading: DBSND/DBRDR THRESHOLD (NBR OF JRNE).
  OBJSNDTHDM: Object send threshold (time in minutes). PACKED(4 0); 0-9999 (0 = *NONE). Heading: OBJSND THRESHOLD (TIME IN MIN).
  OBJSNDTHDE: Object send threshold (number of journal entries). PACKED(7 0); 0, 1000-9999999 (0 = *NONE). Heading: OBJSND THRESHOLD (NBR OF JRNE).
  OBJRTVTHDE: Object retrieve threshold (number of activity entries). PACKED(5 0). Heading: OBJRTV THRESHOLD.
  CNRSNDTHDE: Container send threshold (number of activity entries). PACKED(5 0). Heading: CNRSND THRESHOLD.
  DGSYS2: CHAR(8); user-defined name.
  PRCTYPE: CHAR(10).
  OBJRTVDLY: PACKED(3 0).
  Other valid values on this page: user-defined name, *ALL; user-defined name, *ALL; *FLR1, user-defined name; *DOC1, user-defined name; *CHANGE, *ALL, *NONE.
  System 1 member name. CHAR(10).
  System 2 file name. CHAR(10).
  System 2 library name. CHAR(10).
  System 2 member name. CHAR(10).
  Description. CHAR(50).
  Journal image (File entry options). CHAR(10).
  OPNCLO: Omit open/close entries (File entry options). CHAR(10).
Table 127. MXDGFE outfile (WRKDGFE command)

  REPTYPE: Replication type (File entry options). CHAR(10); *POSITION, *KEYED, *DGDFT. Heading: FEOPT REPLICATION TYPE.
  APYLOCK: Lock member during apply (File entry options). CHAR(10). Heading: FEOPT LOCK MBR ON APPLY.
  FTRBFRIMG: Filter before image (File entry options). CHAR(10). Heading: FEOPT FILTER BFR IMAGE.
  APYSSN: Current apply session (File entry options). CHAR(10). Heading: FEOPT CURRENT APYSSN.
  RQSAPYSSN: Configured or requested apply session (File entry options). CHAR(10); A-F, *DGDFT. Heading: FEOPT REQUESTED APYSSN.
  CRCLS: Collision resolution class (File entry options). CHAR(10). Heading: FEOPT COLLISION RESOLUTION.
  DSBTRG: Disable triggers during apply (File entry options). CHAR(10). Heading: FEOPT DISABLE TRIGGERS.
  PRCTRG: Process trigger entries (File entry options). CHAR(10). Heading: FEOPT PROCESS TRIGGERS.
  PRCCST: Process constraint entries (File entry options). CHAR(10); *YES. Heading: FEOPT PROCESS CONSTRAINTS.
  STATUS: File status. CHAR(10); *ACTIVE, *RLSWAIT, *RLSCLR, *HLD, *HLDIGN, *RLS, *HLDRGZ, *HLDPRM, *HLDRNM, *HLDSYNC, *HLDRTY, *HLDERR, *HLDRLTD, *CMPACT, *CMPRLS, *CMPRPR. Heading: CURRENT STATUS.
  RQSSTS: Requested file status. CHAR(10); *ACTIVE, *HLD, *HLDIGN, *RLS, *RLSWAIT. Heading: REQUESTED STATUS.
  JRN1STS: System 1 journaled. CHAR(10); *YES, *NO, *NA. Heading: SYSTEM 1 JOURNALED.
  JRN2STS: System 2 journaled. CHAR(10); *YES, *NO, *NA. Heading: SYSTEM 2 JOURNALED.
  ERRCDE: Error code. CHAR(2); valid error codes. Heading: ERROR CODE.
  JECDE: Journal entry code. CHAR(1); valid journal entry codes. Heading: JOURNAL ENTRY CODE.
  JETYPE: Journal entry type. CHAR(2); valid journal entry types. Heading: JOURNAL ENTRY TYPE.
  Process type. Heading: PROCESS TYPE.
  Object type. Heading: OBJECT TYPE.
  Retrieve delay (Object retrieve processing). Heading: OBJRTVPRC DELAY.
  Cooperate with database. Heading: COOPERATE WITH DATABASE.
  OBJAUD: Object auditing. CHAR(10). Heading: OBJECT AUDITING VALUE.
The Work with Data Groups (WRKDG) command generates new outfiles based on the MXDGSTSF record format from the MXDGSTS model database file supplied by Lakeview Technology. The content of the outfile is based on the criteria specified on the command. If no differences are found, the file is empty.

Usage notes:
- When the value *UNKNOWN is returned for either the Data group source system status (DTASRCSTS) field or the Data group target system status (DTATGTSTS) field, status information is not available from the system that is remote relative to where the request was made. For example, if you requested the report from the target system and the value returned for DTASRCSTS is *UNKNOWN, the WRKDG request could not communicate with the source system. Fields that rely on data collected from the remote system will be blank.
- If a data group is configured for only database or only object replication, any fields associated with processes not used by the configured type of replication will be blank.
- See "WRKDG outfile SELECT statement examples" on page 696 for examples of how to query the contents of this output file.
- You can automate the process of gathering status. If you use MIMIX Monitor to create a synchronous interval monitor, the monitor can specify the command to generate the outfile. Through exit programs, you can program the monitor to take action based on the status returned in the outfile. For information about creating interval monitors, see the Using MIMIX Monitor book.
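As one illustration of the kind of SELECT statement referenced above, the following sketch flags data groups whose source or target side is not reported as active. The name MYLIB.DGSTS is a placeholder for wherever the outfile was generated; DTASRCSTS and DTATGTSTS are the status fields described for this outfile.

```sql
-- Data groups needing attention: either side not *ACTIVE.
-- MYLIB.DGSTS is a hypothetical copy of the WRKDG outfile.
SELECT DGDFN, DGSYS1, DGSYS2, DTASRCSTS, DTATGTSTS
  FROM MYLIB.DGSTS
 WHERE DTASRCSTS <> '*ACTIVE'
    OR DTATGTSTS <> '*ACTIVE'
```

A monitor exit program could run a query like this after each interval and raise a message when any rows are returned.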
Table 129. MXDGSTS outfile (WRKDG command)

  ENTRYTSP: Entry timestamp. TIMESTAMP; SAA format YYYY-MM-DD-hh.mm.ss.mmmuuu. Heading: TIME REQUEST PROCESSED.
  DGDFN: Data group definition name (Data group definition). CHAR(10); user-defined data group name. Heading: DGDFN NAME.
  DGSYS1: System 1 (Data group definition). CHAR(8); user-defined system name. Heading: DGDFN SYSTEM 1.
Table 129. MXDGSTS outfile (WRKDG command)

  DGSYS2: System 2 (Data group definition). CHAR(8); user-defined system name. Heading: DGDFN SYSTEM 2.
  STSTIME: Elapsed time for data group status (seconds). PACKED(10 0); calculated, 0-9999999999. Heading: ELAPSED TIME.
  STSTIMF: Elapsed time for data group status (HHH:MM:SS). CHAR(10); calculated, 0-9999999. Heading: ELAPSED TIME (HHH:MM:SS).
  STSAVAIL: Data group status retrieved from these systems. CHAR(10); *ALL, *SOURCE, *TARGET, *NONE. Heading: SYS STATUS RETRIEVED FROM.
  Data group source system; user-defined system name. Heading: DG SOURCE SYSTEM.
  DTASRCSTS: Data group source system status; *ACTIVE, *INACTIVE, *UNKNOWN. Heading: DG SOURCE STATUS.
  Data group target system; user-defined system name. Heading: DG TARGET SYSTEM.
  DTATGTSTS: Data group target system status; *ACTIVE, *INACTIVE, *UNKNOWN. Heading: DG TARGET STATUS.
  Switch mode status for system 1; *NONE, *SWITCH. Heading: SYSTEM 1 SWITCH STATUS.
  SWTSTS2: Switch mode status for system 2. CHAR(10); *NONE, *SWITCH. Heading: SYSTEM 2 SWITCH STATUS.
  DGSTS: Data group status summary. CHAR(10). Heading: OVERALL DG STATUS.
  DBCFG: Data group configured for database replication. CHAR(10). Heading: CONFIGURED FOR DB REPLICATION.
  OBJCFG: Data group configured for object replication. CHAR(10); *YES, *NO. Heading: CONFIGURED FOR OBJECT REPLICATION.
Table 129. MXDGSTS outfile (WRKDG command)

  SRCSYSSTS: Source system manager status summation (system manager plus journal manager status). CHAR(10); *ACTIVE, *INACTIVE, *UNKNOWN. Heading: SOURCE MANAGER SUMMATION.
  Database send process status summation (DBSNDPRC); *ACTIVE, *INACTIVE, *UNKNOWN, *NONE, *THRESHOLD. Heading: DB SEND STATUS.
  Object send process status summation (OBJSNDPRC); *ACTIVE, *INACTIVE, *UNKNOWN, *NONE, *THRESHOLD. Heading: OBJECT SEND STATUS.
  Data area polling process status (DTAPOLLPRC); *ACTIVE, *INACTIVE, *UNKNOWN, *NONE. Heading: DATA AREA POLLER STATUS.
  TGTSYSSTS: Target system manager status summation (system manager plus journal manager status). CHAR(10). Heading: TARGET MANAGER SUMMATION.
  Database apply status summation (apply sessions A-F); *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD. Heading: DB APPLY SUMMATION.
  Object apply status summation; *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD. Heading: OBJECT APPLY SUMMATION.
  Total database file entries; 0-99999. Heading: TOTAL DB FILE ENTRIES.
  Active database file entries (FEACT); 0-99999. Heading: ACTIVE DB FILE ENTRIES.
  Inactive database file entries; 0-99999. Heading: INACTIVE DB FILE ENTRIES.
  Database file entries not journaled on source; 0-99999. Heading: FILES NOT JOURNALED ON SOURCE.
  FENOTJRNT: Database file entries not journaled on target. PACKED(5 0); 0-99999. Heading: FILES NOT JOURNALED ON TARGET.
  FEHLDERR: Database file entries held due to error. PACKED(5 0); 0-99999. Heading: FILES HELD FOR ERRORS.
Table 129. MXDGSTS outfile (WRKDG command)

  FEHLDOTHR: Database file entries held for other reasons (FEHLD). PACKED(5 0); 0-99999. Heading: FILES HELD FOR OTHER.
  OBJPENDSRC: Objects in pending status, source system. PACKED(5 0); 0-99999. Heading: OBJECTS PENDING ON SOURCE SYSTEM.
  OBJPENDAPY: Objects in pending status, target system; 0-99999. Heading: OBJECTS PENDING ON TARGET SYSTEM.
  OBJDELAY: Total objects delayed. PACKED(5 0); 0-99999. Heading: TOTAL OBJECTS DELAYED.
  OBJERR: Objects in error. PACKED(5 0); 0-99999. Heading: TOTAL OBJECTS IN ERROR.
  DLO configuration changed. Heading: DLO CONFIG CHANGED.
  IFS configuration changed. Heading: IFS CONFIG CHANGED.
  Object configuration changed. Heading: OBJECT CONFIG CHANGED.
  Primary transfer definition; user-defined transfer definition name. Heading: PRIMARY TFRDFN.
  Secondary transfer definition; user-defined transfer definition name. Heading: SECONDARY TFRDFN.
  Last used transfer definition; user-defined transfer definition name. Heading: LAST USED TFRDFN.
Table 129. MXDGSTS outfile (WRKDG command)

  TFRSTS: Current transfer definition communications status. CHAR(10); *ACTIVE, *INACTIVE. Heading: LAST USED TFRDFN STATUS.
  SRCMGRSTS: Source system manager status. CHAR(10). Heading: SOURCE SYS MANAGER STATUS.
  SRCJRNSTS: Source journal manager status. CHAR(10). Heading: SOURCE JRN MANAGER STATUS.
  CNRSNDSTS: Container send status. CHAR(10); *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD. Heading: CONTAINER SEND STATUS.
  OBJRTVSTS: Object retrieve status. CHAR(10); *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD. Heading: OBJECT RETRIEVE STATUS.
  TGTMGRSTS: Target system manager status. CHAR(10); *ACTIVE, *INACTIVE, *UNKNOWN. Heading: TARGET SYS MANAGER STATUS.
  TGTJRNSTS: Target journal manager status. CHAR(10). Heading: TARGET JRN MANAGER STATUS.
  Current database journal entry receiver name; user-defined value. Heading: DB JRNRCV.
  Current database journal entry receiver library name; user-defined value. Heading: DB JRNRCV LIBRARY.
  Current database journal code and entry type; valid journal entry types and codes. Heading: DB ENTRY TYPE AND CODE.
  CURDBSEQ: Current database journal entry sequence number. PACKED(10 0). Heading: DB ENTRY SEQUENCE.
  CURDBTSP: Current database journal entry timestamp. TIMESTAMP. Heading: DB ENTRY TIMESTAMP.
Table 129. MXDGSTS outfile (WRKDG command)

  CURDBTPH: Current database journal entry transactions per hour. PACKED(15 0); calculated, 0-9999999999999. Heading: DB ARRIVAL RATE.
  RDDBRCV: Last read database journal entry receiver name (DBSNTRCV). CHAR(10); user-defined value. Heading: DB READER JRNRCV.
  RDDBLIB: Last read database journal entry receiver library name. CHAR(10); user-defined value. Heading: DB READER JRNRCV LIBRARY.
  RDDBCODE: Last read database journal code and entry type. CHAR(3). Heading: DB READER TYPE AND ENTRY CODE.
  RDDBSEQ: Last read database journal entry sequence number (DBSNTSEQ). PACKED(10 0); 0-9999999999. Heading: DB READER ENTRY SEQUENCE.
  RDDBTSP: Last read database journal entry timestamp (DBSNTDATE, DBSNTTIME). TIMESTAMP. Heading: DB READER ENTRY TIMESTAMP.
  Last read database journal entry transactions per hour. Heading: DB READER READ RATE.
  Number of database entries not sent. Heading: DB SEND BACKLOG.
  Estimated time to process database entries not sent (seconds). Heading: DB SEND BACKLOG SECONDS.
  DBSNBKTIMF: Estimated time to process database entries not sent (HHH:MM:SS). CHAR(10); calculated, 0-999:99:99. Heading: DB SEND BACKLOG HHH:MM:SS.
  RCVDBRCV: Last received database journal entry receiver name. CHAR(10); user-defined value. Heading: DB LAST RECEIVED JRNRCV.
Table 129. MXDGSTS outfile (WRKDG command)

  RCVDBLIB: Last received database journal entry receiver library name. CHAR(10); user-defined value. Heading: DB LAST RECEIVED JRNRCV LIB.
  RCVDBCODE: Last received database journal code and entry type. CHAR(3); see the IBM OS/400 Backup and Recovery Guide for journal codes and entry types. Heading: DB LAST RCV TYPE AND ENTRY.
  RCVDBSEQ: Last received database journal entry sequence number. PACKED(10 0); 0-9999999999. Heading: DB LAST RECEIVED SEQUENCE.
  RCVDBTSP: Last received database journal entry timestamp. TIMESTAMP. Heading: DB LAST RECEIVED TIMESTAMP.
  RCVDBTPH: Last received database journal entry transactions per hour. PACKED(15 0). Heading: DB RECEIVE ARRIVAL RATE.
  DBAPYREQ: Number of database apply sessions requested. PACKED(5 0). Heading: REQUESTED DB APPLY SESSIONS.
  DBAPYMAX: Number of database apply sessions configured. PACKED(5 0); 1-6. Heading: CONFIGURED DB APPLY SESSIONS.
  DBAPYACT: Number of database apply sessions currently active (DBAPYPRC). PACKED(5 0); 1-6. Heading: ACTIVE DB APPLY SESSIONS.
  Number of database entries not applied. PACKED(15 0). Heading: DB APPLY BACKLOG.
  Estimated time to process database entries not applied (seconds). PACKED(10 0). Heading: DB APPLY TIME SECONDS.
  Estimated time to process database entries not applied (HHH:MM:SS). CHAR(10). Heading: DB APPLY TIME HHH:MM:SS.
Table 129. MXDGSTS outfile (WRKDG command)

  DBAPYTPH: Database apply total transactions per hour. PACKED(15 0); calculated, 0-999999999999999. Heading: DB APPLY PROCESSING RATE.
  DBASTS: Database apply session A status. CHAR(10). Heading: DB APPLY A STATUS.
  DBARCVSEQ: Database apply session A last received sequence number. PACKED(10 0). Heading: DB APPLY A LAST RECEIVED.
  DBAPRCSEQ: Database apply session A last processed sequence number. PACKED(10 0); 0-9999999999. Heading: DB APPLY A LAST PROCESSED.
  DBABKLG: Database apply session A number of unprocessed entries. PACKED(15 0). Heading: DB APPLY A BACKLOG.
  DBABKTIME: Database apply session A estimated time to apply unprocessed transactions (seconds). PACKED(10 0). Heading: DB APPLY A TIME SECONDS.
  DBABKTIMF: Database apply session A estimated time to apply unprocessed transactions (HHH:MM:SS). CHAR(10); calculated, 0-999:99:99. Heading: DB APPLY A TIME HHH:MM:SS.
  DBATPH: Database apply session A number of transactions per hour. PACKED(15 0); calculated, 0-999999999999999. Heading: DB APPLY A PROCESSING RATE.
  DBAOPNCMT: Database apply session A open commit indicator. CHAR(10); *YES, *NO. Heading: DB APPLY A COMMIT INDICATOR.
  DBACMTID: Database apply session A oldest open commit ID. CHAR(10); journal-defined commit ID. Heading: DB APPLY A CURRENT COMMIT ID.
  DBAAPYCODE: Database apply session A last applied journal code and entry type. CHAR(3); see the IBM OS/400 Backup and Recovery Guide for journal codes and entry types. Heading: DB APPLY A TYPE AND ENTRY.
Table 129. MXDGSTS outfile (WRKDG command) Field DBAAPYSEQ DBAAPYTSP Description Database apply session A last applied sequence number Database apply session A last applied journal entry timestamp Database apply session A object to which last transaction was applied Database apply session A library of object to which last transaction was applied Database apply session A member of object to which last transaction was applied. Database apply session A last applied journal entry clock time difference (seconds) Database apply session A last applied journal entry clock time difference (HHH:MM:SS) Database apply session A hold MIMIX log sequence number Repeat the database apply (all DBx fields match session A fields including the DBA fields) reserved information for five other apply sessions with values of x from B-F) Current object journal entry receiver name Current object journal entry receiver library name CHAR(10) CHAR(10) Type, length PACKED(10 0) TIMESTAMP Valid values 0-9999999999 SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu Column headings DB APPLY A LAST APPLIED DB APPLY A LAST TIMESTAMP DB APPLY A OBJECT NAME DB APPLY A LIBRARY NAME DB APPLY A MEMBER NAME DB APPLY A TIME DIFF SECONDS DB APPLY A TIME DIFF HHH:MM:SS DB APPLY A HOLD SEQUENCE All DBx headings match the DBA headings, with x
DBAAPYOBJ DBAAPYLIB
CHAR(10) CHAR(10)
DBAAPYMBR
CHAR(10)
DBAAPYTIME
PACKED(10 0)
Calculated, 0-9999999999
DBAAPYTIMF
CHAR(10)
Calculated, 0-999:99:99
DBAHLDSEQ
PACKED(10 0)
0-9999999999
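Several Table 129 fields pair a seconds value (such as DBABKTIME) with a CHAR(10) HHH:MM:SS rendering (such as DBABKTIMF). The formatting convention can be sketched in Python; `format_backlog` is a hypothetical helper that mirrors the documented 0-999:99:99 layout, not part of MIMIX itself:

```python
def format_backlog(seconds: int) -> str:
    """Render a backlog time, given in seconds, in the HHH:MM:SS layout
    that Table 129 documents for fields such as DBABKTIMF (hypothetical
    helper; MIMIX computes the CHAR(10) field itself)."""
    hours, rem = divmod(seconds, 3600)      # whole hours
    minutes, secs = divmod(rem, 60)
    # the outfile field is three digits of hours, so cap at 999
    return f"{min(hours, 999):03d}:{minutes:02d}:{secs:02d}"
```

For example, `format_backlog(3725)` returns `"001:02:05"`.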
Table 129. MXDGSTS outfile (WRKDG command) Field CUROBJCODE Description Current object journal code and entry type Current object journal entry sequence number Current object journal entry timestamp Type, length CHAR(3) Valid values See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types. 0-9999999999 Column headings OBJECT TYPE AND ENTRY CODES OBJECT JOURNAL SEQUENCES OBJECT JRN ENTRY TIMESTAMP OBJECT ARRIVAL PER HOUR OBJRDRPRC JRNRCV OBJRDRPRC JRNRCV LIBRARY OBJRDRPRC TYPE AND ENTRY CODE OBJRDRPRC JOURNAL SEQUENCE OBJRDRPRC JRN ENTRY TIMESTAMP OBJRDRPRC READ RATE OBJSNDPRC BACKLOG
CUROBJSEQ
PACKED(10 0)
CUROBJTSP
TIMESTAMP
CUROBJTPH
Current object journal entry transactions per hour Last read object journal entry receiver name (OBJSNTRCV) Last read object journal entry receiver library name Last read object journal code and entry type Last read object journal entry sequence number (OBJSNTSEQ) Last read object journal entry timestamp (OBJSNTDATE, OBJSNTTIME) Last read object journal entry transactions per hour Object entries not processed
PACKED(15 0)
0-999999999999999
RDOBJRCV RDOBJLIB
CHAR(10) CHAR(10)
RDOBJCODE
CHAR(3)
See the IBM OS/400 Backup and Recovery Guide for journal entry codes and entry types. 0-9999999999
RDOBJSEQ
PACKED(10 0)
RDOBJTSP
TIMESTAMP
RDOBJTPH OBJSNDBKLG
PACKED(15 0)
Calculated, 0-999999999999999
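The TIMESTAMP fields in this outfile use the SAA format noted for DBAAPYTSP, YYYY-MM-DD-hh.mm.ss.mmmuuu. If you read the outfile off-platform as text, a parse along these lines should work (a sketch; extracting the field values in the first place is environment-specific):

```python
from datetime import datetime

def parse_saa_timestamp(value: str) -> datetime:
    """Parse an SAA-format timestamp (YYYY-MM-DD-hh.mm.ss.mmmuuu), the
    layout Table 129 documents for its TIMESTAMP fields. The trailing
    six digits are microseconds, which %f accepts directly."""
    return datetime.strptime(value.strip(), "%Y-%m-%d-%H.%M.%S.%f")
```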
Table 129. MXDGSTS outfile (WRKDG command) Field OBJSNDNUM Description Number of object entries sent Type, length Valid values Column headings OBJSNDPRC SENT IN TIME SLICE OBJSNDPRC BACKLOG SECONDS OBJSNDPRC BACKLOG HHH:MM:SS OBJRCVPRC LAST RCVD JRNRCV OBJRCVPRC LAST RCVD JRNRCV LIB OBJRCVPRC LAST TYPE AND ENTRY OBJRCVPRC LAST ENTRY SEQUENCE OBJRCVPRC LAST ENTRY TIMESTAMP OBJRCVPRC RECEIVE RATE OBJRTVPRC MIN NUMBER OF JOBS OBJRTVPRC NUMBER OF JOBS 686
OBJSBKTIME
Estimated time to process object entries not sent (seconds) Estimated time to process entries not sent (HHH:MM:SS) Last received object journal entry receiver name Last received object journal entry receiver library name Last received object journal code and entry type Last received object journal entry sequence number Last received object journal entry timestamp Last received object journal entry transactions per hour Minimum number of object retriever processes Active number of object retriever processes (OBJRTVPRC)
PACKED(10 0)
Calculated, 0-9999999999
OBJSBKTIMF
CHAR(10)
Calculated, 0-999:99:99
RCVOBJRCV
CHAR(10)
User-defined value
RCVOBJLIB
CHAR(10)
User-defined value
RCVOBJCODE
CHAR(3)
See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types. 0-9999999999
RCVOBJSEQ
PACKED(10 0)
RCVOBJTSP
TIMESTAMP
RCVOBJTPH OBJRTVMIN
PACKED(15 0) PACKED(3 0)
0-999999999999999 1-99
OBJRTVACT
PACKED(3 0)
1-99
Table 129. MXDGSTS outfile (WRKDG command) Field OBJRTVMAX Description Maximum number of object retriever processes Number of object retriever entries not processed Last processed object retrieve journal code and entry type Last processed object retrieve journal sequence number Last processed object retrieve journal entry timestamp (OBJRTVDATE, OBJRTVTIME) Type of object last processed by object retrieve Qualified name of object last processed by object retrieve Minimum number of container send processes Active number of container send processes (CNRSNDPRC) Maximum number of container send processes Number of container send entries not processed Type, length PACKED(3 0) Valid values 1-99 Column headings OBJRTVPRC MAX NUMBER OF JOBS OBJRTVPRC BACKLOG OBJRTVPRC LAST TYPE AND ENTRY OBJRTVPRC LAST SEQUENCE OBJRTVPRC LAST TIMESTAMP OBJRTVPRC LAST OBJ TYPE OBJRTVPRC LAST OBJ NAME CNRSNDPRC MIN NUMBER OF JOBS CNRSNDPRC NUMBER OF JOBS CNRSNDPRC MAX NUMBER OF JOBS CNRSNDPRC BACKLOG
OBJRTVBKLG OBJRTVCODE
PACKED(15 0) CHAR(3)
0-999999999999999 See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types. 0-9999999999
OBJRTVSEQ
PACKED(10 0)
OBJRTVTSP
TIMESTAMP
OBJRTVTYPE OBJRTVOBJ
CHAR(10) CHAR(1024)
CNRSNDMIN
PACKED(3 0)
1-99
CNRSNDACT
PACKED(3 0)
1-99
CNRSNDMAX
PACKED(3 0)
1-99
CNRSNDBKLG
PACKED(15 0)
0-999999999999999
Table 129. MXDGSTS outfile (WRKDG command) Field CNRSNDNUM CNRSNDCPH CNRSNDCODE Description Number of containers sent Containers per hour Last processed container send journal code and entry type Last processed container send journal sequence number (CNRSNTSEQ) Last processed container send journal entry timestamp (CNRSNTDATE, CNTRSNTTIME) Type of object last processed by container send Qualified name of object last processed by container send Minimum number of object apply processes Active number of object apply processes (OBJAPYPRC) Maximum number of object apply processes Number of object apply entries not processed Type, length PACKED(15 0) PACKED(15 0) CHAR(3) Valid values 0-999999999999999 0-999999999999999 See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types. 0-9999999999 Column headings CNRSNDPRC NUMBER SENT CNRSNDPRC RATE CNRSNDPRC LAST TYPE AND ENTRY CNRSNDPRC LAST SEQUENCE CNRSNDPRC LAST TIMESTAMP CNRSNDPRC LAST OBJ TYPE CNRSNDPRC LAST OBJ NAME OBJAPYPRC MIN NUMBER OF JOBS OBJAPYPRC NUMBER OF JOBS OBJAPYPRC MAX NUMBER OF JOBS OBJAPYPRC BACKLOG
CNRSNDSEQ
PACKED(10 0)
CNRSNDTSP
TIMESTAMP
CNRSNDTYPE CNRSNDOBJ
CHAR(10) CHAR(1024)
OBJAPYMIN
PACKED(3 0)
1-99
OBJAPYACT
PACKED(3 0)
1-99
OBJAPYMAX
PACKED(3 0)
1-99
OBJAPYBKLG
PACKED(15 0)
Calculated, 0-999999999999999
Table 129. MXDGSTS outfile (WRKDG command) Field OBJAPYACTA Description Number of active objects Type, length PACKED(15 0) Valid values Calculated, 0-999999999999999 Column headings OBJAPYPRC ACTIVE BACKLOG OBJAPYPRC APPLIED IN TIME SLICE OBJAPYPRC BACKLOG SECONDS OBJAPYPRC BACKLOG HHH:MM:SS OBJAPYPRC RATE OBJAPYPRC LAST TYPE AND ENTRY OBJAPYPRC LAST SEQUENCE OBJAPYPRC LAST TIMESTAMP OBJAPYPRC LAST OBJ TYPE OBJAPYPRC LAST OBJ NAME RJ LINK USED BY DG
OBJAPYNUM
PACKED(15 0)
Calculated, 0-999999999999999
OBJABKTIME
Estimated time to process object entries not applied (seconds) Estimated time to process object entries not applied (HHH:MM:SS) Number of object entries applied per hour Last applied object journal code and entry type Last applied object journal sequence number (OBJAPYSEQ) Last applied object journal entry timestamp (OBJAPYDATE, OBJAPYTIME) Type of object last processed by object apply Qualified name of object last processed by object apply Remote journal (RJ) link used by data group
PACKED(10 0)
Calculated, 0-9999999999
OBJABKTIMF
CHAR(10)
Calculated, 0-999:99:99
OBJAPYTPH OBJAPYCODE
PACKED(15 0) CHAR(3)
Calculated, 0-999999999999999 See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types. 0-9999999999
OBJAPYSEQ
PACKED(10 0)
OBJAPYTSP
TIMESTAMP
OBJAPYTYPE OBJAPYOBJ
CHAR(10) CHAR(1024)
RJINUSE
CHAR(10)
*YES, *NO
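The backlog and processing-rate fields above relate in an obvious way: a backlog divided by an hourly rate gives an estimated catch-up time, which is presumably how the *BKTIME fields are derived. A hedged sketch of that arithmetic (an assumption about the derivation, not MIMIX's actual computation):

```python
def estimate_catchup_seconds(backlog: int, per_hour_rate: int) -> int:
    """Estimate seconds needed to clear a backlog of unprocessed entries
    given a processing rate in entries per hour. Assumption: the
    documented *BKTIME fields are derived along these lines."""
    if per_hour_rate <= 0:
        return 0  # no measurable rate, so no meaningful estimate
    return round(backlog * 3600 / per_hour_rate)
```

For example, a backlog of 500 entries at 1000 entries per hour yields an estimate of 1800 seconds.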
Table 129. MXDGSTS outfile (WRKDG command) Field RJSRCDFN Description RJ link source journal definition Type, length CHAR(10) Valid values User-defined journal definition name Column headings RJ LINK SOURCE JRNDFN RJ LINK SOURCE JRNDFN RJ LINK TARGET SYSTEM RJ LINK TARGET JRNDFN RJ PRIMARY RDB ENTRY RJ PRIMARY TFRDFN RJ SECONDARY RDB ENTRY RJ SECONDARY TFRDFN RJ LINK STATE
RJSRCSYS
CHAR(8)
RJTGTDFN
CHAR(10)
RJTGTSYS
CHAR(8)
RJ link primary RDB entry RJ link primary transfer definition name RJ link secondary RDB entry
User-defined or MIMIX generated RDB name User-defined transfer definition name User-defined or MIMIX generated RDB name
RJSECTFR
CHAR(10)
RJSTATE
CHAR(10)
BLANK, *FAILED, *CTLINACT, *INACTPEND, *ASYNC, *SYNC, *ASYNPEND, *SYNCPEND, *NOTBUILT, *UNKNOWN *ASYNC, *SYNC, BLANK
RJDLVRY
CHAR(10)
RJSNDPTY
PACKED(3 0)
0-99 0=*SYSDFT
Table 129. MXDGSTS outfile (WRKDG command) Field RJRDRSTS RJSMONSTS RJTMONSTS ITECNT Description RJ reader task status RJ link source monitor status RJ link target monitor status Total IFS tracking entries Type, length CHAR(10) CHAR(10) CHAR(10) PACKED(10 0) Valid values BLANK, *UNKNOWN, *ACTIVE, *INACTIVE, *THRESHOLD BLANK, *UNKNOWN, *ACTIVE, *INACTIVE BLANK, *UNKNOWN, *ACTIVE, *INACTIVE 0-999999 Column headings RJREADER STATUS RJ SOURCE MONITOR RJ TARGET MONITOR TOTAL IFS TRACKING ENTRIES ACTIVE IFS TRACKING ENTRIES INACT IFS TRACKING ENTRIES IFS TE NOT JOURNALED ON SOURCE IFS TE NOT JOURNALED ON TARGET IFS TE HELD FOR ERRORS IFS TE HELD FOR OTHER TOTAL OBJ TRACKING ENTRIES ACTIVE OBJ TRACKING ENTRIES
ITEACTIVE
PACKED(10 0)
0-999999
ITENOTACT
PACKED(10 0)
0-999999
ITENOTJRNS
IFS tracking entries not journaled on source IFS tracking entries not journaled on target IFS tracking entries held due to error IFS tracking entries held for other reasons Total object tracking entries
PACKED(10 0)
0-999999
ITENOTJRNT
PACKED(10 0)
0-999999
OTEACTIVE
PACKED(10 0)
0-999999
Table 129. MXDGSTS outfile (WRKDG command) Field OTENOTACT Description Inactive object tracking entries Type, length PACKED(10 0) Valid values 0-999999 Column headings INACT OBJ TRACKING ENTRIES OBJ TE NOT JOURNALED ON SOURCE OBJ TE NOT JOURNALED ON TARGET OBJ TE HELD FOR ERRORS OBJ TE HELD FOR OTHER JOURNAL CACHE TARGET JOURNAL CACHE SOURCE JOURNAL STATE TARGET JOURNAL STATE SOURCE JRN CACHE TARGET STATUS JRN CACHE SOURCE STATUS
OTENOTJRNS
Object tracking entries not journaled on source Object tracking entries not journaled on target
PACKED(10 0)
0-999999
OTENOTJRNT
PACKED(10 0)
0-999999
Object tracking entries held due to error PACKED(10 0) Object tracking entries held for other reasons Journal cache target PACKED(10 0) CHAR(10)
JRNCACHESA
CHAR(10)
JRNSTATETA JRNSTATESA
CHAR(10) CHAR(10)
JRNCACHETS
CHAR(10)
*ERROR, *NONE, *OK, *WARNING, *NOFEATURE, *UNKNOWN *ERROR, *NONE, *OK, *WARNING, *NOFEATURE, *UNKNOWN
JRNCACHESS
CHAR(10)
Table 129. MXDGSTS outfile (WRKDG command) Field JRNSTATETS JRNSTATESS Description Journal state target status Journal state source status Type, length CHAR(10) CHAR(10) Valid values *ERROR, *NONE, *OK, *WARNING, *NOFEATURE, *UNKNOWN *ERROR, *NONE, *OK, *WARNING, *NOFEATURE, *UNKNOWN User-defined value User-defined value Column headings JOURNAL STATE TARGET JOURNAL STATE SOURCE RJ TGT JRNRCV RJ TGT JRNRCV LIBRARY RJTGT TYPE AND ENTRY CODE RJ TGT ENTRY SEQUENCE RJ TGT ENTRY TIMESTAMP LAST OBJ RETRIEVED (UNICODE) LAST OBJ SENT (UNICODE) LAST OBJ APPLIED (UNICODE) TOTAL DB FILE ENTRIES2
RJTGTRCV RJTGTLIB
Last RJ target journal entry receiver name Last RJ target journal entry receiver library name Last RJ target journal code and entry type Last RJ target journal entry sequence number Last RJ target journal entry timestamp Qualified name of object last qualified by object retrieve - Unicode Qualified name of object last qualified by container send - Unicode Qualified name of object last qualified by object apply - Unicode Total database file entries
CHAR(10) CHAR(10)
RJTGTCOCDE
CHAR(3)
PACKED(10 0) TIMESTAMP GRAPHIC(512) VARLEN(75) CCSID(13488) GRAPHIC(512) VARLEN(75) CCSID(13488) GRAPHIC(512) VARLEN(75) CCSID(13488) PACKED(10 0)
CNRSNDUCS
OBJAPYUCS
FECNT2
0-9999999999
Table 129. MXDGSTS outfile (WRKDG command) Field FEACTIVE2 Description Active database file entries (FEACT) Type, length PACKED(10 0) Valid values 0-9999999999 Column headings ACTIVE DB FILE ENTRIES2 INACTIVE DB FILE ENTRIES2 FILES NOT JOURNALED ON SOURCE2 FILES NOT JOURNALED ON TARGET2 FILES HELD FOR ERRORS2 FILES HELD FOR OTHERS2 FILES BEING REPAIRED2 RJLNK THRESHOLD (TIME IN MIN) RJLNK THRESHOLD (NBR OF JRNE) DBSND/DBRDR THRESHOLD (TIME IN MIN) DBSND/DBRDR THRESHOLD (NBR OF JRNE)
FENOTACT2
PACKED(10 0)
0-9999999999
FENOTJRNS2
Database file entries not journaled on source Database file entries not journaled on target Database file entries held due to error
PACKED(10 0)
0-9999999999
FENOTJRNT2
PACKED(10 0)
0-9999999999
FEHLDERR2
PACKED(10 0)
0-9999999999
FEHLDOTHR2 FECMPRPR2
Database file entries held for other reasons (FEHLD) Database file entries being repaired
PACKED(10 0) PACKED(10 0)
0-9999999999 0-9999999999
RJLNKTHLDM
RJ Link Threshold Exceeded (Time in minutes) RJ Link Threshold Exceeded (Number of journal entries) DB Send/Reader Threshold Exceeded (Time in minutes) DB Send/Reader Threshold Exceeded (Number of journal entries)
PACKED(4 0)
0-9999
RJLNKTHLDE
PACKED(7 0)
0-9999999
DBRDRTHLDM
PACKED(4 0)
0-9999
DBRDRTHLDE
PACKED(7 0)
0-9999999
Table 129. MXDGSTS outfile (WRKDG command) Field DBAPYATHLD DBAPYBTHLD DBAPYCTHLD DBAPYDTHLD DBAPYETHLD DBAPYFTHLD OBJSNDTHDM Description DB Apply A Threshold Exceeded (Number of journal entries) DB Apply B Threshold Exceeded (Number of journal entries) DB Apply C Threshold Exceeded (Number of journal entries) DB Apply D Threshold Exceeded (Number of journal entries) DB Apply E Threshold Exceeded (Number of journal entries) DB Apply F Threshold Exceeded (Number of journal entries) Object Send Threshold Exceeded (Time in minutes) Object Send Threshold Exceeded (Number of journal entries) Object Retrieve Threshold Exceeded (Number of activity entries) Container Send Threshold Exceeded (Number of activity entries) Object Apply Threshold Exceeded (Number of activity entries) RJ Backlog Type, length PACKED(5 0) PACKED(5 0) PACKED(5 0) PACKED(5 0) PACKED(5 0) PACKED(5 0) PACKED(4 0) Valid values 0-99999 0-99999 0-99999 0-99999 0-99999 0-99999 0-9999 Column headings DB APPLY A THRESHOLD DB APPLY B THRESHOLD DB APPLY C THRESHOLD DB APPLY D THRESHOLD DB APPLY E THRESHOLD DB APPLY F THRESHOLD OBJSND THRESHOLD (TIME IN MIN) OBJSND THRESHOLD (NBR OF JRNE) OBJRTV THRESHOLD CNRSND THRESHOLD OBJAPY THRESHOLD RJ BACKLOG
OBJSNDTHDE
PACKED(7 0)
0-9999999
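The threshold fields above report how far each process has exceeded its configured threshold, so a monitoring script can simply flag any nonzero value. A sketch over one outfile row, assuming the row has already been fetched into a dict keyed by field name (the fetch itself, for example SQL over the MXDGSTS file, is environment-specific):

```python
def thresholds_exceeded(row: dict) -> list:
    """Return the names of threshold fields (DBAPYATHLD, RJLNKTHLDM,
    OBJSNDTHDM, ...) whose value is nonzero in a single MXDGSTS row.
    Matches on the THLD/THD naming pattern used in Table 129."""
    return sorted(
        name for name, value in row.items()
        if ("THLD" in name or "THD" in name) and value
    )
```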
(Figure: example environment showing the application SUPERAPP and the systems CHICAGO and LONDON; the illustration itself is not reproduced here.)
Table 130. MXDGOBJE outfile (WRKDGOBJE command) Field KEEPSPLF OBJRTVDLY USRPRFSTS JRNIMG OPNCLO REPTYPE Description Keep deleted spooled files Retrieve delay (Object retrieve processing) User profile status Journal image (File entry options) Omit open and close entries (File entry options) Replication type (File entry options) Lock member during apply (File entry options) Apply session (File entry options) Type, length CHAR(10) CHAR(10) CHAR(10) CHAR(10) CHAR(10) CHAR(10) Valid values *YES, *NO 0-999, *DGDFT *DGDFT, *DISABLED, *ENABLED, *SRC, *TGT *DGDFT, *AFTER, *BOTH *DGDFT, *YES, *NO *DGDFT, *POSITION, *KEYED Column headings KEEP DLTD SPOOLED FILES OBJRTVPRC DELAY USER PROFILE STATUS FEOPT JOURNAL IMAGE FEOPT OMIT OPEN CLOSE FEOPT REPLICATION TYPE FEOPT LOCK MBR ON APPLY FEOPT CURRENT APYSSN FEOPT COLLISION RESOLUTION FEOPT DISABLE TRIGGERS FEOPT PROCESS TRIGGERS FEOPT PROCESS CONSTRAINTS SYSTEM 1 LIBRARY ASP
APYLOCK APYSSN
CHAR(10) CHAR(10)
CRCLS
Collision resolution (File entry options) Disable triggers during apply (File entry options) Process trigger entries (File entry options) Process constraint entries (File entry options) System 1 library ASP number
CHAR(10)
User-defined name, *DGDFT, *HLDERR, *AUTOSYNC *YES, *NO, *DGDFT *YES, *NO, *DGDFT
DSBTRG PRCTRG
CHAR(10) CHAR(10)
PRCCST
CHAR(10)
*YES
LIB1ASP
PACKED(3,0)
Table 130. MXDGOBJE outfile (WRKDGOBJE command) Field LIB1ASPD Description System 1 library ASP device (File entry options) System 2 library ASP number System 2 library ASP device (File entry options) Number of omit content (OMTDTA) values Omit content values (File entry options) Spooled file options Number of cooperating object types Cooperating object types Number of attribute options Attribute options Type, length CHAR(10) Valid values *LIB1ASP, User-defined name Column headings SYSTEM 1 LIBRARY ASP DEV SYSTEM 2 LIBRARY ASP SYSTEM 2 LIBRARY ASP DEV NUMBER OF OMIT CONTENT VALUES OMIT CONTENT SPOOLED FILE OPTIONS NUMBER OF COOPERATING OBJECT TYPES COOPERATING OBJECT TYPES NUMBER OF ATTRIBUTE ATTRIBUTE
LIB2ASP LIB2ASPD
PACKED(3,0) CHAR(10)
NBROMTDTA
PACKED(3 0)
1-10
*NONE, *FILE, *MBR (10 characters each) *NONE, *HLD, *HLDONSAV 0-999
RCVTSP
TIMESTAMP
RECEIVE TIMESTAMP
APYTSP
TIMESTAMP
APPLY TIMESTAMP
Table 131. MXDGTSP outfile (WRKDGTSP command)

CRTSNDET: Elapsed time between create and send process (milliseconds). Type, length: PACKED(10 0). Valid values: Calculated, 0-9999999999. For non-remote journaling, this is the elapsed time between generation of the timestamps and the time the entry is received by the MIMIX send process on the target system; for remote journaling, the create and send times are set equal, so the elapsed time is 0. Column heading: SEND ELAPSED TIME.
SNDRCVET: Elapsed time between send and receive process (milliseconds). Type, length: PACKED(10 0). Valid values: Calculated, 0-9999999999 (elapsed time between the send time and the receive time).
RCVAPYET: Elapsed time between receive and apply process (milliseconds). Type, length: PACKED(10 0). Valid values: Calculated, 0-9999999999 (elapsed time between the receive time and the apply time).
CRTAPYET: Elapsed time between create and apply timestamps (milliseconds). Type, length: PACKED(10 0). Valid values: Calculated, 0-9999999999 (elapsed time from generation of the timestamp to the time the journal entry is applied on the target system).
SYSTDIFF: The time differential between the source and target systems, where time differential = source time - target time. Type, length: PACKED(10 0). Valid values: -9999999999 to 9999999999. Column heading: TIME DIFFERENCE.
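SYSTDIFF follows the sign convention of source time minus target time, so a positive value means the source system's clock runs ahead of the target's. A sketch of that convention; the units here are assumed to be milliseconds to match the table's other elapsed-time fields, since the table itself leaves the unit unstated:

```python
from datetime import datetime

def system_time_differential(source: datetime, target: datetime) -> int:
    """SYSTDIFF convention: source time minus target time. Positive means
    the source clock is ahead of the target clock. Millisecond units are
    an assumption, not stated by Table 131."""
    return round((source - target).total_seconds() * 1000)
```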
Journal receiver prefix (Journal receiver prefix) Journal receiver library (Journal receiver prefix) Journal receiver library ASP
CHGMGT
CHAR(20)
THRESHOLD
PACKED(7 0)
Table 132. MXJRNDFN outfile (WRKJRNDFN command) Field RCVTIME RESETTHLD Description Time of day to change receiver Reset sequence threshold Type, length ZONED(6 0) PACKED(5 0) Valid values Time 10-1000000 Column headings RECEIVER CHANGE TIME RESET SEQUENCE THRESHOLD RECEIVER DELETE MANAGEMEN T KEEP UNSAVED JRNRCV KEEP JRNRCV COUNT KEEP JRNRCV (DAYS) DESCRIPTION JRNRCV ASP MSGQ THRESHOLD MSGQ MSGQ THRESHOLD MSGQ LIBRARY RJ LINK EXIT PROGRAM EXIT PROGRAM LIBRARY
DLTMGT
CHAR(10)
*YES, *NO
KEEPUNSAV
CHAR(10)
*YES, *NO
Keep journal receiver (days) Journal receiver ASP Description Journal receiver ASP Threshold message queue
0-999 0-999 *BLANK, User-defined text Numeric value (0 = *LIBASP) User-defined name, *JRNDFN
MSGQLIB
CHAR(10)
*JRNLIB, user-defined name (See field JRNLIB if this field contains *JRNLIB)
Table 132. MXJRNDFN outfile (WRKJRNDFN command) Field MINENTDTA REQTHLDSIZ Description Minimal journal entry data Requested threshold size Type, length CHAR(100) PACKED(7 0) Valid values Array of 10 CHAR(10) fields *DTAARA, *FLDBDY, *FILE, *NONE Numeric value Column headings MIN JRN ENTRY DATA REQUESTED THRESHOLD SIZE SAVE TYPE JOURNALING LAG LIMIT (SEC) *JRNLIBASP, user-defined name JOURNAL LIBRARY ASP DEV JRNRCV LIBRARY ASP DEV TARGET JOURNAL STATE JOURNAL CACHING
SAVTYPE JRNLAGLMT
CHAR(10) PACKED(3 0)
JRNLIBASPD
CHAR(10)
RCVLIBASPD
CHAR(10)
TGTSTATE
CHAR(10)
*ACTIVE, *STANDBY
JRNCACHE
CHAR(10)
SRCSYS SRCJEJRNA
CHAR(8) DEC(3)
System name "0 = *CRTDFT -1 = *ASPDEV *JRNLIBASP, *ASPDEV, ASP Primary Group name "0 = *CRTDFT -1 = *ASPDEV *RCVLIBASP, *ASPDEV, ASP Primary Group name Journal definition name
SRCJEJLAD
Source Journal Library ASP Device Source Journal Receiver Library ASP Source Journal Receiver Library ASP Device Journal definition name on target Target system name of journal definition Target Journal Library ASP
CHAR(10)
SRCJERCVA
DEC(3)
SRCJERLAD
CHAR(10)
TGTJRNDFN
CHAR(10)
TGTSYS TGTJEJRNA
CHAR(8) DEC(3)
Table 133. MXRJLNK outfile (WRKRJLNK command) Field TGTJEJLAD Description Target Journal Library ASP Device Target Journal Receiver Library ASP Target Journal Receiver Library ASP Device Delivery mode of remote journaling Remote journal state Type, length CHAR(10) Valid values *JRNLIBASP, *ASPDEV, ASP Primary Group name "0 = *CRTDFT -1 = *ASPDEV *RCVLIBASP, *ASPDEV, ASP Primary Group name *ASYNC, *SYNC, blank *ASYNC, *ASYNCPEND, *SYNC, *SYNCPEND, *INACTIVE, *CTLINACT, *FAILED, *NOTBUILT, *UNKNOWN Transfer definition name, *SYSDFN Transfer definition name, *SYSDFN, *NONE 0=*SYSDFN, 1-99 Plain text Column headings TGT JRN LIBRARY ASP DEV TGT JRNRCV LIBRARY ASP TGT JRNRCV LIBRARY ASP DEV RJ MODE (DELIVERY) STATE
TGTJERCVA
DEC(3)
TGTJERLAD
CHAR(10)
RJMODE RJSTATE
CHAR(10) CHAR(10)
Primary transfer definition Secondary transfer definition Async process priority Text description
Primary message queue (Primary message handling) Primary message queue library (Primary message handling) Primary message queue severity (Primary message handling) Primary message queue severity number (Primary message handling) Primary message queue information level (Primary message handling) Secondary message queue (Secondary message handling)
PRIMARY MSGQ PRIMARY MSGQ LIB PRIMARY MSGQ SEV PRIMARY MSGQ SEV NBR PRIMARY MSGQ INFO LEVEL SECONDARY MSGQ
PRIINFLVL
CHAR(10)
*SUMMARY, *ALL
SECMSGQ
CHAR(10)
User-defined name
Table 134. MXSYSDFN outfile (WRKSYSDFN command)

SECMSGQLIB: Secondary message queue library (Secondary message handling). Type, length: CHAR(10). Valid values: User-defined name, *LIBL. Column heading: SECONDARY MSGQ LIB.
SECSEV: Secondary message queue severity (Secondary message handling). Type, length: CHAR(10). Valid values: *SEVERE, *INFO, *WARNING, *ERROR, *TERM, *ALERT, *ACTION, 0-99. Column heading: SECONDARY MSGQ SEV.
SECSEVNBR: Secondary message queue severity number (Secondary message handling). Type, length: PACKED(3 0). Valid values: 0-99. Column heading: SECONDARY MSGQ SEV NBR.
SECINFLVL: Secondary message queue information level (Secondary message handling). Type, length: CHAR(10). Valid values: *SUMMARY, *ALL (Refer to the TFRSYS1 field if this field contains *SYS1). Column heading: SECONDARY MSGQ INFO LEVEL.
TEXT: Description. Type, length: CHAR(50). Valid values: *BLANK, user-defined text. Column heading: DESCRIPTION.
JRNMGRDLY: Journal manager delay (seconds). Type, length: PACKED(3 0). Valid values: 5-900. Column heading: JRNMGR DELAY (SEC).
SYSMGRDLY: System manager delay (seconds). Type, length: PACKED(3 0). Valid values: 5-900. Column heading: SYSMGR DELAY (SEC).
OUTQ: Output queue (Output queue). Type, length: CHAR(10). Valid values: User-defined name. Column heading: OUTQ.
OUTQLIB: Output queue library (Output queue). Type, length: CHAR(10). Valid values: User-defined name. Column heading: OUTQ LIBRARY.
HOLD: Hold on output queue. Type, length: CHAR(10). Valid values: *YES, *NO. Column heading: HOLD ON OUTQ.
SAVE: Save on output queue. Type, length: CHAR(10). Valid values: *YES, *NO. Column heading: SAVE ON OUTQ.
KEEPSYSHST: Keep system history (days). Type, length: PACKED(3 0). Valid values: 1-365. Column heading: KEEP SYS HISTORY (DAYS).
KEEPDGHST: Keep data group history (days). Type, length: PACKED(3 0). Valid values: 1-365. Column heading: KEEP DG HISTORY (DAYS).
KEEPMMXDTA: Keep MIMIX data (days). Type, length: PACKED(3 0). Valid values: 1-365, 0 = *NOMAX. Column heading: KEEP MIMIX DATA (DAYS).
DTALIBASP: MIMIX data library ASP. Type, length: PACKED(3 0). Valid values: Numeric value, 0 = *CRTDFT. Column heading: MIMIX DATA LIB ASP.
Table 134. MXSYSDFN outfile (WRKSYSDFN command)

DSKSTGLMT: Disk storage limit (GB). Type, length: PACKED(5 0). Valid values: 1-9999, 0 = *NOMAX. Column heading: DISK STORAGE LIMIT (GB).
SBMUSR: User profile for submit job. Type, length: CHAR(10). Valid values: *JOBD, *CURRENT. Column heading: USRPRF FOR SUBMIT JOB.
MGRJOBD: Manager job description (Manager job description). Type, length: CHAR(10). Valid values: User-defined name. Column heading: MANAGER JOBD.
MGRJOBDLIB: Manager job description library (Manager job description). Type, length: CHAR(10). Valid values: User-defined name. Column heading: MANAGER JOBD LIBRARY.
DFTJOBD: Default job description (Default job description). Type, length: CHAR(10). Valid values: User-defined name. Column heading: DEFAULT JOBD.
DFTJOBDLIB: Default job description library (Default job description). Type, length: CHAR(10). Valid values: User-defined name. Column heading: DEFAULT JOBD LIBRARY.
PRDLIB: MIMIX product library. Type, length: CHAR(10). Valid values: User-defined name. Column heading: MIMIX PRODUCT LIBRARY.
RSTARTTIME: Job restart time. Type, length: CHAR(8). Valid values: 000000 - 235959, *NONE (values are returned left-justified). Column heading: RESTART TIME.
KEEPNEWNFY: Keep new notification (days). Type, length: PACKED(3 0). Valid values: 1-365, 0 = *NOMAX. Column heading: KEEP NEW NFY (DAYS).
KEEPACKNFY: Keep acknowledged notification (days). Type, length: PACKED(3 0). Valid values: 1-365, 0 = *NOMAX. Column heading: KEEP ACK NFY (DAYS).
ASPGRP: ASP Group. Type, length: CHAR(10). Valid values: *NONE, User-defined name. Column heading: ASP GROUP.
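RSTARTTIME is returned as a left-justified hhmmss string in a CHAR(8) field, or *NONE. A small parsing sketch (`parse_restart_time` is a hypothetical helper):

```python
def parse_restart_time(value: str):
    """Parse the RSTARTTIME field: 'hhmmss' left-justified in CHAR(8),
    or '*NONE'. Returns an (hour, minute, second) tuple, or None for
    *NONE (hypothetical helper)."""
    value = value.strip()
    if value == "*NONE" or not value:
        return None
    return (int(value[0:2]), int(value[2:4]), int(value[4:6]))
```

For example, `parse_restart_time("033000  ")` returns `(3, 30, 0)`.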
NETID2
CHAR(8)
Table 135. MXTFRDFN outfile (WRKTFRDFN command) Field MODE TEXT THLDSIZE RDB RDBSYS1 RDBSYS2 MNGRDB Description SNA mode Description Reset sequence threshold Relational database System 1 Relational database name System 2 Relational database name Manage RDB Directory Entries Indicator Transfer definition short name Type, length CHAR(8) CHAR(50) PACKED(7 0) CHAR(18) CHAR(18) CHAR(18) CHAR(10) Valid values User-defined name, *NETATR *BLANK, user-defined text 0-9999999 *GEN, user-defined name *SYS1, User-defined name *SYS2, User-defined name *DFT, *YES, *NO Column headings SNA MODE DESCRIPTION THRESHOLD SIZE RELATIONAL DATABASE RELATIONAL DATABASE RELATIONAL DATABASE MANAGE DIRECTORY ENTRIES TFRDFN SHORT NAME
TFRSHORTN
CHAR(4)
Name
PRCSYS TYPE
CHAR(10) CHAR(10)
*ANY, *BACKUP, *PRIMARY, *REPLICATE, user-defined name *ANY, *CRGADDNOD, *CRGCHG, *CRGCRT, *CRGDLT, *CRGDLTCMD, *CRGEND, *CRGENDNOD, *CRGFAIL, *CRGREJOIN, *CRGRESTR, *CRGRMVNOD, *CRGSTR, *CRGSWT, *CRGUNDO, User-defined value User-defined name User-defined value
PRDLIB TEXT
CHAR(10) CHAR(50)
Table 137. MZPRCE outfile (WRKPRCE command) Field OPERAND1 Description Compare operand 1 Type, length CHAR(10) Valid values BLANK, *ACTCODE, *APPCRGSTS, *BCKNOD1, *BCKNOD2, *BCKNOD3, *BCKNOD4, *BCKNOD5, *BCKSTS1, *BCKSTS2, *BCKSTS3, *BCKSTS4, *BCKSTS5, *CHGNOD, *CHGROLE, *CLUNAME, *CRGNAME, *CRGTYPE, *DTACRGSTS, *ENDOPT, *LCLNOD, *LCLPRVROL, *LCLPRVSTS, *LCLROLE, *LCLSTS, *NODCNT, *PRDLIB, *PRINOD,*PRIPRVROL, *PRIPRVSTS, *PRISTS, *PRVACTCDE, *PRVROL1, *PRVROL2, *PRVROL3, *PRVROL4, *PRVROL5, *PRVSTS1, *PRVSTS2, *PRVSTS3, *PRVSTS4, *PRVSTS5, *REPNOD1, *REPNOD2, *REPNOD3, *REPNOD4, *REPNOD5, *REPSTS1, *REPSTS2, *REPSTS3, *REPSTS4, *REPSTS5, *ROLETYPE, User-defined type Column headings COMPARE OPERAND1
OPERATOR OPERAND2
CHAR(10) CHAR(10) BLANK, *ACTCODE, *APPCRGSTS, *BCKNOD1, *BCKNOD2, *BCKNOD3, *BCKNOD4, *BCKNOD5, *BCKSTS1, *BCKSTS2, *BCKSTS3, *BCKSTS4, *BCKSTS5, *CHGNOD, *CHGROLE, *CLUNAME, *CRGNAME, *CRGTYPE, *DTACRGSTS, *ENDOPT, *LCLNOD, *LCLPRVROL, *LCLPRVSTS, *LCLROLE, *LCLSTS, *NODCNT, *PRDLIB, *PRINOD, *PRIPRVROL, *PRIPRVSTS, *PRISTS, *PRVACTCDE, *PRVROL1, *PRVROL2, *PRVROL3, *PRVROL4, *PRVROL5, *PRVSTS1, *PRVSTS2, *PRVSTS3, *PRVSTS4, *PRVSTS5, *REPNOD1, *REPNOD2, *REPNOD3, *REPNOD4, *REPNOD5, *REPSTS1, *REPSTS2, *REPSTS3, *REPSTS4, *REPSTS5, *ROLETYPE, User-defined type BLANK, user-defined value
CMD
Command details
CHAR(1000)
COMMAND DETAILS
Table 137. MZPRCE outfile (WRKPRCE command) Field ACTLBL RTNVAL COMMENT Description Action label Return value Comment text Type, length CHAR(10) CHAR(10) CHAR(50) Valid values BLANK, user-defined value *FAIL, *SUCCESS BLANK, user-defined value Column headings ACTION LABEL RETURN VALUE COMMENT TEXT
FID1
FID1HEX OBJ2
FID2
FID2HEX CCSID
CHAR(32) BIN(5 0)
i5/OS-defined file identifier. CCSID: defaults to the job CCSID. If the job CCSID is 65535 or the data cannot be converted to the job CCSID, the OBJ1 and OBJ2 values remain in Unicode.
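The GRAPHIC ... CCSID(13488) object-name fields hold UCS-2 data, and as noted above the values stay in Unicode when the job CCSID is 65535 or conversion fails. If you pull the raw bytes off-platform, CCSID 13488 decodes as big-endian UTF-16 (a sketch under that off-platform assumption; on the system itself the database performs CCSID conversion):

```python
def decode_ucs2_name(raw: bytes) -> str:
    """Decode a GRAPHIC CCSID(13488) object name as UTF-16BE (UCS-2 is a
    subset of UTF-16), stripping trailing NUL padding. Hypothetical
    helper for off-platform processing of the raw outfile bytes."""
    return raw.decode("utf-16-be").rstrip("\x00")
```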
Table 138. MXDGIFSTE outfile (WRKDGIFSTE command) Field OBJ1CVT Description System 1 object name (converted to job CCSID) System 2 object name (converted to job CCSID) Object type Entry status Journaled on system 1 Journaled on system 2 Apply session Type, length CHAR(512) VARLEN(75) CHAR(512) VARLEN(75) CHAR(10) CHAR(10) CHAR(10) CHAR(10) CHAR(10) Valid values User-defined name converted using CCSID value. Zero length if conversion not possible. User-defined name converted using CCSID value. Zero length if conversion not possible. *DIR, *STMF, *SYMLNK *ACTIVE, *HLD, *HLDERR, *HLDIGN, *HLDRNM, *RLSWAIT *YES, *NO *YES, *NO A (only supported apply session) Column headings SYSTEM 1 IFS OBJECT CONVERTED SYSTEM 2 IFS OBJECT CONVERTED OBJECT TYPE CURRENT STATUS SYSTEM 1 JOURNALED SYSTEM 2 JOURNALED APPLY SESSION
OBJ2CVT
Table 139. MXDGOBJTE outfile (WRKDGOBJTE command) Field OBJ1APY Description System 1 object (known by apply) Type, length CHAR(10) Valid values User-defined name Column headings SYSTEM 1 OBJECT (APPLY) SYSTEM 1 LIBRARY (APPLY) SYSTEM 2 OBJECT (APPLY) SYSTEM 2 LIBRARY (APPLY)
LIB1APY
CHAR(10)
User-defined name
OBJ2APY
CHAR(10)
User-defined name
LIB2APY
CHAR(10)
User-defined name
Notices
Copyright 1999, 2008, Lakeview Technology Inc., All rights reserved. This document may not be copied, reproduced, translated, or transmitted in whole or part, except under license of Lakeview Technology Inc. MIMIX is a registered trademark of Lakeview Technology Inc. MIMIX AutoGuard, MIMIX AutoNotify, MIMIX Availability Manager, MIMIX ha1, MIMIX ha Lite, MIMIX DB2 Replicator, MIMIX Object Replicator, MIMIX Monitor, MIMIX Promoter, IntelliStart, RJ Link, and MIMIX Switch Assistant are trademarks of Lakeview Technology Inc. AS/400, DB2, eServer, i5/OS, IBM, iSeries, OS/400, Power, System i, and WebSphere are trademarks of International Business Machines Corporation. All other trademarks are the property of their respective owners. Lakeview Technology Inc. is an IBM Business Partner. If you are an entity of the U.S. government, you agree that this documentation and the program(s) referred to in this document are Commercial Computer Software, as defined in the Federal Acquisition Regulations (FAR), and the DoD FAR Supplement, and are delivered with only those rights set forth within the license agreement for such documentation and program(s). Use, duplication or disclosure by the Government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFAR 252.227-7013 (48 CFR) or subparagraphs (c)(1) & (2) of the Commercial Computer Software - Restricted Rights clause at FAR 52.227-19. The information in this document is subject to change without notice. Lakeview Technology Inc. makes no warranty of any kind regarding this material and assumes no responsibility for any errors that may appear in this document. The program(s) referred to in this document are not specifically developed, or licensed, for use in any nuclear, aviation, mass transit, or medical application or in any other inherently dangerous applications, and any such use shall remove Lakeview Technology Inc. from liability. 
Lakeview Technology Inc. shall not be liable for any claims or damages arising from such use of the Program(s) for any such applications. Examples and Example Programs: This book contains examples of reports and data used in daily operation. To illustrate them as completely as possible the examples may include names of individuals, companies, brands, and products. All of these names are fictitious. Any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. This book contains small programs that are furnished by Lakeview Technology Inc. as simple examples to provide an illustration. These examples have not been thoroughly tested under all conditions. Lakeview Technology, therefore, cannot guarantee or imply reliability, serviceability, or function of these example programs. All programs contained herein are provided to you AS IS. THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE EXPRESSLY DISCLAIMED.
Lakeview Technology Inc., 1901 South Meyers, Suite 600, Oakbrook Terrace, IL 60181 USA. www.lakeviewtech.com Phone: 630-282-8100 Fax: 630-282-8500
Index
Symbols
*FAILED activity entry 43
*HLD, files on hold 103
*HLDERR, held due to error 381
*HLDERR, hold error status 77
*MSGQ, maintaining private authorities 104

A

access paths, journaling 220
access types (file) for T-ZC entries 387
accessing MIMIX Main Menu 91
active server technology 440
additional resources 17
advanced journaling
  add to existing data group 85
  apply session balancing 87
  benefits 72
  conversion examples 86
  convert data group to 85
  ending journaling 331, 335
  loading tracking entries 284
  planning for 85
  replication process 73
  serialized transactions with database 85
  starting journaling 330, 334
advanced journaling, data areas and data queues
  synchronizing 505
  verifying journaling 336
advanced journaling, IFS objects
  file IDs (FIDs) 312
  journal receiver size 213
  restrictions 121
  synchronizing 505
  verifying journaling 332
advanced journaling, large objects (LOBs)
  journal receiver size 213
  synchronizing 476
APPC/SNA, configuring 163
apply session
  constraint induced changes 371
  default value 240
  specifying 236
apply session, database
  load balancing 87
ASP
  basic 565
  concepts 564
  group 565
  independent 565
  independent, benefits 564
  independent, configuration tips 568
  independent, configuring 568
  independent, configuring IFS objects 569
  independent, configuring library-based objects 569
  independent, effect on library list 570
  independent, journal receiver considerations 569
  independent, limitations 567
  independent, primary 565
  independent, replication 563
  independent, requirements 567
  independent, restrictions 567
  independent, secondary 565
  SYSBAS 563
  system 564
  user 565
asynchronous delivery 65
attributes, supported
  CMPDLOA command 606
  CMPFILA command 591
  CMPIFSA command 604
  CMPOBJA command 596
audit results
  #DGFE rule 580, 630
  #DLOATR rule 606, 632
  #DLOATR rule, ASP attributes 612
  #FILATR rule 591, 634
  #FILATR rule, ASP attributes 612
  #FILATR rule, journal attributes 608
  #FILATRMBR rule 591, 634
  #FILATRMBR rule, ASP attributes 612
  #FILATRMBR rule, journal attributes 608
  #FILDTA rule 582, 636
  #IFSATR rule 604, 644
  #IFSATR rule, ASP attributes 612
  #IFSATR rule, journal attributes 608
  #MBRRCDCNT rule 582, 640
  #OBJATR rule 596, 647
  #OBJATR rule, ASP attributes 612
  #OBJATR rule, journal attributes 608
  #OBJATR rule, user profile password attribute 619
  #OBJATR rule, user profile status attribute 615
  interpreting 573, 575, 576
  interpreting, attribute comparisons 586
  interpreting, file data comparisons 582
  timestamp difference 129
  troubleshoot 578
auditing and reporting, compare commands
  DLO attributes 434
  file and member attributes 425
  file data using active processing 464
  file data using subsetting options 467
  file data with repair capability 458
  file data without active processing 455
  files on hold 461
  IFS object attributes 431
  object attributes 428
audits 487
  job log 578
authorities, private 104
automation 510
autostart job entry 190
  changing 191
  configuring 190
  identifying 191
B
backlog comparing file data restriction 442 backup system 23 restricting access to files 240 basic ASP 565 batch output 527 benefits independent ASPs 564 LOB replication 107 bi-directional data flow 361 broadcast configuration 68
C
candidate objects defined 400 cascade configuration 68 cascading distributions, configuring 365 catchup mode 63 change management journal receivers 202 overview 37 remote journal environment 37
changing RJ link 227 startup programs, remote journaling 305 changing from RJ to MIMIX processing permanently 229 temporarily 228 checklist convert *DTAARA, *DTAQ to user journaling 154 convert IFS objects to user journaling 154 converting to remote journaling 147 copying configuration data 553 legacy cooperative processing 157 manual configuration (source-send) 143 MIMIX Dynamic Apply 150 new preferred configuration 139 pre-configuration 81 collision points 511 collision resolution 511 default value 240 requirements 382 working with 381 commands changing defaults 537 displaying a list of 528 commands, by mnemonic ADDDGDAE 290 ADDMSGLOGE 521 ADDRJLNK 225 CHGDGDAE 290 CHGJRNDFN 217 CHGRJLNK 227 CHGSYSDFN 171 CHGTFRDFN 186 CHKDGFE 303, 580 CLOMMXLST 536 CMPDLOA 420 CMPFILA 420 CMPFILDTA 440, 455 CMPIFSA 420 CMPOBJA 420 CMPRCDCNT 437 CPYCFGDTA 552 CPYDGDAE 291 CPYDGFE 291 CPYDGIFSE 291 CRTCRCLS 383 CRTDGDFN 247, 251 CRTJRNDFN 215 CRTSYSDFN 170
CRTTFRDFN 184 DLTCRCLS 384 DLTDGDFN 256 DLTJRNDFN 256 DLTSYSDFN 256 DLTTFRDFN 256 DSPDGDAE 293 DSPDGFE 293 DSPDGIFSE 293 ENDJRNFE 327 ENDJRNIFSE 331 ENDJRNOBJE 335 ENDJRNPF 327 LODDGDAE 289 LODDGFE 272 LODDGOBJE 268 MIMIX 91 OPNMMXLST 536 RMVDGDAE 292 RMVDGFE 292 RMVDGFEALS 292 RMVDGIFSE 292 RMVRJCNN 231 RUNCMD 529 RUNCMDS 529 SETDGAUD 297 SETIDCOLA 373 SNDNETDLO 509 SNDNETIFS 508 SNDNETOBJ 475, 506 STRJRNFE 326 STRJRNIFSE 330 STRJRNOBJE 334 STRMMXMGR 296 STRSVR 189 SWTDG 25 SYNCDFE 473 SYNCDGACTE 473, 479 SYNCDGFE 480, 489 SYNCDLO 472, 478, 499 SYNCIFS 472, 478, 495, 505 SYNCOBJ 472, 478, 491, 505 VFYCMNLNK 194, 195 VFYJRNFE 328 VFYJRNIFSE 332 VFYJRNOBJE 336 VFYKEYATR 359 WRKCRCLS 383 WRKDGDAE 289, 291 WRKDGDFN 255
WRKDGDLOE 291 WRKDGFE 291 WRKDGIFSE 291 WRKDGOBJE 291 WRKJRNDFN 255 WRKRJLNK 310 WRKSYSDFN 255 WRKTFRDFN 255 commands, by name Add Data Group Data Area Entry 290 Add Message Log Entry 521 Add Remote Journal Link 225 Change Data Group Data Area Entry 290 Change Journal Definition 217 Change RJ Link 227 Change System Definition 171 Change Transfer Definition 186 Check Data Group File Entries 303, 580 Close MIMIX List 536 Compare DLO Attributes 420 Compare File Attributes 420 Compare File Data 440, 455 Compare IFS Attributes 420 Compare Object Attributes 420 Compare Record Counts 437 Copy Configuration Data 552 Copy Data Group Data Area Entry 291 Copy Data Group File Entry 291 Copy Data Group IFS Entry 291 Create Collision Resolution Class 383 Create Data Group Definition 247, 251 Create Journal Definition 215 Create System Definition 170 Create Transfer Definition 184 Delete Collision Resolution Class 384 Delete Data Group Definition 256 Delete Journal Definition 256 Delete System Definition 256 Delete Transfer Definition 256 Display Data Group Data Area Entry 293 Display Data Group File Entry 293 Display Data Group IFS Entry 293 End Journal Physical File 327 End Journaling File Entry 327 End Journaling IFS Entries 331 End Journaling Obj Entries 335 Load Data Group Data Area Entries 289 Load Data Group File Entries 272 Load Data Group Object Entries 268 MIMIX 91
Open MIMIX List 536 Remove Data Group Data Area Entry 292 Remove Data Group File Entry 292 Remove Data Group IFS Entry 292 Remove Remote Journal Connection 231 Run Command 529 Run Commands 529 Send Network DLO 509 Send Network IFS 508 Send Network Object 506 Send Network Objects 475 Set Data Group Auditing 297 Set Identity Column Attribute 373 Start Journaling File Entry 326 Start Journaling IFS Entries 330 Start Journaling Obj Entries 334 Start Lakeview TCP Server 189 Start MIMIX Managers 296 Switch Data Group 25 Synchronize Data Group Activity Entry 479 Synchronize Data Group File Entry 480, 489 Synchronize DG Activity Entry 473 Synchronize DG File Entry 473 Synchronize DLO 472, 478, 499 Synchronize IFS 478 Synchronize IFS Object 472, 495, 505 Synchronize Object 472, 478, 491, 505 Verify Communications Link 194, 195 Verify Journaling File Entry 328 Verify Journaling IFS Entries 332 Verify Journaling Obj Entries 336 Verify Key Attributes 359 Work with Collision Resolution Classes 383 Work with Data Group Data Area Entries 289, 291 Work with Data Group Definition 255 Work with Data Group DLO Entries 291 Work with Data Group File Entries 291 Work with Data Group IFS Entries 291 Work with Data Group Object Entries 291 Work with Journal Definition 255 Work with RJ Links 310 Work with System Definition 255 Work with Transfer Definition 255 commands, run on remote system 529 commit cycles effect on audit comparison 582, 583 effect on audit results 587 policy effect on compare record count 351 commitment control 107
#MBRRCDCNT audit performance 351 journal standby state, journal cache 341, 344 journaled IFS objects 73 communications APPC/SNA 163 configuring system level 159 job names 48 native TCP/IP 159 OptiConnect 163 protocols 159 starting TCP server 189 compare commands completion and escape messages 514 outfile formats 419 report types and outfiles 418 spooled files 418 comparing DLO attributes 434 file and member attributes 425 IFS object attributes 431 object attributes 428 when file content omitted 389 comparing attributes attributes to compare 422 overview 420 supported object attributes 421, 445 comparing file data 440 active server technology 440 advanced subsetting 451 allocated and not allocated records 442 comparing a random sample 451 comparing a range of records 448 comparing recently inserted data 448 comparing records over time 451 data correction 440 first and last subset 453 interleave factor 451 keys, triggers, and constraints 443 multi-threaded jobs 441 number of subsets 451 parallel processing 441 processing with DBAPY 441, 461 referential integrity considerations 444 repairing files in *HLDERR 441 restrictions 441 security considerations 442 thread groups 450 transfer definition 450 transitional states 441 using active processing 464
using subsetting options 467 wait time 450 with repair capability 458 with repair capability when files are on hold 461 without active processing 455 comparing file record counts 437 configuration additional supporting tasks 294 auditing 580 copying existing data 558 configuring advanced replication techniques 353 bi-directional data flow 361 cascading distributions 365 choosing the correct checklist 137 classes, collision resolution 383 data areas and data queues 112 DLO documents and folders 124 file routing, file combining 363 for improved performance 337 IFS objects 118 independent ASP 568 Intra communications 560, 561 job restart time 313 keyed replication 356 library-based objects 100 message queue objects for user profiles 104 omitting T-ZC journal entry content 388 spooled file replication 102 to replicate SQL stored procedures 393 unique key replication 356 configuring, collision resolution 382 confirmed journal entries 64 considerations journal for independent ASP 569 what to not replicate 83 constraints *CST attribute for CMPFILA 591 apply session for dependent files 371 auditing with CMPFILA 420 CMPFILA file-specific attribute 591 comparing file data 443 omit content and legacy cooperative processing 389 referential integrity considerations 444 requirements 370 requirements when synchronizing 481 restrictions with high availability journal performance enhancements 344
support 370 when journal is in standby state 341 constraints, physical files with apply session ignored 111 configuring 107 legacy cooperative processing 111 constraints, referential 111 contacting Lakeview Technology 19 container send process 56 defaults 243 description 54 threshold 243 contextual transfer definitions considerations 183 RJ considerations 182 continuous mode 63 conventions product 14 publications 14 convert data group to advanced journaling 154 COOPDB (Cooperate with database) 113, 120 cooperative journal (COOPJRN) behavior 106 cooperative processing and omitting content 389 configuring files 105 file, preferred method for 50 introduction 50 journaled objects 51 legacy 51 legacy limitations 111 MIMIX Dynamic Apply limitations 110 cooperative processing, legacy limitations 111 requirements and limitations 111 COOPJRN 106 COOPJRN (Cooperative journal) 236 COOPTYPE (Cooperating object types) 113 copying data group entries 291 definitions 255 create operation, how replicated 129 customer support 19 customizing 510 replication environment 511
D
data area restrictions of journaled 113
data areas journaling 72 polling interval 238 polling process 77 synchronizing an object tracking entry 505 data distribution techniques 361 data group 24 convert to remote journaling 147 database only 110 determining if RJ link used 310 ending 40, 67 RJ link differences 67 sharing an RJ link 66 short name 234 starting 40 switching 24 switching, RJ link considerations 70 timestamps, automatic 237 type 235 data group data area entry 289 adding individual 290 loading from a library 289 data group definition 35, 233 creating 247 parameter tips 234 data group DLO entry 287 adding individual 288 loading from a folder 287 data group entry 401 defined 93 description 24 object 267 procedures for configuring 265 data group file entry 272 adding individual 278 changing 279 loading from a journal definition 276 loading from a library 275, 276 loading from FEs from another data group 277 loading from object entries 273 sources for loading 272 data group IFS entry 282 with independent ASPs 569 data group object entry adding individual 268 custom loading 267 independent ASP 569 with independent ASP 569
data library 34, 168 data management techniques 361 data queue restrictions of journaled 113 data queues journaling 72 synchronizing journaled objects 505 data source 234 database apply serialization 85 with compare file data (CMPFILDTA) 441, 461 database apply process 76 description 66 threshold warning 241 database reader process 66 description 66 threshold 241 database receive process 76 database send process 76 description 76 filtering 236 threshold 241 DDM password validation 306 server in startup programs 305 server, starting 308 defaults, command 537 definitions data group 35 journal 35 named 34 remote journal link 35 renaming 258 RJ link 35 system 35 transfer 35 delay times 167 delay/retry processing first and second 238 third 239 delete management journal receivers 203 overview 37 remote journal environment 38 delete operations journaled *DTAARA, *DTAQ, IFS objects 134 legacy cooperative processing 134 deleting data group entries 292
definitions 256 delivery mode asynchronous 65 synchronous 63 detail report 525 detected differences viewing and resolving 575, 576 directory entries managing 178 RDB 178 display output 524 displaying data group entries 293 definitions 257 distribution request, data-retrieval 55 DLOs example, entry matching 125 generic name support 124 keeping same name 242 object processing 124 duplicate identity column values 373 dynamic updates adding data group entries 278 removing data group entries 292
E
end journaling data areas and data queues 335 files 327 IFS objects 331 IFS tracking entry 331 object tracking entry 335 ending CMPFILDTA jobs 454 examples convert to advanced journaling 86 DLO entry matching 125 IFS object selection, subtree 415 job restart time 316 journal definitions for multimanagement environment 209 journal definitions for switchable data group 207 journal receiver exit program 545 load file entries for MIMIX Dynamic Apply 273 object entry matching 102 object retrieval delay 391 object selection process 407 object selection, order precedence in 408 object selection, subtree 410 port alias, complex 161 port alias, simple 160 querying content of an output file 696 SETIDCOLA command increment values 377 WRKDG SELECT statements 696 exit points 511 journal receiver management 538, 541 MIMIX Monitor 538 MIMIX Promoter 539 exit programs journal receiver management 204, 542 requesting customized programs 540 expand support 526 extended attribute cache 345 configuring 345
F
failed request resolution 43 FEOPT (file and tracking entry options) 239 file id (FID) 75 files combining 363 omitting content 387 output 526 routing 364 sharing 361 synchronizing 480 filtering database replication 76 messages 45 on database send 236 on source side 237 remote journal environment 66 firewall, using CMPFILDTA with 442 folder path names 124
G
generic name support 402 DLOs 124 generic user exit 538
H
help, accessing 14 history retention 168 hot backup 21
I
IBM i5/OS option 42 341
IBM OS/400 objects to not replicate 83
IFS directory, created during installation 29 IFS file systems 118 unsupported 118 IFS object selection examples, subtree 415 subtree 405 IFS objects 118 file id (FID) use with journaling 75 journaled entry types, commitment control and 73 journaling 72 not supported 118 path names 119 supported object types 118 IFS objects, journaled restrictions 121 supported operations 130 synchronizing 482, 505 independent ASP 565 limitations 567 primary 565 replication 563 requirements 567 restrictions 567 secondary 565 synchronizing data within an 477 information and additional resources 17 installations, multiple MIMIX 23 interleave factor 451 Intra configuration 559 IPL, journal receiver change 37
J
job classes 30 job description parameter 527 job descriptions 30, 168 in data group definition 243 in product library 30 list of MIMIX 30 job log for audit 578 job name parameter 527 job names 47 job restart time 313 data group definition procedure 319 examples 315 overview 313 parameter 168, 244
system definition procedure 319 jobs, restarted automatically 313 journal 25 improving performance of 337 maximum number of objects in 26 security audit 53 system 53 journal analysis 43 journal at create 127, 238 requirements 323 requirements and restrictions 324 journal caching 202, 342 journal definition 35 configuring 197 created by other processes 200 creating 215 fields on data group definition 235 parameter tips 201 remote journal environment considerations 205 remote journal naming convention 206 remote journal naming convention, multimanagement 208 remote journaling example 207 journal entries 25 confirmed 64 filtering on database send 236 minimized data 339 OM journal entry 130 receive journal entry (RCVJRNE) 346 unconfirmed 64, 70 journal entry codes for data area and data queues 114 supported by MIMIX user journal processing 122 journal image 239, 355 journal manager 33 journal receiver 25 change management 37, 202 delete management 37, 38, 203 prefix 202 RJ processing earlier receivers 38 size for advanced journaling 213 starting point 26 stranded on target 39 journal receiver management interaction with other products 38 recommendations 37 journal sequence number, change during IPL 37
journal standby state 341 journaled data areas, data queues planning for 85 journaled IFS objects planning for 85 journaled object types user exit program considerations 87 journaling 25 cannot end 327 data areas and data queues 72 ending for data areas and data queues 335 ending for IFS objects 331 ending for physical files 327 IFS objects 72 IFS objects and commitment control 73 implicitly started 323 requirements for starting 323 starting for data areas and data queues 334 starting for IFS objects 330 starting for physical files 326 starting, ending, and verifying 322 verifying 487 verifying for data areas and data queues 336 verifying for IFS objects 332 verifying for physical files 328 journaling environment automatically creating 236 building 219 removing 231 source for values (JRNVAL) 219 journaling on target, RJ environment considerations 39 journaling status data areas and data queues 334 files 326 IFS objects 330 journaling, starting files 326
K
keyed replication 355 comparing file data restriction 442 file entry option defaults 239 preventing before-image filtering 237 restrictions 356 verifying file attributes 359
L
large object (LOB) support user exit program 108
large objects (LOBs) minimized journal entry data 339 legacy cooperative processing configuring 108 limitations 111 requirements 111 libraries to not replicate 83 library list adding QSOC to 164 library list, effect of independent ASP 570 library-based objects, configuring 100 limitations database only data group 110 list detail report 525 list summary report 525 load leveling 57 loading tracking entries 284 LOB replication 107 local-remote journal pair 63 log space 26 logical files 105, 106 long IFS path names 119
M
manage directory entries 178 management system 24 maximum size transmitted 177 MAXOPT2 value 213 menu MIMIX Configuration 295 MIMIX Main 91 message handling 167 message log 521 message queues associated with user profiles 104 journal-related threshold 204 messages 44 CMPDLOA 516 CMPFILA 514 CMPFILDTA 517 CMPIFSA 515 CMPOBJA 515 CMPRCDCNT 516 comparison completion and escape 514 MIMIX AutoGuard 487 MIMIX Dynamic Apply configuring 105, 108 recommended for files 105 requirements and limitations 110 MIMIX environment 29 MIMIX installation 23 MIMIX jobs, restart time for 313 MIMIX Model Switch Framework 538 MIMIX performance, improving 337 MIMIX Retry Monitor 43 MIMIXOWN user profile 31, 306 MIMIXQGPL library 34 MIMIXSBS subsystem 34, 90 minimized journal entry data 339 LOBs 107 MMNFYNEWE monitor 127 monitor new objects not configured to MIMIX 127 move/rename operations system journal replication 130 user journal replication 131 multimanagement journal definition naming 208 multi-threaded jobs 441
N
name pattern 405 name space 53 names, displaying long 119 naming conventions data group definitions 234 journal definitions 201, 206, 208 multi-part 27 transfer definitions 176 transfer definitions, contextual (*ANY) 183 transfer definitions, multiple network systems 172 network systems 24 multiple 172 new objects automatically journal 238 automatically replicate 127 files 127 files processed by legacy cooperative processing 128 files processed with MIMIX Dynamic Apply 127 IFS object journal at create requirements 323 IFS objects, data areas, data queues 128 journal at create selection criteria 324
O
object apply process defaults 243 description 54 threshold 243 object attributes, comparing 422 object auditing 323 object auditing level, i5/OS manually set for a data group 297 set by MIMIX 58, 297 object auditing value data areas, data queues 112 DLOs 124 IFS objects 120 library-based objects 98 omit T-ZC entry considerations 388 object entry, data group creating 267 object locking retry interval 238 object processing data areas, data queues 112 defaults 241 DLOs 124 high volume objects 350 IFS objects 118 retry interval 238 spooled files 102 object retrieval delay considerations 391 examples 391 selecting 391 object retrieve process 56 defaults 243 description 53 threshold 243 with high volume objects 350 object selection 399 commands which use 399 examples, order precedence 408 examples, process 407 examples, subtree 410 name pattern 405 order precedence 401 parameter 401 process 399 subtree 404
object selector elements 401 by function 402 object selectors 401 object send process 54 description 53 threshold 242 object types supported 96, 549 Omit content (OMTDTA) parameter 388 and comparison commands 389 and cooperative processing 389 open commit cycles audit results 582, 583, 587 OptiConnect, configuring 163 outfiles 621 MCAG 623 MCDTACRGE 626 MCNODE 628 MXCDGFE 630 MXCMPDLOA 632 MXCMPFILA 634 MXCMPFILD 636 MXCMPFILR 639 MXCMPIFSA 644 MXCMPOBJA 647 MXCMPRCDC 640 MXDGACT 649 MXDGACTE 651 MXDGDAE 659 MXDGDFN 660 MXDGDLOE 668 MXDGFE 670 MXDGIFSE 674, 726, 728 MXDGIFSTE 726 MXDGOBJE 703 MXDGOBJTE 728 MXDGSTS 676 MXDGTSP 706 MXJRNDFN 709 MXSYSDFN 716 MXTFRDFN 720 MZPRCDFN 722 MZPRCE 723 user profile password 619 user profile status 615 WRKRJLNK 713 outfiles, supporting information record format 621 work with panels 622 output batch 527
considerations 523 display 524 expand support 526 file 526 parameter 523 print 524 output file querying content, examples of 696 output file fields Difference Indicator 582, 587 System 1 Indicator field 589 System 2 Indicator field 589 output queues 168 overview MIMIX operations 40 remote journal support 61 starting and ending replication 40 support for resolving problems 42 support for switching 24, 44 working with messages 44
P
parallel processing 441 path names, IFS 119 policy, CMPRCDCNT commit threshold 351 polling interval 238 port alias 160 complex example 161 creating 162 simple example 160 print output 524 printing controlling characteristics of 168 data group entries 293 definitions 257 private authorities, *MSGQ replication of 104 problems, journaling data areas and data queues 334 files 326 IFS objects 330 process container send and receive 56 database apply 76 database reader 66 database receive 76 database send 76 names 47 object apply 56 object retrieve 56
object send 54 process, object selection 399 processing defaults container send 243 database apply 241 file entry options 239 object apply 243 object retrieve 243 user journal entry 236 production system 23 publications conventions 14 formatting used in 15 IBM 17
Q
QAUDCTL system value 53 QAUDLVL system value 53, 103 QDFTJRN data area 238 restrictions 324 role in processing new objects 324 QSOC library 164 subsystem 305
R
RCVJRNE (Receive Journal Entry) 346 configuring values 347 determining whether to change the value of 347 understanding its values 346 RDB 178 directory entries 178 RDB directory entry 188 reader wait time 235 receiver library, changing for RJ target journal 222 receivers change management 202 delete management 203 recommendation multimanagement journal definitions 208 relational database (RDB) 178 entries 178, 186 remote journal benefits 61 i5/OS function 25, 61 i5/OS function, asynchronous delivery 65 i5/OS function, synchronous delivery 63
MIMIX support 61 relational database 178 remote journal environment changing 222 contextual transfer definitions 182 receiver change management 37 receiver delete management 38 restrictions 62 RJ link 66 security implications 306 switch processing changes 44 remote journal link 35, 66 remote journal link, See also RJ link remote journaling data group definition 236 repairing file data 458 files in *HLDERR 441 files on hold 461 replicating user profiles 476 what to not replicate 83 replication advanced topic parameters 237 by object type 96 configuring advanced techniques 353 constraint-induced modifications 371 data area 77 defaults for object types 96 direction of 23 ending data group 40 ending MIMIX 40 independent ASP 563 maximum size threshold 177 positional vs. keyed 355 process, remote journaling environment 66 retrieving extended attributes 345 spooled files 102 SQL stored procedures 393 starting data group 40 starting MIMIX 40 system journal process 53 unit of work for 24 user-defined functions 393 what to not replicate 83 replication path 46 reports detail 525 list detail 525 list summary 525
types for compare commands 418 requirement objects and journal in same ASP 26 requirements independent ASP 567 journal at create 323 keyed replication 355 legacy cooperative processing 111 MIMIX Dynamic Apply 110 standby journaling 343 user journal replication of data areas and data queues 112 restarted 313 restore operations, journaled *DTAARA, *DTAQ, IFS objects 134 restrictions comparing file data 441 data areas and data queues 113 independent ASP 567 journal at create 324 journal receiver management 38 journaled *DTAARA, *DTAQ objects 113 journaled IFS objects 121 keyed replication (unique key) 356 legacy cooperative processing 111 LOBs 108 MIMIX Dynamic Apply 110 number of objects in journal 26 QDFTJRN data area 324 remote journaling 62 standby journaling 343 retrying, data group activity entries 43 RJ link 35 adding 225 changing 227 data group definition parameter 236 description 66 end options 67 identifying data groups that use 310 sharing among data groups 66 switching considerations 70 threshold 237 RJ link monitors description 68 displaying status of 68 ending 68 not installed, status when 68 operation 68
S
save-while-active 396 considerations 396 examples 397 options 397 wait time 396 search process, *ANY transfer definitions 181 security considerations, CMPFILDTA command 442 general information 80 remote journaling implications 306 security audit journal 53 sending DLOs 509 IFS objects 508 library-based objects 506 serialization database files and journaled objects 85 object changes with database 72 servers starting DDM 308 starting TCP 189 short transfer definition name 176 source physical files 105, 106 source system 23 spooled files 102 compare commands 418 keeping deleted 103 options 103 retaining on target system 242 SQL stored procedures 393 replication requirements 393 SQL table identity columns 373 alternatives to SETIDCOLA 375 check for replication of 378 problem 373 SETIDCOLA command details 376 SETIDCOLA command examples 377 SETIDCOLA command limitations 374 SETIDCOLA command usage notes 377 setting attribute 378 when to use SETIDCOLA 374 standby journaling IBM i5/OS option 42 341 journal caching 342 journal standby state 341 MIMIX processing with 342 overview 341 requirements 343
restrictions 343 start journaling data areas and data queues 334 file entry 326 files 326 IFS objects 330 IFS tracking entry 330 object tracking entry 334 starting system and journal managers 296 TCP server 189 TCP server automatically 190 startup programs changes for remote journaling 305 MIMIX subsystem 90 QSOC subsystem 305 status, values affecting updates to 238 storage, data libraries 168 stranded journal on target, journal entries 39 subsystem MIMIXSBS, starting 90 QSOC 305 subtree 404 IFS objects 405 switching allowing 234 data group 24 enabling journaling on target system 235 example RJ journal definitions for 207 independent ASP restriction 568 MIMIX Model Switch Framework with RJ link 70 preventing identity column problems 373 remote journaling changes to 44 removing stranded journal receivers 39 RJ link considerations 70 synchronization check, automatic 237 synchronizing 472 activity entries overview 479 commands for 474 considerations 474 data group activity entries 503 database files 489 database files overview 480 DLOs 499 DLOs in a data group 499 DLOs without a data group 500 establish a start point 483 file entry overview 480 files with triggers 480
IFS objects 495 IFS objects by path name only 496 IFS objects in a data group 495 IFS objects without a data group 496 IFS tracking entries 505 including logical files 481 independent ASP, data in an 477 initial 484 initial configuration 483 initial configuration MQ environment 483 limit maximum size 474 LOB data 476 object tracking entries 505 object, IFS, DLO overview 478 objects 491 objects in a data group 491 objects without a data group 492 related file 481 resources for 483 status changes caused by 476 tracking entries 482 user profiles 474, 476 synchronous delivery 63 unconfirmed entries 64 SYSBAS 563, 565 system ASP 564 system definition 35, 166 changing 171 creating 170 parameter tips 167 system journal 53 system journal replication advanced techniques 353 omitting content 387 system library list 163, 570 system manager 32 system user profiles to not replicate 83 system value QAUDCTL 53 QAUDLVL 53, 103 QSYSLIBL 164 system, roles 23
T
target journal state 202 target system 23 TCP/IP adding to startup program 305
configuring native 159 creating port aliases for 160 temporary files to not replicate 83 thread groups 450 threshold, backlog adjusting 251 container send 243 database apply 241 database reader/send 241 object apply 243 object retrieve 243 object send 242 remote journal link 237 threshold, CMPRCDCNT commit 351 timestamps, automatic 237 tracking entries loading 284 loading for data areas, data queues 285 loading for IFS objects 284 purpose 74 tracking entry file identifiers (FIDs) 312 transfer definition 35, 174, 450 changing 186 contextual system support (*ANY) 28, 181 fields in data group definition 235 fields in system definition 167 multiple network system environment 172 other uses 174 parameter tips 176 short name 176 transfer protocols OptiConnect parameters 177 SNA parameters 177 TCP parameters 176 trigger programs defined 368 synchronizing files 369 triggers avoiding problems 444 comparing file data 443 disabling during synchronization 480 read 443 update, insert, and delete 443 T-ZC journal entries access types 387 configuring to omit 388 omitting 387
U
unconfirmed journal entries 64, 70 unique key comparing file data restriction 442 file entry options for replicating 239 replication of 355 user ASP 565 user exit points 541 user exit program data areas and data queues 87 IFS objects 87 large objects (LOBs) 108 user exit, generic 538 user journal replication advanced techniques 353 requirements for data areas and data queues 112 supported journal entries for data areas, data queues 114 tracking entry 74 user profile MIMIXOWN 306 password 619 status 615 user profiles default 168 MIMIX 31 replication of 104 specifying status 242 synchronizing 474 system distribution directory entries 476 to not replicate 83 user-defined functions 393
V
verifying communications link 194, 195 initial synchronization 487 journaling, IFS tracking entries 332 journaling, object tracking entries 336 journaling, physical files 328 key attributes 359 send and receive processes automatically 238
W
wait time comparing file data 450 reader 235