
Version 5.0
MIMIX ha1 and MIMIX ha Lite for IBM i5/OS

MIMIX Reference

Published: July 22, 2008
Software level: 5.0.15.00

Copyrights, Trademarks, and Notices

Product conventions .... 14
    Menus and commands .... 14
    Accessing online help .... 14
Publication conventions .... 14
    Formatting for displays and commands .... 15
Sources for additional information .... 17
How to contact us .... 19

Chapter 1   MIMIX overview .... 21
    MIMIX concepts .... 23
        System roles and relationships .... 23
        Data groups: the unit of replication .... 24
        Changing directions: switchable data groups .... 24
            Additional switching capability .... 25
        Journaling and object auditing introduction .... 25
        Log spaces .... 26
        Multi-part naming convention .... 27
    The MIMIX environment .... 29
        The product library .... 29
            IFS directories .... 29
        Job descriptions and job classes .... 30
            User profiles .... 31
        The system manager .... 31
        The journal manager .... 33
        The MIMIXQGPL library .... 34
            MIMIXSBS subsystem .... 34
        Data libraries .... 34
        Named definitions .... 34
        Data group entries .... 35
    Journal receiver management .... 37
        Interaction with other products that manage receivers .... 38
        Processing from an earlier journal receiver .... 38
        Considerations when journaling on target .... 39
    Operational overview .... 40
        Support for starting and ending replication .... 40
        Support for checking installation status .... 41
        Support for automatically detecting and resolving problems .... 41
        Support for working with data groups .... 41
        Support for resolving problems .... 42
        Support for switching a data group .... 44
        Support for working with messages .... 44

Chapter 2   Replication process overview .... 46
    Replication job and supporting job names .... 47
    Cooperative processing introduction .... 50
        MIMIX Dynamic Apply .... 50
        Legacy cooperative processing .... 51
        Advanced journaling .... 51
    System journal replication .... 53
        Processing self-contained activity entries .... 54
        Processing data-retrieval activity entries .... 55
        Processes with multiple jobs .... 57
        Tracking object replication .... 57
        Managing object auditing .... 57
    User journal replication .... 61
        What is remote journaling? .... 61
        Benefits of using remote journaling with MIMIX .... 61
        Restrictions of MIMIX Remote Journal support .... 62
        Overview of IBM processing of remote journals .... 63
            Synchronous delivery .... 63
            Asynchronous delivery .... 65
        User journal replication processes .... 66
        The RJ link .... 66
            Sharing RJ links among data groups .... 66
            RJ links within and independently of data groups .... 67
            Differences between ENDDG and ENDRJLNK commands .... 67
        RJ link monitors .... 68
            RJ link monitors - operation .... 68
            RJ link monitors in complex configurations .... 68
        Support for unconfirmed entries during a switch .... 70
        RJ link considerations when switching .... 70
    User journal replication of IFS objects, data areas, data queues .... 72
        Benefits of advanced journaling .... 72
        Replication processes used by advanced journaling .... 73
        Tracking entries .... 74
        IFS object file identifiers (FIDs) .... 75
    Lesser-used processes for user journal replication .... 76
        User journal replication with source-send processing .... 76
        The data area polling process .... 77

Chapter 3   Preparing for MIMIX .... 80
    Checklist: pre-configuration .... 81
    Data that should not be replicated .... 83
    Planning for journaled IFS objects, data areas, and data queues .... 85
        Is user journal replication appropriate for your environment? .... 85
        Serialized transactions with database files .... 85
        Converting existing data groups .... 85
            Conversion examples .... 86
        Database apply session balancing .... 87
        User exit program considerations .... 87
    Starting the MIMIXSBS subsystem .... 90
    Accessing the MIMIX Main Menu .... 91

Chapter 4   Planning choices and details by object class .... 93
    Replication choices by object type .... 96
    Configured object auditing value for data group entries .... 98
    Identifying library-based objects for replication .... 100
        How MIMIX uses object entries to evaluate journal entries for replication .... 101
        Identifying spooled files for replication .... 102
            Additional choices for spooled file replication .... 103
        Replicating user profiles and associated message queues .... 104
    Identifying logical and physical files for replication .... 105
        Considerations for LF and PF files .... 105
            Files with LOBs .... 107
        Configuration requirements for LF and PF files .... 108
        Requirements and limitations of MIMIX Dynamic Apply .... 110
        Requirements and limitations of legacy cooperative processing .... 111
    Identifying data areas and data queues for replication .... 112
        Configuration requirements - data areas and data queues .... 112
        Restrictions - user journal replication of data areas and data queues .... 113
            Supported journal code E and Q entry types .... 114
    Identifying IFS objects for replication .... 118
        Supported IFS file systems and object types .... 118
        Considerations when identifying IFS objects .... 119
            MIMIX processing order for data group IFS entries .... 119
            Long IFS path names .... 119
            Upper and lower case IFS object names .... 119
            Configured object auditing value for IFS objects .... 120
        Configuration requirements - IFS objects .... 120
        Restrictions - user journal replication of IFS objects .... 121
            Supported journal code B entry types .... 122
    Identifying DLOs for replication .... 124
        How MIMIX uses DLO entries to evaluate journal entries for replication .... 124
            Sequence and priority order for documents .... 124
            Sequence and priority order for folders .... 125
    Processing of newly created files and objects .... 127
        Newly created files .... 127
            New file processing - MIMIX Dynamic Apply .... 127
            New file processing - legacy cooperative processing .... 128
        Newly created IFS objects, data areas, and data queues .... 128
            Determining how an activity entry for a create operation was replicated .... 129
    Processing variations for common operations .... 130
        Move/rename operations - system journal replication .... 130
        Move/rename operations - user journaled data areas, data queues, IFS objects .... 131
        Delete operations - files configured for legacy cooperative processing .... 134
        Delete operations - user journaled data areas, data queues, IFS objects .... 134
        Restore operations - user journaled data areas, data queues, IFS objects .... 134

Chapter 5   Configuration checklists .... 137
    Checklist: New remote journal (preferred) configuration .... 139
    Checklist: New MIMIX source-send configuration .... 143
    Checklist: Converting to remote journaling .... 147
    Converting to MIMIX Dynamic Apply .... 150
        Converting using the Convert Data Group command .... 150
        Checklist: manually converting to MIMIX Dynamic Apply .... 151
    Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling .... 154
    Checklist: Converting to legacy cooperative processing .... 157

Chapter 6   System-level communications .... 159
    Configuring for native TCP/IP .... 159
        Port aliases - simple example .... 160
        Port aliases - complex example .... 161
        Creating port aliases .... 162
    Configuring APPC/SNA .... 163
    Configuring OptiConnect .... 163

Chapter 7   Configuring system definitions .... 166
    Tips for system definition parameters .... 167
    Creating system definitions .... 170
    Changing a system definition .... 171
    Multiple network system considerations .... 172

Chapter 8   Configuring transfer definitions .... 174
    Tips for transfer definition parameters .... 176
    Using contextual (*ANY) transfer definitions .... 181
        Search and selection process .... 181
        Considerations for remote journaling .... 182
        Considerations for MIMIX source-send configurations .... 182
        Naming conventions for contextual transfer definitions .... 183
        Additional usage considerations for contextual transfer definitions .... 183
    Creating a transfer definition .... 184
    Changing a transfer definition .... 186
        Changing a transfer definition to support remote journaling .... 186
    Finding the system database name for RDB directory entries .... 188
        Using i5/OS commands to work with RDB directory entries .... 188
    Starting the Lakeview TCP/IP server .... 189
    Using autostart job entries to start the TCP server .... 190
        Adding an autostart job entry .... 190
        Identifying the autostart job entry in the MIMIXSBS subsystem .... 191
        Changing the job description for an autostart job entry .... 191
    Verifying a communications link for system definitions .... 194
    Verifying the communications link for a data group .... 195
        Verifying all communications links .... 195

Chapter 9   Configuring journal definitions .... 197
    Journal definitions created by other processes .... 200
    Tips for journal definition parameters .... 201
    Journal definition considerations .... 205
        Naming convention for remote journaling environments with 2 systems .... 206
            Example journal definitions for a switchable data group .... 207
        Naming convention for multimanagement environments .... 208
            Example journal definitions for three management nodes .... 209
    Journal receiver size for replicating large object data .... 213
        Verifying journal receiver size options .... 213
        Changing journal receiver size options .... 213
    Creating a journal definition .... 215
    Changing a journal definition .... 217
    Building the journaling environment .... 219
    Changing the remote journal environment .... 222
    Adding a remote journal link .... 225
    Changing a remote journal link .... 227
    Temporarily changing from RJ to MIMIX processing .... 228
    Changing from remote journaling to MIMIX processing .... 229
    Removing a remote journaling environment .... 231

Chapter 10  Configuring data group definitions .... 233
    Tips for data group parameters .... 234
        Additional considerations for data groups .... 244
    Creating a data group definition .... 247
    Changing a data group definition .... 251
    Fine-tuning backlog warning thresholds for a data group .... 251

Chapter 11  Additional options: working with definitions .... 255
    Copying a definition .... 255
    Deleting a definition .... 256
    Displaying a definition .... 257
    Printing a definition .... 257
    Renaming definitions .... 258
        Renaming a system definition .... 258
        Renaming a transfer definition .... 261
        Renaming a journal definition with considerations for RJ link .... 262
        Renaming a data group definition .... 263

Chapter 12  Configuring data group entries .... 265
    Creating data group object entries .... 267
        Loading data group object entries .... 267
        Adding or changing a data group object entry .... 268
    Creating data group file entries .... 272
        Loading file entries .... 272
            Loading file entries from a data group's object entries .... 273
            Loading file entries from a library .... 275
            Loading file entries from a journal definition .... 276
            Loading file entries from another data group's file entries .... 277
        Adding a data group file entry .... 278
        Changing a data group file entry .... 279
    Creating data group IFS entries .... 282
        Adding or changing a data group IFS entry .... 282
    Loading tracking entries .... 284
        Loading IFS tracking entries .... 284
        Loading object tracking entries .... 285
    Creating data group DLO entries .... 287
        Loading DLO entries from a folder .... 287
        Adding or changing a data group DLO entry .... 288
    Creating data group data area entries .... 289
        Loading data area entries for a library .... 289
        Adding or changing a data group data area entry .... 290
    Additional options: working with DG entries .... 291
        Copying a data group entry .... 291
        Removing a data group entry .... 292
        Displaying a data group entry .... 293
        Printing a data group entry .... 293

Chapter 13  Additional supporting tasks for configuration .... 294
    Accessing the Configuration Menu .... 295
    Starting the system and journal managers .... 296
    Setting data group auditing values manually .... 297
        Examples of changing of an IFS object's auditing value .... 298
    Checking file entry configuration manually .... 303
    Changes to startup programs .... 305
    Checking DDM password validation level in use .... 306
        Option 1. Enable MIMIXOWN user profile for DDM environment .... 306
        Option 2. Allow user profiles without passwords .... 307
    Starting the DDM TCP/IP server .... 308
    Identifying data groups that use an RJ link .... 310
    Using file identifiers (FIDs) for IFS objects .... 312
    Configuring restart times for MIMIX jobs .... 313
        Configurable job restart time operation .... 313
            Considerations for using *NONE .... 315
        Examples: job restart time .... 315
            Restart time examples: system definitions .... 316
            Restart time examples: system and data group definition combinations .... 316
        Configuring the restart time in a system definition .... 319
        Configuring the restart time in a data group definition .... 319

Chapter 14  Starting, ending, and verifying journaling .... 322
    What objects need to be journaled .... 323
        Authority requirements for starting journaling .... 324
    MIMIX commands for starting journaling .... 325
    Journaling for physical files .... 326
        Displaying journaling status for physical files .... 326
        Starting journaling for physical files .... 326
        Ending journaling for physical files .... 327
        Verifying journaling for physical files .... 328
    Journaling for IFS objects .... 330
        Displaying journaling status for IFS objects .... 330
        Starting journaling for IFS objects .... 330
        Ending journaling for IFS objects .... 331
        Verifying journaling for IFS objects .... 332
    Journaling for data areas and data queues .... 334
        Displaying journaling status for data areas and data queues .... 334
        Starting journaling for data areas and data queues .... 334
        Ending journaling for data areas and data queues .... 335
        Verifying journaling for data areas and data queues .... 336

Chapter 15  Configuring for improved performance .... 337
    Minimized journal entry data .... 339
        Restrictions of minimized journal entry data .... 339
        Configuring for minimized journal entry data .... 340
    Configuring for high availability journal performance enhancements .... 341
        Journal standby state .... 341
            Minimizing potential performance impacts of standby state .... 342
        Journal caching .... 342
        MIMIX processing of high availability journal performance enhancements .... 342
        Requirements of high availability journal performance enhancements .... 343
        Restrictions of high availability journal performance enhancements .... 343
    Caching extended attributes of *FILE objects .... 345
    Increasing data returned in journal entry blocks by delaying RCVJRNE calls .... 346
        Understanding the data area format .... 346
        Determining if the data area should be changed .... 347
        Configuring the RCVJRNE call delay and block values .... 347
    Configuring high volume objects for better performance .... 350
    Improving performance of the #MBRRCDCNT audit .... 351

Chapter 16  Configuring advanced replication techniques .... 353
    Keyed replication .... 355
        Keyed vs positional replication .... 355
        Requirements for keyed replication .... 355
        Restrictions of keyed replication .... 356
        Implementing keyed replication .... 356
            Changing a data group configuration to use keyed replication .... 356
            Changing a data group file entry to use keyed replication .... 357
        Verifying key attributes .... 359
    Data distribution and data management scenarios .... 361
        Configuring for bi-directional flow .... 361
            Bi-directional requirements: system journal replication .... 361
            Bi-directional requirements: user journal replication .... 362
        Configuring for file routing and file combining .... 363
        Configuring for cascading distributions .... 365
    Trigger support .... 368
        How MIMIX handles triggers .... 368
        Considerations when using triggers .... 368
        Enabling trigger support .... 369
        Synchronizing files with triggers .... 369
    Constraint support .... 370
        Referential constraints with delete rules .... 370
        Replication of constraint-induced modifications .... 371
    Handling SQL identity columns .... 373
        The identity column problem explained .... 373
        When the SETIDCOLA command is useful .... 374
        SETIDCOLA command limitations .... 374
        Alternative solutions .... 375
        SETIDCOLA command details .... 376
            Usage notes .... 377
            Examples of choosing a value for INCREMENTS .... 377
        Checking for replication of tables with identity columns .... 378
        Setting the identity column attribute for replicated files .... 378
    Collision resolution .... 381
        Additional methods available with CR classes .... 381
        Requirements for using collision resolution .... 382
        Working with collision resolution classes .... 383
            Creating a collision resolution class .... 383
            Changing a collision resolution class .... 384
            Deleting a collision resolution class .... 384
            Displaying a collision resolution class .... 384
            Printing a collision resolution class .... 385
    Omitting T-ZC content from system journal replication .... 387
        Configuration requirements and considerations for omitting T-ZC content .... 388
            Omit content (OMTDTA) and cooperative processing .... 389
            Omit content (OMTDTA) and comparison commands .... 389
    Selecting an object retrieval delay .... 391
        Object retrieval delay considerations and examples .... 391
    Configuring to replicate SQL stored procedures and user-defined functions .... 393
        Requirements for replicating SQL stored procedure operations .... 393
        To replicate SQL stored procedure operations .... 393
    Using Save-While-Active in MIMIX .... 396
        Considerations for save-while-active .... 396
        Types of save-while-active options .... 397
        Example configurations .... 397

Chapter 17  Object selection for Compare and Synchronize commands .... 399
    Object selection process .... 399
        Order precedence .... 401
    Parameters for specifying object selectors .... 402
    Object selection examples .... 407
        Processing example with a data group and an object selection parameter .... 407
        Example subtree .... 410
        Example Name pattern .... 414
        Example subtree for IFS objects .... 415
    Report types and output formats .... 418
        Spooled files .... 418
        Outfiles .... 419

Chapter 18  Comparing attributes .... 420
    About the Compare Attributes commands .... 420
        Choices for selecting objects to compare .... 421
            Unique parameters .... 421
        Choices for selecting attributes to compare .... 422
            CMPFILA supported object attributes for *FILE objects .... 423
            CMPOBJA supported object attributes for *FILE objects .... 423
    Comparing file and member attributes .... 425
    Comparing object attributes .... 428
    Comparing IFS object attributes .... 431
    Comparing DLO attributes .... 434

Chapter 19  Comparing file record counts and file member data .... 437
    Comparing file record counts .... 437
        To compare file record counts .... 438
    Significant features for comparing file member data .... 440
        Repairing data .... 440
        Active and non-active processing .... 440
        Processing members held due to error .... 441
        Additional features .... 441
    Considerations for using the CMPFILDTA command .... 441
        Recommendations and restrictions .... 441
        Using the CMPFILDTA command with firewalls .... 442
        Security considerations .... 442
        Comparing allocated records to records not yet allocated .... 442
        Comparing files with unique keys, triggers, and constraints .... 443
            Avoiding issues with triggers .... 444
            Referential integrity considerations .... 444
            Job priority .... 444
    Specifying CMPFILDTA parameter values .... 445
        Specifying file members to compare .... 445
        Tips for specifying values for unique parameters .... 446
        Specifying the report type, output, and type of processing .... 449
            System to receive output .... 449
            Interactive and batch processing .... 449
        Using the additional parameters .... 449
    Advanced subset options for CMPFILDTA .... 451
    Ending CMPFILDTA requests .... 454
    Comparing file member data - basic procedure (non-active) .... 455
    Comparing and repairing file member data - basic procedure .... 458
    Comparing and repairing file member data - members on hold (*HLDERR) .... 461
    Comparing file member data using active processing technology .... 464
    Comparing file member data using subsetting options .... 467

Chapter 20  Synchronizing data between systems .... 472
    Considerations for synchronizing using MIMIX commands .... 474
        Limiting the maximum sending size .... 474
        Synchronizing user profiles .... 474
            Synchronizing user profiles with SYNCnnn commands .... 475
            Synchronizing user profiles with the SNDNETOBJ command .... 475
            Missing system distribution directory entries automatically added .... 476
        Synchronizing large files and objects .... 476
        Status changes caused by synchronizing .... 476
        Synchronizing objects in an independent ASP .... 477
    About MIMIX commands for synchronizing objects, IFS objects, and DLOs .... 478
    About synchronizing data group activity entries (SYNCDGACTE) .... 479
    About synchronizing file entries (SYNCDGFE command) .... 480
    About synchronizing tracking entries .... 482
    Performing the initial synchronization .... 483
        Establish a synchronization point .... 483
        Resources for synchronizing .... 483
    Using SYNCDG to perform the initial synchronization .... 484
        To perform the initial synchronization using the SYNCDG command defaults .... 485
    Verifying the initial synchronization .... 487
    Synchronizing database files .... 489
    Synchronizing objects .... 491
        To synchronize library-based objects associated with a data group .... 491
        To synchronize library-based objects without a data group .... 492
    Synchronizing IFS objects .... 495
        To synchronize IFS objects associated with a data group .... 495
        To synchronize IFS objects without a data group .... 496
    Synchronizing DLOs .... 499
        To synchronize DLOs associated with a data group .... 499
        To synchronize DLOs without a data group .... 500
    Synchronizing data group activity entries .... 503
    Synchronizing tracking entries .... 505
        To synchronize an IFS tracking entry .... 505
        To synchronize an object tracking entry .... 505
    Sending library-based objects .... 506
    Sending IFS objects .... 508
    Sending DLO objects .... 509

Chapter 21  Introduction to programming .... 510
    Support for customizing .... 511
        User exit points .... 511
        Collision resolution .... 511
    Completion and escape messages for comparison commands .... 514
        CMPFILA messages .... 514
        CMPOBJA messages .... 515
        CMPIFSA messages .... 515
        CMPDLOA messages .... 516
        CMPRCDCNT messages .... 516
        CMPFILDTA messages .... 517
    Adding messages to the MIMIX message log .... 521
    Output and batch guidelines .... 523
        General output considerations .... 523
            Output parameter .... 523
            Display output .... 524
            Print output .... 524
            File output .... 526
        General batch considerations .... 527
            Batch (BATCH) parameter .... 527
            Job description (JOBD) parameter .... 527
            Job name (JOB) parameter .... 527
    Displaying a list of commands in a library .... 528
    Running commands on a remote system .... 529
        Benefits - RUNCMD and RUNCMDS commands .... 529
        Procedures for running commands RUNCMD, RUNCMDS .... 530
            Running commands using a specific protocol .... 530
            Running commands using a MIMIX configuration element .... 532
    Using lists of retrieve commands .... 536
    Changing command defaults .... 537

Chapter 22  Customizing with exit point programs .... 538
    Summary of exit points .... 538
        MIMIX user exit points .... 538
        MIMIX Monitor user exit points .... 538
        MIMIX Promoter user exit points .... 539
        Requesting customized user exit programs .... 540
    Working with journal receiver management user exit points .... 541
        Journal receiver management exit points .... 541
            Change management exit points .... 541
            Delete management exit points .... 542
        Requirements for journal receiver management exit programs .... 542
        Journal receiver management exit program example .... 545

Appendix A  Supported object types for system journal replication .... 549

Appendix B  Copying configurations .... 552
    Supported scenarios .... 552
    Checklist: copy configuration .... 553
    Copying configuration procedure .... 558

Appendix C  Configuring Intra communications .... 559
    Manually configuring Intra using SNA .... 559
    Manually configuring Intra using TCP .... 561

Appendix D  MIMIX support for independent ASPs .... 563
    Benefits of independent ASPs .... 564
    Auxiliary storage pool concepts at a glance .... 564
    Requirements for replicating from independent ASPs .... 567
    Limitations and restrictions for independent ASP support .... 567
    Configuration planning tips for independent ASPs .... 568
        Journal and journal receiver considerations for independent ASPs .... 569
        Configuring IFS objects when using independent ASPs .... 569
        Configuring library-based objects when using independent ASPs .... 569
        Avoiding unexpected changes to the library list .... 570
    Detecting independent ASP overflow conditions .... 572

Appendix E  Interpreting audit results .... 573
    Interpreting audit results - MIMIX Availability Manager .... 575
    Interpreting audit results - 5250 emulator .... 576
    Checking the job log of an audit .... 578
    Interpreting results for configuration data - #DGFE audit .... 580
    Interpreting results of audits for record counts and file data .... 582
        What differences were detected by #FILDTA .... 582
        What differences were detected by #MBRRCDCNT .... 583
    Interpreting results of audits that compare attributes .... 586
        What attribute differences were detected .... 587
        Where was the difference detected .... 589
        What attributes were compared .... 590
        Attributes compared and expected results - #FILATR, #FILATRMBR audits .... 591
        Attributes compared and expected results - #OBJATR audit .... 596
        Attributes compared and expected results - #IFSATR audit .... 604
        Attributes compared and expected results - #DLOATR audit .... 606
        Comparison results for journal status and other journal attributes .... 608
            How configured journaling settings are determined .... 611
        Comparison results for auxiliary storage pool ID (*ASP) .... 612
        Comparison results for user profile status (*USRPRFSTS) .... 615
            How configured user profile status is determined .... 616
        Comparison results for user profile password (*PRFPWDIND) .... 619

Appendix F  Outfile formats .... 621
    Outfile support in MIMIX Availability Manager .... 621
    Work panels with outfile support .... 622
    MCAG outfile (WRKAG command) .... 623
    MCDTACRGE outfile (WRKDTARGE command) .... 626
    MCNODE outfile (WRKNODE command) .... 628
    MXCDGFE outfile (CHKDGFE command) .... 630
    MXCMPDLOA outfile (CMPDLOA command) .... 632
    MXCMPFILA outfile (CMPFILA command) .... 634
    MXCMPFILD outfile (CMPFILDTA command) .... 636
    MXCMPFILR outfile (CMPFILDTA command, RRN report) .... 639
    MXCMPRCDC outfile (CMPRCDCNT command) .... 640
    MXCMPIFSA outfile (CMPIFSA command) .... 644
    MXCMPOBJA outfile (CMPOBJA command) .... 647
    MXDGACT outfile (WRKDGACT command) .... 649
    MXDGACTE outfile (WRKDGACTE command) .... 651
    MXDGDAE outfile (WRKDGDAE command) .... 659
    MXDGDFN outfile (WRKDGDFN command) .... 660
    MXDGDLOE outfile (WRKDGDLOE command) .... 668
    MXDGFE outfile (WRKDGFE command) .... 670
    MXDGIFSE outfile (WRKDGIFSE command) .... 674
    MXDGSTS outfile (WRKDG command) .... 676
        WRKDG outfile SELECT statement examples .... 696
            WRKDG outfile example 1 .... 696
            WRKDG outfile example 2 .... 696
            WRKDG outfile example 3 .... 697
            WRKDG outfile example 4 .... 697
    MXDGOBJE outfile (WRKDGOBJE command) .... 703
    MXDGTSP outfile (WRKDGTSP command) .... 706
    MXJRNDFN outfile (WRKJRNDFN command) .... 709
    MXRJLNK outfile (WRKRJLNK command) .... 713
    MXSYSDFN outfile (WRKSYSDFN command) .... 716
    MXTFRDFN outfile (WRKTFRDFN command) .... 720
    MZPRCDFN outfile (WRKPRCDFN command) .... 722
    MZPRCE outfile (WRKPRCE command) .... 723
    MXDGIFSTE outfile (WRKDGIFSTE command) .... 726
    MXDGOBJTE outfile (WRKDGOBJTE command) .... 728

Index .... 732

Product conventions

Product conventions
The conventions described here apply to all Lakeview products unless otherwise noted.

Menus and commands


Functionality for all Lakeview products is accessible from the product's main menu. For example, all MIMIX products are accessible from a common MIMIX Main Menu. The options you see on a given menu may vary according to which products are installed. When there is a corresponding command for a menu option, the command is shown at the far right of the display. You can use either the menu option or the command to access the function.

To issue a command from a command line outside of the menu interface, you can add the product library name to your library list or you can qualify the command with the name of the product library, as shown in the example below. If you enter a command without parameters, the system will prompt you for any required parameters. If you enter the command with all of the required parameters, the function is invoked immediately. Some commands can be submitted in batch jobs.
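For example, assuming MIMIX is installed in the default product library named MIMIX, either of the following approaches works from a 5250 command line. The WRKDG command is used here only as an illustration:

    ADDLIBLE LIB(MIMIX)   /* Add the product library to your library list */
    WRKDG                 /* Then run commands without qualification      */

    MIMIX/WRKDG           /* Or qualify the command with the library name */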

Accessing online help


MIMIX Availability Manager includes online help that is accessible from within the product. From any window within MIMIX Availability Manager, selecting the Help icon will open the help system and access help for the current window.

From a 5250 emulator, context-sensitive online help is available for all MIMIX commands and displays. Simply press F1 to view help. The position of your cursor determines what you will see. To view general help for a command, a display, or a menu, press F1 when the cursor is at the top of the display. To view help for a specific option, prompt, or column, press F1 when the cursor is located in the area for which you want help.

Publication conventions
This book uses typography and specialized formatting to help you quickly identify the type of information you are reading. For example, specialized styles and techniques distinguish information you see on a display from information you enter on a display or command line. In text, bold type identifies a new term whereas an underlined word highlights its importance. Notes and Attentions are specialized formatting techniques that are used, respectively, to highlight a fact or to warn you of the potential for damage. The following topics illustrate formatting techniques that may be used in this book.


Formatting for displays and commands


Table 1 shows the formatting used for the information you see on displays and command interfaces:

Table 1. Formatting examples for displays and commands

Convention: Initial Capitalization
Description: Names of menus or displays, commands, keyboard keys, columns. (Column names are also shown in italic.)
Examples: MIMIX Basic Main Menu; Update Access Code command; Page Up key; the Status column

Convention: Italic
Description: Names of columns, prompts on displays, variables, user-defined values.
Examples: the Status column; the Start processes prompt; the library-name value

Convention: UPPERCASE
Description: System-defined mnemonic names for commands, parameters, and values.
Examples: CHGUPSCFG command; WARNMSG parameter; the value *YES

Convention: monospace font
Description: Text that you enter into a 5250 emulator command line; also used for examples showing programming code. In instructions, the conventions of italic and UPPERCASE also apply.
Examples: Type the command MIMIX and press Enter.; DGDFN(name system1 system2); CHGVAR &RETURN &CONTINUE


Sources for additional information


This book refers to other published information. The following information, plus additional technical information, can be located in the IBM System i and i5/OS Information Center. From the Information Center you can access these IBM Power™ Systems topics, books, and redbooks:
- Backup and Recovery
- Journal management
- DB2 Universal Database for IBM Power™ Systems Database Programming
- Integrated File System Introduction
- Independent disk pools
- OptiConnect for OS/400
- TCP/IP Setup
- IBM redbook Striving for Optimal Journal Performance on DB2 Universal Database for iSeries, SG24-6286
- IBM redbook AS/400 Remote Journal Function for High Availability and Data Replication, SG24-5189
- IBM redbook Power™ Systems iASPs: A Guide to Moving Applications to Independent ASPs, SG24-6802

The following information may also be helpful if you use advanced journaling:
- DB2 UDB for iSeries SQL Programming Concepts
- DB2 Universal Database for iSeries SQL Reference
- IBM redbook AS/400 Remote Journal Function for High Availability and Data Replication, SG24-5189


How to contact us
For contact information, visit our Contact CustomerCare web page. If you are current on maintenance, support for MIMIX products is also available when you log in to Support Central. It is important to include product and version information whenever you report problems. If you use MIMIX Availability Manager, you should also include the version information provided at the bottom of each MIMIX Availability Manager window.


Chapter 1

MIMIX overview

This book provides concepts, configuration procedures, and reference information for MIMIX ha1 and MIMIX ha Lite. For simplicity, this book uses the term MIMIX to refer to the functionality provided by either product unless a more specific name is necessary.

MIMIX version 5 provides high availability for your critical data in a production environment on IBM Power™ Systems through real-time replication of changes. MIMIX continuously captures changes to critical database files and objects on a production system, sends the changes to a backup system, and applies the changes to the appropriate database file or object on the backup system. The backup system stores exact duplicates of the critical database files and objects from the production system.

MIMIX uses two replication paths to address different pieces of your replication needs. These paths operate with configurable levels of cooperation or can operate independently. The user journal replication path captures changes to critical files and objects configured for replication through a user journal. When configuring this path, shipped defaults use the IBM i remote journaling function to simplify sending data to the remote system. In previous versions, MIMIX DB2 Replicator provided this function. The system journal replication path handles replication of critical system objects (such as user profiles or spooled files), integrated file system (IFS) objects, and document library objects (DLOs) using the IBM i system journal. In previous versions, MIMIX Object Replicator provided this function.

Configuration choices determine the degree of cooperative processing used between the system journal and user journal replication paths when replicating database files, IFS objects, data areas, and data queues.

One common use of MIMIX is to support a hot backup system to which operations can be switched in the event of a planned or unplanned outage. If a production system becomes unavailable, its backup is already prepared for users. In the event of an outage, you can quickly switch users to the backup system where they can continue using their applications. MIMIX captures changes on the backup system for later synchronization with the original production system. When the original production system is brought back online, MIMIX assists you with analysis and synchronization of the database files and other objects.

You can view the replicated data on the backup system at any time without affecting productivity. This allows you to generate reports, submit (read-only) batch jobs, or perform backups to tape from the backup system. In addition to real-time backup capability, replicated databases and objects can be used for distributed processing, allowing you to off-load applications to a backup system.

Typically MIMIX is used among systems in a network. Simple environments have one production system and one backup system. More complex environments have multiple production systems or backup systems. MIMIX can also be used on a single system.

MIMIX automatically monitors your replication environment to detect and correct potential problems that could be detrimental to maintaining high availability. MIMIX also provides a means of verifying that the files and objects being replicated are what is defined to your configuration. This can help ensure the integrity of your MIMIX configuration.

The topics in this chapter include:
- MIMIX concepts on page 23 describes concepts and terminology that you need to know about MIMIX.
- The MIMIX environment on page 29 describes components of the MIMIX operating environment.
- Journal receiver management on page 37 describes how MIMIX performs change management and delete management for replication processes.
- Operational overview on page 40 provides information about day to day MIMIX operations.


MIMIX concepts
This topic identifies concepts and terminology that are fundamental to how MIMIX performs replication. You should be familiar with the relationships between systems, the concepts of data groups and switching, and the role of the i5/OS journaling function in replication.

System roles and relationships


Usually, replication occurs between two or more System i5 systems. The most common scenario for replication is a two-system environment in which one system is used for production activities and the other system is used as a backup system.

The terms production system and backup system are used to describe the role of a system relative to the way applications are used on that system. In an availability management context, a production system is the system currently running the production workload for the applications. In normal operations, the production system is the system on which the principal copy of the data and objects associated with the application exist. A backup system is the system that is not currently running the production workload for the applications. In normal operations, the backup system is the system on which you maintain a copy of the data and objects associated with the application. These roles are not always associated with a specific system. For example, if you switch application processing to the backup system, the backup system temporarily becomes the production system. Typically, for normal operations in a basic two-system environment, replicated data flows from the system running the production workload to the backup system. In a more complex environment, the terms production system and backup system may not be sufficient to clearly identify a specific system or its current role in the replication process. For example, if a payroll application on system CHICAGO is backed up on system LONDON and another application on system LONDON is backed up to the CHICAGO system, both systems are acting as production systems and as backup systems at the same time.

The terms source system and target system identify the direction in which an activity occurs between two participating systems. A source system is the system from which MIMIX replication activity between two systems originates. In replication, the source system contains the journal entries used for replication. Information from the journal entries is either replicated to the target system or used to identify objects to be replicated to the target system. A target system is the system on which MIMIX replication activity between two systems completes.

Because multiple instances of MIMIX can be installed on any system, it is important to correctly identify the instance to which you are referring. It is helpful to consider each installation of MIMIX on a system as being part of a separate network that is referred to as a MIMIX installation. A MIMIX installation is a network of System i5 systems that transfer data and objects among each other using functions of a common MIMIX product. A MIMIX installation is defined by the way in which you configure the MIMIX product for each of the participating systems. A system can participate in multiple independent MIMIX installations.


The terms management system and network system define the role of a system relative to how the products interact within a MIMIX installation. These roles remain associated with the system within the MIMIX installation to which they are defined. Typically one system in the MIMIX installation is designated as the management system and the remaining one or more systems are designated as network systems. A management system is the system in a MIMIX installation that is designated as the control point for all installations of the product within the MIMIX installation. The management system is the location from which work to be performed by the product is defined and maintained. Often the system defined as the management system also serves as the backup system during normal operations. A network system is any system in a MIMIX installation that is not designated as the management system (control point) of that MIMIX installation. Work definitions are automatically distributed from the management system to a network system. Often a system defined as a network system also serves as the production system during normal operations.

Data groups: the unit of replication


The concept of a data group is used to control replication activities. A data group is a logical grouping of database files, data areas, objects, IFS objects, DLOs, or a combination thereof that defines a unit of work by which MIMIX replication activity is controlled. A data group may represent an application, a set of one or more libraries, or all of the critical data on a given system. Application environments may define a data group as a specific set of files and objects. For example, the R/3 environment defines a data group as a set of SQL tables that all use the same journal and which are all replicated to the same system. Users can start and stop replication activity by data group, switch the direction of replication for a data group, and display replication status by data group.

By default, data groups support replication from both the system journal and the user journal. Optionally, you can limit a data group to replicate using only one replication path. The parameters in the data group definition identify the direction in which data is allowed to flow between systems and whether to allow the flow to switch directions. You also define the data to be replicated and many other characteristics the replication process uses on the defined data. The replication process is started and ended by operations on a data group.

A data group entry identifies a source of information that can be replicated. Once a data group definition is created, you can define data group entries. MIMIX uses the data group entries that you create during configuration to determine whether a journal entry should be replicated. If you are using both user journal and system journal replication, a data group can have any combination of entries for files, IFS objects, library-based objects, and DLOs.

Changing directions: switchable data groups


When you configure a data group definition, you specify which of the two systems in the data group is the source for replicated data. In normal operation, data flows between two systems in the direction defined within the data group. When you need to switch the direction of replication, for example, when a production system is removed from the network for planned downtime, default values in the data group definition allow the same data group to be used for replication from either direction.


MIMIX provides support for switching due to planned and unplanned events. At the data group level, the Switch Data Group (SWTDG) command switches the direction in which replication occurs between systems. Note: A switchable data group is different from bi-directional data flow. Bi-directional data flow is a data sharing technique described in Configuring advanced replication techniques on page 353.
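For illustration, a switch of a data group named INVENTORY defined between systems CHICAGO and HONGKONG might be requested as follows. The data group name is hypothetical, and the SWTDG command supports additional parameters not shown here:

    SWTDG DGDFN(INVENTORY CHICAGO HONGKONG)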

Additional switching capability


Typically, switching is performed by using the MIMIX Switch Assistant. MIMIX Switch Assistant provides a user interface that prompts you through the switch process. MIMIX Switch Assistant calls your default MIMIX Model Switch Framework to control the switching process. MIMIX ha1 and MIMIX ha Lite include MIMIX Monitor, which provides support for the MIMIX Model Switch Framework. Through this support, you can customize monitoring and switching programs. Switching support in MIMIX Monitor includes logical and physical switching. When you perform switching in this manner, the exit programs called by your implementation of MIMIX Model Switch Framework must include the SWTDG command. For more information, see the Using MIMIX Monitor book. Your authorized Lakeview representative can assist you in implementing advanced switching scenarios.

Journaling and object auditing introduction


MIMIX relies on data recorded by the i5/OS operating system functions of journaling, remote journaling, and object auditing. Each of these functions records information in a journal. Variations in the replication process are optimized according to characteristics of the information provided by each of these functions.

Journaling is the process of recording information about changes to user-identified objects, including those made by a system or user function, for a limited number of object types. Events are logged in a user journal. Optionally, events logged in a user journal can be replicated to a remote system using remote journaling, whereby the journal and journal receiver exist on a remote system or on a different logical partition. Object auditing is the process by which the system creates audit records for specified types of access to objects. Object auditing logs events in a specialized system journal (the security audit journal, QAUDJRN).

When an event occurs to an object or database file for which journaling is enabled, or when a security-relevant event occurs, the system logs identifying information about the event as a journal entry, a record in a journal receiver. The journal receiver is associated with a journal and contains the log of all activity for objects defined to the journal or all objects for which an audit trail is kept.

Journaling must be active before MIMIX can perform replication. MIMIX uses the recorded journal entries to replicate activity to a designated system. Data group entries and other data group configuration settings determine whether MIMIX replicates activity for objects and whether replication is performed based on entries logged to the system journal or to a user journal. For some configurations, MIMIX uses entries from both journals.
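As background, the following sketch shows how a user journal environment is typically created and how journaling is started for a physical file with standard i5/OS commands. The library, receiver, journal, and file names are placeholders; your MIMIX configuration process may set up journaling for you:

    CRTJRNRCV JRNRCV(APPLIB/RCV0001)                      /* Create a journal receiver */
    CRTJRN    JRN(APPLIB/APPJRN) JRNRCV(APPLIB/RCV0001)   /* Create the user journal   */
    STRJRNPF  FILE(APPLIB/CUSTOMER) JRN(APPLIB/APPJRN) IMAGES(*BOTH)  /* Start journaling the file */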


Journal entries deposited into the system journal (on behalf of an audited object) contain only an indication of a change to an object. Some of these types of entries contain enough information for MIMIX to apply the change directly to the replicated object on the target system; however, many types of these entries require MIMIX to gather additional information about the object from the source system in order to apply the change to the replicated object on the target system. Journal entries deposited into a user journal (on behalf of a journaled file, data area, data queue, or IFS object) contain images of the data which was changed. This information is needed by MIMIX in order to apply the change directly to the replicated object on the target system.

When replication is started, the start request (STRDG command) identifies a sequence number within a journal receiver at which MIMIX processing begins. In data groups configured with remote journaling, the specified sequence number and receiver name is the starting point for MIMIX processing from the remote journal. The i5/OS remote journal function controls where it starts sending entries from the source journal receiver to the remote journal receiver.

The i5/OS operating system requires that journaled objects reside in the same auxiliary storage pool (ASP) as the user journal. The journal receivers can be in a different ASP. If the journal is in a primary independent ASP, the journal receivers must reside in the same primary independent ASP or a secondary independent ASP within the same ASP group.

The i5/OS operating system (V5R4 and higher releases) allows journaling a maximum of 10,000,000 objects to one user journal. MIMIX can use existing journals with this value. Journals created by MIMIX have a maximum of 250,000 objects. User journaling will not start if the number of objects associated with the journal exceeds the journal maximum. The maximum includes:
- Objects for which changes are currently being journaled
- Objects for which journaling was ended while the current receiver is attached
- Journal receivers that are, or were, associated with the journal while the current journal receiver is attached

Remote journaling requires unique considerations for journaling and journal receiver management. For additional information, see Journal receiver management on page 37.

Log spaces
Based on System i5 user space objects, a log space is a MIMIX object that provides an efficient storage and manipulation mechanism for replicated data that is temporarily stored on the target system during the receive and apply processes. All internal structures and objects that make up a log space are created and manipulated by MIMIX.


Multi-part naming convention


MIMIX uses named definitions to identify related user-defined configuration information. A multi-part, qualified naming convention uniquely describes certain types of definitions. This includes a two-part name for journal definitions and a three-part name for transfer definitions and data group definitions. Newly created data groups use remote journaling as the default configuration, which has unique requirements for naming data group definitions. For more information, see Naming convention for remote journaling environments with 2 systems on page 206.

The multi-part name consists of a name followed by one or two participating system names (actually, names of system definitions). Together the elements of the multi-part name define the entire environment for that definition. As a whole unit, a fully-qualified two-part or three-part name must be unique. The first element, the name, does not need to be unique. In a three-part name, the order of the system names is also important, since two valid definitions may share the same three elements but with the system names in different orders.

For example, MIMIX automatically creates a journal definition for the security audit journal when you create a system definition. Each of these journal definitions is named QAUDJRN, so the name alone is not unique. The name must be qualified with the name of the system to which the journal definition applies, such as QAUDJRN CHICAGO or QAUDJRN NEWYORK. Similarly, the data group definitions INVENTORY CHICAGO HONGKONG and INVENTORY HONGKONG CHICAGO are unique because of the order of the system names.

When using command interfaces which require a data group definition, MIMIX can derive the fully-qualified name of a data group definition if a partial name provided is sufficient to determine the unique name. If the first part of the name is unique, it can be used by itself to designate the data group definition. For example, if the data group definition INVENTORY CHICAGO HONGKONG is the only data group with the name INVENTORY, then specifying INVENTORY on any command requiring a data group name is sufficient. However, if a second data group named INVENTORY NEWYORK LONDON is created, the name INVENTORY by itself no longer describes a unique data group. INVENTORY CHICAGO would be the minimum parts of the name of the first data group definition necessary to determine its uniqueness. If a third data group named INVENTORY CHICAGO LONDON was added, then the fully qualified name would be required to uniquely identify the data group. The order in which the systems are identified is also important. The system HONGKONG appears in only one of the data group definitions. However, specifying INVENTORY HONGKONG will generate a not found error because HONGKONG is not the first system in any of the data group definitions. This applies to all external interfaces that reference multi-part definition names.

MIMIX can also derive a fully qualified name for a transfer definition. Data group definitions and system definitions include parameters that identify associated transfer definitions. When a subsequent operation requires the transfer definition, MIMIX uses the context of the operation to determine the fully qualified name. For example, when starting a data group, MIMIX uses information in the data group definition, the systems specified in the data group name, and the specified transfer definition name to derive the fully qualified transfer definition name. If MIMIX cannot find the transfer definition, it reverses the order of the system names and checks again, avoiding the need for redundant transfer definitions.

You can also use contextual system support (*ANY) to configure transfer definitions. When you specify *ANY in a transfer definition, MIMIX uses information from the context in which the transfer definition is called to resolve to the correct system. Unlike the conventional configuration case, a specific search order is used if MIMIX is still unable to find an appropriate transfer definition. For more information, see Using contextual (*ANY) transfer definitions on page 181.
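To make the name resolution rules concrete, here is a sketch using the STRDG command and the hypothetical data groups described above. Whether a partial name is accepted depends on which data groups actually exist in your installation:

    STRDG DGDFN(INVENTORY)                   /* Sufficient only while INVENTORY is unique      */
    STRDG DGDFN(INVENTORY CHICAGO)           /* Minimum once INVENTORY NEWYORK LONDON exists   */
    STRDG DGDFN(INVENTORY CHICAGO HONGKONG)  /* A fully qualified name is always unambiguous   */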


The MIMIX environment


A variety of product-defined operating elements and user-defined configuration elements collectively form an operational environment on each system. A MIMIX environment can be comprised of one or more MIMIX installations. Each system that participates in the same MIMIX environment must have the same operational environment. This topic describes each of the components of the MIMIX operating environment.

The product library


The name of the product library into which MIMIX is installed defines the connection among systems in the same MIMIX installation. The default name of the product installation library is MIMIX. Several items are shipped as part of the product library. The IFS directory structure is associated with the product library for the MIMIX installation and is created during the installation process for License Manager and MIMIX. Each MIMIX installation also contains several default job descriptions and job classes within its library.

IFS directories
A default IFS directory structure is used in conjunction with the library-based objects of the MIMIX family of products. The IFS directory structure is associated with the product library for the MIMIX installation and is created during the installation process for License Manager and MIMIX. Over time, the installation processes for products and fixes will restore objects to the IFS directory structure as well as to the QSYS library.

The directories created when License Manager is installed or upgraded follow these guidelines:
- /LakeviewTech is the root directory for all IFS-based objects.
- /LakeviewTech/system-based-area contains system-based objects that need to exist only once on a system. The system-based-area represents a unique directory for each set of objects. Two structures that you should be aware of are:
  - /LakeviewTech/Service/MIMIX/VvRrMm/ is the recommended location for users to place fixes downloaded from the Lakeview website. The VvRrMm value is the same as the release of License Manager on the system. Multiple VvRrMm directories will exist as the release of License Manager changes.
  - /LakeviewTech/Upgrades/ is where the MIMIX Installation Wizard places software packages that it uploads to the System i5.
- /LakeviewTech/UserData/ is available to users to store product-related data.

The directories created when MIMIX is installed or upgraded follow these guidelines. The requirements of your MIMIX environment determine the structure of these directories:
- /LakeviewTech/MIMIX/product-installation-library is a unique directory structure for each installation of MIMIX.
- /LakeviewTech/MIMIX/product-installation-library/product-area is a unique directory structure for each installation of MIMIX. The structure is determined by the set of objects needed by an area of the product and the product installation library.

Job descriptions and job classes


MIMIX uses a customized set of job descriptions and job classes. Customized job descriptions optimize characteristics for a category of jobs, including the user profile, job queue, message logging level, and routing data for the job. Customized job classes optimize runtime characteristics such as the job priority and CPU time slice for a category of jobs. All of the shipped job descriptions and job classes are configured with recommended default values.

Job descriptions control batch processing. MIMIX features use a set of default job descriptions, MXAUDIT, MXSYNC, and MXDFT. When MIMIX is installed, these job descriptions are automatically restored in the product library. These job descriptions exist in the product library of each MIMIX installation. Jobs and related output are associated with the user profile submitting the request. Commands such as Compare File Attributes (CMPFILA), Compare File Data (CMPFILDTA), and Synchronize Object (SYNCOBJ), as well as numerous others, support this standard.

Older commands that provide job description support for batch processing use different job descriptions that are located in the MIMIXQGPL library. The MIMIXQGPL library, along with these job descriptions, is automatically restored on the system when a MIMIX product is installed. Installing additional MIMIX installations on the same system does not create additional copies of these job descriptions. Table 2 shows a combined list of MIMIX job descriptions.
Table 2. Job descriptions used by MIMIX

Shipped in the installation library:
- MXAUDIT: MIMIX Auditing. Used for MIMIX compare commands, such as those called by MIMIX audits, as the default value on the Job description (JOBD) parameter.
- MXDFT: MIMIX Default. Used for MIMIX load commands and by other commands that do not have a specific job description as the default value on the JOBD parameter.
- MXSYNC: MIMIX Synchronization. Used for MIMIX synchronization commands, such as those called by MIMIX audits, as the default value on the JOBD parameter.

Shipped in the MIMIXQGPL library:
- MIMIXAPY: MIMIX Apply. Used for MIMIX apply process jobs.
- MIMIXCMN: MIMIX Communications. Used for all target communication jobs.
- MIMIXDFT: MIMIX Default. Used for all MIMIX jobs that do not have a specific job description.
- MIMIXMGR: MIMIX Manager. Used for MIMIX system manager and journal manager jobs.
- MIMIXMON: MIMIX Monitor. Used for most jobs submitted by the MIMIX Monitor product.
- MIMIXPRM: MIMIX Promoter. Used for jobs submitted by the MIMIX Promoter product.
- MIMIXRGZ: MIMIX Reorganize File. Used for file reorganization jobs submitted by the database apply job.
- MIMIXSND: MIMIX Send. Used for database send, object send, object retrieve, container send, and status send jobs in MIMIX.
- MIMIXSYNC: MIMIX Synchronization. Used for MIMIX file synchronization. This is valid for synchronize commands that do not have a JOBD parameter on the display.
- MIMIXUPS: MIMIX UPS Monitor. Used for the uninterruptible power source (UPS) monitor managed by the MIMIX Monitor product.
- MIMIXVFY: MIMIX Verify. Used for MIMIX verify and compare command processes. This is valid for verify and compare commands that do not have a JOBD parameter on the display.

User profiles
All of the MIMIX job descriptions are configured to run jobs using the MIMIXOWN user profile. This profile owns all MIMIX objects, including the objects in the MIMIX product libraries and in the MIMIXQGPL library. The profile is created with sufficient authority to run all MIMIX products and perform all the functions provided by the MIMIX products. The authority of this user profile can be reduced, if business practices require, but this is not recommended. Reducing the authority of the MIMIXOWN profile requires significant effort by the user to ensure that the products continue to function properly and to avoid adversely affecting the performance of MIMIX products. See the License and Availability Manager book for additional security information for the MIMIXOWN user profile.

The system manager


The system manager consists of a pair of system management communication jobs between a management system and a network system. Each pair has a send side system manager job and a receiver side system manager job. These jobs must be active to enable replication.

Once started, the system manager monitors for configuration changes and automatically moves any configuration changes to the network system. Dynamic status changes are also collected and returned to the management system. The system manager also gathers messages and timestamp information from the network system and places them in a message log and timestamp file on the management system. In addition, the system manager performs periodic maintenance tasks, including cleanup of the system and data group history files.

Figure 1 shows a MIMIX installation with a management system and two network systems. In this installation, there are four pairs of system manager jobs; two between the first network system and the management system and two between the second network system and the management system. Each arrow represents a pair of system manager jobs. Since each pair has a send side system manager job and a receiver side system manager job, there are eight total system manager jobs in this installation.
Figure 1. System manager jobs in a MIMIX installation with one management system and two network systems.

The System manager delay parameter in the system definition determines how frequently the system manager looks for work. Other parameters in the system definition control other aspects of system manager operation. System manager jobs are included in a group of jobs that MIMIX automatically restarts daily to maintain the MIMIX environment. The default operation of MIMIX is to restart these MIMIX jobs at midnight (12:00 a.m.). MIMIX determines when to restart the system managers based on the value of the Job restart time parameter in the system definitions for the network and management systems. For more information, see the section Configuring restart times for MIMIX jobs on page 313.

The journal manager


The journal manager is the process by which MIMIX maintains journal receivers on a system. A journal manager job runs on each system in a MIMIX installation. If you have a MIMIX installation with a management system and two network systems, you have three journal manager jobs, one on each system. For more information, see Journal definition considerations on page 205.

By default, MIMIX performs both change management and delete management for journal receivers used by the replication process. Parameters in a journal definition allow you to customize details of how the change and delete operations are performed. The Journal manager delay parameter in the system definition determines how frequently the journal manager looks for work.

Journal manager jobs are included in a group of jobs that MIMIX automatically restarts daily to maintain the MIMIX environment. The default operation of MIMIX is to restart these MIMIX jobs at midnight (12:00 a.m.). The Job restart time parameter in the system definition determines when the journal manager for that system restarts. For more information, see the section Configuring restart times for MIMIX jobs on page 313.

The MIMIXQGPL library


When a MIMIX product is installed, a library named MIMIXQGPL is restored on the system. The MIMIXQGPL library includes work management objects used by all MIMIX products. Many of these objects are customized and shipped with default settings designed to streamline operations for the products which use them. These objects include the MIMIXSBS subsystem and a variety of job descriptions and job classes. Note: If you have previous releases of MIMIX products on a system, you may find additional objects in the MIMIXQGPL library.

MIMIXSBS subsystem
The MIMIXSBS subsystem is the default subsystem used by nearly all MIMIX-related processing. This subsystem is shipped with the proper job queue entries and routing entries for correct operation of the MIMIX jobs.

Data libraries
MIMIX uses the concept of data libraries. Currently there are two series of data libraries:
- MIMIX uses data libraries for storing the contents of the object cache. MIMIX creates the first data library when needed and may create additional data libraries. The names of these data libraries are of the form product-library_n (where n is a number starting at 1).
- For system journal replication, MIMIX creates libraries named product-library_x, where x is derived from the ASP; for example, A for ASP 1, B for ASP 2. These ASP-specific data libraries are created when needed and are not deleted until the product is uninstalled.

Named definitions
MIMIX uses named definitions to identify related user-defined configuration information. You can create named definitions for system information, communication (transfer) information, journal information, and replication (data group) information. Any definitions you create can be used by both user journal and system journal replication processes. One or more of each of the following definitions is required to perform replication:
- A system definition identifies to MIMIX the characteristics of a system that participates in a MIMIX installation.
- A transfer definition identifies to MIMIX the communications path and protocol to be used between two systems. MIMIX supports Systems Network Architecture (SNA), OptiConnect, and Transmission Control Protocol/Internet Protocol (TCP/IP) protocols.
- A journal definition identifies to MIMIX a journal environment on a particular system. MIMIX uses the journal definition to manage the journal receiver environment used by the replication process.
- A data group definition identifies to MIMIX the characteristics of how replication occurs between two systems. A data group definition determines the direction in which replication occurs between the systems, whether that direction can be switched, and the default processing characteristics to use when processing the database and object information associated with the data group.
- A remote journal link (RJ link) is a MIMIX configuration element that identifies an i5/OS remote journaling environment. Newly created data groups use remote journaling as the default configuration. An RJ link identifies journal definitions that define the source and target journals, primary and secondary transfer definitions for the communications path used by MIMIX, and whether the i5/OS remote journal function sends journal entries asynchronously or synchronously. When a data group is added, the ADDRJLNK command is run automatically, using the transfer definition defined in the data group.

The naming conventions used within definitions are described in Multi-part naming convention on page 27.

Data group entries


Data group entries are part of the MIMIX environment that must exist on each system in a MIMIX installation. MIMIX uses the data group entries that you create during configuration to determine whether or not a journal entry should be replicated.

- Data group file entry: This type of data group entry identifies the location of a database file to be replicated and what its name and location will be on the target system. Within a file entry, you can override the default file entry options defined for the data group. MIMIX only replicates transactions for physical files because a physical file contains the actual data stored in members. MIMIX supports both positional and keyed access paths for accessing records stored in a physical file.
- Data group object entries: This type of entry allows you to identify library-based objects for replication. Examples of library-based objects include programs, user profiles, message queues, and non-journaled database files. To select these types of objects for replication, you select individual objects or groups of objects by generic or specific object and library name, and object type. Optionally, for files, you can specify an extended object attribute such as PF-DTA or DSPF.
- Data group IFS entries: This type of entry allows you to identify integrated file system (IFS) objects for replication. IFS objects include directories and stream files. They reside in directories, similar to DOS or Unix files. You can select IFS objects for replication by specific or generic path name.
- Data group DLO entries: This type of entry allows you to identify document library objects (DLOs) for replication. DLOs are documents and folders. They are contained in folders (except for first-level folders). To select DLOs for replication you select individual DLOs by specific or generic folder and DLO name, and owner.
- Data group data area entries: This type of entry allows you to define a data area for replication by the data area polling process. However, the preferred way to replicate data areas is to use advanced journaling.

A single data group can contain any combination of these types of data group entries. If your license is for only one of the MIMIX products rather than for MIMIX ha1 or MIMIX ha Lite, only the entries associated with the product to which you are licensed will be processed for replication.
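As a rough sketch of what configuring an entry looks like, MIMIX provides Add commands for each entry type. The following assumes the Add Data Group Object Entry (ADDDGOBJE) command and the parameter names shown, so verify them against your installed release; the data group and library names are hypothetical:

    ADDDGOBJE DGDFN(INVENTORY CHICAGO HONGKONG) LIB1(APPLIB) OBJ1(*ALL) OBJTYPE(*ALL)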


Journal receiver management


Parameters in journal definition commands determine how change management and delete management are performed on the journal receivers used by the replication process. Shipped default values result in the recommended behavior of allowing MIMIX to perform change management and delete management.

Change management - The Receiver change management (CHGMGT) parameter controls how the journal receivers are changed. The recommended value *TIMESIZE results in MIMIX changing the journal receiver by both threshold size and time of day. Additional parameters in the journal definition control the size at which to change (THRESHOLD), the time of day to change (TIME), and when to reset the receiver sequence number (RESETTHLD). The conditions specified in these parameters must be met before change management can occur. For additional information, see Tips for journal definition parameters on page 201.

If you do not use the recommended value *TIMESIZE for CHGMGT, consider the following:
- When you specify *TIMESYS, the system manages the receiver by size and during IPLs, and MIMIX manages changing the receiver at a specified time. Note: The value *TIME can be specified with *SIZE or *SYSTEM to achieve the same results as *TIMESIZE or *TIMESYS, respectively.
- When you specify *NONE, MIMIX does not handle changing the journal receivers. You must ensure that the system or another application performs change management to prevent the journal receivers from overflowing.
- When you allow the system to perform change management (*SYSTEM) and the attached journal receiver reaches its threshold, the system detaches the journal receiver and creates and attaches a new journal receiver. During an initial program load (IPL), the system creates and attaches a new journal receiver. During normal IPLs and most abnormal IPLs, the journal sequence number may be reset.

In a remote journaling configuration, MIMIX recognizes remote journals and ignores change management for the remote journals. The remote journal receiver is changed automatically by the i5/OS remote journal function when the receiver on the source system is changed. You can specify in the source journal definition whether to have receiver change management performed by the system or by MIMIX. Any change management values you specify for the target journal definition are ignored. You can also customize how MIMIX performs journal receiver change management through the use of exit programs. For more information, see Working with journal receiver management user exit points on page 541. Delete management - The Receiver delete management (DLTMGT) parameter controls how the journal receivers used for replication are deleted. It is strongly recommended that you use the value *YES to allow MIMIX to perform delete management. When MIMIX performs delete management, the journal receivers are only deleted after MIMIX is finished with them and all other criteria specified on the journal

definition are met. The criteria include how long to retain unsaved journal receivers (KEEPUNSAV), how many detached journal receivers to keep (KEEPRCVCNT), and how long to keep detached journal receivers (KEEPJRNRCV).

Note: If more than one MIMIX installation uses the same journal, the journal manager for each installation can delete the journal receivers regardless of whether the other installations are finished with them. If you have this scenario, you need to use the journal receiver delete management exit points to control deleting the journal receiver. For more information, see Working with journal receiver management user exit points on page 541.

Delete management of the source and target receivers occurs independently on each system. It is highly recommended that you configure the journal definitions to have MIMIX perform journal delete management. The i5/OS remote journal function does not allow a receiver to be deleted until it is replicated from the local journal (source) to the remote journal (target). When MIMIX manages deletion, a target journal receiver cannot be deleted until it is processed by the database reader (DBRDR) process and it meets the other criteria defined in the journal definition.

If you choose to manage journal receivers yourself, you need to ensure that journal receivers are not removed before MIMIX has finished processing them. MIMIX operations can be affected if you allow the system to handle delete management. For example, the system may delete a journal receiver before MIMIX has completed its use.

Interaction with other products that manage receivers


If you run MIMIX replicate1 on the same System i5 as MIMIX ha1 (or MIMIX ha Lite), there may be considerations for journal receiver management. Although both MIMIX replicate1 and MIMIX ha1 support receiver change management, you need to choose only one product to perform change management activities for a specific journal. If you choose MIMIX replicate1, your MIMIX ha1 journal definition should specify CHGMGT(*NONE). If you choose MIMIX ha1, see the change management discussion above for available options that can be specified in the journal definition, including system-managed receivers.

If both products scrape from the same journal, perform delete management only from MIMIX replicate1. This will prevent MIMIX ha1 from deleting receivers before MIMIX replicate1 is finished with them. The journal definition within MIMIX ha1 should specify DLTMGT(*NO).
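For example, the journal definition settings described above might be applied as follows. This sketch assumes the Change Journal Definition (CHGJRNDFN) command and a hypothetical journal definition named APPJRN on system CHICAGO; check the command and its parameters against your installed release:

    CHGJRNDFN JRNDFN(APPJRN CHICAGO) CHGMGT(*NONE) DLTMGT(*NO)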

Processing from an earlier journal receiver


It is possible to have a situation where the operating system attempts to retransmit journal receivers that already exist on the target system. When this situation occurs, the remote journal function ends with an error and transmission of entries to the target system stops. This can occur in the following scenarios:
- When performing a clear pending start of the data group while also specifying a sequence number that is earlier in the journal stream than the last processed sequence number
- When starting a data group while specifying a database journal receiver that is earlier in the receiver chain than the last processed receiver


For example, refer to Figure 2. Replication ended while processing journal entries in target receiver 2. Target journal receiver 1 is deleted through the configured delete management options. If the data group is started (STRDG) with a starting journal sequence number for an entry that is in journal receiver 1, the remote journal function attempts to retransmit source journal receivers 1 through 4, beginning with receiver 1. However, receiver 2 already exists on the target system. When the operating system encounters receiver 2, an error occurs and the transmission to the target system ends. You can prevent this situation before starting that data group if you delete any target journal receivers following the receiver that will be used as the starting point. If you encounter the problem, recovery is simply to remove the target journal receivers and let remote journaling resend them. In this example, deleting target receiver 2 would prevent or resolve the problem.
Figure 2. Example of processing from an earlier journal receiver. (The figure shows the chain of source journal receivers 1 through 4 and the corresponding target journal receivers.)

Considerations when journaling on target


The default behavior for MIMIX is to have journaling enabled on the target systems for the target files. After a transaction is applied to the target system, MIMIX writes the journal entry to a separate journal on the target system. This journaling on the target system makes it easier and faster to start replication from the backup system following a switch. As part of the switch processing, the journal receiver is changed before the data group is started.

In a remote journaling environment, these additional journal receivers can become stranded on the backup system following a switch. When starting a data group after a switch, the i5/OS remote journal function begins transmitting journal entries from the just-changed journal receiver. Because the backup system is now temporarily acting as the source system, the remote journal function interprets any earlier receivers as unprocessed source journal receivers and prevents them from being deleted. To remove these stranded journal receivers, you need to use the IBM command DLTJRNRCV with *IGNTGTRCV specified as the value of the DLTOPT parameter.
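For instance, a stranded receiver might be removed as follows. The library and receiver names are placeholders for the stranded receiver identified on your backup system:

    DLTJRNRCV JRNRCV(APPLIB/RCV0001) DLTOPT(*IGNTGTRCV)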


Operational overview
Before replication can begin, the following requirements must be met through the installation and configuration processes:
- MIMIX software must be installed on each system in the MIMIX installation.
- At least one communication link must be in place for each pair of systems between which replication will occur.
- The MIMIX operating environment must be configured and be available on each system.
- Journaling must be active for the database files and objects configured for user journal replication. For objects to be replicated from the system journal, the object auditing environment must be set up.
- The files and objects must be initially synchronized between the systems participating in replication.

Once MIMIX is configured and files and objects are synchronized, day-to-day operations for MIMIX can be performed from either the web-based MIMIX Availability Manager or from a 5250 emulator for a System i5. MIMIX Availability Manager is easy to use and preferable for daily operations. Newer MIMIX functions may only be available through this user interface. Through preferences, individuals have the ability to customize what systems, installations, and data groups to monitor.

Support for starting and ending replication


MIMIX Availability Manager and the 5250 emulator can be used to start and end replication. In the following paragraphs, only 5250 command names are used for simplicity. The corresponding windows have the same names as the commands to which they pass information. The Start MIMIX (STRMMX) and End MIMIX (ENDMMX) commands provide the ability to start and end all elements of a MIMIX environment. These commands include MIMIX services and manager jobs, all replication jobs for all data groups, as well as the master monitor and jobs that are associated with it. While other commands are available to perform these functions individually, the STRMMX and ENDMMX commands are preferred because they ensure that processes are started or ended in the appropriate order. The Start Data Group (STRDG) and End Data Group (ENDDG) commands operate at the data group level to control replication processes. These commands provide the flexibility to start or end selected processes and apply sessions associated with a data group, which can be helpful for balancing workload or resolving problems. For more information about both sets of commands, see the Using MIMIX book.
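As an illustration, a typical sequence from a 5250 command line might look like the following. The data group name is hypothetical, and all four commands accept additional parameters not shown here:

    STRMMX                                   /* Start all elements of the MIMIX environment */
    STRDG DGDFN(INVENTORY CHICAGO HONGKONG)  /* Or start replication for one data group     */
    ENDDG DGDFN(INVENTORY CHICAGO HONGKONG)  /* End replication for one data group          */
    ENDMMX                                   /* End all elements of the MIMIX environment   */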


Support for checking installation status


Only MIMIX Availability Manager provides the ability to monitor multiple installations of MIMIX at once from a single interface. Status from each installation bubbles up to the Enterprise View, where you can quickly see whether a problem exists on the systems you are monitoring. Status icons and flyover text start the problem resolution process by guiding you to the appropriate action for the most severe problem present. In the 5250 emulator, the MIMIX Availability Status display reports the prioritized status of a single installation. Status from the installation is reported in three areas: Replication, Audits and Notification, and Services. Color and informational messages identify the most severe problem present in an area and identify the action to take to start problem isolation.

Support for automatically detecting and resolving problems


The functions provided by MIMIX AutoGuard are fully integrated into MIMIX user interfaces.

Audits: MIMIX ships with a set of audits and associated audit monitors that are automatically scheduled to run daily. These audits check for common problems and automatically correct any detected problems within a data group. Audits can also be invoked manually, and automatic recovery can be optionally disabled. The Work with Audits display (WRKAUD) provides a summary view for audit status and a compliance view for adherence to auditing best practices. Similar windows exist in MIMIX Availability Manager.

Error recovery during replication: MIMIX AutoGuard also provides the ability to have MIMIX check for and correct common problems during user journal and system journal replication that would otherwise cause a replication error. Automatic recovery can be optionally disabled. Problems that cannot be resolved are reported like any other replication error.

For detailed information about MIMIX AutoGuard, refer to the Using MIMIX book.

Support for working with data groups


Data groups are central to performing day-to-day operations. The Data Group Status window in MIMIX Availability Manager and the Work with Data Groups (WRKDG) display provide status of replication jobs and indication of any replication errors for the data groups within an installation. Status icons or highlighted text indicates whether problems exist. Many options are available for taking action at the data group level and for drilling into detailed status information.

Detailed status: When checking detailed status for a data group, MIMIX Availability Manager provides significant benefits over 5250 emulator commands. From a 5250 emulator, the command DSPDGSTS (option 8 from the Work with Data Groups display) accesses the Data Group Status display. The initial view summarizes replication errors and the status of user journal (database) and system journal (object) processes for both source and target systems. By using function keys, you can display additional detailed views of only database or only object status.


When you choose to display detailed status for a data group from MIMIX Availability Manager, the highest priority problem that exists for the data group determines which of several possible views of the Data Group Details window will be displayed. You can often take action to resolve problems directly from these detailed status windows.

- Data Group Details - Status: This window identifies all of the replication jobs and services jobs needed by the data group and provides their status. Similar information is available from the merged view of the Data Group Status display.
- Data Group Details - User Journal: This window represents replication performed by user journal replication processes, including journaled files, IFS objects, data areas, and data queues. It includes information about the replication of user journal transactions, including journal progress, performance, and recent activity. Similar information is available from database views of the Data Group Status display.
- Data Group Details - System Journal: This window represents replication performed by system journal replication processes, including journal progress, performance, and recent activity. Similar information is available from object views of the Data Group Status display.
- Data Group Details - Activity: This window summarizes activity for the selected data group that is experiencing replication problems. Problems are grouped by type of activity: File, Object, IFS Tracking, or Object Tracking. This window displays only one type of problem at a time, based on the activity type selected from the navigation bar. Similar information is available in the 5250 emulator when you use the following options from the Work with Data Groups display: 12=Files not active, 13=Objects in error, 51=IFS trk entries not active, and 53=Obj trk entries not active.

Support for resolving problems


MIMIX includes functions that can assist you in resolving a variety of problems. Depending on the type of problem, some problem resolution tasks may need to be performed from the system where the problem occurs, such as the source system where the journal resides, or the target system if the problem is related to the apply process. MIMIX directs you to the correct system when this is required. MIMIX Availability Manager provides superior assistance for problem resolution: action lists include only the choices appropriate for the problem and only those available from the system you are viewing.

Object activity: The Work with Data Group Activity (WRKDGACT) command allows you to track system journal replication activity associated with a data group. You can see the object, DLO, IFS, and spooled file activity, which can help you determine the cause of an error. You can also see an error view that identifies the reason why the object is in error. Options on the Work with Data Group Activity display allow you to see messages associated with an entry, synchronize the entry between systems, and remove a failed entry with or without related entries. MIMIX Availability Manager provides similar capabilities to those of WRKDGACT from the following windows: Data Group Details - System Journal, Data Group Details - Activity, and Object Activity Details. Default filtering options in MIMIX Availability Manager display only problems with replicating objects from the system journal.

Failed requests: During normal processing, system journal replication processes may encounter object requests that cannot be processed due to an error. Often the error is due to a transient condition, such as when an object is in use by another process at the time the object retrieve process attempts to gather the object data. Although MIMIX will attempt some automatic retries, requests may still result in a Failed status. In many cases, failed entries can be resubmitted and they will succeed. Some errors may require user intervention, such as a never-ending process that holds a lock on the object. MIMIX is shipped with the MIMIX Retry Monitor (#RTYDGACTE), which runs periodically and automatically resubmits all failed activity entries for all data groups. To use this monitor, it must be manually enabled and then started, using options on the Work with Monitors (WRKMON) display. If your environment results in numerous transient failed entries, it is recommended that you use the #RTYDGACTE monitor. You can manually request that MIMIX retry processing for a data group activity entry that has a status of *FAILED. These entries can be viewed using the Work with Data Group Activity (WRKDGACT) command. From the Work with Data Group Activity or Work with Data Group Activity Entries displays, you can use the retry option to resubmit individual failed entries or all of the entries for an object. This option calls the Retry Data Group Activity Entries (RTYDGACTE) command; see the sketch at the end of this topic. From the Work with Data Group Activity display, you can also specify a time at which to start the request, thereby delaying the retry attempt until a time when it is more likely to succeed. MIMIX Availability Manager supports manually retrying activities from appropriate windows by providing Retry as an available action in the Action List.

Files on hold: When the database apply process detects a data synchronization problem, it places the file (individual member) on error hold and logs an error. File entries are in held status when an error is preventing them from being applied to the target system. You need to analyze the cause of the problem in order to determine how to correct and release the file and ensure that the problem does not occur again. An option on the Work with Data Groups display provides quick access to the subset of file entries that are in error for a data group. From the Work with DG File Entries display, you can see the status of an entry and use a number of options to assist in resolving the error. An alternative view shows the database error code and journal code. Available options include access to the Work with DG Files on Hold (WRKDGFEHLD) command. The WRKDGFEHLD command allows you to work with file entries that are in a held status. You can view and work with the entry for which the error was detected, and work with all other entries following the entry in error. MIMIX Availability Manager provides similar capabilities to those of WRKDGFEHLD from the following windows: Data Group Details - User Journal, Data Group Details - Activity, and File Activity Details. Default filtering options in MIMIX Availability Manager display only problems with replicating objects from the user journal.
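As a sketch of the retry flow described under Failed requests above: the command names WRKDGACT and RTYDGACTE appear in this topic, but the DGDFN parameter name and the data group name MYDGDFN are assumptions for illustration.

    /* Review activity and failed entries for a data group */
    WRKDGACT DGDFN(MYDGDFN)
    /* Resubmit failed activity entries; normally invoked through */
    /* the retry option on the display or by the #RTYDGACTE monitor */
    RTYDGACTE DGDFN(MYDGDFN)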
Journal analysis: With user journal replication, when the system that is the source of replicated data fails, it is possible that some of the generated journal entries were not transmitted to or received by the target system. However, it is not always possible to determine this until the failed system has been recovered. Even if the failed system is recovered, damage to a disk unit or to the journal itself may prevent an accurate analysis of any missed data. Once the source system is available again, if there is no damage to the disk unit or to the journal and its associated journal receivers, you can use the journal analysis function to help determine which journal entries may have been missed and to which files the data belongs. You can only perform journal analysis on the system where a journal resides.

Support for switching a data group


Typically, you perform a switch using the MIMIX Switch Assistant or by using commands to call a customized implementation of MIMIX Model Switch Framework. In either case, the Switch Data Group (SWTDG) command is called programmatically to change the direction in which replication occurs between systems defined to a data group. The SWTDG command supports both planned and unplanned switches; a sketch follows this topic.

In a planned switch, you are purposely changing the direction of replication for any of a variety of reasons. You may need to take the system offline to perform maintenance on its hardware or software, or you may be testing your disaster recovery plan. In a planned switch, the production system (the source of replication) is available. When you perform a planned switch, data group processing is ended on both the source and target systems. The next time you start the data group, it will be set to replicate in the opposite direction.

In an unplanned switch, you are changing the direction of replication as a response to a problem. Most likely the production system is no longer available. When you perform an unplanned switch, you must run the SWTDG command from the target system. Data group processing is ended on the target system. The next time you start the data group, it will be set to replicate in the opposite direction.

To enable a switchable data group to function properly for default user journal replication processes, four journal definitions (two RJ links) are required. Journal definition considerations on page 205 contains examples of how to set up these journal definitions. You can specify whether to end the RJ link during a switch. Default behavior for a planned switch is to leave the RJ link running; default behavior during an unplanned switch is to end the RJ link.

Once you have a properly configured data group that supports switching, you should be aware of how MIMIX supports unconfirmed entries and the state of the RJ link following a switch. For more information, see Support for unconfirmed entries during a switch on page 70 and RJ link considerations when switching on page 70. For additional information about switching, see the Using MIMIX book. For additional information about MIMIX Model Switch Framework, see the Using MIMIX Monitor book.
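The following sketch illustrates how the SWTDG command might be called programmatically. The DGDFN and TYPE parameter names and their values are assumptions for illustration only; they are not documented in this overview, and a real switch is normally driven through MIMIX Switch Assistant or a MIMIX Model Switch Framework implementation.

    /* Planned switch: run while both systems are available */
    SWTDG DGDFN(MYDGDFN) TYPE(*PLANNED)
    /* Unplanned switch: must be run from the target system */
    SWTDG DGDFN(MYDGDFN) TYPE(*UNPLANNED)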

Support for working with messages


MIMIX sends a variety of system messages based on the status of MIMIX jobs and processes. You can view messages generated by MIMIX from either the Message Log window or the Work with Message Log (WRKMSGLOG) display.


These messages are sent to both the primary and secondary message queues that are specified for the system definition. In addition to these message queues, message entries are recorded in a MIMIX message log file.

The MIMIX message log provides a powerful tool for problem determination. Maintaining a message log file allows you to keep a record of messages issued by MIMIX as an audit trail. In addition, the message log provides robust subset and filter capabilities, the ability to locate and display related job logs, and a powerful debug tool. When messages are issued, they are initially sent to the specified primary and secondary message queues. If these message queues are cleared, the message log file preserves a second level of information about MIMIX operations.

The message log on the management system contains messages from the management system and each network system defined within the installation. The system manager is responsible for collecting messages from all network systems. On a network system, the message log contains only those messages generated by MIMIX activity on that system.

MIMIX automatically performs cleanup of the message log on a regular basis. The system manager deletes entries from the message log file based on the value of the Keep system history parameter in the system definition. However, if you process an unusually high volume of replicated data, you may also want to periodically delete unnecessary message log entries, since the file grows depending on the number of messages issued in a day.
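For example, the message log can be browsed from a 5250 emulator with the command shown below; subsetting and filtering are then done from the display itself.

    /* Browse the MIMIX message log */
    WRKMSGLOG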


Chapter 2

Replication process overview

In general terms, a replication path is a series of processes that, together, represent the critical path on which data to be replicated moves from its origin to its destination. MIMIX uses two replication paths to accommodate differences in how replication occurs for databases and objects. These paths operate with configurable levels of cooperation or can operate independently.

The user journal replication path captures changes to critical files and objects configured for replication through the user journal, using the i5/OS remote journaling function. In previous versions, MIMIX DB2 Replicator provided this function.

The system journal replication path handles replication of critical system objects (such as user profiles or spooled files), integrated file system (IFS) objects, and document library objects (DLOs) using the i5/OS system journal. In previous versions, MIMIX Object Replicator provided this function.

Configuration choices determine the degree of cooperative processing used between the system journal and user journal replication paths when replicating files, IFS objects, data areas, and data queues. Within each replication path, MIMIX uses a series of processes. This chapter describes the replication paths and the processes used in each. The topics in this chapter include:

- Replication job and supporting job names on page 47 describes the replication paths for database and object information. Included is a table which identifies the replication job names for each of the processes that make up the replication path.
- Cooperative processing introduction on page 50 describes three variations available for performing replication activities using a coordinated effort between user journal processing and system journal processing.
- System journal replication on page 53 describes the system journal replication path, which is designed to handle the object-related availability needs of your system through system journal processing.
- User journal replication on page 61 describes remote journaling and the benefits of using remote journaling with MIMIX.
- User journal replication of IFS objects, data areas, data queues on page 72 describes a technique which allows replication of changed data for certain object types through the user journal.
- Lesser-used processes for user journal replication on page 76 describes two lesser-used replication processes: MIMIX source-send processing for database replication and the data area poller process.


Replication job and supporting job names


The replication path for database information includes the i5/OS remote journal function, the MIMIX database reader process, and one or more database apply processes. If MIMIX source-send processes are used instead of remote journaling, the processes include the database send process, the database receive process, and one or more database apply processes.

The replication path for object information includes the object send process, the object receive process, and the object apply process. When a data retrieval request is replicated, the replication path also includes the object retrieve, container send, and container receive processes. A data retrieval request is an operation that creates or changes the content of an object. A self-contained request is an operation that deletes, moves, or renames an object, or that changes the authority or ownership of an object.

Table 3 identifies the job names for each of the processes that make up the replication path. Except as noted, MIMIX automatically restarts the jobs in Table 3 to maintain the MIMIX environment. The default is to restart these MIMIX jobs daily at midnight (12:00 a.m.). If this time conflicts with scheduled workloads, you can configure a different time to restart the jobs. For more information, see Configuring restart times for MIMIX jobs on page 313.
Table 3. MIMIX processes and their corresponding job names

Abbreviation   Description                      Runs on           Job name      Notes
CNRRCV         Container receive process        Target            sdn_CNRRCV    1, 3
CNRSND         Container send process           Source            sdn_CNRSND    1, 3
DAPOLL         Data area polling                Source            sdn_DAPOLL    3
DBAPY          Database apply process           Target            sdn_DBAPYs    3, 4
DBRCV          Database receive process         Target            sdn_DBRCV     1, 3
DBRDR          Database reader                  Target            sdn_DBRDR     3
DBSND          Database send process            Source            sdn_DBSND     1, 3
JRNMGR         Journal manager                  System            JRNMGR
MXCOMMD        MIMIX Communications Daemon      System            MXCOMMD
MXOBJSELPR     Object selection process         System            MXOBJSELPR
OBJAPY         Object apply process             Target            sdn_OBJAPY    3
OBJRTV         Object retrieve process          Source            sdn_OBJRTV    1, 3
OBJSND         Object send process              Source            sdn_OBJSND    1, 3
OBJRCV         Object receive process           Target            sdn_OBJRCV    1, 3
STSSND         Status send                      Target            sdn_STSSND    1, 3
SYSMGR         System manager                   System            SM********    1, 2
SYSMGRRCV      System manager receive process   Network           SR********    1, 2
STSRCV         Status receive                   Source            sdn_STSRCV    1, 3
TEUPD          Tracking entry update process    Source or Target  sdn_TEUPD     3, 5


Notes:

1. Send and receive processes depend on communications. The job name varies, depending on the transfer protocol. OptiConnect job names start with APIA* in the QSOC subsystem. The SNA job name is derived from the remote location name. TCP/IP uses the port number or alias as the job name; the alias is defined on the service table entry.
2. The system manager runs on both source and target systems. The ******** in the job name format indicates the name of the system definition.
3. The characters sdn in a job name indicate the short data group name.
4. The character s is the apply session letter.
5. The job is used only for replication with advanced journaling and is started only when needed.
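For example, applying notes 3 and 4 to Table 3: a data group whose short name is ABC runs its database reader as job ABC_DBRDR, and database apply session A for that data group runs as job ABC_DBAPYA.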

Updated for 5.0.10.00, 5.0.11.00, and 5.0.12.00.


Cooperative processing introduction


In cooperative processing, the MIMIX user journal processes and system journal processes work in a coordinated effort to perform replication activities for certain object types. When configured, cooperative processing enables MIMIX to perform replication in the most efficient way by evaluating the object type and the MIMIX configuration to determine whether to use the system journal replication processes, the user journal replication processes, or a combination of both. Cooperative processing also provides a greater level of data protection, data management efficiency, and high availability by ensuring the complete replication of newly created or redefined files and objects.

Object types that can be journaled to a user journal are eligible to be processed cooperatively when properly configured to MIMIX. MIMIX supports the following variations of cooperative processing for these object types:

- MIMIX Dynamic Apply (files)
- Legacy cooperative processing (files)
- Advanced journaling (IFS objects, data areas, and data queues)

When a data group definition meets the requirements for MIMIX Dynamic Apply, any logical files and physical (source and data) files properly identified for cooperative processing are processed via MIMIX Dynamic Apply unless a known restriction prevents it. When a data group definition does not meet the requirements for MIMIX Dynamic Apply but still meets legacy cooperative processing requirements, any PF-DTA or PF38-DTA files properly configured for cooperative processing are replicated using legacy cooperative processing. All other types of files are processed using system journal replication. IFS objects, data areas, and data queues that can be journaled are not automatically configured for advanced journaling; these object types must be manually configured to use advanced journaling.

In all variations of cooperative processing, the system journal is used to replicate the following operations:

- The creation of new objects that do not deposit an entry in a user journal when they are created
- Restores of objects on the source system
- Move and rename operations from a non-replicated library or path into a library or path that is configured for replication

MIMIX Dynamic Apply


Most environments can take advantage of cooperatively processed operations for *FILE objects that are journaled primarily through a user (database) journal. MIMIX Dynamic Apply is the most efficient way to perform cooperative processing of logical and physical files. MIMIX Dynamic Apply intelligently handles files with relationships by assigning them to the same or appropriate apply sessions. It is also much better at maintaining the data integrity of replicated objects which previously needed legacy cooperative processing in order to replicate operations such as creates, deletes, moves, and renames. Another benefit of MIMIX Dynamic Apply is more efficient hold log processing, enabling multiple files to be processed through a hold log instead of just one file at a time.

New data groups created with the shipped default configuration values are configured to use MIMIX Dynamic Apply. This configuration requires data group object entries and data group file entries. For more information, see Identifying logical and physical files for replication on page 105 and Requirements and limitations of MIMIX Dynamic Apply on page 110.

Legacy cooperative processing


In legacy cooperative processing, record and member operations of *FILE objects are replicated through user journal processes, while all other transactions are replicated through system journal processes. Legacy cooperative processing supports only data files (PF-DTA and PF38-DTA).

Data groups that existed prior to upgrading to MIMIX version 5 are typically configured with legacy cooperative processing, which requires data group object entries and data group file entries. It is recommended that you use MIMIX Dynamic Apply for cooperative processing. Existing data groups configured to use legacy cooperative processing can be converted to use MIMIX Dynamic Apply. For more information, see Requirements and limitations of legacy cooperative processing on page 111.

Advanced journaling
The term advanced journaling refers to journaled IFS objects, data areas, or data queues that are configured for cooperative processing. When these objects are configured for cooperative processing, replication of the changed bytes of the journaled object's data occurs through the user journal. This is more efficient than replicating an entire object through the system journal each time changes occur. Such a configuration also allows updates to IFS objects, data areas, and data queues to be serialized with database journal entries. In addition, processing time for these object types may be reduced, even for equal amounts of data, because user journal replication eliminates the separate save, send, and restore processes necessary for system journal replication.

The phrase user journal replication of IFS objects, data areas, and data queues is used interchangeably with the term advanced journaling; these terms mean the same thing. For more information, see User journal replication of IFS objects, data areas, data queues on page 72 and Planning for journaled IFS objects, data areas, and data queues on page 85.


System journal replication


The system journal replication path is designed to handle the object-related availability needs of your system. You identify the critical system objects that you want to replicate, such as user profiles, programs, and DLOs. MIMIX uses the journal entries generated by the operating system's object auditing function to identify the changes to objects on production systems and replicates the changes to backup systems.

If you are not already using the system's security audit journal (QAUDJRN, or system journal), when you use MIMIX commands to build the journaling environment, MIMIX creates the journal and correctly sets system values related to auditing. MIMIX checks the settings of the following system values, making changes as necessary:

- QAUDLVL (Security auditing level) system value. MIMIX sets the values *CREATE, *DELETE, *OBJMGT, and *SAVRST. MIMIX checks for the values *SECURITY, *SECCFG, *SECRUN, and *SECVLDL and will set them only if the value *SECURITY is not already set. If any data group is configured to replicate spooled files, MIMIX also sets *SPLFDTA and *PRTDTA.
- QAUDCTL (Auditing control) system value. MIMIX sets the values *OBJAUD and *AUDLVL.
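The audit-related system values can be inspected, and if necessary set, with standard i5/OS commands. The following is a minimal sketch of the settings described above, assuming no other audit values are already in use on the system; normally you let MIMIX build the journaling environment and make these changes itself.

    /* Inspect the current audit configuration */
    DSPSYSVAL SYSVAL(QAUDLVL)
    DSPSYSVAL SYSVAL(QAUDCTL)
    /* Values MIMIX sets, per the description above */
    CHGSYSVAL SYSVAL(QAUDLVL) VALUE('*CREATE *DELETE *OBJMGT *SAVRST *SECURITY')
    CHGSYSVAL SYSVAL(QAUDCTL) VALUE('*OBJAUD *AUDLVL')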

These system value settings, along with the object audit value of each object, control what journal entries are created in the system journal (QAUDJRN) for an object. If an operation on an object is not represented by an entry in the system journal, MIMIX is not aware of the operation and cannot replicate it.

The system objects you want to replicate are defined to a data group through data group object entries, data group DLO entries, and data group IFS entries. The term name space refers to this collection of objects that are identified for replication by MIMIX using the system journal replication processes. An object is replicated when it is created, restored, moved, or renamed into the MIMIX name space. While in the MIMIX name space, changes to the object or to the authority settings of the object are also replicated.

Replication through the system journal is event-driven. When a data group is started, each process used in the replication path waits for its predetermined event to occur and then begins its activity. The processes are interdependent and run concurrently. The system journal replication path in MIMIX uses the following processes:

- Object send process: alternates between identifying objects to be replicated and transmitting control information about objects ready for replication to the target system.
- Object receive process: receives control information and waits for notification that additional source system processing, if any, is complete before passing the control information to the object apply process.
- Object retrieve process: if any additional information is needed for replication, obtains it and places it in a holding area. This process is also used when additional processing is required on the source system prior to transmission to the target system.
- Container send process: transmits any additional information from a holding area to the target system and notifies the control process of that action.
- Container receive process: receives any additional information and places it into a holding area on the target system.
- Object apply process: replicates objects according to the control information and any required additional information that is retrieved from the holding area.
- Status send process: notifies the source system of the status of the replication.
- Status receive process: updates the status on the source system and, if necessary, passes control information back to the object send process.

MIMIX uses a collection of structures, and customized functions for controlling these structures, during replication. Collectively, the customized functions and structures are referred to as the work log. The structures in the work log consist of log spaces, work lists (implemented as user queues), and the distribution status file.

When a data group is started, MIMIX uses the security audit journal to monitor for activity on objects within the name space. When activity occurs on an object, such as it being accessed or changed, a corresponding journal entry is created in the security audit journal. As journal entries are added to the journal receiver on the source system, the object send process reads the journal entries and determines whether they represent operations on objects that are within the name space. For each journal entry for an object within the name space, the object send process creates an activity entry in the work log. Creation of an activity entry includes adding the entry to the log space and adding a record to the distribution status file. An activity entry includes a copy of the journal entry and any related information associated with a replication operation for an object, including the status of the entry. User interaction with activity entries is through the Work with Data Group Activity display and the Work with DG Activity Entries display.

There are two categories of activity entries: those that are self-contained and those that require the retrieval of additional information. Processing self-contained activity entries on page 54 describes the simplest object replication scenario. Processing data-retrieval activity entries on page 55 describes the object replication scenario in which additional data must be retrieved from the source system and sent to the target system.

Processing self-contained activity entries


For a self-contained activity entry, the copied journal entry contains all of the information required to replicate the object. Examples of such journal entries include Change Authority (T-CA), Object Move or Rename (T-OM), and Object Delete (T-DO). After the object send process determines that an entry is to be replicated, it performs the following actions:

- Sets the status of the entry to PA (pending apply)
- Adds the sent date and time to the activity entry
- Writes the activity entry to the log space and adds a record to the distribution status file
- Transmits the activity entry to a corresponding object receive process job on the target system

The object receive process adds the received date and time to the activity entry, writes the activity entry to the log space, adds a record to the distribution status file, and places the activity entry on the object apply work list. Now each system has a copy of the activity entry.

The next available object apply process job for the data group retrieves the activity entry from the object apply work list and replicates the operation represented by the entry. The object apply process adds the applied date and time to the activity entry, changes the status of the entry to CP (completed processing), and adds the entry to the status send work list.

The status send process retrieves the activity entry from the status send work list and transmits the updated entry to a corresponding status receive process on the source system. The status receive process updates the activity entry in the work log and the distribution status file.

Processing data-retrieval activity entries


In a data-retrieval activity entry, additional data must be gathered from the object on the source system in order to replicate the operation. The copied journal entry indicates that changes to an object affect the attributes or data of the object; the actual content of the change is not recorded in the journal entry. To properly replicate the object, its content, its attributes, or both must be retrieved and transmitted to the target system. MIMIX may retrieve this data by using APIs or by using the appropriate save command for the object type. APIs store the data in one or more user spaces (*USRSPC) in a data library associated with the MIMIX installation. Save commands store the object data in a save file (*SAVF) in the data library. Collectively, these objects in the data library are known as containers.

After the object send process determines that an entry is to be replicated and that additional processing or information on the source system is required, it performs the following actions:

- Sets the status of the entry to PR (pending retrieve)
- Adds the sent date and time to the activity entry
- Writes the activity entry to the log space and adds a record to the distribution status file
- Transmits the activity entry to a corresponding object receive process on the target system
- Adds the entry to the object retrieve work list on the source system

The object receive process adds the received date and time to the activity entry, writes the activity entry to the log space, and adds a record to the distribution status file. Now each system has a copy of the activity entry. The object receive process waits until the source system processing is complete before it adds the activity entry to the object apply work list.


Concurrently, the object send process reads the object send work list. When the object send process finds an activity entry in the object send work list, it performs one or more of the following additional steps on the entry:

- If an object retrieve job packaged the object, the activity entry is routed to the container send work list.
- The activity entry is transmitted to the target system, its status is updated, and a retrieved date and time is added to the activity entry.

On the source system, the next available object retrieve process for the data group retrieves the activity entry from the object retrieve work list and processes the referenced object. In addition to retrieving additional information for the activity entry, additional processing may be required on the source system. The object retrieve process may perform some or all of the following steps:

- Retrieve the extended attribute of the object. This may be one step in retrieving the object, or it may be the primary function required of the retrieve process.
- If necessary, perform cooperative processing activities, such as adding or removing a data group file entry.
- Package the object identified by the activity entry into a container in the data library.

The object retrieve process then adds the retrieved date and time to the activity entry and changes the status of the entry to pending send. The activity entry is added to the object send work list. From there, the object send job takes the appropriate action for the activity, which may be to send the entry to the target system, add the entry to the container send work list, or both.

The container send and receive processes are used only when an activity entry requires information in addition to what is contained within the journal entry. The next available container send process job for the data group retrieves the activity entry from the container send work list and retrieves the container for the packaged object from the data library. The container send job transmits the container to a corresponding container receive process job on the target system. The container receive process places the container in a data library on the target system. The container send process waits for confirmation from the container receive job, then adds the container sent date and time to the activity entry, changes the status of the activity entry to PA (pending apply), and adds the entry to the object send work list.

The next available object apply process job for the data group retrieves the activity entry from the object apply work list, locates the container for the object in the data library, and replicates the operation represented by the entry. The object apply process adds the applied date and time to the activity entry, changes the status of the entry to CP (completed processing), and adds the entry to the status send work list.

The status send process retrieves the activity entry from the status send work list and transmits the updated entry to a corresponding status receive process job on the source system. The status receive process updates the activity entry in the log space and the distribution status file. If the activity entry requires further processing, such as when an updated container is needed on the target system, the status receive job adds the entry to the object send work list.
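Pulling these steps together, a data-retrieval activity entry moves through the following statuses:

    PR (pending retrieve) -> pending send -> PA (pending apply) -> CP (completed processing)

The object send process sets PR, the object retrieve process sets pending send after packaging the object, the container send process sets PA after the container is confirmed on the target system, and the object apply process sets CP once the operation is replicated.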


Processes with multiple jobs


The object retrieve, container send and receive, and object apply processes all consist of one or more asynchronous jobs. You can specify the minimum and maximum number of asynchronous jobs you want to allow MIMIX to run for each process and a threshold for activating additional jobs. The minimum number indicates how many permanent jobs should be started for the process. These jobs stay active as long as the data group is active. During periods of peak activity, if more requests are backlogged than are specified in the threshold, additional temporary jobs, up to the maximum number, may also be started. This load leveling feature allows system journal replication processes to react automatically to periodic heavy workloads. By doing this, the replication process stays current with production system activity. When system activity returns to a reduced level, the temporary jobs end after a period of inactivity elapses.
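For example, with hypothetical settings of a minimum of one job, a maximum of four jobs, and a threshold of 100 backlogged requests, one permanent job runs whenever the data group is active. If the backlog grows beyond 100 requests, MIMIX starts up to three additional temporary jobs, and each temporary job ends again after a period of inactivity.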

Tracking object replication


After you start a data group, you need to monitor the status of the replication processes and respond to any error conditions. Regular monitoring and timely responses to error conditions significantly reduce the amount of time and effort required in the event that you need to switch a data group.

MIMIX provides an indication of the high-level status of the processes used in object replication and of error conditions. You can access detailed status information through the Data Group Status window in MIMIX Availability Manager or the MIMIX Availability Status display in a 5250 emulator.

When an operation cannot complete on either the source or target system (such as when the object is in use by another process and cannot be accessed), the activity entry may go to a failed state. MIMIX attempts to rectify many failures automatically, but some failures require manual intervention. Objects with at least one failed entry outstanding are considered to be in error. You should periodically review the objects in error, and the associated failed entries, and determine the appropriate action. You may retry or delete one or all of the failed entries for an object. You can check the progress of activity entries and take corrective action through the Work with Data Group Activity display and the Work with DG Activity Entries display. You can also subset directly to the activity entries in error from the Work with Data Groups display.

If you have new objects to replicate that are not within the MIMIX name space, you need to add data group entries for them. Before any new data group entries can be replicated, you must end and restart the system journal replication processes in order for the changes to take effect.

The system manager removes old activity entries from the work log on each system after the time specified in the system definition passes. The Keep data group history (days) parameter (KEEPDGHST) indicates how long the activity entries remain on the system. You can also manually delete activity entries. Containers in the data libraries are deleted after the time specified in the Keep MIMIX data (days) parameter (KEEPMMXDTA).

Managing object auditing


The system journal replication path within MIMIX relies on entries placed in the system journal by i5/OS object auditing functions. To ensure that objects configured for this replication path retain an object auditing value that supports replication, MIMIX evaluates and changes an object's auditing value when necessary. To do this, MIMIX employs a configuration value that is specified on the Object auditing value (OBJAUD) parameter of data group entries (object, IFS, DLO) configured for the system journal replication path. When MIMIX determines that an object's auditing value is lower than the configured value, it changes the object to have the higher configured value specified in the data group entry that is the closest match to the object. The OBJAUD parameter supports object audit values of *ALL, *CHANGE, or *NONE.

MIMIX evaluates, and may change, an object's auditing value when specific conditions exist during object replication or during processing of a Start Data Group (STRDG) request. This evaluation process can also be invoked manually for all objects identified for replication by a data group.

During replication - MIMIX may change the auditing value during replication when an object is replicated because it was created, restored, moved, or renamed into the MIMIX name space (the group of objects defined to MIMIX).

While starting a data group - MIMIX may change the auditing value while processing a STRDG request if the request specified processes that cause object send (OBJSND) jobs to start and the request occurred after a data group switch or after a configuration change to one or more data group entries (object, IFS, or DLO). Shipped command defaults for the STRDG command allow MIMIX to set object auditing if necessary. If you would rather set the auditing level for replicated objects yourself, you can specify *NO for the Set object auditing level (SETAUD) parameter when you start data groups.

Invoking manually - The Set Data Group Auditing (SETDGAUD) command provides the ability to manually set the object auditing level of existing objects identified for replication by a data group; a sketch follows this topic. When the command is invoked, MIMIX checks the audit value of existing objects identified for system journal replication. Shipped default values on the command cause MIMIX to change the object auditing value of objects to match the configured value when an object's actual value is lower than the configured value. The SETDGAUD command is used during initial configuration of a data group. Otherwise, it is not necessary for normal operations and should only be used under the direction of a trained MIMIX support representative. The SETDGAUD command also supports optionally forcing a change to a configured value that is lower than the existing value through its Force audit value (FORCE) parameter.

Evaluation processing - Regardless of how the object auditing evaluation is invoked, MIMIX may find that an object is identified by more than one data group entry within the same class of object (IFS, DLO, or library-based). It is important to understand the order of precedence for processing data group entries. Data group entries are processed in order from most generic to most specific. IFS entries are processed using the Unicode character set; object entries and DLO entries are processed using the EBCDIC character set. The first (more generic) entry found that matches the object is used until a more specific match is found. The entry that most specifically matches the object is used to process the object. If the object has a lower audit value, it is set to the configured auditing value specified in the data group entry that most specifically matches the object.

When MIMIX processes a data group IFS entry and changes the auditing level of objects which match the entry, all of the directories in the object's directory path are checked and, if necessary, changed to the new auditing value. In the case of an IFS entry with a generic name, all descendents of the IFS object may also have their auditing value changed.

When you change a data group entry, MIMIX updates all objects identified by the same type of data group entry in order to ensure that auditing is set properly for objects identified by multiple entries with different configured auditing values. For example, if a new DLO entry is added to a data group, MIMIX sets object auditing for all objects identified by the data group's DLO entries, but not for its object entries or IFS entries. For more information and examples of setting auditing values with the SETDGAUD command, see Setting data group auditing values manually on page 297.
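The following sketch shows how SETDGAUD might be invoked during initial configuration. The command name and the FORCE parameter appear in this topic, but the DGDFN parameter name and the data group name MYDGDFN are assumptions for illustration; use the command only as described above.

    /* Set object auditing for objects identified by the data group */
    SETDGAUD DGDFN(MYDGDFN)
    /* Optionally force objects to a configured value that is lower */
    /* than their existing audit value */
    SETDGAUD DGDFN(MYDGDFN) FORCE(*YES)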


User journal replication


MIMIX Remote Journal support enables MIMIX to take advantage of the cross-journal communications capabilities provided by the i5/OS remote journal function instead of using internal communications. Newly created data groups use remote journaling as the default configuration.

What is remote journaling?


Remote journaling is a function in the i5/OS operating system that allows you to establish journals and journal receivers on a target eServer System i5 system and associate them with specific journals and journal receivers on a source system. After the journals and journal receivers are established on both systems, the remote journal function can replicate journal entries from the source system to the journals and journal receivers located on the target system. The remote journal function supports both synchronous and asynchronous modes of operation. More information about the benefits and implications of each mode can be found in topic Overview of IBM processing of remote journals on page 63. You should become familiar with the terminology used by the i5/OS remote journal function. The Backup and Recovery and Journal management books are good sources for terminology and for information about considerations you should be aware of when you use remote journaling. The IBM redbooks AS/400 Remote Journal Function for High Availability and Data Replication (SG24-5189) and Striving for Optimal Journal Performance on DB2 Universal Database for iSeries (SG24-6286) provide an excellent overview of remote journaling in a high availability environment. You can find these books online at the IBM eServer iSeries Information Center.

Benefits of using remote journaling with MIMIX


MIMIX has internal send and receive processing as part of its architecture. IBM added the remote journal function to the System i5 within the licensed internal code layer of OS/400 in its V4R3 release. Moving cross-journal communications into the licensed internal code provides greater System i5 integration and efficiency. The MIMIX Remote Journal support allows MIMIX to take advantage of the cross-journal communications functions provided by the i5/OS remote journal function instead of using the internal communications provided by MIMIX.

As stated in the AS/400 Remote Journal Function for High Availability and Data Replication redbook, the benefits of the remote journal function include:

- It lowers the CPU consumption on the source machine by shifting the processing required to receive the journal entries from the source system to the target system. This is true when asynchronous delivery is selected.
- It eliminates the need to buffer journal entries to a temporary area before transmitting them from the source machine to the target machine. This translates into less disk writes and greater DASD efficiency on the source system.
- Since it is implemented in microcode, it significantly improves the replication performance of journal entries and allows database images to be sent to the target system in realtime. This realtime operation is called the synchronous delivery mode. If the synchronous delivery mode is used, the journal entries are guaranteed to be in main storage on the target system prior to control being returned to the application on the source machine.
- It allows the journal receiver save and restore operations to be moved to the target system. This way, the resource utilization on the source machine can be reduced.

Restrictions of MIMIX Remote Journal support


The i5/OS remote journal function does not allow writing journal entries directly to the target journal receiver. This restriction severely limits the usefulness of cascading remote journals in a managed availability environment. MIMIX user journal replication does not support a cascading environment in which remote journal receivers on the target system are also source journal receivers for a third system. Users who require this type of environment may use multiple installations of MIMIX, implementing apply side journaling in one installation and using remote journaling to replicate the applied transactions to a third system.


Overview of IBM processing of remote journals


Several key concepts within the i5/OS remote journal function are important to understanding its impact on MIMIX replication. A local-remote journal pair refers to the relationship between a configured source journal and target journal. The key point about a local-remote journal pair is that data flows in only one direction within the pair, from source to target.

When the remote journal function is activated and all journal entries from the source are requested, existing journal entries for the specified journal receiver on the source system which have not already been replicated are replicated as quickly as possible. This is known as catchup mode. Once the existing journal entries are delivered to the target system, the system begins sending new entries in continuous mode according to the delivery mode specified when the remote journal function was started. New journal entries can be delivered either synchronously or asynchronously.

Synchronous delivery
In synchronous delivery mode, the target system is updated in real time with journal entries as they are generated by the source applications. The source applications do not continue processing until the journal entries are sent to the target journal. Each journal entry is first replicated to the target journal receiver in main memory on the target system (1 in Figure 3). When the source system receives notification of the delivery to the target journal receiver, the journal entry is placed in the source journal receiver (2) and the source database is updated (3). With synchronous delivery, journal entries that have been written to memory on the target system are considered unconfirmed entries until they have been written to auxiliary storage on the source system and confirmation of this is received on the target system (4).
Figure 3. Synchronous mode sequence of activity in the IBM remote journal feature. [Figure not reproduced. It shows the source system (applications, source journal receiver (local), production database, and source journal message queue) and the target system (target journal receiver (remote) and target journal message queue), with the numbers 1 through 4 marking the sequence described above.]
Unconfirmed journal entries are entries replicated to a target system where the state of the I/O to auxiliary storage for the same journal entries on the source system is not known. Unconfirmed entries pertain only to remote journals that are maintained synchronously. They are held in the data portion of the target journal receiver. These entries are not processed with other journal entries unless specifically requested or until confirmation of the I/O for the same entries is received from the source system. Confirmation typically is not sent to the target system immediately, for performance reasons. Once the confirmation is received, the entries are considered confirmed journal entries. Confirmed journal entries are entries that have been replicated to the target system and for which the I/O to auxiliary storage for the same journal entries on the source system is known to have completed.

With synchronous delivery, the most recent copy of the data is on the target system. If the source system becomes unavailable, you can recover using data from the target system. Since delivery is synchronous to the application layer, there are application performance and communications bandwidth considerations. There is some performance impact to the application when it is moved from asynchronous mode to synchronous mode for high availability purposes. This impact can be minimized by ensuring efficient data movement. In general, a minimum of a dedicated 100 megabit Ethernet connection is recommended for synchronous remote journaling.


MIMIX includes special switch processing for unconfirmed entries to ensure that the most recent transactions are preserved in the event of a source system failure. For more information, see Support for unconfirmed entries during a switch on page 70.

Asynchronous delivery
In asynchronous delivery mode, the journal entries are placed in the source journal first (A in Figure 4) and then applied to the source database (B). An independent job sends the journal entries from a buffer (C) to the target system journal receiver (D) at some time after control is returned to the source applications that generated the journal entries. Because the journal entries on the target system may lag behind the source system's database, in the event of a source system failure, entries may become trapped on the source system.
Figure 4. Asynchronous mode sequence of activity in the IBM remote journal feature. [Figure not reproduced. It shows the source system (applications, source journal receiver (local), production database, source journal message queue, and buffer) and the target system (target journal receiver (remote) and target journal message queue), with the letters A through D marking the sequence described above.]

With asynchronous delivery, the most recent copy of the data is on the source system. Performance critical applications frequently use asynchronous delivery. Default values used in configuring MIMIX for remote journaling use asynchronous delivery. This delivery mode is most similar to the MIMIX database send and receive processes.


User journal replication processes


Data groups created using default values are configured to use remote journaling support for user journal replication. The replication path for database information includes the i5/OS remote journal function, the MIMIX database reader process, and one or more database apply processes:

- The i5/OS remote journal function transfers journal entries to the target system.
- The database reader (DBRDR) process reads journal entries from the target journal receiver of a remote journal configuration and places those journal entries that match replication criteria for the data group into a log space. Remote journaling does not allow entries to be filtered from being sent to the remote system; all entries deposited into the source journal are transmitted to the target system. The database reader process performs the filtering that is identified in the data group definition parameters and in file and tracking entry options.
- The database apply process applies the changes stored in the target log space to the target system's database. MIMIX uses multiple apply processes in parallel for maximum efficiency. Transactions are applied in real time to generate a duplicate image of the journaled objects being replicated from the source system.

The RJ link
To simplify tasks associated with remote journaling, MIMIX implements the concept of a remote journal link. A remote journal link (RJ link) is a configuration element that identifies an i5/OS remote journaling environment. An RJ link identifies:

- A source journal definition that identifies the system and journal which are the source of journal entries being replicated from the source system.
- A target journal definition that defines a remote journal.
- Primary and secondary transfer definitions for the communications path for use by MIMIX.
- Whether the i5/OS remote journal function sends journal entries asynchronously or synchronously.

Once an RJ link is defined and other configuration elements are properly set, user journal replication processes will use the i5/OS remote journaling environment within their replication path. The concept of an RJ link is integrated into existing commands. The Work with RJ Links display makes it easy to identify the state of the i5/OS remote journaling environment defined by the RJ link.

Sharing RJ links among data groups


It is possible to configure multiple data groups to use the same RJ link. However, data groups should only share an RJ link if they are intended to be switched together or if they are non-switchable data groups. Otherwise, there is additional communications overhead from data groups replicating in opposite directions, and the potential for journal entries for database operations to be routed back to their originating system. See Support for unconfirmed entries during a switch on page 70 and RJ link considerations when switching on page 70 for more details.

RJ links within and independently of data groups


The RJ link is integrated into commands for starting and ending data group replication (STRDG and ENDDG). The STRDG and ENDDG commands automatically determine whether the data group uses remote journaling and select the appropriate replication path processes, including the RJ link, as needed. Two MIMIX commands provide the ability to use an RJ link without performing data replication. The Start Remote Journal Link (STRRJLNK) and the End Remote Journal Link (ENDRJLNK) commands provide this capability.

Differences between ENDDG and ENDRJLNK commands


You should be aware of differences between ending data group replication (ENDDG command) and ending only the remote journal link (ENDRJLNK command). You will primarily use the End Data Group (ENDDG) command to end replication processes and to optionally end the RJ link when necessary. The End Remote Journal Link (ENDRJLNK) command ends only the RJ link. Both commands include an end option (ENDOPT parameter) to specify whether to end immediately or in a controlled manner. These options on the ENDRJLNK command do not have the same meaning as on the ENDDG command. For ENDRJLNK, the ENDOPT parameter has the following values:
Table 4. End option (ENDOPT) values on the End Remote Journal Link (ENDRJLNK) command

*IMMED    The target journal is deactivated immediately. Journal entries that are already queued for transmission are not sent before the target journal is deactivated. The next time the remote journal function is started, the journal entries that were queued but not sent are prepared again for transmission to the target journal.

*CNTRLD   Any journal entries that are queued for transmission to the target journal are transmitted before the i5/OS remote journal function is ended. At any time, the remote journal function may have one or more journal entries prepared for transmission to the target journal. If an asynchronous delivery mode is used over a slow communications line, it may take a significant amount of time to transmit the queued entries before actually ending the target journal.

The ENDRJLNK command's ENDOPT parameter is ignored and an immediate end is performed when either of the following conditions is true:

- The remote journal function is running in synchronous mode (DELIVERY(*SYNC)).
- The remote journal function is performing catch-up processing.
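For illustration, the two end options from Table 4 might be used as shown below. The parameter that identifies which RJ link to end is omitted here because its name is not given in this overview; the sketch shows only the ENDOPT values named above.

    /* Controlled end: queued journal entries are transmitted first */
    ENDRJLNK ENDOPT(*CNTRLD)
    /* Immediate end: queued entries are prepared again at the next start */
    ENDRJLNK ENDOPT(*IMMED)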


RJ link monitors
User journal replication processes monitor the journal message queues of the journals identified by the RJ link. Two RJ link monitors are created automatically, one on the source system and one on the target system. These monitors provide added value by allowing MIMIX to automatically monitor the state of the remote journal link, to notify the user of problems, and to automatically recover the link when possible.

RJ link monitors - operation


The RJ link monitors are automatically started when the master monitor is started. If for some reason the monitors are not already started, they will be started when you start a remote journal link. The monitors are created if they do not already exist. The source RJ link monitor is named after the source journal definition, and the target RJ link monitor is named after the target journal definition.

The RJ link monitors are MIMIX message queue monitors. They monitor messages put on the message queues associated with the source and target journals. The operating system issues messages to these journal message queues when a failure is detected in i5/OS remote journal processing. Each RJ link monitor uses information provided in the messages to determine which remote journal link is affected and to try to automatically recover that remote journal link. (The state of a remote journal link can be seen by using the Work with RJ Links (WRKRJLNK) command.) There is a limit on the number of times that a link will be recovered in a particular time period; a continually failing link will eventually be marked as failed and recovery will end. Typically this occurs when there are communications problems. Once the problem is resolved, you can start the RJ link monitors again using the Work with Monitors (WRKMON) command and selecting the Start option.

The RJ link monitor for the source does not end once it is started, since more than one remote journal link can use a source monitor. Users can end the monitors by using the Work with Monitors (WRKMON) command and selecting the End option.

MIMIX Monitor commands can be used to see the status of your RJ link monitors. The WRKMON command lists all monitors for a MIMIX installation and displays whether each monitor is active or inactive. You can also view the status of your RJ link monitors on the DSPDGSTS status display (option 8 from the Work with Data Groups display). Both the source and target RJ link monitor processes appear on this display. The display shows whether or not the monitor processes are active. If MIMIX Monitor is not installed as recommended, the RJ link monitor status appears as unknown on the Display Data Group Status display.

RJ link monitors in complex configurations


In a broadcast scenario, a single source journal definition can link to multiple target journal definitions, each over its own remote journal link. One source RJ link monitor handles this broadcast, since there is one source RJ monitor per source journal definition communicating via a remote journal link. Alternately, in a cascade scenario an intermediate system can have both a source RJ link monitor and a target RJ link monitor running on it for the same journal definition. This intermediate system has the target journal definition for the system that originated the replication and holds the source journal definition for the next system in the cascade. For more information about configuring for these environments, see Data distribution and data management scenarios on page 361.


Support for unconfirmed entries during a switch


The MIMIX Remote Journal support implements synchronous mode processing in a way that reduces data latency in the movement of journal entries from the source to the target system. This reduces the potential for, and the degree of, manual intervention when an unplanned outage occurs.

Whenever an RJ link failure is detected, MIMIX saves any unconfirmed entries on the target system so they can be applied to the backup database if an unplanned switch is required. The unconfirmed entries are the most recent changes to the data. Maintaining this data on the target system is critical to your managed availability solution.

In the event of an unplanned switch, the unconfirmed entries are routed to the MIMIX database apply process to be applied to the backup database. As a result, the database apply process jobs run longer than they would under standard switch processing. If the apply process is ended by a user before the switch, MIMIX restarts the apply jobs to preserve these entries. As part of the unplanned switch processing, MIMIX checks whether the apply jobs are caught up. Then, unconfirmed entries are applied to the target database and added to a journal that will be transferred to the source system when that system is brought back up.

When the backup system is brought online as the temporary source system, the unconfirmed entries are processed before any new journal entries generated by the application are processed. Furthermore, to ensure full data integrity, once the original source system is operational these unconfirmed entries are the first entries replicated back to that system.

RJ link considerations when switching


By default, when a data group is ended or a planned switch occurs, the RJ link remains active. You need to consider whether to keep the original RJ link active after a planned switch of a data group. If the RJ link is used by another application or data group, the RJ link must remain active. Sharing an RJ link among multiple data groups is only recommended for the conditions identified in Sharing RJ links among data groups on page 66. If the RJ link is not used by any other application or data group, the link should be ended to prevent communications and processing overhead.

When you are temporarily running production applications on the backup system after a planned switch, journal entries generated on the backup system are transmitted to the remote journal receiver (which is on the production system). MIMIX applies the entries to the original production database. If journaling is still active on the original production database, new journal entries are created for the entries that were just applied. These new journal entries are essentially a repeat of the same operation just performed against the database. Remote journaling causes the entries to be transmitted back to the backup system. MIMIX prevents these repeat entries from being reapplied; however, the repeated entries cause additional resources to be used within MIMIX and in communications.

MIMIX Model Switch Framework considerations - When remote journaling is used in an environment in which MIMIX Model Switch Framework is implemented, you need to consider the implications of sharing an RJ link. In addition, default values used during a planned switch cause the RJ link to remain active. You may need to end the RJ link after a planned switch.


User journal replication of IFS objects, data areas, data queues


IBM provides journaling support for IFS objects as well as for data areas and data queues. This capability allows transactions to be journaled in the user journal (database journal), much like transactions are recorded for database record changes. Each time an IFS object, data area, or data queue changes, only the changed bytes are recorded in the journal entry.

MIMIX enables you to take advantage of this capability of the i5/OS operating system when replicating these journaled objects. This support within MIMIX is often referred to as advanced journaling and is enabled by explicitly configuring data group object entries for data areas and data queues and data group IFS entries for IFS objects. In addition to data group object entries and IFS entries, MIMIX uses tracking entries to uniquely identify each object that is configured for advanced journaling.

A data group that replicates some or all configured IFS objects, data areas, or data queues through a user journal may also replicate files from the same journal as well as replicate objects from the system journal. For example, a data group could be configured to support MIMIX Dynamic Apply for *FILE objects, advanced journaling for IFS objects and data areas, and system journal processes for data queues and other library-based objects. For more information, see Replication choices by object type on page 96.

You may need to consider how much data is replicated through the same apply session for user journal replication processes and whether any transactions need to be serialized with database files. For more information, see Planning for journaled IFS objects, data areas, and data queues on page 85.

Benefits of advanced journaling


One of the most significant benefits of using advanced journaling is that IFS objects, data areas, and data queues are processed by replicating only changed bytes. When IFS objects, data areas, or data queues are replicated through the system journal, the entire object is shipped across the communications link. While this may be sufficient for many applications, those using large files or making frequent small byte-level changes can be negatively impacted by the additional data transmission. When these objects are configured to allow user journal replication, MIMIX replicates only the changed bytes of the data for IFS objects, data areas, and data queues.

Another significant benefit of using advanced journaling for IFS objects, data areas, and data queues is that transactions can be applied in lock-step with a database file. This requires that the objects and database files are configured to the same data group and the same database apply session. For example, assume that a hotel uses a database application to reserve rooms. Within the application, a data area contains a counter to indicate the number of rooms reserved for a particular day and a database file contains detailed information about reservations. Each time a room is reserved, both the counter and the database file are updated. If these updates do not occur in the same order on the target system, the hotel risks reserving too many or too few rooms.

Without advanced journaling, serialization of these transactions cannot be guaranteed on the target system due to inherent differences in MIMIX processing from the user journal (database file) and the system journal (default for objects). With advanced journaling, MIMIX serializes these transactions on the target system by updating both the file and the data area through user journal processing. Thus, as long as the database file and data area are configured to be processed by the same apply session, updates occur on the target system in the same order they were originally made on the source system.

Additional benefits of replicating IFS objects, data areas, and data queues from the user journal include:
• Replication is less intrusive. In traditional object replication, the save/restore process places locks on the replicated object on the source system. Database replication touches the user journal only, leaving the source object alone.
• Changes to objects replicated from the user journal may be replicated to the target system in a more timely manner. In traditional object replication, system journal replication processes must contend with potential locks placed on the objects by user applications.
• Processing time may be reduced, even for equal amounts of data. Database replication eliminates the separate save, send, and restore processes necessary for object replication.
• Objects replicated from the user journal can reduce the burden on object replication processes when there is a lot of activity being replicated through the system journal.
• Commitment control is supported for B journal entry types for IFS objects journaled to a user journal.
• Advanced journaling can be used in configurations that use either remote journaling or MIMIX source-send processes for user journal replication.

Restrictions and configuration requirements vary for IFS objects and data area or data queue objects. If one or more of the configuration requirements are not met, the system journal replication path is used. For detailed information, including supported journal entry types, see Identifying data areas and data queues for replication on page 112 and Identifying IFS objects for replication on page 118.

Replication processes used by advanced journaling


When IFS objects, data areas, and data queues are properly configured, replication occurs through the user journal replication path. Processing occurs through the i5/OS remote journal function, the MIMIX database reader process, and one database apply process (session A).

Note: Data groups can also be configured for MIMIX source-send processing instead of MIMIX RJ support.


Tracking entries
A unique tracking entry is associated with each IFS object, data area, and data queue that is replicated using advanced journaling. The collection of data group IFS entries for a data group determines the subset of existing IFS objects on the source system that are eligible for replication using advanced journaling techniques. Similarly, the collection of data group object entries determines the subset of existing data areas and data queues on the source system that are eligible for replication using advanced journaling techniques.

MIMIX requires a tracking entry for each of the eligible objects to identify how it is defined for replication and to assist with tracking status when it is replicated. IFS tracking entries identify IFS stream files, including the source and target file ID (FID), while object tracking entries identify data areas or data queues.

When you initially configure a data group, you must load tracking entries, start journaling for the objects which they identify, and synchronize the objects with the target system. The same is true when you add new or change existing data group IFS entries or object entries. It is also possible for tracking entries to be created automatically: after creating or changing data group IFS entries or object entries that are configured for advanced journaling, tracking entries are created the next time the data group is started. However, this method has disadvantages. It can significantly increase the amount of time needed to start a data group. Also, if the objects you intend to replicate with advanced journaling are not journaled before the start request is made, MIMIX places the tracking entries in *HLDERR state, and error messages indicate that journaling must be started and the objects must be synchronized between systems.

Once a tracking entry exists, it remains until one of the following occurs:
• The object identified by the tracking entry is deleted from the source system and replication of the delete action completes on the target system.
• The data group configuration changes so that the object is no longer identified for replication using advanced journaling.
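If you start journaling manually for an object identified by a tracking entry, the operating system's STRJRN command is used for IFS objects. A minimal sketch follows; the path and journal names are illustrative only, so substitute the journal associated with your data group.

   STRJRN OBJ(('/orders/config.txt' *INCLUDE)) +
          JRN('/QSYS.LIB/PRODLIB.LIB/PRODJRN.JRN')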


Figure 5 shows an IFS user directory structure, the include and exclude processing selected for objects within that structure, and the resultant list of tracking entries created by MIMIX.
Figure 5. IFS tracking entries produced by MIMIX

Viewing tracking entries is supported in both 5250 emulator and MIMIX Availability Manager interfaces. Their status is included with other data group status. You also can see what objects they identify, whether the objects are journaled, and their replication status. You can also perform operations on tracking entries, such as holding and releasing, to address replication problems.

IFS object file identifiers (FIDs)


Normally, when dealing with objects and database files, you can see the name of the object (file name, library name, and member name) in the journal entries. For IFS objects, it is impractical to put the name of the IFS object in the header of the journal entry due to potentially long path names. Instead, each IFS object on a system has a unique 16-byte file ID (FID), and the FID is used to identify IFS objects in journal entries. The FID is machine-specific, meaning that IFS objects with the same path name may have different FIDs on different systems. MIMIX tracks the FIDs for all IFS objects configured for replication with advanced journaling via IFS tracking entries. When the data group is switched, the source and target FID associations are reversed, allowing MIMIX to successfully replicate transactions to IFS objects.


Lesser-used processes for user journal replication


This topic describes two lesser-used replication processes: MIMIX source-send processing for database replication and the data area poller process.

User journal replication with source-send processing


This topic describes user journal replication when data groups are configured to use MIMIX source-send processes.

Note: New data groups are created to use remote journaling support for user journal replication when shipped default values on commands are used. Using remote journaling support offers many benefits over using MIMIX source-send processes.

MIMIX uses journaling to identify changes to database files and other journaled objects to be replicated. As journal entries are added to the journal receiver, the database send process collects data from journal entries on the source system and compares them to the data group file entries defined for the data group. Journal entries for which a match is found for the file and library are then transported to the target system for replication, according to the filtering specified in the DB journal entry processing (DBJRNPRC) parameter of the data group definition. The file entry options (FEOPT) parameter, specified either at the data group level or on individual data group file entries, also indicates whether to send only the after-image of the change or both before-images and after-images. Alternatively, if all journal entries are sent to the target system, the journal entries are filtered there by the apply process. The matching for the apply process is at the file, library, and member level.

Note: If an application program adds or removes members and all members within the file are to be processed by MIMIX, it is better to use *ALL as the member name in that data group file entry. If individual members are specified, only those members you identify are processed.

On the target system, the database receive process transfers the data received over the communications line from the source system into a log space on the target system. The database apply process applies replicated database transactions from the log space to the appropriate database physical file member or data area on the target system. For database files, transactions are applied at record level (puts, updates, deletes) or file level (clears, reorganizations, member deletes). MIMIX uses multiple apply processes in parallel for maximum efficiency. Transactions are applied in real time to generate a duplicate image of the files and data areas replicated from the source system.

Throughout this process, MIMIX manages the journal receiver unless you have specified otherwise. The journal definition default operation specifies that MIMIX automatically create the next journal receiver when the journal receiver reaches the threshold size you specified in the journal definition. After MIMIX finishes reading the entries from the current journal receiver, it deletes this receiver (if configured to do so) and begins reading entries from the next journal receiver. This eliminates excessive use of disk storage and allows valuable system resources to be available for other processing.

Besides indicating the mapping between source and target file names, data group file entries identify additional information used by database processes. The data group file entry can also specify a particular apply session to use for processing on the target system. A status code in the data group file entry also stores the status of the file or member in the MIMIX process. If a replication problem is detected, MIMIX puts the member in hold error (*HLDERR) status so that no further transactions are applied. Files can also be put on hold (*HLD) manually. Putting a file on hold causes MIMIX to retain all journal entries for the file in log spaces on the target system. If you expect to synchronize files at a later time, it is better to put the file in an ignored state. By setting files to an ignored state, journal entries for the file in the log spaces are deleted and additional entries received from the source system are discarded. This keeps the log spaces to a minimal size and improves efficiency for the apply process.

The file entry option Lock member during apply indicates whether or not to allow only restricted access (read-only) to the file on the backup system. This file entry option can be specified on the data group definition or on individual data group entries.

The data area polling process


Note: The preferred way to replicate data areas is through the user journal. Data areas can alternatively be replicated through system journal replication processes or with the data area poller.

When a data group is configured to use the data area polling process, polling programs capture changes to data areas defined to the data group at specified intervals. MIMIX creates a journal entry when there is a change to a data area. MIMIX supports the following data area types:
Table 5. Data area types supported by the data area polling process

*CHAR   character, up to 2000 bytes
*DEC    decimal, up to 24 bytes in length and 9 decimal positions
*LGL    logical, equal to 1 byte

You define a data group data area entry for each data area that you want MIMIX to manage. The data group definition determines how frequently the polling programs check for changes to data areas.

The data area polling process runs on the source system. This process retrieves each data area defined to a data group at the interval you specify and determines whether or not a data area has changed. MIMIX checks for changes to the data area type and length as well as to the contents of the data area. If a data area has changed, the data area polling process retrieves the data area and converts it into a journal entry. This journal entry is sent through the normal user journal replication processing and is used to update the data area on the target system. For example, if a data area that is defined to MIMIX is deleted and recreated with new attributes, the data area polling process will capture the new attributes and recreate the data area on the target system.
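For reference, the following operating system command creates a decimal data area that falls within the limits shown in Table 5; the library, data area name, and text are illustrative.

   CRTDTAARA DTAARA(PRODLIB/ROOMCOUNT) TYPE(*DEC) LEN(15 0) +
             VALUE(0) TEXT('Rooms reserved counter')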


Chapter 3

Preparing for MIMIX

This chapter outlines what you need to do to prepare for using MIMIX. Preparing for the installation and use of MIMIX is an important step toward meeting your availability management requirements. Because of their shared functions and their interaction with other MIMIX products, it is best to determine System i5 requirements for user journal and system journal processing in the context of your total MIMIX environment.

Give special attention to planning and implementing security for MIMIX. General security considerations for all MIMIX products can be found in the License and Availability Manager book. In addition, you can make your systems more secure with MIMIX product-level and command-level security. Each product has its own product-level security, but you must also consider the security implications of common functions used by each product. Information about setting security for common functions is also found in the License and Availability Manager book.

The topics in this chapter include:
• Checklist: pre-configuration on page 81 provides a procedure to follow to prepare to configure MIMIX on each system that participates in a MIMIX installation.
• Data that should not be replicated on page 83 describes how to consider what data should not be replicated.
• Planning for journaled IFS objects, data areas, and data queues on page 85 describes considerations when planning to use advanced journaling for IFS objects, data areas, or data queues.
• Starting the MIMIXSBS subsystem on page 90 describes how to start the MIMIXSBS subsystem, in which all MIMIX products run.
• Accessing the MIMIX Main Menu on page 91 describes the MIMIX Main Menu and its two assistance levels, basic and intermediate, which provide options to help simplify daily interactions with MIMIX.


Checklist: pre-configuration
You need to configure MIMIX on each system that participates in a MIMIX installation. Do the following:
1. By now, you should have completed the checklist for installing MIMIX software in the License and Availability Manager book. You should also have turned on product-level security and granted authority to user profiles to control access to the MIMIX products.
2. Review the information in Data that should not be replicated on page 83.
3. Decide what replication choices are appropriate for your environment. For detailed information, see the chapter Planning choices and details by object class on page 93.
4. If it is not already active, start the MIMIXSBS subsystem using topic Starting the MIMIXSBS subsystem on page 90.
5. Configure each system in the MIMIX installation, beginning with the management system. The chapter Configuration checklists on page 137 identifies the primary options you have for configuring MIMIX.
6. Once you complete the configuration process you choose, you may also need to do one or more of the following:
   • If you plan to use MIMIX Monitor in conjunction with MIMIX, you may need to write exit programs for monitoring activity and you may want to ensure that your monitor definitions are replicated. See the Using MIMIX book for more information.
   • Verify the configuration.
   • Verify any exit programs that are called by MIMIX.
   • Update any automation programs you use with MIMIX and verify their operation.
   • If you plan to use switching support, you or your Certified MIMIX Consultant may need to take additional action to set up and test switching. In order to use MIMIX Switch Assistant, a default model switch framework must be configured and identified in MIMIX policies. For more information about MIMIX Model Switch Framework, see the Using MIMIX Monitor book. For more information about switching and policies, see the Using MIMIX book.



Data that should not be replicated


There are some considerations to keep in mind when defining data for replication. Not only do you need to determine what is critical to replicate, but you also need to consider data that should not be replicated. As you identify your critical data, consider the following:
• You may not need to replicate temporary files, work files, and temporary objects, including DLOs and stream files. Evaluate how your applications use such files to determine if they need to be replicated.

You should not replicate the following:
• The LAKEVIEW, MIMIXQGPL, or any MIMIX installation libraries.
• The LAKEVIEW or MIMIXOWN user profiles.
• System user profiles from one system to another. For example, QSYSOPR and QSECOFR should not be replicated.
• IBM i5/OS objects from one system to another. IBM-supplied libraries, files, and other objects for i5/OS typically begin with the prefix letter Q.


Planning for journaled IFS objects, data areas, and data queues
You can choose to use the cooperative processing support within MIMIX to replicate any combination of journaled IFS objects, data area objects, or data queue objects using user journal replication processes. In addition to configuration and journaling requirements and the restrictions that apply, you need to address several other considerations when planning to replicate journaled IFS objects, data areas, or data queues. These considerations affect whether journals should be shared, whether objects should be replicated in a data group shared with database files, whether configuration changes are needed to change apply sessions for database files, and whether exit programs need to be updated.

Is user journal replication appropriate for your environment?


While user journal replication has significant advantages, it may not be appropriate for your environment, or it may be appropriate for only some of the supported object types. Consider the following:
• Do the objects remain relatively static? Static objects typically persist after they are created, while their data may change. Examples of more dynamic objects include temporary objects, which are created, renamed, and deleted frequently. Objects for some applications, like those which heavily use *DTAQs, may be better suited to replication from the system journal.
• What release of IBM i is in use? On some operating system releases, the types of operations that can be replicated from a user journal are limited. The IBM i release in use may influence whether objects are considered static or dynamic for replication purposes.

The benefits of user journal replication are described in Benefits of advanced journaling on page 72. For restrictions and limitations, see Identifying data areas and data queues for replication on page 112 and Identifying IFS objects for replication on page 118.

Serialized transactions with database files


Transactions completed for database files and objects (IFS objects, data areas, or data queues) can be serialized with one another when they are applied to objects on the target system. If you require serialization, these objects and database files must share the same data group as well as the same database apply session, session A. Since MIMIX uses apply session A for all objects configured for advanced journaling, serialization may require that you change the configuration for database files to ensure that they use the same apply session. Load balancing may also become a concern. See Database apply session balancing on page 87.

Converting existing data groups


When converting an existing data group, consider the following:
• You may have previously used data groups with a Data group type (TYPE) value of *OBJ to separate replication of IFS, data area, or data queue objects from other activity. Converting these data groups to use advanced journaling will not cause problems with the data group. The data group definition and existing data group entries must be changed to the values required for advanced journaling.
• When converting an existing data group to use advanced journaling, all objects in the IFS path or the library specified that match the selection criteria are selected. You may need to create additional data group IFS or object entries in order to achieve the desired results. This may include creating entries that exclude objects from replication.
• Adding IFS, data area, or data queue objects configured for advanced journaling to an existing database replication environment may increase replication activity and affect performance. If a large amount of data is to be replicated, consider the overall replication performance and throughput requirements when choosing a configuration.
• Changing the replication mechanism for IFS objects, data areas, or data queues from system journal replication to user journal replication generally reduces bandwidth consumption, improves replication latency, and eliminates the locking contention associated with the save and restore process. However, if these objects have never been replicated, the addition of IFS byte stream files, data areas, or data queues to the replication environment will increase bandwidth consumption and processing workload.

Conversion examples
To illustrate a simple conversion, assume that the systems defined to data group KEYAPP are running on IBM i V5R4. You use this data group for system journal replication of the objects in library PRODLIB. The data group has one data group object entry, which has the following values:

   LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD)
   COOPDB(*YES) COOPTYPE(*FILE)

Example 1 - You decide to use advanced journaling for all *DTAARA and *DTAQ objects replicated with data group KEYAPP. You have confirmed that the data group definition specifies TYPE(*ALL) and does not need to change. After performing a controlled end of the data group, you change the data group object entry to have the following values:

   LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD)
   COOPDB(*YES) COOPTYPE(*FILE *DTAARA *DTAQ)

When the data group is started, object tracking entries are loaded and journaling is started for the data area and data queue objects in PRODLIB. Those objects will now be replicated from a user journal. Any other object types in PRODLIB continue to be replicated from the system journal.

Example 2 - You want to use advanced journaling for data group KEYAPP, but one data area, XYZ, must remain replicated from the system journal. You will need the data group object entry described in Example 1:

   LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD)
   COOPDB(*YES) COOPTYPE(*FILE *DTAARA *DTAQ)

You will also need a new data group object entry that specifies the following so that data area XYZ can be replicated from the system journal:

   LIB1(PRODLIB) OBJ1(XYZ) OBJTYPE(*DTAARA) PRCTYPE(*INCLD) COOPDB(*NO)
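The change in Example 1 could be made with the command that changes data group object entries. The command name (CHGDGOBJE) and the single-element data group identifier shown here are assumptions based on typical MIMIX naming, so prompt with F4 to confirm; the entry parameter values are exactly those listed above.

   CHGDGOBJE DGDFN(KEYAPP) LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) +
             PRCTYPE(*INCLD) COOPDB(*YES) COOPTYPE(*FILE *DTAARA *DTAQ)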

Database apply session balancing


In each data group, one database apply session, session A, is used for all IFS objects, data areas, and data queues replicated from a user journal. If you also replicate database files in the same data group, the way in which files are configured for replication can also affect how much data is processed by apply session A. In some cases, you may need to adjust the configured apply session in data group object and file entries, either to ensure that files which should be serialized remain in the same apply session or to move files to another apply session to manually balance loads. Consider the following:
• In MIMIX Dynamic Apply configurations, newly created database files are distributed evenly across database apply sessions by default. This ensures that the files are distributed in a way that will not overload any one apply session.
• In configurations using legacy cooperative processing, newly created database files are directed to apply session A by default. In data groups that also replicate IFS objects, data areas, or data queues through the user journal, it may be necessary to change the apply session to which cooperatively processed files are directed when the database files are created, to prevent apply session A from becoming overloaded. The apply session can be changed in the file entry options (FEOPT) on the data group object and file entries.
• Logical files and physical files with referential constraints also have apply session requirements to consider. For more information see Considerations for LF and PF files on page 105.

User exit program considerations


When new or different journaled object types are added to an existing data group, user exit programs may be affected. Be aware of the following exit program considerations when changing an existing configuration to include IFS objects, data areas, or data queues configured for replication processing from a user journal:
• When IFS objects, data areas, or data queues are journaled to a user journal, new journal entry codes are provided to the user exit program. If the user exit program interprets the journal code, changes may be required.
• The path name for IFS objects cannot be interpreted in the same way as it can for database files. MIMIX uses the file ID (FID) to identify the IFS object being replicated. User exit programs that rely on the library and file names in the journal entry may need to be changed to either ignore IFS journal entries or process them by resolving the FID to a path name using the IBM-supplied APIs.
• Journaled IFS objects and data queues can have incomplete journal entries. For incomplete journal entries, MIMIX provides two or more journal entries with duplicate journal entry sequence numbers, journal codes, and types to the user exit program when the data for the incomplete entry is retrieved. Programs need to correctly handle these duplicate entries representing the single, original journal entry.
• Journal entries for journaled IFS objects, data areas, and data queues are routed to the user exit program. This may be a performance consideration relative to user exit program design.

Contact your Certified MIMIX Consultant for assistance with user exit programs.


Starting the MIMIXSBS subsystem


By default, all MIMIX products run in the MIMIXSBS subsystem, which is created when you install the product. This subsystem must be active before you can use the MIMIX products. If the MIMIXSBS subsystem is not already active, start it by typing the command STRSBS SBS(MIMIXQGPL/MIMIXSBS) and pressing Enter. Any autostart job entries added to the MIMIXSBS subsystem will start when the subsystem is started.

Note: You can ensure that the MIMIX subsystem is started after each IPL by adding this command to the end of the startup program for your system. Due to the unique requirements and complexities of each MIMIX implementation, it is strongly recommended that you contact your Certified MIMIX Consultant to determine the best way to design and implement this change.
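A minimal CL fragment for a startup program is sketched below; the MONMSG handling is an assumption about how you may want to tolerate an already-active subsystem.

   /* Fragment for the end of the system startup program */
   STRSBS SBS(MIMIXQGPL/MIMIXSBS)
   MONMSG MSGID(CPF1010) /* Tolerate "subsystem already active" */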


Accessing the MIMIX Main Menu


The MIMIX command accesses the main menu for a MIMIX installation. The MIMIX Main Menu has two assistance levels, basic and intermediate. The command defaults to the basic assistance level, shown in Figure 6, with options designed to simplify day-to-day interaction with MIMIX. Figure 7 shows the intermediate assistance level. The options on the menu vary with the assistance level. In either assistance level, the available options also depend on the MIMIX products installed in the installation library and their licensing. The products installed and the licensing also affect subsequent menus and displays.

Accessing the menu - If you know the name of the MIMIX installation you want, you can use the name to library-qualify the command: type the command library-name/MIMIX and press Enter. The default name of the installation library is MIMIX. If you do not know the name of the library, do the following:
1. Type the command LAKEVIEW/WRKPRD and press Enter.
2. Type a 9 (Display product menu) next to the product in the library you want on the Lakeview Technology Installed Products display and press Enter.

Changing the assistance level - The F21 key (Assistance level) on the main menu toggles between the basic and intermediate levels of the menu. You can also specify the Assistance Level (ASTLVL) parameter on the MIMIX command.

Note: Procedures are written assuming you are using the MIMIX Availability Status (WRKMMXSTS) display, which can only be selected from the MIMIX Basic Main Menu. We recommend you use the MIMIX Basic Main Menu unless you must access the MIMIX Intermediate Main Menu.
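For example, assuming the default installation library name of MIMIX, the following invocations open the menu. The ASTLVL value name (*BASIC) is an assumption, so prompt the command with F4 to confirm the supported values.

   MIMIX/MIMIX                 /* Opens the menu at the default (basic) level */
   MIMIX/MIMIX ASTLVL(*BASIC)  /* Explicitly requests the basic level;        */
                               /* value name assumed                          */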
Figure 6. MIMIX Basic Main Menu

 MIMIX                     MIMIX Basic Main Menu
                                                          System:   SYSTEM1
 Select one of the following:

      1. Availability status                       WRKMMXSTS
      2. Start MIMIX
      3. End MIMIX
      5. Start or complete switch

     11. Configuration menu
     12. Work with monitors                        WRKMON
     13. Work with messages                        WRKMSGLOG

     31. Product management menu                   LAKEVIEW/PRDMGT

 Selection or command
 ===> _________________________________________________________________________

 F3=Exit   F4=Prompt   F9=Retrieve   F21=Assistance level   F12=Cancel
 (C) Copyright Lakeview Technology Inc., 1990, 2007.

Figure 7. MIMIX Intermediate Main Menu

 MIMIX                  MIMIX Intermediate Main Menu
                                                          System:   SYSTEM1
 Select one of the following:

      1. Work with data groups                     WRKDG
      2. Work with systems                         WRKSYS
      3. Work with messages                        WRKMSGLOG
      4. Work with monitors                        WRKMON

     11. Configuration menu
     12. Compare, verify, and synchronize menu
     13. Utilities menu

     31. Product management menu                   LAKEVIEW/PRDMGT

 Selection or command
 ===> _________________________________________________________________________

 F3=Exit   F4=Prompt   F9=Retrieve   F21=Assistance level   F12=Cancel
 (C) Copyright Lakeview Technology Inc., 1990, 2007.


Chapter 4

Planning choices and details by object class


This chapter describes the replication choices available for objects and identifies critical requirements, limitations, and configuration considerations for those choices.

Many MIMIX processes are customized to provide optimal handling for certain classes of related object types and differentiate between database files, library-based objects, integrated file system (IFS) objects, and document library objects (DLOs). Each class of information is identified for replication by a corresponding class of data group entries. A data group can have any combination of data group entry classes. Some classes even support multiple choices for replication.

In each class, a data group entry identifies a source of information that can be replicated by a specific data group. When you configure MIMIX, each data group entry you create identifies one or more objects to be considered for replication or to be explicitly excluded from replication. When determining whether to replicate a journaled transaction, MIMIX evaluates all of the data group entries for the class to which the object belongs. If the object is within the name space determined by the existing data group entries, the transaction is replicated.

The topics in this chapter include:
• Replication choices by object type on page 96 identifies the available replication choices for each object class.
• Configured object auditing value for data group entries on page 98 describes how MIMIX uses a configured object auditing value that is identified in data group entries and when MIMIX will change an object's auditing value to match this configuration value.
• Identifying library-based objects for replication on page 100 includes information that is common to all library-based objects, such as how MIMIX interprets the data group object entries defined for a data group. This topic also provides examples and additional detail about configuring entries to replicate spooled files and user profiles.
• Identifying logical and physical files for replication on page 105 identifies the replication choices and considerations for *FILE objects with logical or physical file extended attributes. This topic identifies the requirements, limitations, and configuration requirements of MIMIX Dynamic Apply and legacy cooperative processing.
• Identifying data areas and data queues for replication on page 112 identifies the replication choices and configuration requirements for library-based objects of type *DTAARA and *DTAQ. This topic also identifies restrictions for replication of these object types when user journal processes (advanced journaling) are used.
• Identifying IFS objects for replication on page 118 identifies supported and unsupported file systems, replication choices, and considerations such as long path names and case sensitivity for IFS objects. This topic also identifies restrictions and configuration requirements for replication of these object types when user journal processes (advanced journaling) are used.
• Identifying DLOs for replication on page 124 describes how MIMIX interprets the data group DLO entries defined for a data group and includes examples for documents and folders.
• Processing of newly created files and objects on page 127 describes how new IFS objects, data areas, data queues, and files that have journaling implicitly started are replicated from the user journal.
• Processing variations for common operations on page 130 describes configuration-related variations in how MIMIX replicates move/rename, delete, and restore operations.


Replication choices by object type


With version 5, a new configuration of MIMIX that uses shipped defaults for all configuration choices will use remote journaling support for replication from user journals. Default configuration choices result in physical files (data and source) as well as logical files being processed through user journal replication, and all other supported object types and classes being replicated using system journal replication. You can optionally use other replication processes, as described in Table 6.
Table 6. Replication choices by object class

Objects of type *FILE with extended attributes PF (data, source) and LF
  Default:  user journal with MIMIX Dynamic Apply (see note 1), using object
            entries and file entries.
  Other:    for PF data files, legacy cooperative processing (see note 2),
            using object entries and file entries; for PF source and LF
            files, system journal, using object entries.
  More information: Identifying logical and physical files for replication
            on page 105.

*FILE objects with other extended attributes
  Default:  system journal, using object entries.
  More information: Identifying library-based objects for replication on
            page 100.

Objects of type *DTAARA
  Default:  system journal, using object entries.
  Other:    advanced journaling (see note 2), using object entries and
            object tracking entries; or the data area polling process
            associated with a user journal (see note 2), using data area
            entries.
  More information: Identifying data areas and data queues for replication
            on page 112.

Objects of type *DTAQ
  Default:  system journal, using object entries.
  Other:    advanced journaling (see note 2), using object entries and
            object tracking entries.
  More information: Identifying data areas and data queues for replication
            on page 112.

Other library-based objects
  Default:  system journal, using object entries.
  More information: Identifying library-based objects for replication on
            page 100.

IFS objects
  Default:  system journal, using IFS entries.
  Other:    advanced journaling (see note 2), using IFS entries and IFS
            tracking entries.
  More information: Identifying IFS objects for replication on page 118.

DLOs
  Default:  system journal, using DLO entries.
  More information: Identifying DLOs for replication on page 124.

Notes:
1. New data groups are created to use remote journaling and to cooperatively process files using MIMIX Dynamic Apply. Existing data groups can be converted to this method of cooperative processing.
2. User journal replication can be configured for either remote journaling or MIMIX source-send processes.


Configured object auditing value for data group entries


When you create data group entries for library-based objects, IFS objects, or DLOs, you can specify an object auditing value within the configuration. This configured object auditing value affects how MIMIX handles changes to attributes of objects. It is particularly important for, but not limited to, objects configured for system journal replication.

The Object auditing value (OBJAUD) parameter defines a configured object auditing level for use by MIMIX. This configured value is associated with all objects identified for processing by the data group entry. An object's actual auditing level determines the extent to which changes to the object are recorded in the system journal and replicated by MIMIX. The configured value is used during initial configuration and during processing of requests to compare objects that are identified by configuration data.

In specific scenarios, MIMIX evaluates whether an object's auditing value matches the configured value of the data group entry that most closely matches the object being processed. If the actual value is lower than the configured value, MIMIX sets the object to the configured value so that future changes to the object will be recorded as expected in the system journal and therefore can be replicated.

Note: MIMIX only considers changing an object's auditing value when the data group object entry is configured for system journal replication. MIMIX does not change the object's value for files that are configured for MIMIX Dynamic Apply or legacy cooperative processing, or for data areas and data queues that are configured for user journal replication.

The configured value specified in data group entries can affect replication of some journal entries generated when an object attribute changes. Specifically, the configured value can affect replication of T-ZC journal entries for files and IFS objects and T-YC entries for DLOs. Changes that generate other types of journal entries are not affected by this parameter. When MIMIX changes the audit level, the possible values have the following results:
• The default value, *CHANGE, ensures that all changes to the object by all users are recorded in the system journal.
• The value *ALL ensures that all changes or read accesses to the object by all users are recorded in the system journal. The journal entries generated by read accesses to objects are not used for replication, and their presence can adversely affect replication performance.
• The value *NONE results in no entries being recorded in the system journal when the object is accessed or changed.

The values *CHANGE and *ALL result in replication of T-ZC and T-YC journal entries. The value *NONE prevents replication of attribute and data changes for the identified object or DLO because T-ZC and T-YC entries are not recorded in the system journal. For files configured for MIMIX Dynamic Apply and any IFS objects, data areas, or data queues configured for user journal replication, the value *NONE can improve MIMIX performance by preventing unneeded entries from being written to the system journal.
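When MIMIX raises an object's actual auditing value to the configured value, the effect is equivalent to running the operating system's CHGOBJAUD command; a sketch with illustrative object names follows.

   CHGOBJAUD OBJ(FINANCE/ACCOUNTG) OBJTYPE(*FILE) OBJAUD(*CHANGE)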


When a compare request includes an object with a configured object auditing value of *NONE, any differences found for attributes that could generate T-ZC or T-YC journal entries are reported as *EC (equal configuration).

You may also want to read the following:
• For more information about when MIMIX sets an object's auditing value, see Managing object auditing on page 57.
• For more information about manually setting values, and for examples, see Setting data group auditing values manually on page 297.
• To see what attributes can be compared and replicated, see these topics: Attributes compared and expected results - #FILATR, #FILATRMBR audits on page 591; Attributes compared and expected results - #OBJATR audit on page 596; Attributes compared and expected results - #IFSATR audit on page 604; and Attributes compared and expected results - #DLOATR audit on page 606.


Identifying library-based objects for replication


MIMIX uses data group object entries to identify whether to process transactions for library-based objects. Collectively, the object entries identify which library-based objects can be replicated by a particular data group.

Each data group object entry identifies one or more library-based objects. An object entry can specify either a specific or a generic name for the library and object. In addition, each object entry identifies the object types and extended object attributes (for *FILE and *DEVD objects) to be selected, defines a configured object auditing level for the identified objects, and indicates whether the identified objects are to be included in or excluded from replication.

For most supported object types which can be identified by data group object entries, only the system journal replication path is available. For a list of object types, see Supported object types for system journal replication on page 549. This list includes information about what can be specified for the extended attributes of *FILE objects. A limited number of object types which use the system journal replication path have unique configuration requirements. These are described in Identifying spooled files for replication on page 102 and Replicating user profiles and associated message queues on page 104. For detailed procedures, see Configuring data group entries on page 265.

Replication options for object types journaled to a user journal - For objects of type *FILE, *DTAARA, and *DTAQ, MIMIX supports multiple replication methods. For these object types, additional configuration data is evaluated when determining what replication path to use for the identified objects.

For *FILE objects, the extended attribute and other configuration data are considered when MIMIX determines what replication path to use for identified objects. For logical and physical files, MIMIX supports several methods of replication. Each method varies in its efficiency, in its supported extended attributes, and in additional configuration requirements. See Identifying logical and physical files for replication on page 105 for additional details. For other extended attribute types, MIMIX supports only system journal replication; only data group object entries are required to identify these files for replication.

For *FILE objects configured for replication through the system journal, MIMIX caches extended file attribute information for a fixed set of *FILE objects. Also, the Omit content (OMTDTA) parameter provides the ability to omit a subset of data-changing operations from replication. For more information, see Caching extended attributes of *FILE objects on page 345 and Omitting T-ZC content from system journal replication on page 387.

For *DTAARA and *DTAQ object types, MIMIX supports replication using either system journal or user journal replication processes. A configuration that uses the user journal is also called an advanced journaling configuration. Additional information, including configuration requirements, is described in Identifying data areas and data queues for replication on page 112.


How MIMIX uses object entries to evaluate journal entries for replication
The following information and example can help you determine whether the objects you specify in data group object entries will be selected for replication. MIMIX determines which replication process will be used only after it determines whether the library-based object will be replicated.

When determining whether to process a journal entry for a library-based object, MIMIX looks for a match between the object information in the journal entry and one of the data group object entries. The data group object entries are checked from the most specific to the least specific. The library name is the first search element, followed by the object type, the attribute (for files and device descriptions), and the object name. The most significant match found (if any) is checked to determine whether to include or exclude the journal entry in replication.

Table 7 shows how MIMIX checks a journal entry for a match with a data group object entry. The columns are arranged to show the priority of the elements within the object entry, with the most significant (library name) at left and the least significant (object name) at right.
Table 7. Matching order for library-based object names

Search Order   Library Name   Object Type   Attribute (1)   Object Name
      1        Exact          Exact         Exact           Exact
      2        Exact          Exact         Exact           Generic*
      3        Exact          Exact         Exact           *ALL
      4        Exact          Exact         *ALL            Exact
      5        Exact          Exact         *ALL            Generic*
      6        Exact          Exact         *ALL            *ALL
      7        Exact          *ALL          Exact           Exact
      8        Exact          *ALL          Exact           Generic*
      9        Exact          *ALL          Exact           *ALL
     10        Exact          *ALL          *ALL            Exact
     11        Exact          *ALL          *ALL            Generic*
     12        Exact          *ALL          *ALL            *ALL
     13        Generic*       Exact         Exact           Exact
     14        Generic*       Exact         Exact           Generic*
     15        Generic*       Exact         Exact           *ALL
     16        Generic*       Exact         *ALL            Exact
     17        Generic*       Exact         *ALL            Generic*
     18        Generic*       Exact         *ALL            *ALL
     19        Generic*       *ALL          Exact           Exact
     20        Generic*       *ALL          Exact           Generic*
     21        Generic*       *ALL          Exact           *ALL
     22        Generic*       *ALL          *ALL            Exact
     23        Generic*       *ALL          *ALL            Generic*
     24        Generic*       *ALL          *ALL            *ALL

1. The extended object attribute is only checked for objects of type *FILE and *DEVD.


When configuring data group object entries, the flexibility of the generic support allows a variety of include and exclude combinations for a given library or set of libraries. But generic name support can also cause unexpected results if it is not well planned. Consider the search order shown in Table 7 when configuring data group object entries to ensure that objects are not unexpectedly included in or excluded from replication.

Example - Say that you have a data group configured with data group object entries like those shown in Table 9. The journal entries MIMIX is evaluating for replication are shown in Table 8.
Table 8. Sample journal transactions for objects in the system journal

Library    Object     Object Type
FINANCE    BOOKKEEP   *PGM
FINANCE    ACCOUNTG   *FILE
FINANCE    BALANCE    *DTAARA
FINANCE    ACCOUNT1   *DTAARA

A transaction is received from the system journal for program BOOKKEEP in library FINANCE. MIMIX will replicate this object since it fits the criteria of the first data group object entry shown in Table 9. A transaction for file ACCOUNTG in library FINANCE would also be replicated since it fits the third entry. A transaction for data area BALANCE in library FINANCE would not be replicated since it fits the second entry, an Exclude entry.
Table 9. Sample of data group object entries, arranged in order from most to least specific

Entry   Source Library   Object Type   Object Name   Attribute   Process Type
  1     FINANCE          *PGM          *ALL          *ALL        *INCLD
  2     FINANCE          *DTAARA       *ALL          *ALL        *EXCLD
  3     FINANCE          *ALL          ACC*          *ALL        *INCLD

Likewise, a transaction for data area ACCOUNT1 in library FINANCE would not be replicated. Although the transaction fits both the second and third entries shown in Table 9, the second entry determines whether to replicate because it provides a more significant match in the second criterion checked (object type). The second entry provides an exact match for the library name, an exact match for the object type, and an object name match to *ALL. In order for MIMIX to process the data area ACCOUNT1, an additional data group object entry with process type *INCLD could be added for object type *DTAARA with an exact name of ACCOUNT1 or a generic name ACC*.
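Such an include entry could be added with the command that adds data group object entries. The command name (ADDDGOBJE) and the data group identifier shown here are assumptions based on typical MIMIX naming; the remaining values come from the example above.

   ADDDGOBJE DGDFN(MYDGDFN) LIB1(FINANCE) OBJ1(ACCOUNT1) +
             OBJTYPE(*DTAARA) PRCTYPE(*INCLD)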

Identifying spooled files for replication


MIMIX supports spooled file replication on an output queue basis. When an output queue (*OUTQ) is identified for replication by a data group object entry, its spooled files are not automatically replicated when default values are used. Table 10 identifies the values required for spooled file replication. When MIMIX processes an output queue that is identified by an object entry with the appropriate settings, all spooled files for the output queue (*OUTQ) are replicated by system journal replication processes.
Table 10. Data group object entry parameter values for spooled file replication

Parameter                             Value
Object type (OBJTYPE)                 *ALL or *OUTQ
Replicate spooled files (REPSPLF)     *YES

It is important to consider which spooled files must be replicated and which should not. Some output queues contain a large number of non-critical spooled files and probably should not be replicated. Most likely, you want to limit the spooled files that you replicate to mission-critical information. It may be useful to direct important spooled files that should be replicated to specific output queues instead of defining a large number of output queues for replication.

When an output queue is selected for replication and the data group object entry specifies *YES for Replicate spooled files, MIMIX ensures that the values *SPLFDTA and *PRTDTA are included in the system value for the security auditing level (QAUDLVL). This causes the system to generate spooled file (T-SF) entries in the system journal. When a spooled file is created, moved, or deleted, or its attributes are changed, the resulting entries in the system journal are processed by a MIMIX object send job and are replicated.
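An object entry that enables spooled file replication for a single critical output queue might be added as sketched below. The command name (ADDDGOBJE), the data group identifier, and the output queue name are assumptions; the OBJTYPE and REPSPLF values are those required by Table 10.

   ADDDGOBJE DGDFN(MYDGDFN) LIB1(PRODLIB) OBJ1(CRITOUTQ) +
             OBJTYPE(*OUTQ) PRCTYPE(*INCLD) REPSPLF(*YES)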

Additional choices for spooled file replication


MIMIX provides additional options to customize your choices for spooled file replication.

Keeping deleted spooled files: You can specify to keep spooled files on the target system after they have been deleted from the source system by using the Keep deleted spooled files parameter on the data group definition. The parameter is also available on commands to add and change data group object entries.

Options for spooled file status: You can specify additional options for processing spooled files. The Spooled file options (SPLFOPT) parameter is only available on commands to add and change data group object entries. The following values support choosing how the status of replicated spooled files is handled on the target system:

*NONE - This is the shipped default value. Spooled files on the target system will have the same status as on the source system.
*HLD - All replicated spooled files are put on hold on the target system regardless of their status on the source system.
*HLDONSAV - All replicated spooled files that have a saved status on the source system will be put on hold on the target system. Spooled files on the source system which have other status values will have the same status on the target system.

This parameter can be helpful if your environment includes programs which automatically process spooled files on the target system. For example, if you have a program that automatically prints spooled files, you may want to use one of these values to control what is printed after replication when printer writers are active.

If you move a spooled file between output queues which have different configured values for the SPLFOPT parameter, consider the following:

- Spooled files moved from an output queue configured with SPLFOPT(*NONE) to an output queue configured with SPLFOPT(*HLD) are placed in a held state on the target system.
- Spooled files moved from an output queue configured with SPLFOPT(*HLD) to an output queue configured with SPLFOPT(*NONE) or SPLFOPT(*HLDONSAV) remain in a held state on the target system until you take action to release them.
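For example, a minimal sketch of setting the SPLFOPT value on an existing entry, assuming a Change Data Group Object Entry (CHGDGOBJE) command with hypothetical names:

/* Hold all replicated spooled files from this queue on the target */
/* so an automatic print program does not reprint them             */
CHGDGOBJE DGDFN(MYDG SOURCE TARGET) LIB1(ACCOUNTG) +
          OBJ1(CRITOUTQ) OBJTYPE(*OUTQ) SPLFOPT(*HLD)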

Replicating user profiles and associated message queues


When user profile objects (*USRPRF) are identified by a data group object entry which specifies *ALL or *USRPRF for the Object type parameter, MIMIX replicates the objects using system journal replication processes. When MIMIX replicates user profiles, the message queue (*MSGQ) objects associated with the *USRPRF objects may also be created automatically on the target system as a result of replication. If the *MSGQ objects are not also configured for replication, the private authorities for the *MSGQ objects may not be the same between the source and target systems. If it is necessary for the private authorities for the *MSGQ objects to be identical between the source and target systems, it is recommended that the *MSGQ objects associated with the *USRPRF objects be configured for replication.

For example, Table 11 shows the data group object entries required to replicate user profiles beginning with the letter A and maintain identical private authorities on the associated message queues. In this example, the user profile ABC and its associated message queue are excluded from replication.
Table 11. Sample data group object entries for maintaining private authorities of message queues associated with user profiles

Entry   Source Library   Object Type   Object Name   Process Type
1       QSYS             *USRPRF       A*            *INCLD
2       QUSRSYS          *MSGQ         A*            *INCLD
3       QSYS             *USRPRF       ABC           *EXCLD
4       QUSRSYS          *MSGQ         ABC           *EXCLD
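Expressed as commands, the four entries in Table 11 might be added as follows; this is a sketch assuming the ADDDGOBJE command, the PRCTYPE parameter name, and a hypothetical data group name:

ADDDGOBJE DGDFN(MYDG SOURCE TARGET) LIB1(QSYS) OBJ1(A*) +
          OBJTYPE(*USRPRF) PRCTYPE(*INCLD)
ADDDGOBJE DGDFN(MYDG SOURCE TARGET) LIB1(QUSRSYS) OBJ1(A*) +
          OBJTYPE(*MSGQ) PRCTYPE(*INCLD)
ADDDGOBJE DGDFN(MYDG SOURCE TARGET) LIB1(QSYS) OBJ1(ABC) +
          OBJTYPE(*USRPRF) PRCTYPE(*EXCLD)
ADDDGOBJE DGDFN(MYDG SOURCE TARGET) LIB1(QUSRSYS) OBJ1(ABC) +
          OBJTYPE(*MSGQ) PRCTYPE(*EXCLD)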


Identifying logical and physical files for replication


MIMIX supports multiple ways of replicating *FILE objects with extended attributes of LF, PF-DTA, PF38-DTA, PF-SRC, and PF38-SRC. MIMIX configuration data determines the replication method used for these logical and physical files. The following configurations are possible:

MIMIX Dynamic Apply - MIMIX Dynamic Apply is strongly recommended. In this configuration, logical files and physical files (source and data) are replicated primarily through the user (database) journal. This configuration is the most efficient way to replicate LF, PF-DTA, PF38-DTA, PF-SRC, and PF38-SRC files. In this configuration, files are identified by data group object entries and file entries.

Legacy cooperative processing - Legacy cooperative processing supports only data files (PF-DTA and PF38-DTA). It does not support source physical files or logical files. In legacy cooperative processing, record data and member data operations are replicated through user journal processes, while all other file transactions such as creates, moves, renames, and deletes are replicated through system journal processes. The database processes can use either remote journaling or MIMIX source-send processes, making legacy cooperative processing the recommended choice for physical data files when the remote journaling environment required by MIMIX Dynamic Apply is not possible. In this configuration, files are identified by data group object entries and file entries.

User journal (database) only configurations - Environments that do not meet MIMIX Dynamic Apply requirements but which have data group definitions that specify TYPE(*DB) can only replicate data changes to physical files. These configurations may not be able to replicate other operations such as creates, restores, moves, renames, and some copy operations. In this configuration, files are identified by data group file entries.

System journal (object) only configurations - Data group definitions which specify TYPE(*OBJ) are less efficient at processing logical and physical files. The entire member is updated with each replicated transaction. Members must be closed in order for replication to occur. In this configuration, files are identified by data group object entries.

You should be aware of common characteristics of replicating library-based objects, such as when the configured object auditing value is used and how MIMIX interprets data group entries to identify objects eligible for replication. For this information, see Configured object auditing value for data group entries on page 98 and How MIMIX uses object entries to evaluate journal entries for replication on page 101. Some advanced techniques may require specific configurations. See Configuring advanced replication techniques on page 353 for additional information. For detailed procedures, see Creating data group object entries on page 267.

Considerations for LF and PF files


As of version 5, newly created data groups are automatically configured to use MIMIX Dynamic Apply when its requirements and restrictions are met and shipped command defaults are used. With this configuration, logical and physical files are processed primarily from the user journal.

Cooperative journal - The value specified for the Cooperative journal (COOPJRN) parameter in the data group definition is critical to determining how files are cooperatively processed. When creating a new data group, you can explicitly specify a value or you can allow MIMIX to automatically change the default value (*DFT) to either *USRJRN or *SYSJRN based on whether operating system and configuration requirements for MIMIX Dynamic Apply are met. When requirements are met, MIMIX changes the value *DFT to *USRJRN. When the MIMIX Dynamic Apply requirements are not met, MIMIX changes *DFT to *SYSJRN.

Note: Data groups created prior to upgrading to version 5 continue to use their existing configuration. The installation process sets the value of COOPJRN to *SYSJRN and this value remains in effect until you take action as described in Converting to MIMIX Dynamic Apply on page 150.

When a data group definition meets the requirements for MIMIX Dynamic Apply, any logical files and physical (source and data) files properly identified for cooperative processing will be processed via MIMIX Dynamic Apply unless a known restriction prevents it. When a data group definition does not meet the requirements for MIMIX Dynamic Apply but still meets legacy cooperative processing requirements, any PF-DTA or PF38-DTA files properly configured for cooperative processing will be replicated using legacy cooperative processing. All other types of files are processed using system journal replication.
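As a sketch, a new data group that allows MIMIX to resolve the cooperative journal default might be created as follows; the Create Data Group Definition (CRTDGDFN) command name and the data group name are assumptions, while TYPE and COOPJRN are the parameters described above:

/* MIMIX resolves COOPJRN(*DFT) to *USRJRN when MIMIX Dynamic */
/* Apply requirements are met, otherwise to *SYSJRN           */
CRTDGDFN DGDFN(MYDG SOURCE TARGET) TYPE(*ALL) COOPJRN(*DFT)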

Logical file considerations - Consider the following for logical files:

- Logical files are replicated through the user journal when MIMIX Dynamic Apply requirements are met. Otherwise, they are replicated through the system journal.
- It is strongly recommended that logical files reside in the same data group as all of their associated physical files.

Physical file considerations - Consider the following for physical files:

- Physical files (source and data) are replicated through the user journal when MIMIX Dynamic Apply requirements are met. Otherwise, data files are replicated using legacy cooperative processing if those requirements are met, and source files are replicated through the system journal.
- If a data group definition specifies TYPE(*DB) and the configuration meets other MIMIX Dynamic Apply requirements, source files need to be identified by both data group object entries and data group file entries.
- If a data group is configured for only user journal replication (TYPE is *DB) and does not meet other configuration requirements for MIMIX Dynamic Apply, source files should be identified by only data group file entries.
- If a data group is configured for only system journal replication (TYPE is *OBJ), any source files should be identified by only data group object entries. Any data group object entries configured for cooperative processing will be replicated through the system journal and should not have any corresponding data group file entries.
- Physical files with referential constraints require a field in another physical file to be valid. All physical files in a referential constraint structure must be in the same database apply session.

See Requirements and limitations of MIMIX Dynamic Apply on page 110 and Requirements and limitations of legacy cooperative processing on page 111 for additional information. For more information about load balancing apply sessions, see Database apply session balancing on page 87.

Commitment control - This database technique allows multiple updates to one or more files to be considered a single transaction. When used, commitment control maintains database integrity by not exposing a part of a database transaction until the whole transaction completes. This ensures that there are no partial updates when the process is interrupted prior to the completion of the transaction. This technique is also useful in the event that a partially updated transaction must be removed, or rolled back, from the files or when updates identified as erroneous need to be removed.

MIMIX fully simulates commitment control on the target system. When commitment control is used on a source system in a MIMIX environment, MIMIX maintains the integrity of the database on the target system by preventing partial transactions from being applied until the whole transaction completes. If the source system becomes unavailable, MIMIX will not have applied incomplete transactions on the target system. In the event of an incomplete (or uncommitted) commitment cycle, the integrity of the database is maintained.

If your application dynamically creates database files that are subsequently used in a commitment control environment, use MIMIX Dynamic Apply for replication. Without MIMIX Dynamic Apply, replication of the create operation may fail if a commit cycle is open when MIMIX tries to save the file. The save operation will be delayed and may fail if the file being saved has uncommitted transactions.
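As background, the following minimal CL sketch shows a commitment control boundary using standard i5/OS commands (not MIMIX-specific); the program and library names are hypothetical:

STRCMTCTL LCKLVL(*CHG)       /* Begin commitment control        */
CALL PGM(APPLIB/POSTBATCH)   /* Program updates journaled files */
                             /* opened under commitment control */
COMMIT                       /* Expose all updates as one unit  */
ENDCMTCTL                    /* End commitment control          */

On the target system, MIMIX defers applying the updates made within the commit cycle until the commit boundary is seen in the journal, which preserves the all-or-nothing behavior described above.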

Files with LOBs


Large objects (LOBs) in files that are configured for either MIMIX Dynamic Apply or legacy cooperative processing are automatically replicated. LOBs can greatly increase the amount of data being replicated. As a result, you may see some degradation in your replication activity. The amount of degradation is proportionate to the number of journal entries with LOBs that are applied per hour. This is also true during switch processing if you are using remote journaling and have unconfirmed entries with LOB data.

Since the volume of data to be replicated can be very large, you should consider using the minimized journal entry data function along with LOB replication. IBM support for minimized journal entry data can be extremely helpful when database records contain static, very large objects. If minimized journal entry data is enabled, journal entries for database files containing unchanged LOB data may be complete and therefore processed like any other complete journal entry. This can significantly improve performance, throughput, and storage requirements. If minimized journal entry data is used with files containing LOBs, keyed replication is not supported. For more information, see Minimized journal entry data on page 339.

User exit programs may be affected when journaled LOB data is added to an existing data group. Non-minimized LOB data produces incomplete entries. For incomplete journal entries, two or more entries with duplicate journal sequence numbers and journal codes and types will be provided to the user exit program when the data for the incomplete entry is retrieved and segmented. Programs need to correctly handle these duplicate entries representing the single, original journal entry.

You should also be aware of the following restrictions:

- Copy Active File (CPYACTF) and Reorganize Active File (RGZACTF) do not work against database files with LOB fields.
- There is no collision detection for LOB data. Most collision detection classes compare the journal entries with the content of the record on the target system. Although you can compare the actual content of the record, you cannot compare the content of the LOBs.

Configuration requirements for LF and PF files


MIMIX Dynamic Apply and legacy cooperative processing have unique requirements for data group definitions as well as many common requirements for data group object entries and file entries, as indicated in Table 12. In both configurations, you must have:

- A data group definition which specifies the required values.
- One or more data group object entries that specify the required values. These entries identify the items within the name space for replication. You may need to create additional entries to achieve the desired results, including entries which specify a Process type of *EXCLD. The identified existing objects must be journaled to the journal defined for the data group.
- Data group file entries for the items identified by data group object entries. Processing cannot occur without these corresponding data group file entries.
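For example, a single object entry that satisfies the common data group object entry values in Table 12 might look like the following sketch; the command name, PRCTYPE parameter name, and the library and data group names are assumptions:

/* Cooperatively process all files in APPLIB */
ADDDGOBJE DGDFN(MYDG SOURCE TARGET) LIB1(APPLIB) OBJ1(*ALL) +
          OBJTYPE(*FILE) PRCTYPE(*INCLD) +
          COOPDB(*YES) COOPTYPE(*FILE)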


Table 12. Key configuration values required for MIMIX Dynamic Apply and legacy cooperative processing

                                        MIMIX Dynamic Apply    Legacy Cooperative
Critical Parameters                     Required Values        Processing Required Values   Configuration Notes

Data Group Definition
  Data group type (TYPE)                *ALL or *DB            *ALL                         See Requirements and limitations of MIMIX Dynamic Apply on page 110.
  Use remote journal link (RJLNK)       *YES                   any value
  Cooperative journal (COOPJRN)         *DFT or *USRJRN        *DFT or *SYSJRN              See the discussion of the cooperative journal default in Considerations for LF and PF files on page 105.
  File and tracking ent. opts (FEOPT),  *POSITION              any value                    See Requirements and limitations of MIMIX Dynamic Apply on page 110.
  Replication type

Data Group Object Entries
  Object type (OBJTYPE)                 *ALL or *FILE          *ALL or *FILE
  Attribute (OBJATR)                    *ALL or one of the     *ALL, PF-DTA, or PF38-DTA
                                        following: LF, LF38,
                                        PF-DTA, PF-SRC,
                                        PF38-DTA, PF38-SRC
  Cooperate with database (COOPDB)      *YES                   *YES
  Cooperating object types (COOPTYPE)   *FILE                  *FILE                        See Corresponding data group file entries on page 109.
  File and tracking ent. opts (FEOPT),  *POSITION              any value                    See Requirements and limitations of MIMIX Dynamic Apply on page 110.
  Replication type

Corresponding data group file entries - Both MIMIX Dynamic Apply and legacy cooperative processing require that existing files identified by a data group object entry which specifies *YES for the Cooperate with DB (COOPDB) parameter must also be identified by data group file entries. When a file is identified by both a data group object entry and a data group file entry, the following are also required:

- The object entry must enable the cooperative processing of files by specifying COOPDB(*YES) and COOPTYPE(*FILE).
- If name mapping is used between systems, the data group object entry and file entry must have the same name mapping defined.
- If the data group object entry and file entry specify different values for the File and tracking ent. opts (FEOPT) parameter, the values specified in the data group file entry take precedence.
- Files defined by data group file entries must have journaling started and must be synchronized. If journaling is not started, MIMIX cannot replicate activity for the file.

Typically, data group object entries are created during initial configuration and are then used as the source for loading the data group file entries. The #DGFE audit can be used to determine whether corresponding data group file entries exist for the files identified by data group object entries.
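For example, loading the file entries from the configured object entries might be done with a sketch like the following; the Load Data Group File Entries (LODDGFE) command name and the parameter shown for selecting object entries as the configuration source are assumptions:

/* Build data group file entries from the data group object entries */
LODDGFE DGDFN(MYDG SOURCE TARGET) CFGSRC(*DGOBJE)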

Requirements and limitations of MIMIX Dynamic Apply


MIMIX Dynamic Apply requires that user journal replication be configured to use remote journaling. Specific data group definition and data group entry requirements are listed in Table 12. MIMIX Dynamic Apply configurations have the following limitations:

Operating system release - The following object changes are only replicated when running i5/OS release V5R4 or later: source file date/time, compiler, object control level, licensed program, program temporary fixes (PTF), authorized program analysis reports (APAR), allow change by program, user-defined attributes, days used count and reset date, product option ID, product option load ID, component ID, last used date, change date and time stamp, and member days used count and reset date.

Files in library - It is recommended that files within a single library be replicated using the same user journal.

Data group file entries for members - Data group file entries (DGFE) for specific member names are not supported unless they are created by MIMIX. MIMIX may create these for error hold processing.

Name mapping - MIMIX Dynamic Apply configurations support name mapping at the library level only. Entries with object name mapping are not supported. For example, MYLIB/MYOBJ mapped to MYLIB/OTHEROBJ is not supported. If you require object name mapping, it is supported in legacy cooperative processing configurations.

TYPE(*DB) data groups - MIMIX Dynamic Apply configurations that specify TYPE(*DB) in the data group definition will not be able to replicate the following actions:

- Files created using CPYF CRTFILE(*YES) on OS V5R3 into a library configured for replication
- Files restored into a source library configured for replication
- Files moved or renamed from a non-replicated library into a replicated library
- Files created which are not otherwise journaled upon creation into a library configured for replication

Files created by these actions can be added to the MIMIX configuration by running the #DGFE audit. The audit recovery will synchronize the file as part of adding the file entry to the configuration. In data groups that specify TYPE(*ALL), the above actions are fully supported.

Referential constraints - The following restrictions apply:

- If using referential constraints with *CASCADE or *SETNULL actions, you must specify *YES for the Journal on target (JRNTGT) parameter in the data group definition.
- Physical files with referential constraints require a field in another physical file to be valid. All physical files in a referential constraint structure must be in the same database apply session. If a particular preferred apply session has been specified in file entry options (FEOPT), MIMIX may ignore the specification in order to satisfy this restriction.

Positional replication only - Keyed replication is not supported by MIMIX Dynamic Apply. Data group definitions, data group object entries, and data group file entries must specify *POSITION for the Replication type element of the file and tracking entry options (FEOPT) parameter. The value *KEYED cannot be used.

Requirements and limitations of legacy cooperative processing


Legacy cooperative processing requires that data groups be configured for both database (user journal) and object (system journal) replication. While remote journaling is recommended, MIMIX source-send processing for database replication is also supported. Specific data group definition and data group entry requirements are listed in Table 12. Legacy cooperative processing configurations have the following limitations:

Supported extended attributes - Legacy cooperative processing supports only data files (PF-DTA and PF38-DTA). When a *FILE object is configured for legacy cooperative processing, only file and member attribute changes identified by T-ZC journal entries with a subclass of 7=Change are logged and replicated through system journal replication processes. All member and data changes are logged and replicated through user journal replication processes.

File entry options - If a file is moved or renamed and both names are defined by a data group file entry, the file entry options must be the same in both data group file entries.

Referential constraints - Physical files with referential constraints require a field in another physical file to be valid. All physical files in a referential constraint structure must be in the same apply session. If this is not possible, contact Lakeview Customer Support.

Identifying data areas and data queues for replication


MIMIX uses data group object entries to determine whether to process transactions for data area (*DTAARA) and data queue (*DTAQ) object types. Object entries can be configured so that these object types can be replicated from journal entries recorded in the system journal (default) or in a user journal (optional). While user journal replication, also called advanced journaling, has significant advantages, you must decide whether it is appropriate for your environment. For more information, see Planning for journaled IFS objects, data areas, and data queues on page 85. For detailed procedures, see Configuring data group entries on page 265.

Data areas can also be replicated by the data area poller process associated with the user journal. However, this type of replication is the least preferred and requires data group data area entries. See Creating data group data area entries on page 289.

Configuration requirements - data areas and data queues


For any data group object entries you create for data areas or data queues, consider the following:

- You must have at least one data group object entry which specifies a Process type of *INCLD. You may need to create additional entries to achieve the desired results. This may include entries which specify a Process type of *EXCLD.
- When specifying objects in data group object entries, specify only the objects that need to be replicated. Specifying *ALL or a generic name for the System 1 object (OBJ1) parameter will select multiple objects within the library specified for System 1 library (LIB1).
- When you create data group object entries, you can specify an object auditing value within the configuration. The configured object auditing value affects how MIMIX handles changes to attributes of library-based objects. It is particularly important for, but not limited to, objects configured for system journal replication. For objects configured for user journal replication, the configured value can affect MIMIX performance. For detailed information, see Configured object auditing value for data group entries on page 98.

Additional requirements for user journal replication - The following additional requirements must be met before data areas or data queues identified by data group object entries can be replicated with user journal processes:

- The data group definition and data group object entries must specify the values indicated in Table 13 for critical parameters.
- Object tracking entries must exist for the objects identified by properly configured object entries. Typically these are created automatically when the data group is started.
- Journaling must be started on both the source and target systems for the objects identified by object tracking entries.


Table 13. Critical configuration parameters for replicating *DTAARA and *DTAQ objects from a user journal

Critical Parameters                     Required Values    Configuration Notes
Data Group Definition
  Data group type (TYPE)                *ALL
Data Group Object Entry
  Cooperate with database (COOPDB)      *YES
  Cooperating object types (COOPTYPE)   *DTAARA, *DTAQ     The appropriate object types must be specified to enable advanced journaling. Otherwise, system journal replication results.
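For example, an object entry meeting the Table 13 values for a data area might look like the following sketch; the command name, PRCTYPE parameter name, and the library and object names are assumptions:

/* Replicate data areas named STATUS* from the user journal */
ADDDGOBJE DGDFN(MYDG SOURCE TARGET) LIB1(APPLIB) OBJ1(STATUS*) +
          OBJTYPE(*DTAARA) PRCTYPE(*INCLD) +
          COOPDB(*YES) COOPTYPE(*DTAARA)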

Additionally, see Planning for journaled IFS objects, data areas, and data queues on page 85 for additional details if any of the following apply:

Converting existing configurations - When converting an existing data group to use or add advanced journaling, you must consider whether journals should be shared and whether data area or data queue objects should be replicated in a data group that also replicates database files.

Serialized transactions - If you need to serialize transactions for database files and data area or data queue objects replicated from a user journal, you may need to adjust the configuration for the replicated files.

Apply session load balancing - One database apply session, session A, is used for all data area and data queue objects that are replicated from a user journal. Other replication activity can use this apply session, and may cause it to become overloaded. You may need to adjust the configuration accordingly.

User exit programs - If you use user exit programs that process user journal entries, you may need to modify your programs.

Restrictions - user journal replication of data areas and data queues


For operating systems V5R4 and above, changes to data area and data queue content, as well as changes to structure (such as moves and renames) and number (such as creates and deletes), are recognized and supported through user journal replication. When considering replicating data areas and data queues using MIMIX user journal replication processes, be aware of the following restrictions:

- For V5R3 operating systems, only a static environment of data areas and data queues is replicated. For V5R3 systems, while changes to the actual data are recognized and replicated, attribute changes are not. MIMIX AutoGuard must be used to detect attribute changes that occur on the source objects and correct the differences on the target objects. These functions are supported in environments using V5R4 or higher operating systems.
- MIMIX does not support before-images for data updates to data areas, and cannot perform data integrity checks on the target system to ensure that data being replaced on the target system is an exact match to the data replaced on the source system. Furthermore, MIMIX does not provide a mechanism to prevent users or applications from accidentally updating replicated data areas on the target system. To guarantee the data integrity of replicated data areas between the source and target systems, you should run MIMIX AutoGuard on a regular basis.
- The apply of data area and data queue objects is restricted to a single database apply job (DBAPYA). If a data group has too much replication activity, this job may fall behind in the processing of journal entries. If this occurs, you should load-level the apply sessions by moving some or all of the database files to another database apply job.
- Pre-existing data areas and data queues to be selected for replication must have journaling started on both the source and target systems before the data group is started.
- The ability to replicate Distributed Data Management (DDM) data areas and data queues is not supported. If you need to replicate DDM data areas and data queues, use standard system journal replication methods.

Supported journal code E and Q entry types


The operating system uses journal codes E and Q to indicate that journal entries are related to operations on data areas and data queues, respectively. When configured for user journal replication, MIMIX recognizes specific E and Q journal entry types as eligible for replication from a user journal. Table 14 shows the currently supported journal entry types for data areas.
Table 14. Journal entry types supported by MIMIX for data areas

Journal Code   Type   Description                           Notes
E              EA     Update data area, after image
E              EB     Update data area, before image
E              ED     Data area deleted                     1
E              EE     Create data area                      1
E              EG     Start journal for data area
E              EH     End journal for data area
E              EK     Change journaled object attribute     1
E              EL     Data area restored                    1
E              EM     Data area moved                       1
E              EN     Data area renamed                     1
E              ES     Data area saved
E              EW     Start of save for data area
E              ZA     Change authority                      1
E              ZB     Change object attribute               1
E              ZO     Ownership change                      1
E              ZP     Change primary group                  1
E              ZT     Auditing change                       1

Notes:
1. The indicated journal entry type is only supported for i5/OS V5R4 and higher.

Table 15 shows the currently supported journal entry types for data queues.

Table 15. Data queue journal entry types supported by MIMIX

Journal Code   Type   Description                           Notes
Q              QA     Create data queue                     1
Q              QB     Start data queue journaling
Q              QC     Data queue cleared, no key
Q              QD     Data queue deleted                    1
Q              QE     End data queue journaling
Q              QG     Data queue attribute changed          1
Q              QJ     Data queue cleared, has key
Q              QK     Send data queue entry, has key
Q              QL     Receive data queue entry, has key
Q              QM     Data queue moved                      1
Q              QN     Data queue renamed                    1
Q              QR     Receive data queue entry, no key
Q              QS     Send data queue entry, no key
Q              QX     Start of save for data queue
Q              QY     Data queue saved
Q              QZ     Data queue restored                   1
Q              ZA     Change authority                      1
Q              ZB     Change object attribute               1
Q              ZO     Ownership change                      1
Q              ZP     Change primary group                  1
Q              ZT     Auditing change                       1

Notes:
1. The indicated journal entry type is only supported for i5/OS V5R4 and higher.

For more information about journal entries, see Journal Entry Information (Appendix D) in the iSeries Backup and Recovery guide in the IBM eServer iSeries Information Center.

Identifying IFS objects for replication


MIMIX uses data group IFS entries to determine whether to process transactions for objects in the integrated file system (IFS), and what replication path is used. IFS entries can be configured so that the identified objects can be replicated from journal entries recorded in the system journal (default) or in a user journal (optional).

One of the most important decisions in planning for MIMIX is determining which IFS objects you need to replicate. Most likely, you want to limit the IFS objects you replicate to mission-critical objects. User journal replication, also called advanced journaling, is well suited to the dynamic environments of IFS objects. While user journal replication has significant advantages, you must decide whether it is appropriate for your environment. For more information, see Planning for journaled IFS objects, data areas, and data queues on page 85. For detailed procedures, see Creating data group IFS entries on page 282.

Create, restore, delete, move, and rename operations can be replicated for objects configured for user journal replication. Differences in implementation details are described in Processing variations for common operations on page 130.

Supported IFS file systems and object types


The IFS objects to be replicated must be in the Root (/) or QOpenSys file systems. The following object types are supported:

- Directories (*DIR)
- Stream files (*STMF)
- Symbolic links (*SYMLNK)

Table 16 identifies the IFS file systems that are not supported by MIMIX and cannot be specified for either the System 1 object prompt or the System 2 object prompt in the Add Data Group IFS Entry (ADDDGIFSE) command.
Table 16. IFS file systems that are not supported by MIMIX

/QDLS            /QLANSrv    /QOPT
/QFileSvr.400    /QNetWare   /QSYS.LIB
/QFPNWSSTG       /QNTC       /QSR

Journaling is not supported for files in network server storage spaces (NWSS), which are used as virtual disks by IXS and IXA technology. Therefore, IFS objects configured to be replicated from a user journal must be in the Root (/) or QOpenSys file systems. Refer to the IBM book OS/400 Integrated File System Introduction for more information about IFS.


Considerations when identifying IFS objects


The following considerations for IFS objects apply regardless of whether replication occurs through the system journal or user journal.

MIMIX processing order for data group IFS entries


Data group IFS entries are processed in order from most generic to most specific. IFS entries are processed using the unicode character set. The first entry (more generic) found that matches the object is used until a more specific match is found.

Long IFS path names


MIMIX currently replicates IFS path names of up to 512 characters. However, any MIMIX command that takes an IFS path name as input may be susceptible to a 506-character limit. This character limit may be reduced even further if the IFS path name contains embedded apostrophes ('). In this case, the supported IFS path name length is reduced by four characters for every apostrophe the path name contains. For information about IFS path name naming conventions, refer to the IBM book, Integrated File System Introduction V5R4.

Upper and lower case IFS object names


When you create data group IFS entries, be aware of the following information about character case sensitivity for specifying IFS object names. The root file system on the System i5 is generally not case sensitive. Character case is preserved when creating objects, but otherwise character case is ignored. For example, you can create /AbCd or /ABCD, but not both. You can refer to the object by any mix of character case, such as /AbCd, /abcd, or /ABCD. The QOpenSys file system on the System i5 is generally case sensitive. Except for "QOpenSys" in a path name, all characters in a path name are case sensitive. For example, you can create both /QOpenSys/AbCd and /QOpenSys/ABCD. You must specify the correct character case when referring to an object.

During replication, MIMIX preserves the character case of IFS object names. For example, the creation of /AbCd on the source system will be replicated as /AbCd on the target system. Replication will not alter the character case of objects that already exist on the target system (unless the object is deleted and recreated). In the root file system, /AbCd and /ABCD are equivalent names. If /ABCD exists as such on the target system, changes to /AbCd will be replicated to /ABCD, but the object name will not be changed to /AbCd on the target system. When character case is not a concern (root file system), MIMIX may present path names as all upper case or all lower case. For example, the WRKDGACTE display shows all lower case, while the WRKDGIFSE display shows all upper case. Names can be entered in either case. For example, subsetting WRKDGACTE by /AbCd and /ABCD will produce the same result.

When character case does matter (QOpenSys file system), MIMIX presents path names in the appropriate case. For example, the WRKDGACTE display and the WRKDGIFSE display would show /QOpenSys/AbCd, if that is the actual object path. Names must be entered in the appropriate character case. For example, subsetting the WRKDGACTE display by /QOpenSys/ABCD will not find /QOpenSys/AbCd.

Configured object auditing value for IFS objects


When you create data group IFS entries, you can specify an object auditing value within the configuration. The configured object auditing value affects how MIMIX handles changes to attributes of IFS objects. It is particularly important for, but not limited to, objects configured for system journal replication. For IFS objects configured for user journal replication, the configured value can affect MIMIX performance. For detailed information, see Configured object auditing value for data group entries on page 98.

Configuration requirements - IFS objects


For any data group IFS entry you create, consider the following (a configuration sketch follows this list):

- You must have at least one data group IFS entry which specifies a Process type of *INCLD. You may need to create additional entries to achieve the desired results. This may include entries which specify a Process type of *EXCLD.
- When specifying IFS objects in data group IFS entries, specify only the IFS objects that need to be replicated. The System 1 object (OBJ1) parameter selects all IFS objects within the path specified.
- You can specify an object auditing value within the configuration. For details, see Configured object auditing value for data group entries on page 98.
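The following sketch shows an include and exclude pair using the Add Data Group IFS Entry (ADDDGIFSE) command; the directory names, data group name, and the PRCTYPE parameter name are assumptions. Adding COOPDB(*YES), as required by Table 17, requests user journal (advanced journaling) replication:

/* Replicate the /orders tree except its work subdirectory */
ADDDGIFSE DGDFN(MYDG SOURCE TARGET) OBJ1('/orders') +
          PRCTYPE(*INCLD) COOPDB(*YES)
ADDDGIFSE DGDFN(MYDG SOURCE TARGET) OBJ1('/orders/work') +
          PRCTYPE(*EXCLD)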

Additional requirements for user journal replication - The following additional requirements must be met before IFS objects identified by data group IFS entries can be replicated with user journal processes:

- The data group definition and data group IFS entries must specify the values indicated in Table 17 for critical parameters.
- IFS tracking entries must exist for the objects identified by properly configured IFS entries. Typically these are created automatically when the data group is started.
- Journaling must be started on both the source and target systems for the objects identified by IFS tracking entries.
Table 17. Critical configuration parameters for replicating IFS objects from a user journal

Critical Parameters                    Required Values   Configuration Notes
Data Group Definition
  Data group type (TYPE)               *ALL
Data Group IFS Entry
  Cooperate with database (COOPDB)     *YES              The default, *NO, results in system journal replication.

Additionally, see Planning for journaled IFS objects, data areas, and data queues on page 85 for additional details if any of the following apply:

Converting existing configurations - When converting an existing data group to use or add advanced journaling, you must consider whether journals should be shared and whether IFS objects should be replicated in a data group that also replicates database files.

Serialized transactions - If you need to serialize transactions for database files and IFS objects replicated from a user journal, you may need to adjust the configuration for the replicated files.

Apply session load balancing - One database apply session, session A, is used for all IFS objects that are replicated from a user journal. Other replication activity can use this apply session, and may cause it to become overloaded. You may need to adjust the configuration accordingly.

User exit programs - If you use user exit programs that process user journal entries, you may need to modify your programs.

Restrictions - user journal replication of IFS objects


When considering replicating IFS objects using MIMIX user journal replication processes, be aware of the following restrictions:

- The operating system does not support before-images for data updates to IFS objects. As such, MIMIX cannot perform data integrity checks on the target system to ensure that data being replaced on the target system is an exact match to the data replaced on the source system. MIMIX will check the integrity of the IFS data through the use of regularly scheduled audits, specifically the #IFSATR audit.
- The apply of IFS objects is restricted to a single database apply job (DBAPYA). If a data group has too much replication activity, this job may fall behind in the processing of journal entries. If this occurs, you should load-level the apply sessions by moving some or all of the database files to another database apply job.
- Pre-existing IFS objects to be selected for replication must have journaling started on both the source and target systems before the data group is started.
- A physical object, such as an IFS object, is identified by a hard link. Typically, an unlimited number of hard links can be created as identifiers for one object. For journaled IFS objects, MIMIX does not support the replication of additional hard links because doing so causes the same FID to be used for multiple names for the same IFS object.

- The ability to lock IFS objects on apply, in order to prevent unauthorized updates from occurring on the target system, is not supported when advanced journaling is configured.
- The ability to use the Remove Journaled Changes (RMVJRNCHG) command for removing journaled changes for IFS tracking entries is not supported.
- It is recommended that option 14 (Remove related) on the Work with Data Group Activity (WRKDGACT) display not be used for failed activity entries representing actions against cooperatively processed IFS objects. Because this option does not remove the associated tracking entries, orphan tracking entries can accumulate on the system.

Supported journal code B entry types


The system uses journal code B to indicate that the journal entry deposited is related to an IFS operation. Table 18 shows the currently supported IFS entry types that MIMIX can replicate for IFS objects configured for user journal replication.
Table 18. IFS entry types supported by MIMIX

Journal Code   Type   Description                                    Notes
B              AA     Change audit attributes
B              B1     Create files, directories, or symbolic links
B              B3     Move/rename object
B              B5     Remove link (unlink)
B              B6     Bytes cleared, after-image
B              ET     End journaling for object
B              FA     Change object attribute
B              FR     Restore object
B              FS     Saved IFS object
B              FW     Start of save-while-active
B              JT     Start journaling for object
B              OA     Change object authority                        1
B              OG     Change primary group                           1
B              OO     Change object owner                            1
B              RN     Rename file identifier
B              TR     Truncated IFS object
B              WA     Write after-image

Note:
1. The actions identified in these entries are replicated cooperatively through the security audit journal.

Identifying DLOs for replication


MIMIX uses data group DLO entries to determine whether to process system journal transactions for document library objects (DLOs). Each DLO entry for a data group includes a folder path, document name, owner, an object auditing level, and an include or exclude indicator. In addition to specific names, MIMIX supports generic names for DLOs. In a data group DLO entry, the folder path and document can be generic or *ALL.

When you create data group DLO entries, you can specify an object auditing value within the configuration. The configured object auditing value affects how MIMIX handles changes to attributes of DLOs. For detailed information, see Configured object auditing value for data group entries on page 98. For detailed procedures, see Creating data group DLO entries on page 287.
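As an illustration, a DLO entry might be added with a sketch like the following; the Add Data Group DLO Entry (ADDDGDLOE) command name and the FLR1, DOC1, OWNER, and PRCTYPE parameter names are assumptions based on the folder path, document, owner, and process type elements described above:

/* Include all documents in folder FINANCE1, any owner */
ADDDGDLOE DGDFN(MYDG SOURCE TARGET) FLR1(FINANCE1) +
          DOC1(*ALL) OWNER(*ALL) PRCTYPE(*INCLD)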

How MIMIX uses DLO entries to evaluate journal entries for replication
How items are specified within a DLO entry determines whether MIMIX selects or omits them from processing. This information can help you understand what is included or omitted.

When determining whether to process a journal entry for a DLO, MIMIX looks for a match between the DLO information in the journal entry and one of the data group DLO entries. The data group DLO entries are checked from the most specific to the least specific. The folder path is the most significant search element, followed by the document name, then the owner. The most significant match found (if any) is checked to determine whether to process the entry.

An exact or generic folder path name in a data group DLO entry applies to folder paths that match the entry as well as to any unnamed child folders of that path which are not covered by a more explicit entry. For example, a data group DLO entry with a folder path of ACCOUNT would also apply to a transaction for a document in folder path ACCOUNT/JANUARY. If a second data group DLO entry with a folder path of ACCOUNT/J* were added, it would take precedence because it is more specific.

For a folder path with multiple elements (for example, A/B/C/D), the exact checks and generic checks against data group DLO entries are performed on the path. If no match is found, the lowest path element is removed and the process is repeated. For example, A/B/C/D is reduced to A/B/C and is rechecked. This process continues until a match is found or until all elements of the path have been removed. If there is still no match, then checks for folder path *ALL are performed.

Sequence and priority order for documents


Table 19 illustrates the sequence in which MIMIX checks DLO entries for a match.
Table 19. Matching order for document names

Search Order   Folder Path   Document Name   Owner
1              Exact         Exact           Exact
2              Exact         Exact           *ALL
3              Exact         Generic*        Exact
4              Exact         Generic*        *ALL
5              Exact         *ALL            Exact
6              Exact         *ALL            *ALL
7              Generic*      Exact           Exact
8              Generic*      Exact           *ALL
9              Generic*      Generic*        Exact
10             Generic*      Generic*        *ALL
11             Generic*      *ALL            Exact
12             Generic*      *ALL            *ALL
13             *ALL          Exact           Exact
14             *ALL          Exact           *ALL
15             *ALL          Generic*        Exact
16             *ALL          Generic*        *ALL
17             *ALL          *ALL            Exact
18             *ALL          *ALL            *ALL

Document example - Table 20 illustrates some sample data group DLO entries. For example, a transaction for any document in a folder named FINANCE would be blocked from replication because it matches entry 6. A transaction for document ACCOUNTS in FINANCE1 owned by JONESB would be replicated because it matches entry 4. If SMITHA owned ACCOUNTS in FINANCE1, the transaction would be blocked by entry 3. Likewise, documents LEDGER.JUL and LEDGER.AUG in FINANCE1 would be blocked by entry 2 and document PAYROLL in FINANCE1 would be blocked by entry 1. A transaction for any document in FINANCE2 would be blocked by entry 6. However, transactions for documents in FINANCE2/Q1, or in a child folder of that path, such as FINANCE2/Q1/FEB, would be replicated because of entry 5.
Table 20. Sample data group DLO entries, arranged in order from most to least specific

Entry   Folder Path   Document   Owner    Process Type
1       FINANCE1      PAYROLL    *ALL     *EXCLD
2       FINANCE1      LEDGER*    *ALL     *EXCLD
3       FINANCE1      *ALL       SMITHA   *EXCLD
4       FINANCE1      *ALL       *ALL     *INCLD
5       FINANCE2/Q1   *ALL       *ALL     *INCLD
6       FIN*          *ALL       *ALL     *EXCLD

Sequence and priority order for folders


Folders are treated somewhat differently than documents. Folders are replicated based on whether there are any data group DLO entries with a process type of *INCLD that would require the folder to exist on the target system. If a folder needs to exist to satisfy the folder path of an include entry, the folder will be replicated even if a different exclude entry prevents replication of the contents of the folder.

There is one exception to the requirement of replicating folders to satisfy the folder path for an include entry. A folder will not be replicated when the only include entry that would cause its replication specifies *ALL for its folder path and the folder matches an exclude entry with an exact or a generic folder path name, a document value of *ALL and an owner of *ALL. Table 20 and Table 21 illustrate the differences in matching folders to be replicated. In Table 20, above, a transaction for a folder named FINANCE would be blocked from replication because it matches entry 6. This would also affect all folders within FINANCE. A transaction for folder FINANCE1 would be replicated because of entry 4. Likewise, a transaction for folder FINANCE2 would be replicated because of entry 5. Note that any transactions for documents in FINANCE2 or any child folders other than those in the path that includes Q1 would be blocked by entry 6; only FINANCE2 itself must exist to satisfy entry 5. In Table 21, although entry 5 is an include entry, a transaction for folder ACCOUNT would be blocked from replication because it matches entry 2. This is because of the exception described above. ACCOUNT matches an exclude entry with an exact folder path, document value of *ALL, and an owner of *ALL, and the only include entry that would cause it to be replicated specifies folder path *ALL. The exception also affects all child folders in the ACCOUNT folder path. Note that the exception holds true even if ACCOUNT is owned by user profile JONESB (entry 4) because the more specific folder name match takes precedence.
Table 21. Sample data group DLO entries, folder example

Entry   Folder Path   Document   Owner    Process Type
1       ACCOUNT2      LEDGER*    *ALL     *EXCLD
2       ACCOUNT       *ALL       *ALL     *EXCLD
3       *ALL          ABC*       *ALL     *INCLD
4       *ALL          *ALL       JONESB   *INCLD
5       *ALL          *ALL       *ALL     *INCLD

A transaction for folder ACCOUNT2 would be replicated even though it is an exact path name match for exclude entry 1. The exception does not apply because entry 1 does not specify document *ALL. Entry 5 requires that ACCOUNT2 exist on the target system to satisfy the folder path requirements for document names other than LEDGER* and for child folders of ACCOUNT2.


Processing of newly created files and objects


Your production environment is dynamic. New objects continue to be created after MIMIX is configured and running. When properly configured, MIMIX automatically recognizes entries in the user journal that identify new create operations and replicates any that are eligible for replication. Optionally, MIMIX can also notify you of newly created objects not eligible for replication so that you can choose whether to add them to the configuration.

Configurations that replicate files, data areas, data queues, or IFS objects from user journal entries require journaling to be started on the objects before replication can occur. When a configuration enables journaling to be implicitly started on new objects, a newly created object is already journaled. When the journaled object falls within the group of objects identified for replication by a data group, MIMIX replicates the create operation. Processing variations exist based on how the data group and the data group entry with the most specific match to the object are configured. These variations are described in the following subtopics.

The MMNFYNEWE monitor is a shipped journal monitor that watches the security audit journal (QAUDJRN) for newly created libraries, folders, or directories that are not already included or excluded for replication by a data group and sends warning notifications when its conditions are met. This monitor is shipped disabled. User action is required to enable this monitor on the source system within your MIMIX environment. Once enabled, the monitor will automatically start with the master monitor. For more information about the conditions that are checked, see topic Notifications for newly created objects in the Using MIMIX book.

For more information about requirements and restrictions for implicit starting of journaling, as well as examples of how MIMIX determines whether to replicate a new object, see What objects need to be journaled on page 323.

Newly created files


When newly created *FILE objects are implicitly journaled and are eligible for replication, the replication processes used depend on how the data group definition is configured and how the data group entry with the most specific match to the file is configured.

New file processing - MIMIX Dynamic Apply


When a data group definition meets configuration requirements for MIMIX Dynamic Apply and data group object and file entries are properly configured, new files created on the source system that are eligible for replication will be re-created on the target system by MIMIX. The following briefly describes the events that occur for newly created files on the source system which are configured for MIMIX Dynamic Apply:

- System journal replication processes ignore the creation entry, knowing that user journal replication processes will receive a create entry as well.
- User journal replication processes dynamically add a file entry for a file when a file create is seen in the user journal. The file entry is added with a status of *ACTIVE.
- User journal replication processes create the file on the target system. Replication proceeds normally after the file has been created. All subsequent file changes, including moves or renames, member operations (adds, changes, and removes), member data updates, file changes, authority changes, and file deletes, are replicated through the user journal.

New file processing - legacy cooperative processing


When a data group definition meets configuration requirements for legacy cooperative processing and data group object and file entries are properly configured, files created on the source system will be saved and restored to the target system by system journal replication processes. The following briefly describes the events that occur when files are created that have been defined for legacy cooperative processing:

- System journal replication processes communicate with user journal replication processes to add a data group file entry for the file (ADDDGFE command). The file entry is added with a status of *HLD.
- A user journal transaction is created on the source system and is transferred to the target system to dynamically add the file to active user journal processes. Journaling on the file is started if it is not already active.
- System journal replication processes save the created file, restore it on the target system, then communicate with user journal replication processes to issue a release wait request against the file. The status of the file entry changes to *RLSWAIT.
- The database apply process waits for the save point in the journal, and then makes the file active. The status of the file entry changes to *ACTIVE.

Newly created IFS objects, data areas, and data queues


When journaling is implicitly started for IFS objects, data areas, and data queues, newly created objects that are eligible for replication are automatically replicated. The configuration values specified in the data group IFS entry or object entry that most specifically matches the new object determine what replication processes are used.

Note: Non-journaled objects are replicated through the system journal.

For IFS objects, MIMIX user journal replication processes will replicate creates of IFS objects if the parent directory is journaled to the journal defined for a data group. Typically, if MIMIX commands were used to start journaling on the parent directory, new objects are permitted to inherit journal information from the parent directory.

For data areas and data queues, automatic journaling of new *DTAARA or *DTAQ objects is only supported in i5/OS V5R4 and higher. MIMIX configurations can be enabled to permit the automatic start of journaling for newly created data areas and data queues in libraries journaled to a user journal. New version 5 MIMIX installations that meet the i5/OS requirement and are configured for MIMIX Dynamic Apply of files automatically have this behavior. Installations that upgraded to version 5 may require conversion to MIMIX Dynamic Apply before automatic journaling of these object types can occur.


For more information about requirements for implicit starting of journaling, see What objects need to be journaled on page 323.

If the object is journaled to the user journal, MIMIX user journal replication processes can fully replicate the create operation. The user journal entries contain all the information necessary for replication without needing to retrieve information from the object on the source system. MIMIX creates a tracking entry for the newly created object and an activity entry representing the T-CO (create) journal entry.

If the object is not journaled to the user journal, then the create of the object is processed with system journal processing. If the values specified in the data group entry that identified the object as eligible for replication do not allow the object type to be cooperatively processed, the create of the object and subsequent operations are replicated through system journal processes.

When MIMIX replicates a create operation through the user journal, the create timestamp (*CRTTSP) attribute may differ between the source and target systems.

Determining how an activity entry for a create operation was replicated


To determine whether a create operation of a given object is being replicated through user journal processes or through system journal processes, do the following:
1. On the Data Group Activity Entries (WRKDGACTE) display, locate the entry for a create operation that you want to check. Create operations have a value of T-CO in the Code column.
2. Use option 5 (Display) next to an activity entry for a create operation.
3. On the resulting details display, check the value of the Requires container send field. If *YES appears for an activity entry representing a create operation, the create operation is being replicated through the system journal. If *NO appears in the field, the create operation is being replicated through the user journal.
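For example, you can reach the display in step 1 by entering the command shown below; the STATUS parameter, used elsewhere in this book, is optional and simply filters the list:

  WRKDGACTE STATUS(*ACTIVE)  /* list activity entries; T-CO entries in the Code column are creates */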

Processing variations for common operations


Some variation exists in how MIMIX performs common operations such as moves, renames, deletes, and restores. The variations are based on the configuration of the data group entry used for replication. Configurations specify whether these operations are processed through the system journal, the user journal, or a combination of both journals. Advanced journaling (user journal replication of data areas, data queues, and IFS objects), legacy cooperative processing, and MIMIX Dynamic Apply all use both journals; however, MIMIX Dynamic Apply primarily processes through the user journal.
For IFS objects, user journal replication offers full support of create, restore, delete, and move and rename operations. In environments using V5R4 and higher operating systems, user journal replication also offers full support of these operations for data area and data queue objects.

Move/rename operations - system journal replication


Table 22 describes how MIMIX processes a move or rename journal entry from the system journal. MIMIX uses system journal replication processes for DLOs, and for IFS objects and library-based objects that are not explicitly identified for user journal replication. The Original Source Object and New Name or Location columns indicate whether the object is identified within the name space for replication. The Action column indicates the operation that MIMIX will attempt on the target system.
Table 22. Current object move actions

  Original Source Object                            New Name or Location                              MIMIX Action on Target System
  Excluded from or not identified for replication   Within name space of objects to be replicated     Create Object (see note 1)
  Identified for replication                        Excluded from or not identified for replication   Delete Object (see note 2)
  Identified for replication                        Within name space of objects to be replicated     Move Object
  Excluded from or not identified for replication   Excluded from or not identified for replication   None

Notes:
1. If the source system object is not defined to MIMIX or if it is defined by an Exclude entry, it is not guaranteed that an object with the same name exists on the backup system or that it is really the same object as on the source system. To ensure the integrity of the target (backup) system, a copy of the source object must be brought over from the source system.
2. If the target object is not defined to MIMIX or if it is defined by an Exclude entry, there is no guarantee that the target library exists on the target system. Further, the customer is assumed not to care whether the target object is replicated, since it is not defined with an Include entry, so deleting the object is the most straightforward approach.


Move/rename operations - user journaled data areas, data queues, IFS objects
IFS, data area, and data queue objects replicated by user journal replication processes can be moved or renamed while maintaining the integrity of the data. If the new location or new name on the source system remains within the set of objects identified as eligible for replication, MIMIX will perform the move or rename operation on the object on the target system. When a move or rename operation starts with or results in an object that is not within the name space for user journal replication, MIMIX may need to perform additional operations in order to replicate the operation. MIMIX may use a create or delete operation and may need to add or remove tracking entries. Each row in Table 23 summarizes a move/rename scenario and identifies the action taken by MIMIX.
Table 23. MIMIX actions when processing moves or renames of objects when user journal replication processes are involved

  Source object: Identified for replication with user journal processing
  New name or location: Within name space of objects to be replicated with user journal processing
  MIMIX action: Moves or renames the object on the target system and renames the associated tracking entry. See example 1.

  Source object: Not identified for replication
  New name or location: Not identified for replication
  MIMIX action: None. See example 2.

  Source object: Identified for replication with user journal processing
  New name or location: Not identified for replication
  MIMIX action: Deletes the target object and deletes the associated tracking entry. The object will no longer be replicated. See example 3.

  Source object: Identified for replication with user journal processing
  New name or location: Within name space of objects to be replicated with system journal processing
  MIMIX action: Moves or renames the object using system journal processes and removes the associated tracking entry. See example 4.

  Source object: Identified for replication with system journal processing
  New name or location: Within name space of objects to be replicated with user journal processing
  MIMIX action: Creates a tracking entry for the object using the new name or location and moves or renames the object using user journal processes. If the object is a library or directory, MIMIX creates tracking entries for those objects within the library or directory that are also within the name space for user journal replication and synchronizes those objects. See example 5.

  Source object: Not identified for replication
  New name or location: Within name space of objects to be replicated with user journal processing
  MIMIX action: Creates a tracking entry for the object using the new name or location. If the object is a library or directory, MIMIX creates tracking entries for those objects within the library or directory that are also within the name space for user journal replication. Synchronizes all of the objects identified by these new tracking entries. See example 6.

The following examples use IFS objects and directories to illustrate the MIMIX operations in move/rename scenarios that involve user journal replication (advanced journaling). The MIMIX behavior described is the same as that for data areas and data queues that are within the configured name space for advanced journaling. Table 24 identifies the initial set of source system objects, data group IFS entries, and IFS tracking entries before the move/rename operation occurs.
Table 24. Initial data group IFS entries, IFS tracking entries, and source IFS objects for the examples

  Data Group IFS Entry: /TEST/STMF*
  Configuration supports: advanced journaling
  Source system IFS objects in name space: /TEST/stmf1
  Associated data group IFS tracking entries: /TEST/stmf1

  Data Group IFS Entry: /TEST/DIR*
  Configuration supports: advanced journaling
  Source system IFS objects in name space: /TEST/dir1/doc1
  Associated data group IFS tracking entries: /TEST/dir1, /TEST/dir1/doc1

  Data Group IFS Entry: /TEST/NOTAJ*
  Configuration supports: system journal replication
  Source system IFS objects in name space: /TEST/notajstmf1, /TEST/notajdir1/doc1
  Associated data group IFS tracking entries: (none)

Example 1, moves/renames within advanced journaling name space: The most common move and rename operations occur within advanced journaling name space. For example, MIMIX encounters user journal entries indicating that the source system IFS directory /TEST/dir1 was renamed to /TEST/dir2, and that the IFS stream file /TEST/stmf1 was renamed to /TEST/stmf2. In both cases, the old and new names fall within advanced journaling name space, as indicated in Table 23. The rename operations are replicated and names are changed on the target system objects. The tracking entries for these objects are also renamed. The resulting changes on the target system objects and MIMIX configuration are shown in Table 25.
Table 25. Results of move/rename operations within the name space for advanced journaling

  Resulting target IFS objects: /TEST/stmf2, /TEST/dir2/doc1
  Resulting data group IFS tracking entries: /TEST/stmf2, /TEST/dir2, /TEST/dir2/doc1

Example 2, moves/renames outside name space: When MIMIX encounters a journal entry for a source system object outside of the name space that has been renamed or moved to another location also outside of the name space, MIMIX ignores the transaction. The object is not eligible for replication.
Example 3, moves/renames from advanced journaling name space to outside name space: In this example, MIMIX encounters user journal entries indicating that the source system IFS directory /TEST/dir1 was renamed to /TEST/xdir1 and IFS stream file /TEST/stmf1 was renamed to /TEST/xstmf1. MIMIX is aware of only the original names, as indicated in Table 23. Thus, the old name is eligible for replication, but the new name is not. MIMIX treats this as a delete operation during replication processing. MIMIX deletes the IFS directory and IFS stream file from the target system. MIMIX also deletes the associated IFS tracking entries.
Example 4, moves/renames from advanced journaling to system journal name space: In this example, MIMIX encounters user journal entries indicating that the source system IFS directory /TEST/dir1 was renamed to /TEST/notajdir1 and that IFS stream file /TEST/stmf1 was renamed to /TEST/notajstmf1. MIMIX is aware that both the old names and the new names are eligible for replication, as indicated in Table 23. However, the new names fall within the name space for replication through the system journal. As a result, MIMIX removes the tracking entries associated with the original names and performs the rename operation on the objects on the target system. Table 26 shows these results.
Table 26. Results of move/rename operations from advanced journaling to system journal name space

  Resulting target IFS objects: /TEST/notajstmf1, /TEST/notajdir1/doc1
  Resulting data group IFS tracking entries: (removed)

Example 5, moves/renames from system journal to advanced journaling name space: In this example, MIMIX encounters journal entries indicating that the source system IFS directory /TEST/notajdir1 was renamed to /TEST/dir1 and that IFS stream file /TEST/notajstmf1 was renamed to /TEST/stmf1. MIMIX is aware that the old names are within the system journal name space and that the new names are within the advanced journaling name space. MIMIX creates tracking entries for the new names and then performs the rename operation on the target system using advanced journaling. MIMIX also creates tracking entries for any objects that reside within the moved or renamed IFS directory (or library, in the case of data areas or data queues). The objects identified by these tracking entries are individually synchronized from the source to the target system. Table 27 illustrates the results on the target system.
Table 27. Results of move/rename operations from system journal to advanced journaling name space

  Resulting target IFS objects: /TEST/stmf1, /TEST/dir1/doc1
  Resulting data group IFS tracking entries: /TEST/stmf1, /TEST/dir1, /TEST/dir1/doc1

Example 6, moves/renames from outside to within advanced journaling name space: In this example, MIMIX encounters journal entries indicating that the source system IFS directory /TEST/xdir1 was renamed to /TEST/dir1 and that IFS stream file /TEST/xstmf1 was renamed to /TEST/stmf1. The original names are outside of the name space and are not eligible for replication. However, the new names are within the name space for advanced journaling, as indicated in Table 23. Because the objects were not previously replicated, MIMIX processes the operations as creates during replication. See Newly created files on page 127. MIMIX also creates tracking entries for any objects that reside within the moved or renamed IFS directory (or library, in the case of data areas or data queues). The objects identified by these tracking entries are individually synchronized from the source to the target system. Table 28 illustrates the results.
Table 28. Results of move/rename operations from outside to within advanced journaling name space

  Resulting target IFS objects: /TEST/stmf1, /TEST/dir1/doc1
  Resulting data group IFS tracking entries: /TEST/stmf1, /TEST/dir1, /TEST/dir1/doc1

Delete operations - files configured for legacy cooperative processing


The following briefly describes the events that occur in MIMIX when a file that is defined for legacy cooperative processing is deleted:
1. System journal replication processes communicate to user journal replication processes that a file has been deleted on the source system and indicate that the file should be deleted from the target system.
2. A journal transaction which identifies the deleted file is created on the source system. The transaction is transferred dynamically. If the data group file entry is set to use the option to dynamically update active replication processes, the file and associated file entry will be dynamically removed from the replication processes. If the dynamic update option is not used, the data group changes are not recognized until all data group processes are ended and restarted.
3. MIMIX system journal replication processes delete the file on the target system.

Delete operations - user journaled data areas, data queues, IFS objects
When a T-DO (delete) journal entry for an IFS, data area, or data queue object is encountered in the system journal, MIMIX system journal replication processes generate an activity entry representing the delete operation and handle the delete of the object from the target system. The user journal replication processes remove the corresponding tracking entry.

Restore operations - user journaled data areas, data queues, IFS objects
When an IFS, data area, or data queue object is restored, the pre-existing object is replaced by a backup copy on the source system. With user journal replication, restores of IFS, data area, and data queue objects on the source system are supported through cooperative processing between MIMIX system journal and user journal replication processes. Provided the object was journaled when it was saved, a restored IFS, data area, or data queue object is also journaled. During cooperative processing, system journal replication processes generate an activity entry representing the T-OR (restore) journal entry from the system journal and perform a save and restore operation on the IFS, data area, or data queue object. Meanwhile, user journal replication processes handle the management of the corresponding IFS or object tracking entry. MIMIX may also start journaling, or end and restart journaling on the object, so that the journaling characteristics of the IFS, data area, or data queue object match the data group definition.


Chapter 5

Configuration checklists
MIMIX can be configured in a variety of ways to support your replication needs. Each configuration requires a combination of definitions and data group entries. Definitions identify systems, journals, communications, and data groups that make up the replication environment. Data group entries identify what to replicate and the replication option to be used. For available options, see Replication choices by object type on page 96. Also, advanced techniques, such as keyed replication, have additional configuration requirements. For additional information see Configuring advanced replication techniques on page 353.
New installations: Before you start configuring MIMIX, system-level configuration for communications (lines, controllers, IP interfaces) must already exist between the systems that you plan to include in the MIMIX installation. Choose one of the following checklists to configure a new installation of MIMIX:
• Checklist: New remote journal (preferred) configuration on page 139 uses shipped default values to create a new installation. Unless you explicitly configure them otherwise, new data groups will use the i5/OS remote journal function as part of user journal replication processes.
• Checklist: New MIMIX source-send configuration on page 143 configures a new installation and is appropriate when your environment cannot use remote journaling. New data groups will use MIMIX source-send processes in user journal replication.
• To configure a new installation that is to use the integrated MIMIX support for IBM WebSphere MQ (MIMIX for MQ), refer to the MIMIX for IBM WebSphere MQ book.

Upgrades and conversions: You can use any of the following topics, as appropriate, to change a configuration:
• Checklist: Converting to remote journaling on page 147 changes an existing data group to use remote journaling within user journal replication processes.
• Converting to MIMIX Dynamic Apply on page 150 provides checklists for two methods of changing the configuration of an existing data group to use MIMIX Dynamic Apply for logical and physical file replication. Data groups that existed prior to installing version 5 must use this information in order to use MIMIX Dynamic Apply.
• Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling on page 154 changes the configuration of an existing data group to use user journal replication processes for these objects.
• To add integrated MIMIX support for IBM WebSphere MQ (MIMIX for MQ) to an existing installation, use topic Choosing the correct checklist for MIMIX for MQ in the MIMIX for IBM WebSphere MQ book.
• Checklist: Converting to legacy cooperative processing on page 157 changes the configuration of an existing data group so that logical and physical source files are processed from the system journal and physical data files use legacy cooperative processing.
Other checklists: The following configuration checklist employs less frequently used configuration tools and is not included in this chapter. Use Checklist: copy configuration on page 553 if you need to copy configuration data from an existing product library into another MIMIX installation.


Checklist: New remote journal (preferred) configuration


Use this checklist to configure a new installation of MIMIX. This checklist creates the preferred configuration that uses i5/OS remote journaling and uses MIMIX Dynamic Apply to cooperatively process logical and physical files. To configure your system manually, perform the following steps on the system that you want to designate as the management system of the MIMIX installation:
1. Communications between the systems must be configured and operational before you start configuring MIMIX.
   a. If communications is not configured, refer to Chapter 6, System-level communications for more information.
   b. If you have TCP configured and plan to use it for your transfer protocol, verify that it is operational using the PING command.
2. Create system definitions for the management system and each of the network systems for the MIMIX installation. Use topic Creating system definitions on page 170.
3. Create transfer definitions to define the communications protocol used between pairs of systems. A pair of systems consists of a management system and a network system. Use topic Creating a transfer definition on page 184.
4. If you have implemented DDM password validation, you need to verify that your environment will allow MIMIX RJ support to work properly. Use topic Checking DDM password validation level in use on page 306.
5. If you are using the TCP protocol, ensure that the Lakeview TCP server is running on each system defined in the transfer definition. You can use the Work with Active Jobs (WRKACTJOB) command to look for a job under the MIMIXSBS subsystem with a function of PGM-LVSERVER (see the example following this checklist). If the Lakeview TCP server is not active on a system, use topic Starting the Lakeview TCP/IP server on page 189.
   Note: You can optionally configure the Lakeview TCP server to start automatically. Use the procedure in topic Using autostart job entries to start the TCP server on page 190.
6. If you are using the TCP protocol, ensure that the DDM TCP server is running using topic Starting the DDM TCP/IP server on page 308.
7. Verify that the communications link defined in each transfer definition is operational using topic Verifying a communications link for system definitions on page 194.
8. Start the MIMIX managers using topic Starting the system and journal managers on page 296. When the system manager is running, configuration information for data groups will be automatically replicated to the other system as you create it.
9. Create the data group definitions that you need using topic Creating a data group definition on page 247. The referenced topic creates a data group definition with appropriate values to support MIMIX Dynamic Apply.
10. Verify all potential communications links that can be used by this configuration using topic Verifying the communications link for a data group on page 195.

11. Use Table 29 to create data group entries for this configuration. This configuration requires object entries and file entries for LF and PF files. For other object types or classes, any replication options identified in planning topic Replication choices by object type on page 96 are supported.

Table 29. How to configure data group entries for the remote journal (preferred) configuration

  Class: Library-based objects
  Do the following:
    1. Create object entries using Creating data group object entries on page 267.
    2. After creating object entries, load file entries for LF and PF (source and data) *FILE objects using Loading file entries from a data group's object entries on page 273.
       Note: If you cannot use MIMIX Dynamic Apply for logical files or PF data files, you should still create file entries for PF data files to ensure that legacy cooperative processing can be used.
    3. After creating object entries, load object tracking entries for any *DTAARA and *DTAQ objects to be replicated from a user journal. Use Loading object tracking entries on page 285.
  Planning and requirements information:
    Identifying library-based objects for replication on page 100
    Identifying logical and physical files for replication on page 105
    Identifying data areas and data queues for replication on page 112

  Class: IFS objects
  Do the following:
    1. Create IFS entries using Creating data group IFS entries on page 282.
    2. After creating IFS entries, load IFS tracking entries for IFS objects to be replicated from a user journal. Use Loading IFS tracking entries on page 284.
  Planning and requirements information:
    Identifying IFS objects for replication on page 118

  Class: DLOs
  Do the following:
    Create DLO entries using Creating data group DLO entries on page 287.
  Planning and requirements information:
    Identifying DLOs for replication on page 124

12. Use the #DGFE audit to confirm and automatically correct any problems found in file entries associated with data group object entries. Do the following:
   a. Type WRKAUD RULE(#DGFE) and press Enter.
   b. Next to the data group you want to confirm, type 9 (Run rule) and press Enter.
   c. The results are placed in an outfile. For additional information, see Interpreting results for configuration data - #DGFE audit on page 580.
13. If you anticipate a delay between configuring data group entries (object, DLO, or IFS) and starting the data group, you should use the SETDGAUD command before synchronizing data between systems. Doing so will ensure that replicated objects will be properly audited and that any transactions for the objects that occur between configuration and starting the data group will be replicated. Use the procedure Setting data group auditing values manually on page 297.
14. Ensure that there are no batch jobs or users on the system that will be the source for replication for the rest of this procedure. Do not allow users onto the source system or batch processing until you have successfully completed Step 18.
15. Start journaling using the following procedures as needed for your configuration:
   • For user journal replication, use Journaling for physical files on page 326 to start journaling on both source and target systems.
   • For IFS objects configured for advanced journaling, use Journaling for IFS objects on page 330.
   • For data areas or data queues configured for advanced journaling, use Journaling for data areas and data queues on page 334.

16. Synchronize the database files and objects on the systems between which replication occurs. Topic Performing the initial synchronization on page 483 includes instructions for how to establish a synchronization point and identifies the options available for synchronizing.
17. Verify your configuration. Topic Verifying the initial synchronization on page 487 identifies the additional aspects of your configuration that are necessary for successful replication.
18. Start the data groups. You should use the procedure Starting Selected Data Group Processes in the Using MIMIX book.
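The following commands illustrate the verifications in step 1b and step 5 of this checklist; the host name shown is an illustrative assumption, so substitute the name or address of your remote system:

  PING RMTSYS('HONGKONG')    /* step 1b: verify the TCP/IP link is operational */
  WRKACTJOB SBS(MIMIXSBS)    /* step 5: look for a job with function PGM-LVSERVER */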


Checklist: New MIMIX source-send configuration


Best practices for MIMIX are to use MIMIX Remote Journal support for database replication. However, in cases where you cannot use remote journaling, this checklist will configure a new installation that uses MIMIX source-send processes for database replication. System journal replication is also configured. To configure a source-send environment, perform the following steps on the system that you want to designate as the management system of the MIMIX installation:
1. Communications between the systems must be configured and operational before you start configuring MIMIX.
   a. If communications is not configured, refer to Chapter 6, System-level communications for more information.
   b. If you have TCP configured and plan to use it for your transfer protocol, verify that it is operational using the PING command.
2. Create system definitions for the management system and each of the network systems for the MIMIX installation. Use topic Creating system definitions on page 170.
3. Create transfer definitions to define the communications protocol used between pairs of systems. A pair of systems consists of a management system and a network system. Use topic Creating a transfer definition on page 184.
4. If you are using the TCP protocol, ensure that the Lakeview TCP server is running on each system defined in the transfer definition. You can use the Work with Active Jobs (WRKACTJOB) command to look for a job under the MIMIXSBS subsystem with a function of PGM-LVSERVER. If the Lakeview TCP server is not active on a system, use topic Starting the Lakeview TCP/IP server on page 189.
   Note: You can optionally configure the Lakeview TCP server to start automatically. Use the procedure in topic Using autostart job entries to start the TCP server on page 190.
5. Verify that the communications link defined in each transfer definition is operational using topic Verifying a communications link for system definitions on page 194.
6. Start the MIMIX managers using topic Starting the system and journal managers on page 296. When the system manager is running, configuration information for data groups will be automatically replicated to the other system as you create it.
7. Create the data group definitions that you need using topic Creating a data group definition on page 247.
8. If the journaling environment does not exist, use topic Building the journaling environment on page 219 to create the journaling environment.
9. Verify all potential communications links that can be used by this configuration using topic Verifying the communications link for a data group on page 195.
10. Use Table 30 to create data group entries for this configuration. This configuration requires object entries and file entries for legacy cooperative processing of PF data files. For other object types or classes, any replication options identified in planning topic Replication choices by object type on page 96 are supported.


Table 30. How to configure data group entries for a new MIMIX source-send configuration

  Class: Library-based objects
  Do the following:
    1. Create object entries using Creating data group object entries on page 267.
    2. After creating object entries, load file entries for PF (data) *FILE objects using Loading file entries from a data group's object entries on page 273.
    3. After creating object entries, load object tracking entries for *DTAARA and *DTAQ objects to be replicated from a user journal. Use Loading object tracking entries on page 285.
  Planning and requirements information:
    Identifying library-based objects for replication on page 100
    Identifying logical and physical files for replication on page 105
    Identifying data areas and data queues for replication on page 112

  Class: IFS objects
  Do the following:
    1. Create IFS entries using Creating data group IFS entries on page 282.
    2. After creating IFS entries, load IFS tracking entries for IFS objects to be replicated from a user journal. Use Loading IFS tracking entries on page 284.
  Planning and requirements information:
    Identifying IFS objects for replication on page 118

  Class: DLOs
  Do the following:
    Create DLO entries using Creating data group DLO entries on page 287.
  Planning and requirements information:
    Identifying DLOs for replication on page 124

11. Use the #DGFE audit to confirm and automatically correct any problems found in file entries associated with data group object entries. Do the following:
   a. Type WRKAUD RULE(#DGFE) and press Enter.
   b. Next to the data group you want to confirm, type 9 (Run rule) and press Enter.
   c. The results are placed in an outfile. For additional information, see Interpreting results for configuration data - #DGFE audit on page 580.
12. If you anticipate a delay between configuring data group entries (object, DLO, or IFS) and starting the data group, you should use the SETDGAUD command before synchronizing data between systems. Doing so will ensure that replicated objects will be properly audited and that any transactions for the objects that occur between configuration and starting the data group will be replicated. Use the procedure Setting data group auditing values manually on page 297.
13. Ensure that there are no batch jobs or users on the system that will be the source for replication for the rest of this procedure. Do not allow users onto the source system or batch processing until you have successfully completed Step 17.
14. Start journaling using the following procedures as needed for your configuration:
   • For user journal replication, use Journaling for physical files on page 326 to start journaling on both source and target systems.
   • For IFS objects configured for advanced journaling, use Journaling for IFS objects on page 330.
   • For data areas or data queues configured for advanced journaling, use Journaling for data areas and data queues on page 334.

15. Synchronize the database files and objects on the systems between which replication occurs. Topic Performing the initial synchronization on page 483 includes instructions for how to establish a synchronization point and identifies the options available for synchronizing.
16. Verify your configuration. Topic Verifying the initial synchronization on page 487 identifies the additional aspects of your configuration that are necessary for successful replication.
17. Start the data groups. You should use the procedure Starting Selected Data Group Processes in the Using MIMIX book.


Checklist: Converting to remote journaling


Use this checklist to convert an existing data group from using MIMIX source-send processes to using MIMIX Remote Journal support for user journal replication.
Note: This checklist does not change values specified in data group entries that affect how files are cooperatively processed or how data areas, data queues, and IFS objects are processed. For example, files configured for legacy processing prior to this conversion will continue to be replicated with legacy cooperative processing.
Perform these tasks from the MIMIX management system unless these instructions indicate otherwise.
1. If you use a startup program, make the modifications to the program described in Changes to startup programs on page 305.
2. If you have implemented DDM password validation, you need to verify that your environment will allow MIMIX RJ support to work properly. Use topic Checking DDM password validation level in use on page 306.
3. Do the following to ensure that you have a functional transfer definition:
   a. Modify the transfer definition to identify the RDB directory entry. Use topic Changing a transfer definition to support remote journaling on page 186.
   b. Verify the communication link using Verifying the communications link for a data group on page 195.
4. If you are using the TCP protocol, ensure that the DDM TCP server is running using topic Starting the DDM TCP/IP server on page 308.
5. Connect the journal definitions for the local and remote journals using Adding a remote journal link on page 225. This procedure also creates the target journal definition.
6. Build the journaling environment on each system defined by the RJ pair using Building the journaling environment on page 219.
7. Modify the data group definition as follows:
   a. From the Work with DG Definitions display, type a 2 (Change) next to the data group you want and press Enter.
   b. The Change Data Group Definition (CHGDGDFN) display appears. Press Enter to see additional prompts.
   c. Specify *YES for the Use remote journal link prompt.
   d. When you are ready to accept the changes, press Enter.
8. To make the configuration changes effective, you need to end the data group you are converting to remote journaling and start it again as follows:
   a. Perform a controlled end of the data group (ENDDG command), specifying *ALL for Process and *CNTRLD for End process. Refer to topic Ending all replication in a controlled manner in the Using MIMIX book.

   b. Start data group replication using the procedure Starting selected data group processes in the Using MIMIX book. Be sure to specify *ALL for the Start processes prompt (PRC parameter) and *LASTPROC as the value for the Database journal receiver and Database sequence number prompts.
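As a sketch of step 8, the end and restart might be entered as shown below. The PRC and ENDOPT keywords appear elsewhere in this book; the journal receiver and sequence number values are supplied through command prompts, so prompt STRDG (F4) rather than assuming their keywords:

  ENDDG DGDFN(name system1 system2) PRC(*ALL) ENDOPT(*CNTRLD)
  STRDG DGDFN(name system1 system2) PRC(*ALL)  /* specify *LASTPROC at the Database journal receiver and Database sequence number prompts */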


Converting to MIMIX Dynamic Apply


Use either procedure in this topic to change a data group configuration to use MIMIX Dynamic Apply. In a MIMIX Dynamic Apply configuration, objects of type *FILE (LF, PF source and data) are replicated primarily using user journal replication processes. This configuration is the most efficient way to process these files.
• Converting using the Convert Data Group command on page 150 automatically converts a data group configuration.
• Checklist: manually converting to MIMIX Dynamic Apply on page 151 enables you to perform the conversion yourself.

It is recommended that you contact your Certified MIMIX Consultant for assistance before performing this procedure.
Requirements: Before starting, consider the following:
• Any data group that existed prior to installing version 5 must use one of these procedures in order to use MIMIX Dynamic Apply. As of version 5, newly created data groups are automatically configured to use MIMIX Dynamic Apply when its requirements and restrictions are met and shipped command defaults are used.
• Any data group to be converted must already be configured to use remote journaling.
• Any data group to be converted must have *SYSJRN specified as the value of Cooperative journal (COOPJRN).
• Keyed replication cannot be present in the data group configuration.
• A minimum level of i5/OS PTFs are required on both systems. For a complete list of required and recommended IBM PTFs, log in to Support Central and refer to the Technical Documents page.
• The conversion must be performed from the management system.
• The data group must be active when starting the conversion.

For additional information about configuration requirements and limitations of MIMIX Dynamic Apply, see Identifying logical and physical files for replication on page 105.

Converting using the Convert Data Group command


The Convert Data Group (CVTDG) command will automatically convert the configuration of specified data groups to enable MIMIX Dynamic Apply. This command automatically attempts to perform the steps described in the manual procedure and issues diagnostic messages if a step cannot be performed. Perform the following steps from the management system on an active data group:
1. From a command line, enter the command: CVTDG DGDFN(name system1 system2)
2. Watch for diagnostic messages in the job log and take any recovery action indicated. The conversion is complete when you see message LVI321A.
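For example, for a hypothetical data group named APPDG defined between systems LONDON and HONGKONG (the names are illustrative only), you would enter:

  CVTDG DGDFN(APPDG LONDON HONGKONG)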


Checklist: manually converting to MIMIX Dynamic Apply


Perform the following steps from the management system to enable an existing data group to use MIMIX Dynamic Apply:
1. Verify that the environment meets the requirements and restrictions. See Requirements and limitations of MIMIX Dynamic Apply on page 110.
2. Apply any IBM PTFs (or their supersedes) associated with i5/OS releases as they pertain to your environment. Log in to Support Central and refer to the Technical Documents page for a list of required and recommended IBM PTFs.
3. Verify that the System Manager jobs are active. See Starting the system and journal managers on page 296.
4. Verify that the data group is synchronized by running the MIMIX audits. See Verifying the initial synchronization on page 487.
5. Use the Work with Data Groups display to ensure that there are no files on hold and no failed or delayed activity entries. Refer to topic Preparing for a controlled end of a data group in the Using MIMIX book.
   Note: Topic Ending a data group in a controlled manner in the Using MIMIX book includes subtask Preparing for a controlled end of a data group and the other subtasks needed for Step 6 and Step 7.
6. Perform a controlled end of the data group you are converting. Follow the procedure for Performing the controlled end in the Using MIMIX book.
7. Ensure that there are no open commit cycles for the database apply process. Follow the steps for Confirming the end request completed without problems in the Using MIMIX book.
8. From the management system, change the data group definition so that the Cooperative journal (COOPJRN) parameter specifies *USRJRN. Use the command:
   CHGDGDFN DGDFN(name system1 system2) COOPJRN(*USRJRN)
9. Ensure that you have one or more data group object entries that specify the required values. These entries identify the items within the name space for replication. You may need to create additional entries to achieve desired results. For more information, see Identifying logical and physical files for replication on page 105.
10. To ensure that new files created while the data group is inactive are automatically journaled, create the QDFTJRN data areas in the libraries configured for replication of cooperatively processed files by running the following command from the source system:
   SETDGAUD DGDFN(name system1 system2) OBJTYPE(*AUTOJRN)
11. From the management system, use the following command to load the data group file entries from the target system. Ensure that the value you specify (*SYS1 or *SYS2) for the LODSYS parameter identifies the target system.
   LODDGFE DGDFN(name system1 system2) CFGSRC(*DGOBJE) UPDOPT(*ADD) LODSYS(value) SELECT(*NO)

For additional information about loading file entries, see Loading file entries from a data group's object entries on page 273.
12. Start journaling for all files not previously journaled. See Starting journaling for physical files on page 326.
13. Start the data group, specifying the command as follows:
   STRDG DGDFN(name system1 system2) CRLPND(*YES)
14. Verify that the data groups are synchronized by running the MIMIX audits. See Verifying the initial synchronization on page 487.


Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling


Use this checklist to change the configuration of an existing data group so that IFS objects, *DTAARA and *DTAQ objects can be replicated from entries in a user journal. This environment is also called advanced journaling. Topic User journal replication of IFS objects, data areas, data queues on page 72 describes the benefits and restrictions of replicating these objects from user journal entries. It also identifies the MIMIX processes used for replication and the purpose of tracking entries.
To convert existing data groups to use advanced journaling, do the following:
1. Determine if IFS objects, data areas, and data queues should be replicated in a data group shared with other objects undergoing database replication, or if these objects should be in a separate data group. Topic Planning for journaled IFS objects, data areas, and data queues on page 85 provides guidelines for the following planning considerations:
   • Serializing transactions with database files
   • Converting existing data groups, including examples
   • Database apply session balancing
   • User exit program considerations

2. Perform a controlled end of the data groups that will include objects to be replicated using advanced journaling. See the Using MIMIX book for how to end a data group in a controlled manner (ENDOPT(*CNTRLD)).
3. Ensure that all pending activity for objects and IFS objects has completed. Use the command WRKDGACTE STATUS(*ACTIVE) to display any pending activity entries. Any activities that are still in progress will be listed.
4. The data group definitions used for user journal replication of IFS objects, data areas, and data queues must specify *ALL as the value for Data group type (TYPE). Verify that the value in the data group definition is correct. If necessary, change the value.
5. Add or change data group IFS entries for the IFS objects you want to replicate. Be sure to specify *YES for the Cooperate with database prompt in procedure Adding or changing a data group IFS entry on page 282. For additional information, see Restrictions - user journal replication of IFS objects on page 121.
6. Add or change data group object entries for the data areas and data queues you want to replicate using the procedure Adding or changing a data group object entry on page 268. For additional information, see Restrictions - user journal replication of data areas and data queues on page 113.
7. Load the tracking entries associated with the data group IFS entries and data group object entries you configured. Use the procedures in Loading tracking entries on page 284.


8. Start journaling using the following procedures as needed for your configuration. If you ever plan to switch the data groups, you must also start journaling on the target system.
   • For IFS objects, use Starting journaling for IFS objects on page 330.
   • For data areas or data queues, use Starting journaling for data areas and data queues on page 334.

9. Verify that journaling is started correctly. This step is important to ensure the IFS objects, data areas, and data queues are actually replicated.
   • For IFS objects, see Verifying journaling for IFS objects on page 332.
   • For data areas and data queues, see Verifying journaling for data areas and data queues on page 336.
10. If you anticipate a delay between configuring data group IFS, object, or file entries and starting the data group, use the SETDGAUD command before synchronizing data between systems. Doing so will ensure that replicated objects are properly audited and that any transactions for the objects that occur between configuration and starting the data group are replicated. Use the procedure Setting data group auditing values manually on page 297.
11. Synchronize the IFS objects, data areas, and data queues between the source and target systems. For IFS objects, follow the Synchronize IFS Object (SYNCIFS) procedures. For data areas and data queues, follow the Synchronize Object (SYNCOBJ) procedures. Refer to chapter Synchronizing data between systems on page 472 for additional information.
12. If you are replicating large amounts of data, you should specify i5/OS journal receiver size options that provide large journal receivers and large journal entries. Journals created by MIMIX are configured to allow maximum amounts of data. Journals that already exist may need to be changed.
   a. After IFS objects are configured, perform the steps in Verifying journal receiver size options on page 213 to ensure journaling is configured appropriately.
   b. Change any journal receiver size options necessary using Changing journal receiver size options on page 213.
13. If you have database replication user exit programs, changes may need to be made. See User exit program considerations on page 87.
14. Once you have completed the preceding steps, start the data groups. For more information about starting data groups, see the Using MIMIX book.
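As a sketch of the synchronization in step 11, the commands might be invoked as shown below. The DGDFN keyword follows the convention used throughout this book, but object-selection parameters vary, so prompt each command (F4) for its actual keywords:

  SYNCIFS DGDFN(name system1 system2)  /* synchronize IFS objects identified by the data group's IFS entries */
  SYNCOBJ DGDFN(name system1 system2)  /* synchronize data areas and data queues identified by the object entries */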


Checklist: Converting to legacy cooperative processing


If you find that you cannot use MIMIX Dynamic Apply for logical and physical files, use this checklist to change the configuration of an existing data group so that user journal replication (MIMIX Dynamic Apply) is no longer used. This checklist changes the configuration so that physical data files can be processed using legacy cooperative processing. Logical files and physical source files will be processed using the system journal. For more information, see Requirements and limitations of legacy cooperative processing on page 111.
Important! Before you use this checklist, consider the following:
• As of version 5, newly created data groups are configured for MIMIX Dynamic Apply when default values are taken and configuration requirements are met.
• This checklist does not convert user journal replication processes from using remote journaling to MIMIX source-send processing.
• This checklist only affects the configuration of *FILE objects. The configuration of any other *DTAARA, *DTAQ, or IFS objects that are replicated through the user journal is not affected.

Perform the following steps to enable legacy cooperative processing and system journal replication:
1. Verify that the data group is synchronized by running the MIMIX audits. See Verifying the initial synchronization on page 487.
2. Use the Work with Data Groups display to ensure that there are no files on hold and no failed or delayed activity entries. Refer to topic Preparing for a controlled end of a data group in the Using MIMIX book.
   Note: Topic Ending a data group in a controlled manner in the Using MIMIX book includes subtask Preparing for a controlled end of a data group and the subtask needed for Step 3.
3. End the data group you are converting by performing a controlled end. Follow the procedure for Performing the controlled end in the Using MIMIX book.
4. From the management system, change the data group definition so that the Cooperative journal (COOPJRN) parameter specifies *SYSJRN. Use the command:
   CHGDGDFN DGDFN(name system1 system2) COOPJRN(*SYSJRN)
5. From the management system, use the following command to load the data group file entries from the target system. Ensure that the value you specify (*SYS1 or *SYS2) for the LODSYS parameter identifies the target system.
   LODDGFE DGDFN(name system1 system2) CFGSRC(*DGOBJE) UPDOPT(*REPLACE) LODSYS(value) SELECT(*NO)
   For additional information about loading file entries, see Loading file entries from a data group's object entries on page 273.
6. Optional step: Delete the QDFTJRN data areas. These data areas automatically start journaling for newly created files. This may not be desired because the journal image (JRNIMG) value for these files may be different than the value specified in the MIMIX configuration. Such a difference will be detected by the file attributes (#FILATR) audit. To delete these data areas, run the following command from each system:
   DLTDTAARA DTAARA(library/QDFTJRN)
7. Start the data group, specifying the command as follows:
   STRDG DGDFN(name system1 system2) CRLPND(*YES)


Chapter 6

System-level communications
This information is provided to assist you with configuring the System i5 communications that are necessary before you can configure MIMIX. MIMIX supports the following communications protocols:
• Transmission Control Protocol/Internet Protocol (TCP/IP)
• Systems Network Architecture (SNA)
• OptiConnect

MIMIX should have a dedicated communications line that is not shared with other applications, jobs, or users on the production system. A dedicated path will make it easier to fine-tune your MIMIX environment and to determine the cause of problems. For TCP/IP, it is recommended that the TCP/IP host name or interface used be in its own subnet. For SNA, it is recommended that MIMIX have its own communication line instead of sharing an existing SNA device. Your Certified MIMIX Consultant can assist you in determining your communications requirements and ensuring that communications can efficiently handle peak volumes of journal transactions.
If you plan to use system journal replication processes, you need to consider additional aspects that may affect the communications speed. These aspects include the type of objects being transferred and the size of data queues, user spaces, and files defined to cooperate with user journal replication processes. MIMIX IntelliStart can help you determine your communications requirements.
The topics in this chapter include:
• Configuring for native TCP/IP on page 159 describes using native TCP/IP communications and provides steps to prepare and configure your system for it.
• Configuring APPC/SNA on page 163 describes basic requirements for SNA communications.
• Configuring OptiConnect on page 163 describes basic requirements for OptiConnect communications and identifies MIMIX limitations when this communications protocol is used.

Configuring for native TCP/IP


MIMIX has the ability to use native TCP/IP communications over sockets. This allows users with TCP communications on their networks to use MIMIX without requiring the use of IBM ANYNET through SNA. Using TCP/IP communications may or may not improve your CPU usage, but if your primary communications protocol is TCP/IP, this can simplify your network configuration. Native TCP/IP communications allow MIMIX users greater flexibility and provide another option in the communications available for use on their System i5 systems.

MIMIX users can also continue to use IBM ANYNET support to run SNA protocols over TCP networks.
Preparing your system to use TCP/IP communications with MIMIX requires the following:
1. Configure both systems to use TCP/IP. The procedure for configuring a system to use TCP/IP is documented in the information included with the i5/OS software. Refer to the IBM TCP/IP Fastpath Setup book, SC41-5430, and follow the instructions to configure the system to use TCP/IP communications.
2. If you need to use port aliases, do the following:
   a. Refer to the examples Port aliases-simple example on page 160 and Port aliases-complex example on page 161.
   b. Create the port aliases for each system using the procedure in topic Creating port aliases on page 162.
3. Once the system-level communication is configured, you can begin the MIMIX configuration process.

Port aliases-simple example


Before using the MIMIX TCP/IP support, you must first configure the system to recognize the feature. This involves identifying the ports that will be used by MIMIX to communicate with other systems. The port identifiers used depend on the configuration of the MIMIX installations. MIMIX installations vary according to the needs of each enterprise. At a minimum, a MIMIX installation consists of one management system and one network system. A more complex MIMIX installation may consist of one management system and multiple network systems. A large enterprise may even have multiple MIMIX installations that are interconnected. Figure 8 shows a simple MIMIX installation in which the management system (LONDON) and a network system (HONGKONG) use the TCP communications protocol through port number 50410. Figure 9 shows a MIMIX installation with two network systems.
Figure 8. Creating Ports. In this example, the MIMIX installation consists of two systems.

Figure 9. Creating Ports. In this example, the MIMIX installation consists of three systems, two of which are network systems.

In both Figure 8 and Figure 9, if you need to use port aliases for port 50410, you need to have a service table entry on each system that equates the port number to the port alias. For example, you might have a service table entry on system LONDON that defines an alias of MXMGT for port number 50410. Similarly, you might have service table entries on systems HONGKONG and CHICAGO that define an alias of MXNET for port 50410. You would use these aliases in the PORT1 and PORT2 parameters in the transfer definition.
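For example, the service table entry on LONDON described above could be added directly with the Add Service Table Entry command; the alias and port come from this example, the text description is illustrative, and the protocol value follows the procedure in Creating port aliases on page 162:

  ADDSRVTBLE SERVICE('MXMGT') PORT(50410) PROTOCOL('TCP') +
             TEXT('MIMIX native TCP port alias')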

Port aliases-complex example


If a network system communicates with more than one management system (that is, it participates in multiple MIMIX installations), it must have a different port for each management system with which it communicates. Figure 10 shows an example of such an environment with two MIMIX installations. In the LIBA cluster, port 50410 is used to communicate between LONDON (the management system) and HONGKONG and CHICAGO (network systems). In the LIBB cluster, port 50411 is used to communicate between CHICAGO (the management system for this cluster) and MEXICITY and CAIRO. The CHICAGO system has two port numbers defined, one for each MIMIX installation in which it participates.
Figure 10. Creating Port Aliases. In this example, the system CHICAGO participates in two MIMIX installations and uses a separate port for each MIMIX installation.

If you need to use port aliases in an environment such as Figure 10, you need to have a service table entry on each system that equates the port number to the port alias. In this example, CHICAGO would require two port aliases and two service table entries. For example, you might use a port alias of LIBAMGT for port 50410 on LONDON and an alias of LIBANET for port 50410 on both HONGKONG and CHICAGO. You might use an alias of LIBBMGT for port 50411 on CHICAGO and an alias of LIBBNET for port 50411 on both CAIRO and MEXICITY. You would use these port aliases in the PORT1 and PORT2 parameters on the transfer definitions.
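A sketch of the two service table entries that CHICAGO would need in this example (the text descriptions are illustrative):

  ADDSRVTBLE SERVICE('LIBANET') PORT(50410) PROTOCOL('TCP') +
             TEXT('MIMIX port alias for the LIBA installation')
  ADDSRVTBLE SERVICE('LIBBMGT') PORT(50411) PROTOCOL('TCP') +
             TEXT('MIMIX port alias for the LIBB installation')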

Creating port aliases


The following procedure describes the steps for creating port aliases which allow MIMIX installations to communicate through TCP/IP.
Notes:
• Perform this procedure on each system in the MIMIX installation that will use the TCP protocol.
• To allow communications in both directions between a pair of systems, such as between a management system and a network system, you need to add port aliases for both systems in the pair on each system.
• If you are using more than one MIMIX installation, define a different set of aliases for each MIMIX installation.

Do the following to create a port alias on a system:
1. From a command line, type the command CFGTCP and press Enter.
2. The Configure TCP/IP menu appears. Select option 21 (Configure related tables) and press Enter.

3. The Configure Related Tables display appears. Select option 1 (Work with service table entries) and press Enter.
4. The Work with Service Table Entries display appears. Do the following:
   a. Type a 1 in the Opt column next to the blank lines at the top of the list.
   b. In the blank at the top of the Service column, use uppercase characters to specify the alias that the System i5 will use to identify this port as a MIMIX native TCP port.
      Attention: MIMIX requires that you restrict the length of port aliases to 14 or fewer characters and suggests that you specify the alias in uppercase characters.
      Note: Port alias names are case sensitive and must be unique to the system on which they are defined. For environments that have only one MIMIX installation, Lakeview Technology recommends that you use the same port number or same port alias on each system in the MIMIX installation.
   c. In the blank at the top of the Port column, specify the number of an unused port ID to be associated with the alias. The port ID can be any number greater than 1024 and less than 55534 that is not being used by another application. You can page down through the list to ensure that the number is not being used by the system.
   d. In the blank at the top of the Protocol column, type TCP to identify this entry as using TCP/IP communications.
   e. Press Enter.
5. The Add Service Table Entry (ADDSRVTBLE) display appears. Verify that the information shown for the alias and port is what you want. At the Text 'description' prompt, type a description of the port alias, enclosed in apostrophes, and then press Enter.

Configuring APPC/SNA
Before you create a transfer definition that uses the SNA protocol, a functioning SNA (APPN or APPC) line, controller, and device must exist between the systems that will be identified by the transfer definition. If a line, controller, and device do not exist, consult your network administrator before continuing.

Configuring OptiConnect
If you plan to use the OptiConnect protocol, a functioning OptiConnect line must exist between the two systems that you identify in the transfer definition. You can use the OptiConnect product from IBM for all communication for most1 MIMIX processes. Use the IBM book OptiConnect for OS/400 to install and verify OptiConnect communications. Then you can do the following:


Ensure that the QSOC library is in the system portion of the library list. Use the command DSPSYSVAL SYSVAL(QSYSLIBL) to verify whether the QSOC library is in the system portion of the library list. If it is not, use the CHGSYSVAL command to add this library to the system library list.
When you create the transfer definition, specify *OPTI for the transfer protocol.
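For example, assuming the system library list currently contains only the IBM-shipped default libraries, a command such as the following adds QSOC. Because CHGSYSVAL replaces the entire value, specify your system's complete existing list plus QSOC:

CHGSYSVAL SYSVAL(QSYSLIBL) VALUE('QSYS QSYS2 QHLPSYS QUSRSYS QSOC') /* list your existing entries plus QSOC */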

1. The #FILDTA audit and the Compare File Data (CMPFILDTA) command require TCP/IP communications.


Chapter 7

Configuring system definitions


By creating a system definition, you identify to MIMIX the characteristics of a System i5 system that participates in a MIMIX installation. When you create a system definition, MIMIX automatically creates a journal definition for the security audit journal (QAUDJRN) for the associated system. This journal definition is used by MIMIX system journal replication processes.
It is recommended that you avoid naming system definitions based on their roles. System roles such as source, target, production, and backup change upon switching.
The topics in this chapter include:
Tips for system definition parameters on page 167 provides tips for using the more common options for system definitions.
Creating system definitions on page 170 provides the steps to follow for creating system definitions.
Changing a system definition on page 171 provides the steps to follow for changing a system definition.
Multiple network system considerations on page 172 describes recommendations when configuring an environment that has multiple network systems.


Tips for system definition parameters


This topic provides tips for using the more common options for system definitions. Context-sensitive help is available online for all options on the system definition commands.

System definition (SYSDFN) This parameter is a single-part name that represents a system within a MIMIX installation. This name is a logical representation and does not need to match the system name that it represents.
Note: In the first part of the name, the first character must be either A - Z, $, #, or @. The remaining characters can be alphanumeric and can contain a $, #, @, a period (.), or an underscore (_).

System type (TYPE) This parameter indicates the role of this system within the MIMIX installation. A system can be a management (*MGT) system or a network (*NET) system. Only one system in the MIMIX installation can be a management system.

Transfer definitions (PRITFRDFN, SECTFRDFN) These parameters identify the primary and secondary transfer definitions used for communicating with the system. The communications path and protocol are defined in the transfer definitions. For MIMIX to be operational, the transfer definition names you specify must exist. MIMIX does not automatically create transfer definitions. If you accept the default value PRIMARY for the Primary transfer definition, create a transfer definition by that name. If you specify a Secondary transfer definition, it will be used by MIMIX if the communications path specified by the primary transfer definition is not available.

Cluster member (CLUMBR) You can specify whether you want this system definition to be a member of a cluster. The system (node) will not be added to the cluster until the system manager is started for the first time.

Cluster transfer definition (CLUTFRDFN) You can specify the transfer definition that cluster resource services will use to communicate with the node and for the node to communicate with other nodes in the cluster. You must specify *TCP as the transfer protocol.

Message handling (PRIMSGQ, SECMSGQ) MIMIX uses the centralized message log facility which is common to all MIMIX products. These parameters provide additional flexibility by allowing you to identify the message queues associated with the system definition and define the message filtering criteria for each message queue. By default, the primary message queue, MIMIX, is located in the MIMIXQGPL library. You can specify a different message queue or optionally specify a secondary message queue. You can also control the severity and type of messages that are sent to each message queue.

Manager delay times (JRNMGRDLY, SYSMGRDLY) Two parameters define the delay times used for all journal management and system management jobs. The value of the journal manager delay parameter determines how often the journal manager process checks for work to perform. The value of the system manager delay parameter determines how often the system manager process checks for work to perform.


Output queue values (OUTQ, HOLD, SAVE) These parameters identify an output queue used by this system definition and define characteristics of how the queue is handled. Any MIMIX functions that generate reports use this output queue. You can hold spooled files on the queue and save spooled files after they are printed.

Keep history (KEEPSYSHST, KEEPDGHST) Two parameters specify the number of days to retain MIMIX system history and data group history. MIMIX system history includes the system message log. Data group history includes time stamps and distribution history. You can keep both types of history information on the system for up to a year.

Keep notifications (KEEPNEWNFY, KEEPACKNFY) Two parameters specify the number of days to retain new and acknowledged notifications. The Keep new notifications (days) parameter specifies the number of days to retain new notifications in the MIMIX data library. The Keep acknowledged notifications (days) parameter specifies the number of days to retain acknowledged notifications in the MIMIX data library.

MIMIX data library, storage limit (KEEPMMXDTA, DTALIBASP, DSKSTGLMT) Three parameters define information about MIMIX data libraries on the system. The Keep MIMIX data (days) parameter specifies the number of days to retain objects in the MIMIX data library, including the container cache used by system journal replication processes. The MIMIX data library ASP parameter identifies the auxiliary storage pool (ASP) from which the system allocates storage for the MIMIX data library. For libraries created in a user ASP, all objects in the library must be in the same ASP as the library. The Disk storage limit (GB) parameter specifies the maximum amount of disk storage that may be used for the MIMIX data libraries.

User profile and job descriptions (SBMUSR, MGRJOBD, DFTJOBD) MIMIX runs under the MIMIXOWN user profile and uses several job descriptions to optimize MIMIX processes. The default job descriptions are stored in the MIMIXQGPL library.

Job restart time (RSTARTTIME) System-level MIMIX jobs, including the system manager and journal manager, restart daily to maintain the MIMIX environment. You can change the time at which these jobs restart. The management or network role of the system affects the results of the time you specify on a system definition. Changing the job restart time is considered an advanced technique.

Printing (CPI, LPI, FORMLEN, OVRFLW, COPIES) These parameters control characteristics of printed output.

Product library (PRDLIB) This parameter is used for installing MIMIX into a switchable independent ASP, and allows you to specify a MIMIX installation library that does not match the library name of the other system definitions. The only time this parameter should be used is in the case of an INTRA system (which is handled by the default value) or in replication environments where it is necessary to have extra MIMIX system definitions that will switch locations along with the switchable independent ASP. Due to its complexity, changing the product library is considered an advanced technique and should not be attempted without the assistance of a Certified MIMIX Consultant.

ASP group (ASPGRP) This parameter is used for installing MIMIX into a switchable independent ASP, and defines the ASP group (independent ASP) in which the product library exists. Again, this parameter should only be used in replication environments involving a switchable independent ASP. Due to its complexity, changing the ASP group is considered an advanced technique and should not be attempted without the assistance of a Certified MIMIX Consultant.


Creating system definitions


To create a system definition, do the following:
1. From the MIMIX Configuration Menu, select option 1 (Work with system definitions) and press Enter.
2. The Work with System Definitions display appears. Type a 1 (Create) next to the blank line at the top of the list area and press Enter.
3. The Create System Definition (CRTSYSDFN) display appears. Specify a name at the System definition prompt. Once created, the name can only be changed by using the Rename System Definition command.
4. Specify the appropriate value for the system you are defining at the System type prompt.
5. Specify the names of the transfer definitions you want at the Primary transfer definition and, if desired, the Secondary transfer definition prompts.
6. If the system definition is for a cluster environment, do the following:
   a. Specify *YES at the Cluster member prompt.
   b. Verify that the value of the Cluster transfer definition is what you want. If necessary, change the value.
7. If you want to use a secondary message queue, at the prompts for Secondary message handling specify the name and library of the message queue and values indicating the severity and the information type of messages to be sent to the queue.
8. At the Description prompt, type a brief description of the system definition.
9. If you want to verify or change values for additional parameters, press F10 (Additional parameters).
10. To create the system definition, press Enter.
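The same definition can be created directly from a command line with the CRTSYSDFN command. The following minimal sketch assumes a network system named CHICAGO and a transfer definition named PRIMARY; all values shown are assumptions for illustration:

CRTSYSDFN SYSDFN(CHICAGO) TYPE(*NET) PRITFRDFN(PRIMARY) TEXT('Chicago network system') /* illustrative values */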


Changing a system definition


To change a system definition, do the following:
1. From the MIMIX Configuration Menu, select option 1 (Work with system definitions) and press Enter.
2. The Work with System Definitions display appears. Type a 2 (Change) next to the system definition you want and press Enter.
3. The Change System Definition (CHGSYSDFN) display appears. Press F10 (Additional parameters).
4. Locate the prompt for the parameter you need to change and specify the value you want. Press F1 (Help) for more information about the values for each parameter.
5. To save the changes, press Enter.
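From a command line, the equivalent change can be made with the CHGSYSDFN command. This hedged example, which shortens the retained system history to 30 days for a hypothetical system definition named CHICAGO, is illustrative only:

CHGSYSDFN SYSDFN(CHICAGO) KEEPSYSHST(30) /* illustrative values */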


Multiple network system considerations


When configuring an environment that has multiple network systems, it is recommended that each system definition in the environment specify the same name for the Primary transfer definition prompt. This configuration is necessary for the MIMIX system managers to communicate between the management system and all systems in the network. Data groups can use the same transfer definitions that the system managers use, or they can use differently named transfer definitions. Similarly, if you use secondary transfer definitions, it is recommended that each system definition in the multiple network environment specify the same name for the Secondary transfer definition prompt. (The value of the Secondary transfer definition should be different from the value of the Primary transfer definition.)
Figure 11 shows system definitions in a multiple network system environment. The management system (LONDON) specifies the value PRIMARY for the primary transfer definition in its system definition. The management system can communicate with the other systems using any transfer definition named PRIMARY that has a value for System 1 or System 2 that resolves to its system name (LONDON). Figure 12 shows the recommended transfer definition configuration, which uses the value *ANY for both systems identified by the transfer definition. The management system LONDON could also use any transfer definition that specified the name LONDON as the value for either System 1 or System 2.
The default value for the name of a transfer definition is PRIMARY. If you use a different name, you need to specify that name as the value for the Primary transfer definition prompt in all system definitions in the environment.
Figure 11. Example of system definition values in a multiple network system environment.
                         Work with System Definitions
                                                          System:   LONDON
 Type options, press Enter.
   1=Create   2=Change   3=Copy   4=Delete   5=Display   6=Print   7=Rename
   11=Verify communications link    12=Journal definitions
   13=Data group definitions        14=Transfer definitions

                       -Transfer Definitions-   Cluster
 Opt  System    Type    Primary     Secondary   Member
  __  _______
  __  CHICAGO   *NET    PRIMARY     *NONE       *NO
  __  NEWYORK   *NET    PRIMARY     *NONE       *NO
  __  LONDON    *MGT    PRIMARY     *NONE       *NO

Figure 12. Example of a contextual (*ANY) transfer definition in use for a multiple network system environment.
                        Work with Transfer Definitions
                                                          System:   LONDON
 Type options, press Enter.
   1=Create   2=Change   3=Copy   4=Delete   5=Display   6=Print   7=Rename
   11=Verify communications link

       ---------Definition---------                        Threshold
 Opt   Name        System 1   System 2   Protocol         (MB)
  __   __________  _______    ________
  __   PRIMARY     *ANY       *ANY       *TCP             *NOMAX


Chapter 8

Configuring transfer definitions


By creating a transfer definition, you identify to MIMIX the communications path and protocol to be used between two systems. You need at least one transfer definition for each pair of systems between which you want to perform replication. A pair of systems consists of a management system and a network system. If you want to be able to use different transfer protocols between a pair of systems, create a transfer definition for each protocol. System-level communication must be configured and operational before you can use a transfer definition.
You can also define an additional communications path in a secondary transfer definition. If configured, MIMIX can automatically use a secondary transfer definition if the path defined in your primary transfer definition is not available.
In an Intra environment, a transfer definition defines a communications path and protocol to be used between the two product libraries used by Intra. For detailed information about configuring an Intra environment, refer to Configuring Intra communications on page 559.
Once transfer definitions exist for MIMIX, they can be used for other functions, such as the Run Command (RUNCMD), or by other MIMIX products for their operations.
The topics in this chapter include:
Tips for transfer definition parameters on page 176 provides tips for using the more common options for transfer definitions.
Using contextual (*ANY) transfer definitions on page 181 describes using the value (*ANY) when configuring transfer definitions.
Creating a transfer definition on page 184 provides the steps to follow for creating a transfer definition.
Changing a transfer definition on page 186 provides the steps to follow for changing a transfer definition. This topic also includes a sub-task for changing a transfer definition when converting to a remote journaling environment.
Finding the system database name for RDB directory entries on page 188 provides the steps to follow for finding the system database name for RDB directory entries.
Starting the Lakeview TCP/IP server on page 189 provides the steps to follow if you need to start the Lakeview TCP/IP server.
Using autostart job entries to start the TCP server on page 190 provides the steps to configure the Lakeview TCP server to start automatically every time the MIMIX subsystem is started.
Verifying a communications link for system definitions on page 194 provides the steps to verify that the communications link defined for each system definition is operational.
Verifying the communications link for a data group on page 195 provides a procedure to verify the primary transfer definition used by the data group.


Tips for transfer definition parameters


This topic provides tips for using the more common options for transfer definitions. Context-sensitive help is available online for all options on the transfer definition commands.

Transfer definition (TFRDFN) This parameter is a three-part name that identifies a communications path between two systems. The first part of the name identifies the transfer definition. The second and third parts of the name identify two different system definitions which represent the systems between which communication is being defined. Lakeview recommends that you use PRIMARY as the name of one transfer definition. To support replication, a transfer definition must identify the two systems that will be used by the data group. You can explicitly specify the two systems, or you can allow MIMIX to resolve the names of the systems. For more information about allowing MIMIX to resolve the system names, see Using contextual (*ANY) transfer definitions on page 181.
Note: In the first part of the name, the first character must be either A - Z, $, #, or @. The remaining characters can be alphanumeric and can contain a $, #, @, a period (.), or an underscore (_). For more information, see Naming convention for remote journaling environments with 2 systems on page 206.

Short transfer definition name (TFRSHORTN) This parameter specifies the short name of the transfer definition to be used in generating a relational database (RDB) directory entry name. The short transfer definition name must be a unique, four-character name if you specify to have MIMIX manage your RDB directory entries. Lakeview recommends that you use the default value *GEN to generate the name. The generated name is a concatenation of the first character of the transfer definition name, the last character of the system 1 name, and the last character of the system 2 name; the fourth character will be either a blank, a letter (A - Z), or a single-digit number (0 - 9).

Transfer protocol (PROTOCOL) This parameter specifies the communications protocol to be used. Each protocol has a set of related parameters. If you change the protocol specified after you have created the transfer definition, MIMIX saves information about both protocols.

For the *TCP protocol, the following parameters apply:
System x host name or address (HOST1, HOST2) These two parameters specify the host name or address of system 1 and system 2, respectively. The name is a mixed-case host alias name or a TCP address (nnn.nnn.nnn.nnn) and can be up to 256 characters in length. For the HOST1 parameter, the special value *SYS1 indicates that the host name is the same as the name specified for System 1 in the Transfer definition parameter. Similarly, for the HOST2 parameter, the special value *SYS2 indicates that the host name is the same as the name specified for System 2 in the Transfer definition parameter.
System x port number or alias (PORT1, PORT2) These two parameters specify the port number or port alias of system 1 and system 2, respectively. The value of each parameter can be a 14-character mixed-case TCP port alias or a port number in the range 1000 through 55534. Lakeview Technology recommends using values between 40000 and 55500 to avoid potential conflicts with designations made by the operating system. By default, the PORT1 parameter uses the port 50410. For the PORT2 parameter, the default special value *PORT1 indicates that the value specified on the System 1 port number or alias (PORT1) parameter is used. If you configured TCP using port aliases in the service table, specify the alias name instead of the port number.

For the *SNA protocol, the following parameters apply:
System x location name (LOCNAME1, LOCNAME2) These two parameters specify the location name or address of system 1 and system 2, respectively. The value of each parameter is the unique location name that identifies the system to remote devices. For the LOCNAME1 parameter, the special value *SYS1 indicates that the location name is the same as the name specified for System 1 on the Transfer definition (TFRDFN) parameter. Similarly, for the LOCNAME2 parameter, the special value *SYS2 indicates that the location name is the same as the name specified for System 2 on the Transfer definition (TFRDFN) parameter.
System x network identifier (NETID1, NETID2) These two parameters specify the name of the network for system 1 and system 2, respectively. The default value *LOC indicates that the network identifier for the location name associated with the system is used. The special value *NETATR indicates that the value specified in the system network attributes is used. The special value *NONE indicates that the network has no name. For the NETID2 parameter, the special value *NETID1 indicates that the network identifier specified on the System 1 network identifier (NETID1) parameter is used.
SNA mode (MODE) This parameter specifies the name of the mode description used for communication. The default name is MIMIX. The special value *NETATR indicates that the value specified in the system network attributes is used.
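To illustrate how the TCP parameters fit together, the following hedged sketch creates a transfer definition between hypothetical system definitions LONDON and CHICAGO; the host names, port, and description are assumptions, not required values:

CRTTFRDFN TFRDFN(PRIMARY LONDON CHICAGO) PROTOCOL(*TCP) HOST1(LONDON) HOST2(CHICAGO) PORT1(50410) TEXT('London-Chicago TCP path') /* PORT2 defaults to *PORT1 */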

For the *OPTI protocol, the following parameters apply:
System x location name (LOCNAME1, LOCNAME2) These two parameters specify the location name or address of system 1 and system 2, respectively. The value of each parameter is the unique location name that identifies the system to remote devices. For the LOCNAME1 parameter, the special value *SYS1 indicates that the location name is the same as the name specified for System 1 on the Transfer definition (TFRDFN) parameter. Similarly, for the LOCNAME2 parameter, the special value *SYS2 indicates that the location name is the same as the name specified for System 2 on the Transfer definition (TFRDFN) parameter.

Threshold size (THLDSIZE) This parameter is accessible when you press F10 (Additional parameters). It specifies the maximum size of files and objects that are sent. If a file or object exceeds the threshold, it is not sent. Valid values range from 1 through 9999999. The special value *NOMAX indicates that no maximum value is set. Transmitting large files and objects can consume excessive communications bandwidth and negatively impact communications performance, especially for slow communication lines.


Relational database (RDB) This parameter is accessible when you press F10 (Additional parameters) and is valid when the default remote journaling configuration is used. The parameter consists of four relational database values, which identify the communications path used by the i5/OS remote journal function to transport journal entries: a relational database directory entry name, two system database names, and a management indicator for directory entries. This parameter creates two RDB directory entries, one on each system identified in the transfer definition. Each entry identifies the other system's relational database.
Note: If you use the value *ANY for both system 1 and system 2 on the transfer definition, *NONE is used for the directory entry name, and no directory entry is generated. If MIMIX is managing your RDB directory entries, a directory entry is generated if you use the value *ANY for only one of the systems on the transfer definition. This directory entry is generated for the system that is specified as something other than *ANY. For more information about the use of the value *ANY on transfer definitions, see Using contextual (*ANY) transfer definitions on page 181.
The four elements of the relational database parameter are:
Directory entry This element specifies the name of the relational database entry. The default value *GEN causes MIMIX to create an RDB entry and add it to the relational database. The generated name is in the format MX_nnnnnnnnnn_ssss, where nnnnnnnnnn is the 10-character installation name, and ssss is the transfer definition short name. If you specify a value for the RDB parameter, it is recommended that you limit its length to 18 characters. When you specify the special value *NONE, the directory entry is not added or changed by MIMIX.
System 1 relational database This element specifies the name of the relational database for System 1. The default value *SYSDB specifies that MIMIX will determine the relational database name. If you are managing the RDB directory entries and you need to determine the system database name, refer to Finding the system database name for RDB directory entries on page 188.
Note: For remote journaling that uses an independent ASP, specify the database name for the independent ASP.
System 2 relational database This element specifies the name of the relational database for System 2. The default value *SYSDB specifies that MIMIX will determine the relational database name. If you are managing the RDB directory entries and you need to determine the system database name, refer to Finding the system database name for RDB directory entries on page 188.
Note: For remote journaling that uses an independent ASP, specify the database name for the independent ASP.
Manage directory entries This element specifies whether MIMIX will manage the relational database directory entries associated with the transfer definition, whether the directory entry name is specified or generated by MIMIX. Management of the relational database directory entries consists of adding, changing, and deleting the directory entries on both systems, as needed, when the transfer definition is created, changed, or deleted. The special value *DFT indicates that MIMIX manages the relational database directory entries only when the name is generated using the special value *GEN on the Directory entry element of this parameter. The special value *YES indicates that the directory entries on each system are managed by MIMIX. If the relational database directory entries do not exist, MIMIX adds them. If they do exist, MIMIX changes them to match the values specified by the Relational database (RDB) parameter. When any of the transfer definition relational database values change, the directory entry is also changed. When the transfer definition is deleted, the directory entries are also deleted.
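A hedged example of setting this parameter from a command line, assuming an existing transfer definition named PRIMARY between hypothetical systems LONDON and CHICAGO, with MIMIX generating and managing the directory entries:

CHGTFRDFN TFRDFN(PRIMARY LONDON CHICAGO) RDB(*GEN *SYSDB *SYSDB *YES) /* illustrative values */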


Using contextual (*ANY) transfer definitions


When the three-part name of a transfer definition specifies the value *ANY for System 1 or System 2 instead of system names, MIMIX uses information from the context in which the transfer definition is called to resolve to the correct system. Such a transfer definition is called a contextual transfer definition.
For remote journaling environments, best practice is to use transfer definitions that identify specific system definitions in the three-part transfer definition name. Although you can use contextual transfer definitions with remote journaling, they are not recommended. For more information, see Considerations for remote journaling on page 182.
In MIMIX source-send configurations, a contextual transfer definition may be an aid in configuration. For example, you might create a transfer definition named PRIMARY SYSA *ANY. This definition can be used to provide the necessary parameters for establishing communications between SYSA and any other system. The *ANY value represents several transfer definitions, one for each system definition. For example, a transfer definition PRIMARY SYSA *ANY in an installation that has three system definitions (SYSA, SYSB, INTRA) represents three transfer definitions:
PRIMARY SYSA SYSA
PRIMARY SYSA SYSB
PRIMARY SYSA INTRA

Search and selection process


Data group definitions and system definitions include parameters that identify associated transfer definitions. When an operation requires a transfer definition, MIMIX uses the context of the operation to determine the fully qualified name. For example, when starting a data group, MIMIX uses information in the data group definition, the systems specified in the data group name, and the specified transfer definition name to derive the fully qualified transfer definition name. If MIMIX is still unable to find an appropriate transfer definition, the following search order is used:
1. PRIMARY SYSA SYSB
2. PRIMARY *ANY SYSB
3. PRIMARY SYSA *ANY
4. PRIMARY SYSB SYSA
5. PRIMARY *ANY SYSA
6. PRIMARY SYSB *ANY
7. PRIMARY *ANY *ANY
When you specify *ANY in the three-part name of a transfer definition, and you have specified *TFRDFN for the Protocol parameter on such commands as RUNCMD or VFYCMNLNK, MIMIX searches your system and selects those systems with a transfer definition that matches the transfer definition that you specified, for example, (PRIMARY SYSA SYSB).

Considerations for remote journaling


Best practice for a remote journaling environment is to use a transfer definition that identifies specific system definitions in the three-part transfer definition name. By specifying both systems, the transfer definition can be used for replication from either direction.
If you do use a contextual transfer definition in a remote journaling environment, the value *ANY can be used for the system where the local journal (source) resides. This value can be in either the second or third part of the three-part name. For example, a transfer definition of PRIMARY name *ANY is valid in a remote journaling environment, where name identifies the system definition for the system where the remote journal (target) resides. A transfer definition of PRIMARY *ANY name is also valid. The command would look like this:
CRTTFRDFN TFRDFN(PRIMARY name *ANY) TEXT('description')
MIMIX Remote Journal support requires that each transfer definition that will be used has a relational database (RDB) directory entry to properly identify the remote system. An RDB directory entry cannot be added to a transfer definition using the value *ANY for the remote system.
To support a switchable data group when using contextual transfer definitions, each system in the remote journaling environment must be defined by a contextual transfer definition. For example, in an environment with systems NEWYORK and CHICAGO, you would need a transfer definition named PRIMARY NEWYORK *ANY as well as a transfer definition named PRIMARY CHICAGO *ANY.

Considerations for MIMIX source-send configurations


When creating a transfer definition for a MIMIX source-send configuration that uses contextual system capability (*ANY) and the TCP protocol, take the default values for the other parameters on the CRTTFRDFN command. For example, using the naming conventions for contextual systems, the command would look like this:
CRTTFRDFN TFRDFN(PRIMARY *ANY *ANY) TEXT('Recommended configuration')
Note: Ensure that you consult with your site TCP administrator before making these changes.
For an Intra environment, an additional transfer definition is needed. If there is an Intra system definition defined, the transfer definition must specify a unique port number to communicate with Intra. The following is an example of an additional transfer definition that uses port number 42345 to establish communications with the Intra system:
CRTTFRDFN TFRDFN(PRIMARY *ANY INTRA) PORT2(42345) TEXT('Recommended configuration')


Naming conventions for contextual transfer definitions


The following suggested naming conventions make contextual (*ANY) transfer definitions more useful in your environment.
*TCP protocol: The MIMIX system definition names should correspond to DNS or host table entries that tie the names to a specific TCP address.
*SNA protocol: The MIMIX system definition names must match the SNA environment (controller names) for the respective systems. The MIMIX system definitions should match the network attribute system name (DSPNETA). For example, with two MIMIX systems called SYSA and SYSB, on the SYSA system there would have to be a controller called SYSB that is used for SYSA to SYSB communications. Conversely, on SYSB, a SYSA controller would be necessary.
*OPTI protocol: The MIMIX system definition names must match the OptiConnect names for the systems (DSPOPCLNK).

Additional usage considerations for contextual transfer definitions


The Run Command (RUNCMD) and the Verify Communications Link (VFYCMNLNK) commands require specific system names to verify communications between systems. These commands do not handle transfer definitions that specify *ANY in the three-part name. When the VFYCMNLNK command is called from option 11 on the Work with System Definitions display or option 11 on the Work with Data Groups display, MIMIX determines the specific system names. However, when the command is called from option 11 on the Work with Transfer Definitions display, entered from a command line, or included in automation programs, you will receive an error message if the transfer definition has the value *ANY for either system 1 or system 2.


Creating a transfer definition


System-level communication must be configured and operational before you can use a transfer definition. To create a transfer definition, do the following:
1. Access the Work with Transfer Definitions display by doing one of the following:
   From the MIMIX Configuration Menu, select option 2 (Work with transfer definitions) and press Enter.
   From the MIMIX Cluster Menu, select option 21 (Work with transfer definitions) and press Enter.
2. The Work with Transfer Definitions display appears. Type 1 (Create) next to the blank line at the top of the list area and press Enter.
3. The Create Transfer Definition display appears. Do the following:
   a. At the Transfer definition prompts, specify a name and the two system definitions between which communications will occur.
   b. At the Short transfer definition name prompt, accept the default value *GEN to generate a short transfer definition name. This short transfer definition name is used in generating relational database directory entry names if you specify to have MIMIX manage your RDB directory entries.
   c. At the Transfer protocol prompt, specify the communications protocol you want, then press Enter. If you are creating a transfer definition for a cluster environment, you must accept the default of *TCP for the Transfer protocol prompt.
4. Additional parameters for the protocol you selected appear on the display. Verify that the values shown are what you want. Make any necessary changes.
5. At the Description prompt, type a text description of the transfer definition, enclosed in apostrophes.
6. Optional step: If you need to set a maximum size for files and objects to be transferred, press F10 (Additional parameters). At the Threshold size (MB) prompt, specify a valid value.
7. Optional step: If you need to change the relational database information, press F10 (Additional parameters). See Tips for transfer definition parameters on page 176 for details about the Relational database (RDB) parameter. If MIMIX is not managing the RDB directory entries, it may be necessary to change the RDB values.
8. To create the transfer definition, press Enter.



Changing a transfer definition


To change a transfer definition, do the following:
1. Access the Work with Transfer Definitions display: from the MIMIX Configuration Menu, select option 2 (Work with transfer definitions) and press Enter.
2. The Work with Transfer Definitions display appears. Type 2 (Change) next to the definition you want and press Enter.
3. The Change Transfer Definition (CHGTFRDFN) display appears. If you want to change which protocol is used between the specified systems, specify the value you want for the Transfer protocol prompt.
4. Press Enter to display the parameters for the specified transfer protocol. Locate the prompt for the parameter you need to change and specify the value you want. Press F1 (Help) for more information about the values for each parameter.
5. If you need to set a maximum size for files and objects to be transferred, press F10 (Additional parameters). At the Threshold size (MB) prompt, specify a valid value.
6. If you need to change your relational database information, press F10 (Additional parameters). At the Relational database (RDB) prompt, specify the desired values for each of the four elements and press Enter. For special considerations when changing transfer definitions that are configured to use RDB directory entries, see Tips for transfer definition parameters on page 176.
7. To save changes to the transfer definition, press Enter.

Changing a transfer definition to support remote journaling


If the value *ANY is specified for either system in the transfer definition, refer to Using contextual (*ANY) transfer definitions on page 181 before you complete this procedure. Contextual transfer definitions are not recommended in a remote journaling environment.
To support remote journaling, modify the transfer definition you plan to use as follows:
1. From the MIMIX Configuration Menu, select option 2 (Work with transfer definitions) and press Enter.
2. The Work with Transfer Definitions display appears. Type a 2 (Change) next to the definition you want and press Enter.
3. The Change Transfer Definition (CHGTFRDFN) display appears. Press F10 (Additional parameters), then press Page Down.
4. At the Relational database (RDB) prompt, specify the desired values for each of the four elements and press Enter.
Note: See Tips for transfer definition parameters on page 176 for detailed information about the Relational database (RDB) parameter. Also see Finding the system database name for RDB directory entries on page 188 for special considerations when changing transfer definitions that are configured to use RDB directory entries.


Finding the system database name for RDB directory entries


To find the system database name, do the following:
1. Log in to the system that was specified for System 1 in the transfer definition.
2. From the command line, type DSPRDBDIRE and press Enter. Look for the relational database name that has a corresponding remote location name of *LOCAL.
3. Repeat steps 1 and 2 to find the system database name for System 2.

Using i5/OS commands to work with RDB directory entries


If MIMIX is not managing your RDB directory entries (that is, if you did not accept the default values *GEN for the Directory entry element and *DFT for the Manage directory entries element when you created your transfer definition, or if you specified *NO for the Manage directory entries element on the Relational Database (RDB) parameter), you can use the i5/OS Add RDB Directory Entry (ADDRDBDIRE) command to add RDB directory entries. You can also use the i5/OS Change RDB Directory Entry (CHGRDBDIRE) command to change an existing RDB directory entry.
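For example, a hedged sketch of manually adding a directory entry for a remote system; the entry name CHICAGO and its remote location name are assumptions for illustration only:

ADDRDBDIRE RDB(CHICAGO) RMTLOCNAME(CHICAGO) TEXT('RDB entry for MIMIX remote journaling') /* illustrative values */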


Starting the Lakeview TCP/IP server


Use this procedure if you need to start the Lakeview TCP/IP server. You can also start the TCP/IP server automatically. Once the TCP communication connections have been defined in a transfer definition, the Lakeview TCP server must be started on each of the systems identified by the transfer definition.
Note: Use the host name and port number (or port alias) defined in the transfer definition for the system on which you are running this command.
From a 5250 emulator, do the following on the system on which you want to start the TCP server:
1. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and press Enter.
2. The Utilities Menu appears. Select option 51 (Start TCP server) and press Enter.
3. The Start Lakeview TCP Server display appears. At the Host name or address prompt, specify the host name for the local system as defined in the transfer definition.
4. At the Port number or alias prompt, verify that the value shown is correct. If necessary, change the value.
Note: If you specify an alias, you must have an entry in the service table on this system that equates the alias to the port number.
5. Press Enter.
6. Verify that the Lakeview server job is running under the MIMIX subsystem on that system. You can use the Work with Active Jobs (WRKACTJOB) command to look for a job under the MIMIXSBS subsystem with a function of PGM-LVSERVER.
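The display in step 3 prompts the Start Lakeview TCP Server (STRSVR) command, so the server can also be started from a command line. A minimal sketch, assuming the product is installed in library MIMIX and the local host and port from the transfer definition are LONDON and 50410:

MIMIX/STRSVR HOST(LONDON) PORT(50410) /* host and port are assumptions; use your transfer definition values */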


Using autostart job entries to start the TCP server


To use TCP/IP communications, the MIMIX TCP/IP server must be started each time the MIMIX subsystem is started. This can become a time-consuming task that is easily forgotten. For these reasons, many users prefer to add an autostart job entry to start the Lakeview TCP server automatically with the MIMIXSBS subsystem. The autostart job entry uses a job description that contains the STRSVR command, which automatically starts the Lakeview TCP server when the MIMIXSBS subsystem is started. The STRSVR command is defined in the RQSDTA (request data) parameter of the job description.

Adding an autostart job entry


To configure an autostart job entry to start the Lakeview TCP server automatically with the MIMIXSBS subsystem, do the following:
Note: Perform this procedure on both of the systems defined as system 1 and system 2 in the transfer definition.
1. Type the command CRTDUPOBJ and press Enter.
2. The Create Duplicate Object (CRTDUPOBJ) display appears. Specify these values at the following prompts:
   a. At the From object prompt, specify MIMIXCMN.
   b. At the From library prompt, specify MIMIXQGPL.
   c. At the Object type prompt, specify *JOBD.
   d. At the To library prompt, specify MIMIXQGPL.
   e. At the New object prompt, specify a name for the new object. Lakeview Technology recommends that you use the port number for the system with which the server is associated, in the form PORTnnnnn where nnnnn is the port number. If you are using port aliases, specify the alias associated with the port number.
   f. Press Enter. The new object is created.
3. Type the command CHGJOBD and press F4 (Prompt).
4. The Change Job Description (CHGJOBD) prompt display appears. Specify the port number in the form PORTnnnnn or the port alias at the Job description prompt and MIMIXQGPL for the Library.
5. Press F10 (Additional parameters).
6. Page Down to the second group of parameters and specify the following:
   a. At the Request data or command prompt, specify the STRSVR command using the values you need in the following string:
'MIMIX/STRSVR HOST(local-cp-name) PORT(nnnnn) JOBD(MIMIXQGPL/yyyy)'

where yyyy is either the port number in the form PORTnnnnn or the port alias.


   b. Press Enter. The job description is changed.
7. Type the command ADDAJE and press Enter.
8. The Add Autostart Job Entry (ADDAJE) display appears. Specify the following values to configure the job description to start each time the MIMIXSBS subsystem is started:
   a. At the Subsystem description prompt, specify MIMIXSBS.
   b. At the Library prompt, specify MIMIXQGPL.
   c. At the Job name prompt, specify a name to describe the job being processed. Lakeview Technology suggests that you use the value you specified in Step 4.
   d. At the Job description prompt, specify the name of the job description you just changed in Step 4.
   e. At the Library prompt, specify MIMIXQGPL.
   f. Press Enter. The job description is added to the automatic start procedures within the MIMIXSBS subsystem. Each time the MIMIXSBS subsystem is started, this TCP server is also started.
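The same configuration can be performed entirely from a command line. This hedged sketch assumes port 50410, a local host name of LONDON, and a product library of MIMIX; substitute your own values:

CRTDUPOBJ OBJ(MIMIXCMN) FROMLIB(MIMIXQGPL) OBJTYPE(*JOBD) TOLIB(MIMIXQGPL) NEWOBJ(PORT50410)
CHGJOBD JOBD(MIMIXQGPL/PORT50410) RQSDTA('MIMIX/STRSVR HOST(LONDON) PORT(50410) JOBD(MIMIXQGPL/PORT50410)') /* host and port are assumptions */
ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(PORT50410) JOBD(MIMIXQGPL/PORT50410)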

Identifying the autostart job entry in the MIMIXSBS subsystem


Autostart job entries need to be reviewed occasionally for possible changes, such as when a configuration change has been made. The autostart job entry may need to be updated to call the new system name or port number. The first step is to identify the autostart job entry in the MIMIXSBS subsystem. This procedure enables you to display the appropriate autostart job entry's information to determine whether its STRSVR command needs to be updated. This command contains the system name and port number or port alias for the system, which may need to be changed.
To display the autostart job entry information, do the following:
1. Type the command DSPSBSD MIMIXQGPL/MIMIXSBS and press Enter. The Display Subsystem Description display appears.
2. Type 3 (Autostart job entries) and press Enter. The Display Autostart Job Entries display appears.
3. Identify the Job Description and Library of the autostart job entry. Typically the job description is named PORTnnnnn, where nnnnn is the port number. Press Enter.
4. Using the information identified in step 3, type the command DSPJOBD specifying that library and job description, and press Enter. The Display Job Description display appears.
5. Page down to view the Request data information and determine whether the STRSVR command needs to be updated. If updates are needed, perform the steps in Changing the job description for an autostart job entry on page 191.

Changing the job description for an autostart job entry


If a system name or port number has changed and an autostart job entry is used for the STRSVR command, the autostart job entry for the STRSVR command must be updated to call the new system name or port number. The RQSDTA (request data) parameter in the job description determines which program or command is run when the MIMIXSBS subsystem is started.
Use the following command to change the job description to call the new system definition name or port number used for the autostart job entry, which calls the STRSVR command when the MIMIXSBS subsystem is started:
CHGJOBD JOBD(MIMIXLIB/STRMXSVR) RQSDTA('MIMIXLIB/STRSVR HOST(system-name) PORT(nnnnn) JOBD(MIMIXQGPL/MIMIXCMN)')

where system-name is the host name for the system where the autostart job entry is defined in the MIMIX transfer definition, and nnnnn is either the port number or the port alias of the system where the autostart job entry is defined in the MIMIX transfer definition.


Verifying a communications link for system definitions


Do the following to verify that the communications link defined for each system definition is operational:
1. From the MIMIX Basic Main Menu, type an 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, type a 1 (Work with system definitions) and press Enter.
3. From the Work with System Definitions display, type an 11 (Verify communications link) next to the system definition you want and press Enter. You should see a message indicating the link has been verified.
Note: If the system manager is not active, this process will only verify that communications to the remote system is successful. You will also see a message in the job log indicating that "communications link failed after 1 request." If you are performing this procedure as directed by the manual configuration checklist before the system manager is active, this result is expected and indicates that the remote system could not return communications to the local system. Once you start the system managers as directed by the checklist, the configuration information needed for successful two-way communications is automatically sent to the remote system. The checklist will direct you to verify all communications paths again at the appropriate point in the configuration process.
4. Repeat this procedure for all system definitions. If the communications link defined for a system definition uses the SNA protocol, do not check the link from the local system.
Note: If your transfer definition uses the *TCP communications protocol, MIMIX uses the Verify Communications Link command to validate the information that has been specified for the Relational database (RDB) parameter. MIMIX also uses VFYCMNLNK to verify that the System 1 and System 2 relational database names exist and are available on each system.
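Option 11 prompts the Verify Communications Link (VFYCMNLNK) command, so the check can also be run from a command line. A hedged sketch, assuming a transfer definition named PRIMARY between hypothetical systems LONDON and CHICAGO (recall that the command fails if either system in the definition is *ANY):

VFYCMNLNK PROTOCOL(*TFRDFN) TFRDFN(PRIMARY LONDON CHICAGO) /* names are assumptions */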


Verifying the communications link for a data group


Before you synchronize data between systems, ensure that the communications link for the data group is active. This procedure verifies the primary transfer definition used by the data group. If your configuration requires multiple data groups, be sure to check communications for each data group definition.
Do the following:
1. From the Work with Data Group Definitions display, type an 11 (Verify communications link) next to the data group you want and press F4.
2. The Verify Communications Link display appears. Ensure that the values shown for the prompts are what you want.
3. To start the check, press Enter.
4. You should see the message "VFYCMNLNK command completed successfully."
If your data group definition specifies a secondary transfer definition, use the following procedure to check all communications links.

Verifying all communications links


The Verify Communications Link (VFYCMNLNK) command requires specific system names to verify communications between systems. When the command is called from option 11 on the Work with System Definitions display or option 11 on the Work with Data Groups display, MIMIX identifies the specific system names.
For transfer definitions using the TCP protocol, MIMIX uses the Verify Communications Link (VFYCMNLNK) command1 to validate the values specified for the Relational database (RDB) parameter. MIMIX also uses VFYCMNLNK to verify that the System 1 and System 2 relational database names exist and are available on each system. When the command is called from option 11 on the Work with Transfer Definitions display or when entered from a command line, you will receive an error message if the transfer definition specifies the value *ANY for either system 1 or system 2.
1. From the Work with Transfer Definitions display, type an 11 (Verify communications link) next to all transfer definitions and press Enter.
2. The Verify Communications Link display appears. If you are checking a transfer definition with the value of *ALL, you need to specify a value for the System 1 or System 2 prompt. Ensure that the values shown for the prompts are what you want and then press Enter. You will see the Verify Communications Link display for each transfer definition you selected.
3. You should see the message "VFYCMNLNK command completed successfully."

1. On installations running service pack SPC05 or higher.


Chapter 9

Configuring journal definitions


By creating a journal definition, you identify to MIMIX a journal environment that can be used in the replication process. MIMIX uses the journal definition to manage the journaling environment, including journal receiver management.
A journal definition does not automatically build the underlying journal environment that it defines. If the journal environment does not exist, it must be built. This can be done after the journal definition is created. Configuration checklists indicate when to build the journal environment.
The topics in this chapter include:
Journal definitions created by other processes on page 200 describes the security audit journal (QAUDJRN) and other journal definitions that are automatically created by MIMIX.
Tips for journal definition parameters on page 201 provides tips for using the more common options for journal definitions.
Journal definition considerations on page 205 provides things to consider when creating journal definitions for remote journaling.
Journal receiver size for replicating large object data on page 213 provides procedures to verify that a journal receiver is large enough to accommodate large IFS stream files and files containing LOB data, and if necessary, to change the receiver size options.
Creating a journal definition on page 215 provides the steps to follow for creating a journal definition.
Changing a journal definition on page 217 provides the steps to follow for changing a journal definition.
Building the journaling environment on page 219 describes the journaling environment and provides the steps to follow for building it.
Changing the remote journal environment on page 222 provides steps to follow when changing an existing remote journal configuration. The procedure is appropriate for changing a journal receiver library for the target journal in a remote journaling environment or for any other changes that affect the target journal.
Adding a remote journal link on page 225 describes how to create a MIMIX RJ link, which will in turn create a target journal definition with appropriate values to support remote journaling. In most configurations, the RJ link is automatically created for you when you follow the steps of the configuration checklists.
Changing a remote journal link on page 227 describes how to change an existing RJ link.
Temporarily changing from RJ to MIMIX processing on page 228 describes how to change a data group configured for remote journaling to temporarily use MIMIX send processing.
Changing from remote journaling to MIMIX processing on page 229 describes how to change a data group that uses remote journaling so that it uses MIMIX send processing. Remote journaling is preferred.
Removing a remote journaling environment on page 231 describes how to remove a remote journaling environment that you no longer need.


Journal definitions created by other processes


When you create system definitions, MIMIX automatically creates a journal definition for the security audit journal (QAUDJRN) on that system. The QAUDJRN is used only by MIMIX system journal replication processes. If you do not already have a journaling environment for the security audit journal, it will be created when the first data group that replicates from the system journal is started. When you create a data group definition, MIMIX automatically creates a journal definition if one does not already exist. Any journal definitions that are created in this manner will be named with the value specified in the data group definition. In an environment that uses MIMIX Remote Journal support, the process of creating a data group definition creates a remote journal link which in turn creates the journal definition for the target journal. The target journal definition is created using values appropriate for remote journaling. Any journal definitions created by another process can be changed if necessary.


Tips for journal definition parameters


This topic provides tips for using the more common options for journal definitions. Context-sensitive help is available online for all options on the journal definition commands.

Journal definition (JRNDFN) This parameter is a two-part name that identifies a journaling environment on a system. The first part of the name identifies the journal definition. When a journal definition for the security audit journal (system journal) is automatically created as a result of creating a system definition, the first part of the name is QAUDJRN. The second part of the name identifies a system definition which represents the system on which you want the journal to reside.
Note: In the first part of the name, the first character must be either A - Z, $, #, or @. The remaining characters can be alphanumeric and can contain a $, #, @, a period (.), or an underscore (_). Journal definition names cannot be UPSMON or begin with the characters MM.
If the journal definition is configured by MIMIX for use with MIMIX RJ support, the name is the first eight characters of the name of the source journal definition followed by the characters @R. If a journal definition name is already in use, the name may instead include @S, @T, @U, @V, or @W. There are additional specific naming conventions for journal definitions that are used with remote journaling.
MIMIX uses the first six characters of the journal definition name to generate the journal receiver prefix. MIMIX restricts the last character of the prefix from being numeric. If the last character of a prefix resulting from the journal definition name were numeric, it could become part of the receiver number and no longer match the journal name.

Journal (JRN) This parameter specifies the qualified name of a journal to which changes to files or objects to be replicated are journaled. For the journal name, the default value *JRNDFN uses the name of the journal definition for the name of the journal. For the journal library, the default value *DFT allows MIMIX to determine the library name based on the ASP in which the journal library is allocated, as specified in the Journal library ASP parameter. If that parameter specifies *ASPDEV, MIMIX uses #MXJRNIASP for the default journal library name; otherwise, the default library name is #MXJRN.

Journal library ASP (JRNLIBASP) This parameter specifies the auxiliary storage pool (ASP) from which the system allocates storage for the journal library. You can use the default value *CRTDFT or you can specify the number of an ASP in the range 1 through 32. The value *CRTDFT indicates that the command default value for the i5/OS Create Library (CRTLIB) command is used to determine the auxiliary storage pool (ASP) from which the system allocates storage for the library. For libraries that are created in a user ASP, all objects in the library must be in the same ASP as the library.


Journal receiver prefix (JRNRCVPFX) This parameter specifies the prefix to be used in the name of journal receivers associated with the journal used in the replication process, and the library in which the journal receivers are located. The prefix must be unique to the journal definition and cannot end in a numeric character. The default value *GEN for the name prefix indicates that MIMIX will generate a unique prefix, which usually is the first six characters of the journal definition name with any trailing numeric characters removed. If that prefix is already used in another journal definition, a unique six-character prefix name is derived from the definition name. If the journal definition will be used in a configuration which broadcasts data to multiple systems, there are additional considerations. See Journal definition considerations on page 205.

The value *DFT for the journal receiver library allows MIMIX to determine the library name based on the ASP in which the journal receiver is allocated, as specified in the Journal receiver library ASP parameter. If that parameter specifies *ASPDEV, MIMIX uses #MXJRNIASP for the default journal receiver library name; otherwise, the default library name is #MXJRN. You can specify a different name, or specify the value *JRNLIB to use the same library that is used for the associated journal.

Journal receiver library ASP (RCVLIBASP) This parameter specifies the auxiliary storage pool (ASP) from which the system allocates storage for the journal receiver library. You can use the default value *CRTDFT or specify the number of an ASP in the range 1 through 32. The value *CRTDFT indicates that the command default value for the i5/OS Create Library (CRTLIB) command is used to determine the ASP from which the system allocates storage for the library. For libraries that are created in a user ASP, all objects in the library must be in the same ASP as the library.

Target journal state (TGTSTATE) This parameter specifies the requested status of the target journal, and can be used with active journaling support or journal standby state. Use the default value *ACTIVE to set the target journal state to active when the data group associated with the journal definition is journaling on the target system (JRNTGT(*YES)). Use the value *STANDBY to journal objects on the target system while preventing most journal entries from being deposited into the target journal. For more information about journal standby state, see Configuring for high availability journal performance enhancements on page 341.

Journal caching (JRNCACHE) This parameter specifies whether the system should cache journal entries in main storage before writing them to disk. Use the recommended default value *BOTH to perform journal caching on both the source and the target systems. You can also specify the values *SRC, *TGT, or *NONE.

Receiver change management (CHGMGT, THRESHOLD, TIME, RESETTHLD) Four parameters control how journal receivers associated with the replication process are changed. The Receiver change management (CHGMGT) parameter controls whether MIMIX performs change management operations for the journal receivers used in the replication process. The recommended value is the shipped default of *TIMESIZE, where MIMIX changes journal receivers by both threshold size and time of day.


The following parameters specify conditions that must be met before change management can occur.

Receiver threshold size (MB) (THRESHOLD) You can specify the size, in megabytes, at which the journal receiver is changed. The default value is 6600 MB. This value is used when MIMIX or the system changes the receivers. If you decrease the receiver threshold size, you must manually change your journal receiver to reflect the change; a threshold size changed in the journal definition is effective with the next receiver change.

Time of day to change receiver (TIME) You can specify the time of day at which MIMIX changes the journal receiver. The time is based on a 24-hour clock and must be specified in HHMMSS format.

Reset sequence threshold (RESETTHLD) You can specify the sequence number (in millions) at which to reset the receiver sequence number. When the threshold is reached, the next receiver change resets the sequence number to 1.

For information about how change management occurs in a remote journal environment and about other change management choices, see Journal receiver management on page 37.

Receiver delete management (DLTMGT, KEEPUNSAV, KEEPRCVCNT, KEEPJRNRCV) Four parameters control how MIMIX handles deleting the journal receivers associated with the replication process. The Receiver delete management (DLTMGT) parameter specifies whether MIMIX performs delete management for the journal receivers. By default, MIMIX performs the delete management operations. MIMIX operations can be adversely affected if you allow the system or another process to handle delete management. For example, if another process deletes a journal receiver before MIMIX is finished with it, replication can be adversely affected. All of the requirements that you specify in the following parameters must be met before MIMIX deletes a journal receiver:

Keep unsaved journal receivers (KEEPUNSAV) You can specify whether MIMIX retains unsaved journal receivers. Retaining unsaved receivers allows you to back out (roll back) changes in the event that you need to recover from a disaster. The default value *YES causes MIMIX to keep unsaved journal receivers until they are saved.

Keep journal receiver count (KEEPRCVCNT) You can specify the number of detached journal receivers to retain. For example, if you specify 2 and there are 10 journal receivers, including the attached receiver (number 10), MIMIX retains two detached receivers (8 and 9) and deletes receivers 1 through 7.

Keep journal receivers (days) (KEEPJRNRCV) You can specify the number of days to retain detached journal receivers. For example, if you specify to keep journal receivers for 7 days, a receiver that is eligible for deletion is deleted after 7 days have passed from the time of its creation. The exact time of the deletion may vary; it may occur within a few hours after the 7 days have passed.

For more information, see Journal receiver management on page 37.
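As an illustration, this hypothetical command sets the change and delete management values discussed above on an existing journal definition; the RESETTHLD value is omitted here and the remaining values are the documented defaults or examples from this topic.

   CHGJRNDFN JRNDFN(PAYABLES CHICAGO) +
             CHGMGT(*TIMESIZE)        + /* change by size and time of day */
             THRESHOLD(6600)          + /* change at 6600 MB              */
             TIME(230000)             + /* also change at 23:00:00        */
             DLTMGT(*YES)             + /* MIMIX manages receiver deletes */
             KEEPUNSAV(*YES)          + /* keep receivers until saved     */
             KEEPRCVCNT(2)            + /* keep two detached receivers    */
             KEEPJRNRCV(7)              /* keep receivers seven days      */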


Journal receiver ASP (JRNRCVASP) This parameter specifies the auxiliary storage pool (ASP) from which the system allocates storage for the journal receivers. The default value *LIBASP indicates that the storage space for the journal receivers is allocated from the same ASP that is used for the journal receiver library.

Threshold message queue (MSGQ) This parameter specifies the qualified name of the threshold message queue to which the system sends journal-related messages such as threshold messages. The default value *JRNDFN for the queue name indicates that the message queue uses the same name as the journal definition. The value *JRNLIB for the library name indicates that the message queue uses the library for the associated journal.

Exit program (EXITPGM) This parameter allows you to specify the qualified name of an exit program to use when journal receiver management is performed by MIMIX. The exit program is called when a journal receiver is changed or deleted by the MIMIX journal manager. For example, you might want to use an exit program to save journal receivers as soon as MIMIX finishes with them so that they can be removed from the system immediately.

Minimize entry specific data (MINENTDTA) This parameter specifies which object types allow journal entries to have minimized entry-specific data. For additional information about improving journaling performance with this capability, see Minimized journal entry data on page 339.
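For instance, an exit program that saves each receiver as soon as MIMIX finishes with it might be registered as in this sketch; the library and program names are hypothetical.

   CHGJRNDFN JRNDFN(PAYABLES CHICAGO) +
             MSGQ(*JRNDFN *JRNLIB)    + /* default threshold message queue */
             EXITPGM(MYLIB/SAVRCV)      /* hypothetical exit program       */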

Journal definition considerations


Consider the following as you create journal definitions for remote journaling:

- The source journal definition identifies the local journal and the system on which the local journal exists. Similarly, the target journal definition identifies the remote journal and the system on which the remote journal exists. Therefore, the source journal definition identifies the source system of the remote journal process, and the target journal definition identifies the target system of the remote journal process.

- You can use an existing journal definition as the source journal definition to identify the local journal. However, using an existing journal definition as the target journal definition is not recommended. The existing definition is likely to be used for journaling and therefore is not appropriate as the target journal definition for a remote journal link.

- MIMIX recognizes the receiver change management parameters (CHGMGT, THRESHOLD, TIME, RESETTHLD) specified in the source journal definition and ignores those specified in the target journal definition. When a new receiver is attached to the local journal, a new receiver with the same name is automatically attached to the remote journal. The receiver prefix specified in the target journal definition is ignored.

- Each remote journal link defines a local-remote journal pair that functions in only one direction. Journal entries flow from the local journal to the remote journal. The direction of a defined pair of journals cannot be switched. If you want to use the RJ process in both directions for a switchable data group, you need to create journal definitions for two remote journal links (four journal definitions). For more information, see Example journal definitions for a switchable data group on page 207.

- MIMIX tries to create *TYPE2 journals when possible and *TYPE1 journals when a *TYPE2 journal cannot be created. MIMIX creates the environment that is appropriate for the type of journal created. Refer to the IBM book Backup and Recovery for information about save and restore considerations for *TYPE2 and *TYPE1 journals in a remote journaling environment.

- After the journal environment is built for a target journal definition, MIMIX cannot change the value of the target journal definition's Journal receiver prefix (JRNRCVPFX) or Threshold message queue (MSGQ), and several other values. To change these values, see the procedure in the IBM topic Library Redirection with Remote Journals in the IBM eServer iSeries Information Center.

- If you are configuring MIMIX for a scenario in which you have one or more target systems, there are additional considerations for the names of journal receivers. Each source journal definition must specify a unique value for the Journal receiver prefix (JRNRCVPFX) parameter. MIMIX ensures that the same prefix is not used more than once on the same system, but cannot determine whether the prefix is used on a target journal while it is being configured. If the prefix defined by the source journal definition is reused by target journals that reside in the same library and ASP, attempts to start the remote journals will fail with message CPF699A (Unexpected journal receiver found). When you create a target journal definition yourself instead of having it generated by the Add Remote Journal Link (ADDRJLNK) command, use the default value *GEN for the JRNRCVPFX prefix name in the target journal definition. The receiver names for source and target journals will be the same on the systems but will not be the same in the journal definitions. In the target journal, the prefix will be the same as that specified in the source journal definition.

Naming convention for remote journaling environments with 2 systems


If you allow MIMIX to generate the target journal definition when you create a remote journal link, MIMIX implements the following naming conventions for the target journal definition and for the objects in its associated journaling environment. If you specify your own target journal definition, follow these same naming conventions to reduce the potential for confusion and errors.

The two-part name of the target journal definition is generated as follows:

- The Name is the first eight characters from the name of the source journal definition followed by the characters @R when the journal definition is created for MIMIX RJ support. If a journal definition name is already in use, the name may instead include @S, @T, @U, @V, or @W. Note: Journal definition names cannot be UPSMON or begin with the characters MM.

- The System is the value entered in the target journal definition system field.

For example, if the source journal definition name is MYJRN and you specified TGTJRNDFN(*GEN CHICAGO), the target journal definition will be named MYJRN@R CHICAGO. The target journal definition will have the following characteristics and associated new objects:

- The Journal name will be the same as the source journal name.
- The Journal library will use the first eight characters of the name of the source journal library followed by the characters @R.
- The Journal library ASP will be copied from the source journal definition.
- The Journal receiver prefix will be copied from the source journal definition.
- The Journal receiver library will use the first eight characters of the name of the source journal receiver library followed by the characters @R.
- The Message queue library will use the first eight characters of the name of the source message queue library followed by the characters @R.
- The value for the Receiver change management (CHGMGT) parameter will be *NONE.


Example journal definitions for a switchable data group


To support a switchable data group in a remote journaling environment, you need to have four journal definitions configured: two for the RJ link used for normal production-to-backup operations, and two for the RJ link used for replication in the opposite direction.

In this example, a switchable data group named PAYABLES is created between systems CHICAGO and NEWYORK. System 1 (CHICAGO) is the data source. The data group definition specifies *YES for Use remote journal link. Command defaults create the data group using a generated short data group name and using the data group name for the system 1 and system 2 journal definitions.

To create the RJ link and associated journal definitions for normal operations, option 10 (Add RJ link) on the Work with Journal Definitions display is used on an existing journal definition named PAYABLES CHICAGO (the first entry listed in Figure 13). This is the source journal definition for normal operations. The process of adding the link creates the target journal definition PAYABLES@R NEWYORK (the last entry listed in Figure 13).

To create the RJ link and associated definitions for replication in the opposite direction, a new source journal definition, PAYABLES NEWYORK, is created (the second entry listed in Figure 13). That definition is then used to create a second RJ link, which in turn generates the target journal definition PAYABLES@R CHICAGO (the third entry listed in Figure 13).
Figure 13. Example journal definitions for a switchable data group.
                      Work with Journal Definitions                  CHICAGO

 Type options, press Enter.
   1=Create   2=Change   3=Copy   4=Delete   5=Display   6=Print   7=Rename
   10=Add RJ link   12=Work with RJ links   14=Build
   17=Work with jrn attributes   24=Delete jrn environment

      ---- Definition ----   ------ Journal ------   - Management -    RJ
 Opt  Name        System     Name      Library       Change   Delete   Link
      PAYABLES    CHICAGO    PAYABLES  MIMIXJRN      *SYSTEM  *YES     *SRC
      PAYABLES    NEWYORK    PAYABLES  MIMIXJRN      *SYSTEM  *YES     *SRC
      PAYABLES@R  CHICAGO    PAYABLES  MIMIXJRN@R    *NONE    *YES     *TGT
      PAYABLES@R  NEWYORK    PAYABLES  MIMIXJRN@R    *NONE    *YES     *TGT
                                                                       Bottom
 F3=Exit   F4=Prompt   F5=Refresh   F6=Create   F12=Cancel   F18=Subset
 F21=Print list   F22=Work with RJ links


Identifying the correct journal definition on the Work with Journal Definitions display can be confusing. Fortunately, the Work with RJ Links display (Figure 14) shows the association between journal definitions much more clearly.
Figure 14. Example of RJ links for a switchable data group.
                            Work with RJ Links
                                                        System:   CHICAGO
 Type options, press Enter.
   1=Add   2=Change   4=Remove   5=Display   6=Print   9=Start   10=End
   14=Build   15=Remove RJ connection   17=Work with jrn attributes
   24=Delete target jrn environment

      ---Source Jrn Def---   ---Target Jrn Def---
 Opt  Name       System      Name        System     Priority  Dlvry   State
      PAYABLES   CHICAGO     PAYABLES@R  NEWYORK    *SYSDFT   *ASYNC  *INACTIVE
      PAYABLES   NEWYORK     PAYABLES@R  CHICAGO    *SYSDFT   *ASYNC  *INACTIVE
                                                                       Bottom
 Parameters or command
 ===>
 F3=Exit   F4=Prompt   F5=Refresh   F6=Add   F9=Retrieve   F11=View 2
 F12=Cancel   F13=Repeat   F16=Jrn Definitions   F18=Subset   F21=Print list

Naming convention for multimanagement environments


The i5/OS remote journal function requires unique names for the local journal receiver and the remote journal receiver. In a MIMIX environment that uses multimanagement functions, more than one system serves as the management system for MIMIX operations. (Note: A MIMIX cluster access code is required for multimanagement functions.) In a multimanagement environment, it is possible that each node that is a management system is also both a source and a target for replication activity. The following manually implemented naming convention ensures that journal receivers have unique names.

Library name-mapping - In target journal definitions, specify journal library and receiver library names that include a two-character identifier, nn, to represent the node of the associated source (local journal). Place this identifier before the remote journal indicator @R at the end of the name, like this: nn@R. Also include this identifier at the end of the target journal definition name. This convention allows for the use of the same local journal name for all data groups and places all journals and receivers from the same source in the same library.

To ensure that journal receivers in a multimanagement environment have unique names, the following is strongly recommended:

- Limit the data group name to six characters. This simplifies keeping an association between the data group name and the names of associated journal definitions by allowing space for the source node identifier within those names.


- Manually create journal definitions (CRTJRNDFN command) using the library name-mapping convention; a command sketch follows this list. Journal definitions created when a data group is created may not have unique names and will not create all the necessary target journal definitions.

- Once the appropriately named journal definitions are created for the source and target systems, manually create the remote journal links between them (ADDRJLNK command).
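For example, to prepare for replication from node SYS01 to nodes SYS02 and SYS03 in the three-node example that follows, the commands might look like this sketch. The definition names follow the nn@R convention described above; the JRNDFN keyword used on ADDRJLNK to identify the source journal definition is an assumption, and library-name parameters are omitted for brevity.

   /* Local journal definition on source node SYS01                   */
   CRTJRNDFN JRNDFN(ABC SYS01)

   /* Target journal definitions; 01 identifies source node SYS01 and */
   /* also appears in the journal and receiver library names          */
   CRTJRNDFN JRNDFN(ABC01@R SYS02)
   CRTJRNDFN JRNDFN(ABC01@R SYS03)

   /* Link the local journal to each remote journal                   */
   ADDRJLNK JRNDFN(ABC SYS01) TGTJRNDFN(ABC01@R SYS02)
   ADDRJLNK JRNDFN(ABC SYS01) TGTJRNDFN(ABC01@R SYS03)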

Example journal definitions for three management nodes


The following figures illustrate the library name-mapping convention for journal definitions in a multimanagement environment with three nodes. In this example, all three nodes are designated as management systems. The data group name is ABC.

When implementing the naming convention, it is helpful to consider one source node at a time and create all the journal definitions necessary for replication from that source. This technique is illustrated in the example.

Library-mapping example: In Figure 15, a three-node environment is shown in three separate graphics. Each graphic identifies one node as a replication source, with arrows pointing to the possible target nodes, and lists the journal definitions needed to replicate from that source. In each graphic, library name-mapping is evident in the names shown for the target journal definitions and their journal and receiver libraries. For example, when SYS01 is the source, journal definition ABC SYS01 identifies the local journal on SYS01. The source identifier 01 appears in target journal definitions ABC01@R SYS02 and ABC01@R SYS03 and in the library names defined within each. Figure 15 also includes a list of all the journal definitions associated with all nodes from this example as they would appear on the Work with Journal Definitions display.


Figure 15. Library-mapped journal definitions in a three-node environment. All nodes are management systems.


Figure 16 shows the RJ links needed for this example.


Figure 16. Library-mapped names shown in RJ links for a three-node environment.


Journal receiver size for replicating large object data


For potentially large IFS stream files and files containing LOB data, it is important that your journal receiver is large enough to accommodate the data; you may need to change your journal receiver size options accordingly. For data groups that can be switched, the journal receivers on both the source and target systems must be large enough to accommodate the data.

Verifying journal receiver size options


To display the current journal receiver size options for journals used by MIMIX, do the following from the system where the journal definition is located:

1. Enter the command installation-library/WRKJRNDFN.

2. Next to the journal definition for the system you are on, type a 17 (Work with journal attributes).

3. View the Receiver size options field to see how the journal is configured. The value should support large journal entries, such as *MAXOPT2.

Changing journal receiver size options


To change the journal receiver size options, do the following:

1. From a command line, type CHGJRN (Change Journal) and press F4 to prompt.

2. At the Journal prompt, enter the journal and library names for the journal you wish to change.

3. At the Receiver size option prompt, specify a value that supports large journal entries, such as *MAXOPT2. Make sure the receiver size options on the other systems in your environment are compatible.

Note: Do not specify *MAXOPT3.
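In command form, these steps might look like the following sketch. The journal and library names are hypothetical; *GEN attaches a new receiver so that the new size option takes effect.

   CHGJRN JRN(#MXJRN/PAYABLES) +
          JRNRCV(*GEN)         + /* attach a new receiver          */
          RCVSIZOPT(*MAXOPT2)    /* supports large journal entries */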


Creating a journal definition


Do the following to create a journal definition:

1. Access the Work with Journal Definitions display. From the MIMIX Configuration Menu, select option 3 (Work with journal definitions) and press Enter.

2. The Work with Journal Definitions display appears. Type 1 (Create) next to the blank line at the top of the list area and press Enter.

3. The Create Journal Definition display appears. At the Journal definition prompts, specify a two-part name. Note: Journal definition names cannot be UPSMON or begin with the characters MM.

4. Verify that the following prompts contain the values that you want. If you have not journaled before, the default values are appropriate. If you need to identify an existing journaling environment to MIMIX, specify the necessary information.
   - Journal
   - Library
   - Journal library ASP
   - Journal receiver prefix
   - Library
   - Journal receiver library ASP

5. At the Target journal state prompt, specify the requested status of the target journal. The default value is *ACTIVE. This value can be used with active journaling support or journal standby state.

6. At the Journal caching prompt, specify whether the system should cache journal entries in main storage before writing them to disk. The recommended default value is *BOTH.

7. Set the values you need to manage changing journal receivers, as follows:
   a. At the Receiver change management prompt, specify the value you want. Lakeview recommends that you use the default values. For more information about valid combinations of values, press F1 (Help).
   b. Press Enter.
   c. One or more additional prompts related to receiver change management appear on the display. Verify that the values shown are what you want and, if necessary, change them:
      - Receiver threshold size (MB)
      - Time of day to change receiver
      - Reset sequence threshold
   d. Press Enter.

8. Set the values you need to manage deleting journal receivers, as follows:


   a. Lakeview recommends that you accept the default value *YES for the Receiver delete management prompt to allow MIMIX to perform delete management.
   b. Press Enter.
   c. One or more additional prompts related to receiver delete management appear on the display. If necessary, change the values:
      - Keep unsaved journal receivers
      - Keep journal receiver count
      - Keep journal receivers (days)

9. At the Description prompt, type a brief text description of the journal definition.

10. This step is optional. If you want to access additional parameters that are considered advanced functions, press F10 (Additional parameters). Make any changes you need to the additional prompts that appear on the display.

11. To create the journal definition, press Enter.
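The same definition could be created in a single command, as in this sketch. The names are hypothetical, the values shown are the defaults recommended in this procedure, and the TEXT keyword for the Description prompt is an assumption.

   CRTJRNDFN JRNDFN(PAYABLES CHICAGO) +
             TGTSTATE(*ACTIVE)        +
             JRNCACHE(*BOTH)          +
             CHGMGT(*TIMESIZE)        +
             DLTMGT(*YES)             +
             TEXT('Journal definition for PAYABLES')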


Changing a journal definition


To change a journal definition, do the following:

1. Access the Work with Journal Definitions display according to your configuration needs:
   - In a clustering environment, from the MIMIX Cluster Menu select option 20 (Work with system definitions) and press Enter. When the Work with System Definitions display appears, type 12 (Journal Definitions) next to the system name you want and press Enter.
   - In a standard MIMIX environment, from the MIMIX Configuration Menu select option 3 (Work with journal definitions) and press Enter.

2. The Work with Journal Definitions display appears. Type 2 (Change) next to the definition you want and press Enter.

3. The Change Journal Definition (CHGJRNDFN) display appears. Press Enter twice to see all prompts for the display.

4. Make any changes you need to the prompts. Press F1 (Help) for more information about the values for each parameter.

5. If you need to access advanced functions, press F10 (Additional parameters). When the additional parameters appear on the display, make the changes you need.

6. To accept the changes, press Enter.

Note: Changes to the Receiver threshold size (MB) (THRESHOLD) are effective with the next receiver change. Before a change to any other parameter is effective, you must rebuild the journal environment. Rebuilding the journal environment ensures that it matches the journal definition and prevents problems starting the data group.
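For example, this hypothetical command raises the receiver threshold; per the note above, the new value takes effect with the next receiver change, while changes to most other parameters require rebuilding the journal environment.

   CHGJRNDFN JRNDFN(PAYABLES CHICAGO) THRESHOLD(9000)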


Building the journaling environment


Before replication for a data group can occur, the journal environment for all journal definitions used by that data group must be created on each system. A journaling environment includes the following objects: the library, journal, journal receiver, and threshold message queue on the system specified in the journal definition.

The Build Journal Environment (BLDJRNENV) command is used to build the journal environment objects for a journal definition. When the BLDJRNENV command is run, if the objects do not exist, they are created based on what is specified in the journal definition. If the journal exists, the Source for values (JRNVAL) parameter of the BLDJRNENV command determines the source for the values of these objects. The journal receiver prefix and library, message queue and library, and threshold parameters are updated from the source specified in the JRNVAL parameter. Specifying *JRNENV for the JRNVAL parameter changes the values of the objects in the journal definition to match the values in the existing journal environment objects. Specifying *JRNDFN for the JRNVAL parameter changes the values of the journal environment objects to match the values of the objects in the journal definition. In a remote journal environment, the values specified in the journal definition (*JRNDFN) are only applicable to the source journal.

If the data group definition specifies to journal on the target system, the journal environment must be built on each system that will be a target system for replication of that data group. If you do not build either source or target journal environments, MIMIX automatically builds the journal environments for you the first time the data group starts.

Note: When building a journal environment, ensure that the journal receiver prefix in the specified library is not already used. If it is already used, you must change it to an unused value.

For switchable data groups that are not specified to journal on the target system, it is recommended to build the source journaling environments for both directions of replication so that the environments exist for data group replication after switching.

All previous steps in your configuration checklist must be complete before you use this procedure. To build the journaling environment, do the following:

Note: If you are journaling on the target system, perform this procedure for both the source and target systems.

1. From the MIMIX Main Menu, select 11 (Configuration menu) and press Enter.

2. From the MIMIX Configuration Menu, select one of the following and press Enter:
   a. Select 8 (Work with remote journal links) to build the journaling environments for remote journaling.
   b. Select 3 (Work with journal definitions) to build all other journaling environments.

3. From the Work with display, type 14 (Build) next to the journal definition you want to build and press Enter. Option 14 calls the Build Journal Environment (BLDJRNENV) command. For environments using remote journaling, the command is called twice (first for the source journal definition and then for the target journal definition). A status message is issued indicating that the journal environment was created for each system.

4. If you plan to journal access paths, you need to change the value of the receiver size options. To do this, do the following:
   a. Type the command CHGJRN and press F4 (Prompt).
   b. For the JRN parameter, specify the name of the journal from the journal definition.
   c. Specify *GEN for the JRNRCV parameter.
   d. Specify *NONE for the RCVSIZOPT parameter.
   e. Press Enter.
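Option 14 corresponds to a command along these lines; the definition name is hypothetical, and JRNVAL(*JRNDFN) makes the journal environment objects match the journal definition, as described above.

   BLDJRNENV JRNDFN(PAYABLES CHICAGO) JRNVAL(*JRNDFN)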


Changing the remote journal environment


Use the following checklist to guide you through the process of changing an existing remote journal configuration. For example, this procedure is appropriate for changing a journal receiver library for the target journal in a remote journaling (RJ) environment, or for any other changes that affect the target journal. These steps can be used for synchronous or asynchronous remote journals.

Important! Changing the RJ environment must be done in the correct sequence. Failure to follow the proper sequence can introduce errors in replication and journal management.

Perform these tasks from the MIMIX management system unless these instructions indicate otherwise.

1. Verify that no other data groups use the RJ link, using topic Identifying data groups that use an RJ link on page 310.

2. Use topic Ending a data group in a controlled manner in the Using MIMIX book to prepare for and perform a controlled end of the data group and end the RJ link. Specify the following on the ENDDG command:
   - *ALL for the Process prompt
   - *CNTRLD for the End process prompt
   - *YES for the End remote journaling prompt
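A command-form sketch of step 2 follows. The data group name is hypothetical; PRC is the documented keyword for the Process prompt, while ENDOPT and ENDRJLNK are assumed keyword names for the End process and End remote journaling prompts.

   ENDDG DGDFN(PAYABLES CHICAGO NEWYORK) +
         PRC(*ALL)                       +
         ENDOPT(*CNTRLD)                 + /* assumed keyword */
         ENDRJLNK(*YES)                    /* assumed keyword */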

3. Verify that the remote journal link is not in use on both systems. Use topic Displaying status of a remote journal link in the Using MIMIX book. The remote journal link should have a state value of *INACTIVE before you continue.

4. Remove the connection to the remote journal as follows:
   a. Access the journal definitions for the data group whose environment you want to change. From the Work with Data Groups display, type a 45 (Journal definitions) next to the data group that you want and press Enter.
   b. Type a 12 (Work with RJ links) next to either journal definition you want and press Enter. You can select either the source or target journal definition. Note: The target journal definition will end with @R.
   c. From the Work with RJ Links display, choose the link based on the name in the Target Jrn Def column. Type a 15 (Remove RJ connection) next to the link with the target journal definition you want and press Enter.
   d. A confirmation display appears. To continue removing the connections for the selected links, press Enter.

5. From the Work with RJ Links display, do the following to delete the target system objects associated with the RJ link. Note: The target journal definition will end with @R.
   a. Type a 24 (Delete target jrn environment) next to the link that you want and press Enter.


   b. A confirmation display appears. To continue deleting the journal, its associated message queue, and the journal receiver, press Enter.

6. Make the changes you need for the target journal. For example, to change the target (remote) journal definition to a new receiver library, do the following:
   a. Press F12 to return to the Work with Journal Definitions display.
   b. Type option 2 (Change) next to the journal definition for the target system you want and press Enter.

7. From the Work with Journal Definitions display, type a 14 (Build) next to the target journal definition and press Enter. Note: The target journal definition will end with @R.

8. Return to the Work with Data Groups display. Then do the following:
   a. Type an 8 (Display status) next to the data group you want and press Enter.
   b. Locate the name of the receiver in the Last Read field for the Database process.

9. Do the following to start the RJ link:
   a. From the Work with Data Groups display, type a 44 (RJ links) next to the data group you want and press Enter.
   b. Locate the link you want based on the name in the Target Jrn Def column. Type a 9 (Start) next to the link with the target journal definition and press F4 (Prompt).
   c. The Start Remote Journal Link (STRRJLNK) display appears. Specify the receiver name from Step 8b as the value for the Starting journal receiver (STRRCV) prompt and press Enter.

10. Start the data group using default values. Refer to topic Starting selected data group processes in the Using MIMIX book.


Adding a remote journal link


This procedure requires that a source journal definition exists. The process of creating an RJ link will create the target journal definition with appropriate values for remote journaling. Before you create the RJ link, you should be familiar with Journal definition considerations on page 205.

To create a link between journal definitions, do the following:

1. From the MIMIX Configuration Menu, select option 3 (Work with journal definitions) and press Enter.

2. The Work with Journal Definitions display appears. Type a 10 (Add RJ link) next to the journal definition you want and press Enter.

3. The Add Remote Journal Link (ADDRJLNK) display appears. The journal definition you selected in the previous step appears in the prompts for the Source journal definition. Verify that this is the definition you want as the source for RJ processing.

4. At the Target journal definition prompts, specify *GEN as the Name and specify the value you want for System. Note: If you specify the name of a journal definition, the definition must exist and you are responsible for ensuring that its values comply with the recommended values. Refer to the related topic on considerations for creating journal definitions for remote journaling for more information.

5. Verify that the values for the following prompts are what you want. If necessary, change the values:
   - Delivery
   - Sending task priority
   - Primary transfer definition
   - Secondary transfer definition

   If you are using an independent ASP in this configuration, you also need to identify the auxiliary storage pools (ASPs) from which the journal and journal receiver used by the remote journal are allocated. Verify and change the values for Journal library ASP, Journal library ASP device, Journal receiver library ASP, and Journal receiver lib ASP dev as needed.

6. At the Description prompt, type a text description of the link, enclosed in apostrophes.

7. To create the link between journal definitions, press Enter.
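In command form, this procedure is roughly equivalent to the following sketch. TGTJRNDFN(*GEN ...) is shown earlier in this chapter; the keyword identifying the source journal definition and the transfer definition and description keywords are assumptions based on the prompt text.

   ADDRJLNK JRNDFN(PAYABLES CHICAGO)  + /* source jrn dfn (assumed kwd) */
            TGTJRNDFN(*GEN NEWYORK)   + /* generates PAYABLES@R NEWYORK */
            PRITFRDFN(PRIMARY)        + /* assumed keyword              */
            TEXT('RJ link for PAYABLES')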


Changing a remote journal link


Changes to the delivery and sending task priority take effect only after the remote journal link has been ended and restarted. To change characteristics of the link between source and target journal definitions, do the following:

1. Before you change a remote journal link, end activity for the link. The Using MIMIX book describes how to end only the RJ link. Note: If you plan to change the primary transfer definition or secondary transfer definition to a definition that uses a different RDB directory entry, you also need to remove the existing connection between objects. Use topic Removing a remote journaling environment on page 231 before changing the remote journal link.

2. From the Work with RJ Links display, type a 2 (Change) next to the entry you want and press Enter.

3. The Change Remote Journal Link (CHGRJLNK) display appears. Specify the values you want for the following prompts:
   - Delivery
   - Sending task priority
   - Primary transfer definition
   - Secondary transfer definition
   - Description

4. When you are ready to accept the changes, press Enter.

5. To make the changes effective, do the following:
   a. If you removed the RJ connection in Step 1, you need to use topic Building the journaling environment on page 219.
   b. Start the data group which uses the RJ link.


Temporarily changing from RJ to MIMIX processing


This procedure is appropriate when you plan to continue using remote journaling as your primary means of transporting data to the target system but, for some reason, temporarily need to revert to MIMIX send processing.

Important! If the data group is configured for MIMIX Dynamic Apply, you must complete the procedure in Checklist: Converting to legacy cooperative processing on page 157 before you remove remote journaling.

For the data group you want to change, do the following:

1. Use topic Ending a data group in a controlled manner in the Using MIMIX book to prepare for and perform a controlled end of the data group and end the RJ link. Specify the following on the ENDDG command:
   - *ALL for the Process prompt
   - *CNTRLD for the End process prompt
   - *YES for the End remote journaling prompt

2. Verify that the process is ended. On the Work with Data Groups display, the data group should change to show a red L in the Source DB column.

3. Modify the data group definition as follows:
   a. From the Work with DG Definitions display, type a 2 (Change) next to the data group you want and press Enter.
   b. The Change Data Group Definition (CHGDGDFN) display appears. Press Enter to see additional prompts.
   c. Specify *NO for the Use remote journal link prompt.
   d. To accept the change, press Enter.

4. Use the procedure Starting selected data group processes in the Using MIMIX book, specifying *ALL for the Start Process prompt.
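Step 3 corresponds to a command along these lines; the data group name is hypothetical, and RJLNK is the documented keyword for the Use remote journal link prompt.

   CHGDGDFN DGDFN(PAYABLES CHICAGO NEWYORK) RJLNK(*NO)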


Changing from remote journaling to MIMIX processing


Use this procedure when you no longer want to use remote journaling for a data group and want to permanently change the data group to use MIMIX send processing.

Important! If the data group is configured for MIMIX Dynamic Apply, you must complete the procedure in Checklist: Converting to legacy cooperative processing on page 157 before you remove remote journaling.

Perform these tasks from the MIMIX management system unless these instructions indicate otherwise.

1. Perform a controlled end for the data group that you want to change using topic Ending a data group in a controlled manner in the Using MIMIX book. On the ENDDG command, specify the following:
   - *ALL for the Process prompt
   - *CNTRLD for the End process prompt
   Note: Do not end the RJ link at this time. Step 2 verifies that the RJ link is not in use by any other processes or data groups before ending and removing the RJ environment.

2. Perform the procedure in topic Removing a remote journaling environment on page 231.

3. Modify the data group definition as follows:
   a. From the Work with DG Definitions display, type a 2 (Change) next to the data group you want and press Enter.
   b. The Change Data Group Definition (CHGDGDFN) display appears. Press Enter to see additional prompts.
   c. Specify *NO for the Use remote journal link prompt.
   d. To accept the change, press Enter.

4. Start data group replication using the procedure Starting selected data group processes in the Using MIMIX book and specify *ALL for the Start processes prompt (PRC parameter).


Removing a remote journaling environment


Use this procedure when you want to remove a remote journaling environment that you no longer need. This procedure removes configuration elements and system objects necessary for data group replication with remote journaling.

1. Verify that the remote journal link is not used by any data group. Use Identifying data groups that use an RJ link on page 310. If you identify a data group that uses the remote journal link, check with your MIMIX administrator to determine how to proceed. Possible courses of action are:
   - If the data group is being converted to use MIMIX send processing, or if the data group will no longer be used, perform a controlled end of the data group. When the data group is ended, continue with Step 2 of this procedure.
   - If the data group needs to remain operable using remote journaling, do not continue with this procedure.
   Attention: Do not continue with this procedure if you identified a data group that uses the remote journal link and the data group must continue to be operational. This procedure removes configuration elements and system objects necessary for replication with remote journaling.

2. End the remote journal link and verify that it has a state value of *INACTIVE before you continue. Refer to topics Ending a remote journal link independently and Checking status of a remote journal link in the Using MIMIX book.

3. From the management system, do the following to remove the connection to the remote journal:
   a. Access the journal definitions for the data group whose environment you want to change. From the Work with Data Groups display, type a 45 (Journal definitions) next to the data group that you want and press Enter.
   b. Type a 12 (Work with RJ links) next to either journal definition you want and press Enter. You can select either the source or target journal definition.
   c. From the Work with RJ Links display, type a 15 (Remove RJ connection) next to the link that you want and press Enter. Note: If more than one RJ link is available for the data group, ensure that you choose the link you want.
   d. A confirmation display appears. To continue removing the connections for the selected links, press Enter.

4. From the Work with RJ Links display, do the following to delete the target system objects associated with the RJ link:
   a. Type a 24 (Delete target jrn environment) next to the link that you want and press Enter.


   b. A confirmation display appears. To continue deleting the journal, its associated message queue, and the journal receiver, and to remove the connection to the source journal receiver, press Enter.

5. Delete the target journal definition using topic Deleting a Definition in the Using MIMIX book. When you delete the target journal definition, its link to the source journal definition is removed.

6. Use option 4 (Delete) on the Work with Monitors display to delete the RJLNK monitors which have the same name as the RJ link.


Chapter 10

Configuring data group definitions


By creating a data group definition, you identify to MIMIX the characteristics of how replication occurs between two systems. You must have at least one data group definition in order to perform replication. In an Intra environment, a data group definition defines how replication occurs between the two product libraries used by INTRA. Once data group definitions exist for MIMIX, they can also be used by the MIMIX Promoter product.

The topics in this chapter include:

- Tips for data group parameters on page 234 provides tips for using the more common options for data group definitions.
- Creating a data group definition on page 247 provides the steps to follow for creating a data group definition.
- Changing a data group definition on page 251 provides the steps to follow for changing a data group definition.
- Fine-tuning backlog warning thresholds for a data group on page 251 describes what to consider when adjusting the values at which the backlog warning thresholds are triggered.


Tips for data group parameters


This topic provides tips for using the more common options for data group definitions. Context-sensitive help is available online for all options on the data group definition commands. Refer to Additional considerations for data groups on page 244 for more information.

Shipped default values for the Create Data Group Definition (CRTDGDFN) command result in data groups configured for MIMIX Dynamic Apply. For additional information, see Table 12 in Considerations for LF and PF files on page 105.

Data group names (DGDFN, DGSHORTNAM) These parameters identify the data group. The Data group definition (DGDFN) is a three-part name that uniquely identifies a data group. The three-part name must be unique to a MIMIX cluster. The first part of the name identifies the data group. The second and third parts of the name (System 1 and System 2) specify system definitions representing the systems between which the files and objects associated with the data group are replicated.

Notes:
- In the first part of the name, the first character must be A - Z, $, #, or @. The remaining characters can be alphanumeric and can contain a $, #, @, a period (.), or an underscore (_). Data group names cannot be UPSMON or begin with the characters MM.
- For clustering environments only, MIMIX recommends using the value *RCYDMN in the System 1 and System 2 fields for Peer CRGs.

One of the system definitions specified must represent a management system. Although you can specify the system definitions in any order, you may find it helpful to specify them in the order in which replication occurs during normal operations. For many users, normal replication occurs from a production system to a backup system, where the backup system is defined as the management system for MIMIX. For example, if you normally replicate data for an application from a production system (MEXICITY) to a backup system (CHICAGO) and the backup system is the management system for the MIMIX cluster, you might name your data group SUPERAPP MEXICITY CHICAGO.

The Short data group name (DGSHORTNAM) parameter indicates an abbreviated name used as a prefix to identify jobs associated with a data group. MIMIX will generate this prefix for you when the default *GEN is used. The short name must be unique to the MIMIX cluster and cannot be changed after the data group is created.

Data source (DTASRC) This parameter indicates which of the systems in the data group definition is used as the source of data for replication.

Allow to be switched (ALWSWT) This parameter determines whether the direction in which data is replicated between systems can be switched. If you plan to use the data group for high availability purposes, use the default value *YES. This allows you to use one data group for replicating data in either direction between the two systems. If you do not allow switching directions, you need a second data group with similar attributes, in which the roles of source and target are reversed, in order to support high availability.

Data group type (TYPE) The default value *ALL indicates that the data group can be used by both user journal and system journal replication processes. This enables you to use the same data group for all of the replicated data for an application. The value *ALL is required for user journal replication of IFS objects, data areas, and data queues. MIMIX Dynamic Apply also supports the value *DB. For additional information, see Requirements and limitations of MIMIX Dynamic Apply on page 110.

Note: In clustering environments only, the data group value *PEER is available. This provides you with support for system values and other system attributes that MIMIX currently does not support.

Transfer definitions (PRITFRDFN, SECTFRDFN) These parameters identify the transfer definitions used to communicate between the systems defined by the data group. The name you specify in these parameters must match the first part of a transfer definition name. By default, MIMIX uses the name PRIMARY for the value of the primary transfer definition (PRITFRDFN) parameter and for the first part of the name of a transfer definition. If you specify a secondary transfer definition (SECTFRDFN), it is used if the communications path specified in the primary transfer definition is not available. Once MIMIX starts using the secondary transfer definition, it continues to use it even after the primary communication path becomes available again.

Reader wait time (seconds) (RDRWAIT) You can specify the maximum number of seconds that the send process waits when there are no entries available to process. Jobs go into a delay state when there are no entries to process. Jobs wait for the time you specify even when new entries arrive in the journal. A value of 0 uses more system resources.

Common database parameters (JRNTGT, JRNDFN1, JRNDFN2, ASPGRP1, ASPGRP2, RJLNK, COOPJRN, NBRDBAPY, DBJRNPRC) These parameters apply to data groups that can include database files or tracking entries. Data group types of *ALL or *DB include database files. Data group types of *ALL may also include tracking entries.

Journal on target (JRNTGT) The default value *YES enables journaling on the target system, which allows you to switch the direction of a data group more quickly. Replication of files with some types of referential constraint actions may require a value of *YES. For more information, see Considerations for LF and PF files on page 105. If you specify *NO, you must ensure that, in the event of a switch to the direction of replication, you manually start journaling on the target system before allowing users to access the files. Otherwise, activity against those files may not be properly recorded for replication.

System 1 journal definition (JRNDFN1) and System 2 journal definition (JRNDFN2) These parameters identify the user journal definitions associated with the systems defined as System 1 and System 2, respectively, of the data group. The value *DGDFN indicates that the journal definition has the same name as the data group definition. The DTASRC, ALWSWT, JRNTGT, JRNDFN1, and JRNDFN2 parameters interact to automatically create as much of the journaling environment as possible. The DTASRC parameter determines whether system 1 or system 2 is the source system for the data group. When you create the data group definition, if the journal definition for the source system does not exist, a journal definition is created. If you specify to journal on the target system and the journal definition for the target system does not exist, that journal definition is also created. The names of journal definitions created in this way are taken from the values of the JRNDFN1 and JRNDFN2 parameters, according to which system is considered the source system at the time they are created. You may need to build the journaling environment for these journal definitions.

System 1 ASP group (ASPGRP1) and System 2 ASP group (ASPGRP2) These parameters identify the name of the primary auxiliary storage pool (ASP) device within an ASP group on each system. The value *NONE allows replication from libraries in the system ASP and basic user ASPs 2-32. Specify a value when you want to replicate IFS objects from a user journal or when you want to replicate objects from ASPs 33 or higher. For more information, see Benefits of independent ASPs on page 564.

Use remote journal link (RJLNK) This parameter identifies how journal entries are moved to the target system. The default value *YES uses remote journaling to transfer data to the target system. This value results in the automatic creation of the journal definitions (CRTJRNDFN command) and the RJ link (ADDRJLNK command), if needed. The RJ link defines the source and target journal definitions and the connection between them. When ADDRJLNK is run during the creation of a data group, the data group transfer definition names are used for the ADDRJLNK transfer definition parameters. MIMIX Dynamic Apply requires the value *YES. The value *NO is appropriate when MIMIX source-send processes must be used.

Cooperative journal (COOPJRN) This parameter determines whether cooperatively processed operations for journaled objects are performed primarily by user (database) journal replication processes or by system (audit) journal replication processes. Cooperative processing through the user journal is recommended and is called MIMIX Dynamic Apply. For data groups created on version 5, the shipped default value *DFT resolves to *USRJRN (user journal) when configuration requirements for MIMIX Dynamic Apply are met. If those requirements are not met, *DFT resolves to *SYSJRN and cooperative processing is performed through system journal replication processes.

Number of DB apply sessions (NBRDBAPY) You can specify the number of apply sessions allowed to process the data for the data group.

DB journal entry processing (DBJRNPRC) This parameter allows you to specify several criteria that MIMIX uses to filter user journal entries before they reach the database apply (DBAPY) process. Each element of the parameter identifies a criterion that can be set to either *SEND or *IGNORE. The value *SEND causes the journal entries meeting the criterion to be processed and sent to the database apply process. For data groups configured to use MIMIX source-send processes, *SEND can minimize the amount of data that is sent over a communications path. The value *IGNORE prevents the entries from being sent to the database apply process. Certain database techniques, such as keyed replication, may require that an element be set to a specific value. The following elements describe how journal entries are handled by the database reader (DBRDR) or the database send (DBSND) processes:

- Before images This criterion determines whether before-image journal entries are filtered out before reaching the database apply process. If you use keyed replication, the before-images are often required and you should specify *SEND. *SEND is also required for the IBM Remove Journaled Changes (RMVJRNCHG) command. See Additional considerations for data groups on page 244 for more information.

- For files not in data group This criterion determines whether journal entries for files not defined to the data group are filtered out.

- Generated by MIMIX activity This criterion determines whether journal entries resulting from the MIMIX database apply process are filtered out.

- Not used by MIMIX This criterion determines whether journal entries not used by MIMIX are filtered out.
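Bringing these parameters together, the SUPERAPP example from earlier in this topic could be created with a command like this sketch; all names are hypothetical, the keywords are the ones documented in this topic, and parameters not shown take their shipped defaults.

   CRTDGDFN DGDFN(SUPERAPP MEXICITY CHICAGO) +
            TYPE(*ALL)                       + /* user and system journal */
            ALWSWT(*YES)                     + /* switchable              */
            JRNTGT(*YES)                     + /* journal on target       */
            RJLNK(*YES)                      + /* use remote journaling   */
            PRITFRDFN(PRIMARY)               +
            NBRDBAPY(2)                        /* illustrative value      */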

Additional parameters: Use F10 (Additional parameters) to access the following parameters, which are considered advanced configuration topics.

Remote journaling threshold (RJLNKTHLD) This parameter specifies the backlog threshold criteria for the remote journal function. When the backlog reaches any of the specified criteria, the threshold exceeded condition is indicated in the status of the RJ link. The threshold can be specified as a time difference, a number of journal entries, or both. When a time difference is specified, the value is the amount of time, in minutes, between the timestamp of the last source journal entry and the timestamp of the last remote journal entry. When a number of journal entries is specified, the value is the number of journal entries that have not been sent from the local journal to the remote journal. If *NONE is specified for a criterion, that criterion is not considered when determining whether the backlog has reached the threshold.

Synchronization check interval (SYNCCHKITV) This parameter, which is only valid for database processing, allows you to specify how many before-image entries to process between synchronization checks. For MIMIX to use this feature, the journal image file entry option (FEOPT parameter) must allow before-image journaling (*BOTH). When you specify a value for the interval, a synchronization check entry is sent to the apply process on the target system. The apply process compares the before-image to the image in the file (the entire record, byte for byte). If there is a synchronization problem, MIMIX puts the data group file entry on hold and stops applying journal entries. The synchronization check transactions still occur even if you specify to ignore before-images in the DB journal entry processing (DBJRNPRC) parameter.

Time stamp interval (TSPITV) This parameter, which is only valid for database processing, allows you to specify the number of entries to process before MIMIX creates a time stamp entry. Time stamps are used to evaluate performance. Note: The TSPITV parameter does not apply for remote journaling (RJ) data groups.
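For example, a hypothetical backlog threshold of 10 minutes or 250000 unsent entries could be set as in this sketch; the two-element form of RJLNKTHLD is an assumption based on the description above.

   CHGDGDFN DGDFN(PAYABLES CHICAGO NEWYORK) RJLNKTHLD(10 250000)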



Verify interval (VFYITV) This parameter allows you to specify the number of journal transactions (entries) to process before MIMIX performs additional processing. When the value specified is reached, MIMIX verifies that the communications path between the source system and the target system is still active and that the send and receive processes are successfully processing transactions. A higher value uses less system resources. A lower value provides more timely reaction to error conditions. Larger, high-volume systems should have higher values. This value also affects how often the status is updated with the "Last read" entries. A lower value results in more accurate status information.

Data area polling interval (DTAARAITV) This parameter specifies the number of seconds that the data area poller waits between checks for changes to data areas. The poller process is only used when configured data group data area entries exist. The preferred methods of replicating data areas require that data group object entries be used to identify data areas. When object entries identify data areas, the value specified in them for cooperative processing (COOPDB) determines whether the data areas are processed through the user journal with advanced journaling, or through the system journal.

Journal at creation (JRNATCRT) This parameter allows you to specify whether to start journaling when objects are created in the libraries replicated by the data group. This applies to new objects of type *FILE, *DTAARA, and *DTAQ that are cooperatively processed. All new objects of the same type are journaled, including those not replicated by the data group. If multiple data groups include the same library in their configurations, only allow one data group to use journal at object creation (*YES or *DFT). The default for this parameter is *DFT, which allows MIMIX to determine the objects to journal at creation. For example, a data group is configured to cooperatively process only file ABC from library APPDTA. The library also contains data areas and temporary files that are not configured for replication. Specifying a value that permits journaling of newly created objects (*YES or *DFT) will result in all newly created files in library APPDTA being journaled. Newly created data areas in this library would not be journaled. Note: There are operating system restrictions and some IBM library restrictions. For more information, see the requirements for implicit starting of journaling in What objects need to be journaled on page 323. For additional information, see Processing of newly created files and objects on page 127.

Parameters for automatic retry processing: MIMIX may use delay retry cycles when performing system journal replication to automatically retry processing an object that failed due to a locking condition or an in-use condition. It is normal for some pending activity entries to undergo delay retry processing, for example, when a conflict occurs between replicated objects in MIMIX and another job on the system. The following parameters define the scope of two retry cycles:

Number of times to retry (RTYNBR) This parameter specifies the number of attempts to make during a delay retry cycle.

First retry delay interval (RTYDLYITV1) This parameter specifies the amount of time, in seconds, to wait before retrying a process in the first (short) delay retry cycle.

Second retry delay interval (RTYDLYITV2) This parameter specifies the amount of time, in seconds, to wait before retrying a process in the second (long) delay retry cycle. This value is only used after all the retries for the RTYDLYITV1 parameter have been attempted.


After the initial failed save attempt, MIMIX delays for the number of seconds specified for the First retry delay interval (RTYDLYITV1) before retrying the save operation. This is repeated for the specified number of times (RTYNBR). If the object cannot be saved after all attempts in the first cycle, MIMIX enters the second retry cycle. In the second retry cycle, MIMIX uses the number of seconds specified in the Second retry delay interval (RTYDLYITV2) parameter and repeats the save attempt for the specified number of times (RTYNBR). If the object identified by the entry is in use (*INUSE) after the first and second retry cycle attempts have been exhausted, a third retry cycle is attempted if the Automatic object recovery policy is enabled. The values in effect for the Number of third delay/retries policy and the Third retry interval (min.) policy determine the scope of the third retry cycle. After all attempts have been performed, if the object still cannot be processed because of contention with other jobs, the status of the entry is changed to *FAILED.

Adaptive cache (ADPCHE) This parameter enables adaptive caching for a data group. Adaptive caching is a technique by which MIMIX caches data into memory before it is needed by user journal replication processes. Adaptive caching improves elapsed-time performance at the cost of additional memory.

File and tracking entry options (FEOPT) This parameter specifies default options that determine how MIMIX handles file entries and tracking entries for the data group. All database file entries, object tracking entries, and IFS tracking entries defined to the data group use these options unless they are explicitly overridden by values specified in data group file or object entries. File entry options in data group object entries enable you to set values for files and tracking entries that are cooperatively processed. The options are as follows:

Journal image This option allows you to control the kinds of record images that are written to the journal when data updates are made to database file records, IFS stream files, data areas, or data queues. The default value *AFTER causes only after-images to be written to the journal. The value *BOTH causes both before-images and after-images to be written to the journal. Some database techniques, such as keyed replication, may require the use of both before-images and after-images. *BOTH is also required for the IBM RMVJRNCHG (Remove Journal Change) command. See Additional considerations for data groups on page 244 for more information.

Omit open/close entries This option allows you to specify whether open and close entries are omitted from the journal. The default value *YES indicates that open and close operations on file members or IFS tracking entries defined to the data group do not create open and close journal entries and are therefore omitted from the journal. If you specify *NO, journal entries are created for open and close operations and are placed in the journal.



Replication type This option allows you to specify the type of replication to use for database files defined to the data group. The default value *POSITION indicates that each file is replicated based on the position of the record within the file. Positional replication uses the value of the relative record number (RRN) found in the journal entry header to locate a database record that is being updated or deleted. MIMIX Dynamic Apply requires the value *POSITION. The value *KEYED indicates that each file is replicated based on the value of the primary key defined to the database file. The value of the key is used to locate a database record that is being deleted or updated. MIMIX strongly recommends that any file configured for keyed replication also be enabled for both before-image and after-image journaling. Files defined using keyed replication must have at least one unique access path defined. For additional information, see Keyed replication on page 355.

Lock member during apply This option allows you to choose whether you want the database apply process to lock file members when they are being updated during the apply process. This prevents inadvertent updates on the target system that can cause synchronization errors. Members are locked only when the apply process is active.

Apply session With this option, you can assign a specific apply session for processing files defined to the data group. The default value *ANY indicates that MIMIX determines which apply session to use and performs load balancing. Notes: Any changes made to the apply session option are not effective until the data group is started with *YES specified for the clear pending and clear error parameters. For IFS and object tracking entries, only apply session A is valid. For additional information see Database apply session balancing on page 87.

Collision resolution This option determines how data collisions are resolved. The default value *HLDERR indicates that a file is put on hold if a collision is detected. The value *AUTOSYNC indicates that MIMIX will attempt to automatically synchronize the source and target file. You can also specify the name of the collision resolution class (CRCLS) to use. A collision resolution class allows you to specify how to handle a variety of collision types, including calling exit programs to handle them. See the online help for the Create Collision Resolution Class (CRTCRCLS) command for more information. Note: The *AUTOSYNC value should not be used if the Automatic database recovery policy is enabled.

Disable triggers during apply This option determines whether MIMIX should disable any triggers on physical files during the database apply process. The default value *YES indicates that triggers are disabled by the database apply process while the file is open.

Process trigger entries This option determines whether MIMIX should process journal entries that are generated by triggers. The default value *YES indicates that journal entries generated by triggers are processed.
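To illustrate the file entry options, here is a hypothetical sketch that configures keyed replication with both record images journaled. The data group name is invented and the FEOPT element order mirrors the order of the options described above (journal image, omit open/close entries, replication type, lock member during apply, apply session, collision resolution, disable triggers, process trigger entries); confirm the actual order with the command prompter before use:

    CHGDGDFN DGDFN(INVDG SYS1 SYS2) +
             FEOPT(*BOTH *YES *KEYED *YES *ANY *HLDERR *YES *YES)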


Database reader/send threshold (DBRDRTHLD) This parameter specifies the backlog threshold criteria for the database reader (DBRDR) process. When the backlog reaches any of the specified criteria, the threshold exceeded condition is indicated in the status of the DBRDR process. If the data group is configured for MIMIX source-send processing instead of remote journaling, this threshold applies to the database send (DBSND) process. The threshold can be specified as time, journal entries, or both. When time is specified, the value is the amount of time, in minutes, between the timestamp of the last journal entry read by the process and the timestamp of the last journal entry in the journal. When a journal entry quantity is specified, the value is the number of journal entries that have not been read from the journal. If *NONE is specified for a criterion, that criterion is not considered when determining whether the backlog has reached the threshold.

Database apply processing (DBAPYPRC) This parameter allows you to specify defaults for operations associated with the database apply processes. Each configured apply session uses the values specified in this parameter. The areas for which you can specify defaults are as follows:

Force data interval You can specify the number of records that are processed before MIMIX forces the apply process information to disk from cache memory. A lower value provides easier recovery for major system failures. A higher value provides more efficient processing.

Maximum open members You can specify the maximum number of members (with journal transactions to be applied) that the apply process can have open at one time. Once the specified limit is reached, the apply process selectively closes one file before opening a new file. A lower value reduces disk usage by the apply process. A higher value provides more efficient processing because MIMIX does not open and close files as often.

Threshold warning You can specify the number of entries the apply process can have waiting to be applied before a warning message is sent. When the threshold is reached, the threshold exceeded condition is indicated in the status of the database apply process and a message is sent to the primary and secondary message queues.

Apply history log spaces You can specify the maximum number of history log spaces that are kept after the journal entries are applied. Any value other than zero (0) affects performance of the apply processes.

Keep journal log user spaces You can specify the maximum number of journal log spaces to retain after the journal entries are applied. Log user spaces are automatically deleted by MIMIX. Only the number of user spaces you specify are kept.

Size of log user spaces (MB) You can specify the size of each log space (in megabytes) in the log space chain. Log spaces are used as a staging area for journal entries before they are applied. Larger log spaces provide better performance.
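As an assumption-laden sketch, the following command raises the database apply threshold warning to 250,000 entries while leaving the other areas unchanged. The data group name is invented, the element order mirrors the areas described above (force data interval, maximum open members, threshold warning, apply history log spaces, keep journal log user spaces, size of log user spaces), and whether unspecified elements may be given as *SAME is an assumption; prompt the command to verify:

    CHGDGDFN DGDFN(INVDG SYS1 SYS2) +
             DBAPYPRC(*SAME *SAME 250000 *SAME *SAME *SAME)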

Object processing (OBJPRC) This parameter allows you to specify defaults for object replication. The areas for which you can specify defaults are as follows:

Object default owner You can specify the name of the default owner for objects whose owning user profile does not exist on the target system. The product default uses QDFTOWN for the owner user profile.



DLO transmission method You can specify the method used to transmit the DLO content and attributes to the target system. The value *OPTIMIZED uses i5/OS APIs. The value *SAVRST uses i5/OS save and restore commands.

IFS transmission method You can specify the method used to transmit IFS object content to the target system. The value *SAVRST uses i5/OS save and restore commands. The value *OPTIMIZED uses i5/OS APIs. Note: It is recommended that you use the *OPTIMIZED method of IFS transmission only in environments in which the high volume of IFS activity results in persistent replication backlogs. The i5/OS save and restore method guarantees that all attributes of an IFS object are replicated. The IFS optimization method does not currently replicate digital signatures or other attributes that have been added in i5/OS V5R2 or later.

User profile status You can specify the user profile Status value for user profiles when they are replicated. This allows you to replicate user profiles with the same status as the source system, or in either an enabled or disabled status for normal operations. If operations are switched to the backup system, user profiles can then be enabled or disabled as needed as part of the switching process.

Keep deleted spooled files You can specify whether to retain replicated spooled files on the target system after they have been deleted from the source system. When you specify *YES, the replicated spooled files are retained on the target system after they are deleted from the source system. MIMIX does not perform any clean-up of these spooled files. You must delete them manually when they are no longer needed. If you specify *NO, the replicated spooled files are deleted from the target system when they are deleted from the source system.

Keep DLO system object name You can specify whether the DLO on the target system is created with the same system object name as the DLO on the source system. The system object name is only preserved if the DLO is not being redirected during the replication process. If the DLO from the source system is being directed to a different name or folder on the target system, then the system object name will not be preserved.

Object retrieval delay You can specify the amount of time, in seconds, to wait after an object is created or updated before MIMIX packages the object. This delay provides time for your applications to complete their access of the object before MIMIX begins packaging the object.

Object send threshold (OBJSNDTHLD) This parameter specifies the backlog threshold criteria for the object send (OBJSND) process. When the backlog reaches any of the specified criteria, the threshold exceeded condition is indicated in the status of the OBJSND process. The threshold can be specified as time, journal entries, or both. When time is specified, the value is the amount of time, in minutes, between the timestamp of the last journal entry read by the process and the timestamp of the last journal entry in the journal. When a journal entry quantity is specified, the value is the number of journal entries that have not been read from the journal. If *NONE is specified for a criterion, that criterion is not considered when determining whether the backlog has reached the threshold.


Object retrieve processing (OBJRTVPRC) This parameter allows you to specify the minimum and maximum number of jobs allowed to handle object retrieve requests and the threshold at which the number of pending requests queued for processing causes additional temporary jobs to be started. The specified minimum number of jobs will be started when the data group is started. During periods of peak activity, if the number of pending requests exceeds the backlog jobs threshold, additional jobs, up to the maximum, are started to handle the extra work. When the backlog is handled and activity returns to normal, the extra jobs will automatically end. If the backlog reaches the warning message threshold, the threshold exceeded condition is indicated in the status of the object retrieve (OBJRTV) process. If *NONE is specified for the warning message threshold, the process status will not indicate that a backlog exists.

Container send processing (CNRSNDPRC) This parameter allows you to specify the minimum and maximum number of jobs allowed to handle container send requests and the threshold at which the number of pending requests queued for processing causes additional temporary jobs to be started. The specified minimum number of jobs will be started when the data group is started. During periods of peak activity, if the number of pending requests exceeds the backlog jobs threshold, additional jobs, up to the maximum, are started to handle the extra work. When the backlog is handled and activity returns to normal, the extra jobs will automatically end. If the backlog reaches the warning message threshold, the threshold exceeded condition is indicated in the status of the container send (CNRSND) process. If *NONE is specified for the warning message threshold, the process status will not indicate that a backlog exists.

Object apply processing (OBJAPYPRC) This parameter allows you to specify the minimum and maximum number of jobs allowed to handle object apply requests and the threshold at which the number of pending requests queued for processing causes additional temporary jobs to be started. The specified minimum number of jobs will be started when the data group is started. During periods of peak activity, if the number of pending requests exceeds the backlog threshold, additional jobs, up to the maximum, are started to handle the extra work. When the backlog is handled and activity returns to normal, the extra jobs will automatically end. You can also specify a warning message threshold that indicates the number of pending requests waiting in the queue for processing before a warning message is sent. When the threshold is reached, the threshold exceeded condition is indicated in the status of the object apply process and a message is sent to the primary and secondary message queues.

User profile for submit job (SBMUSR) This parameter allows you to specify the name of the user profile used to submit jobs. The default value *JOBD indicates that the user profile named in the specified job description is used for the job being submitted. The value *CURRENT indicates that the same user profile used by the job that is currently running is used for the submitted job.

Send job description (SNDJOBD) This parameter allows you to specify the name and library of the job description used to submit send jobs. The product default uses MIMIXSND in library MIMIXQGPL for the send job description.
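For example, here is a hedged sketch of widening the object apply job pool so that between three and twelve jobs may run, with illustrative backlog and warning thresholds. All values and the element order (minimum jobs, maximum jobs, backlog jobs threshold, warning message threshold) are assumptions based on the description above; verify with the command prompter:

    CHGDGDFN DGDFN(INVDG SYS1 SYS2) +
             OBJAPYPRC(3 12 100 1000)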



Apply job description (APYJOBD) This parameter allows you to specify the name and library of the job description used to submit apply requests. The product default uses MIMIXAPY in library MIMIXQGPL for the apply job description.

Reorganize job description (RGZJOBD) This parameter, used by database processing, allows you to specify the name and library of the job description used to submit reorganize jobs. The product default uses MIMIXRGZ in library MIMIXQGPL for the reorganize job description.

Synchronize job description (SYNCJOBD) This parameter, used by database processing, allows you to specify the name and library of the job description used to submit synchronize jobs. The product default uses MIMIXSYNC in library MIMIXQGPL for the synchronize job description. This is valid for any synchronize command that does not have a JOBD parameter on the display.

Job restart time (RSTARTTIME) MIMIX data group jobs restart daily to maintain the MIMIX environment. You can change the time at which these jobs restart. The source or target role of the system affects the results of the time you specify on a data group definition. Results may also be affected if you specify a value that uses the job restart time in a system definition defined to the data group. Changing the job restart time is considered an advanced technique.

Recovery window (RCYWIN) Configuring a recovery window for a data group specifies the minimum amount of time, in minutes, that a recovery window is available and identifies the replication processes that permit a recovery window. (Recovery windows and recovery points are supported with the MIMIX CDP feature, which requires an additional access code.) A recovery window introduces a delay in the specified processes to create a minimum time during which you can set a recovery point. Once a recovery point is set, you can react to anticipated problems and take action to prevent a corrupted object from reaching the target system. When the processes reach the recovery point, they are suspended so that any corruption in the transactions after that point will not automatically be processed. By its nature, a recovery window can affect the data group's recovery time objective (RTO). Consider the effect of the duration you specify on the data group's ability to meet your required RTO. You should also disable auditing for any data group that has a configured recovery window. For more information, see Preventing audits from running in the Using MIMIX book.
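As a brief hypothetical sketch, job descriptions are specified as qualified names. Here the apply and reorganize jobs are redirected to job descriptions in a user library; the library and job description names are invented:

    CHGDGDFN DGDFN(INVDG SYS1 SYS2) +
             APYJOBD(MYLIB/MYAPYJOBD) RGZJOBD(MYLIB/MYRGZJOBD)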

Additional considerations for data groups


If unwanted changes are recorded to a journal but not realized until a later time, you can backtrack to a time prior to when the changes were made by using the Remove Journal Changes (RMVJRNCHG) command provided by IBM. In order to use this command, your configuration must specify the following values.

For the data group definition, the following values must be specified for the parameters indicated:

DB journal entry processing (DBJRNPRC): Before images *SEND


File and tracking entry options (FEOPT): Journal image *BOTH

For each data group file entry, the following must be specified:

File entry options: Journal image *DGDFT or *BOTH

Finally, if you are changing an existing data group to have these values, you must end and restart the data group. Once you have these values specified, you will be able to use the RMVJRNCHG command if needed.
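A hedged sketch of the data group changes described above follows; the data group name is invented, and the element orders for DBJRNPRC and FEOPT are assumptions to confirm with the command prompter. After the change, end and restart the data group as noted:

    CHGDGDFN DGDFN(INVDG SYS1 SYS2) +
             DBJRNPRC(*SEND *IGNORE *IGNORE *IGNORE) +
             FEOPT(*BOTH *YES *POSITION *YES *ANY *HLDERR *YES *YES)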
Updated for 5.0.08.00 and 5.0.13.00.




Creating a data group definition


Shipped default values for the Create Data Group Definition (CRTDGDFN) command result in data groups configured for MIMIX Dynamic Apply. These data groups use remote journaling as an integral part of the user journal replication processes. For additional information see Table 12 in Considerations for LF and PF files on page 105. For information about command parameters, see Tips for data group parameters on page 234.

To create a data group, do the following:

1. To access the appropriate command, do the following:

a. From the MIMIX Basic Main Menu, type 11 (Configuration menu) and press Enter.

b. From the MIMIX Configuration Menu, select option 4 (Work with data group definitions) and press Enter.

c. From the Work with Data Group Definitions display, type a 1 (Create) next to the blank line at the top of the list area and press Enter.

2. The Create Data Group Definition (CRTDGDFN) display appears. Specify a valid three-part name at the Data group definition prompts. Note: Data group names cannot be UPSMON or begin with the characters MM.

3. For the remaining prompts on the display, verify the values shown are what you want. If necessary, change the values.

a. If you want a specific prefix to be used for jobs associated with the data group, specify a value at the Short data group name prompt. Otherwise, MIMIX will generate a prefix.

b. Ensure that the value of the Data source prompt represents the system that you want to use as the source of data to be replicated.

c. Verify that the value of the Allow to be switched prompt is what you want.

d. Verify that the value of the Data group type prompt is what you need. MIMIX Dynamic Apply requires either *ALL or *DB. Legacy cooperative processing and user journal replication of IFS objects, data areas, and data queues require *ALL.

e. Verify that the value of the Primary transfer definition prompt is what you want.

f. If you want MIMIX to have access to an alternative communications path, specify a value for the Secondary transfer definition prompt.

g. Verify that the value of the Reader wait time (seconds) prompt is what you want.

h. Press Enter.

4. If you specified *OBJ for the Data group type, skip to Step 9.

5. The Journal on target prompt appears on the display. Verify that the value shown is what you want and press Enter.



Note: If you specify *YES and you require that the status of journaling on the target system is accurate, you should perform a save and restore operation on the target system prior to loading the data group file entries. If you are performing your initial configuration, however, it is not necessary to perform a save and restore operation. You will synchronize as part of the configuration checklist.

6. More prompts appear on the display that identify journaling information for the data group. You may need to use the Page Down key to see the prompts. Do the following:

a. Ensure that the values of System 1 journal definition and System 2 journal definition identify the journal definitions you need. Notes: If you have not journaled before, the value *DGDFN is appropriate. If you have an existing journaling environment that you have identified to MIMIX in a journal definition, specify the name of the journal definition. If you only see one of the journal definition prompts, you have specified *NO for both the Allow to be switched prompt and the Journal on target prompt. The journal definition prompt that appears is for the source system as specified in the Data source prompt.

b. If any objects to replicate are located in an auxiliary storage pool (ASP) group on either system, specify values for System 1 ASP group and System 2 ASP group as needed. The ASP group name is the name of the primary ASP device within the ASP group.

c. The default for the Use remote journal link prompt is *YES, which is required for MIMIX Dynamic Apply and preferred for other configurations. MIMIX creates a transfer definition and an RJ link, if needed. To create a data group definition for a source-send configuration, change the value to *NO.

d. At the Cooperative journal (COOPJRN) prompt, specify the journal for cooperative operations. For new data groups, the value *DFT automatically resolves to *USRJRN when Data group type is *ALL or *DB and Remote journal link is *YES. The value *USRJRN processes through the user (database) journal while the value *SYSJRN processes through the system (audit) journal.

7. At the Number of DB apply sessions prompt, specify the number of apply sessions you want to use.

8. Verify that the values shown for the DB journal entry processing prompts are what you want. Note: *SEND is required for the IBM RMVJRNCHG (Remove Journal Change) command. See Additional considerations for data groups on page 244 for more information.

9. At the Description prompt, type a text description of the data group definition, enclosed in apostrophes.

10. Do one of the following:


To accept the basic data group configuration, press Enter. Most users can accept the default values for the remaining parameters. The data group is created when you press Enter.

To access prompts for advanced configuration, press F10 (Additional parameters) and continue with the next step.

Advanced Data Group Options: The remaining steps of this procedure are only necessary if you need to access options for advanced configuration topics. The prompts are listed in the order they appear on the display. Because i5/OS does not allow additional parameters to be prompt-controlled, you will see all parameters regardless of the value specified for the Data group type prompt.

11. Specify the values you need for the following prompts associated with user journal replication: Remote journaling threshold, Synchronization check interval, Time stamp interval, Verify interval, Data area polling interval, and Journal at creation.

12. Specify the values you need for the following prompts associated with system journal replication: Number of times to retry, First retry delay interval, and Second retry delay interval.

13. Accept the value *YES for the Adaptive cache prompt unless the system is memory constrained.

14. Specify the values you need for each of the prompts on the File and tracking ent. opts (FEOPT) parameter. Notes: Replication type must be *POSITION for MIMIX Dynamic Apply. Apply session A is used for IFS objects, data areas, and data queues that are configured for user journal replication. For more information see Database apply session balancing on page 87. The journal image value *BOTH is required for the IBM RMVJRNCHG (Remove Journal Change) command. See Additional considerations for data groups on page 244 for more information.

15. Specify the values you need for each element of the following parameters: Database reader/send threshold, Database apply processing, Object processing, Object send threshold, Object retrieve processing, Container send processing, and Object apply processing.




16. If necessary, change the values for the following prompts: User profile for submit job; Send job description and its Library; Apply job description and its Library; Reorganize job description and its Library; Synchronize job description and its Library; Job restart time.

17. When you are sure that you have defined all of the values that you need, press Enter to create the data group definition.
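For orientation, here is a minimal hedged sketch of creating a data group from the command line rather than through the menus. The three-part name, transfer definition name, and description are invented; keywords not named in this topic (such as TYPE for the data group type and PRITFRDFN for the primary transfer definition) are assumptions based on the prompt text, and any parameter not shown takes its shipped default:

    CRTDGDFN DGDFN(INVDG SYS1 SYS2) DTASRC(*SYS1) +
             TYPE(*ALL) PRITFRDFN(PRIMARY) RJLNK(*YES) +
             COOPJRN(*DFT) NBRDBAPY(2) +
             TEXT('Inventory application data group')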
Updated for 5.0.13.00.


Changing a data group definition


For information about command parameters, see Tips for data group parameters on page 234. To change a data group definition, do the following:

1. From the Work with DG Definitions display, type a 2 (Change) next to the data group you want and press Enter.

2. The Change Data Group Definition (CHGDGDFN) display appears. Press Enter to see additional prompts.

3. Make any changes you need for the values of the prompts. Page Down to see more of the prompts. Note: If you change the Number of DB apply sessions prompt (NBRDBAPY), you need to start the data group specifying *YES for the Clear pending prompt (CLRPND).

4. If you need to access advanced functions, press F10 (Additional parameters). Make any changes you need for the values of the prompts.

5. When you are ready to accept the changes, press Enter.
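For example, here is a hedged command-line sketch of increasing the number of apply sessions and then restarting the data group with the clear pending option, as the note in Step 3 requires. The data group name is invented and the ENDDG and STRDG parameter keywords are assumptions based on the prompt text:

    CHGDGDFN DGDFN(INVDG SYS1 SYS2) NBRDBAPY(4)
    ENDDG DGDFN(INVDG SYS1 SYS2)
    STRDG DGDFN(INVDG SYS1 SYS2) CLRPND(*YES)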

Fine-tuning backlog warning thresholds for a data group


MIMIX supports the ability to set a backlog threshold on each of the replication jobs used by a data group. When a job has a backlog that reaches or exceeds the specified threshold, the threshold condition is indicated in the job status and reflected in user interfaces.

Threshold settings are meant to inform you that, while normal replication processes are active, a condition exists that could become a problem. What is an acceptable risk for some data groups may not be acceptable for other data groups or in some environments. For example, a threshold condition which occurs after starting a process that was temporarily ended, or while processing an unusually large object which rarely changes, may be an acceptable risk. However, a process that is continuously in a threshold condition, or multiple processes that are frequently in threshold conditions, may indicate a more serious exposure that requires attention.

Ultimately, each threshold setting must be a balance between allowing normal fluctuations to occur while ensuring that a job status is highlighted when a backlog approaches an unacceptable level of risk to your recovery time objectives (RTO) or risk of data loss.

Important! When evaluating whether threshold settings are compatible with your RTO, you must consider all of the processes in the replication paths for which the data group is configured and their thresholds. Each threshold represents only one process in either the user journal replication path or the system journal replication path. If the threshold for one process is set higher than its shipped value, a backlog for that process may not result in a threshold condition while being sufficiently large to cause subsequent processes to have backlogs which exceed their thresholds. Consider the cumulative effect that having multiple processes in threshold conditions would have on RTO and your tolerance for data loss in the event of a failure.



Table 31 lists the shipped values for thresholds available in a data group definition, identifies the risk associated with a backlog for each replication process, and identifies available options to address a persistent threshold condition. For each data group, you may need to use multiple options or adjust one or more threshold values multiple times before finding an appropriate setting.
Table 31. Shipped threshold values for replication processes, the risk associated with a backlog, and options for resolving persistent threshold conditions

Remote journaling threshold (shipped default: 10 minutes) All journal entries in the backlog for the remote journaling function exist only in the source system journal and are waiting to be transmitted to the remote journal. These entries cannot be processed by MIMIX user journal replication processes and are at risk of being lost if the source system fails. After the source system becomes available again, journal analysis may be required. Options: Option 3, Option 4.

Database reader/send threshold (shipped default: 10 minutes) For data groups that use remote journaling, all journal entries in the database reader backlog are physically located on the target system but MIMIX has not started to replicate them. If the source system fails, these entries need to be read and applied before switching. For data groups that use MIMIX source-send processing, all journal entries in the database send backlog are waiting to be read and to be transmitted to the target system. The backlogged journal entries exist only in the source system and are at risk of being lost if the source system fails. After the source system becomes available again, journal analysis may be required. Options: Option 2, Option 3, Option 4.

Database apply warning message threshold (shipped default: 100,000 entries) All of the entries in the database apply backlog are waiting to be applied to the target system. If the source system fails, these entries need to be applied before switching. A large backlog can also affect performance. Options: Option 2, Option 3, Option 4.

Object send threshold (shipped default: 10 minutes) All of the journal entries in the object send backlog exist only in the system journal on the source system and are at risk of being lost if the source system fails. MIMIX may not have determined all of the information necessary to replicate the objects associated with the journal entries. As this backlog clears, subsequent processes may have backlogs as replication progresses. Options: Option 2, Option 3, Option 4.

Object retrieve warning message threshold (shipped default: 100 entries) All of the objects associated with journal entries in the object retrieve backlog are waiting to be packaged so they can be sent to the target system. The latest changes to these objects exist only in the source system and are at risk of being lost if the source system fails. As this backlog clears, subsequent processes may have backlogs as replication progresses. Options: Option 1, Option 2, Option 3, Option 4.

Container send warning message threshold (shipped default: 100 entries) All of the packaged objects associated with journal entries in the container send backlog are waiting to be sent to the target system. The latest changes to these objects exist only in the source system and are at risk of being lost if the source system fails. As this backlog clears, subsequent processes may have backlogs as replication progresses. Options: Option 1, Option 2, Option 3, Option 4.

Object apply warning message threshold (shipped default: 100 requests) All of the entries in the object apply backlog are waiting to be applied to the target system. If the source system fails, these entries need to be applied before switching. Any related objects for which an automatic recovery action was collecting data may be lost. Options: Option 1, Option 2, Option 3, Option 4.

The following options are available, listed in order of preference. Some options are not available for all thresholds.

Option 1 - Adjust the number of available jobs. This option is available only for the object retrieve, container send, and object apply processes. Each of these processes has a configurable minimum and maximum number of jobs, a threshold at which more jobs are started, and a warning message threshold. If the number of entries in a backlog divided by the number of active jobs exceeds the job threshold, extra jobs are automatically started in an attempt to address the backlog. If the backlog reaches the higher value specified in the warning message threshold, the process status reflects the threshold condition. If the process frequently shows a threshold status, the maximum number of jobs may be too low or the job threshold value may be too high. Adjusting either value in the data group configuration can result in more throughput.



Option 2 - Temporarily increase job performance. This option is available for all processes except the RJ link. Use work management functions to increase the resources available to a job by increasing its run priority or its timeslice (CHGJOB command; see the sketch following this list). These changes are effective only for the current instance of the job. The changes do not persist if the job is ended manually or by nightly cleanup operations resulting from the configured job restart time (RSTARTTIME) on the data group definition.

Option 3 - Change threshold values or add criteria. All processes support changing the threshold value. In addition, if the quantity of entries is more of a concern than time, some processes support specifying additional threshold criteria not used by shipped default settings. For the remote journal, database reader (or database send), and object send processes, you can adjust the threshold so that a number of journal entries is used as criteria instead of, or in conjunction with, a time value. If both time and entries are specified, the first criterion reached triggers the threshold condition. Changes to threshold values are effective the next time the process status is requested.

Option 4 - Get assistance. If you tried the other options and threshold conditions persist, contact your Certified MIMIX Consultant for assistance. It may be necessary to change configurations to adjust what is defined to each data group or to make permanent work management changes for specific jobs.
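As an illustration of Option 2, the IBM CHGJOB command can raise a replication job's run priority and timeslice for the current instance of the job. The qualified job name below is invented, and suitable values depend on your work management policies:

    CHGJOB JOB(123456/MIMIXOWN/DBAPY) RUNPTY(15) TIMESLICE(2000)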
Updated for 5.0.13.00.


Chapter 11

Additional options: working with definitions


The procedures for performing common functions, such as copying, displaying, and renaming, are very similar for all types of definitions used by MIMIX. The generic procedures in this topic can be used for copying, deleting, displaying, and printing definitions. Specific procedures are included for renaming each type of definition and for swapping system definition names. The topics in this chapter include:

Copying a definition on page 255 provides a procedure for copying a system definition, transfer definition, journal definition, or a data group definition.

Deleting a definition on page 256 provides a procedure for deleting a system definition, transfer definition, journal definition, or a data group definition.

Displaying a definition on page 257 provides a procedure for displaying a system definition, transfer definition, journal definition, or a data group definition.

Printing a definition on page 257 provides a procedure for creating a spooled file, which you can print, that identifies a system definition, transfer definition, journal definition, or a data group definition.

Renaming definitions on page 258 provides procedures for renaming definitions, such as renaming a system definition, which is typically done as a result of a change in hardware.

Copying a definition
Use this procedure on a management system to copy a system definition, transfer definition, journal definition, or a data group definition.

Notes for data group definitions: The data group entries associated with a data group definition are not copied. Before you copy a data group definition, ensure that activity is ended for the definition to which you are copying.

Notes for journal definitions: The journal definition identified in the From journal definition prompt must exist before it can be copied. The journal definition identified in the To journal definition prompt cannot exist when you specify *NO for the Replace definition prompt. If you specify *YES for the Replace definition prompt, the journal definition identified in the To journal definition prompt must exist. It is possible to introduce conflicts in your configuration when replacing an existing journal definition. These conflicts are automatically resolved or an error message is sent when the journal environment for the definition is built.

To copy a definition, do the following:

Note: The following procedure includes using MIMIX menus. See Accessing the MIMIX Main Menu on page 91 for information about using these.



1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.

2. From the MIMIX Configuration Menu, select the option for the type of definition you want and press Enter.

3. The "Work with" display for the definition type appears. Type a 3 (Copy) next to the definition you want and press Enter.

4. The Copy display for the definition type you selected appears. At the To definition prompt, specify a name for the definition to which you are copying information.

5. If you are copying a journal definition or a data group definition, the display has additional prompts. Verify that the values of the prompts are what you want.

6. The value *NO for the Replace definition prompt prevents you from replacing an existing definition. If you want to replace an existing definition, specify *YES.

7. To copy the definition, press Enter.

Deleting a definition
Use this procedure on a management system to delete a system definition, transfer definition, journal definition, or a data group definition.

Attention: When you delete a system or data group definition, information associated with the definition is also deleted. Ensure that the definition you delete is not being used for replication and be aware of the following: If you delete a system definition, all other configuration elements associated with that definition are deleted. This includes journal definitions, transfer definitions, and data group definitions with all associated data group entries. If you delete a data group definition, all of its associated data group entries are also deleted. The delete function does not clean up any records for files in the error/hold file.

When you delete a journal definition, only the definition is deleted. The files being journaled, the journal, and the journal receivers are not deleted.

To delete a definition, do the following:

Note: The following procedure includes using MIMIX menus. See Accessing the MIMIX Main Menu on page 91 for information about using these.

1. Ensure that the definition you want to delete is not being used for replication. Do the following:



a. From the MIMIX Main Menu, select option 2 (Work with systems) and press Enter.

b. Type an 8 (Work with data groups) next to the system you want and press Enter.

c. The result is a list of data groups for the system you selected. Type a 17 (File entries) next to the data group you want and press Enter.

d. On the Work with DG File Entries display, verify that the status of the file entries is *INACTIVE. If necessary, use option 10 (End journaling).

e. On the Work with Data Groups display, use option 10 (End data group).

f. Before deleting a system definition, on the Work with Systems display, use option 10 (End managers).

2. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.

3. From the MIMIX Configuration Menu, select the option for the type of definition you want and press Enter.

4. The "Work with" display for the definition type appears. Type a 4 (Delete) next to the definition you want and press Enter.

5. A confirmation display appears with a list of definitions to be deleted. To delete the definitions, press Enter.

Displaying a definition
Use this procedure to display a system definition, transfer definition, journal definition, or a data group definition. To display a definition, do the following:

Note: The following procedure includes using MIMIX menus. See Accessing the MIMIX Main Menu on page 91 for information about using these.

1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.

2. From the MIMIX Configuration Menu, select the option for the type of definition you want and press Enter.

3. The "Work with" display for the definition type appears. Type a 5 (Display) next to the definition you want and press Enter.

4. The definition display appears. Page Down to see all of the values.

Printing a definition
Use this procedure to create a spooled file, which you can print, that identifies a system definition, transfer definition, journal definition, or a data group definition. To print a definition, do the following:



Note: The following procedure includes using MIMIX menus. See Accessing the MIMIX Main Menu on page 91 for information about using these.

1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.

2. From the MIMIX Configuration Menu, select the option for the type of definition you want and press Enter.

3. The "Work with" display for the definition type appears. Type a 6 (Print) next to the definition you want and press Enter.

4. A spooled file is created with a name of MX***DFN, where *** indicates the type of definition. You can print the spooled file according to your standard print procedures.

Renaming definitions
The procedures for renaming a system definition, transfer definition, journal definition, or data group definition must be run from a management system.

Attention: Before you rename any definition, ensure that all other configuration elements related to it are not active.

This section includes the following procedures: Renaming a system definition on page 258, Renaming a transfer definition on page 261, Renaming a journal definition with considerations for RJ link on page 262, and Renaming a data group definition on page 263.

Renaming a system definition


When you rename a system definition, all other configuration information that references the system definition is automatically modified to include the updated system name. This includes journal definitions, transfer definitions, data group definitions, and associated data group entries.

A typical reason for renaming a system definition is a change in hardware. Other reasons may include a change in the naming convention used in an environment, a change in the location of the system when the name correlates to the system's location, or simply a preference for a new name over the current name.

Another reason for renaming a system definition is to swap system definition names. For example, if the roles of two systems change so that the system which was the production system becomes the backup system and vice versa, the system definition names may also be swapped to reflect this change. When swapping system definition



names, a temporary system definition name must be used because there cannot be two system definitions with the same name.

Attention: Before you rename a system definition, ensure that MIMIX activity is ended by using the End Data Group (ENDDG) and End MIMIX Manager (ENDMMXMGR) commands.

To rename system definitions, do the following for each system whose definition you are renaming, from the management system unless noted otherwise:

Note: The following procedure includes using MIMIX menus. See Accessing the MIMIX Main Menu on page 91 for information about using these.

1. Perform a controlled end of the MIMIX installation. See the Using MIMIX book for procedures for ending MIMIX.

2. End the MIMIXSBS subsystem on all systems. See the Using MIMIX book for procedures for ending the MIMIXSBS subsystem.

3. From the MIMIX Intermediate Main Menu, select option 2 (Work with systems) and press Enter.

4. From the Work with Systems display, select option 8 (Work with data groups) on the system whose definition you are renaming, and press Enter.

5. For each data group listed, do the following:

a. From the Work with Data Groups display, select option 8 (Display status) and press Enter.

b. Record the Last Read Receiver name and Sequence # for both database and object.

6. If changing the host name or IP address, do the following steps. Otherwise, continue with Step 7.

a. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu) and press Enter.

b. From the MIMIX Configuration Menu, select option 2 (Work with transfer definitions) and press Enter.

c. The Work with Transfer Definitions display appears. Select option 2 (Change) for each transfer definition that includes the system whose definition you are renaming and press Enter.

d. The Change Transfer Definition (CHGTFRDFN) display appears. Press F10 to access additional parameters.

e. Specify the new host name or IP address for the System 1 host name or address and System 2 host name or address prompts and press Enter.

Note: Many installations will have an autostart entry for the STRSVR command. Autostart entries must be reviewed for possible updates of a new system name or IP address. For more information, see Identifying the autostart job entry in the MIMIXSBS subsystem on page 191 and Changing the job description for an autostart job entry on page 191.



7. Start the MIMIXSBS subsystem and the port jobs on all systems using the host names or IP addresses. If you changed these, use the host name or IP address specified in Step 6.

8. For all systems, ensure communications before continuing. Follow the steps in topic Verifying all communications links on page 195.

9. From the Work with System Definitions (WRKSYSDFN) display, type a 7 (Rename) next to the system whose definition is being renamed and press Enter.

10. The Rename System Definitions (RNMSYSDFN) display appears. At the To system definition prompt, specify the new name for the system whose definition is being renamed and press Enter.

11. The Confirm Rename System Definition display appears. Press Enter.

12. From the MIMIX Intermediate Main Menu, select option 2 (Work with systems) and press Enter.

13. The Work with Systems display appears. Type a 9 (Start) next to the management system you want and press Enter.

14. The Start MIMIX Managers (STRMMXMGR) display appears. Do the following:

a. At the Manager prompt, specify *ALL.

b. Press F10 to access additional parameters.

c. In the Reset configuration prompt, specify *YES.

d. Press Enter.

15. The Work with Systems display appears. For each network system, do the following:

a. Type a 9 (Start) next to each network system you want and press Enter.

b. The Start MIMIX Managers (STRMMXMGR) display appears. Press Enter. Wait for the MIMIX managers to start before continuing.

16. From the Work with Systems display, select option 8 (Work with data groups) on the system whose definitions you have renamed and press Enter.

17. For each data group listed, do the following:

a. From the Work with Data Groups display, select option 9 (Start DG) and press Enter.

b. The Start Data Group (STRDG) display appears. Press F10 to display additional parameters.

c. Type the Receiver names and Sequence #, adding 1 to the sequence #s, that were recorded in Step 5b for both database and object. Press Enter.

18. From the Work with Systems display, select option 8 (Work with data groups) on the system whose definition you have renamed and ensure all data groups are active. You should see the letter A, highlighted blue, in the database source column. Refer to the Using MIMIX book for more information.

19. Press F3 to return to the Work with Systems display.



20. From the Work with Systems display, select option 8 (Work with data groups) on the management system and press Enter. 21. From the Work with Data Groups display, select option 9 (Start DG) for data groups (highlighted red) that are not active and press Enter. 22. The Start Data Group (STRDG) display appears. Press Enter. Additional parameters are displayed. Press Enter again to start the data groups. 23. The Work with data groups display appears. Ensure all data groups are active. You should see the letter A, highlighted blue in the database source column. Refer to the Using MIMIX book for more information. Press F5 to refresh data.

Renaming a transfer definition


When you rename a transfer definition, other configuration information which references it is not updated with the new name. You must manually update other information which references the transfer definition. The following procedure renames the transfer definition and includes steps to update the other configuration information that references the transfer definition including the system definition, data group definition, and remote journal link. All of the steps must be completed. To rename a transfer definition, do the following from the management system: Note: The following procedure includes using MIMIX menus. See Accessing the MIMIX Main Menu on page 91 for information about using these. 1. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu) and press Enter. 2. From the MIMIX Configuration Menu, select option 2 (Work with transfer definitions) and press Enter. 3. From the Work with Transfer Definitions menu, type a 7 (Rename) next to the definition you want to rename and press Enter. 4. The Rename Transfer Definition display for the definition type you selected appears. At the To transfer definition prompt, specify the values you want for the new name and press Enter. 5. Press F12 to return to the MIMIX Configuration Menu. 6. From the MIMIX Configuration Menu, select option 1 (Work with system definitions) and press Enter. 7. From the Work with System Definitions menu, type a 2 (Change) next to the system name whose transfer definition needs to be changed and press Enter. 8. From the Change System Definition display, specify the new name for the transfer definition and press Enter.

9. Press F12 to return to the MIMIX Configuration Menu.

10. From the MIMIX Configuration Menu, select option 4 (Work with data group definitions) and press Enter.

11. From the Work with DG Definitions menu, type a 2 (Change) next to the data group name whose transfer definition needs to be changed and press Enter.



12. From the Change Data Group Definition display, specify the new name for the transfer definition and press Enter until the Work with DG Definitions display appears.

13. Press F12 to return to the MIMIX Configuration Menu.

14. From the MIMIX Configuration Menu, select option 8 (Work with remote journal links) and press Enter.

15. From the Work with RJ Links menu, press F11 to display the transfer definitions.

16. Type a 2 (Change) next to the RJ link where you changed the transfer definition and press Enter.

17. From the Change Remote Journal Link display, specify the new name for the transfer definition and press Enter.

Renaming a journal definition with considerations for RJ link


When you rename a journal definition, other configuration information that references it is not updated with the new name. This procedure includes steps for renaming the journal definition in the data group definition, including considerations when an RJ link is used.
If you rename a journal definition, the journal is also renamed if you used the default value of *JRNDFN when configuring the journal definition. If you do not want the journal to be renamed, you must specify the journal name rather than the default of *JRNDFN for the journal (JRN) parameter.
To rename a journal definition, do the following from the management system:
Note: The following procedure includes using MIMIX menus. See Accessing the MIMIX Main Menu on page 91 for information about using these.
1. Perform a controlled end for the data group in your remote journaling environment. Use topic Ending all replication in a controlled manner in the Using MIMIX book.
2. If using remote journaling, do the following. Otherwise, continue with Step 3:
a. End the remote journal link in a controlled manner. Use topic Ending a remote journal link independently in the Using MIMIX book.
b. Verify that the remote journal link is not in use on both systems. Use topic Displaying status of a remote journal link in the Using MIMIX book. The remote journal link should have a state value of *INACTIVE before you continue.
c. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu) and press Enter.
d. From the MIMIX Configuration Menu, select option 8 (Work with remote journal links) and press Enter.
e. Remove the remote journal connection (the RJ link). From the Work with RJ Links display, type a 15 (Remove RJ connection) next to the link that you want and press Enter. A confirmation display appears. To continue removing the connections for the selected links, press Enter.
f. Press F12 to return to the MIMIX Configuration Menu.
3. From the MIMIX Configuration Menu, select option 3 (Work with journal definitions) and press Enter.
4. From the Work with Journal Definitions menu, type a 7 (Rename) next to the journal definition names you want to rename and press Enter.
5. The Rename Journal Definition display for the definition you selected appears. At the To journal definition prompts, specify the values you want for the new name.
a. If the journal name is *JRNDFN, ensure that there are no journal receivers in the specified library whose names start with the journal receiver prefix. See Building the journaling environment on page 219 for more information.
6. Press Enter. The Work with Journal Definitions display appears.
7. If using remote journaling, do the following to change the corresponding definition for the remote journal. Otherwise, continue with Step 8:
a. Type a 2 (Change) next to the corresponding remote journal definition name you changed and press Enter.
b. Specify the values entered in Step 5 and press Enter.
8. From the Work with Journal Definitions menu, type a 14 (Build) next to the journal definition names you changed and press F4.
9. The Build Journaling Environment display appears. At the Source for values prompt, specify *JRNDFN.
10. Press Enter. You should see a message indicating that the journal environment was created.
11. Press F12 to return to the MIMIX Configuration Menu. From the MIMIX Configuration Menu, select option 4 (Work with data group definitions) and press Enter.
12. From the Work with DG Definitions menu, type a 2 (Change) next to the data group name that uses the journal definition you changed and press Enter.
13. Press F10 to access additional parameters.
14. From the Change Data Group Definition display, specify the new name for the System 1 journal definition and System 2 journal definition prompts and press Enter twice.

Renaming a data group definition


Do the following to rename a data group definition:
Note: The following procedure includes using MIMIX menus. See Accessing the MIMIX Main Menu on page 91 for information about using these.
Attention: Before you rename a data group definition, ensure that the data group has a status of *INACTIVE.
1. Ensure that the data group is ended. If the data group is active, end it using the procedure Ending a data group in a controlled manner in the Using MIMIX book.
2. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu) and press Enter.
3. From the MIMIX Configuration Menu, select option 4 (Work with data group definitions) and press Enter.
4. From the Work with DG Definitions menu, type a 7 (Rename) next to the data group name you want to rename and press Enter.
5. From the Rename Data Group Definition display, specify the new name for the data group definition and press Enter.


Chapter 12

Configuring data group entries


Data group entries can identify one or many objects to be replicated or excluded from replication. You can add individual data group entries, load entries from an existing source, and change entries as needed. The topics in this chapter include:
Creating data group object entries on page 267 describes data group object entries, which are used to identify library-based objects for replication. Procedures for creating these are included.
Creating data group file entries on page 272 describes data group file entries, which are required for user journal replication of *FILE objects. Procedures for creating these are included.
Creating data group IFS entries on page 282 describes data group IFS entries, which identify IFS objects for replication. Procedures for creating these are included.
Loading tracking entries on page 284 describes how to manually load tracking entries for IFS objects, data areas, and data queues that are configured for user journal replication.
Creating data group DLO entries on page 287 describes data group DLO entries, which identify document library objects (DLOs) for replication by MIMIX system journal replication processes. Procedures for creating these are included.
Creating data group data area entries on page 289 describes data group data area entries, which identify data areas to be replicated by the data area poller process. Procedures for creating these are included.
Additional options: working with DG entries on page 291 provides procedures for performing common data group entry functions, such as copying, removing, and displaying.

The appendix Supported object types for system journal replication on page 549 lists i5/OS object types and indicates whether each object type is replicated by MIMIX.



Creating data group object entries


Data group object entries are used to identify library-based objects for replication. How replication is performed for the identified objects depends on the object type and configuration settings. For object types that cannot be journaled to a user journal, system journal replication processes are used. For object types that can be journaled (*FILE, *DTAARA, and *DTAQ), values specified in the object entry and other configuration information determine whether the object is replicated through the system journal or is cooperatively processed with the user journal. For *FILE objects, several configuration options are available, some of which also require data group file entries to be configured. For detailed concepts and requirements for supported configurations, see the following topics:
Identifying library-based objects for replication on page 100
Identifying logical and physical files for replication on page 105
Identifying data areas and data queues for replication on page 112

When you configure MIMIX, you can create data group object entries by adding individual object entries or by using the custom load function for library-based objects. The custom load function can simplify creating data group entries. This function generates a list of objects that match your specified criteria, from which you can selectively create data group object entries.
For example, if you want to replicate all but a few of the data areas in a specific library, you could use the Add Data Group Object Entry (ADDDGOBJE) command to create a single data group object entry that includes all data areas in the library. Then, using the same object selection criteria with the custom load function, you can select from a list of data areas in the library to create exclude entries for the objects you do not want replicated.
Once you have created data group object entries, you can tailor them to meet your requirements. You can also use the #DGFE audit or the Check Data Group File Entries (CHKDGFE) command to ensure that the correct file entries exist for the object entries configured for the specified data group.
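As an illustration of the include-all-then-exclude approach described above, the commands might look like the following. The LIB1, OBJ1, OBJTYPE, and PRCTYPE keywords are assumptions inferred from the display prompts described in this chapter, and APPLIB and WRKDTA are hypothetical names; prompt ADDDGOBJE with F4 to confirm the actual keywords.
ADDDGOBJE DGDFN(DGDFN1) LIB1(APPLIB) OBJ1(*ALL) OBJTYPE(*DTAARA) PRCTYPE(*INCLD)
ADDDGOBJE DGDFN(DGDFN1) LIB1(APPLIB) OBJ1(WRKDTA) OBJTYPE(*DTAARA) PRCTYPE(*EXCLD)
The first command includes all data areas in library APPLIB; the second excludes the single data area WRKDTA from replication.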

Loading data group object entries


In this procedure, you specify selection criteria that result in a list of objects with similar characteristics. From the list, you can select multiple objects for which MIMIX will create appropriate data group object entries. You can customize individual entries later, if necessary.
From the management system, do the following to create a custom load of object entries:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 20 (Object entries) next to the data group you want and press Enter.
3. The Work with DG Object Entries display appears. Press F19 (Load).



4. The Load DG Object Entries (LODDGOBJE) display appears. Do the following to specify the selection criteria:
a. Identify the library and objects to be considered. Specify values for the System 1 library and System 1 object prompts.
b. If necessary, specify values for the Object type, Attribute, System 2 library, and System 2 object prompts.
c. At the Process type prompt, specify whether the resulting data group object entries should include or exclude the identified objects.
d. Specify appropriate values for the Cooperate with database and Cooperating object types prompts. To ensure that journaled files, data areas, and data queues will be replicated from the user journal, you must specify the object types.
e. Ensure that the remaining prompts contain the values you want for the data group object entries that will be created. Press Page Down to see all of the prompts.
5. To specify file entry options that will override those set in the data group definition, do the following:
a. Press F9 (All parameters).
b. Press Page Down until you locate the File entry options prompt.
c. Specify the values you need on the elements of the File entry options prompt.
6. To generate the list of objects, press Enter.
Note: If you skipped Step 5, you may need to press Enter multiple times.
7. The Load DG Object Entries display appears with the list of objects that matched your selection criteria. Either type a 1 (Select) next to the objects you want or press F21 (Select all). Then press Enter.
8. If necessary, you can use Adding or changing a data group object entry on page 268 to customize values for any of the data group object entries.
Synchronize the objects identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted, including after the nightly restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an audit runs.

Adding or changing a data group object entry


Note: If you are converting a data group to use user journal replication for data areas or data queues, use this procedure when directed by Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling on page 154.
From the management system, do the following to add a new data group object entry or change an existing entry:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.


2. From the Work with Data Groups display, type a 20 (Object entries) next to the data group you want and press Enter.
3. The Work with DG Object Entries display appears. Do one of the following:
To add a new entry, type a 1 (Add) next to the blank line at the top of the list and press Enter.
To change an existing entry, type a 2 (Change) next to the entry you want and press Enter.

4. The appropriate Data Group Object Entry display appears. When adding an entry, you must specify values for the System 1 library and System 1 object prompts.
Note: When changing an existing object entry to enable replication of data areas or data queues from a user journal (COOPDB(*YES)), make sure that you specify only the objects you want to enable for the System 1 object prompt. Otherwise, all objects in the library specified for System 1 library will be enabled.
5. If necessary, specify a value for the Object type prompt.
6. Press F9 (All parameters).
7. If necessary, specify values for the Attribute, System 2 library, System 2 object, and Object auditing value prompts.

8. At the Process type prompt, specify whether the resulting data group object entries should include (*INCLD) or exclude (*EXCLD) the identified objects.
9. Specify appropriate values for the Cooperate with database and Cooperating object types prompts.
Note: To ensure that journaled files, data areas, or data queues will be replicated from the user journal, you must specify *YES for Cooperate with database and you must specify the appropriate object types for Cooperating object types.
10. Ensure that the remaining prompts contain the values you want for the data group object entries that will be created. Press Page Down to see more prompts.
11. To specify file entry options that will override those set in the data group definition, do the following:
a. If necessary, press Page Down to locate the File entry options prompt.
b. Specify the values you need on the elements of the File entry options prompt.
12. Press Enter.
13. For object entries configured for user journal replication of data areas or data queues, return to Step 7 in procedure Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling on page 154 to complete the additional steps necessary for the conversion.
Synchronize the objects identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted, including after the nightly restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an audit runs.




Creating data group file entries


Data group file entries are required for user journal replication of *FILE objects. When you configure MIMIX, you can create data group file entry information by creating data group file entries individually or by loading entries from another source. Once you have created the file entries, you can tailor them to meet your requirements.
Note: If you plan to use either MIMIX Dynamic Apply or legacy cooperative processing, files must be defined by both data group object entries and data group file entries. It is strongly recommended that you create data group object entries first and then load the data group file entries from the object entry information defined for the files.
You can use the #DGFE audit or the Check Data Group File Entries (CHKDGFE) command to ensure that the correct file entries exist for the object entries configured for the specified data group; a sample invocation follows the topic list below.
For detailed concepts and requirements for supported configurations, see the following topics:
Identifying library-based objects for replication on page 100
Identifying logical and physical files for replication on page 105
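For example, assuming CHKDGFE accepts the data group name on a DGDFN parameter, consistent with the other data group commands shown in this chapter, a minimal check might look like this (DGDFN1 is the sample data group name used in the examples in this chapter):
CHKDGFE DGDFN(DGDFN1)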

Loading file entries


If you need to create data group file entries for many files, you can have MIMIX create the entries for you using the Load Data Group File Entries (LODDGFE) command. The Configuration source (CFGSRC) parameter supports loading from a variety of sources, listed below in the order most commonly used:
*DGOBJE - File entry information is loaded from the data group object entries configured for the data group. If you are configuring to use MIMIX Dynamic Apply or legacy cooperative processing, this value is recommended.
*NONE - File entry information is loaded from a library on either the source or target system, as determined by the values specified for the System 1 library (LIB1), System 2 library (LIB2), and Load from system (LODSYS) parameters.
*JRNDFN - File entry information is loaded from a journal specified in the journal definition associated with the specified data group. File entries will be created for all files currently journaled to the journal specified in the journal definition.
*DGFE - File entry information is loaded from data group file entries defined to another data group. This option supports loading from version 4 and version 5 data groups on the same system. This value is typically used when loading file entries from a data group in a different installation of MIMIX.

When loading from a data group, you can also specify the source from which file entry options are loaded, and override elements if needed. The Default FE options source (FEOPTSRC) parameter determines whether file entry options are loaded from the specified configuration source (*CFGSRC) or from the data group definition (*DGDFT). Any file entry option with a value of *DFT is loaded from the specified source. Any values specified on elements of the File entry options (FEOPT) parameter override the values loaded from the FEOPTSRC source for all data group file entries created by a load request.
Regardless of where the configuration source and file entry option source are located, the Load Data Group File Entries (LODDGFE) command must be used from a system designated as a management system.
Note: The Load Data Group File Entries (LODDGFE) command performs a journal verification check on the file entries using the Verify Journal File Entries (VFYJRNFE) command. In order to accurately determine whether files are being journaled to the target system, you should first perform a save and restore operation to synchronize the files to the target system before loading the data group file entries.

Loading file entries from a data group's object entries


This topic contains examples and a procedure. The examples illustrate the flexibility available for loading file entry options.
Example - Load from the same data group: This example illustrates how to create file entries when converting a data group to use MIMIX Dynamic Apply. In this example, data group DGDFN1 is being converted. The data group definition specifies *SYS1 as its data source (DTASRC). However, in this example, file entries will be loaded from the target system to take advantage of a known synchronization point at which replication will later be started.
LODDGFE DGDFN(DGDFN1) CFGSRC(*DGOBJE) UPDOPT(*ADD) LODSYS(*SYS2) SELECT(*NO)

Since no value was specified for FROMDGDFN, its default value *DGDFN causes the file entries to load from the existing object entries for DGDFN1. The value *SYS2 for LODSYS causes this example configuration to load from its target system. Entries are added (UPDOPT(*ADD)) to the existing configuration. Since all files identified by object entries are wanted, SELECT(*NO) bypasses the selection list. The data group file entries created for DGDFN1 have file entry options which match those found in the object entries because no values were specified for the FEOPTSRC or FEOPT parameters.
Example - Load from another data group with mixed sources for file entry options: The file entries for data group DGDFN1 are created by loading from the object entries for data group DGDFN2, with file entry options loaded from multiple sources.
LODDGFE DGDFN(DGDFN1) CFGSRC(*DGOBJE) FROMDGDFN(DGDFN2) FEOPT(*CFGSRC *DGDFT *CFGSRC *DGDFT)

The data group file entries created for DGDFN1 are loaded from the configuration information in the object entries for DGDFN2, with file entry options coming from multiple sources. Because the command specified the first element (Journal image) and third element (Replication type) of the file entry options (FEOPT) as *CFGSRC, the resulting file entries have the same values for those elements as the data group object entries for DGDFN2. Because the command specified the second element (Omit open/close entries) and the fourth element (Lock member during apply) as *DGDFT, these elements are loaded from the data group definition. The rest of the file entry options are loaded from the configuration source (object entries for DGDFN2).



Procedure: Use this procedure to create data group file entries from the object entries defined to a data group.
Note: The data group must be ended before using this procedure. Configuration changes resulting from loading file entries are not effective until the data group is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears. The name of the data group for which you are creating file entries and the Configuration source value of *DGOBJE are pre-selected. Press Enter.
5. The following prompts appear on the display. Specify appropriate values.
a. From data group definition - To load from entries defined to a different data group, specify the three-part name of the data group.
b. Load from system - Ensure that the value specified is appropriate. For most environments, files should be loaded from the source system of the data group you are loading. (This value should be the same as the value specified for Data source in the data group definition.)
c. Update option - If necessary, specify the value you want.
d. Default FE options source - Specify the source for loading values for default file entry options. Each element in the file entry options is loaded from the specified location unless you explicitly specify a different value for an element in Step 6.
6. Optionally, you can specify a file entry option value to override those loaded from the configuration source. Do the following:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.
7. Press Enter. The LODDGFE Entry Selection List display appears with a list of the files identified by the specified configuration source.
8. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
9. To create the file entries, press Enter. All selected files identified from the configuration source are represented in the resulting file entries. Each generated file entry includes all members of the file.
If necessary, you can use Changing a data group file entry on page 279 to customize values for any of the data group file entries.


Loading file entries from a library


Example: The data group file entries are created by loading from a library named TESTLIB on the source system. This example assumes the configuration is set up so that system 1 in the data group definition is the source for replication.
LODDGFE DGDFN(DGDFN1) CFGSRC(*NONE) LIB1(TESTLIB)

Since the FEOPT parameter was not specified, the resulting data group file entries are created with a value of *DFT for all of the file entry options. Because there is no MIMIX configuration source specified, the value *DFT results in the file entry options specified in the data group definition being used.
Procedure: Use this procedure to create data group file entries from a library on either the source system or the target system.
Note: The data group must be ended before using this procedure. Configuration changes resulting from loading file entries are not effective until the data group is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears with the name of the data group for which you are creating file entries. At the Configuration source prompt, specify *NONE and press Enter.
5. Identify the location of the files to be used for loading. For common configurations, you can accomplish this by specifying a library name at the System 1 library prompt and accepting the default values for the System 2 library, Load from system, and File prompts. If you are using system 2 as the data source for replication or if you want the library name to be different on each system, then you need to modify these values to appropriately reflect your data group defaults.
6. If necessary, specify the values you want for the following:
Update option prompt
Add entry for each member prompt
7. The value of the Default FE options source prompt is ignored when loading from a library. To optionally specify file entry options, do the following:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.
8. Press Enter. The LODDGFE Entry Selection List display appears with a list of the files identified by the specified configuration source.



9. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
10. To create the file entries, press Enter. All selected files identified from the configuration source are represented in the resulting file entries.
If necessary, you can use Changing a data group file entry on page 279 to customize values for any of the data group file entries.

Loading file entries from a journal definition


Example: The data group file entries are created by loading from the journal associated with system 1 of the data group. This example assumes the configuration is set up so that system 1 in the data group definition is the source for replication. The journal definition 1 specified in the data group definition identifies the journal.
LODDGFE DGDFN(DGDFN1) CFGSRC(*JRNDFN) LODSYS(*SYS1)

Since the FEOPT parameter was not specified, the resulting data group file entries are created with a value of *DFT for all of the file entry options. Because there is no MIMIX configuration source specified, the value *DFT results in the file entry options specified in the data group definition being used.
Procedure: Use this procedure to create data group file entries from the journal associated with a journal definition specified for the data group.
Note: The data group must be ended before using this procedure. Configuration changes resulting from loading file entries are not effective until the data group is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears with the name of the data group for which you are creating file entries. At the Configuration source prompt, specify *JRNDFN and press Enter. File and library names on the source and target systems are set to the same names for the load operation.
5. At the Load from system prompt, ensure that the value specified represents the appropriate system. The journal definition associated with the specified system is used for loading. For common configurations, use the value that corresponds to the source system of the data group you are loading. (This value should match the value specified for Data source in the data group definition.)
6. If necessary, specify the value you want for the Update option prompt.
7. The value of the Default FE options source prompt is ignored when loading from a journal definition. To optionally specify file entry options, do the following:
a. Press F10 (Additional parameters).


b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.
8. Press Enter. The LODDGFE Entry Selection List display appears with a list of the files identified by the specified configuration source.
9. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
10. To create the file entries, press Enter. All selected files identified from the configuration source are represented in the resulting file entries. Each generated file entry includes all members of the file.
If necessary, you can use Changing a data group file entry on page 279 to customize values for any of the data group file entries.

Loading file entries from another data group's file entries


Example 1: The data group file entries are created by loading from file entries for another data group, DGDFN2.
LODDGFE DGDFN(DGDFN1) CFGSRC(*DGFE) FROMDGDFN(DGDFN2)

Since the FEOPT parameter was not specified, the resulting data group file entries for DGDFN1 are created with a value of *DFT for all of the file entry options. Because the configuration source is another data group, the value *DFT results in file entry options which match those specified in DGDFN2.
Example 2: The data group file entries are created by loading from file entries for another data group, DGDFN2, in another installation, MXTEST.
LODDGFE DGDFN(DGDFN1) CFGSRC(*DGFE) PRDLIB(MXTEST) FROMDGDFN(DGDFN2)

Since the FEOPT parameter was not specified, the resulting data group file entries for DGDFN1 are created with a value of *DFT for all of the file entry options. Because the configuration source is another data group in another installation, the value *DFT results in file entry options which match those specified in DGDFN2 in installation MXTEST.
Procedure: Use this procedure to create data group file entries from the file entries defined to another data group.
Note: The data group must be ended before using this procedure. Configuration changes resulting from loading file entries are not effective until the data group is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears with the name of the data group for which you are creating file entries. At the Configuration source prompt, specify *DGFE and press Enter.



5. At the Production library prompt, either accept *CURRENT or specify the name of the installation library in which the data group you are copying from is located.
6. At the From data group definition prompts, specify the three-part name of the data group from which you are loading.
7. If necessary, specify the value you want for the Update option prompt.
8. At the Default FE options source prompt, specify the source for loading values for default file entry options. Each element in the file entry options is loaded from the specified location unless you explicitly specify a different value for an element in Step 9.
9. If necessary, do the following to specify a file entry option value to override those loaded from the configuration source:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.
10. Press Enter. The LODDGFE Entry Selection List display appears with a list of the files identified by the specified configuration source.
11. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
12. To create the file entries, press Enter. All selected files identified from the configuration source are represented in the resulting file entries. Each generated file entry includes all members of the file.
If necessary, you can use Changing a data group file entry on page 279 to customize values for any of the data group file entries.

Adding a data group file entry


When you add a single data group file entry to a data group definition, the configuration is dynamically updated and MIMIX automatically starts journaling the file on the source system if the file exists and is not already journaled. Special entries are inserted into the journal data stream to enable the dynamic update. The added data group file entry is recognized by MIMIX as soon as each active process receives the special entries. For each MIMIX process, there may be a delay before the addition is recognized, especially for very active data groups.
Use this procedure to add a data group file entry to a data group. From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.
3. From the Work with DG File Entries display, type a 1 (Add) next to the blank line at the top of the list and press Enter.
4. The Add Data Group File Entry (ADDDGFE) display appears. At the System 1 File and Library prompts, specify the file that you want to replicate.
5. By default, all members in the file are replicated. If you want to replicate only a specific member, specify its name at the Member prompt.
Note: All replicated members of a file must be in the same database apply session. For data groups configured for multiple apply sessions, specify the apply session on the File entry options prompt. See Step 7.
6. Verify that the values of the remaining prompts on the display are what you want. If necessary, change the values as needed.
Notes:
If you change the value of the Dynamically update prompt to *NO, you need to end and restart the data group before the addition is recognized.
If you change the value of the Start journaling of file prompt to *NO and the file is not already journaled, MIMIX will not be able to replicate changes until you start journaling the file.

7. Optionally, you can specify file entry options that will override those defined for the data group. Do the following:
a. Press F10 (Additional parameters), then press Page Down.
b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.
8. Press Enter to create the data group file entry.
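For reference, Steps 4 through 8 might collapse to a single command such as the following. The FILE1 keyword for the System 1 file and library prompts is an assumption patterned after the LIB1 and LIB2 keywords documented for LODDGFE, and APPLIB/ORDERS is a hypothetical file; prompt ADDDGFE with F4 to confirm the actual keywords.
ADDDGFE DGDFN(DGDFN1) FILE1(APPLIB/ORDERS)
Because dynamic update is the default behavior described above, an addition like this takes effect without ending the data group, provided the file exists and journaling can be started.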

Changing a data group file entry


Use this procedure to change an existing data group file entry. From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.
3. Locate the file entry you want on the Work with DG File Entries display. Type a 2 (Change) next to the entry you want and press Enter.
4. The Change Data Group File Entry (CHGDGFE) display appears. Press F10 (Additional parameters) to see all available prompts. You can change any of the values shown on the display.
Notes:
If the file is currently being journaled and transactions are being applied, do not change the values specified for To system 1 file (TOFILE1) and To member (TOMBR1).
All replicated members of a file must be in the same database apply session. For data groups configured for multiple apply sessions, specify the apply session on the File entry options prompt.

5. To accept your changes, press Enter. The replication processes do not recognize the change until the data group has been ended and restarted.




Creating data group IFS entries


Data group IFS entries identify IFS objects for replication. The identified objects are replicated through the system journal unless the data group IFS entries are explicitly configured to allow the objects to be replicated through the user journal. Topic Identifying IFS objects for replication on page 118 provides detailed concepts and identifies requirements for configuration variations for IFS objects. Supported file systems are included, as well as examples of the effect that multiple data group IFS entries have on object auditing values.

Adding or changing a data group IFS entry


Note: If you are converting a data group to use user journal replication for IFS objects, use this procedure when directed by Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling on page 154.
Changes become effective after one of the following occurs:
The data group is ended and restarted
Nightly maintenance routines end and restart MIMIX jobs
A MIMIX audit that uses IFS entries to select objects to audit is started

From the management system, do the following to add a new data group IFS entry or change an existing IFS entry:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 22 (IFS entries) next to the data group you want and press Enter.
3. The Work with Data Group IFS Entries display appears. Do one of the following:
To add a new entry, type a 1 (Add) next to the blank line at the top of the display and press Enter.
To change an existing entry, type a 2 (Change) next to the entry you want and press Enter.

4. The appropriate Data Group IFS Entry display appears. When adding an entry, you must specify a value for the System 1 object prompt.
Notes:
The object name must begin with the '/' character and can be up to 512 characters in total length. The object name can be a simple name, a name that is qualified with the name of the directory in which the object is located, or a generic name that contains one or more characters followed by an asterisk (*), such as /ABC*. Any component of the object name contained between two '/' characters cannot exceed 255 characters in length. All objects in the specified path are selected.
When changing an existing IFS entry to enable replication from a user journal (COOPDB(*YES)), make sure that you specify only the IFS objects you want to enable.

282

5. If necessary, specify values for the System 2 object and Object auditing value prompts.
6. At the Process type prompt, specify whether the resulting data group IFS entries should include (*INCLD) or exclude (*EXCLD) the identified objects.
7. Specify the appropriate value for the Cooperate with database prompt. To ensure that journaled IFS objects can be replicated from the user journal, specify *YES. To replicate from the system journal, specify *NO.
8. If necessary, specify a value for the Object retrieval delay prompt.
9. Ensure that the remaining prompts contain the values you want for the data group IFS entries that will be created. Press Page Down to see more prompts.
10. Press Enter to create the IFS entry.
11. For IFS entries configured for user journal replication, return to Step 7 in procedure Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling on page 154 to complete the additional steps necessary for the conversion.
Synchronize the objects identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted, including after the nightly restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an audit runs.
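As a sketch, an include entry that enables cooperative (user journal) processing for a directory subtree might look like the following. The command name ADDDGIFSE and the OBJ1 and PRCTYPE keywords are assumptions inferred from the display prompts above; COOPDB(*YES) is the cooperate-with-database value referenced in this procedure, and /orders is a hypothetical path. Prompt the add option on the Work with Data Group IFS Entries display to confirm the actual command and keywords.
ADDDGIFSE DGDFN(DGDFN1) OBJ1('/orders/*') PRCTYPE(*INCLD) COOPDB(*YES)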



Loading tracking entries


Tracking entries are associated with the replication of IFS objects, data areas, and data queues with advanced journaling techniques. A tracking entry must exist for each existing IFS object, data area, or data queue identified for replication.
IFS tracking entries identify existing IFS stream files on the source system that have been identified as eligible for replication with advanced journaling by the collection of data group IFS entries defined to a data group. Similarly, object tracking entries identify existing data areas and data queues on the source system that have been identified as eligible for replication using advanced journaling by the collection of data group object entries defined to a data group.
When you initially configure a data group, you must load tracking entries and start journaling for the objects which they identify. Similarly, if you add new or change existing data group IFS entries or object entries, tracking entries for any additional IFS objects, data areas, or data queues must be loaded and journaling must be started on the objects which they identify.

Loading IFS tracking entries


After you have configured the data group IFS entries for advanced journaling, use this procedure to load IFS tracking entries which match existing IFS objects. This procedure uses the Load DG IFS Tracking Entries (LODDGIFSTE) command. Default values for the command load IFS tracking entries from objects on the system identified as the source for replication without duplicating existing IFS tracking entries.
Note: The data group must be ended before using this procedure. Configuration changes resulting from loading tracking entries are not effective until the data group is restarted.
From the management system, do the following:
1. Ensure that the data group is ended. If the data group is active, end it using the procedure Ending a data group in a controlled manner in the Using MIMIX book.
2. On a command line, type LODDGIFSTE and press F4 (Prompt). The Load DG IFS Tracking Entries (LODDGIFSTE) command appears.
3. At the prompts for Data group definition, specify the three-part name of the data group for which you want to load IFS tracking entries.
4. Verify that the value specified for the Load from system prompt is appropriate for your environment. If necessary, specify a different value.
5. Verify that the value specified for the Update option prompt is appropriate for your environment. If necessary, specify a different value.
6. At the Submit to batch prompt, specify the value you want.
7. Press Enter.
8. If you specified *NO for batch processing, the request is processed. Otherwise, you will see additional prompts for Job description and Job name; if necessary, specify different values and press Enter.


9. You should receive message LVI3E2B indicating the number of tracking entries loaded for the data group.
Note: The command used in this procedure does not start journaling on the tracking entries. Start journaling for the tracking entries when indicated by your configuration checklist.
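For example, accepting the command defaults described above (load from the replication source system without duplicating existing tracking entries), an interactive load for the sample data group might be entered as follows; optional parameter keywords are omitted here because they are not confirmed by this procedure:
LODDGIFSTE DGDFN(DGDFN1)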

Loading object tracking entries


After you have configured the data group object entries for advanced journaling, use this procedure to load object tracking entries which match existing data areas and data queues. This procedure uses the Load DG Obj Tracking Entries (LODDGOBJTE) command. Default values for the command load object tracking entries from objects on the system identified as the source for replication without duplicating existing object tracking entries.
Note: The data group must be ended before using this procedure. Configuration changes resulting from loading tracking entries are not effective until the data group is restarted.
From the management system, do the following:
1. Ensure that the data group is ended. If the data group is active, end it using the procedure Ending a data group in a controlled manner in the Using MIMIX book.
2. On a command line, type LODDGOBJTE and press F4 (Prompt). The Load DG Obj Tracking Entries (LODDGOBJTE) command appears.
3. At the prompts for Data group definition, specify the three-part name of the data group for which you want to load object tracking entries.
4. Verify that the value specified for the Load from system prompt is appropriate for your environment. If necessary, specify a different value.
5. Verify that the value specified for the Update option prompt is appropriate for your environment. If necessary, specify a different value.
6. At the Submit to batch prompt, specify the value you want.
7. Press Enter.
8. If you specified *NO for batch processing, the request is processed. Otherwise, you will see additional prompts for Job description and Job name; if necessary, specify different values and press Enter.
9. You should receive message LVI3E2B indicating the number of tracking entries loaded for the data group.
Note: The command used in this procedure does not start journaling on the tracking entries. Start journaling for the tracking entries when indicated by your configuration checklist.
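The object tracking load follows the same pattern as the IFS tracking load; with the defaults described above, the command reduces to the data group name alone:
LODDGOBJTE DGDFN(DGDFN1)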




Creating data group DLO entries


Data group DLO entries identify document library objects (DLOs) for replication by MIMIX system journal replication processes. When you configure MIMIX, you can create data group DLO entries by loading from a generic entry and selecting from documents in the list, or by creating individual DLO entries. Once you have created the DLO entries, you can tailor them to meet your requirements. For detailed concepts and requirements, see Identifying DLOs for replication on page 124.

Loading DLO entries from a folder


If you need to create data group DLO entries for a group of documents within a folder, you can specify information so that MIMIX will create the data group DLO entries for you. (You can customize individual entries later, if necessary.) The user profile you use to perform this task must be enrolled in the system distribution directory on the management system.
Note: The MIMIXOWN user profile is automatically added to the system directory when MIMIX is installed. This entry is required for DLO replication and should not be removed.
From the management system, do the following to create DLO entries by loading from a list:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 21 (DLO entries) next to the data group you want and press Enter.
3. The Work with DG DLO Entries display appears. Press F19 (Load).
4. The Load DG DLO Entries (LODDGDLOE) display appears. Do the following to specify the selection criteria:
a. Identify the folder and documents to be considered. Specify values for the System 1 folder and System 1 document prompts.
b. If necessary, specify values for the Owner, System 2 folder, System 2 object, and Object auditing value prompts.
c. At the Process type prompt, specify whether the resulting data group DLO entries should include or exclude the identified documents.
d. If necessary, specify a value for the Object retrieval delay prompt.
e. Press Enter.
5. Additional prompts appear to optionally use batch processing and to load entries without selecting them from a list. Press Enter.
6. The Load DG DLO Entries display appears with the list of documents that matched your selection criteria. Either type a 1 (Select) next to the documents you want or press F21 (Select all). Then press Enter.
7. If necessary, you can use Adding or changing a data group DLO entry on page 288 to customize values for any of the data group DLO entries.
Synchronize the DLOs identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted, including after the nightly restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an audit runs.
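For illustration, loading all documents in a hypothetical folder ACCTG might be expressed as follows. The FLR1, DOC1, and PRCTYPE keywords are assumptions inferred from the System 1 folder, System 1 document, and Process type prompts; prompt LODDGDLOE with F4 to confirm the actual keywords.
LODDGDLOE DGDFN(DGDFN1) FLR1(ACCTG) DOC1(*ALL) PRCTYPE(*INCLD)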

Adding or changing a data group DLO entry


The data group must be ended and restarted before any changes can become effective.
From the management system, do the following to add or change a DLO entry:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 21 (DLO entries) next to the data group you want and press Enter.
3. The Work with DG DLO Entries display appears. Do one of the following:
To add a new entry, type a 1 (Add) next to the blank line at the top of the list and press Enter.
To change an existing entry, type a 2 (Change) next to the entry you want and press Enter. Then skip to Step 5.

4. If you are adding a new DLO entry, the Add Data Group DLO Entry display appears. Identify the folder and documents to be considered. Specify values for the System 1 folder and System 1 document prompts.
5. Do the following:
a. If necessary, specify values for the Owner, System 2 folder, System 2 object, and Object auditing value prompts.
b. At the Process type prompt, specify whether the resulting data group DLO entries should include or exclude the identified documents.
c. If necessary, specify a value for the Object retrieval delay prompt.
6. Press Enter.
Synchronize the DLOs identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted, including after the nightly restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an audit runs.


Creating data group data area entries


This procedure creates data group data area entries that identify data areas to be replicated by the data area poller process.
Note: The data area poller method is not the preferred way to replicate data areas. The preferred method of replicating data areas is with user journal replication processes using advanced journaling. The next best method is identifying them with data group object entries for system journal replication processes.
For detailed concepts and requirements for supported configurations, see the following topics:
Identifying library-based objects for replication on page 100
Identifying data areas and data queues for replication on page 112

You can load all data group data area entries from a library or you can add individual data area entries. Once the data group data area entries are created, you can tailor them to meet your requirements by adding, changing, or deleting entries. You must define data group data area entries from the management system. The data area entries can be created from libraries on either system. If the system manager is configured and running, all created and changed data group data area entries are sent to the network systems automatically.

Loading data area entries for a library


Before any addition or change is recognized, you need to end and restart the data group.
From the management system, do the following to load data area entries for use with the data area poller:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 19 (Data area entries) next to the data group you want and press Enter.
3. The Work with DG Data Area Entries display appears. Press F19 (Load).
4. The Load DG Data Area Entries (LODDGDAE) display appears. The values of the System 1 library and System 2 library prompts indicate the name of the library on the respective systems. Specify a name for the System 1 library prompt and verify that the value shown for the System 2 library prompt is what you want.
5. Ensure that the value of the Load from system prompt indicates the system from which you want to load data areas.
6. Verify that the remaining prompts on the display contain the values you want. If necessary, change the values.
7. To create the data group data area entries, press Enter. If you submitted the job for batch processing, MIMIX sends a message indicating that a data areas load job has been submitted. A completion message is sent when the load has finished.
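As a sketch, loading from a hypothetical library APPLIB might look like the following; the LIB1 keyword is assumed to correspond to the System 1 library prompt, consistent with the LIB1 and LIB2 keywords documented for LODDGFE, so prompt LODDGDAE with F4 to confirm:
LODDGDAE DGDFN(DGDFN1) LIB1(APPLIB)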

Adding or changing a data group data area entry


Before any addition or change is recognized, you need to end and restart the data group.
From the management system, do the following to add a new entry or change an existing data area entry for use with the data area poller:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 19 (Data area entries) next to the data group you want and press Enter.
3. From the Work with DG Data Area Entries display, do one of the following:
To add a new data area entry, type a 1 (Add) at the blank line at the top of the list and press Enter. The Add Data Group Data Area Entry display appears.
To change an existing data area entry, type a 2 (Change) next to the data group data area entry you want and press Enter. The Change Data Group Data Area Entry display appears.

4. Specify the values you want at the prompts for System 1 data area and Library and System 2 data area and Library.
5. Press Enter to create the data area entry or accept the change.


Additional options: working with DG entries


The procedures for performing common functions, such as copying, removing, and displaying, are very similar for all types of data group entries used by MIMIX. Each generic procedure in this topic indicates the type of data group entry for which it can be used.

Copying a data group entry


Use this procedure from the management system to copy a data group entry from one data group definition to another data group definition. The data group definition to which you are copying must exist.
To copy a data group entry to another data group definition, do the following:
1. From the Work with DG Definitions display, type the option you want next to the data group from which you are copying and press Enter. Any of these options will allow an entry to be copied:
Option 17 (File entries)
Option 19 (Data area entries)
Option 20 (Object entries)
Option 21 (DLO entries)
Option 22 (IFS entries)
2. The "Work with" display for the entry you selected appears. Type a 3 (Copy) next to the entry you want and press Enter.
3. The Copy display for the entry appears. Specify a name for the To definition prompt.
4. Additional prompts appear on the display that are specific to the type of entry. The values of these prompts define the data to be replicated by the definition to which you are copying. Ensure that the prompts identify the necessary information, as shown in Table 32.
Table 32. Values to specify for each type of data group entry
For file entries, provide: To File 1, To Member, To File 2
For data area entries, provide: To system 1 data area, To system 2 data area
For object entries, provide: System 1 library, System 1 object, Object type, Attribute
For DLO entries, provide: System 1 folder, System 1 document, Owner
For IFS entries, provide: To system 1 object

5. The value *NO for the Replace definition prompt prevents you from replacing an existing entry in the definition to which you are copying. If you want to replace an existing entry, specify *YES.
6. To copy the entry, press Enter.
7. For file entries, end and restart the data group being copied.

Removing a data group entry


Use this procedure from the management system to remove a data group entry from a data group definition. You may want to remove an entry when you no longer need to replicate the information that the entry identifies.
Note: For all data group entries except file entries, the change is not recognized until after the send, receive, and apply processes for the associated data group are ended and restarted.
Data group file entries support dynamic removal if you prompt the RMVDGFE command and specify Dynamically update (*YES). If you specify Dynamically update (*YES), you do not need to end the processes for the data group; the change is recognized as soon as each active process receives the update. If a file is on hold and you want to delete the data group file entry, it is best to use *YES. This forces all currently held entries to be deleted, causes all current entries to be ignored, and prevents additional entries from accumulating.
If you accept the default of Dynamically update (*NO), the change is not recognized until after the send, receive, and apply processes for the associated data group are ended and restarted. When you specify Dynamically update (*NO), the remove function does not clean up any records in the error/hold log. If an entry is held when you delete it, its information remains in the error/hold log. Additional transactions for the file or member can accumulate in the error/hold log or will be applied to the file.
To remove an entry, do the following:
1. From the Work with DG Definitions display, type the option for the entry you want next to the data group and press Enter. Any of these options will allow an entry to be removed:
Option 17 (File entries)
Option 19 (Data area entries)
Option 20 (Object entries)
Option 21 (DLO entries)
Option 22 (IFS entries)
2. The "Work with" display for the entry you selected appears. Type a 4 (Remove) next to the entry you want and press Enter.


3. For data group file entries, a display with additional prompts appears. Specify the values you want and press Enter.
4. A confirmation display appears with a list of entries to be deleted. To delete the entries, press Enter.
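When the data group processes do not need to be ended, a dynamic removal can also be requested directly from a command line. The following sketch assumes a hypothetical data group named MYDGRP SYS1 SYS2 and that the parameter keywords match the prompt text (verify by prompting RMVDGFE with F4); the remaining prompts identify the file entry to remove:

RMVDGFE DGDFN(MYDGRP SYS1 SYS2) DYNUPD(*YES)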

Displaying a data group entry


Use this procedure to display a data group entry for a data group definition. To display a data group entry, do the following:
1. From the Work with DG Definitions display, type the option for the entry you want next to the data group and press Enter. Any of these options will allow an entry to be displayed:
- Option 17 (File entries)
- Option 19 (Data area entries)
- Option 20 (Object entries)
- Option 21 (DLO entries)
- Option 22 (IFS entries)
2. The "Work with" display for the entry you selected appears. Type a 5 (Display) next to the entry you want and press Enter.
3. The appropriate data group entry display appears. Page Down to see all of the values.

Printing a data group entry


Use this procedure to create a spooled file, which you can print, that identifies a system definition, transfer definition, journal definition, or data group definition. Not all types of entries support the print function. To print a data group entry, do the following:
1. From the Work with DG Definitions display, type the option for the entry you want next to the data group and press Enter. Any of these options will allow an entry to be printed:
- Option 17 (File entries)
- Option 19 (Data area entries)
- Option 22 (IFS entries)
2. The "Work with" display for the entry you selected appears. Type a 6 (Print) next to the entry you want and press Enter.
3. A spooled file is created with a name of MXDG***E, where *** is the type of entry. You can print the spooled file according to your standard print procedures.


Chapter 13

Additional supporting tasks for configuration


The tasks in this chapter provide supplemental configuration tasks. Always use the configuration checklists to guide you through the steps of standard configuration scenarios.
- Accessing the Configuration Menu on page 295 describes how to access the menu of configuration options from a 5250 emulator.
- Starting the system and journal managers on page 296 provides procedures for starting these jobs. System and journal manager jobs must be running before replication can be started.
- Setting data group auditing values manually on page 297 describes when to manually set the object auditing level for objects defined to MIMIX and provides a procedure for doing so.
- Checking file entry configuration manually on page 303 provides a procedure using the CHKDGFE command to check the data group file entries defined to a data group. Note: The preferred method of checking is to use MIMIX AutoGuard to automatically schedule the #DGFE audit, which calls the CHKDGFE command and can automatically correct detected problems. For additional information, see Interpreting results for configuration data - #DGFE audit on page 580.
- Changes to startup programs on page 305 describes changes that you may need to make to your configuration to support remote journaling.
- Checking DDM password validation level in use on page 306 describes how to check whether the DDM communications infrastructure used by MIMIX Remote Journal support requires a password. This topic also describes options for ensuring that systems in a MIMIX configuration have the same password and the implications of these options.
- Starting the DDM TCP/IP server on page 308 describes how to start this server, which is required in configurations that use remote journaling.
- Identifying data groups that use an RJ link on page 310 describes how to determine which data groups use a particular RJ link.
- Using file identifiers (FIDs) for IFS objects on page 312 describes the use of FID parameters on commands for IFS tracking entries. When IFS objects are configured for replication through the user journal, commands that support IFS tracking entries can specify a unique FID for the object on each system. This topic describes the processing resulting from combinations of values specified for the object and FID prompts.
- Configuring restart times for MIMIX jobs on page 313 describes how to change the time at which MIMIX jobs automatically restart. MIMIX jobs restart daily to ensure that the MIMIX environment remains operational.


Accessing the Configuration Menu


The MIMIX Configuration Menu provides access to the options you need for configuring MIMIX. To access the MIMIX Configuration Menu, do the following:
1. Access the MIMIX Basic Main Menu. See Accessing the MIMIX Main Menu on page 91.
2. From the MIMIX Basic Main Menu, select option 11 (Configuration menu) and press Enter.


Starting the system and journal managers


If the system managers are running, they will automatically send configuration information to the network system as you complete configuration tasks. This procedure starts all the system managers, journal managers, and, if the system is participating in a cluster, cluster services. The system managers, journal managers, and cluster services must be active to start replication.
To start all of the system managers, journal managers, and cluster services (for a cluster environment) during configuration, do the following:
1. Access the MIMIX Basic Main Menu. See Accessing the MIMIX Main Menu on page 91.
2. From the MIMIX Basic Main Menu, press the F21 key (Assistance level) to access the MIMIX Intermediate Main Menu.
3. Select option 2 (Work with Systems) and press Enter.
4. The Work with Systems display appears with a list of the system definitions. Type a 9 (Start) next to each of the system definitions you want and press Enter. This will start all managers on all of these systems in the MIMIX environment.
5. The Start MIMIX Managers (STRMMXMGR) display appears. Do the following:
a. Verify that *ALL appears as the value for the Manager prompt.
b. Press Enter to complete this request.
6. If you selected more than one system definition in Step 4, the Start MIMIX Managers (STRMMXMGR) display will be shown for each system definition that you selected. Repeat Step 5 for each system definition that you selected.
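The start request can also be entered directly from a command line. This is a sketch only; the system definition name NEWYORK is hypothetical and the parameter keywords are assumed from the prompt text (verify by prompting STRMMXMGR with F4):

STRMMXMGR SYSDFN(NEWYORK) MGR(*ALL)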


Setting data group auditing values manually


Default behavior for MIMIX is to change the auditing value of IFS, DLO, and library-based objects configured for system journal replication as needed when starting data groups with the Start Data Group (STRDG) command. To manually set the system auditing level of replicated objects, or to force a change to a lower configured level, you can use the Set Data Group Auditing (SETDGAUD) command. The SETDGAUD command allows you to set the object auditing level for all existing objects that are defined to MIMIX by data group object entries, data group DLO entries, and data group IFS entries. The SETDGAUD command can be used for data groups configured for replicating object information (type *OBJ or *ALL).
When to set object auditing values manually - If you anticipate a delay between configuring data group entries and starting the data group, you should use the SETDGAUD command before synchronizing data between systems. Doing so will ensure that replicated objects will be properly audited and that any transactions for the objects that occur between configuration and starting the data group will be replicated. You can also use the SETDGAUD command to reset the object auditing level for all replicated objects if a user has changed the auditing level of one or more objects to a value other than what is specified in the data group entries.
Processing options - MIMIX checks for existing objects identified by data group entries for the specified data group. The object auditing level of an existing object is set to the auditing value specified in the data group entry that most specifically matches the object. Default behavior is that MIMIX only changes an object's auditing value if the configured value is higher than the object's existing value. However, you can optionally force a change to a configured value that is lower than the existing value through the command's Force audit value (FORCE) parameter. The default value *NO for the FORCE parameter prevents MIMIX from reducing the auditing level of an object. For example, if the SETDGAUD command processes a data group entry with a configured object auditing value of *CHANGE and finds an object identified by that entry with an existing auditing value of *ALL, MIMIX does not change the value. If you specify *YES for the FORCE parameter, MIMIX will change the auditing value even if it is lower than the existing value.

For IFS objects, it is particularly important that you understand the ramifications of the value specified for the FORCE parameter. For more information, see Examples of changing an IFS object's auditing value on page 298.
Procedure - To set the object auditing value for a data group, do the following on each system defined to the data group:
1. Type the command SETDGAUD and press F4 (Prompt).
2. The Set Data Group Auditing (SETDGAUD) display appears. Specify the name of the data group you want.


3. At the Object type prompt, specify the type of objects for which you want to set auditing values.
4. If you want to allow MIMIX to force a change to a configured value that is lower than the object's existing value, specify *YES for the Force audit value prompt. Note: This may affect the operation of your replicated applications. Lakeview recommends that you force auditing value changes only when you have specified *ALLIFS for the Object type.
5. Press Enter.
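For example, the following sketch forces the configured auditing values for the IFS objects of a hypothetical data group MYDGRP SYS1 SYS2, in line with the recommendation above to force changes only for *ALLIFS. The parameter keywords are assumed from the prompt text (verify by prompting SETDGAUD with F4):

SETDGAUD DGDFN(MYDGRP SYS1 SYS2) OBJTYPE(*ALLIFS) FORCE(*YES)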

Examples of changing an IFS object's auditing value


The following examples show the effect of the value of the FORCE parameter when manually changing the object auditing values of IFS objects configured for system journal replication. The auditing values resulting from the SETDGAUD command can be confusing when your environment has multiple data group IFS entries, each with different auditing levels, and more than one entry references objects sharing common parent directories. The following examples illustrate how these conditions affect the results of setting object auditing for IFS objects.
Data group entries are processed in order from most generic to most specific. IFS entries are processed using the unicode character set. The first (more generic) entry found that matches the object is used until a more specific match is found. When MIMIX processes a data group IFS entry and changes the auditing level of objects which match the entry, all of the directories in the object's directory path are checked and, if necessary, changed to the new auditing value. In the case of an IFS entry with a generic name, all descendants of the IFS object may also have their auditing value changed.
Example 1: This scenario shows a simple implementation where data group IFS entries have been modified to have a configured value of *CHANGE from a previously configured value of *ALL. Table 33 identifies a set of data group IFS entries and their configured auditing values. The entries are listed in the order in which they are processed by the SETDGAUD command.
Table 33. Example 1 configuration of data group IFS entries

  Order processed  Specified object  Object auditing value  Process type
  1                /DIR1/*           OBJAUD(*CHANGE)        PRCTYPE(*EXCLD)
  2                /DIR1/DIR2/*      OBJAUD(*CHANGE)        PRCTYPE(*INCLD)
  3                /DIR1/STMF        OBJAUD(*CHANGE)        PRCTYPE(*INCLD)

Simply ending and restarting the data group will not cause these configuration changes to be effective. Because the change is to a lower auditing level, the change must be forced with the SETDGAUD command. Similarly, running the SETDGAUD command with FORCE(*NO) does not change the auditing values for this scenario.


Table 34 shows the intermediate and final results as each data group IFS entry is processed by the force request.
Table 34. Intermediate audit values which occur during FORCE(*YES) processing for example 1.

                             Auditing values while processing SETDGAUD FORCE(*YES)
  Existing         Existing  Changed by  Changed by  Changed by  Final results
  objects          value     1st entry   2nd entry   3rd entry   of FORCE(*YES)
  /DIR1            *ALL      Note 1      *CHANGE     Note 2      *CHANGE
  /DIR1/STMF       *ALL      Note 1                  *CHANGE     *CHANGE
  /DIR1/STMF2      *ALL      Note 1                              *ALL
  /DIR1/DIR2       *ALL      Note 1      *CHANGE                 *CHANGE
  /DIR1/DIR2/STMF  *ALL      Note 1      *CHANGE                 *CHANGE

Notes:
1. Because the first data group IFS entry excludes objects from replication, object auditing processing does not apply.
2. This object's auditing value is evaluated when the third data group IFS entry is processed, but the entry does not cause the value to change. The existing value is the same as the configured value of the third entry at the time it is processed.

Example 2: Table 35 identifies a set of data group IFS entries and their configured auditing values. The entries are listed in the order in which they are processed by the SETDGAUD command. In this scenario there are multiple configured values.
Table 35. Example 2 configuration of data group IFS entries

  Order processed  Specified object  Object auditing value  Process type
  1                /DIR1/*           OBJAUD(*CHANGE)        PRCTYPE(*INCLD)
  2                /DIR1/DIR2/*      OBJAUD(*NONE)          PRCTYPE(*INCLD)
  3                /DIR1/STMF        OBJAUD(*ALL)           PRCTYPE(*INCLD)

For this scenario, running the SETDGAUD command with FORCE(*NO) does not change the auditing values on any existing IFS objects because the configured values from the data group IFS entries are the same or lower than the existing values. Running the command with FORCE(*YES) does change the existing objects' values. Table 36 shows the intermediate values as each entry is processed by the force request and the final results of the change. Data group IFS entry #3 in Table 35 prevents directory /DIR1 from having an auditing value of *CHANGE or *NONE because it is the last entry processed and it is the most specific entry.
Table 36. Intermediate audit values which occur during FORCE(*YES) processing for example 2.

                             Auditing values while processing SETDGAUD FORCE(*YES)
  Existing         Existing  Changed by  Changed by  Changed by  Final results
  objects          value     1st entry   2nd entry   3rd entry   of FORCE(*YES)
  /DIR1            *ALL      *CHANGE     *NONE       *ALL        *ALL
  /DIR1/STMF       *ALL      *CHANGE                 *ALL        *ALL
  /DIR1/STMF2      *ALL      *CHANGE                             *CHANGE
  /DIR1/DIR2       *ALL      *CHANGE     *NONE                   *NONE
  /DIR1/DIR2/STMF  *ALL      *CHANGE     *NONE                   *NONE

Example 3: This scenario illustrates why you may need to force the configured values to take effect after changing the existing data group IFS entries from *ALL to lower values. Table 37 identifies a set of data group IFS entries and their configured auditing values. The entries are listed in the order in which they are processed by the SETDGAUD command.
Table 37. Example 3 configuration of data group IFS entries

  Order processed  Specified object  Object auditing value  Process type
  1                /DIR1/*           OBJAUD(*CHANGE)        PRCTYPE(*INCLD)
  2                /DIR1/DIR2/*      OBJAUD(*NONE)          PRCTYPE(*INCLD)
  3                /DIR1/STMF        OBJAUD(*NONE)          PRCTYPE(*INCLD)

For this scenario, running the SETDGAUD command with FORCE(*NO) does not change the auditing values on any existing IFS objects because the configured values from the data group IFS entries are lower than the existing values. In this scenario, SETDGAUD FORCE(*YES) must be run to have the configured auditing values take effect. Table 38 shows the intermediate values as each entry is processed by the force request and the final results of the change.
Table 38. Intermediate audit values which occur during FORCE(*YES) processing for example 3.

                             Auditing values while processing SETDGAUD FORCE(*YES)
  Existing         Existing  Changed by  Changed by  Changed by  Final results
  objects          value     1st entry   2nd entry   3rd entry   of FORCE(*YES)
  /DIR1            *ALL      *CHANGE     *NONE                   *NONE
  /DIR1/STMF       *ALL      *CHANGE                 *NONE       *NONE
  /DIR1/STMF2      *ALL      *CHANGE                             *CHANGE
  /DIR1/DIR2       *ALL      *CHANGE     *NONE                   *NONE
  /DIR1/DIR2/STMF  *ALL      *CHANGE     *NONE                   *NONE

Example 4: This example begins with the same set of data group IFS entries used in example 3 (Table 37) and uses the results of the forced change in example 3 as the auditing values for the existing objects in Table 39. Table 39 shows how running the SETDGAUD command with FORCE(*NO) causes changes to auditing values. This scenario is quite possible as a result of a normal STRDG request. Complex data group IFS entries and multiple configured values cause these potentially undesirable results. Note: Any addition or change to the data group IFS entries can cause these results to occur.
Table 39. Example 4: comparison of objects' actual values

                             Auditing value
  Existing         Existing  After SETDGAUD  After SETDGAUD
  objects          values    FORCE(*NO)      FORCE(*YES)
  /DIR1            *NONE     *CHANGE         *NONE
  /DIR1/STMF       *NONE     *CHANGE         *NONE
  /DIR1/STMF2      *CHANGE   *CHANGE         *CHANGE
  /DIR1/DIR2       *NONE     *CHANGE         *NONE
  /DIR1/DIR2/STMF  *NONE     *CHANGE         *NONE

There is no way to maintain the existing values in Table 39 without ensuring that a forced change occurs every time SETDGAUD is run, which may be undesirable. In this example, the next time data groups are started, the objects' auditing values will be set to those shown in Table 39 for FORCE(*NO). Any addition or change to the data group IFS entries can potentially cause similar results the next time the data group is started. To avoid this situation, we recommend that you configure a consistent auditing value of *CHANGE across data group IFS entries which identify objects with common parent directories.


Example 5: This scenario illustrates the results of the SETDGAUD command when the object's auditing value is determined by the user profile which accesses the object (value *USRPRF). Table 40 shows the configured data group IFS entry.
Table 40. Example 5 configuration of data group IFS entries

  Order processed  Specified object  Object auditing value  Process type
  1                /DIR1/STMF        OBJAUD(*NONE)          PRCTYPE(*INCLD)

Table 41 compares the results of running the SETDGAUD command with FORCE(*NO) and FORCE(*YES). Running the command with FORCE(*NO) does not change the value. The value *USRPRF is not in the range of valid values for MIMIX; therefore, an object with an auditing value of *USRPRF is not considered for change. Running the command with FORCE(*YES) does force a change because the existing value and the configured value are not equal.
Table 41. Example 5: comparison of objects' actual values

                             Auditing value
  Existing         Existing  After SETDGAUD  After SETDGAUD
  objects          values    FORCE(*NO)      FORCE(*YES)
  /DIR1/STMF       *USRPRF   *USRPRF         *NONE


Checking file entry configuration manually


The Check DG File Entries (CHKDGFE) command provides a means to detect whether the correct data group file entries exist with respect to the data group object entries configured for a specified data group in your MIMIX configuration. When file entries and object entries are not properly matched, your replication results can be affected.
Note: The preferred method of checking is to use MIMIX AutoGuard to automatically schedule the #DGFE audit, which calls the CHKDGFE command and can automatically correct detected problems. For additional information, see Interpreting results for configuration data - #DGFE audit on page 580.
To check your file entry configuration manually, do the following:
1. On a command line, type CHKDGFE and press Enter. The Check Data Group File Entries (CHKDGFE) command appears.
2. At the Data group definition prompts, select *ALL to check all data groups or specify the three-part name of the data group.
3. At the Options prompt, you can specify that the command be run with special options. The default, *NONE, uses no special options. If you do not want an error to be reported when a file specified in a data group file entry does not exist, specify *NOFILECHK.
4. At the Output prompt, specify where the output from the command should be sent: to print, to an outfile, or to both. See Step 6.
5. At the User data prompt, you can assign your own 10-character name to the spooled file or choose not to assign a name to the spooled file. The default, *CMD, uses the CHKDGFE command name to identify the spooled file.
6. At the File to receive output prompts, you can direct the output of the command to the name and library of a specific database file. If the database file does not exist, it will be created in the specified library with the name MXCDGFE.
7. At the Output member options prompts, you can direct the output of the command to the name of a specific database file member. You can also specify how to handle new records if the member already exists. Do the following:
a. At the Member to receive output prompt, accept the default *FIRST to direct the output to the first member in the file. If it does not exist, a new member is created with the name of the file specified in Step 6. Otherwise, specify a member name.
b. At the Replace or add records prompt, accept the default *REPLACE if you want to clear the existing records in the file member before adding new records. To add new records to the end of existing records in the file member, specify *ADD.
8. At the Submit to batch prompt, do one of the following:
- If you do not want to submit the job for batch processing, specify *NO and press Enter to check data group file entries.


- To submit the job for batch processing, accept *YES. Press Enter and continue with the next step.

9. At the Job description prompts, specify the name and library of the job description used to submit the batch request. Accept MXAUDIT to submit the request using Lakeview's default job description, MXAUDIT.
10. At the Job name prompt, accept *CMD to use the command name to identify the job, or specify a simple name.
11. To start the data group file entry check, press Enter.
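For example, the following sketch checks a hypothetical data group MYDGRP SYS1 SYS2 and directs the results to an outfile in library MYLIB. The parameter keywords are assumed from the prompt text (verify by prompting CHKDGFE with F4):

CHKDGFE DGDFN(MYDGRP SYS1 SYS2) OPTION(*NONE) OUTPUT(*OUTFILE) OUTFILE(MYLIB/MXCDGFE)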


Changes to startup programs


If you use startup programs, ensure that you include the following operations when you configure for remote journaling:
- If you use TCP/IP as the communications protocol, you need to start TCP/IP, including the DDM server, before starting replication.
- If you use OptiConnect as the communications protocol, the QSOC subsystem must be active.
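As a sketch, the corresponding statements in a CL startup program might look like the following. Add the message monitoring and sequencing appropriate for your environment:

STRTCP                   /* Start TCP/IP; autostarted servers include DDM if so configured */
STRTCPSVR SERVER(*DDM)   /* Ensure the DDM server is active */
STRSBS SBSD(QSOC)        /* OptiConnect configurations only */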


Checking DDM password validation level in use


MIMIX Remote Journal support uses the DDM communications infrastructure. This infrastructure can be configured to require a password to be provided when a server connection is made. The MIMIXOWN user profile, which establishes the remote journal connection, ships with a preset password so that it is consistent on all systems. If you have implemented DDM password validation on any systems where MIMIX will be used, you should verify the DDM password validation level in use. If the MIMIXOWN password is not the same on both systems, you may need to change the MIMIXOWN user profile or the DDM security level to allow MIMIX Remote Journal support to function properly. These changes have security implications of which you should be aware.
To check the DDM password validation level in use, do the following on both systems:
1. From a command line, type CHGDDMTCPA and press F4 (Prompt).
2. Check the value of the Password required field.
- If the value is *NO or *VLDONLY, no further action is required. Press F12 (Cancel).
- If the field contains any other value, you must take further action to enable MIMIX RJ support to function in your environment. Press F12, then continue with the next step.

3. You have two options for changing your environment to enable MIMIX RJ support to function. Each option has security implications. You must decide which option is best for your environment. The options are:
- Option 1. Enable MIMIXOWN user profile for DDM environment on page 306. MIMIX must be installed and transfer definitions must exist before you can make the necessary changes. For new installations, this should be automatically configured for you.
- Option 2. Allow user profiles without passwords on page 307. You can use this option before or after MIMIX is installed. However, this option should be performed before configuring MIMIX RJ support.

Option 1. Enable MIMIXOWN user profile for DDM environment


This option changes the MIMIXOWN user profile to have a password and adds server authentication entries to recognize the MIMIXOWN user profile. Do the following from both systems:
1. Access the Work with Transfer Definitions (WRKTFRDFN) display. Then do the following:
a. Type a 5 (Display) next to each transfer definition that will be used with MIMIX RJ support and press Enter.
b. Page down to locate the value for Relational database (RDB parameter) and record the value indicated.


c. If you selected multiple transfer definitions, press Enter to advance to the next selection and record its RDB value. Ensure that you record the values for all transfer definitions you selected. Note: If the RDB value was generated by MIMIX, it will be in the form of the characters MX followed by the System 1 definition, System 2 definition, and the name of the transfer definition, with up to 18 characters.
2. On the source system, change the MIMIXOWN user profile to have a password and to prevent signing on with the profile. To do this, enter the following command:
CHGUSRPRF USRPRF(MIMIXOWN) PASSWORD(user-defined-password) INLMNU(*SIGNOFF)
Note: The password is case sensitive and must be the same on all systems in the MIMIX network. If the password does not match on all systems, some MIMIX functions will fail with security error message LVE0127.
3. You need a server authentication entry for the MIMIXOWN user profile for each RDB entry you recorded in Step 1. To add a server authentication entry, type the following command, using the password you specified in Step 2 and the RDB value from Step 1. Then press Enter.
ADDSVRAUTE USRPRF(MIMIXOWN) SERVER(recorded-RDB-value) PASSWORD(user-defined-password)
4. Repeat Step 2 and Step 3 on the target system.
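For example, if the recorded RDB value is the hypothetical generated name MXSYS1SYS2TFRDFN1, the sequence of commands on each system would be:

CHGUSRPRF USRPRF(MIMIXOWN) PASSWORD(your-password) INLMNU(*SIGNOFF)
ADDSVRAUTE USRPRF(MIMIXOWN) SERVER(MXSYS1SYS2TFRDFN1) PASSWORD(your-password)

Remember that the password is case sensitive and must match on all systems in the MIMIX network.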

Option 2. Allow user profiles without passwords


This option changes DDM TCP attributes to allow user profiles without passwords to function in environments that use DDM password validation. Do the following:
1. From a command line on the source system, type CHGDDMTCPA PWDRQD(*VLDONLY) and press Enter.
2. From a command line on the target system, type CHGDDMTCPA PWDRQD(*VLDONLY) and press Enter.


Starting the DDM TCP/IP server


Use this procedure if you need to start the DDM TCP/IP server in an environment configured for MIMIX RJ support. From the system on which you want to start the TCP server, do the following:
1. Ensure that the DDM TCP/IP attributes allow the DDM server to be automatically started when the TCP/IP server is started (STRTCP). Do the following:
a. Type the command CHGDDMTCPA and press F4 (Prompt).
b. Check the value of the Autostart server prompt. If the value is *YES, it is set appropriately. Otherwise, change the value to *YES and press Enter.
2. To prevent install problems due to locks on the library name, ensure that the MIMIX product library is not in your user library list.
3. To start the DDM server, type the command STRTCPSVR SERVER(*DDM) and press Enter.
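Both steps can also be performed without prompting. The AUTOSTART keyword shown here is assumed to correspond to the Autostart server prompt (verify by prompting CHGDDMTCPA with F4):

CHGDDMTCPA AUTOSTART(*YES)   /* Autostart the DDM server when TCP/IP starts */
STRTCPSVR SERVER(*DDM)       /* Start the DDM server now */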



Identifying data groups that use an RJ link


Use this procedure to determine which data groups use a remote journal link before you end a remote journal link or remove a remote journaling environment.
1. Enter the command WRKRJLNK and press Enter.
2. Make a note of the name indicated in the Source Jrn Def column for the RJ link you want.
3. From the command line, type WRKDGDFN and press Enter.
4. For all data groups listed on the Work with DG Definitions display, check the Journal Definition column for the name of the source journal definition you recorded in Step 2.
- If you do not find the name from Step 2, the RJ link is not used by any data group. The RJ link can be safely ended or can have its remote journaling environment removed without affecting existing data groups.
- If you find the name from Step 2 associated with any data groups, those data groups may be adversely affected if you end the RJ link. A request to remove the remote journaling environment removes configuration elements and system objects that need to be created again before the data group can be used. Continue with the next step.

5. Press F10 (View RJ links). Consider the following and contact your MIMIX administrator before taking action that will end the RJ link or remove the remote journaling environment.
- When *NO appears in the Use RJ Link column, the data group will not be affected by a request to end the RJ link or to remove the remote journaling environment. Note: If you allow applications other than MIMIX to use the RJ link, they will be affected if you end the RJ link or remove the remote journaling environment.
- When *YES appears in the Use RJ Link column, the data group may be affected by a request to end the RJ link. If you use the procedure for ending a remote journal link independently in the Using MIMIX book, ensure that any data groups that use the RJ link are inactive before ending the RJ link.



Using file identifiers (FIDs) for IFS objects


Commands used by advanced journaling for IFS objects use file identifiers (FIDs) to uniquely identify the correct IFS tracking entries to process. The System 1 file identifier and System 2 file identifier prompts ensure that IFS tracking entries are accurately identified during processing. These prompts can be used alone or in combination with the System 1 object prompt. These prompts enable the following combinations:
- Processing by object path: A value is specified for the System 1 object prompt and no value is specified for the System 1 file identifier or System 2 file identifier prompts. When processing by object path, a tracking entry is required for all commands with the exception of the SYNCIFS command. If no tracking entry exists, the command cannot continue processing. If a tracking entry exists, a query is performed using the specified object path name.
- Processing by object path and FIDs: A value is specified for the System 1 object prompt and a value is specified for either or both of the System 1 file identifier or System 2 file identifier prompts. When processing by object path and FIDs, a tracking entry is required for all commands. If no tracking entry exists, the command cannot continue processing. If a tracking entry exists, a query is performed using the specified FID values. If the specified object path name does not match the object path name in the tracking entry, the command cannot continue processing.
- Processing by FIDs: A value is specified for either or both of the System 1 file identifier or System 2 file identifier prompts and, with the exception of the SYNCIFS command, no value is specified for the System 1 object prompt. In the case of SYNCIFS, the default value *ALL is specified for the System 1 object prompt. When processing by FIDs, a tracking entry is required for all commands. If no tracking entry exists, the command cannot continue processing. If a tracking entry exists, a query is performed using the specified FID values.


Configuring restart times for MIMIX jobs


Certain MIMIX jobs are restarted, or recycled, on a regular basis in order to maintain the MIMIX environment. The ability to configure this activity can ease conflicts with your scheduled workload by changing when the MIMIX jobs restart to a more convenient time for your environment.
The default operation of MIMIX is to restart MIMIX jobs at midnight (12:00 a.m.). However, you can change the restart time by setting a different value for the Job restart time (RSTARTTIME) parameter on system definitions and data group definitions. The time is based on a 24-hour clock. The values specified in the system definitions and data group definitions are retrieved at the time the MIMIX jobs are started. Changes to the specified values have no effect on jobs that are currently running. Changes are effective the next time the affected MIMIX jobs are started.
For a data group definition, you can also specify either *SYSDFN1 or *SYSDFN2 for the Job restart time (RSTARTTIME) parameter. Respectively, these values use the restart time specified in the system definition identified as System 1 or System 2 for the data group. Both system and data group definition commands support the special value *NONE, which prevents the MIMIX jobs from automatically restarting. Be sure to read Considerations for using *NONE on page 315 before using this value.

Configurable job restart time operation


To make effective use of the configurable job restart time, you may need to set the job restart time in as few as one or as many as all of these locations:
- One or more data group definitions
- The system definition for the management system
- The system definitions for one or more network systems

MIMIX system-level jobs affected by the Job restart time value specified in a system definition are: system manager (SYSMGR), system manager receive (SYSMGRRCV), and journal manager (JRNMGR).
MIMIX data group-level jobs affected by the Job restart time value specified in a data group definition are: object send (OBJSND), object receive (OBJRCV), database send (DBSND), database receive (DBRCV), database reader (DBRDR), object retrieve (OBJRTV), container send (CNRSND), container receive (CNRRCV), status send (STSSND), status receive (STSRCV), and object apply (OBJAPY).
Also, the role of the system on which you change the restart time affects the results. For system definitions, the value you specify for the restart time and the role of the system (management or network) determines which MIMIX system-level jobs will restart and when. For data group definitions, the value you specify for the restart time and the role of the system (source or target) determines which data group-level jobs will restart and when. Time zone differences between systems also influence the results you obtain. MIMIX system-level jobs restart when they detect that the time specified in the system definition has passed.


The system manager jobs are a pair of jobs that run between a network system and the management system. The management and network systems both have journal manager jobs, but the jobs operate independently. The job restart time specified in the management system's system definition determines when to restart the journal manager on the management system. The job restart time specified in the network system's system definition determines when to restart the journal manager job on the network system, when to restart the system manager jobs on both systems, and also affects when cleanup jobs on both systems are submitted. Table 42 shows how the role of the system affects the results of the specified job restart time.
Table 42. Effect of the system's role on changing the job restart time in a system definition.

System definition role: Management system
- System managers, Cleanup jobs, Collector services: The specified value is not used to determine the restart time. Restart is determined by the value specified for the network system.
- Journal managers: Time specified - The job on the management system restarts at the time specified. *NONE - The job on the management system is not restarted.

System definition role: Network system
- System managers: Time specified - The jobs on both systems restart when the time on the management system reaches the time specified. *NONE - The jobs are not restarted on either system.
- Cleanup jobs, Collector services: Time specified - The jobs are submitted on both systems by the system manager jobs after they restart. *NONE - The jobs are submitted on both systems when midnight occurs on the management system.
- Journal managers: Time specified - The job on the network system restarts at the time specified. *NONE - The job on the network system is not restarted.

For MIMIX data group-level jobs, a delay of 2 to 35 minutes from the specified time is built into the job restart processing. The actual delay is unique to each job. By distributing the jobs within this range, the load on systems and communications is more evenly distributed, reducing bottlenecks caused by many jobs simultaneously attempting to end, start, and establish communications. MIMIX determines the actual restart time for the object apply (OBJAPY) jobs based on the timestamp of the system on which the jobs run. For all other affected jobs, MIMIX determines the actual start time for object or database jobs based on the timestamp of the system on which the OBJSND or the DBSND job runs. Table 43 shows how these key jobs affect when other data group-level jobs restart.


Table 43. Systems on which data group-level jobs run. In each row, the job marked with (*) determines the restart time for all jobs in the row.

  Source system jobs                 Target system jobs
  Object send (OBJSND) (*)           Object receive (OBJRCV)
  Object retrieve (OBJRTV)           Container receive (CNRRCV)
  Container send (CNRSND)            Status send (STSSND)
  Status receive (STSRCV)

  Database send (DBSND) (1) (*)      Database receive (DBRCV) (1)
                                     Database reader (DBRDR) (1)

                                     Object apply (OBJAPY) (*)

1. When MIMIX is configured for remote journaling, the DBSND and DBRCV jobs are replaced by the DBRDR job. The DBRDR job restarts when the specified time occurs on the target system.

For more information about MIMIX jobs see Replication job and supporting job names on page 47.

Considerations for using *NONE


Attention: The value *NONE for the Job restart time parameter is not recommended. If you specify *NONE in a system definition or a data group definition, you need to develop and implement alternative procedures to ensure that the affected MIMIX jobs are periodically restarted. Restarting the jobs ensures that long-running MIMIX jobs are not ended by the system due to resource constraints and refreshes the job log to avoid overflow and abnormal job termination.
If you specify the value *NONE for the Job restart time in a data group definition, no MIMIX data group-level jobs are automatically restarted.
If you specify the value *NONE for the Job restart time in a system definition, the cleanup jobs started by the system manager will continue to be submitted based on when midnight occurs on the management system. All other affected MIMIX system-level jobs will not be restarted. Table 42 shows the effect of the value *NONE.

Examples: job restart time


Restart time examples: system definitions on page 316 and Restart time examples: system and data group definition combinations on page 316 illustrate the effect of using the Job restart time (RSTARTTIME) parameter. These examples assume that the system configured as the management system for MIMIX operations is also the target system for replication during normal operation. For each example, consider the effect it would have on nightly backups that complete between midnight and 1 a.m. on the target system.


Restart time examples: system definitions


These examples show the effect of changing the job restart time only in system definitions.
Example 1: MIMIX is running Monday noon when you change the job restart time to 013000 in system definition NEWYORK, which is the management system. The network system's system definition uses the default value 000000 (midnight). MIMIX remains up the rest of the day. Because the current jobs use values that existed prior to your change, all the MIMIX system-level jobs on NEWYORK automatically restart at midnight. As a result of your change, the journal manager on NEWYORK restarts at 1:30 a.m. Tuesday and thereafter. The network system's journal manager restarts when midnight occurs on that system. The system manager jobs on both systems restart and submit the cleanup jobs when the management system reaches midnight.
Example 2: It is Friday evening and all MIMIX processes on the system CHICAGO are ended while you perform planned maintenance. During that time you change the job restart time to 040000 in system definition CHICAGO, which is a network system. You start MIMIX processing again at 11:07 p.m., so your changes are in effect. The MIMIX system-level jobs that restart Saturday and thereafter at 4 a.m. Chicago time are:
- The journal manager job on CHICAGO
- The system manager jobs on the management system and on CHICAGO
- The cleanup jobs, which are submitted on the management system and on CHICAGO
Because the management system's system definition uses the default value of midnight, the journal manager on the management system restarts when midnight occurs on that system.
Example 3: Friday afternoon you change system definition HONGKONG to have a job restart time value of *NONE. HONGKONG is the management system. LONDON is the associated network system, and its system definition uses the default setting 000000 (midnight). You end and restart the MIMIX jobs to make the change effective. The journal manager on HONGKONG is no longer restarted. At midnight (00:00 a.m.) Saturday and thereafter, HONGKONG time, the system manager jobs on both systems restart and submit cleanup jobs on both systems. In your runbook you document the new procedures to manually restart the journal manager on HONGKONG.
Example 4: Wednesday evening you change the system definitions for LONDON and HONGKONG to both have a job restart time of *NONE. HONGKONG is the management system. You restart the MIMIX jobs to make the change effective. At midnight HONGKONG time, only the cleanup jobs on both systems are submitted. In your runbook you document the new procedures to manually restart the journal managers and system managers.

Restart time examples: system and data group definition combinations


These examples show the effect of changing the job restart time in various combinations of system definitions and data group definitions.


Example 5: You have a data group that operates between SYSTEMA and SYSTEMB, which are both in the same time zone. Both the system definitions and the data group definition use the default value 000000 (midnight) for the job restart time. For both systems, the MIMIX system-level jobs restart at midnight. The data group jobs on both systems restart between 2 and 35 minutes after midnight.
Example 6: At 10:30 Tuesday morning you change data group definition APP1 to have a job restart time value of 013500. The data group operates between SYSTEMA and SYSTEMB, which are both in the same time zone. Both system definitions use the default restart time of midnight. MIMIX jobs remain up and running. At midnight, the system-level jobs on both systems restart using the values from the preexisting configuration; the data group-level jobs restart on both systems between 0:02 and 0:35 a.m. On Wednesday and thereafter, APP1 data group-level jobs restart between 1:37 and 2:10 a.m., while the MIMIX system-level jobs and jobs for other data groups restart at midnight.
Example 7: You have a data group that operates between SYSTEMA and SYSTEMB, which are both in the same time zone and are defined as the values of System 1 and System 2, respectively. The data group definition specifies a job restart time value of *SYSDFN2. The system definition for SYSTEMA specifies the default job restart time of 000000 (midnight). SYSTEMB is the management system, and its system definition specifies the value *NONE for the job restart time. The journal manager on SYSTEMB does not restart, and the data group jobs do not restart on either system because of the *NONE value specified for SYSTEMB. The journal manager on SYSTEMA restarts at midnight. System manager jobs on both systems restart and submit cleanup jobs at midnight as a result of the value in the network system and the fact that the systems are in the same time zone.
Example 8A: You have a data group defined between CHICAGO and NEWYORK (System 1 and System 2, respectively), and the data group's job restart time is set to 030000 (3 a.m.). CHICAGO is the source system as well as a network system; its system definition uses the default job restart time of midnight. NEWYORK is the target system as well as the management system; its system definition uses a job restart time of 020000 (2 a.m.). There is a one hour time difference between the two systems; said another way, NEWYORK is an hour ahead of CHICAGO. Figure 17 shows the effect of the time zone difference on this configuration. The journal manager on CHICAGO restarts at midnight Chicago time, and the journal manager on NEWYORK restarts at 2 a.m. New York time. The system manager jobs on both systems restart when the management system (NEWYORK) reaches the restart time specified for the network system (CHICAGO). The cleanup jobs are submitted by the system manager jobs when they restart. With the exception of the object apply jobs (OBJAPY), the data group jobs restart during the same 2 to 35 minute timeframe based on Chicago time (between 2 and 35 minutes after 3 a.m. in Chicago; after 4 a.m. in New York). Because the OBJAPY jobs are based on the time on the target system, which is an hour ahead of the source system time used for the other jobs, the OBJAPY jobs restart between 3:02 and 3:35 a.m. New York time.
Figure 17. Results of Example 8A. This is configured as a standard MIMIX environment.

Example 8B: This scenario is the same as example 8A with one exception. In this scenario, the MIMIX environment is configured to use MIMIX Remote Journal support. Figure 18 shows that the database reader (DBRDR) job restarts based on the time on the target system. Because the database send (DBSND) and database receive (DBRCV) jobs are not used in a remote journaling environment, those jobs do not restart.
Figure 18. Results of example 8B. This environment is configured to use MIMIX Remote Journal support.


Configuring the restart time in a system definition


To configure the restart time for MIMIX system-level jobs in an existing environment, do the following:
1. On the Work with System Definitions display, type a 2 (Change) next to the system definition you want and press F4 (Prompt).
2. Press F10 (Additional parameters), then scroll down to the bottom of the display.
3. At the Job restart time prompt, specify the value you want. You need to consider the role of the system definition (management or network system) and the effect of any time zone differences between the management system and the network system.
Notes:
- The time is based on a 24-hour clock and must be specified in HHMMSS format. Although seconds are ignored, the complete time format must be specified. Valid values range from 000000 to 235959. The value 000000 is the default and is equivalent to midnight (00:00:00 a.m.).
- If you specify *NONE, cleanup jobs are submitted on both the network and management systems based on when midnight occurs on the management system. System manager and journal manager jobs will not restart. The value *NONE is not recommended. For more information, see Considerations for using *NONE on page 315.

4. To accept the change, press Enter. The change has no effect on jobs that are currently running. The value for the Job restart time is retrieved from the system definition at the time the jobs are started. The change is effective the next time the jobs are started.
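For example, assuming the change is made with a Change System Definition (CHGSYSDFN) command that accepts the RSTARTTIME parameter (a hypothetical invocation; verify the command name and keywords on your system), a 1:30 a.m. restart for system definition NEWYORK might be set with:

CHGSYSDFN SYSDFN(NEWYORK) RSTARTTIME(013000)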

Configuring the restart time in a data group definition


To configure the restart time for MIMIX data group-level jobs in an existing environment, do the following:
1. On the Work with Data Group Definitions display, type a 2 (Change) next to the data group definition you want and press F4 (Prompt).
2. Press F10 (Additional parameters), then scroll down to the bottom of the display.
3. At the Job restart time prompt, specify the value you want. You need to consider the effect of any time zone differences between the systems defined to the data group.
Notes:
- The time is based on a 24-hour clock and must be specified in HHMMSS format. Although seconds are ignored, the complete time format must be specified. Valid values range from 000000 to 235959. The value 000000 is the default and is equivalent to midnight (00:00:00 a.m.).
- The value *NONE is not recommended. For more information, see Considerations for using *NONE on page 315.


4. To accept the change, press Enter. Changes have no effect on jobs that are currently running. The value for the Job restart time is retrieved at the time the jobs are started. The change is effective the next time the jobs are started.
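Similarly, assuming a Change Data Group Definition (CHGDGDFN) command with the RSTARTTIME parameter (a hypothetical invocation; verify the command name and keywords on your system), a 1:35 a.m. restart time for a data group named APP1 might be set with:

CHGDGDFN DGDFN(APP1 SYSTEMA SYSTEMB) RSTARTTIME(013500)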


Chapter 14

Starting, ending, and verifying journaling
This chapter describes procedures for starting and ending journaling. Journaling must be active on all files, IFS objects, data areas, and data queues that you want to replicate through a user journal. Normally, journaling is started during configuration. However, there are times when you may need to start or end journaling on items identified to a data group. The topics in this chapter include:
- What objects need to be journaled on page 323 describes, for supported configuration scenarios, what types of objects must have journaling started before replication can occur. It also describes when journaling is started implicitly, as well as the authority requirements necessary for user profiles that create the objects to be journaled when they are created.
- MIMIX commands for starting journaling on page 325 identifies the MIMIX commands available for starting journaling and describes the checking performed by the commands.
- Journaling for physical files on page 326 includes procedures for displaying journaling status, starting journaling, ending journaling, and verifying journaling for physical files identified by data group file entries.
- Journaling for IFS objects on page 330 includes procedures for displaying journaling status, starting journaling, ending journaling, and verifying journaling for IFS objects replicated cooperatively (advanced journaling). IFS tracking entries are used in these procedures.
- Journaling for data areas and data queues on page 334 includes procedures for displaying journaling status, starting journaling, ending journaling, and verifying journaling for data area and data queue objects replicated cooperatively (advanced journaling). Object tracking entries are used in these procedures.


What objects need to be journaled


A data group can be configured in a variety of ways that involve a user journal in the replication of files, data areas, data queues, and IFS objects. Journaling must be started for any object to be replicated through a user journal or to be replicated by cooperative processing between a user journal and the system journal.
Requirements for system journal replication - System journal replication processes use a special journal, the security audit (QAUDJRN) journal. The IBM i system logs events in this journal to create a security audit trail. When data group object entries, IFS entries, and DLO entries are configured, each entry specifies an object auditing value that determines the type of activity on the objects to be logged in the journal. Object auditing is automatically set for all objects defined to a data group when the data group is first started, or any time a change is made to the object entries, IFS entries, or DLO entries for the data group. Because security auditing logs the object changes in the system journal, no special action is needed.
Requirements for user journal replication - User journal replication processes require that journaling be started for the objects identified by data group file entries. Both MIMIX Dynamic Apply and legacy cooperative processing use data group file entries and therefore require journaling to be started. Configurations that include advanced journaling for replication of data areas, data queues, or IFS objects also require that journaling be started on the associated object tracking entries and IFS tracking entries, respectively. Starting journaling ensures that changes to the objects are recorded in the user journal and are therefore available for MIMIX to replicate.
During initial configuration, the configuration checklists direct you when to start journaling for objects identified by data group file entries, IFS tracking entries, and object tracking entries. The MIMIX commands STRJRNFE, STRJRNIFSE, and STRJRNOBJE simplify the process of starting journaling. For more information about these commands, see MIMIX commands for starting journaling on page 325. Although MIMIX commands for starting journaling are preferred, you can also use IBM commands (STRJRNPF, STRJRN, STRJRNOBJ) to start journaling if you have the appropriate authority for starting journaling.
Requirements for implicit starting of journaling - Journaling can be automatically started for newly created database files, data areas, data queues, or IFS objects when certain requirements are met. The user ID creating the new objects must have the required authority to start journaling, and the following requirements must be met:
- IFS objects - A new IFS object is automatically journaled if the directory in which it is created is journaled as a result of a request that permitted journaling inheritance for new objects. Typically, if MIMIX started journaling on the parent directory, inheritance is permitted. If you manually start journaling on the parent directory using the IBM command STRJRN, specify INHERIT(*YES). This will allow IFS objects created within the journaled directory to inherit the journal options and journal state of the parent directory. A sketch of such a command follows the data group entry example below.
- Database files created by SQL statements - A new file created by a CREATE TABLE statement is automatically journaled if the library in which it is created contains a journal named QSQJRN.
- New *FILE, *DTAARA, and *DTAQ objects - A new object is automatically journaled if it is created in a library that contains a QDFTJRN data area and the data area has enabled automatic journaling for the object type. The Journal at creation (JRNATCRT) parameter in the data group definition enables MIMIX to create the QDFTJRN data area and enable automatic journaling for an object type. When a data group is started, MIMIX may automatically create the QDFTJRN data area. If the data group configuration meets the requirements for MIMIX Dynamic Apply, MIMIX evaluates all data group entries for each object type to determine whether to create the QDFTJRN data area. MIMIX uses the data group entry with the most specific match to the object type and library that also specifies *ALL for its System 1 object (OBJ1) and Attribute (OBJATR) prompts. Note: MIMIX prevents the QDFTJRN data area from being created in the following libraries: QSYS*, QRECOVERY, QRCY*, QUSR*, QSPL*, QRPL*, QRCL*, QGPL, QTEMP, and SYSIB*. Automatic journaling of new *DTAARA or *DTAQ objects is only supported on IBM i V5R4 and higher.
For example, if MIMIX finds only the following data group object entries for library MYLIB, it would use the first entry when determining whether to create the QDFTJRN data area because it is the most specific entry that also meets the OBJ1(*ALL) and OBJATR(*ALL) requirements. The second entry is not considered in the determination because its OBJ1 and OBJATR values do not meet these requirements.
LIB1(MYLIB) OBJ1(*ALL) OBJTYPE(*FILE) OBJATR(*ALL) COOPDB(*YES) PRCTYPE(*INCLD)
LIB1(MYLIB) OBJ1(MYAPP) OBJTYPE(*FILE) OBJATR(DSPF) COOPDB(*YES) PRCTYPE(*INCLD)
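For reference, journaling with inheritance can be started manually on a parent directory with the IBM STRJRN command. The directory, library, and journal names in this sketch are hypothetical:

STRJRN OBJ(('/MYDIR')) JRN('/QSYS.LIB/MYLIB.LIB/MYJRN.JRN') INHERIT(*YES)

New IFS objects created under /MYDIR then inherit the journal options and journal state of the directory.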

Authority requirements for starting journaling


Normal MIMIX processes run under the MIMIXOWN user profile, which ships with *ALLOBJ special authority. Therefore, it is not necessary for other users to account for journaling authority requirements when using MIMIX commands (STRJRNFE, STRJRNIFSE, STRJRNOBJE) to start journaling. When the MIMIX journal managers are started, or when the Build Journaling Environment (BLDJRNENV) command is used, MIMIX checks the public authority (*PUBLIC) for the journal. If necessary, MIMIX changes public authority so the user ID in use has the appropriate authority to start journaling. Authority requirements must be met to enable the automatic journaling of newly created objects and if you use IBM commands to start journaling instead of MIMIX commands. If you create database files, data areas, or data queues for which you expect automatic journaling at creation, the user ID creating these objects must have the required authority to start journaling.


If you use the IBM commands (STRJRNPF, STRJRN, STRJRNOBJ) to start journaling, the user ID that performs the start journaling request must meet the appropriate authority requirements.

For journaling to be successfully started on an object, one of the following authority requirements must be satisfied:
- The user profile of the user attempting to start journaling for an object has *ALLOBJ special authority.
- The user profile of the user attempting to start journaling for an object has explicit *ALL object authority for the journal to which the object is to be journaled.
- Public authority (*PUBLIC) has *OBJALTER, *OBJMGT, and *OBJOPR object authorities for the journal to which the object is to be journaled.
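For instance, the third requirement could be satisfied by granting the needed authorities to the journal with the IBM Grant Object Authority command. This is a sketch; the journal library and name are illustrative:

GRTOBJAUT OBJ(MYLIB/MYJRN) OBJTYPE(*JRN) USER(*PUBLIC) AUT(*OBJALTER *OBJMGT *OBJOPR)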

MIMIX commands for starting journaling


Before you use any of the MIMIX commands for starting journaling, the data group file entries, IFS tracking entries, or object tracking entries associated with the command's object class must be loaded. The MIMIX commands for starting journaling are:

Start Journal Entry (STRJRNFE) - This command starts journaling for files identified by data group file entries.

Start Journaling IFS Entries (STRJRNIFSE) - This command starts journaling of IFS objects configured for advanced journaling. Data group IFS entries must be configured and IFS tracking entries must be loaded (LODDGIFSTE command) before running the STRJRNIFSE command to start journaling.

Start Journaling Obj Entries (STRJRNOBJE) - This command starts journaling of data area and data queue objects configured for advanced journaling. Data group object entries must be configured and object tracking entries must be loaded (LODDGOBJTE command) before running the STRJRNOBJE command to start journaling.

If you attempt to start journaling for a data group file entry, IFS tracking entry, or object tracking entry and the files or objects associated with the entry are already journaled, MIMIX checks that the physical file, IFS object, data area, or data queue is journaled to the journal associated with the data group. If the file or object is journaled to the correct journal, the journaling status of the data group file entry, IFS tracking entry, or object tracking entry is changed to *YES. If the file or object is not journaled to the correct journal or the attempt to start journaling fails, an error occurs and the journaling status is changed to *NO.



Journaling for physical files


Data group file entries identify physical files to be replicated. When data group file entries are added to a configuration, they may have an initial status of *ACTIVE. However, the physical files which they identify may not be journaled. In order for replication to occur, journaling must be started for the files on the source system. This topic includes procedures to display journaling status, and to start, end, or verify journaling for physical files.

Displaying journaling status for physical files


Use this procedure to display journaling status for physical files identified by data group file entries. Do the following:
1. From the MIMIX Intermediate Main Menu, type 1 and press Enter to access the Work with Data Groups display.
2. On the Work with Data Groups display, type 17 (File entries) next to the data group you want and press Enter.
3. The Work with DG File Entries display appears. The initial view shows the current and requested status of the data group file entry. Press F10 (Journaled view). At the right side of the display, the Journaled System 1 and System 2 columns indicate whether the physical file associated with the file entry is journaled on each system.
Note: Logical files will have a status of *NA. Data group file entries exist for logical files only in data groups configured for MIMIX Dynamic Apply.

Starting journaling for physical files


Use this procedure to start journaling for physical files identified by data group file entries. In order for replication to occur, journaling must be started for the file on the source system. This procedure invokes the Start Journal Entry (STRJRNFE) command. The command can also be entered from a command line. Do the following:
1. Access the journaled view of the Work with DG File Entries display as described in Displaying journaling status for physical files on page 326.
2. From the Work with DG File Entries display, type a 9 (Start journaling) next to the file entries you want. Then do one of the following:
To start journaling using the command defaults, press Enter.
To modify command defaults, press F4 (Prompt) then continue with the next step.

3. The Start Journal Entry (STRJRNFE) display appears. The Data group definition prompts and the System 1 file prompts identify your selection. Accept these values or specify the values you want.


4. Specify the value you want for the Start journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and starts or prevents journaling from starting as required.
5. If you want to use batch processing, specify *YES for the Submit to batch prompt.
6. To start journaling for the physical files associated with the selected data group file entries, press Enter. The system returns a message to confirm the operation was successful.

Ending journaling for physical files


Use this procedure to end journaling for a physical file associated with a data group file entry. Once journaling for a file is ended, any changes to that file are not captured and are not replicated. You may need to end journaling if a file no longer needs to be replicated, to prepare for upgrading MIMIX software, or to correct an error. This procedure invokes the End Journaling File Entry (ENDJRNFE) command. The command can also be entered from a command line. To end journaling, do the following:
1. Access the journaled view of the Work with DG File Entries display as described in Displaying journaling status for physical files on page 326.
2. From the Work with DG File Entries display, type a 10 (End journaling) next to the file entry you want and do one of the following:
Note: MIMIX cannot end journaling on a file that is journaled to the wrong journal, for example, a file journaled to a journal that does not match the journal definition for that data group. If you want to end journaling outside of MIMIX, use the ENDJRNPF command.
To end journaling using command defaults, press Enter. Journaling is ended.
To modify additional prompts for the command, press F4 (Prompt) and continue with the next step.

3. The End Journal File Entry (ENDJRNFE) display appears. If you want to end journaling for all files in the library, specify *ALL at the System 1 file prompt.
4. Specify the value you want for the End journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and ends or prevents journaling from ending as required.
5. If you want to use batch processing, specify *YES for the Submit to batch prompt.
6. To end journaling, press Enter.


Verifying journaling for physical files


Use this procedure to verify that a physical file defined by a data group file entry is journaled correctly. This procedure invokes the Verify Journaling File Entry (VFYJRNFE) command to determine whether the file is journaled and whether it is journaled to the journal defined in the journal definition. When these conditions are met, the journal status on the Work with DG File Entries display is set to *YES. The command can also be entered from a command line. To verify journaling for a physical file, do the following:
1. Access the journaled view of the Work with DG File Entries display as described in Displaying journaling status for physical files on page 326.
2. From the Work with DG File Entries display, type a 11 (Verify journaling) next to the file entry you want and do one of the following:
To verify journaling using command defaults, press Enter.
To modify additional prompts for the command, press F4 (Prompt) and continue with the next step.

3. The Verify Journaling File Entry (VFYJRNFE) display appears. The Data group definition prompts and the System 1 file prompts identify your selection. Accept these values or specify the values you want.
4. Specify the value you want for the Verify journaling on system prompt. When *DGDFN is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) when determining where to verify journaling.
5. If you want to use batch processing, specify *YES for the Submit to batch prompt.
6. Press Enter.



Journaling for IFS objects


IFS tracking entries are loaded for a data group after the data group IFS entries have been configured for replication through the user journal (advanced journaling). However, loading IFS tracking entries does not automatically start journaling on the IFS objects they identify. In order for replication to occur, journaling must be started on the source system for the IFS objects identified by IFS tracking entries. This topic includes procedures to display journaling status, and to start, end, or verify journaling for IFS objects identified for replication through the user journal. You should also be aware of the information in Long IFS path names on page 119.

Displaying journaling status for IFS objects


Use this procedure to display journaling status for IFS objects identified by IFS tracking entries. Do the following:
1. From the MIMIX Intermediate Main Menu, type 1 and press Enter to access the Work with Data Groups display.
2. On the Work with Data Groups display, type 50 (IFS trk entries) next to the data group you want and press Enter.
3. The Work with DG IFS Trk. Entries display appears. The initial view shows the object type and status at the right of the display. Press F10 (Journaled view). At the right side of the display, the Journaled System 1 and System 2 columns indicate whether the IFS object identified by the tracking entry is journaled on each system.

Starting journaling for IFS objects


Use this procedure to start journaling for IFS objects identified by IFS tracking entries. This procedure invokes the Start Journaling IFS Entries (STRJRNIFSE) command. The command can also be entered from a command line. To start journaling for IFS objects, do the following:
1. If you have not already done so, load the IFS tracking entries for the data group. Use the procedure in Loading IFS tracking entries on page 284.
2. Access the journaled view of the Work with DG IFS Trk. Entries display as described in Displaying journaling status for IFS objects on page 330.
3. From the Work with DG IFS Trk. Entries display, type a 9 (Start journaling) next to the IFS tracking entries you want. Then do one of the following:
To start journaling using the command defaults, press Enter.
To modify the command defaults, press F4 (Prompt) and continue with the next step.

4. The Start Journaling IFS Entries (STRJRNIFSE) display appears. The Data group definition and IFS objects prompts identify the IFS object associated with the tracking entry you selected. You cannot change the values shown for the IFS objects prompts (see note 1 below).
5. Specify the value you want for the Start journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and starts or prevents journaling from starting as required.
6. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.
7. The System 1 file identifier and System 2 file identifier prompts identify the file identifier (FID) of the IFS object on each system. You cannot change the values (see note 2 below).
8. To start journaling on the IFS objects specified, press Enter.

Ending journaling for IFS objects


Use this procedure to end journaling for IFS objects identified by IFS tracking entries. This procedure invokes the End Journaling IFS Entries (ENDJRNIFSE) command. The command can also be entered from a command line. To end journaling for IFS objects, do the following:
1. Access the journaled view of the Work with DG IFS Trk. Entries display as described in Displaying journaling status for IFS objects on page 330.
2. From the Work with DG IFS Trk. Entries display, type a 10 (End journaling) next to the IFS tracking entries you want. Then do one of the following:
To end journaling using the command defaults, press Enter.
To modify the command defaults, press F4 (Prompt) and continue with the next step.

3. The End Journaling IFS Entries (ENDJRNIFSE) display appears. The Data group definition and IFS objects prompts identify the IFS object associated with the tracking entry you selected. You cannot change the values shown for the IFS objects prompts (see note 1 below).
4. Specify the value you want for the End journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and ends or prevents journaling from ending as required.
Notes:
1. When the command is invoked from a command line, you can change values specified for the IFS objects prompts. Also, you can specify as many as 300 object selectors by using the + for more values prompt.
2. When the command is invoked from a command line, use F10 to see the FID prompts. Then you can optionally specify the unique FID for the IFS object on either system. The FID values can be used alone or in combination with the IFS object path name.


5. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.
6. The System 1 file identifier and System 2 file identifier prompts identify the file identifier (FID) of the IFS object on each system. You cannot change the values shown (see note 2 above).
7. To end journaling on the IFS objects specified, press Enter.

Verifying journaling for IFS objects


Use this procedure to verify that an IFS object identified by an IFS tracking entry is journaled correctly. This procedure invokes the Verify Journaling IFS Entries (VFYJRNIFSE) command to determine whether the IFS object is journaled, whether it is journaled to the journal defined in the data group definition, and whether it is journaled with the attributes defined in the data group definition. The command can also be entered from a command line. To verify journaling for IFS objects, do the following:
1. Access the journaled view of the Work with DG IFS Trk. Entries display as described in Displaying journaling status for IFS objects on page 330.
2. From the Work with DG IFS Trk. Entries display, type a 11 (Verify journaling) next to the IFS tracking entries you want. Then do one of the following:
To verify journaling using the command defaults, press Enter.
To modify the command defaults, press F4 (Prompt) and continue with the next step.

3. The Verify Journaling IFS Entries (VFYJRNIFSE) display appears. The Data group definition and IFS objects prompts identify the IFS object associated with the tracking entry you selected. You cannot change the values shown for the IFS objects prompts (see note 1 above).
4. Specify the value you want for the Verify journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and verifies journaling on the appropriate systems as required.
5. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.
6. The System 1 file identifier and System 2 file identifier prompts identify the file identifier (FID) of the IFS object on each system. You cannot change the values shown (see note 2 above).
7. To verify journaling on the IFS objects specified, press Enter. For related information, see Using file identifiers (FIDs) for IFS objects on page 312.



Journaling for data areas and data queues


Object tracking entries are loaded for a data group after the data group object entries have been configured for replication through the user journal (advanced journaling). However, loading object tracking entries does not automatically start journaling on the objects they identify. In order for replication to occur, journaling must be started on the source system for the objects identified by object tracking entries. This topic includes procedures to display journaling status, and to start, end, or verify journaling for data areas and data queues identified for replication through the user journal.

Displaying journaling status for data areas and data queues


Use this procedure to display journaling status for data areas and data queues identified by object tracking entries. Do the following:
1. From the MIMIX Intermediate Main Menu, type 1 and press Enter to access the Work with Data Groups display.
2. On the Work with Data Groups display, type 52 (Obj trk entries) next to the data group you want and press Enter.
3. The Work with DG Obj. Trk. Entries display appears. The initial view shows the object type and status at the right of the display. Press F10 (Journaled view). At the right side of the display, the Journaled System 1 and System 2 columns indicate whether the object identified by the tracking entry is journaled on each system.

Starting journaling for data areas and data queues


Use this procedure to start journaling for data areas and data queues identified by object tracking entries. This procedure invokes the Start Journaling Obj Entries (STRJRNOBJE) command. The command can also be entered from a command line. To start journaling for data areas and data queues, do the following:
1. If you have not already done so, load the object tracking entries for the data group. Use the procedure in Loading object tracking entries on page 285.
2. Access the journaled view of the Work with DG Obj. Trk. Entries display as described in Displaying journaling status for data areas and data queues on page 334.
3. From the Work with DG Obj. Trk. Entries display, type a 9 (Start journaling) next to the object tracking entries you want. Then do one of the following:
To start journaling using the command defaults, press Enter.
To modify the command defaults, press F4 (Prompt) and continue with the next step.

4. The Start Journaling Obj Entries (STRJRNOBJE) display appears. The Data group definition and Objects prompts identify the object associated with the tracking entry you selected. Although you can change the values shown for these prompts, it is not recommended unless the command was invoked from a command line.
5. Specify the value you want for the Start journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and starts or prevents journaling from starting as required.
6. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.
7. To start journaling on the objects specified, press Enter.

Ending journaling for data areas and data queues


Use this procedure to end journaling for data areas and data queues identified by object tracking entries. This procedure invokes the End Journaling Obj Entries (ENDJRNOBJE) command. The command can also be entered from a command line. To end journaling for data areas and data queues, do the following:
1. Access the journaled view of the Work with DG Obj. Trk. Entries display as described in Displaying journaling status for data areas and data queues on page 334.
2. From the Work with DG Obj. Trk. Entries display, type a 10 (End journaling) next to the object tracking entries you want. Then do one of the following:
To end journaling using the command defaults, press Enter.
To modify the command defaults, press F4 (Prompt) and continue with the next step.

3. The End Journaling Obj Entries (ENDJRNOBJE) display appears. The Data group definition and Objects prompts identify the object associated with the tracking entry you selected. Although you can change the values shown for these prompts, it is not recommended unless the command was invoked from a command line.
4. Specify the value you want for the End journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and ends or prevents journaling from ending as required.
5. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.
6. To end journaling on the objects specified, press Enter.


Verifying journaling for data areas and data queues


Use this procedure to verify that an object identified by an object tracking entry is journaled correctly. This procedure invokes the Verify Journaling Obj Entries (VFYJRNOBJE) command to determine whether the object is journaled, whether it is journaled to the journal defined in the data group definition, and whether it is journaled with the attributes defined in the data group definition. The command can also be entered from a command line. To verify journaling for objects, do the following:
1. Access the journaled view of the Work with DG Obj. Trk. Entries display as described in Displaying journaling status for data areas and data queues on page 334.
2. From the Work with DG Obj. Trk. Entries display, type a 11 (Verify journaling) next to the object tracking entries you want. Then do one of the following:
To verify journaling using the command defaults, press Enter.
To modify the command defaults, press F4 (Prompt) and continue with the next step.

3. The Verify Journaling Obj Entries (VFYJRNOBJE) display appears. The Data group definition and Objects prompts identify the object associated with the tracking entry you selected. Although you can change the values shown for these prompts, it is not recommended unless the command was invoked from a command line.
4. Specify the value you want for the Verify journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and verifies journaling on the appropriate systems as required.
5. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.
6. To verify journaling on the objects specified, press Enter.


Chapter 15

Configuring for improved performance


This chapter describes how to modify your configuration to use advanced techniques to improve journal performance and MIMIX performance.

Journal performance: The following topics describe how to improve journal performance:

Minimized journal entry data on page 339 describes benefits of and restrictions for using minimized user journal entries for *FILE and *DTAARA objects. A discussion of large object (LOB) data in minimized entries and configuration information are included.

Configuring for high availability journal performance enhancements on page 341 describes journal caching and journal standby state within MIMIX, which support the IBM i5/OS option 42 High Availability Journal Performance feature (the Journal Standby feature and journal caching). Requirements and restrictions are included.

MIMIX performance: The following topics describe how to improve MIMIX performance:

Caching extended attributes of *FILE objects on page 345 describes how to change the maximum size of the cache used to store extended attributes of *FILE objects replicated from the system journal.

Increasing data returned in journal entry blocks by delaying RCVJRNE calls on page 346 describes how you can improve object send performance by changing the size of the block of data from a receive journal entry (RCVJRNE) call and delaying the next call based on a percentage of the requested block size.

Configuring high volume objects for better performance on page 350 describes how to change your configuration to improve system journal performance.

Improving performance of the #MBRRCDCNT audit on page 351 describes how to use the CMPRCDCNT commit threshold policy to limit comparisons and thereby improve performance of this audit in environments which use commitment control.


Minimized journal entry data


MIMIX supports the ability to process minimized journal entries placed in a user journal for object types of file (*FILE) and data area (*DTAARA). The i5/OS operating system provides the ability to create journal entries using an internal format that minimizes the entry-specific data stored in the journal entry for these object types. This support is enabled in the MIMIX create or change journal definition commands and built using the Build Journal Environment (BLDJRNENV) command. When a journal entry for one of these object types is generated, the system compares the size of the minimized format to the standard format and places whichever is smaller in the journal. For database files, only update journal entries (R-UP and R-UB) and rollback-type update entries (R-BR and R-UR) can be minimized. If MINENTDTA(*FILE) or MINENTDTA(*FLDBDY) is in effect and a database record includes LOB fields, LOB data is journaled only when that LOB is changed. Changes to other fields in the record will not cause the LOB data to be journaled unless the LOB is also changed. When database files have records with static LOB values, minimized journal entries can produce considerable savings. The benefit of using minimized journal entries is that less data is stored in the journal. In a MIMIX replication environment, you also benefit by having less data sent over communications lines and saved in MIMIX log spaces. Factors in your environment, such as the percentage of journal entries that are updates (R-UP), the size of database records, and the number of bytes typically changed in an update, may influence how much benefit you achieve.
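Outside of MIMIX, the same journal attribute can be set directly with the IBM Change Journal command. This is a minimal sketch with illustrative journal names:

CHGJRN JRN(MYLIB/MYJRN) MINENTDTA(*FLDBDY)

In a MIMIX environment, however, set the value in the journal definition as described in Configuring for minimized journal entry data on page 340 so that the setting is preserved when the journaling environment is rebuilt.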

Restrictions of minimized journal entry data


The following MIMIX and operating system restrictions apply:
If you plan to use keyed replication, do not use minimized journal entry data. Minimized journal entries cannot be used when MIMIX support for keyed replication is in use, since the key may not be present in a minimized journal entry.
The use of the value *FLDBDY for minimized journal entry data is limited to systems running i5/OS V5R4 or higher.
Minimized before-images cannot be selected for automatic before-image synchronization checking.

Your environment may impose additional restrictions:
If you rely on full image captures in the receiver as part of your auditing rules, do not configure for minimized entry data.
Even if you do not rely on full image captures for auditing purposes, consider the effect of how data is minimized. The minimizing that results from specifying *FILE does not occur on field boundaries. Therefore, the entry-specific data may not be viewable and may not be usable for auditing purposes. When *FLDBDY is specified, file data for modified fields is minimized on field boundaries. With *FLDBDY, entry-specific data is viewable and may be used for auditing purposes.


Configuring for minimized journal entry data may affect your ability to use the Work with Data Group File Entries on Hold (WRKDGFEHLD) command. For example, using option 2 (Change) on WRKDGFEHLD to convert a minimized record update (R-UP) to a record put (R-PT) will result in a failure when the entry is applied. Record put entries require the presence of a full, non-minimized record.

See the IBM book Backup and Recovery for restrictions and usage of journal entries with minimized entry-specific data.

Configuring for minimized journal entry data


By default, MIMIX user journal replication processes use complete journal entry data. To enable MIMIX to use minimized journal entry data for specific object types, do the following:
1. From the Work with Journal Definitions display, use option 2 (Change) to access the journal definition you want.
2. On the following display, press Enter twice to see all prompts for the display. Page down to the bottom of the display.
3. Press F10 (Additional parameters) to access the Minimize entry specific data prompt.
4. Specify the values you want at the Minimize entry specific data prompt and press Enter.
5. In order for the changes to be effective, you must build the journaling environment using the updated journal definition. To do this, type 14 (Build) next to the definition you just modified on the Work with Journal Definitions display and press Enter.


Configuring for high availability journal performance enhancements


MIMIX supports IBM's High Availability Journal Performance i5/OS option 42, which includes the Journal Standby feature and journal caching. These high availability performance enhancements improve replication performance on the target system and provide significant performance improvement by eliminating the need to start journaling at switch time. MIMIX support of IBM's high availability performance enhancements consists of two independent components: journal standby state and journal caching. These components work individually or together, although when used together, each component must be enabled separately.

Journal standby state minimizes replication impact on the target system by providing the benefits of an active journal without writing the journal entries to disk. As such, journal standby state is particularly helpful in saving disk space in environments that do not rely on journal entries for other purposes. Moreover, journal standby state minimizes switch times by retaining the journal relationship for replicated objects.

Journal caching provides a means by which to cache journal entries and their corresponding database records into main storage and write to disk only as necessary. Journal caching is particularly helpful during batch operations when large numbers of add, update, and delete operations against journaled objects are performed.

Journal standby state and journal caching can be used in source send configuration environments as well as in environments where remote journaling is enabled. For restrictions of MIMIX support of IBM's high availability performance enhancements, see Restrictions of high availability journal performance enhancements on page 343.

Note: For more information, also see the topics on journal management and system performance in the IBM eServer iSeries Information Center.

Journal standby state


Journal standby state minimizes replication impact by providing the benefits of an active journal without writing the journal entries to disk. As such, journal standby state is particularly helpful in saving disk space in environments that do not rely on journal entries for other purposes. Moreover, if you are journaling on apply, journal standby state can provide a performance improvement on the apply session. If you are not using journaling on target and want to have a switchable data group, then using journal standby state may offer a benefit in reduced switch time. When a journal is in standby state, it is not necessary to start journaling for objects on the target system prior to switching. All that is necessary prior to switching is to change the journal state to active. You can start or stop journaling while the journal standby state is enabled. However, commitment control cannot be used for files that are journaled to any journal in standby state. Most referential constraints cannot be used when the journal is in standby state. When journal standby state is not an option because of these restrictions, journal caching can be used as an alternative. See Journal caching on page 342.

Minimizing potential performance impacts of standby state


It is possible to experience degraded performance of database apply (DBAPY) processing after enabling journal standby state. You can reduce potential impacts by using the Change Recovery for Access Paths (CHGRCYAP) command, which allows you to change the target access path recovery time for the system.
Note: While this procedure improves performance, it can cause potentially longer initial program loads (IPLs). Deciding to use standby state is a trade-off between run-time performance and IPL duration.
Do the following:
1. On a command line, type the following and press Enter:
CHGRCYAP

2. At the Include access paths prompt, specify *ELIGIBLE to include only eligible access paths in the recovery time specification.

Journal caching
Journal caching is an attribute of the journal that is defined. When journal caching is enabled, the system caches journal entries and their corresponding database records into main storage. This means that neither the journal entries nor their corresponding database records are written to disk until an efficient disk write can be scheduled. This usually occurs when the buffer is full, or at the first commit, close, or file end of data. Because most database transactions no longer must wait for a synchronous write of the journal entries to disk, the performance gain can be significant. For example, batch operations must usually wait for each new journal entry to be written to disk. Journal caching can be helpful during batch operations when large numbers of add, update, and delete operations against journaled objects are performed. The default value for journal caching is *BOTH. It is recommended that you use the default value of *BOTH to perform journal caching on both the source and the target systems. For more information about journal caching, see IBM's Redbooks technote, Journal Caching: Understanding the Risk of Data Loss.

MIMIX processing of high availability journal performance enhancements


You can enable both journal standby state and journal caching using a combination of MIMIX and IBM commands. For example, the Journal state (JRNSTATE) parameter, available on the IBM command Change Journal (CHGJRN), offers equivalent and complementary function to the MIMIX parameter Target journal state (TGTSTATE).
Note: For purposes of this document, only MIMIX parameters are described in detail.
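For illustration, the IBM-side equivalents could be entered as follows on the target system. This is a minimal sketch, and the journal library and name are illustrative:

CHGJRN JRN(MYLIB/MYJRN) JRNSTATE(*STANDBY)
CHGJRN JRN(MYLIB/MYJRN) JRNCACHE(*YES)

In a MIMIX environment, the TGTSTATE and JRNCACHE values in the journal definition, described below, are the preferred way to control these attributes.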


To enable journal standby state or journal caching in a MIMIX environment, two parameters have been added to the Create Journal Definition (CRTJRNDFN) and Change Journal Definition (CHGJRNDFN) commands: Target journal state (TGTSTATE) and Journal caching (JRNCACHE). See Creating a journal definition on page 215 and Changing a journal definition on page 217.

When journaling is used on the target system, the TGTSTATE parameter specifies the requested status of the target journal. Valid values for the TGTSTATE parameter are *ACTIVE and *STANDBY. When *ACTIVE is specified and the data group associated with the journal definition is journaling on the target system (JRNTGT(*YES)), the target journal state is set to active when the data group is started. When *STANDBY is specified, objects are journaled on the target system, but most journal entries are prevented from being deposited into the target journal. An additional value, *SAME, is valid for the CHGJRNDFN command, which indicates the TGTSTATE value should remain unchanged.

The JRNCACHE parameter specifies whether the system should cache journal entries in main storage before writing them to disk. Valid values for the JRNCACHE parameter are *TGT, *BOTH, *NONE, or *SRC. Although journal caching can be configured on the target system, source system, or both, it is recommended to be performed on both (*BOTH) the target system and source system. The recommended value of *BOTH is the default. An additional value, *SAME, is valid for the CHGJRNDFN command, which indicates the JRNCACHE value should remain unchanged.

Requirements of high availability journal performance enhancements


Table 44 identifies the software required in order to use MIMIX support of IBM's high availability performance enhancements. Each system in the replication environment must have this software installed and be up to date with the latest PTFs and service packs applied.
Table 44. Software requirements for MIMIX support of IBM's high availability performance enhancements

Software:       i5/OS LPP installed and available
Product:        5722SS1, option 42, feature 5117, i5/OS HA Journal Performance
Minimum level:  V5R3M0 or higher

Restrictions of high availability journal performance enhancements


MIMIX support of IBM's high availability performance enhancements has a unique set of restrictions and high availability considerations. Make sure that you are aware of these restrictions before using journal standby state or journal caching in your MIMIX environment. When using journal standby state or journal caching, be aware of the following restrictions documented by IBM:

Do not use these high availability performance enhancements in conjunction with commitment control. For journals in standby mode, commitment control entries are not sent to or deposited in the journal.
Note: MIMIX does not use commitment control on the target system. As such, MIMIX support of IBM's high availability performance enhancements can be configured on the target system even if commitment control is being used on the source system.

Do not use these high availability performance enhancements in conjunction with referential constraints, with the exception of referential constraint types of *RESTRICT.

Also be aware of the following additional restrictions:

Do not change journal standby state or journal caching on IBM-supplied journals. These journal names begin with Q and reside in libraries whose names also begin with Q (not QGPL). Attempting to change these journals results in an error message.

Do not place a remote journal in journal standby state. Journal caching is also not allowed on remote journals.

Do not use MIMIX support of IBM's high availability performance enhancements in a cascading environment.


Caching extended attributes of *FILE objects


In order to accurately replicate actions against *FILE objects, it is sometimes necessary to retrieve the extended attribute of a *FILE object, such as PF, LF, or DSPF. Whenever large volumes of journal entries for *FILE objects are replicated from the security audit journal (system journal), MIMIX caches this information for a fixed set of *FILE objects to prevent unnecessary retrievals of the extended attribute. The result is a potential reduction of CPU consumption by the object send job and a significant performance improvement. This function can be tailored to suit your environment. The maximum size of the cache is controlled through the use of a data area in the MIMIX product library. The cache size indicates the number of entries that can be contained in the cache. If the data area does not exist in the MIMIX product library, the size of the cache defaults to 15. To configure the extended attribute cache, do the following:
1. Create the data area on the systems on which the object send jobs are running. Type the following command:
CRTDTAARA DTAARA(installation_library/MXOBJSND) TYPE(*CHAR) LEN(2)
2. Specify the cache size (xx). Valid cache values are numbers 00 through 99. Type the following command:
CHGDTAARA DTAARA(installation_library/MXOBJSND) VALUE('xx, RCVJRNE_delay_values')
Notes: The four RCVJRNE delay values are specified in this string along with the cache size. See topic Increasing data returned in journal entry blocks by delaying RCVJRNE calls on page 346 for more information. Using 00 for the cache size value disables the extended attribute cache.
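For example, assuming a product library named MIMIX (your installation library name may differ), setting a cache of 30 entries without the RCVJRNE delay values would look like this:

CRTDTAARA DTAARA(MIMIX/MXOBJSND) TYPE(*CHAR) LEN(2)
CHGDTAARA DTAARA(MIMIX/MXOBJSND) VALUE('30')

Note that a LEN(2) data area holds only the cache size; to also specify the RCVJRNE delay values, the data area must be created with LEN(20), as described in Configuring the RCVJRNE call delay and block values on page 347.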


Increasing data returned in journal entry blocks by delaying RCVJRNE calls


Enhancements have been made to MIMIX to increase the performance of the object send job when a small number of journal entries are present during the Receive Journal Entry (RCVJRNE) call. Journal entries are received in configurable-sized blocks that have a default size of 99,999 bytes. When multiple RCVJRNE calls are performed and each block retrieved is less than 99,999 bytes, unnecessary overhead is created. Through additional controls added to the MXOBJSND *DTAARA objects within the MIMIX installation library, you can now specify the size of the block of data received from RCVJRNE and delay the next RCVJRNE call based on a percentage of the requested block size. Doing so increases the probability of receiving a full journal entry block and improves object send performance, reducing the number of RCVJRNE calls while simultaneously increasing the quantity of data returned in each block. This delay, along with the extended file attribute cache capability, also reduces CPU consumption by the object send job. See Caching extended attributes of *FILE objects on page 345 for related information.

Understanding the data area format


This enhancement allows you to provide byte values for the block size to receive data from RCVJRNE, as well as specify the percentage of that block size to use for both a small delay block and a medium delay block in the data area. These values are added in segments to the string of characters used by the file attribute cache size. Each block segment is followed by a multiplier value, which determines how long the previously specified journal entry block is delayed. The duration of the delay is the multiplier value multiplied by the value specified on the Reader wait time (seconds) (RDRWAIT) parameter in the data group definition. The RDRWAIT default value is 1 second. The RCVJRNE block size is specified in kilobytes, ranging from 32 Kb to 4000 Kb. If not specified, the default size is 99,999 bytes (100 Kb - 1). The following defines each segment; the number in parentheses is the number of characters that segment can contain:
DTAARA VALUE(cache_size(2), small_block_percentage(2), small_multiplier(2), medium_block_percentage(2), medium_multiplier(2), block_size(4))
To illustrate the effect of specific delay and multiplier values, let us assume the following:
DTAARA VALUE(15,10,02,30,01,0200)
In this example, a small block is defined as any journal entry block consisting of up to 10 percent of the RCVJRNE block size of 200 Kb, or 20,000 bytes. Assuming the RDRWAIT default is in effect, small journal entry blocks will be delayed for 2 seconds before the next RCVJRNE call. Similarly, a medium block is defined as any journal entry block containing between 10 and 30 percent of the RCVJRNE block size, or between 20,001 and 60,000 bytes. Medium blocks are then delayed for 1 second, assuming the default RDRWAIT value is used.


Note: Delays are not applied to blocks larger than the specified medium block percentage. In the previous example, no delays will be applied to blocks larger than 30 percent of the RCVJRNE block size, or 60,000 bytes.

Determining if the data area should be changed


Before changing the data area, it is recommended that you contact a Certified MIMIX Consultant for assistance with running object send processing with diagnostic messages enabled. Review the set of LVI0001 messages returned as a result. By default, the RCVJRNE block size is 99,999 bytes, with the small block value set to 5,000 bytes and the medium block value set to 20,000 bytes. If the resulting messages indicate that you are processing full journal entry blocks, there is no need to add a delay to the RCVJRNE call. In this case, the object send job is already running as efficiently as possible. Note that a block is considered full when the next journal entry in the sequence cannot fit within the size limitations of the block currently being processed.
Note: Reviewing these messages can also be helpful once you have changed the default values, to ensure that the object send job is operating efficiently.
The following is an example of LVI0001 messages:
LVI0001 OM2120 Block Sizes (in Kb): Small=20; Medium=60
LVI0001 OM2120 Block Counts: Small=129; Medium=461; Large=46; Full=1
LVI0001 OM2120 Using RCVJRNE Block Size (in Kb): 200
LVI0001 OM2120 - Range Counts: 0%=80; 2%=28; 5%=21; 10%=23; 15%=56; 20%=161; 25%=221; 30%=23
LVI0001 OM2120 - Range Counts: 40%=10; 50%=4; 60%=5; 70%=3; 80%=0; 90%=1; Full=1
OM2120 File Attr Cache: Size= 30, no cache lookup attempts
In the above example, 636 blocks were sent but only one of the sent blocks was full. Making changes to the delay multiplier or altering the small or medium block size specification would probably make sense in this scenario. Lakeview provides recommendations for changing the block size values in Configuring the RCVJRNE call delay and block values on page 347.

Configuring the RCVJRNE call delay and block values


To configure the delay and block values when retrieving journal entry blocks, do the following:
Note: Prior to configuring the RCVJRNE call delay, carefully read the information provided in Understanding the data area format on page 346 and Determining if the data area should be changed on page 347.
1. Create the data area on the systems on which the object send jobs are running. Type the following command:
CRTDTAARA DTAARA(installation_library/MXOBJSND) TYPE(*CHAR) LEN(20)
Note: Although you will see improvements from the file attribute cache with the default character value (LEN(2)), enhancements are maximized by recreating the MXOBJSND data area with LEN(20) to use the RCVJRNE call delays.
2. Specify the RCVJRNE block size, percentages, and multipliers to be used for the delay. Valid values for the RCVJRNE block size are 32 Kb to 4000 Kb. Valid values for the percentages and multipliers are numbers 01 through 99. Lakeview recommends typing the following as a starting point, where cache_size is the two-character number for the size of the file attribute cache:
CHGDTAARA DTAARA(installation_library/MXOBJSND) VALUE(cache_size,10,02,30,01,0100)
Note: For information about the cache size, see Caching extended attributes of *FILE objects on page 345.



Configuring high volume objects for better performance


Some objects, such as data areas and data queues, can have significant activity against them and can cause MIMIX to use significant CPU resource. One or several programs can use the QSNDDTAQ and QRCVDTAQ APIs to generate thousands of journal entries for a single *DTAQ. For each journal entry, system journal replication processes package the entire contents of the *DTAQ and send it to the apply system. MIMIX then individually applies each *DTAQ entry using the QSNDDTAQ API. If the data group is configured for multiple Object retrieve processing (OBJRTVPRC) jobs, then several object retrieve jobs could be started (up to the maximum configured) to handle the activity against the *DTAQ. MIMIX contains redundancy logic that eliminates multiple journal entries for the same object when the entire object is replicated. When you configure a data group for system journal replication, you should:

Place all *DTAQs in the same object-only data group (see the sketch after this list).
Limit the maximum number of object retrieve jobs for the data group to one. Defaults can be used for the other object data group jobs.
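For illustration only, a data group object entry directing all data queues in a library into a dedicated object-only data group might be added as shown below. The data group name and library are illustrative, and the parameter keywords reflect the entry format shown earlier in this book; prompt the Add Data Group Object Entry (ADDDGOBJE) command with F4 to confirm the exact prompts in your installation:

ADDDGOBJE DGDFN(DTAQDG SYSTEM1 SYSTEM2) LIB1(MYLIB) OBJ1(*ALL) OBJTYPE(*DTAQ) PRCTYPE(*INCLD)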


Improving performance of the #MBRRCDCNT audit


Environments that use commitment control may find that, in some conditions, a request to run the #MBRRCDCNT audit or the Compare Record Count (CMPRCDCNT) command can be extremely long-running. This is possible in environments that use commitment control with long-running commit transactions that include large numbers (tens of thousands) of record operations within one transaction. In such an environment, the compare request can be long-running when the number of members to be compared is very large and there are uncommitted changes present at the time of the request.

The Set MIMIX Policies (SETMMXPCY) command includes the CMPRCDCNT commit threshold policy (CMPRCDCMT parameter), which provides the ability to specify a threshold at which requests to compare record counts will no longer perform the comparison due to commit cycle activity on the source system. The shipped default values for this policy permit record count comparison requests without regard to commit cycle activity on the source system. These policy default values are suitable for environments that do not have the commitment control environment indicated, or that can tolerate a long-running comparison. If your environment cannot tolerate a long-running request, you can specify a numeric value for the CMPRCDCMT parameter for either the MIMIX installation or for a specific data group. This will change the behavior of MIMIX by affecting what is compared, and can improve performance of #MBRRCDCNT and CMPRCDCNT requests.

Note: Equal record counts suggest but do not guarantee that files are synchronized. When a threshold is specified for the CMPRCDCNT commit threshold policy, record count comparisons can have a higher number of file members that are not compared. This must be taken into consideration when using the comparison results to gauge whether systems are synchronized.

A numeric value for the CMPRCDCMT parameter defines the maximum number of uncommitted record operations that can exist for files waiting to be applied in an apply session at the time a compare record count request is invoked. The number specified must be representative of the number of uncommitted record operations. When a numeric value is specified, MIMIX recognizes whether the number of uncommitted record operations for an apply session exceeds the threshold at the time a compare request is invoked. If an apply session has not reached the threshold, the comparison is performed. If the threshold is exceeded, MIMIX will not attempt to compare members from that apply session. Instead, the results will display the *CMT value for the difference indicator, indicating that commit cycle activity on the source system prevented active processing from comparing counts of current records and deleted records in the selected member. Each database apply session is evaluated against the threshold independently. As a result, it is possible for record counts to be compared for files in one apply session but not be compared in another apply session, as illustrated in the following example.


Example: This example shows the result of setting the policy for a data group to a value of 10,000. Table 45 shows the files replicated by each of the apply sessions used by the data group and the result of comparison. Because of the number of uncommitted record operations present at the time of the request, files processed by apply sessions A and C are not compared.
Table 45. Sample results with a policy threshold value of 10,000.

Apply    Files   Uncommitted Record    Apply Session   Result
Session          Operations Per File   Total
A        A01     11,000                > 10,000        Not compared, *CMT
         A02     0                                     Not compared, *CMT
B        B01     5,000                 < 10,000        Compared
         B02     0                                     Compared
C        C01     7,000                 > 10,000        Not compared, *CMT
         C02     6,000                                 Not compared, *CMT
D        D01     50                    < 10,000        Compared
         D02     500                                   Compared
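To set the threshold used in this example, you could run the Set MIMIX Policies command. This is a minimal sketch; the data group qualifier shown is illustrative, and you should prompt SETMMXPCY with F4 to confirm the exact prompts in your installation:

SETMMXPCY DGDFN(MYDG SYSTEM1 SYSTEM2) CMPRCDCMT(10000)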


Chapter 16

Configuring advanced replication techniques


This chapter describes how to modify your configuration to support advanced replication techniques for user journal (database) and system journal (object) replication.

User journal replication: The following topics describe advanced techniques for user journal replication:

Keyed replication on page 355 describes the requirements and restrictions of replication that is based on key values within the data. This topic also describes how to configure keyed replication at the data group or file entry level as well as how to verify key attributes.

Data distribution and data management scenarios on page 361 defines and identifies configuration requirements for the following techniques: bi-directional data flow, file combining, file sharing, file merging, broadcasting, and cascading.

Trigger support on page 368 describes how MIMIX handles triggers and how to enable trigger support. Requirements and considerations for replication of triggers, including considerations for synchronizing files with triggers, are included.

Constraint support on page 370 identifies the types of constraints MIMIX supports. This topic also describes delete rules for referential constraints that can cause dependent files to change and MIMIX considerations for replication of constraint-induced modifications.

Handling SQL identity columns on page 373 describes the problem of duplicate identity column values and how the Set Identity Column Attribute (SETIDCOLA) command can be used to support replication of SQL tables with identity columns. Requirements and limitations of the SETIDCOLA command as well as alternative solutions are included.

Collision resolution on page 381 describes available support within MIMIX to automatically resolve detected collisions without user intervention and its requirements. This topic also describes how to define and work with collision resolution classes.

System journal replication: The following topics describe advanced techniques for system journal replication:

Omitting T-ZC content from system journal replication on page 387 describes considerations and requirements for omitting content of T-ZC journal entries from replicated transactions for logical and physical files.

Selecting an object retrieval delay on page 391 describes how to set an object retrieval delay value so that a MIMIX lock on an object does not interfere with your applications. This topic includes several examples.

Configuring to replicate SQL stored procedures and user-defined functions on page 393 describes the requirements for replicating these constructs and how to configure MIMIX to replicate them.


Using Save-While-Active in MIMIX on page 396 describes how to change the type of save-while-active option used when saving objects. You can view and change these configuration values for a data group through an interface such as SQL or DFU.


Keyed replication
By default, MIMIX user journal replication processes use positional replication. You can change from positional replication to keyed replication for database files.

Keyed vs positional replication


In data groups that are configured for user journal replication, default values use positional replication. In positional file replication, data on the target system is identified by its position, or relative record number (RRN), in the file member. If data exists in a file on the source system, an exact copy must exist in the same position in a file on the target system. When the file on the source system is updated, MIMIX finds the data in the exact location on the target system and updates that data with the changes. User journal replication processes also support the update of files by key, allowing replication to be based on key values within the data instead of on the position of the data within the file. Keyed replication support is subject to the requirements and restrictions described below. Positional file replication provides the best performance. Keyed file replication offers a greater level of flexibility, but you may notice greater CPU usage when MIMIX must search each file for the specified key. You also need to be aware that data collisions can occur when an attempt is made to simultaneously update the same data from two different sources. Lakeview Technology recommends positional replication for most high availability requirements. Keyed replication is best used for more flexible scenarios, such as file sharing, file routing, or file combining.

Requirements for keyed replication


Journal images - MIMIX may need to be configured so that both before-images and after-images of the journal transaction are placed in the journal. The Journal image element of the File and tracking entry options (FEOPT) parameter controls which journal images are placed in the journal. Default values result in only an after-image of the record. However, some configurations require both before-images and after-images. The Journal image value specified in the data group definition is in effect unless a different value is specified for the FEOPT parameter in a file entry or object entry. It is recommended that you use the Journal image value of *BOTH whenever there are file entries with keyed replication, to prevent before-images from being filtered out by the database send process. If the unique key fields of the database file are updated by applications, you must use the value *BOTH.

Unique access path - At least one unique access path must exist for the file being replicated. The access path can be either part of the physical file itself or defined in a logical file dependent on the physical file.


You can use the Verify Key Attributes (VFYKEYATR) command to determine whether a physical file is eligible for keyed replication. See Verifying key attributes on page 359.

Restrictions of keyed replication


- MIMIX does not support keyed replication in data groups that are configured for MIMIX Dynamic Apply.
- The Compare File Data (CMPFILDTA) command cannot compare files that are configured for keyed replication. If you run the #FILDTA audit or the CMPFILDTA command against keyed files, the files are excluded from the comparison and a message indicates that files using *KEYED replication were not processed.
- When keyed replication is in use, the journal and journal definition cannot be configured to allow object types to support minimized entry-specific data. For more information, see Minimized journal entry data on page 339.

Implementing keyed replication


You can implement keyed replication for an entire data group or for individual data group file entries. If you configure a data group for keyed replication, MIMIX uses keyed replication as the default for all processing of all associated data group file entries. If you configure individual data group file entries for keyed replication, the values you define in the data group file entry override the defaults used by the data group for the associated file.

Attention: If you attempt to change the file replication from *KEYED to *POSITION, a warning message is returned indicating that the position of the data in the file may not match its position in the file on the backup system. Changing from keyed to positional replication can result in a mismatch of relative record numbers (RRN) between the target system and source system.

Changing a data group configuration to use keyed replication


You can define keyed replication for a data group when you are initially configuring MIMIX, or you can change the configuration later. To use keyed replication for all database replication defined for a data group, the following requirements must be met:
1. Before you change a data group definition to support keyed replication, do the following:
   a. Verify that the files defined to the data group are journaled correctly.
   b. If the files are not currently journaled correctly, end journaling for the file entries defined to the data group. Use topic Ending Journaling in the Using MIMIX book.
2. In the data group definition used for replication, you must specify the following:
   - Data group type of *ALL or *DB.
   - DB journal entry processing must have Before images set to *SEND for source send configurations. When using remote journaling, all journal entries are sent.
   - Verify that you have the value you need specified for the Journal image element of the File and tracking ent. options; *BOTH is recommended.
   - File and tracking ent. options must specify *KEYED for the Replication type element.
3. The files identified by the data group file entries for the data group must be eligible for keyed replication. See topic Verifying Key Attributes in the Using MIMIX book.
4. If you have modified file entry options on individual data group file entries, ensure that the values used are compatible with keyed replication.
5. Start journaling for the file entries using Starting journaling for physical files on page 326.
A hedged command sketch follows these steps.
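As a hedged illustration only, changing a data group definition for keyed replication might look like the following. The data group name is hypothetical; the TYPE keyword for the Data group type and the positions of the Journal image and Replication type elements within FEOPT are assumptions, so prompt the CHGDGDFN command (F4) to confirm before use.

/* Hedged sketch: database-only data group with keyed file entry   */
/* options (*BOTH journal images, *KEYED replication type).        */
/* FEOPT element positions are illustrative; confirm by prompting. */
CHGDGDFN DGDFN(INVDG SYS1 SYS2) TYPE(*DB) FEOPT(*BOTH *KEYED)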

Changing a data group file entry to use keyed replication


By default, data group file entries use the same file entry options as specified in the data group definition. If you configure individual data group file entries for keyed replication, the values you define in the data group file entry override the defaults used by the data group for the associated file. If you want to use keyed replication for one or more individual data group file entries defined for a data group, you need the following:
1. Before you change a data group file entry to support keyed replication, ensure that the file is already journaled correctly. If the file is not being journaled correctly, for example if the data group file entry is not set as described in Step 4, you will need to end journaling for the file entries.
2. The data group definition used for replication must have a Data group type of *ALL or *DB.
3. DB journal entry processing must have Before images set to *SEND for source send configurations. When using remote journaling, all journal entries are sent.
4. The data group file entry must have File and tracking ent. options set as follows (a hedged command sketch follows step 6 below):
   - To override the defaults from the data group definition and use keyed replication on only selected data group file entries, verify that you have the value you need specified for the Journal image element (*BOTH is recommended) and specify *KEYED for the Replication type element.
   - If you are using keyed replication at the data group level, the data group file entries can use the default value *DGDFT for both Journal image and Replication type.

Note: You can use any of the following ways to configure data group file entries for keyed replication:
- Use either procedure in topic Loading file entries on page 272 to add or modify a group of data group file entries. If you are modifying existing file entries in this way, specify *UPDADD for the Update option parameter.
- Use topic Adding a data group file entry on page 278 to create a new file entry.
- Use topic Changing a data group file entry on page 279 to modify an existing file entry.
5. The files identified by the data group file entries for the data group must be eligible for keyed replication. See topic Verifying Key Attributes in the Using MIMIX book.
6. After you have changed individual data group file entries, start journaling for the file entries using Starting journaling for physical files on page 326.
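As a hedged illustration, overriding a single file entry for keyed replication might look like the following. The FILE keyword and the FEOPT element positions are assumptions; prompt the Change Data Group File Entry (CHGDGFE) command (F4) to confirm the parameter names on your release.

/* Hedged sketch: keyed replication for one file entry only.       */
/* The FILE keyword and FEOPT element positions are illustrative.  */
CHGDGFE DGDFN(INVDG SYS1 SYS2) FILE(APPLIB/ORDERS) FEOPT(*BOTH *KEYED)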


Verifying key attributes


Before you configure for keyed replication, verify that the file or files for which you want to use keyed replication are actually eligible. Do the following to verify that the attributes of a file are appropriate for keyed replication:
1. On a command line, type VFYKEYATR (Verify Key Attributes). The Verify Key Attributes display appears.
2. Do one of the following:
   - To verify a file in a library, specify a file name and a library.
   - To verify all files in a library, specify *ALL and a library.
   - To verify files associated with the file entries for a data group, specify *MIMIXDFN for the File prompt and press Enter. Prompts for the Data group definition appear. Specify the name of the data group that you want to check.
3. Press Enter.
4. A spooled file is created that indicates whether you can use keyed replication for the files in the library or data group you specified. Display the spooled file (WRKSPLF command) or use your standard process for printing. You can use keyed replication for the file if *BOTH appears in the Replication Type Allowed column. If a value appears in the Replication Type Defined column, the file is already defined to the data group with the replication type shown.
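The check can also be run from a command line without prompting. This is a hedged sketch: the FILE and DGDFN keywords are assumptions based on the prompts described above, and the data group name is hypothetical.

/* Hedged sketch: verify all files defined to data group INVDG.    */
/* Results are written to a spooled file (see WRKSPLF).            */
VFYKEYATR FILE(*MIMIXDFN) DGDFN(INVDG SYS1 SYS2)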


Data distribution and data management scenarios


MIMIX supports a variety of scenarios for data distribution and data management, including bi-directional data flow, file combining, file sharing, and file merging. MIMIX also supports data distribution techniques such as broadcasting and cascading. Often, this support requires a combination of advanced replication techniques as well as customizing. These techniques require additional planning before you configure MIMIX. You may need to consider the technical aspects of implementing a technique as well as how your business practices may be affected. Consider the following:
- Can each system involved modify the data?
- Do you need to filter data before sending it to another system?
- Do you need to implement multiple techniques to accomplish your goal?
- Do you need customized exit programs?
- Do any potential collision points exist, and how will each be resolved?

MIMIX user journal replication provides filtering options within the data group definition. Also, MIMIX provides options within the data group definition and for individual data group file entries for resolving most collision points. Additionally, collision resolution classes allow you to specify different resolution methods for each collision point.

Configuring for bi-directional flow


Both MIMIX user journal and system journal replication processes allow data to flow bi-directionally, but their implementations and configuration requirements are distinct. In user journal replication processing, bi-directional data flow is a data sharing technique in which the same named database file can be replicated between databases on two systems in two directions at the same time. When MIMIX user journal replication processes are configured for bi-directional data flow, each system is both a source system and a target system. System journal replication processing supports the bi-directional flow of objects between two systems, but it does not support simultaneous (bi-directional) updates to the same object on multiple systems. Updating the same object from two systems at the same time can cause a loss of data integrity.

File sharing is a scenario in which a file can be shared among a group of systems and can be updated from any of the systems in the group. MIMIX implements file sharing among systems defined to the same MIMIX installation. To enable file sharing, MIMIX must be configured to allow bi-directional data flow. An example of file sharing is when an enterprise maintains a single database file that must be updated from any of several systems.

Bi-directional requirements: system journal replication


To configure system journal replication processes to support bi-directional flow of objects, you need the following:


- Configure two data group definitions between the two systems. In one data group, specify *SYS1 for the Data source (DTASRC) parameter. In the other data group, specify *SYS2 for this parameter.
- Each data group definition should specify *NO for the Allow to be switched (ALWSWT) parameter.
A hedged command sketch follows the note below.

Note: In system journal replication, MIMIX does not support simultaneous updates to the same object on multiple systems and does not support conflict resolution for objects. Once an object is replicated to a target system, system journal replication processes prevent looping by not allowing the same object, regardless of name mapping, to be replicated back to its original source system.
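The following sketch shows how these requirements might be expressed on the Create Data Group Definition (CRTDGDFN) command. The data group names are hypothetical and the other required parameters are omitted for brevity; only DTASRC and ALWSWT are taken from this topic.

/* Hedged sketch: two one-way data groups for object replication.  */
/* Prompt (F4) for the remaining required parameters.              */
CRTDGDFN DGDFN(OBJ1TO2 SYS1 SYS2) DTASRC(*SYS1) ALWSWT(*NO)
CRTDGDFN DGDFN(OBJ2TO1 SYS1 SYS2) DTASRC(*SYS2) ALWSWT(*NO)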

Bi-directional requirements: user journal replication


To configure user journal replication processes to support bi-directional data flow, you need the following:
- Configure two data group definitions between the two systems. In one data group, specify *SYS1 for the Data source (DTASRC) parameter. In the other data group, specify *SYS2 for this parameter.
- For each data group definition, set the DB journal entry processing (DBJRNPRC) parameter so that its Generated by MIMIX element is set to *IGNORE. This prevents any journal entries that are generated by MIMIX from being sent to the target system and prevents looping. (A hedged command sketch follows Figure 19.)
- The files defined to each data group must be configured for keyed replication. Use topic Verifying key attributes on page 359 to determine if files can use keyed replication.
- Analyze your environment to determine the potential collision points in your data. You need to understand how each collision point will be resolved. Consider the following:
  - Can the collision be resolved using the collision resolution methods provided in MIMIX, or do you need customized exit programs? See Collision resolution on page 381.
  - How will your business practices be affected by collision scenarios? For example, suppose you have an order entry application that updates shared inventory records, such as in Figure 19. If two locations attempt to access the last item in stock at the same time, which location will be allowed to fill the order? Does the other location automatically place a backorder or generate a report?
Figure 19. Example of bi-directional configuration to implement file sharing.
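As a hedged illustration of these requirements, the two data group definitions might be changed as follows. The data group names are hypothetical, and the position of the Generated by MIMIX element within the DBJRNPRC parameter is an assumption; prompt the command (F4) to confirm the element order before use.

/* Hedged sketch: prevent MIMIX-generated entries from looping.    */
/* DBJRNPRC element position is illustrative.                      */
CHGDGDFN DGDFN(SHR1TO2 SYS1 SYS2) DBJRNPRC(*IGNORE)
CHGDGDFN DGDFN(SHR2TO1 SYS1 SYS2) DBJRNPRC(*IGNORE)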


Configuring for file routing and file combining


File routing and file combining are data management techniques supported by MIMIX user journal replication processes. The way in which data is used can affect the configuration requirements for a file routing or file combining operation. Evaluate the needs for each pair of systems (source and target) separately. Consider the following:
- Does the data need to be updated in both directions between the systems? If you need bi-directional data flow, see topic Configuring for bi-directional flow on page 361.
- Will users update the data from only one or both systems? If users can update data from both systems, you need to prevent the original data from being returned to its original source system (recursion).
- Is the file routing or file combining scenario a complete solution, or is it part of a larger solution? Your complete solution may be a combination of multiple data management and data distribution techniques. Evaluate the requirements for each technique separately for a pair of systems (source and target). Each technique that you need to implement may have different configuration requirements.

File combining is a scenario in which all or partial information from files on multiple systems can be sent to and combined in a single file on a target system. In its user journal replication processes, MIMIX implements file combining between multiple source systems and a target system that are defined to the same MIMIX installation. MIMIX determines what data from the multiple source files is sent to the target system based on the contents of a journal transaction. An example of file combining is when many locations within an enterprise update a local file and the updates from all local files are sent to one location to update a composite file. The example in Figure 20
shows file combining from multiple source systems onto a composite file on the management system.
Figure 20. Example of file combining

To enable file combining between two systems, MIMIX user journal replication must be configured as follows:
- Configure the data group definition for keyed replication. See topic Keyed replication on page 355.
- If only part of the information from the source system is to be sent to the target system, you need an exit program to filter out transactions that should not be sent to the target system.
- If you allow the data group to be switched (by specifying *YES for the Allow to be switched (ALWSWT) parameter) and a switch occurs, the file combining operation effectively becomes a file routing operation. To ensure that the data group will perform file combining operations after a switch, you need an exit program that allows the appropriate transactions to be processed regardless of which system is acting as the source for replication.
- After the combining operation is complete, if the combined data will be replicated or distributed again, you need to prevent it from returning to the system on which it originated.

File routing is a scenario in which information from a single file can be split and sent to files on multiple target systems. In user journal replication processes, MIMIX implements file routing between a source system and multiple target systems that are defined to the same MIMIX installation. To enable file routing, MIMIX calls a user exit program that makes the file routing decision. The user exit program determines what data from the source file is sent to each of the target systems based on the contents
of a journal transaction. An example of file routing is when one location within an enterprise performs updates to a file for all other locations, but only updated information relevant to a location is sent back to that location. The example in Figure 21 shows the management system routing only the information relevant to each network system to that system.
Figure 21. Example of file routing

To enable file routing, MIMIX user journal replication processes must be configured as follows:
- Configure the data group definition for keyed replication. See topic Keyed replication on page 355.
- The data group definition must call an exit program that filters transactions so that only those transactions which are relevant to the target system are sent to it.
- If you allow the data group to be switched (by specifying *YES for the Allow to be switched (ALWSWT) parameter) and a switch occurs, the file routing operation effectively becomes a file combining operation. To ensure that the data group will perform file routing operations after a switch, you need an exit program that allows the appropriate transactions to be processed regardless of which system is acting as the source for replication.

Configuring for cascading distributions


Cascading is a distribution technique in which data passes through one or more intermediate systems before reaching its destination. MIMIX supports cascading in both its user journal and system journal replication paths. However, the paths differ in their implementation.


Data can pass through one intermediate system within a MIMIX installation. Additional MIMIX installations will allow you to support cascading in scenarios that require data to flow through two or more intermediate systems before reaching its destination. Figure 22 shows the basic cascading configuration that is possible within one MIMIX installation.
Figure 22. Example of a simple cascading scenario

To enable cascading you must have the following:
- Within a MIMIX installation, the management system must be the intermediate system.
- Configure a data group between the originating system (a network system) and the intermediate (management) system. Configure another data group for the flow from the intermediate (management) system to the destination system.
For user journal replication, you also need the following:
- The data groups should be configured to send journal entries that are generated by MIMIX. To do this, specify *SEND for the Generated by MIMIX element of the DB journal entry processing (DBJRNPRC) parameter. When this is the case, MIMIX performs the database updates. (A hedged command sketch follows Figure 23.)
- If it is possible for the data to be routed back to the originating or any intermediate systems, you need to use keyed replication.
Note: Once an object is replicated to a target system, MIMIX system journal replication processes prevent looping by not allowing the same object, regardless of name mapping, to be replicated back to its original source system.
Cascading may be used with other data management techniques to accomplish a specific goal. Figure 23 shows an example where the Chicago system is a management system in a MIMIX installation that collects data from the network systems and broadcasts the updates to the other participating systems. The network systems send unfiltered data to the management system. Figure 23 is a cascading scenario because changes that originate on the Hong Kong system pass through an intermediate system (Chicago) before being distributed to the Mexico City system and other network systems in the MIMIX installation. Exit programs are required for the
data groups acting between the management system and the destination systems, and they must prevent updates from flowing back to their system of origin.
Figure 23. Bi-directional example that implements cascading for file distribution.
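As a hedged illustration of the user journal requirement for cascading, the two data groups in a configuration like Figure 22 might be changed as follows. The data group and system names are hypothetical, and the position of the Generated by MIMIX element within DBJRNPRC is an assumption; prompt the command to confirm before use.

/* Hedged sketch: send MIMIX-generated entries onward so the       */
/* intermediate (management) system can cascade the updates.       */
CHGDGDFN DGDFN(NETTOMGT NETSYS MGTSYS) DBJRNPRC(*SEND)
CHGDGDFN DGDFN(MGTTODST MGTSYS DSTSYS) DBJRNPRC(*SEND)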


Trigger support
A trigger program is a user exit program that is called by the database when a database modification occurs. Trigger programs can be used to make other database modifications, which are called trigger-induced database modifications.

How MIMIX handles triggers


The method used for handling triggers is determined by settings in the data group definition and file entry options. MIMIX supports database trigger replication in one of the following ways:
- Using i5/OS trigger support to prevent the triggers from firing on the target system and replicating the trigger-induced modifications.
- Ignoring trigger-induced modifications found in the replication stream and allowing the triggers to fire on the target system.

Considerations when using triggers


You should choose only one of these methods for each data group file entry. Which method you use depends on a variety of considerations:
- The default replication type for data group file entry options is positional replication. With positional replication, each file is replicated based on the position of the record within the file. The value of the relative record number in the journal entry is used to locate a database record being updated or deleted. When positional replication is used and triggers fire on the target system, they can cause trigger-induced modifications to the files being replicated. These trigger-induced modifications can change the relative record number of the records in the file because the relative record numbers of the trigger-induced modifications are not likely to match the relative record numbers generated by the same triggers on the source system. Because of this, triggers should not be allowed to fire on the target system. You should prevent the triggers from firing on the target system and replicate the trigger-induced modifications from the source to the target system.
- When trigger-induced modifications are made by replicated files to files not replicated by MIMIX, you may want the triggers to fire on the target system. This ensures that the files that are not replicated receive the same trigger-induced modifications on the target system as they do on the source system.
- When triggers do not cause database record changes, you may choose to allow them to fire on the target system. However, if non-database changes occur and you are using object replication, the object replication will replicate trigger-induced object changes from the source system. In this case, the triggers should not be permitted to fire.
- When triggers are allowed to fire on the target system, the files being updated by these triggers should be replicated using the same apply session as the parent files to avoid lock contention.
- A slight performance advantage may be achieved by replicating the trigger-induced modifications instead of ignoring them and allowing the triggers to fire. This is because the database apply process checks each transaction before processing to see if filtering is required, and firing the trigger adds additional overhead to database processing.

Enabling trigger support


Trigger support is enabled for user journal replication by specifying the appropriate file entry option values for parameters on the Create Data Group Definition (CRTDGDFN) and Change Data Group Definition (CHGDGDFN) commands. You can also enable trigger support at a file level by specifying the appropriate file entry options associated with the file. If you already have a trigger solution in place you can continue to use that implementation or you can use the MIMIX trigger support.

Synchronizing files with triggers


When you are synchronizing a file with triggers and you are using MIMIX trigger support, you must specify *DATA on the Sending mode parameter on the Synchronize DG File Entry (SYNCDGFE) command. On the Disable triggers on file parameter, you can specify whether you want the triggers disabled on the target system during file synchronization. The default is *DGFE, which uses the value indicated for the data group file entry. If you specify *YES, triggers will be disabled on the target system during synchronization. A value of *NO will leave triggers enabled. For more information on synchronizing files with triggers, see About synchronizing file entries (SYNCDGFE command) on page 480.
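As a hedged sketch, a synchronization request for a file with triggers might look like the following. The keywords shown for the file, Sending mode, and Disable triggers on file prompts (FILE, SNDMODE, DSBTRG) are assumptions, as are the object names; prompt the SYNCDGFE command (F4) to confirm the actual parameter names on your release.

/* Hedged sketch: send member data and disable triggers on the     */
/* target while the file is synchronized. Keyword names are        */
/* illustrative; confirm by prompting SYNCDGFE.                    */
SYNCDGFE DGDFN(INVDG SYS1 SYS2) FILE(APPLIB/ORDERS) SNDMODE(*DATA) DSBTRG(*YES)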


Constraint support
A constraint is a restriction or limitation placed on a file. There are four types of constraints: referential, unique, primary key, and check. Unique, primary key, and check constraints are single-file operations transparent to MIMIX. If a constraint is met for a database operation on the source system, the same constraint will be met for the replicated database operation on the target system. Referential constraints, however, ensure the integrity between multiple files. For example, you could use a referential constraint to:
- Ensure that when an employee record is added to a personnel file, it has an associated department from a company organization file.
- Empty a shopping cart and remove the order records if an internet shopper exits without placing an order.

When constraints are added, removed, or changed on files replicated by MIMIX, these constraint changes are replicated to the target system. With the exception of files that have been placed on hold, MIMIX always enables constraints and applies constraint entries. MIMIX tolerates mismatched before-images or minimized journal entry data CRC failures when applying constraint-generated activity. Because the parent record was already applied, entries with mismatched before-images are applied and entries with minimized journal entry data CRC failures are ignored. To use this support:
- Ensure that your target system is at the same release level as, or a later release level than, the source system so that the target system is able to use all of the i5/OS function that is available on the source system. If an earlier i5/OS level is installed on the target system, the operation will be ignored.
- You must have your MIMIX environment configured for either MIMIX Dynamic Apply or legacy cooperative processing.

Referential constraints with delete rules


Referential constraints can cause changes to dependent database files when the parent file is changed. Referential constraints defined with the following delete rules cause dependent files to change:
- *CASCADE: Record deletion in a parent file causes records in the dependent file to be deleted when the parent key value matches the foreign key value.
- *SETNULL: Record deletion in a parent file updates those records in the dependent file where the value of the parent non-null key matches the foreign key value. For those dependent records that meet the preceding criteria, all null-capable fields in the foreign key are set to null. Foreign key fields with the non-null attribute are not updated.
- *SETDFT: Record deletion in a parent file updates those records in the dependent file where the value of the parent non-null key matches the foreign key value. For those dependent records that meet the preceding criteria, the foreign key field or fields are set to their corresponding default values.


Referential constraint handling for these dependent files is supported through the replication of constraint-induced modifications. MIMIX does not provide the ability to disable constraints because i5/OS would check every record in the file to ensure constraints are met once the constraint is re-enabled. This would cause a significant performance impact on large files and could impact switch performance. If the need exists, this can be done through automation.

Replication of constraint-induced modifications


MIMIX always attempts to apply constraint-induced modifications. Earlier levels of MIMIX provided the Process constraint entries element in the File entry options (FEOPT) parameter, which has since been removed.1 Any previously specified value is now mapped to *YES so that processing always occurs. The considerations for replication of constraint-induced modifications are:
- Files with referential constraints and any dependent files must be replicated by the same apply session.
- When referential constraints cause changes to dependent files not replicated by MIMIX, enabling the same constraints on the target system will allow changes to be made to the dependent files.


1. This element was removed in version 5 service pack 5.0.08.00.


Handling SQL identity columns


If you replicate an SQL table with an identity column within a switchable data group, you may experience problems following a switch to the backup system: the next identity column value generated on the backup system may not be what you expect. In environments with both systems running i5/OS V5R4 or higher and MIMIX service pack 5.0.09.00 or higher, MIMIX automatically checks for scenarios that can cause duplicate identity column values and, if possible, attempts to prevent the problem from occurring. Even in this environment, MIMIX cannot prevent all troublesome scenarios. As a result, the Set Identity Column Attribute (SETIDCOLA) command is available to help support SQL tables with identity columns. This command is useful for handling scenarios that would otherwise result in errors caused by duplicate identity column values when inserting rows into tables.

The identity column problem explained


In SQL, a table may have a single numeric column which is designated an identity column. When rows are inserted into the table, the database automatically generates a value for this column, incrementing the value with each insertion. Several attributes define the behavior of the identity column, including: Minimum value, Maximum value, Increment amount, Start value, Cycle/No Cycle, and Cache amount. This discussion is limited to the following attributes:
- Increment amount - the amount by which each new row's identity column value differs from the previously inserted row. This can be a positive or negative value.
- Start value - the value used for the next row added. This can be any value, including one that is outside of the range defined by the minimum and maximum values.
- Cycle/No Cycle - indicates whether or not values cycle from maximum back to minimum, or from minimum to maximum if the increment is negative.

Nothing prevents identity column values from being generated more than once. However, in typical usage, the identity column is also a primary, unique key and set to not cycle. The value generator for the identity column is stored internally with the table. Following certain actions which transfer table data from one system to another, the next identity column value generated on the receiving system may not be as expected. This can occur after a MIMIX switch and after other actions such as certain save/restore operations on the backup system. Similarly, other actions such as applying journaled changes (APYJRNCHG), also do not keep the value generator synchronized. Any SQL table with an identity column that is replicated by a switchable data group can potentially experience this problem. Journal entries used to replicate inserted rows on the production system do not contain information that would allow the value generator to remain synchronized. The result is that after a switch to the backup system, rows can be inserted on the backup system using identity column values
other than the next expected value. The starting value for the value generator on the backup system is used instead of the next expected value based on the table's content. This can result in the reuse of identity column values, which in turn can cause a duplicate key exception. Detailed technical descriptions of all attributes are available in the IBM eServer iSeries Information Center. Look in the Database section for the SQL Reference for the CREATE TABLE and ALTER TABLE statements.

When the SETIDCOLA command is useful


Important! The SETIDCOLA command should not be used in all environments. Its use is subject to the limitations described in SETIDCOLA command limitations on page 374. If you cannot use the SETIDCOLA command, see Alternative solutions on page 375. Examples of when you may need to run the SETIDCOLA command are:
- The SETIDCOLA command can be used to determine whether a data group replicates tables which contain identity columns and to report the results. To do so, specify ACTION(*CHECKONLY) on the command. It is recommended that you initially use this capability before setting values. You may want to perform this type of check whenever new tables are created that might contain identity columns. See Checking for replication of tables with identity columns on page 378.
- For many environments, default values on the SETIDCOLA command are appropriate for use following a planned switch to the backup system to ensure that the identity column values inserted on the backup system start at the proper point. After performing a switch to the backup system, run the command from the backup system before starting replication in the reverse direction.
- After a restore (RSTnnn command) from a "save of backup machine." For this scenario, run the command on the system on which you performed the restore.
- Before saving files to tape or other media from the backup system. For this scenario, run the command from the backup system. By doing this, you avoid the need to run the command after restoring.

Also, the SETIDCOLA command is needed in any environment in which you are attempting to restore from a save that was created while replication processes were running.

SETIDCOLA command limitations


In general, SETIDCOLA only works correctly for the most typical scenario, where all values for identity columns have been generated by the system and no cycles are allowed. In other scenarios, it may not restart the identity column at a useful value.
Limited support for unplanned switch - Following an unplanned switch, the backup system may not be caught up with all the changes that occurred on the production system. Using the SETIDCOLA command on the backup system may result in the generation of identity column values that were used on the production system but not yet replicated to the backup system. Careful selection of the value of the INCREMENTS parameter can minimize the likelihood of this problem, but the value
chosen must be valid for all tables in the data group. See Examples of choosing a value for INCREMENTS on page 377.
Not supported - The following scenarios are known to be problematic and are not supported. If you cannot use the SETIDCOLA command in your environment, consider the Alternative solutions on page 375.
- Columns that have cycled - If an identity column allows cycling and adding a row increments its value beyond the maximum range, the restart value is reset to the beginning of the range. Because cycles are allowed, the assumption is that duplicate keys will not be a problem. However, unexpected behavior may occur when cycles are allowed and old rows are removed from the table with a frequency such that the identity column values never actually complete a cycle. In this scenario, the ideal starting point would be wherever there is the largest gap between existing values. The SETIDCOLA command cannot address this scenario; it must be handled manually.
- Rows deleted on production table - An application may require that an identity column value never be generated twice. For example, the value may be stored in a different table, data area, or data queue, given to another application, or given to a customer. The application may also require that the value always locate either the original row or, if the row is deleted, no row at all. If rows with values at the end of the range are deleted and you perform a switch followed by the SETIDCOLA command, the identity column values of the deleted rows will be re-generated for newly inserted rows. The SETIDCOLA command is not recommended for this environment. This must be handled manually.
- No rows in backup table - If there are no rows in the table on the backup system, the restart value will be set to the initial start value. Running the SETIDCOLA command on the backup system may result in re-generating values that were previously used. The SETIDCOLA command cannot address this scenario; it must be handled manually.
- Application generated values - Optionally, applications can supply identity column values at the time they insert rows into a table. These application-generated identity values may be outside the minimum and maximum values set for the identity column. For example, a table's identity column range may be from 1 through 100,000,000 but an application occasionally supplies values in the range of 200,000,000 through 500,000,000. If cycling is permitted and the SETIDCOLA command is run, the command would recognize the higher values from the application and would cycle back to the minimum value of 1. Because the result would be problematic, the SETIDCOLA command is not recommended for tables which allow application-generated identity values. This must be handled manually.

Alternative solutions
If you cannot use the SETIDCOLA command because of its known limitations, you have these options:
- Manually reset the identity column starting point: Following a switch to the backup system, you can manually reset the restart value for tables with identity
columns. The SQL statement ALTER TABLE name ALTER COLUMN can be used for this purpose (a hedged sketch follows this list).
- Convert to SQL sequence objects: To overcome the limitations of identity column switching and to avoid the need to use the SETIDCOLA command, SQL sequence objects can be used instead of identity columns. Sequence objects are implemented using a data area which can be replicated by MIMIX. The data area for the sequence object must be configured for replication through the user journal (cooperatively processed).
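As a hedged illustration of the manual reset, suppose a replicated table ORDERS in library APPLIB has an identity column ORDER_ID and, based on the table's content, the next value the application expects is 375 (the names and value are hypothetical). The statement could be placed in a RUNSQLSTM script or run from interactive SQL; it is shown here with system (*SYS) naming.

-- Hedged sketch: restart the identity value generator at the
-- next expected value (hypothetical names and value).
ALTER TABLE APPLIB/ORDERS ALTER COLUMN ORDER_ID RESTART WITH 375;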

SETIDCOLA command details


The Set Identity Column Attribute (SETIDCOLA) command performs a RESTART WITH alteration on the identity column of any SQL tables defined for replication in the specified data group. For each table, the new restart value determines the identity column value for the next row added to the table. Careful selection of values can ensure that, when applications are started, the identity column starting values exceed the last values used prior to the switch or save/restore operation. If you use Lakeview-provided product-level security, the minimum authority level for this command is *OPR.
Note: For systems running i5/OS V5R3, it is recommended that you apply IBM PTFs before you use the SETIDCOLA command. For more information, log in to Support Central and refer to the Technical Documents page for a list of recommended operating system fixes.
The Data group definition (DGDFN) parameter identifies the data group against which the specified action is taken. Only tables that are identified for replication by the specified data group are addressed.
The Action (ACTION) parameter specifies what action is to be taken by the command. Only tables which can be replicated by the specified data group are acted upon. Possible values are:
- *SET - The command checks and sets the attribute of the identity column of each table which meets the criteria. This is the default value.
- *CHECKONLY - The command checks for tables which have identity columns. It does not set the attributes of the identity columns. The result of the check is reported in the job log. If there are affected tables, message LVE3E2C will be issued. If no tables are affected, message LVI3E26 will be issued.
The Number of jobs (JOBS) parameter specifies the number of jobs to use to process tables which meet the criteria for processing by the command. A table will only be updated by one job; each job can update multiple tables. The default value, *DFT, is currently set to one job. You can specify as many as 30 jobs.
The Number of increments to skip (INCREMENTS) parameter specifies how many increments to skip of the counter which generates the starting value for the identity column. The value specified is used for all tables which meet the criteria for processing by the command. Be sure to read the information in Examples of choosing a value for INCREMENTS on page 377. Possible values are:
- *DFT - Skips the default number of increments, currently set to 1 increment. Following a planned switch where tables are synchronized, you can usually use *DFT.
- number-of-increments-to-skip - Specify the number of increments to skip. Valid values are 1 through 2,147,483,647. Following an unplanned switch, use a larger value to ensure that you skip any values used on the production system that may not have been replicated to the backup system.

Usage notes
- The reason you are using this command determines which system you should run it from. See When the SETIDCOLA command is useful on page 374 for details.
- The command can be invoked manually or as part of a MIMIX Model Switch Framework custom switching program.
- Evaluation of your environment to determine an appropriate increment value is highly recommended before using the command.
- This command can be long running when many files defined for replication by the specified data group contain identity columns. This is especially true when affected identity columns do not have indexes over them or when they are referenced by constraints. Specifying a higher number of jobs (JOBS) can reduce this time.
- This command creates a work library named SETIDCOLA which is used by the command. The SETIDCOLA library is not deleted, so that it can be used for any error analysis.
- Internally, the SETIDCOLA command builds RUNSQLSTM scripts (one for each job specified) and uses RUNSQLSTM in spawned jobs to execute the scripts. RUNSQLSTM produces spooled files showing the ALTER TABLE statements executed, along with any error messages received. If any statement fails, the RUNSQLSTM will also fail and return the failing status back to the job where SETIDCOLA is running, and an escape message will be issued.

Examples of choosing a value for INCREMENTS


When choosing a value for INCREMENTS, consider the rate at which each table consumes its available identity values. Account for the needs of the table which consumes numbers at the highest rate, as well as any backlog in MIMIX processing and the activity causing you to run the command. If you have available numbers to use, add a safety factor of at least 100 percent. For example, if the rate of the fastest file is 1,000 numbers per hour and MIMIX is 15 minutes behind (0.25 hours), the value you specify for INCREMENTS needs to result in at least 250 numbers (1000 x 0.25) being skipped. Adding 100 percent to 250 results in an increment of 500.
Note: The MIMIX backlog, sometimes called the latency of changes being transferred to the backup system, is the amount of time from when an operation occurs on the production system until it is successfully sent to the backup system by MIMIX. It does not include the time it takes for MIMIX to apply the entry. Use the DSPDGSTS command to view the Unprocessed entry count for the DB Apply process; this value is the size of the backlog. You need to approximate how long it would take for this value to become zero (0) if application activity were to be stopped on the production system.


For example, data group ORDERS contains tables A and B. Each row added to table A increases the identity value by 1, and each row added to table B increases the identity value by 1,000. Rows are inserted into table A at a rate of approximately 600 rows per hour. Rows are inserted into table B at a rate of approximately 20 rows per hour. Prior to a switch, on the production system the latest value for table A was 75 and the latest value for table B was 30,000. Consider the following scenarios:
Scenario 1. You performed a planned switch for test purposes. Because replication of all transactions completed before the switch and no users have been allowed on the backup system, the backup system has the same values as the production system. Before starting replication in the reverse direction, you run the SETIDCOLA command with an INCREMENTS value of 1. The next rows added to tables A and B will have values of 76 and 31,000, respectively.
Scenario 2. You performed an unplanned switch. From previous experience, you know that the latency of changes being transferred to the backup system is approximately 15 minutes. Rows are inserted into table A at the highest rate. In 15 minutes, approximately 150 rows will have been inserted into table A (600 rows/hour * 0.25 hours). This suggests an INCREMENTS value of 150. However, since all measurements are approximations or based on historical data, this amount should be adjusted by at least 100 percent, to 300, to ensure that duplicate identity column values are not generated on the backup system. The next rows added to tables A and B will have values of 75 + (300 * 1) = 375 and 30,000 + (300 * 1,000) = 330,000, respectively.
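Continuing Scenario 2, the command run on the backup system before restarting replication would skip 300 increments for every table replicated by the data group. The DGDFN, ACTION, and INCREMENTS parameters are documented in this topic; the system names are hypothetical.

/* Skip 300 increments for all tables in data group ORDERS. */
SETIDCOLA DGDFN(ORDERS SYS1 SYS2) ACTION(*SET) INCREMENTS(300)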

Checking for replication of tables with identity columns


To determine whether any files being replicated by a data group have identity columns, do the following. 1. From the production system, specify the data group to check in the following command:
SETIDCOLA DGDFN(name system1 system2) ACTION(*CHECKONLY)

2. Check the job log for the following messages:
   - Message LVE3E2C identifies the number of tables found with identity columns.
   - Message LVI3E26 indicates that no tables were found with identity columns.
3. If the results found tables with identity columns, you need to evaluate the tables and determine whether you can use the SETIDCOLA command to set values.

Setting the identity column attribute for replicated files


At a high level, the steps you need to perform to set the identity columns of files being replicated by a data group are listed below. You may want to plan for the time required for investigation steps and the time to run the command to set values.
1. Run the SETIDCOLA command in check-only mode first to determine if you need to set values. See Checking for replication of tables with identity columns on page 378.
2. Determine whether limitations exist in the replicated tables that would prevent you from running the command to set values. See SETIDCOLA command
limitations on page 374.
3. Determine what increment value is appropriate for use for all tables replicated by the data group. Consider the needs of each table. Also consider the MIMIX backlog at the time you plan to use the command. See Examples of choosing a value for INCREMENTS on page 377.
4. From the appropriate system, as defined in When the SETIDCOLA command is useful on page 374, specify a data group and the number of increments to skip in the command:
SETIDCOLA DGDFN(name system1 system2) ACTION(*SET) INCREMENTS(number)


Collision resolution
Collision resolution is a function within MIMIX user journal replication that automatically resolves detected collisions without user intervention. MIMIX supports the following choices for collision resolution, which you can specify in the file entry options (FEOPT) parameter in either a data group definition or in an individual data group file entry:
- Held due to error (*HLDERR): This is the default value for collision resolution in the data group definition and data group file entries. MIMIX flags file collisions as errors and places the file entry on hold. Any data group file entry for which a collision is detected is placed in a "held due to error" state (*HLDERR). This results in the journal entries being replicated to the target system, but they are not applied to the target database. If the file entry specifies member *ALL, a temporary file entry is created for the member in error and only that file entry is held. Normal processing will continue for all other members in the file. You must take action to apply the changes and return the file entry to an active state. When held due to error is specified in the data group definition or the data group file entry, it is used for all 12 of the collision points.
- Automatic synchronization (*AUTOSYNC): MIMIX attempts to automatically synchronize file members when an error is detected. The member is put on hold while the database apply process continues with the next transaction. The file member is synchronized using copy active file processing, unless the collision occurred at the compare attributes collision point. In the latter case, the file is synchronized using save and restore processing. When automatic synchronization is specified in the data group definition or data group file entry, it is used for all 12 of the collision points.
- Collision resolution class: A collision resolution class is a named definition which provides more granular control of collision resolution. Some collision points also provide additional methods of resolution that can only be accessed by using a collision resolution class. With a defined collision resolution class, you can specify how to handle collision resolution at each of the 12 collision points. You can specify multiple methods of collision resolution to attempt at each collision point. If the first method specified does not resolve the problem, MIMIX uses the next method specified for that collision point.

Additional methods available with CR classes


Automatic synchronization (*AUTOSYNC) and held due to error (*HLDERR) are essentially predefined resolution methods. When you specify *HLDERR or *AUTOSYNC in a data group definition or a data group file entry, that method is used for all 12 of the collision points. If you specify a named collision resolution class in a data group definition or data group file entry, you can customize which resolution method to use at each collision point. Within a collision resolution class, you can specify one or more resolution methods to use for each collision point. *AUTOSYNC and *HLDERR are available for use at each collision point. Additionally, the following resolution methods are also available:
- Exit program (*EXITPGM): A specified user exit program is called to handle the
data collision. This method is available for all collision points. The MXCCUSREXT service program dynamically links your exit program. The MXCCUSREXT service program is shipped with MIMIX and runs on the target system. The exit program is called on three occasions. The first occasion is when the data group is started; this call allows the exit program to handle any initialization or setup you need to perform. The MXCCUSREXT service program (and your exit program) is also called if a collision occurs at a collision point for which you have indicated that an exit program should perform collision resolution actions. Finally, the exit program is called when the data group is ended.
- Field merge (*FLDMRG): This method is only available for the update collision point 3, used with keyed replication. If certain rules are met, fields from the after-image are merged with the current image of the file to create a merged record that is written to the file. Each field within the record is checked using the series of algorithms below, in which these abbreviations are used: RUB = before-image of the source file; RUP = after-image of the source file; RCD = current record image of the target file.
  a. If the RUB equals the RUP and the RUB equals the RCD, do not change the RUP field data.
  b. If the RUB equals the RUP and the RUB does not equal the RCD, copy the RCD field data into the RUP record.
  c. If the RUB does not equal the RUP and the RUB equals the RCD, do not change the RUP field data.
  d. If the RUB does not equal the RUP and the RUB does not equal the RCD, fail the field-level merge.
- Applied (*APPLIED): This method is only available for the update collision point 3 and the delete collision point 1. For update collision point 3, the transaction is ignored if the record to be updated already equals the data in the updated record. For delete collision point 1, the transaction is ignored because the record does not exist.

If multiple collision resolution methods are specified and do not resolve the problem, MIMIX will always use *HLDERR as the last resort, placing the file on hold.

Requirements for using collision resolution


To use a collision resolution method other than the default *HLDERR, you must have the following:
- The data group definition used for replication must specify a data group type of *ALL or *DB.


- You must specify either *AUTOSYNC or the name of a collision resolution class for the Collision resolution element of the File entry options (FEOPT) parameter (a hedged command sketch follows this list). Specify the value as follows: to implement collision resolution for all files processed by a data group, specify the value in the parameter within the data group definition; to implement collision resolution for only specific files, specify the value in the parameter within an individual data group file entry.
Note: Ensure that data group activity is ended before you change a data group definition or a data group file entry.

- If you plan to use an exit program for collision resolution, you must first create a named collision resolution class. In the collision resolution class, specify *EXITPGM for each of the collision points that you want to be handled by the exit program and specify the name of the exit program.
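As a hedged illustration, pointing a data group at a named collision resolution class might look like the following. The data group and class names are hypothetical, and the position of the Collision resolution element within FEOPT is an assumption; prompt the CHGDGDFN command (F4) to locate the element on your release rather than relying on the positional form shown.

/* Hedged sketch: use collision resolution class MYCRCLS for all   */
/* files in the data group. FEOPT element positions (including     */
/* the *SAME placeholders) are illustrative; confirm by prompting. */
CHGDGDFN DGDFN(INVDG SYS1 SYS2) FEOPT(*SAME *SAME MYCRCLS)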

Working with collision resolution classes


Do the following to access options for working with collision resolution: 1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter. 2. From the MIMIX Configuration Menu, select option 5 (Work with collision resolution classes) and press Enter. The Work with CR Classes display appears.

Creating a collision resolution class


To create a collision resolution class, do the following:
1. From the Work with CR Classes display, type a 1 (Create) next to the blank line at the top of the display and press Enter.
2. The Create Collision Res. Class (CRTCRCLS) display appears. Specify a name at the Collision resolution class prompt.
3. At each of the collision point prompts on the display, specify the value for the type of collision resolution processing you want to use. Press F1 (Help) to see a description of the collision point.
Note: You can specify more than one method of collision resolution for each prompt by typing a + (plus sign) at the prompt. With the exception of the *HLDERR method, the methods are attempted in the order you specify. If the first method you specify does not successfully resolve the collision, then the next method is run. *HLDERR is always the last method attempted; if all other methods fail, the member is placed on hold due to error.
4. Press Page Down to see additional prompts.
5. At each of the collision point prompts on the second display, specify the value for the type of collision resolution processing you want to use.
6. If you specified *EXITPGM at any of the collision point prompts, specify the name and library of the program to use at the Exit point prompt.


7. At the Number of retry attempts prompt, specify the number of times to try to automatically synchronize a file. If this number is exceeded within the time specified in the Retry time limit, the file will be placed on hold due to error.
8. At the Retry time limit prompt, specify the maximum number of hours to retry a process if a failure occurs due to a locking condition or an in-use condition.
Note: If a file encounters repeated failures, an error condition that requires manual intervention is likely to exist. Allowing excessive synchronization requests can cause communications bandwidth degradation and negatively impact communications performance.
9. To create the collision resolution class, press Enter.

Changing a collision resolution class


To change an existing collision resolution class, do the following: 1. From the Work with CR Classes display, type a 2 (Change) next to the collision resolution class you want and press Enter. 2. The Change CR Class Details display appears. Make any changes you need. Page Down to see all of the prompts. 3. Provide the required values in the appropriate fields. Inspect the default values shown on the display and either accept the defaults or change the value. 4. You can specify as many as 3 values for each collision point prompt. To expand this field for multiple entries, type a plus sign (+) in the entry field opposite the phrase "+ for more" and press Enter. 5. To accept the changes, press Enter.

Deleting a collision resolution class


To delete a collision resolution class, do the following: 1. From the Work with CR Classes display, type a 4 (Delete) next to the collision resolution class you want and press Enter. 2. A confirmation display appears. Verify that the collision resolution class shown on the display is what you want to delete. 3. Press Enter.

Displaying a collision resolution class


To display a collision resolution class, do the following: 1. From the Work with CR Classes display, type a 5 (Display) next to the collision resolution class you want and press Enter. 2. The Display CR Class Details display appears. Press Page Down to see all of the values.


Printing a collision resolution class


Use this procedure to create a spooled file of a collision resolution class, which you can print.
1. From the Work with CR Classes display, type a 6 (Print) next to the collision resolution class you want and press Enter.
2. A spooled file named MXCRCLS is created, on which you can use your standard printing procedure.


Omitting T-ZC content from system journal replication


For logical and physical files configured for replication solely through the system journal, MIMIX provides the ability to prevent replication of predetermined sets of T-ZC journal entries associated with changes to object attributes or content.

Default T-ZC processing: Files that have an object auditing value of *CHANGE or *ALL generate T-ZC journal entries whenever changes to the object attributes or contents occur. The access type field within the T-ZC journal entry indicates what type of change operation occurred. Table 46 lists the T-ZC journal entry access types that are generated by PF-DTA, PF38-DTA, PF-SRC, PF38-SRC, LF, and LF-38 file types.
Table 46. T-ZC journal entry access types generated by file objects. These T-ZC journal entries are eligible for replication through the system journal.

Access Type  Description        Operation Type  Operations that Generate T-ZC Access Type
1            Add                Member          Add member for physical files and logical files (ADDPFM)
7            Change (1)         File, Member    Change Physical File (CHGPF), Change Logical File (CHGLF), Change Physical File Member (CHGPFM), Change Logical File Member (CHGLFM), Change Object Description (CHGOBJD)
10           Clear              Data            Clear member for physical files (CLRPFM)
25           Initialize         Data            Initialize member for physical files (INZPFM)
30           Open               Data            Opening member for write for physical files
36           Reorganize         Data            Reorganize member for physical files (RGZPFM)
37           Remove             Member          Remove member for physical files and logical files (RMVM)
38           Rename             Member          Rename member for physical files and logical files (RNMM)
62           Add constraint     File            Adding constraint for physical files (ADDPFCST)
63           Change constraint  File            Changing constraint for physical files (CHGPFCST)
64           Remove constraint  File            Removing constraint for physical files (RMVPFCST)

1. These T-ZC journal entries may or may not have a member name associated with them. If a member name is associated with the journal entry, the T-ZC is a member operation. If no member name is associated with the journal entry, the T-ZC is assumed to be a file operation.


By default, MIMIX replicates file attributes and file member data for all T-ZC entries generated for logical and physical files configured for system journal replication. While MIMIX recreates attribute changes on the target system, member additions and data changes require MIMIX to replicate the entire object using save, send, and restore processes. This can cause unnecessary replication of data and can impact processing time, especially in environments where the replication of file data transactions is not necessary.

Omitting T-ZC entries: Through the Omit content (OMTDTA) parameter on data group object entry commands, you can specify a predetermined set of access types for *FILE objects to be omitted from system journal replication. T-ZC journal entries with access types within the specified set are omitted from processing by MIMIX. The OMTDTA parameter is useful when a file's or member's data does not need to be replicated. For example, when replicating work files and temporary files, it may be desirable to replicate the file layout but not the file members or data. The OMTDTA parameter can also help you reduce the number of transactions that require substantial processing time to replicate, such as T-ZC journal entries with access type 30 (Open).

Each of the following values for the OMTDTA parameter defines a set of access types that can be omitted from replication:
*NONE - No T-ZCs are omitted from replication. All file, member, and data operations in transactions for the access types listed in Table 46 are replicated. This is the default value.
*MBR - Data operations are omitted from replication. File and member operations in transactions for the access types listed in Table 46 are replicated. Access type 7 (Change) for both file and member operations is replicated.
*FILE - Member and data operations are omitted from replication. Only file operations in transactions for the access types listed in Table 46 are replicated. Only file operations in transactions with access type 7 (Change) are replicated.

Configuration requirements and considerations for omitting T-ZC content


To omit transactions, logical and physical files must be configured for system journal replication and meet these configuration requirements:
The data group definition must specify *ALL or *OBJ for the Data group type (TYPE).
The file for which you want to omit transactions must be identified by a data group object entry that specifies the following: Cooperate with database (COOPDB) must be *NO when Cooperating object types (COOPTYPE) specifies *FILE. If COOPDB is *YES, then COOPTYPE cannot specify *FILE. Omit content (OMTDTA) must be either *FILE or *MBR.

Object auditing value considerations - The file must have an object auditing value of *CHANGE or *ALL in order for any T-ZC journal entry resulting from a change operation to be created in the system journal. To ensure that changes to the file continue to be journaled and replicated, the data group object entry should also specify *CHANGE or *ALL for the Object auditing value (OBJAUD) parameter. For all library-based objects, MIMIX evaluates the object auditing level when starting a data group after a configuration change. If the configured value specified for the OBJAUD parameter is higher than the object's actual value, MIMIX changes the object to use the higher value. If you use the SETDGAUD command to force the object to have an auditing level of *NONE and the data group object entry also specifies *NONE, any changes to the file will no longer generate T-ZC entries in the system journal. For more information about object auditing, see Managing object auditing on page 57.

Object attribute considerations - When MIMIX evaluates a system journal entry and finds a possible match to a data group object entry which specifies an attribute in its Attribute (OBJATR) parameter, MIMIX must retrieve the attribute from the object in order to determine which object entry is the most specific match. If the object attribute is not needed to determine the most specific match to a data group object entry, it is not retrieved. After determining which data group object entry has the most specific match, MIMIX evaluates that entry to determine how to proceed with the journal entry. When the matching object entry specifies *FILE or *MBR for OMTDTA, MIMIX does not need to consider the object attribute in any other evaluations. As a result, the performance of the object send job may improve.
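For example, a data group object entry such as the following would meet these requirements for work files whose member data does not need to be replicated. The data group, system, and library names (MYDG, SYS1, SYS2, WORKLIB) are examples only:

   ADDDGOBJE DGDFN(MYDG SYS1 SYS2) LIB1(WORKLIB) OBJ1(*ALL)
             OBJTYPE(*FILE) COOPDB(*NO) OMTDTA(*MBR) OBJAUD(*CHANGE)

With this entry, file and member operations for the selected files are still replicated through the system journal, but data operations (access types 10, 25, 30, and 36) are omitted.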
Updated for 5.0.03.00.

Omit content (OMTDTA) and cooperative processing


The OMTDTA and COOPDB parameters are mutually exclusive. MIMIX allows only a value of *NONE for OMTDTA when a data group object entry specifies cooperative processing of files with COOPDB(*YES) and COOPTYPE(*FILE). When using MIMIX Dynamic Apply for cooperative processing, logical files and physical files (source and data) are replicated primarily through the user journal. Legacy cooperative processing replicates only physical data files. When using legacy cooperative processing, system journal replication processes select only file attribute transactions. File attribute transactions are T-ZC journal entries with access types 7 (Change), 62 (Add constraint), 63 (Change constraint), and 64 (Remove constraint). These transactions are replicated by system journal replication during legacy cooperative processing, while most other transactions are replicated by user journal replication.
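For example (library and object names are examples only), the following two entries show the distinction. The first cooperatively processes a physical file through the user journal and therefore must leave OMTDTA at *NONE; the second replicates a file solely through the system journal and can omit content:

   ADDDGOBJE DGDFN(MYDG SYS1 SYS2) LIB1(APPLIB) OBJ1(ORDERS)
             OBJTYPE(*FILE) COOPDB(*YES) COOPTYPE(*FILE) OMTDTA(*NONE)
   ADDDGOBJE DGDFN(MYDG SYS1 SYS2) LIB1(APPLIB) OBJ1(WRKFIL)
             OBJTYPE(*FILE) COOPDB(*NO) OMTDTA(*FILE)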
Updated for 5.0.03.00.

Omit content (OMTDTA) and comparison commands


All T-ZC journal entries for files are replicated when *NONE is specified for the OMTDTA parameter. However, when OMTDTA is enabled by specifying *FILE or *MBR, some T-ZC journal entries for file objects are omitted from system journal replication. This may affect whether replicated files on the source and target systems are identical.

For example, recall how a file with an object auditing attribute value of *NONE is processed. After MIMIX replicates the initial creation of the file through the system journal, the file on the target system reflects the original state of the file on the source system when it was retrieved for replication. However, any subsequent changes to file data are not replicated to the target system. According to the configuration information, the files are synchronized between source and target systems, but the files are not the same.

A similar situation can occur when OMTDTA is used to prevent replication of predetermined types of changes. For example, if *MBR is specified for OMTDTA, the file and member attributes are replicated to the target system but the member data is not. The file is not identical between source and target systems, but it is synchronized according to configuration. Comparison commands report these attributes as *EC (equal configuration) even though member data is different. MIMIX audits, which call comparison commands with a data group specified, have the same results. Running a comparison command without specifying a data group reports all the synchronized-but-not-identical attributes as *NE (not equal) because no configuration information is considered.

Consider how the following comparison commands behave when faced with non-identical files that are synchronized according to the configuration:
The Compare File Attributes (CMPFILA) command has access to configuration information from data group object entries for files configured for system journal replication. When a data group is specified on the command, files that are configured to omit data will report those omitted attributes as *EC (equal configuration). When CMPFILA is run without specifying a data group, the synchronized-but-not-identical attributes are reported as *NE (not equal).
The Compare File Data (CMPFILDTA) command uses data group file entries for configuration information. As a result, when a data group is specified on the command, any file objects configured for OMTDTA will not be compared. When CMPFILDTA is run without specifying a data group, the synchronized-but-not-identical file member attributes are reported as *NE (not equal).
The Compare Object Attributes (CMPOBJA) command can be used to check for the existence of a file on both systems and to compare its basic attributes (those which are common to all object types). This command never compares file-specific attributes or member attributes and should not be used to determine whether a file is synchronized.


Selecting an object retrieval delay


When replicating objects, particularly documents (*DOC) and stream files (*STMF), MIMIX obtains a lock on the object that can prevent your applications from accessing the object in a timely manner. Some of your applications may be unable to recover from this condition and may fail in an unexpected manner. You can reduce or eliminate contention for an object between MIMIX and your applications if object retrieval processing is delayed for a predetermined amount of time before obtaining a lock on the object to retrieve it for replication.

You can use the Object retrieval delay element within the Object processing parameter on the change or create data group definition commands to set the delay between the time the object was last changed on the source system and the time MIMIX attempts to retrieve the object on the source system. Although you can specify this value at the data group level, you can override the data group value at the object level by specifying an Object retrieval delay value on the commands for creating or changing data group entries. You can specify a delay time from 0 through 999 seconds. The default is 0.

If the object retrieval latency time (the difference between when the object was last changed and the current time) is less than the configured delay value, then MIMIX delays its object retrieval processing until the difference between the time the object was last changed and the current time exceeds the configured delay value. If the object retrieval latency time is greater than the configured delay value, MIMIX does not delay and continues with the object retrieval processing.

Object retrieval delay considerations and examples


You should use care when choosing the object retrieval delay. A long delay may impact the ability of system journal replication processes to move data from a system in a timely manner. Too short a delay may allow MIMIX to retrieve an object before an application is finished with it. You should make the value large enough to reduce or eliminate contention between MIMIX and applications, but small enough to allow MIMIX to maintain a suitable high availability environment.

Example 1 - The object retrieval delay value is configured to be 3 seconds:
Object A is created or changed at 9:05:10.
The Object Retrieve job encounters the create/change journal entry at 9:05:14. It retrieves the last change date/time attribute from the object and determines that the delay time (object last changed date/time of 9:05:10 + configured delay value of :03 = 9:05:13) is less than the current date/time (9:05:14).
Because the object retrieval delay time has already been exceeded, the object retrieve job continues normal processing and attempts to package the object.

Example 2 - The object retrieval delay value is configured to be 2 seconds:
Object A is created or changed at 10:45:51.
The Object Retrieve job encounters the create/change journal entry at 10:45:52. It retrieves the last change date/time attribute from the object and determines that the delay time (object last changed date/time of 10:45:51 + configured delay value of :02 = 10:45:53) exceeds the current date/time (10:45:52).
Because the object retrieval delay value has not been met or exceeded, the object retrieve job delays for 1 second to satisfy the configured delay value.
After the delay (at time 10:45:53), the Object Retrieve job again retrieves the last change date/time attribute from the object and determines that the delay time (object last changed date/time of 10:45:51 + configured delay value of :02 = 10:45:53) is equal to the current date/time (10:45:53).
Because the object retrieval delay value has been met, the object retrieve job continues with normal processing and attempts to package the object.

Example 3 - The object retrieval delay value is configured to be 4 seconds:
Object A is created or changed at 13:20:26.
The Object Retrieve job encounters the create/change journal entry at 13:20:27. It retrieves the last change date/time attribute from the object and determines that the delay time (object last changed date/time of 13:20:26 + configured delay value of :04 = 13:20:30) exceeds the current date/time (13:20:27) and delays for 3 seconds to satisfy the configured delay value.
While the object retrieve job is waiting to satisfy the configured delay value, the object is changed again at 13:20:28.
After the delay (at time 13:20:30), the Object Retrieve job again retrieves the last change date/time attribute from the object and determines that the delay time (object last changed date/time of 13:20:28 + configured delay value of :04 = 13:20:32) again exceeds the current date/time (13:20:30) and delays for 2 seconds to satisfy the configured delay value.
After the delay (at time 13:20:32), the Object Retrieve job again retrieves the last change date/time attribute from the object and determines that the delay time (object last changed date/time of 13:20:28 + configured delay value of :04 = 13:20:32) is equal to the current date/time (13:20:32).
Because the object retrieval delay value has now been met, the object retrieve job continues with normal processing and attempts to package the object.


Configuring to replicate SQL stored procedures and user-defined functions


DB2 UDB for System i5 supports external stored procedures and SQL stored procedures. This information is specifically for replicating SQL stored procedures and user-defined functions. SQL stored procedures are defined entirely in SQL and may contain SQL control statements. MIMIX can replicate operations related to stored procedures that are written in SQL (SQL stored procedures), such as CREATE PROCEDURE (create), DROP PROCEDURE (delete), GRANT PRIVILEGES ON PROCEDURE (authority), and REVOKE PRIVILEGES ON PROCEDURE (authority).

An SQL procedure is a program created and linked to the database as the result of a CREATE PROCEDURE statement that specifies the language SQL and is called using the SQL CALL statement. For example, the following statement creates program SQLPROC in LIBX and establishes it as a stored procedure associated with LIBX:

   CREATE PROCEDURE LIBX/SQLPROC(OUT NUM INT)
     LANGUAGE SQL
     SELECT COUNT(*) INTO NUM FROM FILEX

For SQL stored procedures, an independent program object is created by the system and contains the code for the procedure. The program object usually shares the name of the procedure and resides in the same library with which the procedure is associated. A DROP PROCEDURE statement for an SQL procedure removes the procedure from the catalog and deletes the external program object.

Procedures are associated with a particular library. Because information about the procedure is stored in the database catalog and not the library, it cannot be seen by looking at the library. Use System i5 Navigator to view the stored procedures associated with a particular library (select Databases > Libraries).
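The authority and delete operations that MIMIX can replicate follow standard SQL syntax. As a sketch (SOMEUSER is a hypothetical user profile), statements such as the following each generate activity that MIMIX can replicate for the SQLPROC procedure created above:

   GRANT EXECUTE ON PROCEDURE LIBX/SQLPROC TO SOMEUSER
   REVOKE EXECUTE ON PROCEDURE LIBX/SQLPROC FROM SOMEUSER
   DROP PROCEDURE LIBX/SQLPROC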

Requirements for replicating SQL stored procedure operations


The following configuration requirements and restrictions must be met:
Apply any IBM PTFs (or their supersedes) associated with i5/OS releases as they pertain to your environment. Log in to Support Central and refer to the Technical Documents page for a list of required and recommended IBM PTFs.
To correctly replicate a create operation, name mapping cannot be used for either the library or program name.
GRANT and REVOKE operations only affect the associated program object. MIMIX replicates these operations correctly.
The COMMENT statement cannot be replicated.

To replicate SQL stored procedure operations


Do the following:
1. Ensure that the replication requirements for the various operations are followed. See Requirements for replicating SQL stored procedure operations on page 393.


2. Ensure that you have a data group object entry that includes the associated program object. For example:

   ADDDGOBJE DGDFN(name system1 system2) LIB1(library) OBJ1(*ALL) OBJTYPE(*PGM)



Using Save-While-Active in MIMIX


MIMIX system journal replication processes use save/restore when replicating most types of objects. If there is conflict for the use of an object between MIMIX and some other process, the initial save of the object may fail. When such a failure occurs, MIMIX attempts to process the object by automatically starting delay or retry processing using the values configured in the data group definition. For the initial save of *FILE objects, save-while-active capabilities are used unless disabled. By default, save-while-active is only used when saving *FILE objects; it is not used when saving other library-based object types, DLOs, or IFS objects. However, you can specify to have MIMIX attempt saves of DLOs and IFS objects using save-while-active.

Values for retry processing are specified in the First retry delay interval (RTYDLYITV1) and Number of times to retry (RTYNBR) parameters in the data group definition. After the initial failed save attempt, MIMIX delays for the number of seconds specified in the RTYDLYITV1 value before retrying the save operation. This is repeated for the number of times specified for the RTYNBR value in the data group definition. If the object cannot be saved after the attempts specified in RTYNBR, then MIMIX uses the delay interval value specified in the RTYDLYITV2 parameter. The save is then attempted for the number of retries specified in the RTYNBR parameter. For the initial default values for a data group, this calculates to 7 save attempts (1 initial attempt, 3 attempts using the first delay value of 5 seconds, and 3 attempts using the second delay value of 300 seconds), in a time frame of approximately 20 minutes. For more information on retry processing, see the parameters for automatic retry processing in Tips for data group parameters on page 234.

Considerations for save-while-active


If a file is being saved and it shares a journal with another file that has uncommitted transactions, the file may be successfully saved by using a normal (non-save-while-active) save. This assumes that the file being saved does not have uncommitted transactions. If you disable save-while-active, attempts to save any type of object will use a normal save.

In addition to providing the ability to enable the use of save-while-active for object types other than *FILE, MIMIX provides the ability to control the wait time when using save-while-active or to disable the use of save-while-active for all object types.

Save-while-active wait time: For the default (*FILE objects), MIMIX uses save-while-active with a wait time of 120 seconds on the initial save attempt. MIMIX then uses normal (non-save-while-active) processing on all subsequent save attempts if the initial save attempt fails. You can configure the save-while-active wait time when specifying to use save-while-active for the initial save attempt of a *FILE, a DLO, or an IFS object. When specifying to use save-while-active, the first attempt to save the object after delaying the amount of time configured for the Second retry delay interval (RTYDLYITV2) value will also use save-while-active. All other attempts to save the object will use a normal save.

Note: Although MIMIX has the capability to replicate DLOs using save/restore techniques, it is recommended that DLOs be replicated using optimized techniques, which can be configured using the DLO transmission method under Object processing in the data group definition.

Types of save-while-active options


MIMIX uses the configuration value (DGSWAT) to select the type of save-while-active option to be used when saving objects. You can view and change this configuration value for a data group through an interface such as SQL or DFU.

DGSWAT: Save-while-active type. You can specify the following values:
A value of 0 (the default) indicates that save-while-active is to be used when saving files, with a save-while-active wait time of 120 seconds. For DLOs and IFS objects, a normal save will be attempted.
A value of 1 through 99999 indicates that save-while-active is to be used when saving files, DLOs, and IFS objects. The value specified will be used as the save-while-active wait time, such as when passed to the SAVACTWAIT parameter on the SAVOBJ and SAVDLO commands.
A value of -1 indicates that save-while-active is disabled and is not to be used when saving files, DLOs, or IFS objects. Normal saves will always be used to save any type of object.

Example configurations
The following examples describe the SQL statements that could be used to view or set the configuration settings for a data group definition (data group name, system 1 name, system 2 name) of MYDGDFN, SYS1, SYS2.

Example - Viewing: Use this SQL statement to view the values for the data group definition:

   SELECT DGDGN, DGSYS, DGSYS2, DGSWAT FROM MIMIX/DM0200P
     WHERE DGDGN='MYDGDFN' AND DGSYS='SYS1' AND DGSYS2='SYS2'

Example - Disabling: To modify the values for a data group definition to disable use of save-while-active for a data group and use a normal save, you could use the following statement:

   UPDATE MIMIX/DM0200P SET DGSWAT=-1
     WHERE DGDGN='MYDGDFN' AND DGSYS='SYS1' AND DGSYS2='SYS2'

Example - Modifying: To modify a data group definition to enable use of save-while-active with a wait time of 30 seconds for files, DLOs, and IFS objects, you could use the following statement:

   UPDATE MIMIX/DM0200P SET DGSWAT=30
     WHERE DGDGN='MYDGDFN' AND DGSYS='SYS1' AND DGSYS2='SYS2'

Note: You only have to make this change on the management system; the network system is automatically updated by MIMIX.


Chapter 17

Object selection for Compare and Synchronize commands


Many of the Compare and Synchronize commands, which provide underlying support for MIMIX AutoGuard, use an enhanced set of common parameters and a common processing methodology that is collectively referred to as object selection. Object selection provides powerful, granular capability for selecting objects by data group, object selection parameter, or a combination. The following commands use the MIMIX object selection capability:
Compare File Attributes (CMPFILA)
Compare Object Attributes (CMPOBJA)
Compare IFS Attributes (CMPIFSA)
Compare DLO Attributes (CMPDLOA)
Compare File Data (CMPFILDTA)
Compare Record Count (CMPRCDCNT)
Synchronize Object (SYNCOBJ)
Synchronize IFS Object (SYNCIFS)
Synchronize DLO (SYNCDLO)

The topics in this chapter include:
Object selection process on page 399 describes how object selection interacts with your input from a command so that the objects you expect are selected for processing.
Parameters for specifying object selectors on page 402 describes object selectors and elements, which allow you to work with classes of objects.
Object selection examples on page 407 provides examples and graphics with detailed information about object selection processing, object order precedence, and subtree rules.
Report types and output formats on page 418 describes the output of compare commands: spooled files and output files (outfiles).

Object selection process


It is important to be able to predict the manner in which object selection interacts with your input from a command so that the objects you expect are selected for processing. The object selection capability provides you with the option to select objects by data group, object selection parameter, or a combination. Object selection supports four classes of objects: files, objects, IFS objects, and DLOs.


The object selection process takes a candidate group of objects, subsets them as defined by a list of object selectors, and produces a list of objects to be processed. Figure 24 illustrates the process flow for object selection.
Figure 24. Object selection process flow

Candidate objects are those objects eligible for selection. They are input to the object selection process. Initially, candidate objects consist of all objects on the system. Based on the command, the set of candidate objects may be narrowed down to objects of a particular class (such as IFS objects). The values specified on the command determine the object selectors used to further refine the list of candidate objects in the class.

An object selector identifies an object or group of objects. Object selectors can come from the configuration information for a specified data group, from items specified in the object selector parameter, or both.

MIMIX processing for object selection consists of two distinct steps. Depending on what is specified on the command, one or both steps may occur.

The first major selection step is optional and is performed only if a data group definition is entered on the command. In that case, data group entries are the source for object selectors. Data group entries represent one of four classes of objects: files, library-based objects, IFS objects, and DLOs. Only those entries that correspond to the class associated with the command are used. The data group entries subset the list of candidate objects for the class to only those objects that are eligible for replication by the data group. If the command specifies a data group and items on the object selection parameter, the data group entries are processed first to determine an intermediate set of candidate objects that are eligible for replication by the data group. That intermediate set is input to the second major selection step. The second step then uses the input specified on the object selection parameter to further subset the objects selected by the data group entries. If no data group is specified on the data group definition parameter, the object selection parameter can be used independently to select from all objects on the system.

The second major object selection step subsets the candidate objects based on object selectors from the command's object selector parameter (file, object, IFS object, or DLO). Up to 300 object selectors may be specified on the parameter. If none are specified, the default is to select all candidate objects.

Note: A single object selector can select multiple objects through the use of generic names and special values such as *ALL, so the resulting object list can easily exceed the limit of 300 object selectors that can be entered on a command. The selection parameter is separate and distinct from the data group configuration entries. If a data group is specified, the possible object selectors are 1 to N, where N is defined by the number of data group entries.

The remaining candidate objects make up the resultant list of objects to be processed.

Each object selector consists of multiple object selector elements, which serve as filters on the object selector. The object selector elements vary by object class. Elements provide information about the object such as its name, an indicator of whether the objects should be included in or omitted from processing, and name mapping for dual-system and single-system environments. See Table 47 for a list of object selector elements by object class.

Order precedence
Object selectors are always processed in a well-defined sequence, which is important when an object matches more than one selector.


Selectors from a data group follow data group rules and are processed in most- to least-specific order. Selectors from the object selection parameter are always processed last to first. If a candidate object matches more than one object selector, the last matching selector in the list is used. As a general rule when specifying items on an object selection parameter, first specify selectors that have a broad scope and then gradually narrow the scope in subsequent selectors. In an IFS-based command, for example, include /A/B* and then omit /A/B1. Object selection examples on page 407 illustrates the precedence of object selection.

For each object selector, the elements are checked according to a priority defined for the object class. The most specific element is checked for a match first, then the subsequent elements are checked according to their priority. For additional, detailed information about order precedence and priority of elements, see the following topics:
How MIMIX uses object entries to evaluate journal entries for replication on page 101
Identifying IFS objects for replication on page 118
How MIMIX uses DLO entries to evaluate journal entries for replication on page 124
Processing variations for common operations on page 130

Parameters for specifying object selectors


The object selectors and elements allow you to work with classes of objects. These objects can be library-based, directory-based, or folder-based. An object selector consists of several elements that identify an object or group of objects, indicate whether those objects should be included in or omitted from processing, and may describe name mapping for those objects. The elements vary, depending on the class of objects with which a particular command works.

Library-based selection allows you to work with files or objects based on object name, library name, member name, object type, or object attribute. Directory-based selection allows you to work with objects based on an IFS object path name and includes a subtree option that determines the scope of directory-based objects to include. Folder-based selection allows you to work with objects based on DLO path name. Folder-based selection also includes a subtree object selector.

Object selection supports generic object name values for all object classes. A generic name is a character string that contains one or more characters followed by an asterisk (*). When a generic name is specified, all candidate objects that match the generic name are selected.

For all classes of objects, you can specify as many as 300 object selectors. However, the specific object selector elements that you can specify on the command are determined by the class of object. Object selector elements provide four functions:
Object identification elements define the selected object by name, including generic name specifications.
Filtering elements provide additional filtering capability for candidate objects.
Name mapping elements are required primarily for environments where objects exist in different libraries or paths.
Include or omit elements identify whether the object should be processed or explicitly excluded from processing.

Table 47 lists object selection elements by function and identifies which elements are available on the commands.
Table 47. Object selection parameters and parameter elements by class

Class:                    File                  Library-based object  IFS                    DLO
Commands:                 CMPFILA, CMPFILDTA,   CMPOBJA, SYNCOBJ      CMPIFSA, SYNCIFS       CMPDLOA, SYNCDLO
                          CMPRCDCNT (1)
Parameter:                FILE                  OBJ                   OBJ                    DLO
Identification elements:  File, Library,        Object, Library       Path, Subtree,         Path, Subtree,
                          Member                                      Name Pattern           Name Pattern
Filtering elements:       Attribute (1)         Type, Attribute       Type                   Type, Owner
Processing elements:      Include/Omit          Include/Omit          Include/Omit           Include/Omit
Name mapping elements:    System 2 file (1),    System 2 object,      System 2 path,         System 2 path,
                          System 2 library (1)  System 2 library      System 2 name pattern  System 2 name pattern

1. The Compare Record Count (CMPRCDCNT) command does not support elements for attributes or name mapping.

File name and object name elements: The File name and Object name elements allow you to identify a file or object by name. These elements allow you to choose a specific name, a generic name, or the special value *ALL. Using a generic name, you can select a group of files or objects based on a common character string. If you want to work with all objects beginning with the letter A, for example, you would specify A* for the object name. To process all files within the related selection criteria, select *ALL for the file or object name. When a data group is also specified on the command, a value of *ALL results in the selection of files and objects defined to that data group by the respective data group file entries or data group object entries. When no data group is specified on the command, specifying *ALL and a library name selects only the objects that reside within the given library.

Library name element: The Library name element specifies the name of the library that contains the files or objects to be included or omitted from the resultant list of objects. Like the file or object name, this element allows you to define a library by a specific name, a generic name, or the special value *ALL.
Note: The library value *ALL is supported only when a data group is specified.

Member element: For commands that support the ability to work with file members, the Member element provides a means to select specific members. The Member element can be a specific name, a generic name, or the special value *ALL. Refer to the individual commands for detailed information on member processing.

Object path name (IFS) and DLO path name elements: The Object path name (IFS) and DLO path name elements identify an object or DLO by path name. They allow a specific path, a generic path, or the special value *ALL. Traditionally, DLOs are identified by a folder path and a DLO name. Object selection uses an element called DLO path, which combines the folder path and the DLO name. If you specify a data group, only those objects defined to that data group by the respective data group IFS entries or data group DLO entries are selected.

Directory subtree and folder subtree elements: The Directory subtree and Folder subtree elements allow you to expand the scope of selected objects and include the descendants of objects identified by the given object or DLO path name. By default, the subtree element is *NONE, and only the named objects are selected. However, if *ALL is used, all descendants of the named objects are also selected. Figure 25 illustrates the hierarchical structure of folders and directories prior to processing, and is used as the basis for the path, pattern, and subtree examples shown later in this document. For more information, see the graphics and examples beginning with Example subtree on page 410.
Figure 25. Directory or folder hierarchy


Directory subtree elements for IFS objects: When selecting IFS objects, only the objects in the specified file system are included. Object selection does not cross file system boundaries when processing subtrees with IFS objects. Objects from other file systems do not need to be explicitly excluded; however, you must explicitly specify any objects from other file systems that you want to include. For more information, see the graphic and examples beginning with Example subtree for IFS objects on page 415.

Name pattern element: The Name pattern element provides a filter on the last component of the object path name. The Name pattern element can be a specific name, a generic name, or the special value *ALL. If you specify a pattern of $*, for example, only those candidate objects with names beginning with $ that reside in the named DLO path or IFS object path are selected. Keep in mind that improper use of the Name pattern element can have undesirable results. Assume you specified a path name of /corporate, a subtree of *NONE, and a pattern of $*. Since the path name, /corporate, does not match the pattern of $*, the object selector will identify no objects. Thus, the Name pattern element is generally most useful when subtree is *ALL. For more information, see Example Name pattern on page 414.

Object type element: The Object type element provides the ability to filter objects based on an object type. The object type is valid for library-based objects, IFS objects, or DLOs, and can be a specific value or *ALL. The list of allowable values varies by object class. When you specify *ALL, only those object types which MIMIX supports for replication are included. For a list of replicated object types, see Supported object types for system journal replication on page 549. Supported object types for CMPIFSA and SYNCIFS are listed in Table 48.
Table 48. Supported object types for CMPIFSA and SYNCIFS

Object type   Description
*ALL          All directories, stream files, and symbolic links are selected
*DIR          Directories
*STMF         Stream files
*SYMLNK       Symbolic links

Supported object types for CMPDLOA and SYNCDLO are listed in Table 49.
Table 49. Supported DLO types for CMPDLOA and SYNCDLO

DLO type   Description
*ALL       All documents and folders are selected
*DOC       Documents
*FLR       Folders


For unique object types supported by a specific command, see the individual commands.

Object attribute element: The Object attribute element provides the ability to filter based on extended object attribute. For example, file attributes include PF, LF, SAVF, and DSPF, and program attributes include CLP and RPG. The attribute can be a specific value, a generic value, or *ALL. Although any value can be entered on the Object attribute element, a list of supported attributes is available on the command. Refer to the individual commands for the list of supported attributes.

Owner element: The Owner element allows you to filter DLOs based on DLO owner. The Owner element can be a specific name or the special value *ALL. Only candidate DLOs owned by the designated user profile are selected.

Include or omit element: The Include or omit element determines whether candidate objects are included in or omitted from the resultant list of objects to be processed by the command. Included entries are added to the resultant list and become candidate objects for further processing. Omitted entries are not added to the list and are excluded from further processing.

System 2 file and system 2 object elements: The System 2 file and System 2 object elements provide support for name mapping. Name mapping is useful when working with multiple sets of files or objects in a dual-system or single-system environment. This element may be a specific name or the special value *FILE1 for files or *OBJ1 for objects. If the File or Object element is not a specific name, then you must use the default value of *FILE1 or *OBJ1. This specification indicates that the name of the file or object on system 2 is the same as on system 1 and that no name mapping occurs. Generic values are not supported for the system 2 value if a generic value was specified on the File or Object parameter.

System 2 library element: The System 2 library element allows you to specify a system 2 library name that differs from the system 1 library name, providing name mapping between files or objects in different libraries. This element may be a specific name or the special value *LIB1. If the System 2 library element is not a specific name, then you must use the default value of *LIB1. This specification indicates that the name of the library on system 2 is the same as on system 1 and that no name mapping occurs. Generic values are not supported for the system 2 value if a generic value was specified on the Library object selector.

System 2 object path name and system 2 DLO path name elements: The System 2 object path name and System 2 DLO path name elements support name mapping for the path specified in the Object path name or DLO path name element. Name mapping is useful when working with two sets of IFS objects or DLOs in different paths in either a dual-system or single-system environment. Generic values are not supported for the system 2 value if you specified a generic value for the IFS Object or DLO element. Instead, you must choose the default values of *OBJ1 for IFS objects or *DLO1 for DLOs. These values indicate that the name of the file or object on system 2 is the same as that value on system 1. The default provides support for a two-system environment without name mapping.

System 2 name pattern element: The System 2 name pattern element provides support for name mapping for the descendants of the path specified for the Object path name or DLO path name element. The System 2 name pattern element may be a specific name or the special value *PATTERN1. If the Object path name or DLO path name element is not a specific name, then you must use the default value of *PATTERN1. This specification indicates that no name mapping occurs. Generic values are not supported for the System 2 name pattern element if you specified a generic value for the Name pattern element.

Object selection examples


In this section, examples and graphics provide you with detailed information about object selection processing, object order precedence, and subtree rules. These illustrations show how objects are selected based on specific selection criteria.

Processing example with a data group and an object selection parameter


Using the CMPOBJA command, let us assume you want to compare the objects defined to data group DG1. For simplicity, all candidate objects in this example are defined to library LIBX. Table 50 lists all candidate objects on your system.
Table 50. Candidate objects on system

Object   Library   Object type
ABC      LIBX      *FILE
AB       LIBX      *SBSD
A        LIBX      *OUTQ
DEF      LIBX      *PGM
DE       LIBX      *DTAARA
D        LIBX      *CMD

Next, Table 51 represents the object selectors based on the data group object entry configuration for data group DG1. Objects are evaluated against data group entries in the same order of precedence used by replication processes.
Table 51. Object selectors from data group entries for data group DG1

Object   Library   Object type   Include or omit   Order Processed
A*       LIBX      *ALL          *INCLUDE          3
ABC*     LIBX      *FILE         *OMIT             2
DEF      LIBX      *JOBQ         *INCLUDE          1

The object selectors from the data group subset the candidate object list, resulting in the list of objects defined to the data group shown in Table 52. This list is internal to MIMIX and not visible to users.
Table 52. Objects selected by data group DG1

Object   Library   Object type
A        LIBX      *OUTQ
AB       LIBX      *SBSD
DEF      LIBX      *JOBQ

Note: Although job queue DEF in library LIBX did not appear in Table 50, it would be added to the list of candidate objects when you specify a data group for some commands that support object selection. These commands are required to identify or report candidate objects that do not exist.

Perhaps you now want to include or omit specific objects from the filtered candidate objects listed in Table 52. Table 53 shows the object selectors to be processed based on the values specified on the object selection parameter. These object selectors serve as an additional filter on the candidate objects.
Table 53. Object selectors for CMPOBJA object selection parameter

Object   Library   Object type   Include or omit   Order Processed
*ALL     LIBX      *OUTQ         *INCLUDE          1
*ALL     LIBX      *SBSD         *INCLUDE          2
*ALL     LIBX      *JOBQ         *OMIT             3

The objects compared by the CMPOBJA command are shown in Table 54. These are the result of the candidate objects selected by the data group (Table 52) that were subsequently filtered by the object selectors specified for the Object parameter on the CMPOBJA command (Table 53).
Table 54. Resultant list of objects to be processed

Object   Library   Object type
A        LIBX      *OUTQ
AB       LIBX      *SBSD
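For illustration, the comparison in this example might be requested as shown below. The element order within each OBJ entry is written here as (object, library, object type, include/omit) for readability; prompt the command (F4) for the actual element order and defaults, and note that the system names in the data group definition are examples only:

   CMPOBJA DGDFN(DG1 SYS1 SYS2)
           OBJ((*ALL LIBX *OUTQ *INCLUDE)
               (*ALL LIBX *SBSD *INCLUDE)
               (*ALL LIBX *JOBQ *OMIT))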

In this next example, the CMPOBJA command is used to compare a set of objects. The input source is a selection parameter. No data group is specified.


The data in the following tables show how candidate objects would be processed in order to achieve a resultant list of objects. Table 55 lists all the candidate objects on your system.
Table 55. Candidate objects on system

Object   Library   Object type
ABC      LIBX      *FILE
AB       LIBX      *SBSD
A        LIBX      *OUTQ
DEFG     LIBX      *PGM
DEF      LIBX      *PGM
DE       LIBX      *DTAARA
D        LIBX      *CMD

Table 56 represents the object selectors chosen on the object selection parameter. The sequence column identifies the order in which object selectors were entered. The object selectors serve as filters to the candidate objects listed in Table 55. The last object selector entered on the command is the first one used when determining whether or not an object matches a selector. Thus, generic object selectors with the broadest scope, such as A*, should be specified ahead of more specific generic entries, such as ABC*. Specific entries should be specified last.
Table 56. Object selectors entered on CMPOBJA selection parameter

Sequence Entered   Object   Library   Object type   Include or omit
1                  A*       LIBX      *ALL          *INCLUDE
2                  D*       LIBX      *ALL          *INCLUDE
3                  ABC*     LIBX      *ALL          *OMIT
4                  *ALL     LIBX      *PGM          *OMIT
5                  DEFG     LIBX      *PGM          *INCLUDE

Table 57 illustrates how the candidate objects are selected.


Table 57. Candidate objects selected by object selectors

Sequence Processed   Object   Library   Object type   Include or omit   Selected candidate objects
5                    DEFG     LIBX      *PGM          *INCLUDE          DEFG
4                    *ALL     LIBX      *PGM          *OMIT             DEF
3                    ABC*     LIBX      *ALL          *OMIT             ABC
2                    D*       LIBX      *ALL          *INCLUDE          D, DE
1                    A*       LIBX      *ALL          *INCLUDE          A, AB

Table 58 represents the included objects from Table 57. This filtered set of candidate objects is the resultant list of objects to be processed by the CMPOBJA command.
Table 58. Resultant list of objects to be processed

Object   Library   Object type
A        LIBX      *OUTQ
AB       LIBX      *SBSD
D        LIBX      *CMD
DE       LIBX      *DTAARA
DEFG     LIBX      *PGM
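The selectors in Table 56 might similarly be entered as follows, again with the element order shown for readability rather than as exact command syntax:

   CMPOBJA DGDFN(*NONE)
           OBJ((A* LIBX *ALL *INCLUDE)
               (D* LIBX *ALL *INCLUDE)
               (ABC* LIBX *ALL *OMIT)
               (*ALL LIBX *PGM *OMIT)
               (DEFG LIBX *PGM *INCLUDE))

Because selectors are processed last to first, the specific DEFG entry overrides the *ALL *PGM omit, which in turn overrides the broader generic includes.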

Example subtree
In the following graphics, the shaded area shows the objects identified by the combination of the Object path name and Subtree elements of the Object parameter for an IFS command. Circled objects represent the final list of objects selected for processing.


Figure 26 illustrates a path name value of /corporate/accounting, a subtree specification of *ALL, a pattern value of *ALL, and an object type of *ALL. The candidate objects selected include /corporate/accounting and all descendants.
Figure 26. Directory of /corporate/accounting/

Figure 27 shows a path name of /corporate/accounting/*, a subtree specification of *NONE, a pattern value of *ALL, and an object type of *ALL. In this case, no additional filtering is performed on the objects identified by the path and subtree. The candidate objects selected consist of the specified objects only.
Figure 27. Subtree *NONE for /corporate/accounting/*


Figure 28 displays a path name of /corporate/accounting/*, a subtree specification of *ALL, a pattern value of *ALL, and an object type of *ALL. All descendants of /corporate/accounting/* are selected.
Figure 28. Subtree *ALL for /corporate/accounting/*


Figure 29 is a subset of Figure 28. Figure 29 shows a path name of /corporate/accounting, a subtree specification of *NONE, a pattern value of *ALL, and an object type of *ALL, where only the specified directory is selected.
Figure 29. Subtree *NONE for /corporate/accounting

Example Name pattern


The Name pattern element acts as a filter on the last component of the object path name. Figure 30 specifies a path name of /corporate/accounting, a subtree specification of *ALL, a pattern value of $*, and an object type of *ALL. In this scenario, only those candidate objects which match the generic pattern value ($123, $236, and $895) are selected for processing.
Figure 30. Pattern $* for /corporate/accounting

Example subtree for IFS objects


In the following graphic, the shaded areas show file systems containing IFS objects. When selecting objects in file systems that contain IFS objects, only the objects in the file system specified will be included. The non-generic part of a path name indicates the file system to be searched. Object selection does not cross file system boundaries when processing subtrees with IFS objects.


Figure 31 illustrates a directory with a subtree that contains IFS objects. The shaded areas are the file systems. Table 59 contains examples showing what file systems would be selected with the path names specified and a subtree specification of *ALL.
Figure 31. Directory with a subtree containing IFS objects.

Table 59. Examples of specified paths and objects selected for Figure 31

Path specified   File system                                 Objects selected
/qsy*            Root file system                            /qsyabc
/PARIS/*         Root file system in independent ASP PARIS   /PARIS/qsyabc
/PARIS*          Root file system                            None


Report types and output formats


The following compare commands support output in spooled files and in output files (outfiles): the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA, CMPDLOA), the Compare Record Count (CMPRCDCNT) command, the Compare File Data (CMPFILDTA) command, and the Check DG File Entries (CHKDGFE) command. The spooled output is a human-readable print format that is intended to be delivered as a report. The output file, on the other hand, is primarily intended for automated purposes such as automatic synchronization. It is also a format that is easily processed using SQL queries.

The level of information in the output is determined by the value specified on the Report type parameter. These values vary by command.

For the CMPFILA, CMPOBJA, CMPIFSA, and CMPDLOA commands, the levels of output available are *DIF, *SUMMARY, and *ALL. The report type of *DIF includes information on objects with detected differences. A report type of *SUMMARY provides a summary of all objects compared as well as an object-level indication of whether differences were detected. *SUMMARY does not, however, include details about specific attribute differences. Specifying *ALL for the report type provides the information found on both *DIF and *SUMMARY reports.

The CMPRCDCNT command supports the *DIF and *ALL report types. The report type of *DIF includes information on objects with detected differences. Specifying *ALL for the report type provides information on all objects and attributes that were compared.

The CMPFILDTA command supports the *DIF and *ALL report types, as well as *RRN. The *RRN value allows you to output, using the MXCMPFILR outfile format, the relative record number of the first 1,000 objects that failed to compare. Using this value can help resolve situations where a discrepancy is known to exist, but you are unsure which system contains the correct data. In this case, the *RRN value provides information that enables you to display the specific records on the two systems and to determine the system on which the file should be repaired.
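For example, a difference-only report for the files defined to a data group could be printed with a request like the following; the data group and system names are examples only, and all other parameters are left at their defaults:

   CMPFILA DGDFN(MYDG SYS1 SYS2) RPTTYPE(*DIF) OUTPUT(*PRINT)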

Spooled files
The spooled output is generated when a value of *PRINT is specified on the Output parameter. The spooled output consists of four main sections: the input or header section, the object selection list section, the differences section, and the summary section.

First, the header section of the spooled report includes all of the input values specified on the command, including the data group value (DGDFN), comparison level (CMPLVL), report type (RPTTYPE), attributes to compare (CMPATR), actual attributes compared, number of files, objects, IFS objects, or DLOs compared, and number of detected differences. It also provides a legend that describes the special values used throughout the report.


The second section of the report is the object selection list. This section lists all of the object selection entries specified on the comparison command. Similar to the header section, it provides details on the input values specified on the command.

The detail section is the third section of the report, and provides details on the objects and attributes compared. The level of detail in this section is determined by the report type specified on the command. A report type value of *ALL lists all objects compared, beginning with a summary status that indicates whether or not differences were detected. The summary row indicates the overall status of the object compared. Following the summary row, each attribute compared is listed, along with the status of the attribute and the attribute value. In the event the attribute compared is an indicator, a special value of *INDONLY is displayed in the value columns. A report type value of *DIF lists details only for those objects with detected attribute differences. A value of *SUMMARY does not include the detail section for any object.

The fourth section of the report is the summary, which provides a one-row summary for each object compared. Each row includes an indicator of whether or not attribute differences were detected.

Outfiles
The output file is generated when a value of *OUTFILE is specified on the Output parameter. Similar to the spooled output, the level of output in the output file is dependent on the report type value specified on the Report type parameter. Each command is shipped with an outfile template that uses a normalized database to deliver a self-defined record, or row, for every attribute you compare. Key information, including the attribute type, data group name, timestamp, command name, and system 1 and system 2 values, helps define each row. A summary row precedes the attribute rows. The normalized database feature ensures that new object attributes can be added to the audit capabilities without disruption to current automation processing. The template files for the various commands are located in the MIMIX product library.
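Because each attribute comparison is delivered as a self-defined row, the outfiles lend themselves to simple SQL queries. The following sketch assumes an outfile created in library MYLIB under the name CMPFILAOUT and a column named DIFIND holding the difference indicator; both names are placeholders only, so check the template files shipped in the MIMIX product library for the actual outfile format and column names:

   SELECT *
     FROM MYLIB/CMPFILAOUT        -- hypothetical outfile name
    WHERE DIFIND = '*NE'          -- hypothetical difference indicator column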

Chapter 18

Comparing attributes
This chapter describes the commands that compare attributes: Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA). These commands are designed to audit the attributes, or characteristics, of the objects within your environment and report on the status of replicated objects. Together, these commands are collectively referred to as the compare attributes commands. You may already be using the compare attributes commands when they are called by audit functions within MIMIX AutoGuard. When used in combination with the automatic recovery features in MIMIX AutoGuard, the compare attributes commands provide robust functionality to help you determine whether your system is in a state to ensure a successful rollover for planned events or failover for unplanned events. The topics in this chapter include:
About the Compare Attributes commands on page 420 describes the unique features of the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA, and CMPDLOA).
Comparing file and member attributes on page 425 includes the procedure to compare the attributes of files and members.
Comparing object attributes on page 428 includes the procedure to compare object attributes.
Comparing IFS object attributes on page 431 includes the procedure to compare IFS object attributes.
Comparing DLO attributes on page 434 includes the procedure to compare DLO attributes.

About the Compare Attributes commands


With the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA, and CMPDLOA), you have significant flexibility in selecting objects for comparison, the attributes to be compared, and the format in which the resulting report is created. Each command generates a candidate list of objects on both systems and can detect objects missing from either system. For each object compared, the command checks for the existence of the object on the source and target systems and then compares the attributes specified on the command. The results from the comparisons performed are placed in a report. Each command offers several unique features as well. CMPFILA provides significant capability to audit file-based attributes such as triggers, constraints, ownership, authority, database relationships, and the like. Although the CMPFILA command does not specifically compare the data within the database file, it does check attributes such as record counts, deleted records, and others that check the size of data within a file. Comparing these attributes provides you with assurance that files are most likely synchronized. The CMPOBJA command supports many attributes important to other library-based objects, including extended attributes. Extended attributes are attributes unique to given objects, such as auto-start job entries for subsystems. The CMPIFSA and CMPDLOA commands provide enhanced audit capability for IFS objects and DLOs, respectively.

Choices for selecting objects to compare


You can select objects to compare by using a data group, the object selection parameters, or both. The compare attributes commands do not require active data groups to run.
By data group only: If you specify only by data group, all of the objects of the same class as the command that are within the name space configured for the data group are compared. For example, specifying a data group on the CMPIFSA command would compare all IFS objects in the name space created by data group IFS entries associated with the data group.
By object selection parameters only: You can compare objects that are not replicated by a data group. By specifying *NONE for the data group and specifying objects on the object selection parameters, you define a name space: the library for CMPFILA or CMPOBJA, or the directory path for CMPIFSA or CMPDLOA. Detailed information about object selection is available in Object selection for Compare and Synchronize commands on page 399.
By data group and object selection parameters: When you specify a data group name as well as values on the object selection parameters, the values specified in object selection parameters act as a filter for the items defined to the data group.
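As a hedged sketch of the three selection styles on the CMPFILA command (the FILE and SYSTEM2 keyword spellings, the three-part data group name, and the object names are assumptions for illustration):

    CMPFILA DGDFN(MYDG SYS1 SYS2)                          /* by data group only */
    CMPFILA DGDFN(*NONE) FILE((MYLIB/ORD*)) SYSTEM2(SYS2)  /* by selectors only  */
    CMPFILA DGDFN(MYDG SYS1 SYS2) FILE((MYLIB/ORD*))       /* selectors filter the data group */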

Unique parameters
The following parameters for object selection are unique to the compare attributes commands and allow you to specify an additional level of detail when comparing objects or files. Unique File and Object elements: The following are unique elements on the File parameter (CMPFILA command) and Objects parameter (CMPOBJA command): Member: On the CMPFILA command, the value specified on the Member element is only used when *MBR is also specified on the Comparison level parameter. Object attribute: The Object attribute element enables you to select particular characteristics of an object or file, and provides a level of filtering. For details, see CMPFILA supported object attributes for *FILE objects on page 423 and CMPOBJA supported object attributes for *FILE objects on page 423.

System 2: The System 2 parameter identifies the remote system name, and represents the system to which objects on the local system are compared. This parameter is ignored when a data group is specified, since the system 2 information is derived from the data group. A value is required if no data group is specified.
Comparison level (CMPFILA only): The Comparison level parameter indicates whether attributes are compared at the file level or at the member level.
System 1 ASP group and System 2 ASP group (CMPFILA and CMPOBJA only): The System 1 ASP group and System 2 ASP group parameters identify the name of the auxiliary storage pool (ASP) group where objects configured for replication may reside. The ASP group name is the name of the primary ASP device within the ASP group. This parameter is ignored when a data group is specified.

Choices for selecting attributes to compare


The Attributes to compare parameter allows you to select which combination of attributes to compare. Each compare attributes command supports an extensive list of attributes. Each command provides the ability to select pre-determined sets of attributes (basic or extended), all supported attributes, as well as any other unique combination of attributes that you require. The basic set of attributes is intended to compare attributes that provide an indication that the objects compared are the same, while avoiding attributes that may be different but do not provide a valid indication that objects are not synchronized, such as the create timestamp (CRTTSP) attribute. Some objects, for example, cannot be replicated using IBM's save and restore technology. Therefore, the creation date established on the source system is not maintained on the target system during the replication process. The comparison commands take this factor into consideration and check the creation date for only those objects whose values are retained during replication. The extended set of attributes includes the basic set of attributes and some additional attributes. The following topics list the supported attributes for each command:
Attributes compared and expected results - #FILATR, #FILATRMBR audits on page 591
Attributes compared and expected results - #OBJATR audit on page 596
Attributes compared and expected results - #IFSATR audit on page 604
Attributes compared and expected results - #DLOATR audit on page 606

All comparison attributes supported by a specific compare attributes command may not be applicable to all object types supported by the command. For example, CMPOBJA supports a large number of object types and related comparison attributes. There are many cases where a specific comparison attribute is only valid for a particular object type. Comparison attributes not supported by a given object type are ignored. For example, auto-start job entries is a valid comparison attribute for the subsystem description (*SBSD) object type; for all other object types selected as a result of running the report, the auto-start job entry attribute is ignored. If a data group is specified on a compare request, configuration data is used when comparing objects that are identified for replication through the system journal. If an object's configured object auditing value (OBJAUD) is *NONE, its attribute changes are not replicated. When differences are detected on attributes of such an object, they are reported as *EC (equal configuration) instead of being reported as *NE (not equal). For *FILE objects configured for replication through the system journal and configured to omit T-ZC journal entries, also see Omit content (OMTDTA) and comparison commands on page 389.

CMPFILA supported object attributes for *FILE objects


When you specify a data group to compare, the CMPFILA command obtains information from the configured data group entries for all PF and LF files and their subtypes. Those files that are within the name space created by data group entries are compared. Table 60 lists the extended attributes for objects of type *FILE that are supported as values on the Object attribute element.
Table 60. CMPFILA supported extended attributes for *FILE objects

Object attribute   Description
*ALL               All physical and logical file types are selected for processing
LF                 Logical file
LF38               Files of type LF38
PF                 Physical file types, including PF, PF-SRC, and PF-DTA
PF-DTA             Files of type PF-DTA
PF-SRC             Files of type PF-SRC
PF38               Files of type PF38, including PF38, PF38-SRC, and PF38-DTA
PF38-DTA           Files of type PF38-DTA
PF38-SRC           Files of type PF38-SRC

CMPOBJA supported object attributes for *FILE objects


When you specify a data group to compare, the CMPOBJA command obtains data group information from the data group object entries. Those objects defined to the data group object entries are compared. The default value on the Object attribute element is *ALL, which represents the entire list of supported attributes. Any value is supported, but a list of recommended attributes is available in the online help.


Comparing file and member attributes


You can compare file attributes to ensure that files and members needed for replication exist on both systems or any time you need to verify that files are synchronized between systems. You can optionally specify that results of the comparison are placed in an outfile. Note: If you have automation programs monitoring escape messages for differences in file attributes, be aware that differences due to active replication (Step 16) are signaled via a new difference indicator (*UA) and escape message. See the auditing and reporting topics in this book. To compare the attributes of files and members, do the following: 1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter. 2. From the MIMIX Compare, Verify, and Synchronize menu, select option 1 (Compare file attributes) and press Enter. 3. The Compare File Attributes (CMPFILA) command appears. At the Data group definition prompts, do one of the following: To compare attributes for all files defined by the data group file entries for a particular data group definition, specify the data group name and skip to Step 6. To compare files by name only, specify *NONE and continue with the next step. To compare a subset of files defined to a data group, specify the data group name and continue with the next step.

4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt. For each selector, do the following: a. At the File and library prompts, specify the name or the generic value you want. b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file. c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes. d. At the Include or omit prompt, specify the value you want. e. At the System 2 file and System 2 library prompts, if the file and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the file and library to which files on the local system are compared. Note: The System 2 file and System 2 library values are ignored if a data group is specified on the Data group definition prompts.


f. Press Enter. 5. The System 2 parameter prompt appears if you are comparing files not defined to a data group. If necessary, specify the name of the remote system to which files on the local system are compared. 6. At the Comparison level prompt, accept the default to compare files at a file level only. Otherwise, specify *MBR to compare files at a member level. Note: If *FILE is specified, the Member prompt is ignored (see Step 4b). 7. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined set of attributes based on whether the comparison is at a file or member level, or press F4 to see a valid list of attributes. 8. At the Attributes to omit prompt, accept *NONE to compare all attributes specified in Step 7, or enter the attributes to exclude from the comparison. Press F4 to see a valid list of attributes. 9. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1. Note: This parameter is ignored when a data group definition is specified. 10. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2. Note: This parameter is ignored when a data group definition is specified. 11. At the Report type prompt, specify the level of detail for the output report. 12. At the Output prompt, do one of the following: To generate print output, accept *PRINT and press Enter. To generate both print output and an outfile, specify *BOTH and press Enter. Skip to Step 14. To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 14.

13. The User data prompt appears if you selected *PRINT or *BOTH in Step 12. Accept the default to use the command name to identify the spooled output or specify a unique name. Skip to Step 18. 14. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.) 15. At the Output member options prompts, do the following: a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command. b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list. 16. At the Maximum replication lag prompt, specify the maximum amount of time between when a file in the data group changes and when replication of the change is expected to be complete, or accept *DFT to use the default maximum
time of 300 seconds (5 minutes). You can also specify *NONE, which indicates that comparisons should occur without consideration for replication in progress. Note: This parameter is only valid when a data group is specified in Step 3. 17. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile. 18. At the Submit to batch prompt, do one of the following: If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison. To submit the job for batch processing, accept the default. Press Enter and continue with the next step.

19. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request. 20. At the Job name prompt, specify *CMD to use the command name to identify the job or specify a simple name. 21. To start the comparison, press Enter.
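Entered from a command line rather than the menus, Steps 3 through 12 assemble a command string equivalent to this hedged sketch; CMPLVL, CMPATR, RPTTYPE, and OUTPUT are keywords named in this book, and the remaining keyword spellings and names are assumptions:

    CMPFILA DGDFN(MYDG SYS1 SYS2) CMPLVL(*MBR) +
      CMPATR(*BASIC) RPTTYPE(*DIF) OUTPUT(*PRINT)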


Comparing object attributes


You can compare object attributes to ensure that objects needed for replication exist on both systems or any time you need to verify that objects are synchronized between systems. You can optionally specify that results of the comparison are placed in an outfile. Note: If you have automation programs monitoring escape messages for differences in object attributes, be aware that differences due to active replication (Step 15) are signaled via a new difference indicator (*UA) and escape message. See the auditing and reporting topics in this book. To compare the attributes of objects, do the following: 1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter. 2. From the MIMIX Compare, Verify, and Synchronize menu, select option 2 (Compare object attributes) and press Enter. 3. The Compare Object Attributes (CMPOBJA) command appears. At the Data group definition prompts, do one of the following: To compare attributes for all objects defined by the data group object entries for a particular data group definition, specify the data group name and skip to Step 6. To compare objects by object name only, specify *NONE and continue with the next step. To compare a subset of objects defined to a data group, specify the data group name and continue with the next step.

4. At the Object prompts, you can specify elements for one or more object selectors that either identify objects to compare or that act as filters to the objects defined to the data group indicated in Step 3. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt. For each selector, do the following: a. At the Object and library prompts, specify the name or the generic value you want. b. At the Object type prompt, accept *ALL or specify a specific object type to compare. c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes. d. At the Include or omit prompt, specify the value you want. e. At the System 2 object and System 2 library prompts, if the object and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the object and library to which objects on the local system are compared. Note: The System 2 object and System 2 library values are ignored if a data group is specified on the Data group definition prompts. f. Press Enter. 5. The System 2 parameter prompt appears if you are comparing objects not defined to a data group. If necessary, specify the name of the remote system to which objects on the local system are compared. 6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined set of attributes or press F4 to see a valid list of attributes. 7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see a valid list of attributes. 8. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1. Note: This parameter is ignored when a data group definition is specified. 9. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2. Note: This parameter is ignored when a data group definition is specified. 10. At the Report type prompt, specify the level of detail for the output report. 11. At the Output prompt, do one of the following: To generate print output, accept *PRINT and press Enter. To generate both print output and an outfile, specify *BOTH and press Enter. Skip to Step 13. To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 13.

12. The User data prompt appears if you selected *PRINT or *BOTH in Step 11. Accept the default to use the command name to identify the spooled output or specify a unique name. Skip to Step 17. 13. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.) 14. At the Output member options prompts, do the following: a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command. b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list. 15. At the Maximum replication lag prompt, specify the maximum amount of time between when an object in the data group changes and when replication of the change is expected to be complete, or accept *DFT to use the default maximum time of 300 seconds (5 minutes). You can also specify *NONE, which indicates that comparisons should occur without consideration for replication in progress. Note: This parameter is only valid when a data group is specified in Step 3.


16. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile. 17. At the Submit to batch prompt, do one of the following: If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison. To submit the job for batch processing, accept the default. Press Enter and continue with the next step.

18. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request. 19. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name. 20. To start the comparison, press Enter.
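For example, comparing a single subsystem description that is not defined to a data group might look like this hedged sketch; the OBJ and SYSTEM2 keyword spellings and the object names are assumptions:

    CMPOBJA DGDFN(*NONE) OBJ((MYLIB/QBATCH *SBSD)) +
      SYSTEM2(SYS2) CMPATR(*BASIC) OUTPUT(*PRINT)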

Comparing IFS object attributes


You can compare IFS object attributes to ensure that IFS objects needed for replication exist on both systems or any time you need to verify that IFS objects are synchronized between systems. You can optionally specify that results of the comparison are placed in an outfile. Note: If you have automation programs monitoring for differences in IFS object attributes, be aware that differences due to active replication (Step 13) are signaled via a new difference indicator (*UA) and escape message. See the auditing and reporting topics in this book. To compare the attributes of IFS objects, do the following: 1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter. 2. From the MIMIX Compare, Verify, and Synchronize menu, select option 3 (Compare IFS attributes) and press Enter. 3. The Compare IFS Attributes (CMPIFSA) command appears. At the Data group definition prompts, do one of the following: To compare attributes for all IFS objects defined by the data group IFS object entries for a particular data group definition, specify the data group name and skip to Step 6. To compare IFS objects by object path name only, specify *NONE and continue with the next step. To compare a subset of IFS objects defined to a data group, specify the data group name and continue with the next step.

4. At the IFS objects prompts, you can specify elements for one or more object selectors that either identify IFS objects to compare or that act as filters to the IFS objects defined to the data group indicated in Step 3. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt. For each selector, do the following: a. At the Object path name prompt, accept *ALL or specify the name or the generic value you want. b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the scope of IFS objects to be processed. c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the IFS object path name. Note: The *ALL default is not valid if a data group is specified on the Data group definition prompts. d. At the Object type prompt, accept *ALL or specify a specific IFS object type to compare. e. At the Include or omit prompt, specify the value you want.
f. At the System 2 object path name and System 2 name pattern prompts, if the IFS object path name and name pattern on system 2 are equal to system 1, accept the defaults. Otherwise, specify the path name and pattern to which IFS objects on the local system are compared. Note: The System 2 object path name and System 2 name pattern values are ignored if a data group is specified on the Data group definition prompts. g. Press Enter. 5. The System 2 parameter prompt appears if you are comparing IFS objects not defined to a data group. If necessary, specify the name of the remote system to which IFS objects on the local system are compared. 6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined set of attributes or press F4 to see a valid list of attributes. 7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see a valid list of attributes. 8. At the Report type prompt, specify the level of detail for the output report. 9. At the Output prompt, do one of the following: To generate print output, accept *PRINT and press Enter. To generate both print output and an outfile, specify *BOTH and press Enter. Skip to Step 11. To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 11.

10. The User data prompt appears if you selected *PRINT or *BOTH in Step 9. Accept the default to use the command name to identify the spooled output or specify a unique name. Skip to Step 15. 11. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.) 12. At the Output member options prompts, do the following: a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command. b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list. 13. At the Maximum replication lag prompt, specify the maximum amount of time between when an IFS object in the data group changes and when replication of the change is expected to be complete, or accept *DFT to use the default maximum time of 300 seconds (5 minutes). You can also specify *NONE, which indicates that comparisons should occur without consideration for replication in progress. Note: This parameter is only valid when a data group is specified in Step 3. 14. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile. 15. At the Submit to batch prompt, do one of the following: If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison. To submit the job for batch processing, accept the default. Press Enter continue with the next step.

16. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request. 17. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name. 18. To start the comparison, press Enter.
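A hedged sketch of an equivalent command string for a data-group-filtered IFS comparison follows; the OBJ keyword spelling, the path name, and the outfile name are assumptions:

    CMPIFSA DGDFN(MYDG SYS1 SYS2) OBJ(('/orders')) CMPATR(*BASIC) +
      RPTTYPE(*DIF) OUTPUT(*OUTFILE) OUTFILE(MYLIB/IFSADIF)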


Comparing DLO attributes


You can compare DLO attributes to ensure that DLOs needed for replication exist on both systems or any time you need to verify that DLOs are synchronized between systems. You can optionally specify that results of the comparison are placed in an outfile. Note: If you have automation programs monitoring escape messages for differences in DLO attributes, be aware that differences due to active replication (Step 13) are signaled via a new difference indicator (*UA) and escape message. See the auditing and reporting topics in this book. To compare the attributes of DLOs, do the following: 1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter. 2. From the MIMIX Compare, Verify, and Synchronize menu, select option 4 (Compare DLO attributes) and press Enter. 3. The Compare DLO Attributes (CMPDLOA) command appears. At the Data group definition prompts, do one of the following: To compare attributes for all DLOs defined by the data group DLO entries for a particular data group definition, specify the data group name and skip to Step 6. To compare DLOs by path name only, specify *NONE and continue with the next step. To compare a subset of DLOs defined to a data group, specify the data group name and continue with the next step.

4. At the Document library objects prompts, you can specify elements for one or more object selectors that either identify DLOs to compare or that act as filters to the DLOs defined to the data group indicated in Step 3. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt. For each selector, do the following: a. At the DLO path name prompt, accept *ALL or specify the name or the generic value you want. b. At the Folder subtree prompt, accept *NONE or specify *ALL to define the scope of DLOs to be processed. c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the DLO path name. Note: The *ALL default is not valid if a data group is specified on the Data group definition prompts. d. At the DLO type prompt, accept *ALL or specify a specific DLO type to compare. e. At the Owner prompt, accept *ALL or specify the owner of the DLO.
f. At the Include or omit prompt, specify the value you want. g. At the System 2 DLO path name and System 2 DLO name pattern prompts, if the DLO path name and name pattern on system 2 are equal to system 1, accept the defaults. Otherwise, specify the path name and pattern to which DLOs on the local system are compared. Note: The System 2 DLO path name and System 2 DLO name pattern values are ignored if a data group is specified on the Data group definition prompts. h. Press Enter. 5. The System 2 parameter prompt appears if you are comparing DLOs not defined to a data group. If necessary, specify the name of the remote system to which DLOs on the local system are compared. 6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined set of attributes or press F4 to see a valid list of attributes. 7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see a valid list of attributes. 8. At the Report type prompt, specify the level of detail for the output report. 9. At the Output prompt, do one of the following: To generate print output, accept *PRINT and press Enter. To generate both print output and an outfile, specify *BOTH and press Enter. Skip to Step 11. To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 11.

10. The User data prompt appears if you selected *PRINT or *BOTH in Step 9. Accept the default to use the command name to identify the spooled output or specify a unique name. Skip to Step 15. 11. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.) 12. At the Output member options prompts, do the following: a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command. b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list. 13. At the Maximum replication lag prompt, specify the maximum amount of time between when a DLO in the data group changes and when replication of the change is expected to be complete, or accept *DFT to use the default maximum time of 300 seconds (5 minutes). You can also specify *NONE, which indicates that comparisons should occur without consideration for replication in progress. Note: This parameter is only valid when a data group is specified in Step 3. 14. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile. 15. At the Submit to batch prompt, do one of the following: If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison. To submit the job for batch processing, accept the default. Press Enter continue with the next step.

16. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request. 17. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name. 18. To start the comparison, press Enter.
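A hedged sketch of an equivalent command string follows; the DLO and SYSTEM2 keyword spellings and the folder path are assumptions:

    CMPDLOA DGDFN(*NONE) DLO(('/SALESFLR/*ALL')) +
      SYSTEM2(SYS2) CMPATR(*BASIC) OUTPUT(*PRINT)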

Chapter 19

Comparing file record counts and file member data


This chapter describes the features and capabilities of the Compare Record Counts (CMPRCDCNT) command and the Compare File Data (CMPFILDTA) command. The topics in this chapter include:
Comparing file record counts on page 437 describes the CMPRCDCNT command and provides a procedure for performing the comparison.
Significant features for comparing file member data on page 440 identifies enhanced capabilities available for use when comparing file member data.
Considerations for using the CMPFILDTA command on page 441 describes recommendations and restrictions of the command. This topic also describes considerations for security, use with firewalls, comparing records that are not allocated, as well as comparing records with unique keys, triggers, and constraints.
Specifying CMPFILDTA parameter values on page 445 provides additional information about the parameters for selecting file members to compare and using the unique parameters of this command.
Advanced subset options for CMPFILDTA on page 451 describes how to use the capability provided by the Advanced subset options (ADVSUBSET) parameter.
Ending CMPFILDTA requests on page 454 describes how to end a CMPFILDTA request that is in progress and describes the results of ending the job.
Comparing file member data - basic procedure (non-active) on page 455 describes how to compare file data in a data group that is not active.
Comparing and repairing file member data - basic procedure on page 458 describes how to compare and repair file data in a data group that is not active.
Comparing and repairing file member data - members on hold (*HLDERR) on page 461 describes how to compare and repair file members that are held due to error using active processing.
Comparing file member data using active processing technology on page 464 describes how to use active processing to compare file member data.
Comparing file member data using subsetting options on page 467 describes how to use the subset feature of the CMPFILDTA command to compare a portion of member data at one time.

Comparing file record counts


The Compare Record Counts (CMPRCDCNT) command allows you to compare the record counts of members of a set of physical files between two systems. This
command compares the number of current records (*NBRCURRCD) and the number of deleted records (*NBRDLTRCD) for members of physical files that are defined for replication by an active data group. In resource-constrained environments, this capability provides a less-intensive means to gauge whether files are likely to be synchronized. Note: Equal record counts suggest but do not guarantee that members are synchronized. To check for file data differences, use the Compare File Data (CMPFILDTA) command. To check for attribute differences, use the Compare File Attributes (CMPFILA) command. Members to be processed must be defined to a data group that permits replication from a user journal. Journaling is required on the source system. User journal replication processes must be active when this command is used. Members on both systems can be actively modified by applications and by MIMIX apply processes while this command is running. For information about the results of a comparison, see What differences were detected by #MBRRCDCNT on page 583. The #MBRRCDCNT audit calls the CMPRCDCNT command during its compare phase. Unlike other audits, the #MBRRCDCNT audit does not have an associated recovery phase. Differences detected by this audit appear as not recovered in the Audit Summary user interfaces. Any repairs must be undertaken manually, in the following ways:
In MIMIX Availability Manager, repair actions are available for specific errors when viewing the output file for the audit.
Run the #FILDTA audit for the data group to detect and correct problems.
Run the Synchronize DG File Entry (SYNCDGFE) command to correct problems.

To compare file record counts


Do the following to compare record counts for an active data group: 1. From a command line, type installation_library/CMPRCDCNT and press F4 (Prompt). 2. The Compare Record Counts (CMPRCDCNT) display appears. At the Data group definition prompts, do one of the following: To compare data for all files defined by the data group file entries for a particular data group definition, specify the data group name and skip to Step 4. To compare a subset of files defined to a data group, specify the data group name and continue with the next step.

3. At the File prompts, you can specify elements for one or more object selectors to act as filters to the files defined to the data group indicated in Step 2. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt. For each selector, do the following: a. At the File and library prompts, specify the name or the generic value you want. b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file. c. At the Include or omit prompt, specify the value you want. 4. At the Report type prompt, do one of the following: If you want all compared objects to be included in the report, accept the default. If you only want objects with detected differences to be included in the report, specify *DIF.

5. At the Output prompt, do one of the following: To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step. To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step. If you do not want to generate output, specify *NONE. Press Enter and skip to Step 9. To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.

6. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.) 7. At the Output member options prompts, do the following: a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command. b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list. 8. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile. 9. At the Submit to batch prompt, do one of the following: If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison. To submit the job for batch processing, accept the default. Press Enter and continue with the next step.

10. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request. 11. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name. 12. To start the comparison, press Enter.
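A hedged sketch of an equivalent command string follows; the data group must be active, and the OUTFILE keyword spelling and all names are assumptions:

    CMPRCDCNT DGDFN(MYDG SYS1 SYS2) RPTTYPE(*DIF) +
      OUTPUT(*OUTFILE) OUTFILE(MYLIB/RCDCNT)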

Significant features for comparing file member data


The Compare File Data (CMPFILDTA) command provides the ability to compare data within members of physical files. The CMPFILDTA command is called programmatically by MIMIX AutoGuard functions that help you determine whether files are synchronized and whether your MIMIX environment is prepared for switching. You can also use the CMPFILDTA command interactively or call it from a program. Unique features of the CMPFILDTA command include active server technology and isolated data correction capability. Together, these features enable the detection and correction of file members that are not synchronized while applications and replication processes remain active. File members that are held due to an error can also be compared and repaired.

Repairing data
You can optionally choose to have the CMPFILDTA command repair differences it detects in member data between systems. When files are not synchronized, the CMPFILDTA command provides the ability to resynchronize the file at the record level by sending only the data for the incorrect member to the target system. (In contrast, the Synchronize DG File Entry (SYNCDGFE) command would resynchronize the file by transferring all data for the file from the source system to the target system.)

Active and non-active processing


The Process while active (ACTIVE) parameter determines whether a requested comparison can occur while application and replication activity is present. Two modes of operation are available: active and non-active. In non-active mode, CMPFILDTA assumes that all files are quiesced and performs file comparisons and repairs without regard to application or replication activity. In active mode, processing begins in the same manner, performing an internal compare and generating a list of records that are not synchronized. This list is not reported, however. Instead, CMPFILDTA checks the mismatched records against the activity that is happening on the source system and the apply activity that is occurring on the target. If there is a member that needs repair, CMPFILDTA will then report the error. At that time, the command will also repair the target file member if *YES was specified on the Repair parameter. During active processing of a member, the DB apply threshold (DBAPYTHLD) parameter can be used to specify what action CMPFILDTA should take if the database apply session backlog exceeds the threshold warning value configured for the database apply process.
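A hedged sketch of an active-mode compare that repairs differences on the target system; REPAIR and ACTIVE are the keywords shown in Table 61, and the data group name is a placeholder:

    CMPFILDTA DGDFN(MYDG SYS1 SYS2) ACTIVE(*YES) REPAIR(*TGT)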


Processing members held due to error


The CMPFILDTA command also provides the ability to compare and repair members being held due to error (*HLDERR). When members in *HLDERR status are processed, the CMPFILDTA command works cooperatively with the database apply (DBAPY) process to compare and repair the file members and, when possible, restore them to an active state. To repair members in *HLDERR status, you must also specify that the repair be performed on the target system and request that active processing be enabled. To support the cooperative efforts of CMPFILDTA and DBAPY, the following transitional states are used for file entries undergoing compare and repair processing:
*CMPRLS - The file in *HLDERR status has been released. DBAPY will clear the journal entry backlog by applying the file entries in catch-up mode.
*CMPACT - The journal entry backlog has been applied. CMPFILDTA and DBAPY are cooperatively repairing the member previously in *HLDERR status, and incoming journal entries continue to be applied in forgiveness mode.

When a member held due to error is being processed by the CMPFILDTA command, the entry transitions from *HLDERR status to *CMPRLS to *CMPACT. The member then changes to *ACTIVE status if compare and repair processing is successful. In the event that compare and repair processing is unsuccessful, the member-level entry is set back to *HLDERR.
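A hedged sketch of a request limited to members held due to error; the File entry status keyword is assumed here to be STATUS, so verify the actual keyword by prompting the command with F4:

    CMPFILDTA DGDFN(MYDG SYS1 SYS2) STATUS(*HLDERR) +
      REPAIR(*TGT) ACTIVE(*YES)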

Additional features
The CMPFILDTA command incorporates many other features to increase performance and efficiency. Subsetting and advanced subsetting options provide a significant degree of flexibility for performing periodic checks of a portion of the data within a file. Parallel processing uses multi-threaded jobs to break up file processing into smaller groups for increased throughput. Rather than having a single-threaded job on each system, multiple thread groups break up the file into smaller units of work. This technology can benefit environments with multiple processors as well as systems with a single processor.

Considerations for using the CMPFILDTA command


Before you use the CMPFILDTA command, you should be aware of the information in this topic.

Recommendations and restrictions


It is recommended that the CMPFILDTA command be used in tandem with the CMPFILA command. Use the CMPFILA command to determine whether you have a matching set of files and attributes on both systems and use the CMPFILDTA command to compare the actual data within the files.


Keyed replication - Although you can run the CMPFILDTA command on keyed files, the command only supports files configured for *POSITIONAL replication. The CMPFILDTA command cannot compare files configured for *KEYED replication.
SNA environments - CMPFILDTA requires a TCP/IP transfer definition; you cannot use SNA. You can be configured for SNA, but then you must override CMPFILDTA to refer to a TCP/IP transfer definition. For more information, see System-level communications on page 159.
Apply threshold and apply backlog - Do not compare data using active processing technology if the apply process is 180 seconds or more behind, or has exceeded a threshold limit.

Using the CMPFILDTA command with firewalls


The CMPFILDTA command uses a communications port based on the port number specified in the transfer definition. If you need to run simultaneous CMPFILDTA jobs, you must open the equivalent number of ports in your firewall. For example, if the port number in your transfer definition is 5000 and you want to run 10 CMPFILDTA jobs at once, you should open at least 10 ports in your firewall; minimally, ports 5001 through 5010. If you attempt to run more jobs than there are open ports, those jobs will fail.

Security considerations
You should take extra precautions when using CMPFILDTA's repair function, as it is capable of accessing and modifying data on your system. To compare file data, you must have read access on both systems. When using the repair function, write access on the system to be repaired may also be necessary when active technology is not used. CMPFILDTA builds upon the RUNCMD support in MIMIX. CMPFILDTA starts a remote process using RUNCMD, which requires two conditions to be true. First, the user profile of the job that is invoking CMPFILDTA must exist on the remote system and have the same password on the remote system as it does on the local system. Second, the user profile must have appropriate read or update access to the members to be compared or repaired. If active processing and repair is requested, only read access is needed. In this case, the repair processing would be done by the database apply process.

Comparing allocated records to records not yet allocated


In some situations, members differ in the number of records allocated. One member may have allocated records, while the corresponding records of the other member are not yet allocated. If the member to be repaired is the smaller of the two members, records are added to make the members the same size. If the member to be repaired is the larger of the two members, however, the excess records are deleted. When MIMIX replication encounters these situations, no error is generated nor is the member placed on error hold.


If one or more members differ in the manner described above, a distinct escape message is issued. If you use CMPFILDTA in a CL program, you may wish to monitor these escape messages specifically.
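In a CL program, that monitoring might look like the following hedged sketch; LVE0000 is a placeholder message identifier because the actual escape message ID for this condition is not given here:

    PGM
      CMPFILDTA DGDFN(MYDG SYS1 SYS2) REPAIR(*TGT)
      MONMSG MSGID(LVE0000) EXEC(DO) /* placeholder for the allocated-records escape */
        SNDPGMMSG MSG('Members differed in allocated records')
      ENDDO
    ENDPGM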

Comparing files with unique keys, triggers, and constraints


If members being repaired have unique keys, active triggers, or constraints, special care should be taken. An update or insert repair action that results in one or more duplicate key exceptions automatically results in the deletion of records with duplicate keys. Note: The records that could be deleted include those outside the subset of records being compared. Deletion of records with duplicate keys is not recorded in the outfile statistics. If triggers are enabled, any compare or repair action causes the applicable trigger to be invoked. Triggers should be disabled if this action is not desired by the user. When a compare is specified, read triggers are invoked as records are read. If repair action is specified, update, insert, and delete triggers are invoked as records are repaired. Table 61 describes the interaction of triggers with CMPFILDTA repair and active processing. Attention: If an attempt is made to use one of the unsupported situations listed in Table 61, the job that invokes the trigger will end abruptly. You will see a CEE0200 information message in the job log shortly before the job ends. You may also see an MCH2004 message.

Table 61. CMPFILDTA and trigger support

Trigger type                 Trigger activation   CMPFILDTA Repair on    CMPFILDTA Process       CMPFILDTA
                             group (ACTGRP)       system (REPAIR)        while active (ACTIVE)   support
Read                         *NEW                 Any value              Any value               Not supported
Read                         NAMED or *CALLER     Any value              Any value               Supported
Update, insert, and delete   *NEW                 *NONE                  Any value               Supported
Update, insert, and delete   *NEW                 Any value other        *NO                     Not supported
                                                  than *NONE
Update, insert, and delete   *NEW                 Any value other        *YES                    Supported
                                                  than *NONE
Update, insert, and delete   NAMED or *CALLER     Any value              Any value               Supported


Avoiding issues with triggers


It is possible to avoid potential trigger restrictions. You can use any one of the following techniques, which are listed in the preferred order:
Recreate the trigger program, specifying ACTGRP(*CALLER) or a named activation group
Use the Update Program (UPDPGM) command to change the program to use a named activation group
Disable trigger programs on the file
Use the Synchronize Objects (SYNCOBJ) command rather than CMPFILDTA
Use the Synchronize Data Group File Entries (SYNCDGFE) command rather than CMPFILDTA
Use the Copy Active File (CPYACTF) command rather than CMPFILDTA
Save and restore outside of MIMIX

Referential integrity considerations


Referential integrity enforcement can present complex CMPFILDTA repair scenarios. Like triggers, a delete rule of cascade, set null, or set default can cause records in other tables to be modified or deleted as a result of a repair action. In other situations, a repair action may be prevented due to referential integrity constraints. Consider the case where a foreign key is defined between a department table and an employee table. The referential integrity constraint requires that records in the employee table only be permitted if the department number of the employee record corresponds to a row in the department table with the same department number. It will not be possible for CMPFILDTA repair processing to add a row to the employee table if the corresponding parent row is not present in the department table. Because of this, you should use CMPFILDTA to repair parent tables before using CMPFILDTA to repair dependent tables. Note that the order you specify the tables on the CMPFILDTA command is not necessarily the order in which they will be processed, so you must issue the command once for the parent table, and then again for the dependent table. Repairing the parent department table first may present its own problems. If CMPFILDTA attempts to delete a row in the department table and the delete rule for the constraint is restrict, the row deletion may fail if the employee table still contains records corresponding to the department to be deleted. Such constraints should use a delete rule of cascade, set null, or set default. Otherwise, CMPFILDTA may not be able to make all repairs. See the IBM Database Programming manual (SC41-5701) for more information on referential integrity.
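Repairing a parent table and its dependent table therefore takes two invocations, parent first, as in this hedged sketch; the FILE keyword spelling and all names are assumptions:

    CMPFILDTA DGDFN(MYDG SYS1 SYS2) FILE((MYLIB/DEPT)) REPAIR(*TGT)
    CMPFILDTA DGDFN(MYDG SYS1 SYS2) FILE((MYLIB/EMP)) REPAIR(*TGT)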

Job priority
When run, the remote CMPFILDTA job uses the run priority of the local CMPFILDTA job. However, the run priority of either CMPFILDTA job is superseded if a
CMPFILDTA class object (*CLS) exists in the installation library of the system on which the job is running. Note: Use the Change Job (CHGJOB) command on the local system to modify the run priority of the local job. CMPFILDTA uses the priority of the local job to set the priority of the remote job, so that both jobs have the same run priority. To set the remote job to run at a different priority than the local job, use the Create Class (CRTCLS) command to create a *CLS object for the job you want to change.
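For example, to make the remote CMPFILDTA job run at priority 25, you could create a class object named CMPFILDTA in the installation library on that system. This hedged sketch uses the IBM Create Class (CRTCLS) command and assumes the installation library is named MIMIX:

    CRTCLS CLS(MIMIX/CMPFILDTA) RUNPTY(25) +
      TEXT('Run priority for remote CMPFILDTA jobs')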

Specifying CMPFILDTA parameter values


This topic provides information about specific parameters of the CMPFILDTA command.

Specifying file members to compare


The CMPFILDTA command allows you to work with physical file members only. You can select the files to compare by using a data group, the object selection parameters, or both.
By data group only: If you specify only by data group, the list of candidate objects to compare is determined by the data group configuration from the local system only. If a file exists on the remote system that meets the object selection criteria but it does not exist on the local system, the data within that file is not compared. If a file exists on the local system but not on the remote system, however, the command will signal an error condition.
By object selection parameters only: You can compare file members that are not replicated by a data group. By specifying *NONE for the data group and specifying file and member information on the object selection parameters, you define a name space on the local system from which a list of candidate objects is created. The Object attribute element on the File parameter enables you to select particular characteristics of a file. Table 62 lists the extended attributes for objects of type *FILE that are supported as values for the Object attribute element.
By data group and object selection parameters: When you specify a data group name as well as values on the object selection parameters, the values specified in object selection parameters act as a filter for the items defined to the data group.

Detailed information about object selection is available in Object selection for Compare and Synchronize commands on page 399.
Table 62. CMPFILDTA supported extended attributes for *FILE objects

Object attribute   Description
PF                 Physical file types, including PF, PF-SRC, and PF-DTA
PF-DTA             Files of type PF-DTA
PF-SRC             Files of type PF-SRC
PF38               Files of type PF38, including PF38, PF38-SRC, and PF38-DTA
PF38-DTA           Files of type PF38-DTA
PF38-SRC           Files of type PF38-SRC
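Building on Table 62, the following hypothetical request filters a data group's file entries to data physical files in a single library; the keyword names and element order are assumptions patterned on the prompts described in this topic and should be confirmed by prompting the command:

    /* Compare only PF-DTA files in APPLIB that are defined */
    /* to data group APPDG                                  */
    CMPFILDTA DGDFN(APPDG) FILE((APPLIB/*ALL *ALL PF-DTA *INCLUDE))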

Tips for specifying values for unique parameters


The CMPFILDTA command includes several parameters that are unique among MIMIX commands.

Repair on system: When you choose to repair files that do not match, CMPFILDTA allows you to select the system on which the repair should be made. File repairs can be performed on system 1, system 2, local, target, or source, or you can specify the system definition name. Note: *TGT and *SRC are only valid when a data group is specified. However, you cannot select *SRC when *YES is specified for the Process while active parameter. Refer to the Process while active section.

Process while active: CMPFILDTA includes while-active support. This parameter allows you to indicate whether compares should be made while file activity is taking place. For efficiency's sake, it is always best to perform active repairs during a period of low activity. CMPFILDTA, however, uses a mechanism that retries comparison activity until it detects no interference from active files. Three values are allowed on the Process while active parameter: *DFT, *NO, and *YES. The *NO option should be used when the files being compared are not actively being updated by either application activity or MIMIX replication activity. All file repairs are handled directly by CMPFILDTA. *YES is only allowed when a data group is specified and should be used when the files being compared are actively being updated by application activity or MIMIX replication activity. In this case, all file repairs are routed through the data group and require that the data group is active. If a data group is specified, the default value of *DFT is equivalent to *YES. If a data group is not specified, *DFT is the same as *NO. Specifying *NO for the Process while active parameter is the recommended option for running in a quiesced environment. When used in combination with an active data group, it assumes there is no application activity and that MIMIX replication is current. If you specify *NO for the Process while active parameter in combination with repairing the file, the data group apply process must be configured not to lock the files on the apply system. This configuration can be accomplished by specifying *NO on the Lock on apply parameter of the data group definition. Note: Do not compare data using active processing technology if the apply process is 180 seconds or more behind, or has exceeded a threshold limit.
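As an illustration, a quiesced compare-and-repair request might look like the following; the keyword names shown here (DGDFN, FILE, REPAIR, ACTIVE) are assumptions to be verified against the installed command:

    /* Hypothetical quiesced repair: no active processing,   */
    /* repairs made on the target system; requires Lock on   */
    /* apply *NO in the data group definition                */
    CMPFILDTA DGDFN(APPDG) FILE((APPLIB/ORDERS)) REPAIR(*TGT) ACTIVE(*NO)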

File entry status: The File entry status parameter provides options for selecting members with specific statuses, including members held due to error (*HLDERR). When members in *HLDERR status are processed, the CMPFILDTA command works cooperatively with the database apply (DBAPY) process to compare and repair members held due to error and, when possible, restore them to an active state. Valid values for the File entry status parameter are *ALL, *ACTIVE, and *HLDERR. A data group must also be specified on the command or the parameter is ignored. The default value, *ALL, indicates that all supported entry statuses (*ACTIVE and *HLDERR) are included in compare and repair processing. The value *ACTIVE processes only those members that are active (see note 1 below). When *HLDERR is specified, only member-level entries being held due to error are selected for processing. To repair members held due to error using *ALL or *HLDERR, you must also specify that the repair be performed on the target system and request that active processing be used.

System 1 ASP group and System 2 ASP group: The System 1 ASP group and System 2 ASP group parameters identify the name of the auxiliary storage pool (ASP) group where objects configured for replication may reside. The ASP group name is the name of the primary ASP device within the ASP group. This parameter is ignored when a data group is specified. You must be running on OS V5R2 or greater to use these parameters.

Subsetting option: The Subsetting option parameter provides a robust means by which to compare a subset of the data within members. In some instances, the value you select will determine which additional elements are used when comparing data. Several options are available on this parameter: *ALL, *ADVANCED, *ENDDTA, or *RANGE. If *ALL is specified, all data within all selected files is compared, and no additional subsetting is performed. The other options compare only a subset of the data. The following are common scenarios in which comparing a subset of your data is preferable:

If you only need to check a specific range of records, use *RANGE.
When a member, such as a history file, is primarily modified with insert operations, only recently inserted data needs to be compared. In this situation, use *ENDDTA.
If time does not permit a full comparison, you can compare a random sample using *ADVANCED.
If you do not have time to perform a full comparison all at once but you want all data to be compared over a number of days, use *ADVANCED.

*RANGE indicates that the Subset range parameter will be used to specify the subset of records to be compared. For more information, see the Subset range section. If you select *ENDDTA, the Records at end of file parameter specifies how many trailing records are compared. This value allows you to compare a selected number of records at the end of all selected members. For more information, see the section titled Records at end of file. Advanced subsetting can be used to audit your entire database over a number of days or to request that a random subset of records be compared. To specify advanced subsetting, select *ADVANCED. For more information, see Advanced subset options for CMPFILDTA on page 451.
1. The File entry status parameter was introduced in V4R4 SPC05SP2. If you want to preserve previous behavior, specify STATUS(*ACTIVE).

Subset range: Subset range is enabled when *RANGE is specified on the Subsetting option parameter, as described in the Subsetting option section. Two elements are included, First record and Last record. These elements allow you to specify a range of records to compare. If more than one member is selected for processing, all members are compared using the same relative record number range. Thus, the range specification is usually only useful for a single member or a set of members with related records. The First record element can be specified as *FIRST or as a relative record number. In the case of *FIRST, records in the member are compared beginning with the first record. The Last record element can be specified as *LAST or as a relative record number. In the case of *LAST, records in the member are compared up to, and including, the last record.

Advanced subset options: The Advanced subset options (ADVSUBSET) parameter provides the ability to use sophisticated comparison techniques. For detailed information and examples, see Advanced subset options for CMPFILDTA on page 451.

Records at end of file: The Records at end of file (ENDDTA) parameter allows you to compare recently inserted data without affecting the other subsetting criteria. If you specified *ENDDTA in the Subsetting option parameter, as indicated in the Subsetting option section, only those records specified in the Records at end of file parameter will be processed. This parameter is also valid if values other than *ENDDTA were specified in the Subsetting option. In this case, both records at the end of the file and any additional subsetting options factor into the compare. If some records are selected both by the ENDDTA parameter and by another subsetting option, those records are only processed once. The Records at end of file parameter can be specified as *NONE or number-of-records. When *NONE is specified, records at the end of the members are not compared unless they are selected by other subset criteria. To compare particular records at the end of each member, you must specify the number of records. The ENDDTA value is always applied to the smaller of the System 1 and System 2 members, and continues through the end of the larger member. For example, assume that you specify 200 for the ENDDTA value. If one system has 1000 records while the other has 1100, relative records 801-1100 would be checked: the relative record numbers of the last 200 records of the smaller file are compared, as well as the additional 100 relative record numbers due to the difference in member size. Using the Records at end of file parameter in daily processing can keep you from missing records that were inserted recently.
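For instance, a daily request for a history file that only grows by inserts might look like this; ENDDTA is the parameter named in this topic, while the DGDFN and FILE keyword names are assumptions:

    /* Compare only the last 200 relative record numbers of  */
    /* the smaller member (records 801-1100 in the example   */
    /* above)                                                */
    CMPFILDTA DGDFN(APPDG) FILE((APPLIB/HISTORY)) ENDDTA(200)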


Specifying the report type, output, and type of processing


The options for selecting processing method, output format, and the contents of the reported differences are similar to those provided for other MIMIX compare commands. For additional details, see Report types and output formats on page 418.

System to receive output


The System to receive output (OUTSYS) parameter indicates the system on which the output will be created. By default, the output is created on the local system. When Output is *OUTFILE and Process while active is *YES, complete outfile information is only available if the System to receive output parameter indicates that the output file is on the data group target system. In this case, the outfile will be updated as the database apply encounters journal entries relating to possible mismatched records. The Wait time (seconds) parameter can be used to ensure that all such outfile updates are complete before the command completes.
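For example, an active compare that directs its outfile to the target system might be requested as follows; OUTSYS is the parameter named here, while the remaining keyword names (DGDFN, ACTIVE, OUTPUT, OUTFILE) are assumptions patterned on other MIMIX compare commands:

    /* Create the outfile on system 2 (the data group target) */
    /* so active processing can complete the mismatch records */
    CMPFILDTA DGDFN(APPDG) ACTIVE(*YES) OUTPUT(*OUTFILE) OUTFILE(MYLIB/CMPRESULT) OUTSYS(*SYS2)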

Interactive and batch processing


On the Submit to batch parameter, the *YES default submits a multi-thread capable batch job. When *NO is specified for the parameter, CMPFILDTA generates a batch immediate job to do the bulk of the processing. A batch immediate job is not processed through a job queue and is identified with a job type of BCI on the WRKACTJOB screen. Similarly, if CMPFILDTA is issued from a batch job whose ALWMLTTHD attribute is *NO, a batch immediate job will also be spawned. In cases where a batch immediate job is generated, the original job waits for the batch immediate job to complete and re-issues any messages generated by CMPFILDTA. Interactive jobs are not permitted to have multiple threads, which are required for CMPFILDTA processing. Thus, you need to be aware of the following issues when a batch immediate job is generated:

The identity of the job will be issued in a message in the original job.
Since the batch immediate job cannot access the interactive job's QTEMP library, outfiles and files to be compared may not reside in QTEMP, even when CMPFILDTA is issued from a multi-thread capable batch job.
Re-issued messages will not have the original from and to program information. Instead, you must view the job log of the generated job to determine this information.
Escape messages created prior to the final message will be converted to diagnostic messages.
Canceling the interactive request will not cancel the batch immediate job.
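To avoid the batch immediate job entirely, the request can be submitted from a job that allows multiple threads. A sketch follows, reusing the assumed CMPFILDTA keyword names from earlier examples; if your release's SBMJOB does not support the ALWMLTTHD parameter directly, set ALWMLTTHD(*YES) in the job description named on SBMJOB instead:

    /* Submit from a multi-thread capable batch job */
    SBMJOB CMD(CMPFILDTA DGDFN(APPDG) FILE((APPLIB/ORDERS))) JOB(CMPFILDTA) ALWMLTTHD(*YES)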

Using the additional parameters


The following parameters allow you to specify an additional level of detail regarding CMPFILDTA command processing. These parameters are available by pressing F10 (Additional parameters).


Transfer definition: The default for the Transfer definition parameter is *DFT. If a data group was specified, the default uses the transfer definition associated with the data group. If no data group was specified, the transfer definition associated with system 2 is used. The CMPFILDTA command requires that you have a TCP/IP transfer definition for communication with the remote system. If your data group is configured for SNA, override the SNA configuration by specifying the name of the transfer definition on the command.

Number of thread groups: The Number of thread groups parameter indicates how many thread groups should be used to perform the comparison. You can specify from 1 to 100 thread groups. When using this parameter, it is important to balance the time required for processing against the available resources. If you increase the number of thread groups in order to reduce processing time, for example, you also increase processor and memory use. The default, *CALC, determines the number of thread groups automatically. To maximize processing efficiency, the value *CALC does not calculate more than 25 thread groups. The actual number of threads used in the comparison is based on the result of the formula 2x + 1, where x is the value specified or the value calculated internally as the result of specifying *CALC. When *CALC is specified, the CMPFILDTA command displays a message showing the value calculated as the number of thread groups. Note: Thread groups are created for primary compare processing only. During setup, multiple threads may be utilized to improve performance, depending on the number of members selected for processing. The number of threads used during setup will not exceed the total number of threads used for primary compare processing. During active processing, only one thread will be used.

Wait time (seconds): The Wait time (seconds) value is only valid when active processing is in effect and specifies the amount of time to wait for active processing to complete. You can specify from 0 to 3600 seconds, or the default *NOMAX. If active processing is enabled and a wait time is specified, CMPFILDTA processing waits the specified time for all pending compare operations processed through the MIMIX replication path to complete. In most cases, the *NOMAX default is highly recommended.

DB apply threshold: The DB apply threshold parameter is only valid during active processing and requires that a data group be specified. The parameter specifies what action CMPFILDTA should take if the database apply session backlog exceeds the threshold warning value configured for the database apply process. The default value *END stops the requested compare and repair action when the database apply threshold is reached; any repair actions that have not been completed are lost. The value *NOMAX allows the compare and repair action to continue even when the database apply threshold has been reached. Continuing processing when the apply process has a large backlog may adversely affect performance of the CMPFILDTA job and its ability to compare a file with an excessive number of outstanding entries. Therefore, *NOMAX should only be used in exceptional circumstances.
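As a worked example of the thread formula, specifying 10 thread groups yields 2(10) + 1 = 21 threads for primary compare processing. A hypothetical request combining these additional parameters follows; every keyword name shown here is an assumption to be confirmed by prompting the command (F10 shows the additional parameters):

    /* Use a specific TCP/IP transfer definition and 10 thread */
    /* groups (21 threads) for the comparison                  */
    CMPFILDTA DGDFN(APPDG) FILE((APPLIB/ORDERS)) TFRDFN(TCPDFN) THDGRP(10)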


Advanced subset options for CMPFILDTA


You can use the Advanced subset options (ADVSUBSET) parameter on the Compare File Data (CMPFILDTA) command for advanced techniques such as comparing records over time and comparing a random sample of data. These techniques provide additional assurance that files are replicated correctly. For example, assume you have a limited batch window. You do not have time to run a total compare every day, but you must ensure that all data is compared over the course of a week. Using the advanced CMPFILDTA capability, you can divide this work over a number of days. Advanced subsetting makes it simple to accomplish this task by comparing 10 percent of your data each weeknight and completing the remaining 50 percent over the weekend. However, as the following example demonstrates, it is always best to compare a random, representative sampling of data; the Advanced subset options also provide this capability. For example, if a member contains 1000 records on Monday, records 1 through 100 will be compared on Monday. By Tuesday, perhaps the member has grown to 1500 records. The second 10 percent, to be processed on Tuesday, will contain records 151 through 300. Records 101 through 150 will not get checked at all. Advanced subsetting provides you with an alternative that does not skip records when members are growing. Advanced subset options are applied independently for each member processed. The advanced subset function assigns the data in each member to multiple non-overlapping subsets in one of two ways. It allows a specified range of these subsets to be compared, which permits a representative sample subset of the data to be compared, and it permits a full compare to be partitioned into multiple CMPFILDTA requests that, in combination, ensure that all data that existed at the time of the first request is compared. To use advanced subsetting, you will need to identify the following:

The number of subsets or bins to define for the compare
The manner in which records are assigned to bins
The specific bins to process

Number of subsets: The first issue to consider when using advanced subset options is how many subsets, or bins, to establish. The Number of subsets element is the number of approximately equal-sized bins to define. These bins are numbered from 1 up to the number specified (N). You must specify at least one bin. Each record is assigned to one of these bins. The Interleave element specifies the manner in which records are assigned to a bin.

Interleave: The Interleave factor specifies the mapping between the relative record number and the bin number. There are two approaches that can be used.

451

Advanced subset options for CMPFILDTA

If you specify *NONE, records in each member are divided on a percentage basis. For example:
Table 63. Interleave *NONE

                              Member A on Monday    Member A on Tuesday
Total records in member:      30                    45
Number of subsets (bins):     3                     3
Interleave:                   *NONE                 *NONE
Records assigned to bin 1:    1-10                  1-15
Records assigned to bin 2:    11-20                 16-30
Records assigned to bin 3:    21-30                 31-45

Note that when the total number of records in a member changes, the mapping also changes. Records that were once assigned to bin 2 may in the future be assigned to bin 1. If you wish to compare all records over the course of a few days, the changing mapping may cause you to miss records. A specific Interleave value is preferable in this case. Specified in bytes, the Interleave value determines the number of contiguous records that should be assigned to each bin before moving to the next bin. Once the last bin is filled, assignment restarts at the first bin. Assume you have specified an interleave value of 20 bytes. The following example is based on the one provided in Table 63:
Table 64. Interleave(20)

                              Member A on Monday           Member A on Tuesday
Total records in member:      30                           45
Record length:                10 bytes                     10 bytes
Number of subsets (bins):     3                            3
Interleave (bytes):           20                           20
Interleave (records):         2                            2
Records assigned to bin 1:    1-2, 7-8, 13-14,             1-2, 7-8, 13-14, 19-20,
                              19-20, 25-26                 25-26, 31-32, 37-38, 43-44
Records assigned to bin 2:    3-4, 9-10, 15-16,            3-4, 9-10, 15-16, 21-22,
                              21-22, 27-28                 27-28, 33-34, 39-40, 45
Records assigned to bin 3:    5-6, 11-12, 17-18,           5-6, 11-12, 17-18, 23-24,
                              23-24, 29-30                 29-30, 35-36, 41-42

If the Interleave and Number of subsets values are constant, the mapping of relative record numbers to bins is maintained despite the growth of member size. Because every bin is eventually selected, comparisons made over several days will compare every record that existed on the first day. In most circumstances, *CALC is recommended for the interleave specification. When you select *CALC, the system determines how many contiguous bytes are assigned to each bin before subsequent bytes are placed in the next bin. This calculated value will not change due to member size changes. Specifying *NONE or a very large interleave factor maximizes processing efficiency, since data in each bin is processed sequentially. Specifying a very small interleave factor can greatly reduce efficiency, as little sequential processing can be done before the file must be repositioned. If you wish to compare a random sample, a smaller interleave factor provides a more random, or scattered, sample to compare. The next elements, First subset and Last subset, allow you to specify which bins to process.

First and last subset: The First subset and Last subset values work in combination to determine a range of bins to compare. For the First subset, the possible values are *FIRST and subset-number. If you select *FIRST, the range to compare will start with bin 1. Last subset has similar values, *LAST and subset-number. When you specify *LAST, the highest numbered bin is the last one processed. To compare a random sample of your data, specify a range of subsets that represents the size of the sample. For example, suppose you wish to compare seven percent of your data. If the number of subsets is 100, the first subset is 1, and the last subset is 7, seven percent of the data is compared. A first subset value of 21 and a last subset value of 27 would also compare seven percent of your data, but it would compare a different seven percent than the first example.


To compare all your data over the course of several days, specify a number of subsets and an interleave factor that allow you to size each day's workload as your needs require. For example, you would keep the Number of subsets value and interleave factor constant, but vary the First and Last subset values each day. The following settings could be used over the course of a week to compare all of your data:
Table 65. Using First and last subset to compare data

Day of week   Number of subsets (bins)   Interleave   First subset   Last subset   Percentage compared
Monday        100                        *CALC        1              10            10
Tuesday       100                        *CALC        11             20            10
Wednesday     100                        *CALC        21             30            10
Thursday      100                        *CALC        31             40            10
Friday        100                        *CALC        41             50            10
Saturday      100                        *CALC        51             65            15
Sunday        100                        *CALC        66             100           35

Note: You can automate these tasks using MIMIX Monitor. Refer to the MIMIX Monitor documentation for more information.
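As a sketch, Monday's request from Table 65 might be entered as follows; ADVSUBSET is the parameter named in this topic, the element order shown (number of subsets, interleave, first subset, last subset) follows the order in which the elements are described and should be confirmed by prompting the command, and the DGDFN and FILE keyword names are assumptions:

    /* Monday: 100 bins, calculated interleave, compare bins */
    /* 1 through 10 (10 percent of the data)                 */
    CMPFILDTA DGDFN(APPDG) FILE((APPLIB/*ALL)) ADVSUBSET(100 *CALC 1 10)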

Ending CMPFILDTA requests


The Compare File Data (CMPFILDTA) command, or a rule which calls it, can be long running and may exceed the time you have available for it to run. The CMPFILDTA command recognizes requests to end the job in a controlled manner (ENDJOB OPTION(*CNTRLD)). Messages indicate the step within CMPFILDTA processing at which the end was requested. The report and output file contain as much information as possible with the data available at the step in progress when the job ended. The output may not be accurate because the full CMPFILDTA request did not complete. The content of the report and output file is most valuable if the command completed processing through the end of the phase 1 compare. The output may be incomplete if the end occurred earlier. If processing did not complete to a point where MIMIX can accurately determine the result of the compare, the value *UN (unknown) is placed in the Difference Indicator. Note: If the CMPFILDTA command has been long running or has encountered many errors, you may need to specify more time on the ENDJOB command's Delay time, if *CNTRLD (DELAY) parameter. The default value of 30 seconds may not be adequate in these circumstances.
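For example, the following ends a long-running compare in a controlled manner with an extended delay; the job identifier shown is illustrative:

    /* Allow 300 seconds instead of the 30-second default so  */
    /* the report and outfile can be completed with the data  */
    /* gathered so far                                        */
    ENDJOB JOB(123456/MIMIXOWN/CMPFILDTA) OPTION(*CNTRLD) DELAY(300)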

Comparing file member data - basic procedure (non-active)


You can use the CMPFILDTA command to ensure that data required for replication exists on both systems, and any time you need to verify that files are synchronized between systems. You can optionally specify that results of the comparison are placed in an outfile. Before you begin, see the recommendations, restrictions, and security considerations described in Considerations for using the CMPFILDTA command on page 441. You should also read Specifying CMPFILDTA parameter values on page 445 for additional information about parameters and values that you can specify.

To perform a basic data comparison, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7 (Compare file data) and press Enter.

3. The Compare File Data (CMPFILDTA) command appears. At the Data group definition prompts, do one of the following:
To compare data for all files defined by the data group file entries for a particular data group definition, specify the data group name and skip to Step 6.
To compare data by file name only, specify *NONE and continue with the next step.
To compare a subset of files defined to a data group, specify the data group name and continue with the next step.

4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the file and library to which files on the local system are compared.


Note: The System 2 file and System 2 library values are ignored if a data group is specified on the Data group definition prompts.
f. Press Enter.

5. The System 2 parameter prompt appears if you are comparing files not defined to a data group. If necessary, specify the name of the remote system to which files on the local system are compared.

6. At the Repair on system prompt, accept *NONE to indicate that no repair action is done.

7. At the Process while active prompt, specify *NO to indicate that active processing technology should not be used in the comparison.

8. At the File entry status prompt, specify *ACTIVE to process only those file members that are active.

9. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1. Note: This parameter is ignored when a data group definition is specified.

10. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2. Note: This parameter is ignored when a data group definition is specified.

11. At the Subsetting option prompt, specify *ALL to select all data and to indicate that no subsetting is performed.

12. At the Report type prompt, do one of the following:
If you want all compared objects to be included in the report, accept the default.
If you only want objects with detected differences to be included in the report, specify *DIF.
If you want to include the member details and relative record number (RRN) of the first 1,000 objects that have differences, specify *RRN.
Notes:
The *RRN value can only be used when *NONE is specified for the Repair on system prompt and *OUTFILE is specified for the Output prompt.
The *RRN value outputs to a unique outfile (MXCMPFILR). Specifying *RRN can help resolve situations where a discrepancy is known to exist but you are unsure which system contains the correct data. This value provides the information that enables you to display the specific records on the two systems and determine the system on which the file should be repaired.

13. At the Output prompt, do one of the following:
To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step.
To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.
If you do not want to generate output, specify *NONE. Press Enter and skip to Step 18.
To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.

14. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)

15. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.

16. At the System to receive output prompt, specify the system on which the output should be created. Note: If *YES is specified on the Process while active prompt and *OUTFILE was specified on the Outfile prompt, you must select *SYS2 for the System to receive output prompt.

17. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.

18. At the Submit to batch prompt, do one of the following:
If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
To submit the job for batch processing, accept the default. Press Enter and continue with the next step.

19. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

20. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

21. To start the comparison, press Enter.


Comparing and repairing file member data - basic procedure


You can use the CMPFILDTA command to repair data on the local or remote system. Before you begin, see the recommendations, restrictions, and security considerations described in Considerations for using the CMPFILDTA command on page 441. You should also read Specifying CMPFILDTA parameter values on page 445 for additional information about parameters and values that you can specify.

To compare and repair data, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7 (Compare file data) and press Enter.

3. The Compare File Data (CMPFILDTA) command appears. At the Data group definition prompts, do one of the following:
To compare data for all files defined by the data group file entries for a particular data group definition, specify the data group name and skip to Step 6.
To compare data by file name only, specify *NONE and continue with the next step.
To compare a subset of files defined to a data group, specify the data group name and continue with the next step.

4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the file and library to which files on the local system are compared. Note: The System 2 file and System 2 library values are ignored if a data group is specified on the Data group definition prompts.
f. Press Enter.


5. The System 2 parameter prompt appears if you are comparing files not defined to a data group. If necessary, specify the name of the remote system to which files on the local system are compared.

6. At the Repair on system prompt, specify *SYS1, *SYS2, *LOCAL, *TGT, *SRC, or the system definition name to indicate the system on which repair action should be performed. Note: *TGT and *SRC are only valid if you are comparing files defined to a data group. *SRC is not valid if active processing is in effect.

7. At the Process while active prompt, specify *NO to indicate that active processing technology should not be used in the comparison.

8. At the File entry status prompt, specify *ACTIVE to process only those file members that are active.

9. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1. Note: This parameter is ignored when a data group definition is specified.

10. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2. Note: This parameter is ignored when a data group definition is specified.

11. At the Subsetting option prompt, specify *ALL to select all data and to indicate that no subsetting is performed.

12. At the Report type prompt, do one of the following:
If you want all compared objects to be included in the report, accept the default.
If you only want objects with detected differences to be included in the report, specify *DIF.

13. At the Output prompt, do one of the following:
To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step.
To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.
If you do not want to generate output, specify *NONE. Press Enter and skip to Step 18.
To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.

14. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)

15. At the Output member options prompts, do the following:


a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.

16. At the System to receive output prompt, specify the system on which the output should be created. Note: If *YES is specified on the Process while active prompt and *OUTFILE was specified on the Outfile prompt, you must select *SYS2 for the System to receive output prompt.

17. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.

18. At the Submit to batch prompt, do one of the following:
If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
To submit the job for batch processing, accept the default. Press Enter.

19. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

20. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

21. To start the comparison, press Enter.


Comparing and repairing file member data - members on hold (*HLDERR)


Members that are being held due to error (*HLDERR) can be repaired with the Compare File Data (CMPFILDTA) command during active processing. When members in *HLDERR status are processed, the CMPFILDTA command works cooperatively with the database apply (DBAPY) process to compare and repair the members and, when possible, restore them to an active state. Before you begin, see the recommendations, restrictions, and security considerations described in Considerations for using the CMPFILDTA command on page 441. You should also read Specifying CMPFILDTA parameter values on page 445 for additional information about parameters and values that you can specify.

The following procedure repairs a member without transmitting the entire member. As such, this method is generally faster than other methods of repairing members in *HLDERR status that transmit the entire member or file. However, if significant activity has occurred on the source system that has not been replicated on the target system, it may be faster to synchronize the member using the Synchronize Data Group File Entry (SYNCDGFE) command.

To repair a member with a status of *HLDERR, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7 (Compare file data) and press Enter.

3. The Compare File Data (CMPFILDTA) command appears. At the Data group definition prompts, you must specify a data group name. Note: If you want to compare data for all files defined by the data group file entries for a particular data group definition, skip to Step 5.

4. At the File prompts, you can optionally specify elements for one or more object selectors that act as filters to the files defined to the data group indicated in Step 3. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. Press Enter.
Note: The System 2 file and System 2 library values are ignored when a data group is specified on the Data group definition prompts.

5. At the Repair on system prompt, specify *TGT to indicate that repair action be performed on the target system.

6. At the Process while active prompt, specify *YES to indicate that active processing technology should be used in the comparison.

7. At the File entry status prompt, specify *HLDERR to process members being held due to error only.

8. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1. Note: This parameter is ignored when a data group definition is specified.

9. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2. Note: This parameter is ignored when a data group definition is specified.

10. At the Output prompt, do one of the following:
To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step.
To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.
If you do not want to generate output, specify *NONE. Press Enter and skip to Step 15.
To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.

11. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)

12. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.

13. At the System to receive output prompt, specify the system on which the output should be created.

14. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.

15. At the Submit to batch prompt, do one of the following:


If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
To submit the job for batch processing, accept the default. Press Enter.

16. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

17. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

18. To compare and repair the file, press Enter.
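A hypothetical command-line equivalent of this procedure follows; STATUS is the parameter named in this topic (see the File entry status discussion on page 445), while DGDFN, REPAIR, and ACTIVE are assumed keyword names to be confirmed by prompting the command:

    /* Repair members held due to error on the target system */
    /* using active processing                                */
    CMPFILDTA DGDFN(APPDG) REPAIR(*TGT) ACTIVE(*YES) STATUS(*HLDERR)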


Comparing file member data using active processing technology


You can set the CMPFILDTA command to use active processing technology when a data group is specified on the command. Before you begin, see the recommendations, restrictions, and security considerations described in Considerations for using the CMPFILDTA command on page 441. You should also read Specifying CMPFILDTA parameter values on page 445 for additional information about parameters and values that you can specify. Note: Do not compare data using active processing technology if the apply process is 180 seconds or more behind, or has exceeded a threshold limit.

To compare data using active processing, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7 (Compare file data) and press Enter.

3. The Compare File Data (CMPFILDTA) command appears. At the Data group definition prompts, do one of the following:
To compare data for all files defined by the data group file entries for a particular data group definition, specify the data group name and skip to Step 5.
To compare a subset of files defined to a data group, specify the data group name and continue with the next step.

4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, accept the defaults.
f. Press Enter.

5. At the Repair on system prompt, specify *TGT to indicate that repair action be performed on the target system of the data group.


6. At the Process while active prompt, specify *YES or *DFT to indicate that active processing technology be used in the comparison. Since a data group is specified on the Data group definition prompts, *DFT will render the same results as *YES.

7. At the File entry status prompt, specify *ACTIVE to process only those file members that are active.

8. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1. Note: This parameter is ignored when a data group definition is specified.

9. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2. Note: This parameter is ignored when a data group definition is specified.

10. At the Subsetting option prompt, specify *ALL to select all data and to indicate that no subsetting is performed.

11. At the Report type prompt, do one of the following:
If you want all compared objects to be included in the report, accept the default.
If you only want objects with detected differences to be included in the report, specify *DIF.

12. At the Output prompt, do one of the following:
To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step.
To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.
If you do not want to generate output, specify *NONE. Press Enter and skip to Step 17.
To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.

13. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)

14. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.

15. At the System to receive output prompt, specify the system on which the output should be created. Note: If *OUTFILE was specified on the Outfile prompt, it is recommended that you select *SYS2 for the System to receive output prompt.


16. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used when the command is invoked from outside of shipped audits. When used as part of shipped audits, the default value is *OMIT since the results are already placed in an outfile.

17. At the Submit to batch prompt, do one of the following:
If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
To submit the job for batch processing, accept the default. Press Enter and continue with the next step.

18. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

19. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

20. To start the comparison, press Enter.


Comparing file member data using subsetting options


You can use the CMPFILDTA command to audit your entire database over a number of days. Before you begin, see the recommendations, restrictions, and security considerations described in Considerations for using the CMPFILDTA command on page 441. You should also read Specifying CMPFILDTA parameter values on page 445 for additional information about parameters and values that you can specify. Note: Do not compare data using active processing technology if the apply process is 180 seconds or more behind, or has exceeded a threshold limit.

To compare data using the subsetting options, do the following:

1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.

2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7 (Compare file data) and press Enter.

3. The Compare File Data (CMPFILDTA) command appears. At the Data group definition prompts, do one of the following:
To compare data for all files defined by the data group file entries for a particular data group definition, specify the data group name and skip to Step 6.
To compare data by file name only, specify *NONE and continue with the next step.
To compare a subset of files defined to a data group, specify the data group name and continue with the next step.

4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the file and library to which files on the local system are compared. Note: The System 2 file and System 2 library values are ignored if a data group is specified on the Data group definition prompts.
f. Press Enter.


5. The System 2 parameter prompt appears if you are comparing files not defined to a data group. If necessary, specify the name of the remote system to which files on the local system are compared.

6. At the Repair on system prompt, specify a value if you want repair action performed. Note: To process members in *HLDERR status, you must specify *TGT. See Step 8.

7. At the Process while active prompt, specify whether active processing technology should be used in the comparison. Notes:
To process members in *HLDERR status, you must specify *YES. See Step 8.
If you are comparing files associated with a data group, *DFT uses active processing. If you are comparing files not associated with a data group, *DFT does not use active processing.
Do not compare data using active processing technology if the apply process is 180 seconds or more behind, or has exceeded a threshold limit.

8. At the File entry status prompt, you can select files with specific statuses for compare and repair processing. Do one of the following:
a. To process active members only, specify *ACTIVE.
b. To process both active members and members being held due to error (*ACTIVE and *HLDERR), specify the default value *ALL.
c. To process members being held due to error only, specify *HLDERR.
Note: When *ALL or *HLDERR is specified for the File entry status prompt, *TGT must also be specified for the Repair on system prompt (Step 6) and *YES must be specified for the Process while active prompt (Step 7).

9. At the Subsetting option prompt, you must specify a value other than *ALL to use additional subsetting. Do one of the following:
To compare a fixed range of data, specify *RANGE, then press Enter to see additional prompts. Skip to Step 10.
To define how many subsets should be established, how member data is assigned to the subsets, and which range of subsets to compare, specify *ADVANCED and press Enter to see additional prompts. Skip to Step 11.
To indicate that only data specified on the Records at end of file prompt is compared, specify *ENDDTA and press Enter to see additional prompts. Skip to Step 12.

10. At the Subset range prompts, do the following:


a. At the First record prompt, specify the relative record number of the first record to compare in the range.
b. At the Last record prompt, specify the relative record number of the last record to compare in the range.
c. Skip to Step 12.

11. At the Advanced subset options prompts, do the following:
a. At the Number of subsets prompt, specify the number of approximately equal-sized subsets to establish. Subsets are numbered beginning with 1.
b. At the Interleave prompt, specify the interleave factor. In most cases, the default *CALC is highly recommended.
c. At the First subset prompt, specify the first subset in the sequence of subsets to compare.
d. At the Last subset prompt, specify the last subset in the sequence of subsets to compare.

12. At the Records at end of file prompt, specify the number of records at the end of the member to compare. These records are compared regardless of other subsetting criteria. Note: If *ENDDTA is specified on the Subsetting option prompt, you must specify a value other than *NONE.

13. At the Report type prompt, do one of the following:
If you want all compared objects to be included in the report, accept the default.
If you only want objects with detected differences to be included in the report, specify *DIF.
If you want to include the member details and relative record number (RRN) of the first 1,000 objects that have differences, specify *RRN.
Notes:
The *RRN value can only be used when *NONE is specified for the Repair on system prompt and *OUTFILE is specified for the Output prompt.
The *RRN value outputs to a unique outfile (MXCMPFILR). Specifying *RRN can help resolve situations where a discrepancy is known to exist but you are unsure which system contains the correct data. This value provides the information that enables you to display the specific records on the two systems and determine the system on which the file should be repaired.

14. At the Output prompt, do one of the following:
To generate spooled output that is printed, accept the default, *PRINT. Press Enter and continue with the next step.
To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.


If you do not want to generate output, specify *NONE. Press Enter and skip to Step 19.
To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.

15. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)

16. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.

17. At the System to receive output prompt, specify the system on which the output should be created. Note: If *YES is specified on the Process while active prompt and *OUTFILE was specified on the Outfile prompt, you must select *SYS2 for the System to receive output prompt.

18. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.

19. At the Submit to batch prompt, do one of the following:
If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
To submit the job for batch processing, accept the default. Press Enter and continue with the next step.

20. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.

21. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.

22. To start the comparison, press Enter.


Chapter 20

Synchronizing data between systems


This chapter contains information about support provided by MIMIX commands for synchronizing data between two systems. The data that MIMIX replicates must be synchronized on several occasions. During initial configuration of a data group, you need to ensure that the data to be replicated is synchronized between both systems defined in the data group. If you change the configuration of a data group to add new data group entries, the objects must be synchronized. You may also need to synchronize a file or object if an error occurs that causes the two systems to fall out of synchronization. The automatic recovery features of MIMIX AutoGuard also use synchronize commands to recover differences detected during replication and audits. If automatic recovery policies are disabled, you may need to use synchronize commands to correct a file or object in error or to correct differences detected by audits or compare commands.

The Lakeview-provided synchronize commands can be loosely grouped by common characteristics and the level of function they provide. Topic Considerations for synchronizing using MIMIX commands on page 474 describes subjects that apply to more than one group of commands, such as the maximum size of an object that can be synchronized, how large objects are handled, and how user profiles are addressed.
Initial synchronization: Initial synchronization can be performed manually with a variety of MIMIX and IBM commands, or by using the Synchronize Data Group (SYNCDG) command. The SYNCDG command is intended especially for performing the initial synchronization of one or more data groups and uses the auditing and automatic recovery support provided by MIMIX AutoGuard. The command can be long-running. For information about initial synchronization, see these topics:
- Performing the initial synchronization on page 483 describes how to establish a synchronization point and identifies other key information.
- Environments using MIMIX support for IBM WebSphere MQ have additional requirements for the initial synchronization of replicated queue managers. For more information, see the MIMIX for IBM WebSphere MQ book.

Synchronize commands: The commands Synchronize Object (SYNCOBJ), Synchronize IFS Object (SYNCIFS), and Synchronize DLO (SYNCDLO) provide robust support in MIMIX environments for synchronizing library-based objects, IFS objects, and DLOs, as well as their associated object authorities. Each command has considerable flexibility for selecting objects associated with or independent of a data group. Additionally, these commands are often called by other functions, such as by the automatic recovery features of MIMIX AutoGuard and by options to synchronize objects identified in tracking entries used with advanced journaling. For additional information, see:
- About MIMIX commands for synchronizing objects, IFS objects, and DLOs on page 478
- About synchronizing tracking entries on page 482

Synchronize Data Group Activity Entry: The Synchronize DG Activity Entry (SYNCDGACTE) command provides the ability to synchronize library-based objects, IFS objects, and DLOs that are associated with data group activity entries which have specific status values. The contents of the object and its attributes and authorities are synchronized. For additional information, see About synchronizing data group activity entries (SYNCDGACTE) on page 479.
Synchronize Data Group File Entry: The Synchronize DG File Entry (SYNCDGFE) command provides the means to synchronize database files associated with a data group by data group file entries. Additional options provide the means to address triggers, referential constraints, logical files, and related files. For more information about this command, see About synchronizing file entries (SYNCDGFE command) on page 480.
Send Network commands: The Send Network Object (SNDNETOBJ), Send Network IFS Object (SNDNETIFS), and Send Network DLO (SNDNETDLO) commands support fewer usage options and usability benefits than the Synchronize commands. These commands may require multiple invocations per library, path, or directory, respectively. These commands do not support synchronizing based on a data group name.
Procedures: The procedures in this chapter are for commands that are accessible from the MIMIX Compare, Verify, and Synchronize menu. Typically, when you need to synchronize individual items in your configuration, the best approach is to use the options provided on the displays where they are appropriate to use. The options call the appropriate command and, in many cases, pre-select some of the fields. The following procedures are included:
- Synchronizing database files on page 489
- Synchronizing objects on page 491
- Synchronizing IFS objects on page 495
- Synchronizing DLOs on page 499
- Synchronizing data group activity entries on page 503
- Synchronizing tracking entries on page 505
- Sending library-based objects on page 506
- Sending IFS objects on page 508
- Sending DLO objects on page 509

473

Considerations for synchronizing using MIMIX commands

Considerations for synchronizing using MIMIX commands


For discussion purposes, the synchronize commands are grouped as follows:
- Synchronize commands (SYNCOBJ, SYNCIFS, and SYNCDLO)
- Synchronize Data Group Activity Entry (SYNCDGACTE)
- Synchronize Data Group File Entry (SYNCDGFE)

The following subtopics apply to more than one group of commands. Before you synchronize, you should be aware of the information in the following topics:
- Limiting the maximum sending size on page 474
- Synchronizing user profiles on page 474
- Synchronizing large files and objects on page 476
- Status changes caused by synchronizing on page 476
- Synchronizing objects in an independent ASP on page 477

Limiting the maximum sending size


The Synchronize commands (SYNCOBJ, SYNCIFS, and SYNCDLO) and the Synchronize Data Group File Entry (SYNCDGFE) command provide the ability to limit the size of files or objects transmitted during synchronization with the Maximum sending size (MAXSIZE) parameter. By default, no maximum value is specified. You can also specify the value *TFRDFN to use the threshold size from the transfer definition associated with the data group (to preserve behavior prior to changes made in V4R4 service pack SPC05SP4, specify *TFRDFN), or specify a value between 1 and 9,999,999 megabytes (MB). On the SYNCDGFE command, the value *TFRDFN is only allowed when the Sending mode (METHOD) parameter specifies *SAVRST.
When automatic recovery actions initiate a Synchronize or SYNCDGFE command, the policies in effect determine the value used for the command's MAXSIZE parameter. The Set MIMIX Policies (SETMMXPCY) command sets policies for automatic recovery actions and for the synchronize threshold used by the commands MIMIX invokes to perform recovery actions. When any of the automatic recovery policies are enabled (DBRCY, OBJRCY, or AUDRCY), the value of the Sync. threshold size (SYNCTHLD) policy is used for the MAXSIZE value on the command. You can adjust the SYNCTHLD policy value for the installation or optionally set a value for a specific data group.
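For illustration only, the following sketch shows how the size limit and the related policy might be specified from a command line. The DGDFN, MAXSIZE, and SYNCTHLD keywords are taken from the descriptions above; the data group name and megabyte values are placeholders, and the exact parameter combinations are assumptions rather than confirmed syntax:

SYNCOBJ DGDFN(MYDG) MAXSIZE(500)     /* limit synchronized objects to 500 MB; MYDG is a placeholder */
SETMMXPCY DGDFN(MYDG) SYNCTHLD(500)  /* assumed: set the synchronize threshold policy for one data group */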

Synchronizing user profiles


User profile objects (*USRPRF) can be synchronized explicitly or implicitly. The Synchronize commands (SYNCOBJ, SYNCIFS, and SYNCDLO) and the Send Network Objects (SNDNETOBJ) command can synchronize user profiles either implicitly or explicitly. The following information describes slight variations in processing.

Synchronizing user profiles with SYNCnnn commands


The SYNCOBJ command explicitly synchronizes user profiles when you specify *USRPRF for the object type on the command. The status of the user profile on the target system is affected as follows:
- If you specified a data group and a user profile which is configured for replication, the status of the user profile on the target system is the value specified in the configured data group object entry.
- If you specified a user profile but did not specify a data group, the following occurs: If the user profile exists on the target system, its status on the target system remains unchanged. If the user profile does not exist on the target system, it is synchronized and its status on the target system is set to *DISABLED.
When synchronizing other object types, the SYNCOBJ, SYNCIFS, and SYNCDLO commands implicitly synchronize user profiles associated with the object if they do not exist on the target system. Although only the requested object type, such as *PGM, is specified on these commands, the owning user profile, the primary group profile, and user profiles that have private authorities to an object are implicitly synchronized, as follows:
- When the Synchronize command specifies a data group and that data group has a data group object entry which includes the user profile, the object and the user profile are synchronized. The status of the user profile on the target system is set to match the value from the data group object entry.
- If a data group object entry excludes the user profile from replication, the object is synchronized and its owner is changed to the default owner indicated in the data group definition. The user profile is not synchronized.
- When the Synchronize command specifies a data group and that data group does not have a data group object entry for the user profile, the object and the associated user profile are synchronized. The status of the user profile on the target system is set to *DISABLED.
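As a sketch, an explicit request to synchronize a user profile that is configured for replication might look like the following. The OBJ selector syntax is an assumption based on the Object prompts described later in this chapter, and the data group and profile names are placeholders:

SYNCOBJ DGDFN(MYDG) OBJ((PAYCLERK QSYS *USRPRF)) /* assumed selector syntax; placeholder names */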

Synchronizing user profiles with the SNDNETOBJ command


The Send Network Objects (SNDNETOBJ) command explicitly synchronizes user profiles when you specify *USRPRF for the object type on the command. The status of the user profile on the target system is affected as follows:
- If the user profile exists on the target system, its status on the target system remains unchanged.
- If the user profile does not exist on the target system, it is synchronized and its status on the target system is set to *DISABLED.


When synchronizing other object types, this command implicitly synchronizes user profiles associated with the object if they do not exist on the target system. Although only the requested object type, such as *PGM, is specified on the command, the owning user profile, the primary group profile, and user profiles that have private authorities to an object are implicitly synchronized. The object and associated user profiles are synchronized. The status of the user profile on the target system is set to *DISABLED.

Missing system distribution directory entries automatically added


When a missing user profile is detected during replication or synchronization of an object, MIMIX automatically adds any missing system distribution directory entries for user profiles. The synchronize (SYNCnnn) and the SNDNETOBJ commands provide this capability. If replication or a synchronization request determines that a user profile is missing on the target system and a system directory entry exists on the source system for that user profile, MIMIX adds the system distribution directory entry for the user profile on the target system and specifies these values:
- User ID: same value as retrieved from the source system
- Description: same value as retrieved from the source system
- Address: local-system name
- User profile: user-profile name
- All other directory entry fields are blank

Synchronizing large files and objects


When configured for advanced journaling, large objects (LOBs) can be synchronized through the user (database) journal. You can synchronize a database file that contains LOB data using the Synchronize Data Group File Entry (SYNCDGFE) command.
If advanced journaling is not used in your environment, you may want to consider synchronizing large files or objects (over 1 GB) outside of MIMIX. During traditional synchronization, large files or objects can negatively impact performance by consuming too much bandwidth. Certain commands for synchronizing provide the ability to limit the size of files or objects transmitted during synchronization; see Limiting the maximum sending size on page 474 for more information. Similarly, the Threshold size (THLDSIZE) parameter on the transfer definition can be used to limit the size of objects transmitted with the Send Network Object commands.

Status changes caused by synchronizing


In some circumstances the Synchronize Data Group Activity Entry (SYNCDGACTE) command changes the status of activity entries when the command completes. For additional details, see About synchronizing data group activity entries (SYNCDGACTE) on page 479.


The Synchronize commands (SYNCOBJ, SYNCIFS, and SYNCDLO) do not change the status of activity entries associated with the objects being synchronized. Activity entries retain the same status after the command completes.
Note: The SYNCIFS command will change the status of an activity entry for an IFS object configured for advanced journaling.
When advanced journaling is configured, each replicated activity has associated tracking entries. When you use the SYNCOBJ or SYNCIFS commands to synchronize an object that has a corresponding tracking entry, the status of the tracking entry will change to *ACTIVE upon successful completion of the synchronization request. If the synchronization is not successful, the tracking entry will remain in its original status or have a status of *HLD. If the data group is not active, the status of the tracking entry will be updated once the data group is restarted.

Synchronizing objects in an independent ASP


When synchronizing data that is located in an independent ASP, be aware of the following:
- In order for MIMIX to access objects located in an independent ASP, do one of the following on the Synchronize Object (SYNCOBJ) command: Specify the data group definition. If no data group is specified, you must specify values for the System 1 ASP group or device, System 2 ASP device number, and System 2 ASP device name parameters.
- In order for the Send Network Object (SNDNETOBJ) command to access objects that are located in an independent auxiliary storage pool (ASP) on the source system, you must first use the IBM command Set ASP Group (SETASPGRP) on the local system before using the SNDNETOBJ command.
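For example, before issuing a SNDNETOBJ request for objects in an independent ASP, the job's ASP group can be set with the IBM command shown in this sketch; the ASP group name is a placeholder:

SETASPGRP ASPGRP(IASP01) /* IASP01 is a placeholder independent ASP group name */

The SNDNETOBJ request is then issued from that same job.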


About MIMIX commands for synchronizing objects, IFS objects, and DLOs
The Synchronize Object (SYNCOBJ), Synchronize IFS (SYNCIFS), and Synchronize DLO (SYNCDLO) commands provide versatility for synchronizing objects and their authority attributes.
Where to run: The synchronize commands can be run from either system. However, if you run these commands from a target system, you must specify the name of a data group to avoid overwriting the objects on the source system.
Identifying what to synchronize: On each command, you can identify objects to synchronize by specifying a data group, a subset of a data group, or by specifying objects independently of a data group. When you specify a data group, its source system determines the objects to synchronize. The objects to be synchronized by the command are the same as those identified for replication by the data group. For example, specifying a data group on the SYNCOBJ command will synchronize the same library-based objects as those configured for replication by the data group. If you specify a data group as well as additional object information in command parameters, the additional parameter information is used to filter the list of objects identified for the data group.
When no data group is specified, the local system becomes the source system and a target system must be identified. The list of objects to synchronize is generated on the local system. For more information about the object selection criteria used when no data group is specified on these commands, see Object selection for Compare and Synchronize commands on page 399.

Each command has a Synchronize authorities parameter to indicate whether authority attributes are synchronized. By default, the object and all authority-related attributes are synchronized. You can also synchronize only the object or only the authority attributes of an object. Authority attributes include ownership, authorization list, primary group, and public and private authorities. When you use the SYNCOBJ command to synchronize only the authorities for an object and a data group name is not specified, the command could fail if any files processed by the command are cooperatively processed, an active data group contains these files, and the database apply job has a lock on these files.
When to run: Each command can be run whether the data group is active or inactive. Using the SYNCOBJ, SYNCIFS, and SYNCDLO commands during off-peak usage or when the objects being synchronized are in a quiesced state reduces contention for object locks. When using the SYNCIFS command for a data group configured for advanced journaling, the data group can be active but it should not have a backlog of unprocessed entries.


Additional parameters: On each command, the following parameters provide additional control of the synchronization process.
- The Save active parameter provides the ability to save the object in an active environment using IBM's save while active support. Values supported are the same as those used in related IBM commands.
- The Save active wait time parameter specifies the amount of time to wait for a commit boundary or for a lock on an object. If a lock is not obtained in the specified time, the object is not saved. If a commit boundary is not reached in the specified time, the save operation ends and the synchronization attempt fails.
- The Maximum sending size (MB) parameter specifies the maximum size that an object can be in order to be synchronized. For more information, see Limiting the maximum sending size on page 474.
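A request that combines these controls might be sketched as follows. The SAVACT and SAVACTWAIT keywords are assumptions modeled on the related IBM save commands that the text refers to; the data group name and values are placeholders:

SYNCOBJ DGDFN(MYDG) SAVACT(*SYSDFN) SAVACTWAIT(120) MAXSIZE(500) /* assumed keywords; placeholder values */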

About synchronizing data group activity entries (SYNCDGACTE)


The Synchronize Data Group Activity Entry (SYNCDGACTE) command supports the ability to synchronize library-based objects, IFS objects, or DLOs associated with data group activity entries. Activity entries whose status falls in the following categories can be synchronized: *ACTIVE, *COMPLETED, *DELAYED, or *FAILED. The contents of the object, its attributes, and its authorities are synchronized between the source and target systems.
Note: From the 5250 emulator, data group activity and the status category of the represented object are listed on the Work with Data Group Activity display (WRKDGACT command). The specific status of individual activity entries appears on the Work with DG Activity Entries display (WRKDGACTE command).
The data group can either be active or inactive during the synchronization request. If the item you are synchronizing has multiple activity entries with varying statuses (for example, an entry with a status of completed, followed by a failed entry, and subsequent delayed entries), the SYNCDGACTE command will find the first non-completed activity entry and synchronize it. The same SYNCDGACTE request will then find the next non-completed entry and synchronize it. The SYNCDGACTE request will continue to synchronize these non-completed entries until all entries for that object have been synchronized. Any existing active, delayed, or failed activity entries for the specified object are processed and set to completed by synchronization (CZ) when the synchronization request completes successfully. When all activity entries for the specified object are already completed and the synchronization request completes successfully, only the status of the last completed entry is changed from completed (CP) to completed by synchronization (CZ).
Not supported: Spooled files and cooperatively processed files are not eligible to be synchronized using the SYNCDGACTE command.
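To locate entries that are eligible, the work display can be called from a command line, as in the sketch below; the DGDFN keyword on this command is an assumption and the data group name is a placeholder:

WRKDGACTE DGDFN(MYDG) /* assumed parameter; MYDG is a placeholder */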


Status changes due to synchronization: During synchronization processing, if the data group is active, the status of the activity entries being synchronized is set to pending synchronization (PZ) and then to pending completion (PC). When the synchronization request completes, the status of the activity entries is set to either completed by synchronization (CZ) or failed synchronization (FZ). If the data group is inactive, the status of the activity entries remains either pending synchronization (PZ) or pending completion (PC) when the synchronization request completes. When the data group is restarted, the status of the activity entries is set to either completed by synchronization (CZ) or failed synchronization (FZ).

About synchronizing file entries (SYNCDGFE command)


The Synchronize Data Group File Entry (SYNCDGFE) command synchronizes database files associated with a data group by data group file entries.
Active data group required: Because the SYNCDGFE command runs through a database apply job, the data group must be active when the command is used.
Choice of what to synchronize: The Sending mode (METHOD) parameter provides granularity in specifying what is synchronized. Table 66 describes the choices.
Table 66. Sending mode (METHOD) choices on the SYNCDGFE command.

*DATA     This is the default value. Only the physical file data is replicated using MIMIX Copy Active File processing. File attributes are not replicated using this method. If the file exists on the target system, MIMIX refreshes its contents. If the file format is different on the target system, the synchronization will fail. If the file does not exist on the target system, MIMIX uses save and restore operations to create the file on the target system and then uses copy active file processing to fill it with data from the file on the source system.
*ATR (1)  Only the physical file attributes are replicated and synchronized.
*AUT (1)  Only the authorities for the physical file are replicated and synchronized.
*SAVRST   The content and attributes are replicated using the IBM i save and restore commands. This method allows save-while-active operations. This method also has the capability to save associated logical files.

1. Available when service pack SP070.00.0 or higher is installed.

Files with triggers: The SYNCDGFE command provides the ability to optionally disable triggers during synchronization processing and enable them again when processing is complete. The Disable triggers on file (DSBTRG) parameter specifies whether the database apply process (used for synchronization) disables triggers when processing a file. The default value *DGFE uses the data group file entry to determine whether triggers should be disabled. The value *YES disables triggers on the target system during synchronization.


If configuration options for the data group (or, optionally, for a data group file entry) allow MIMIX to replicate trigger-generated entries and disable the triggers, you must specify *DATA as the sending mode when synchronizing a file with triggers.
Including logical files: The Include logical files (INCLF) parameter allows you to include any attached logical files in the synchronization request. This parameter is only valid when *SAVRST is specified for the Sending mode prompt.
Physical files with referential constraints: Physical files with referential constraints require a field in another physical file to be valid. When synchronizing physical files with referential constraints, ensure all files in the referential constraint structure are synchronized concurrently during a time of minimal activity on the source system. Doing so will ensure the integrity of synchronization points.
Including related files: You can optionally choose whether the synchronization request will include files related to the specified file by specifying *YES for the Include related (RELATED) parameter. Related files are those physical files which have a relationship with the selected physical file by means of one or more join logical files. Join logical files are logical files attached to fields in two or more physical files. The Include related (RELATED) parameter defaults to *NO. In some environments, specifying *YES could result in a high number of files being synchronized, which could strain available communications and take a significant amount of time to complete.
A physical file being synchronized cannot be name mapped if it is not in the same library as the logical file associated with it. Logical files may be mapped by using object entries.
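Putting these parameters together, a save/restore-based request might be sketched as follows. The DGDFN, METHOD, INCLF, DSBTRG, and RELATED keywords come from the descriptions above, but the combination shown is illustrative only; the data group name is a placeholder and the file selection parameters, which are set through the command prompts, are omitted:

SYNCDGFE DGDFN(MYDG) METHOD(*SAVRST) INCLF(*YES) DSBTRG(*DGFE) RELATED(*NO) /* illustrative sketch; placeholders */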


About synchronizing tracking entries


Tracking entries provide status of IFS objects, data areas, and data queues that are replicated using MIMIX advanced journaling. Object tracking entries represent data areas or data queues. IFS tracking entries represent IFS objects. IFS tracking entries also track the file identifier (FID) of the object on the source and target systems.
You can synchronize the object represented by a tracking entry by using the synchronize option available on the Work with DG Object Tracking Entries display or the Work with DG IFS Tracking Entries display. For object tracking entries, the option calls the Synchronize Object (SYNCOBJ) command. For IFS tracking entries, the option calls the Synchronize IFS Object (SYNCIFS) command. The contents, attributes, and authorities of the item are synchronized between the source and target systems.
Notes: Before starting data groups for the first time, any existing objects to be replicated from the source system must be synchronized to the target system. If tracking entries do not exist, you must create them by doing one of the following:
- Change the data group IFS entry or object entry configuration as needed, then end and restart the data groups.
- Load tracking entries using the Load DG IFS Tracking Entries (LODDGIFSTE) or Load DG Obj Tracking Entries (LODDGOBJTE) commands. See Loading tracking entries on page 284.
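For reference, the load commands might be invoked as in the sketch below; the data group name is a placeholder and any additional selection parameters are omitted:

LODDGIFSTE DGDFN(MYDG) /* MYDG is a placeholder data group name */
LODDGOBJTE DGDFN(MYDG) /* MYDG is a placeholder data group name */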

Tracking entries may not exist for existing IFS objects, data areas, or data queues that have been configured for replication with advanced journaling since the last start of the data group. For status changes to be effective for a tracking entry that is being synchronized, the data group must be active. When the apply session receives notification that the object represented by the tracking entry is synchronized successfully, the tracking entry status changes to *ACTIVE.


Performing the initial synchronization


Ensuring that data is synchronized before you begin replication is crucial to successful replication. How you perform the initial synchronization can be influenced by the available communications bandwidth, the complexity of describing the data, the size of the data, and the time available.
Note: If you have configured or migrated a MIMIX configuration to use integrated support for IBM WebSphere MQ, you must use the procedure Initial synchronization for replicated queue managers in the MIMIX for IBM WebSphere MQ book. Large IBM WebSphere MQ environments should plan to perform this during off-peak hours.

Establish a synchronization point


Just before you start the initial synchronization, establish a known start point for replication by changing journal receivers. The information gathered in this procedure will be used when you start replication for the first time. From the source system, do the following:
1. Quiesce your applications before continuing with the next step.
2. For each data group that will replicate from a user journal, use the following command to change the user journal receiver. Record the new receiver names shown in the posted message. On a command line, type:
(installation-library-name)/CHGDGRCV DGDFN(data-group-name) TYPE(*DB)
3. Change the system journal receiver and record the new receiver name shown in the posted message. On a command line, type:
CHGJRN JRN(QAUDJRN) JRNRCV(*GEN)

Resources for synchronizing


The available choices for synchronizing are, in order of preference:
SYNCDG command: The SYNCDG command is intended especially for performing the initial synchronization of one or more data groups and uses the auditing and automatic recovery support provided by MIMIX AutoGuard. Using the SYNCDG command may help shorten the initial synchronization completion time, as only needed data that is not already synchronized will be identified and replicated. The command can be long-running. MIMIX IntelliStart uses this command for automatic replication and synchronization.
IBM Save and Restore commands: IBM save and restore commands are best suited for initial synchronization and are used when performing a manual synchronization. While MIMIX SYNCDG, SYNC, and SNDNET commands can be used, the communications bandwidth required for the size and quantity of objects may exceed capacity.
SYNC commands: The Synchronize commands (SYNCOBJ, SYNCIFS, SYNCDLO) should be your starting point. These commands provide significantly more flexibility in object selection and also provide the ability to synchronize object authorities. By specifying a data group on any of these commands, you can synchronize the data defined by its data group entries. You can also use the Synchronize Data Group File Entry (SYNCDGFE) command to synchronize database files and members. This command provides the ability to choose between MIMIX copy active file processing and save/restore processing, and provides choices for handling trigger programs during synchronization. If you have configured or migrated to integrated advanced journaling, follow the SYNCIFS procedures for IFS objects, SYNCOBJ procedures for data areas and data queues, and SYNCDGFE procedures for files containing LOB data. You can also use options to synchronize objects associated with tracking entries from the Work with DG IFS Trk. Entries display and the Work with DG Obj. Trk. Entries display.
SNDNET commands: The Send Network commands (SNDNETIFS, SNDNETDLO, SNDNETOBJ) support fewer options for selecting and specifying multiple objects and do not provide a way to specify by data group. These commands may require multiple invocations per path, folder, or library, respectively.

This chapter (Synchronizing data between systems on page 472) includes additional information about the MIMIX SYNC and SNDNET commands.

Using SYNCDG to perform the initial synchronization


This topic describes the procedure for performing the initial synchronization using the Synchronize Data Group (SYNCDG) command prior to beginning replication. The initial synchronization ensures that data is the same on each system and reduces the time and complexity involved with starting replication for the first time.
The SYNCDG command utilizes the auditing and automatic recovery functions of MIMIX AutoGuard to synchronize an enabled data group between the source system and the target system. The SYNCDG command is intended to be used for initial synchronization of a data group and can be used in other situations where data groups are not synchronized. The SYNCDG command can only be run on the management system, and only one instance of the command per data group can be running at any time. This command submits a batch program that can run for several days. The SYNCDG command can be performed automatically through MIMIX IntelliStart.
Note: The SYNCDG command will not process a request to synchronize a data group that is currently using the MIMIX CDP feature. This feature is in use if a recovery window is configured or when a recovery point is set for a data group. Also, do not configure a recovery window or set a recovery point if a SYNCDG request is in progress for the data group. The MIMIX CDP feature may not protect data under these circumstances.
Ensure the following conditions are met for each data group that you want to synchronize, before running this command:


- Apply any IBM PTFs (or their supersedes) associated with IBM i releases as they pertain to your environment. Log in to Support Central and access the Technical Documents page for a list of required and recommended IBM PTFs.
- Journaling is started on the source system for everything defined to the data group.
- All replication processes are active.
- The user ID submitting the SYNCDG has *MGT authority in product level security if it is enabled for the installation.
- No other audits (comparisons or recoveries) are in progress when the SYNCDG is requested.
- Collector services has been started.

While the synchronization is in progress, other audits for the data group are prevented from running. MIMIX Availability Manager displays initialization mode on the Audit Summary and Compliance interfaces while running this command if the data group definition (DGDFN) specifies *ALL.
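Once these conditions are met, the request itself can be brief. The sketch below assumes the command is qualified with the installation library, as elsewhere in this book, and uses a placeholder data group name:

(installation-library-name)/SYNCDG DGDFN(MYDG) /* MYDG is a placeholder data group name */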

To perform the initial synchronization using the SYNCDG command defaults


From MIMIX Availability Manager, do the following:
1. Select the following from the navigation bar:
a. Systems - select the system for which you want to perform the initial synchronization.
b. Installations - select the installation for which you want to perform the initial synchronization.
c. Details - select Data Groups.
2. From the upper portion of the Data Groups Status window, select Start All from the Action drop-down.
3. The Start Data Groups window appears. Accept the defaults and click OK.
4. From the Details section of the navigation bar, select Command History.
5. In the Command History window, type SYNCDG and click the Prompt button.

6. The Synchronize Data Group (SYNCDG) command prompt opens. Click Advanced and specify the following values by pressing F4 for valid options on each parameter or use the drop-down menu:
- Data group definition (DGDFN).
- Job description (JOBD).
7. Click OK to perform the initial synchronization.
8. Verify your configuration is using MIMIX AutoGuard. This step includes performing audits to verify that journaling and other aspects of your environment are ready to use. Audits automatically check for and attempt to correct differences found between the source system and the target system. Use Verifying the initial synchronization on page 487.
From a 5250 emulator, do the following:
1. Use the command STRDG DGDFN(*ALL).
2. Type the command SYNCDG and press Enter. Specify the following values, pressing F4 for valid options on each parameter:
- Data group definition (DGDFN).
- Job description (JOBD).
3. Press Enter to perform the initial synchronization.
4. Verify your configuration is using MIMIX AutoGuard. This step includes performing audits to verify that journaling and other aspects of your environment are ready to use. Audits automatically check for and attempt to correct differences found between the source system and the target system. Use Verifying the initial synchronization on page 487.


Verifying the initial synchronization


This procedure uses MIMIX AutoGuard to ensure your environment is ready to start replication. Shipped policy settings for MIMIX allow audits to automatically attempt recovery actions for any problems they detect. You should not use this procedure if you have already synchronized your systems using the Synchronize Data Group (SYNCDG) command or the automatic synchronization method in MIMIX IntelliStart. The audits used in this procedure will:
- Verify that journaling is started on the source and target systems for the items you identified in the deployed replication patterns. Without journaling, replication will not occur.
- Verify that data is synchronized between systems. Audits will detect potential problems with synchronization and attempt to automatically recover differences found.

Do the following: 1. Check whether all necessary journaling is started for each data group. Enter the following command:
(installation-library-name)/DSPDGSTS DGDFN(data-group-name) VIEW(*DBFETE)

On the File and Tracking Entry Status display, the File Entries column identifies how many file entries were configured from your replication patterns and indicates whether any file entries are not journaled on the source and target systems. If you are configured for advanced journaling, the Tracking Entries columns provide similar information.
2. Use MIMIX AutoGuard to audit your environment. To access the audits, enter the following command:
(installation-library-name)/WRKAUD

3. Each audit listed on the Work with Audits display is a unique combination of data group and MIMIX rule. When verifying an initial configuration, you need to perform a subset of the available audits for each data group in a specific order, shown in Table 67. Do the following:
a. To change the number of active audits at any one time, enter the following command:
CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY) MAXACT(*NOMAX)

b. Use F18 (Subset) to subset the audits by the name of the rule you want to run.
c. Type a 9 (Run rule) next to the audit for each data group and press Enter.


Repeat Step 3b and Step 3c for each rule in Table 67 until you have started all the listed audits for all data groups.
Table 67. Rules for initial validation, listed in the order to be performed.
1. #DGFE
2. #OBJATR
3. #FILATR
4. #IFSATR
5. #FILATRMBR
6. #DLOATR
d. Reset the number of active audit jobs to values consistent with regular auditing:
CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY) MAXACT(5)

4. Wait for all audits to complete. Some audits may take time to complete. Then check the results and resolve any problems. You may need to change subsetting values again so you can view all rule and data group combinations at once. On the Work with Audits display, check the Audit Status column for the following value:
*NOTRCVD - The comparison performed by the rule detected differences. Some of the differences were not automatically recovered. Action is required. View notifications for more information and resolve the problem.
Note: See the MIMIX AutoGuard document for more information about viewing audit results.


Synchronizing database files


The procedures in this topic use the Synchronize DG File Entry (SYNCDGFE) command to synchronize selected database files associated with a data group between two systems. If you use this command when performing the initial synchronization of a data group, use the procedure from the source system to send database files to the target system. You should be aware of the information in the following topics:
- Considerations for synchronizing using MIMIX commands on page 474
- About synchronizing file entries (SYNCDGFE command) on page 480

To synchronize a database file between two systems using the SYNCDGFE command defaults, do the following or use the alternative process described below:
1. From the Work with DG Definitions display, type 17 (File entries) next to the data group to which the file you want to synchronize is defined and press Enter.
2. The Work with DG File Entries display appears. Type 16 (Sync DG file entry) next to the file entry for the file you want to synchronize and press Enter.
Note: If you are synchronizing file entries as part of your initial configuration, you can type 16 next to the first file entry and then press F13 (Repeat). When you press Enter, all file entries will be synchronized.
Alternative Process: You will need to identify the data group and data group file entry in this procedure. In Step 8 and Step 9, you will need to make choices about the sending mode and trigger support. For additional information, see About synchronizing file entries (SYNCDGFE command) on page 480.
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 41 (Synchronize DG File Entry) and press Enter.
3. The Synchronize DG File Entry (SYNCDGFE) display appears. At the Data group definition prompts, specify the name of the data group to which the file is associated.
4. At the System 1 file and Library prompts, specify the name of the database file you want to synchronize and the library in which it is located on system 1.
5. If you want to synchronize only one member of a file, specify its name at the Member prompt.
6. At the Data source prompt, ensure that the value matches the system that you want to use as the source for the synchronization.
7. The default value *YES for the Release wait prompt indicates that MIMIX will hold the file entry in a release-wait state until a synchronization point is reached. Then it will change the status to active. If you want to hold the file entry for your intervention, specify *NO.


8. At the Sending mode prompt, specify the value for the type of data to be synchronized.
9. At the Disable triggers on file prompt, specify whether the database apply process should disable triggers when processing the file. Accept *DGFE to use the value specified in the data group file entry or specify another value. Skip to Step 14.
10. At the Save active prompt, accept *NO so that objects in use are not saved, or specify another value.
11. At the Save active wait time prompt, specify the number of seconds to wait for a commit boundary or a lock on the object before continuing the save.
12. At the Allow object differences prompt, accept the default or specify *YES to indicate whether certain differences encountered during the restore of the object on the target system should be allowed.
13. At the Include logical files prompt, accept the default or specify *NO to indicate whether you want to include attached logical files when sending the file.
14. To change any of the additional parameters, press F10 (Additional parameters). Verify that the values shown for Include related files, Maximum sending file size (MB), and Submit to batch are what you want.
15. To synchronize the file, press Enter.


Synchronizing objects
The procedures in this topic use the Synchronize Object (SYNCOBJ) command to synchronize library-based objects between two systems. The objects to be synchronized can be defined to a data group or can be independent of a data group. You should be aware of the information in the following topics:
- Considerations for synchronizing using MIMIX commands on page 474
- About MIMIX commands for synchronizing objects, IFS objects, and DLOs on page 478

To synchronize library-based objects associated with a data group


To synchronize objects between two systems that are identified for replication by data group object entries, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 42 (Synchronize object) and press Enter. The Synchronize Object (SYNCOBJ) command appears.
3. At the Data group definition prompts, specify the data group for which you want to synchronize objects.
Note: If you run this command from a target system, you must specify the name of a data group to avoid overwriting the objects on the source system.
4. To synchronize all objects identified by data group object entries for this data group, skip to Step 5. To synchronize a subset of objects defined to the data group, at the Object prompts specify elements for one or more object selectors to act as filters to the objects defined to the data group. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:
a. At the Object and library prompts, specify the name or the generic value you want.
b. At the Object type prompt, accept *ALL or specify a specific object type to synchronize.
c. At the Object attribute prompt, accept *ALL to synchronize the entire list of supported attributes or press F4 to select from a list of attributes.
d. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization.
Note: The System 2 object and System 2 library prompts are ignored when a data group is specified.
e. Press Enter.
5. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.
6. At the Save active prompt, accept *NO to specify that objects in use are not saved, or specify another value.
7. At the Save active wait time prompt, specify the number of seconds to wait for a commit boundary or a lock on the object before continuing the save.
8. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.
Note: When a data group is specified, the following parameters are ignored: System 1 ASP group or device, System 2 ASP device number, and System 2 ASP device name.
9. Determine how the synchronize request will be processed. Choose one of the following:
- To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.
- To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.

10. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
11. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
12. To start the synchronization, press Enter.
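The equivalent batch request might be sketched on a command line as follows. The DGDFN and MAXSIZE keywords appear earlier in this chapter; the OBJ selector syntax is an assumption based on the prompts above, and all names are placeholders:

SYNCOBJ DGDFN(MYDG) OBJ((ACCT* ACCTLIB *FILE)) MAXSIZE(500) /* assumed selector syntax; placeholder names */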

To synchronize library-based objects without a data group


To synchronize objects between two systems, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 42 (Synchronize object) and press Enter. The Synchronize Object (SYNCOBJ) command appears.
3. At the Data group definition prompts, specify *NONE.
4. At the Object prompts, specify elements for one or more object selectors that identify objects to synchronize. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For more information, see Object selection for Compare and Synchronize commands on page 399. For each selector, do the following:
a. At the Object and library prompts, specify the name or the generic value you want.
b. At the Object type prompt, accept *ALL or specify a specific object type to synchronize.


c. At the Object attribute prompt, accept *ALL to synchronize the entire list of supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization.
e. At the System 2 object and System 2 library prompts, if the object and library names on system 2 are equal to the system 1 names, accept the defaults. Otherwise, specify the name of the object and library on system 2 to which you want to synchronize the objects.
f. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system to which to synchronize the objects.
6. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.
Note: When you specify *ONLY and a data group name is not specified, if any files that are processed by this command are cooperatively processed and the data group that contains these files is active, the command could fail if the database apply job has a lock on these files.
7. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value.
8. At the Save active wait time prompt, specify the number of seconds to wait for a commit boundary or a lock on the object before continuing the save.
9. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.
10. At the System 1 ASP group or device prompt, specify the name of the auxiliary storage pool (ASP) group or device where objects configured for replication may reside on system 1. Otherwise, accept the default to use the current job's ASP group name.
11. At the System 2 ASP device number prompt, specify the number of the auxiliary storage pool (ASP) where objects configured for replication may reside on system 2. Otherwise, accept the default to use the same ASP number from which the object was saved (*SAVASP). Only the libraries in the system ASP and any basic user ASPs from system 2 will be in the library name space.
12. At the System 2 ASP device name prompt, specify the name of the auxiliary storage pool (ASP) device where objects configured for replication may reside on system 2. Otherwise, accept the default to use the value specified for the system 1 ASP group or device (*ASPGRP1).
13. Determine how the synchronize request will be processed. Choose one of the following:
- To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter.
- To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.


14. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
15. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
16. To start the synchronization, press Enter.


Synchronizing IFS objects


The procedures in this topic use the Synchronize IFS Object (SYNCIFS) command to synchronize IFS objects between two systems. The IFS objects to be synchronized can be defined to a data group or can be independent of a data group. You should be aware of the information in the following topics:
- Considerations for synchronizing using MIMIX commands on page 474
- About MIMIX commands for synchronizing objects, IFS objects, and DLOs on page 478

To synchronize IFS objects associated with a data group


To synchronize IFS objects between two systems that are identified for replication by data group IFS entries, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 43 (Synchronize IFS object) and press Enter. The Synchronize IFS Object (SYNCIFS) command appears.
3. At the Data group definition prompts, specify the data group for which you want to synchronize objects.
Note: If you run this command from a target system, you must specify the name of a data group to avoid overwriting the objects on the source system.
4. To synchronize all IFS objects identified by data group IFS entries for this data group, skip to Step 5. To synchronize a subset of IFS objects defined to the data group, at the IFS objects prompts specify elements for one or more object selectors to act as filters to the objects defined to the data group. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:
a. At the Object path name prompt, you can optionally accept *ALL or specify the name or generic value you want.
Note: The IFS object path name can be used alone or in combination with FID values. See Step 12.
b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the scope of IFS objects to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the IFS object path name.
d. At the Object type prompt, accept *ALL or specify a specific IFS object type to synchronize.


e. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization.
Note: The System 2 object path name and System 2 name pattern values are ignored when a data group is specified.
f. Press Enter.
5. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.
6. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value.
7. If you chose values in Step 6 to save active objects, you can optionally specify additional options at the Save active option prompt. Press F1 (Help) for additional information.
8. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.
9. Determine how the synchronize request will be processed. Choose one of the following:
- To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.
- To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. Continue with Step 12.

10. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
11. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
12. To optionally specify a file identifier (FID) for the object on either system, do the following:
a. At the System 1 file identifier prompt, specify the file identifier (FID) of the IFS object on system 1. Values for the System 1 file identifier prompt can be used alone or in combination with the IFS object path name.
b. At the System 2 file identifier prompt, specify the file identifier (FID) of the IFS object on system 2. Values for the System 2 file identifier prompt can be used alone or in combination with the IFS object path name.
Note: For more information, see Using file identifiers (FIDs) for IFS objects on page 312.
13. To start the synchronization, press Enter.
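As a command-line sketch, a data-group-scoped IFS request might look like the following. The OBJ keyword and the shape of the path selector are assumptions based on the IFS objects prompts above; the data group name and path are placeholders:

SYNCIFS DGDFN(MYDG) OBJ(('/orders')) /* assumed selector syntax; placeholder names */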

To synchronize IFS objects without a data group


To synchronize IFS objects not associated with a data group between two systems, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.


2. From the MIMIX Compare, Verify, and Synchronize menu, select option 43 (Synchronize IFS object) and press Enter. The Synchronize IFS Object (SYNCIFS) command appears.
3. At the Data group definition prompts, specify *NONE.
4. At the IFS objects prompts, specify elements for one or more object selectors that identify IFS objects to synchronize. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For more information, see the topic on object selection in the MIMIX Reference book. For each selector, do the following:
a. At the Object path name prompt, you can optionally accept *ALL or specify the name or generic value you want.
Note: The IFS object path name can be used alone or in combination with FID values. See Step 13.
b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the scope of IFS objects to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the IFS object path name.
d. At the Object type prompt, accept *ALL or specify a specific IFS object type to synchronize.
e. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization.
f. At the System 2 object path name and System 2 name pattern prompts, if the IFS object path name and name pattern on system 2 are equal to the system 1 names, accept the defaults. Otherwise, specify the path name and pattern on system 2 to which you want to synchronize the IFS objects.
g. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system on which to synchronize the IFS objects.
6. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.
7. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value.
8. If you chose values in Step 7 to save active objects, you can optionally specify additional options at the Save active option prompt. Press F1 (Help) for additional information.
9. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.
10. Determine how the synchronize request will be processed. Choose one of the following:
- To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.

497

Synchronizing IFS objects

To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. Continue with Step 13.

11. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request. 12. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name. 13. To optionally specify a file identifier (FID) for the object on either system, do the following: a. At the System 1 file identifier prompt, specify the file identifier (FID) of the IFS object on system 1. Values for System 1 file identifier prompt can be used alone or in combination with the IFS object path name. b. At the System 2 file identifier prompt, specify the file identifier (FID) of the IFS object on system 2. Values for System 2 file identifier prompt can be used alone or in combination with the IFS object path name. Note: For more information, see Using file identifiers (FIDs) for IFS objects on page 312. 14. To start the synchronization, press Enter.
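When the prompts are complete, the display builds a single command string that could also be entered directly or included in a CL program. The following is a minimal sketch of such a request. The BATCH, JOBD, and JOB keywords are described in Output and batch guidelines on page 523, but the keyword names assumed here for the object selector, remote system, and authority prompts (OBJ, SYS2, SYNCAUT) and the job description MIMIX/MXSYNC are illustrative assumptions; prompt SYNCIFS with F4 to confirm the actual syntax.

   SYNCIFS DGDFN(*NONE)                            +
           OBJ(('/home/payroll' *ALL))             +
           SYS2(CHICAGO)                           +
           SYNCAUT(*YES)                           +
           BATCH(*YES) JOBD(MIMIX/MXSYNC) JOB(*CMD)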


Synchronizing DLOs
The procedures in this topic use the Synchronize DLO (SYNCDLO) command to synchronize document library objects (DLOs) between two systems. The DLOs to be synchronized can be defined to a data group or can be independent of a data group. You should be aware of the information in the following topics:
- Considerations for synchronizing using MIMIX commands on page 474
- About MIMIX commands for synchronizing objects, IFS objects, and DLOs on page 478

To synchronize DLOs associated with a data group


To synchronize DLOs between two systems that are identified for replication by data group DLO entries, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 44 (Synchronize DLO) and press Enter. The Synchronize DLO (SYNCDLO) command appears.
3. At the Data group definition prompts, specify the data group for which you want to synchronize DLOs.
   Note: If you run this command from a target system, you must specify the name of a data group to avoid overwriting the objects on the source system.
4. To synchronize all objects identified by data group DLO entries for this data group, skip to Step 5. To synchronize a subset of objects defined to the data group, at the Document library objects prompts specify elements for one or more object selectors to act as filters to DLOs defined to the data group. For more information, see Object selection for Compare and Synchronize commands on page 399. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:
   a. At the DLO path name prompt, accept *ALL or specify the name or the generic value you want.
   b. At the Folder subtree prompt, accept *NONE or specify *ALL to define the scope of DLOs to be processed.
   c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the DLO path name.
   d. At the DLO type prompt, accept *ALL or specify a specific DLO type to synchronize.
   e. At the Owner prompt, accept *ALL or specify the owner of the DLO.
   f. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization.
   Note: The System 2 DLO path name and System 2 DLO name pattern values are ignored when a data group is specified.
   g. Press Enter.
5. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.
6. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value.
7. At the Save active wait time prompt, specify the number of seconds to wait for a lock on the object before continuing the save.
8. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.
9. Determine how the synchronize request will be processed. Choose one of the following:
   - To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.
   - To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.
10. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
11. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
12. To start the synchronization, press Enter.
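A comparable command-line sketch for this procedure follows. The three-part data group name form and the DLO selector and authority keywords (DGDFN, DLO, SYNCAUT) are assumptions based on the prompt text and MIMIX naming conventions; verify them by prompting SYNCDLO with F4.

   SYNCDLO DGDFN(ACCOUNTS SYSTEMA SYSTEMB)         +
           DLO(('/FINANCE*' *ALL))                 +
           SYNCAUT(*YES)                           +
           BATCH(*YES) JOBD(MIMIX/MXSYNC) JOB(*CMD)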

To synchronize DLOs without a data group


To synchronize DLOs between two systems, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 44 (Synchronize DLO) and press Enter. The Synchronize DLO (SYNCDLO) command appears.
3. At the Data group definition prompts, specify *NONE.
4. At the Document library objects prompts, specify elements for one or more object selectors that identify DLOs to synchronize. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For more information, see Object selection for Compare and Synchronize commands on page 399. For each selector, do the following:
   a. At the DLO path name prompt, accept *ALL or specify the name or the generic value you want.
   b. At the Folder subtree prompt, accept *NONE or specify *ALL to define the scope of DLOs to be processed.
   c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the DLO path name.
   d. At the DLO type prompt, accept *ALL or specify a specific DLO type to synchronize.
   e. At the Owner prompt, accept *ALL or specify the owner of the DLO.
   f. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization.
   g. At the System 2 DLO path name and System 2 DLO name pattern prompts, if the DLO path name and name pattern on system 2 are equal to the system 1 names, accept the defaults. Otherwise, specify the path name and pattern on system 2 to which you want to synchronize the DLOs.
   h. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system on which to synchronize the DLOs.
6. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.
7. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value.
8. At the Save active wait time prompt, specify the number of seconds to wait for a lock on the object before continuing the save.
9. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.
10. Determine how the synchronize request will be processed. Choose one of the following:
   - To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.
   - To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.

11. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
12. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
13. To start the synchronization, press Enter.


Synchronizing data group activity entries


The procedures in this topic use the Synchronize DG Activity Entry (SYNCDGACTE) command to synchronize an object that is identified by a data group activity entry with any status value: *ACTIVE, *DELAYED, *FAILED, or *COMPLETED. You should be aware of the information in the following topics:
- Considerations for synchronizing using MIMIX commands on page 474
- About synchronizing data group activity entries (SYNCDGACTE) on page 479

To synchronize an object identified by a data group activity entry, do the following:
1. From the Work with Data Group Activity Entry display, type 16 (Synchronize) next to the activity entry that identifies the object you want to synchronize and press Enter.
2. The Confirm Synchronize of Object display appears. Press Enter to confirm the synchronization.

Alternative process: You will need to identify the data group and data group activity entry in this procedure.
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 45 (Synchronize DG File Entry) and press Enter.
3. At the Data group definition prompts, specify the data group name.
4. At the Object type prompt, specify a specific object type to synchronize or press F4 to see a valid list.
5. Additional parameters appear based on the object type selected. Do one of the following:
   - For files, you will see the Object, Library, and Member prompts. Specify the object, library, and member that you want to synchronize.
   - For objects, you will see the Object and Library prompts. Specify the object and library of the object you want to synchronize.
   - For IFS objects, you will see the IFS object prompt. Specify the IFS object that you want to synchronize.
   - For DLOs, you will see the Document library object and Folder prompts. Specify the folder path and DLO name of the DLO you want to synchronize.
6. Determine how the synchronize request will be processed. Choose one of the following:
   - To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.
   - To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.
7. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
8. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
9. To start the synchronization, press Enter.
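As an illustration, a batch request for a failed file activity entry might look like the following sketch. The keywords shown (DGDFN, OBJTYPE, OBJ, LIB, MBR) are assumptions derived from the prompt names in this procedure, and the object names are examples only; prompt SYNCDGACTE with F4 to confirm the actual parameters.

   SYNCDGACTE DGDFN(ACCOUNTS SYSTEMA SYSTEMB)      +
              OBJTYPE(*FILE)                       +
              OBJ(CUSTMAST) LIB(APPDTA) MBR(*ALL)  +
              BATCH(*YES) JOBD(MIMIX/MXSYNC) JOB(*CMD)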


Synchronizing tracking entries


Tracking entries are MIMIX constructs which identify IFS objects, data areas, or data queues configured for replication with MIMIX advanced journaling. You can use a tracking entry to synchronize the contents, attributes, and authorities of the item it represents. You should be aware of the information in the following topics:
- Considerations for synchronizing using MIMIX commands on page 474
- About MIMIX commands for synchronizing objects, IFS objects, and DLOs on page 478
- About synchronizing tracking entries on page 482

To synchronize an IFS tracking entry


To synchronize an object represented by an IFS tracking entry, do the following:
1. From the Work with DG IFS Tracking Entries (WRKDGIFSTE) display, type option 16 (Synchronize) next to the IFS tracking entry you want to synchronize. If you want to change options on the SYNCIFS command, press F4 (Prompt).
2. To synchronize the associated IFS object, press Enter.
3. When the apply session has been notified that the object has been synchronized, the status will change to *ACTIVE. To monitor the status, press F5 (Refresh).
4. If the synchronization fails, correct the errors and repeat the previous steps.

To synchronize an object tracking entry


To synchronize an object represented by an object tracking entry, do the following:
1. From the Work with DG Object Tracking Entries (WRKDGOBJTE) display, type option 16 (Synchronize) next to the object tracking entry you want to synchronize. If you want to change options on the SYNCOBJ command, press F4 (Prompt).
2. To synchronize the associated data area or data queue, press Enter.
3. When the apply session has been notified that the object has been synchronized, the status will change to *ACTIVE. To monitor the status, press F5 (Refresh).
4. If the synchronization fails, correct the errors and repeat the previous steps.


Sending library-based objects


This procedure sends one or more library-based objects between two systems using the Send Network Object (SNDNETOBJ) command.
Use the appropriate command: In general, you should use the SYNCOBJ command to synchronize objects between systems. For more information about differences between commands, see Performing the initial synchronization on page 483.
You should be familiar with the information in the following topics before you use this command:
- Considerations for synchronizing using MIMIX commands on page 474
- Synchronizing user profiles with the SNDNETOBJ command on page 475
- Missing system distribution directory entries automatically added on page 476

To send library-based objects between two systems, do the following:
1. If the objects you are sending are located in an independent auxiliary storage pool (ASP) on the source system, you must use the IBM command Set ASP Group (SETASPGRP) on the local system to change the ASP group for your job. This allows MIMIX to access the objects.
2. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and press Enter.
3. The MIMIX Utilities Menu appears. Select option 11 (Send object) and press Enter.
4. The Send Network Object (SNDNETOBJ) display appears. At the Object prompt, specify either *ALL, the name of an object, or a generic name.
   Note: You can specify as many as 50 objects. To expand this prompt for multiple entries, type a plus sign (+) at the prompt and press Enter.
5. Specify the name of the library that contains the objects at the Library prompt.
6. Specify the type of objects to be sent from the specified library at the Object type prompt.
   Notes:
   - If you specify *ALL, all object types supported by the i5/OS Save Object (SAVOBJ) command are selected. The single values that are listed for this parameter are not included when *ALL is specified because they are not supported by the i5/OS SAVOBJ command.
   - To expand this field for multiple entries, type a plus sign (+) at the prompt and press Enter.
7. Press Enter.
8. Additional prompts appear on the display. Do the following:
   a. Specify the name of the system to which you are sending objects at the Remote system prompt.
   b. If the library on the remote system has a different name, specify its name at the Remote library prompt.
   c. The remaining prompts on the display are used for objects synchronized via a save and restore operation. Verify that the values shown are what you want. To see a description of each prompt and its available values, place the cursor on the prompt and press F1 (Help).
9. By default, objects are restored to the same ASP device or number from which they were saved. To change the location where objects are restored, press F10 (Additional parameters), then specify a value for either the Restore to ASP device prompt or the Restore to ASP number prompt.
   Note: Object types *JRN, *JRNRCV, *LIB, and *SAVF can be restored to any ASP. IBM restricts which object types are allowed in user ASPs; some object types may not be restored to user ASPs. Specifying a value of 1 restores objects to the system ASP. Specifying 2 through 32 restores objects to the basic user ASP specified. If the specified ASP number does not exist on the target system or if it has overflowed, the objects are placed in the system ASP on the target system.
10. By default, authority to the object on the remote system is determined by that system. To have the authorities on the remote system determined by the settings of the local system, press F10 (Additional parameters), then specify *SRC at the Target authority prompt.
11. To start sending the specified objects, press Enter.
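For example, sending all program objects that match a generic name could look like the following sketch. The keyword names (OBJ, LIB, OBJTYPE, RMTSYS) are assumptions based on the prompt text, and the system and library names are examples only; prompt SNDNETOBJ with F4 to confirm them.

   SNDNETOBJ OBJ(ABC*) LIB(XYZ)                    +
             OBJTYPE(*PGM)                         +
             RMTSYS(CHICAGO)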


Sending IFS objects


This procedure uses i5/OS save and restore functions to send one or more integrated file system (IFS) objects between two systems with the Send Network IFS (SNDNETIFS) command.
Use the appropriate command: In general, you should use the SYNCIFS command to synchronize IFS objects between systems. For more information about differences between commands, see Performing the initial synchronization on page 483.
You should be familiar with the information in Considerations for synchronizing using MIMIX commands on page 474.
To send IFS objects between two systems, do the following:
1. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and press Enter.
2. The MIMIX Utilities Menu appears. Select option 13 (Send IFS object) and press Enter.
3. The Send Network IFS (SNDNETIFS) display appears. At the Object prompt, specify the name of the IFS object to send.
   Note: You can specify as many as 30 IFS objects. To expand this prompt for multiple entries, type a plus sign (+) at the prompt and press Enter.
4. Specify the name of the system to which you are sending IFS objects at the Remote system prompt.
5. Press F10 (Additional parameters).
6. Additional parameters appear which MIMIX uses in the save and restore operations. Verify that the values shown for the additional prompts are what you want. To see a description of each prompt and its available values, place the cursor on the prompt and press F1 (Help).
7. To start sending the specified IFS objects, press Enter.


Sending DLO objects


This procedure uses i5/OS save and restore functions to send one or more document library objects (DLOs) between two systems using the Send Network DLO (SNDNETDLO) command. When you are configuring for system journal replication, use this procedure from the source system to send DLOs to the target system for replication.
Use the appropriate command: In general, you should use the SYNCDLO command to synchronize objects between systems. For more information about differences between commands, see Performing the initial synchronization on page 483.
You should be familiar with the information in Considerations for synchronizing using MIMIX commands on page 474.
To send DLO objects between systems, do the following:
1. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and press Enter.
2. The MIMIX Utilities Menu appears. Select option 12 (Send DLO object) and press Enter.
3. The Send Network DLO (SNDNETDLO) display appears. At the Document library object prompt, specify either *ALL or the name of the DLO.
   Note: You can specify multiple DLOs. To expand this prompt for multiple entries, type a plus sign (+) at the prompt and press Enter.
4. Specify the name of the folder that contains the DLOs at the Folder prompt.
5. Specify the name of the system to which you are sending DLOs at the Remote system prompt.
6. Press F10 (Additional parameters).
7. Additional parameters appear on the display. MIMIX uses the Remote folder, Save active, Save active wait time, and Allow object differences prompts in the save and restore operations. Verify that the values shown are what you want. To see a description of each prompt and its available values, place the cursor on the prompt and press F1 (Help).
8. By default, authority to the object on the remote system is determined by that system. To have the authorities on the remote system determined by the settings of the local system, specify *SRC at the Target authority prompt.
9. To start sending the specified DLOs, press Enter.


Chapter 21

Introduction to programming
MIMIX includes a variety of functions that you can use to extend MIMIX capabilities through automation and customization. The topics in this chapter include:
- Support for customizing on page 511 describes several functions you can use to customize your replication environment.
- Completion and escape messages for comparison commands on page 514 lists completion, diagnostic, and escape messages generated by comparison commands.
- The MIMIX message log provides a common location to see messages from all MIMIX products. Adding messages to the MIMIX message log on page 521 describes how you can include your own messaging from automation programs in the MIMIX message log.
- MIMIX supports batch output jobs on numerous commands and provides several forms of output, including outfiles. For more information, see Output and batch guidelines on page 523.
- Displaying a list of commands in a library on page 528 describes how to display the super set of all Lakeview commands known to License Manager or subset the list by a particular library.
- Running commands on a remote system on page 529 describes how to run a single command or multiple commands on a remote system.
- Procedures for running commands RUNCMD, RUNCMDS on page 530 provides procedures for using run commands with a specific protocol or by specifying a protocol through existing MIMIX configuration elements.
- Using lists of retrieve commands on page 536 identifies how to use MIMIX list commands to include retrieve commands in automation.
- Commands are typically set with default values that reflect the recommendation of Lakeview Technology. Changing command defaults on page 537 provides a method for customizing default values should your business needs require it.


Support for customizing


MIMIX includes several functions that you can use to customize processing within your replication environment.

User exit points


User exit points are predefined points within a MIMIX process at which you can call customized programs. User exit points allow you to insert customized programs at specific points in an application process to perform additional processing before continuing with the application's processing. MIMIX provides user exit points for journal receiver management. For more information, see Chapter 22, Customizing with exit point programs.

Collision resolution
In the context of high availability, a collision is a clash of data that occurs when a target object and a source object are both updated at the same time. When the change to the source object is replicated to the target object, the data does not match and the collision is detected. With MIMIX user journal replication, the definition of a collision is expanded to include any condition where the status of a file or a record is not what MIMIX determines it should be when MIMIX applies a journal transaction. Examples of these detected conditions include the following:
- Updating a record that does not exist
- Deleting a record that does not exist
- Writing to a record that already exists
- Updating a record for which the current record information does not match the before image

The database apply process contains 12 collision points at which MIMIX can attempt to resolve a collision. When a collision is detected, by default the file is placed on hold due to an error (*HLDERR) and user action is needed to synchronize the files. MIMIX provides additional ways to automatically resolve detected collisions without user intervention. This process is called collision resolution. With collision resolution, you can specify different resolution methods to handle these different types of collisions. If a collision does occur, MIMIX attempts the specified collision resolution methods until either the collision is resolved or the file is placed on hold. You can specify collision resolution methods for a data group or for individual data group file entries. If you specify *AUTOSYNC for the collision resolution element of the file entry options, MIMIX attempts to fix any problems it detects by synchronizing the file. You can also specify a named collision resolution class. A collision resolution class allows you to define what type of resolution to use at each of the collision points. Collision resolution classes allow you to specify several methods of resolution to try for each collision point and support the use of an exit program. These additional choices for resolving collisions allow customized solutions for resolving collisions without requiring user action. For more information, see Collision resolution on page 381.


Completion and escape messages for comparison commands


When the comparison commands finish processing, a completion or escape message is issued. In the event of an escape message, a diagnostic message is issued prior to the escape message. The diagnostic message provides additional information regarding the error that occurred. All completion or escape messages are sent to the MIMIX message log. You can work with the message log from either MIMIX Availability Manager or the 5250 emulator. To find messages for comparison commands, specify the name of the command as the process type. For more information about using the message log, see the Using MIMIX book.

CMPFILA messages
The following are the messages for CMPFILA, with a comparison level specification of *FILE:
- Completion LVI3E01: This message indicates that all files were compared successfully.
- Diagnostic LVE3E0D: This message indicates that a particular attribute compared differently.
- Diagnostic LVE3385: This message indicates that differences were detected for an active file.
- Diagnostic LVE3E12: This message indicates that a file was not compared. The reason the file was not compared is included in the message.
- Escape LVE3E05: This message indicates that files were compared with differences detected. If the cumulative differences include files that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.
- Escape LVE3381: This message indicates that compared files were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter.
- Escape LVE3E09: This message indicates that the CMPFILA command ended abnormally.
- Escape LVE3E17: This message indicates that no object matched the specified selection criteria.
- Informational LVI3E06: This message indicates that no object was selected to be processed.

The following are the messages for CMPFILA, with a comparison level specification of *MBR:
- Completion LVI3E05: This message indicates that all members compared successfully.
- Diagnostic LVE3388: This message indicates that differences were detected for an active member.
- Escape LVE3E16: This message indicates that members were compared with differences detected. If the cumulative differences include members that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.

CMPOBJA messages
The following are the messages for CMPOBJA:
- Completion LVI3E02: This message indicates that objects were compared but no differences were detected.
- Diagnostic LVE3384: This message indicates that differences were detected for an active object.
- Escape LVE3E06: This message indicates that objects were compared and differences were detected. If the cumulative differences include objects that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.
- Escape LVE3380: This message indicates that compared objects were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter.
- Escape LVE3E17: This message indicates that no object matched the specified selection criteria.
- Informational LVI3E06: This message indicates that no object was selected to be processed.

The LVI3E02 message includes message data containing the number of objects compared, the system 1 name, and the system 2 name. The LVE3E06 message includes the same message data as LVI3E02 and also includes the number of differences detected.

CMPIFSA messages
The following are the messages for CMPIFSA:
- Completion LVI3E03: This message indicates that all IFS objects were compared successfully.
- Diagnostic LVE3E0F: This message indicates that a particular attribute compared differently.
- Diagnostic LVE3386: This message indicates that differences were detected for an active IFS object.
- Diagnostic LVE3E14: This message indicates that an IFS object was not compared. The reason the IFS object was not compared is included in the message.
- Escape LVE3E07: This message indicates that IFS objects were compared with differences detected. If the cumulative differences include IFS objects that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.
- Escape LVE3382: This message indicates that compared IFS objects were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter.
- Escape LVE3E17: This message indicates that no object matched the specified selection criteria.
- Escape LVE3E0B: This message indicates that the CMPIFSA command ended abnormally.
- Informational LVI3E06: This message indicates that no object was selected to be processed.

CMPDLOA messages
The following are the messages for CMPDLOA:
- Completion LVI3E04: This message indicates that all DLOs were compared successfully.
- Diagnostic LVE3E11: This message indicates that a particular attribute compared differently.
- Diagnostic LVE3387: This message indicates that differences were detected for an active DLO.
- Diagnostic LVE3E15: This message indicates that a DLO was not compared. The reason the DLO was not compared is included in the message.
- Escape LVE3E08: This message indicates that DLOs were compared and differences were detected. If the cumulative differences include DLOs that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.
- Escape LVE3383: This message indicates that compared objects were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter.
- Escape LVE3E17: This message indicates that no object matched the specified selection criteria.
- Escape LVE3E0C: This message indicates that the CMPDLOA command ended abnormally.
- Informational LVI3E06: This message indicates that no object was selected to be processed.

CMPRCDCNT messages
The following are the messages for CMPRCDCNT:
- Escape LVE3D4D: This message indicates that ACTIVE(*YES) outfile processing failed and identifies the reason code.
- Escape LVE3D5A: This message indicates that system journal replication is not active.
- Escape LVE3D5F: This message indicates that an apply session exceeded the unprocessed entry threshold.
- Escape LVE3D6D: This message indicates that user journal replication is not active.
- Escape LVE3D6F: This message identifies the number of members compared and how many compared members had differences.
- Escape LVE3D72: This message identifies a child process that ended unexpectedly.
- Escape LVE3E17: This message indicates that no object was found for the specified selection criteria.
- Informational LVI306B: This message identifies a child process that started successfully.
- Informational LVI306D: This message identifies a child process that completed successfully.
- Informational LVI3D45: This message indicates that active processing completed.
- Informational LVI3D50: This message indicates that work files are not deleted.
- Informational LVI3D5A: This message indicates that system journal replication is not active.
- Informational LVI3D5F: This message identifies an apply session that has exceeded the unprocessed entry threshold.
- Informational LVI3D6D: This message indicates that user journal replication is not active.
- Informational LVI3E05: This message identifies the number of members compared. No differences were detected.
- Informational LVI3E06: This message indicates that no object was selected for processing.

CMPFILDTA messages
The following are the messages for CMPFILDTA:
- Completion LVI3D59: This message indicates that all members compared were identical or that one or more members differed but were then completely repaired.
- Diagnostic LVE3031: This message indicates that the name of the local system was entered on the System 2 (SYS2) prompt. Using the name of the local system on the SYS2 prompt is not valid.
- Diagnostic LVE3D40: This message indicates that a record in one of the members cannot be processed. In this case, another job is holding an update lock on the record and the wait time has expired.
- Diagnostic LVE3D42: This message indicates that a selected member cannot be processed and provides a reason code.
- Diagnostic LVE3D46: This message indicates that a file member contains one or more field types that are not supported for comparison. These fields are excluded from the data compared.
- Diagnostic LVE3D50: This message indicates that a file member contains one or more large object (LOB) fields and a value other than *NONE was specified on the Repair on system (REPAIR) prompt. Files containing LOB fields cannot be repaired. In this case, the request to process the file member is ignored. Specify REPAIR(*NONE) to process the file member.
- Diagnostic LVE3D64: This message indicates that the compare detected minor differences in a file member. In this case, one member has more records allocated. Excess allocated records are deleted. This difference does not affect replication processing, however.
- Diagnostic LVE3D65: This message indicates that processing failed for the selected member. The member cannot be compared. Error message LVE0101 is returned.
- Escape LVE3358: This message indicates that the compare has ended abnormally, and is shown only when the conditions of messages LVI3D59, LVE3D5D, and LVE3D59 do not apply.
- Escape LVE3D5D: This message indicates that insignificant differences were found or remain after repair. The message provides a statistical summary of the differences found. Insignificant differences may occur when a member has deleted records while the corresponding member has no records yet allocated at the corresponding positions. It is also possible that one or more selected members contains excluded fields, such as large objects (LOBs).
- Escape LVE3D5E: This message indicates that the compare request ended because the data group was not fully active. The request included active processing (ACTIVE), which requires a fully active data group. Output may not be complete or accurate.
- Escape LVE3D5F: This message indicates that the apply session exceeded the specified threshold for unprocessed entries. The DB apply threshold (DBAPYTHLD) parameter determines what action should be taken when the threshold is exceeded. In this case, the value *END was specified for DBAPYTHLD, thereby ending the requested compare and repair action.
- Escape LVE3D59: This message indicates that significant differences were found or remain after repair, or that one or more selected members could not be compared. The message provides a statistical summary of the differences found.
- Escape LVE3D56: This message indicates that no member was selected by the object selection criteria.
- Escape LVE3D60: This message indicates that the status of the data group could not be determined. The WRKDG (MXDGSTS) outfile returned a value of *UNKNOWN for one or more fields used in determining the overall status of the data group.
- Escape LVE3D62: This message indicates the number of mismatches that will not be fully processed for a file due to the large number of mismatches found for this request. The compare will stop processing the affected file and will continue to process any other files specified on the same request.
- Escape LVE3D67: This message indicates that the value specified for the File entry status (STATUS) parameter is not valid. To process members in *HLDERR status, a data group must be specified on the command and *YES must be specified for the Process while active parameter.
- Escape LVE3D68: This message indicates that a switch cannot be performed due to members undergoing compare and repair processing.
- Escape LVE3D69: This message indicates that the data group is not configured for database. Data groups used with the CMPFILDTA command must be configured for database, and all processes for that data group must be active.
- Escape LVE3D6C: This message indicates that the CMPFILDTA command ended before it could complete the requested action. The processing step in progress when the end was received is indicated. The message provides a statistical summary of the differences found.
- Escape LVE3E41: This message indicates that a database apply job cannot process a journal entry with the indicated code, type, and sequence number because a supporting function failed. The journal information and the apply session for the data group are indicated. See the database apply job log for details of the failed function.
- Informational LVI3727: This message indicates that the database apply process (DBAPY) is currently processing a repair request for a specific member. The member was previously being held due to error (*HLDERR) and is now in *CMPRLS state.
- Informational LVI3728: This message indicates that the database apply process (DBAPY) is currently processing a repair request for a specific member. The member was previously being held due to error (*HLDERR) and has been changed from *CMPRLS to *CMPACT state.
- Informational LVI3729: This message indicates that the repair request for a specific member was not successful. As a result, the CMPFILDTA command has changed the data group file entry for the member back to *HLDERR status.
- Informational LVI372C: The CMPFILDTA command is ending controlled because of a user request. The command did not complete the requested compare or repair. Its output may be incomplete or incorrect.
- Informational LVI372D: The CMPFILDTA command exceeded the maximum rule recovery time policy and is ending. The command did not complete the requested compare or repair. Its output may be incomplete or incorrect.
- Informational LVI372E: The CMPFILDTA command is ending unexpectedly. It received an unexpected request from the remote CMPFILDTA job to shut down and is ending. The command did not complete the requested compare or repair. Its output may be incomplete or incorrect.
- Informational LVI3D4B: This message indicates that work files are not automatically deleted because the time specified on the Wait time (seconds) (ACTWAIT) prompt expired or an internal error occurred.
- Informational LVI3D59: This message indicates that the CMPFILDTA command completed successfully. The message also provides a statistical summary of compare processing.
- Informational LVI3D5E: This message indicates that the compare request ended because the request required active processing and the data group was not active. Results of the comparison may not be complete or accurate.
- Informational LVI3D5F: This message indicates that the apply session exceeded the specified threshold for unprocessed entries, thereby ending the requested compare and repair action. In this case, the value *END was specified for the DB apply threshold (DBAPYTHLD) parameter, which determines what action should be taken when the threshold is exceeded.
- Informational LVI3D60: This message indicates that the status of the data group could not be determined. The MXDGSTS outfile returned a value of *UNKNOWN for one or more status fields associated with systems, journals, system managers, journal managers, system communications, remote journal link, and database send and apply processes.
- Informational LVI3E06: This message indicates that the data group specified contains no data group file entries.

When active processing is requested with ACTWAIT(*NONE) specified, or when the active wait time expires, some members will have unconfirmed differences if none of the differences initially found was verified by the MIMIX database apply process. The CMPFILDTA outfile contains more detail on the results of each member compare, including information on the types of differences that are found and the number of differences found in each member. Messages LVI3D59, LVE3D5D, and LVE3D59 include message data containing the number of members selected, the number of members compared, the number of members with confirmed differences, the number of members with unconfirmed differences, the number of members successfully repaired, and the number of members for which repair was unsuccessful.
Updated for 5.0.02.00.


Adding messages to the MIMIX message log


The Add Message Log Entry (ADDMSGLOGE) command allows you to add an entry to the MIMIX message log. This is helpful when you want to include messages from your automation programs in the MIMIX message log for easier tracking. To see the parameters for this command, type the command and press F4 (Prompt). Help text for the parameters describes the options available. The message is written to the message log file. The message is also sent to the primary and secondary message queues if the message meets the filter criteria for those queues. The message can also be sent to a program message queue. Messages generated on a network system will be automatically sent to the management system. However, messages generated on a management system may not be sent to any network systems. The system manager on the management system does not send messages to network systems when it cannot determine which system should receive the message.
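For example, an automation CL program might log its own completion status so that it appears in the message log alongside MIMIX messages. The following is a sketch only; the parameter keywords shown for the message text and severity (MSG, SEV) are hypothetical illustrations rather than documented keywords, so prompt ADDMSGLOGE with F4 to see the actual parameters.

   PGM
     /* Hypothetical parameters: log an informational entry */
     /* from an automation program to the MIMIX message log */
     ADDMSGLOGE MSG('Nightly audit driver completed') SEV(*INFO)
   ENDPGM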


Output and batch guidelines


This topic provides guidelines for display, print, and file output. In addition, the user interface, the mechanics of selecting and producing output, and content issues such as formatting are described. Batch job submission guidelines are also provided. These guidelines address the user interface as well as the mechanics of submitting batch jobs that are not part of the mainline replication process.

General output considerations


Commands can produce many forms of output, including messages, display output (interactive panels), printer output (spooled files), and file output. This section focuses primarily on display, print, and file-related output. In most cases, the output information can be selectively directed to a display, a printer, or an outfile. Messages, on the other hand, are intended to provide diagnostic or status-related information, or an indication of error conditions. Messages are not intended for general output.
Several commands support display, print, output files, or some combination thereof. The Work (WRK) and Display (DSP) commands are the most common classes of commands that support various forms of output. Other classes of commands, such as Compare (CMP) and Verify (VFY), also support various forms of output in many cases. As part of an on-going effort to ensure consistent capabilities across similar classes of commands, most commands in the same class support the same output formats. For example, all Work (WRK) commands typically support display, print, and output formats. This section describes the general guidelines used throughout the product. However, there are some exceptions, which are described in the sections about specific commands.
Display support is intended primarily for Display (DSP) commands for displaying detailed information about a specific entry, or for Work (WRK) related commands that display lists of entries. Audit-based commands, such as Compare (CMP) and Verify (VFY), are often long-running requests and do not typically provide display support.
Spooled output support provides a more easily readable form of output for print or distribution purposes. Output is generated in the form of spooled output files that can easily be printed or distributed. Nearly all Display (DSP) or Work (WRK) commands support this form of output. In some cases, other command-specific options may affect the contents of the spooled output file.
Output files are intended primarily for automation purposes, providing MIMIX-related information in a manner that facilitates programming automation for various purposes, such as additional monitoring support, auditing support, automatic detection, and the correction of error conditions. Output files are also beneficial as intermediate data for advanced reporting using SQL query support.

Output parameter
Some commands can produce output of more than one type: display, print, or output file. In these cases, the selection is made on the Output parameter. Table 68 lists the values supported by the Output parameter.


Note: Not all values are supported for all commands. For some commands, a combination of values is supported.
Table 68. Values supported by the Output parameter

  Value      Description
  *          Display only
  *NONE      No output is generated
  *PRINT     Spooled output is generated
  *OUTFILE   An output file is generated
  *BOTH      Both spooled output and an output file are generated

Commands that support OUTPUT(*) and can also run in batch are required to support the other forms of output as well. Commands called from a program or submitted to batch with a specification of OUTPUT(*) default to OUTPUT(*PRINT); displaying a panel during batch processing or when called from another program would otherwise fail. With the exception of messages generated as a result of running a command, commands that support OUTPUT(*NONE) generate no other forms of output. Commands that support combinations of output values do not support OUTPUT(*) in combination with other output values.

Display output
Commands that support OUTPUT(*) provide the ability to display information interactively. Display (DSP) and Work (WRK) commands commonly use display support. Display commands typically display detailed information for a specific entity, such as a data group definition. Work commands display a list of entries and provide a summary view of the list. Display support is required to work interactively with the MIMIX product. Work commands often provide subsetting capabilities that allow you to select a subset of information. Rather than viewing all configuration entries for all data groups, for example, subsetting allows you to view the configuration entries for a specific data group. This ability allows you to easily view data that is important or relevant to you at a given time.

Print output
Spooled output is generated by specifying OUTPUT(*PRINT) and is intended to provide a readable form of output for print or distribution purposes. Output is generated in the form of spooled output files that can easily be printed or distributed. Most Display (DSP) or Work (WRK) commands support this form of output. Other commands, such as Compare (CMP) and Verify (VFY), also support spooled output in most cases.


The Work (WRK) and Display (DSP) commands support different categories of reports. The following are standard categories of reports available from these commands:
- The detail report contains information for one item, such as an object, definition, or entry. A detail report is usually obtained by using option 6 (Print) on a Work (WRK) display, or by specifying *PRINT on the Output parameter on a Display (DSP) command.
- The list summary report contains summary information for multiple objects, definitions, or entries. A list summary is usually obtained by pressing F21 (Print) on a Work (WRK) display. You can also get this report by specifying *BASIC on the Detail parameter on a Work (WRK) command.
- The list detail report contains detailed information for multiple objects, definitions, or entries. A list detail report is usually obtained by specifying *PRINT on the Output parameter of a Work (WRK) command.

Certain parameters, which vary from command to command, can affect the contents of spooled output. The following list represents a common set of parameters that directly impact spooled output:
- EXPAND(*YES or *NO): The expand parameter is available on the Work with Data Group Object Entries (WRKDGOBJE), the Work with Data Group IFS Entries (WRKDGIFSE), and the Work with Data Group DLO Entries (WRKDGDLOE) commands. Configuration for objects, IFS objects, and DLOs can be accomplished using generic entries, which represent one or more actual objects on the system. The object entry ABC*, for example, can represent many entries on a system. Expand support provides a means to determine which actual objects on a system are represented by a MIMIX configuration. Specifying *NO on the EXPAND parameter prints the configured data group entries.
- DETAIL(*FULL or *BASIC): Available on the Work (WRK) commands, the detail option determines the level of detail in the generated spool file. Specifying DETAIL(*BASIC) prints a summary list of entries. For example, this specification on the Work with Data Group Definitions (WRKDGDFN) command will print a summary list of data group definitions. Specifying DETAIL(*FULL) prints each data group definition in detail, including all attributes of the data group definition.
  Note: This parameter is ignored when OUTPUT(*) or OUTPUT(*OUTFILE) is specified.
- RPTTYPE(*DIF, *ALL, *SUMMARY, or *RRN, depending on the command): The Report Type (RPTTYPE) parameter controls the amount of information in the spooled file. The values available for this parameter vary, depending on the command. The values *DIF, *ALL, and *SUMMARY are available on the Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA) commands. Specifying *DIF reports only detected differences. A value of *SUMMARY reports a summary of objects compared, including an indication of differences detected. *ALL provides a comprehensive listing of objects compared as well as difference detail.
  The Compare File Data (CMPFILDTA) command supports the *DIF and *ALL values, as well as the value *RRN. Specifying *RRN allows you to output the relative record number of the first 1,000 objects that failed to compare. Using the *RRN value can help resolve situations where a discrepancy is known to exist, but you are unsure which system contains the correct data. In this case, *RRN provides the information that enables you to display the specific records on the two systems and to determine the system on which the file should be repaired.
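For example, a compare-only request that prints relative record numbers for mismatched records might look like the following sketch. REPAIR, RPTTYPE, and OUTPUT are parameter names used in this topic; the three-part data group name form and the member selection keyword (OBJ) are assumptions, so prompt CMPFILDTA with F4 to confirm the syntax.

   CMPFILDTA DGDFN(ACCOUNTS SYSTEMA SYSTEMB)       +
             OBJ((APPDTA/CUSTMAST))                +
             REPAIR(*NONE)                         +
             RPTTYPE(*RRN) OUTPUT(*PRINT)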

File output
Output files can be generated by specifying OUTPUT(*OUTFILE). Having full outfile support across the MIMIX product is important for a number of reasons. Outfile support is a key enabler for advanced automation purposes. The support also allows MIMIX customers and qualified MIMIX consultants to develop and deliver solutions tailored to the individual needs of the user. As with the other forms of output, output files are commonly supported across certain classes of commands. The Work (WRK) commands commonly support output files. In addition, many audit-based reports, such as Comparison (CMP) commands, also provide output file support. Output file support for Work (WRK) commands provides access to the majority of MIMIX configuration and status-related data. The Compare (CMP) commands also provide output files as a key enabler for automatic error detection and correction capabilities.
When you specify OUTPUT(*OUTFILE), you must also specify the OUTFILE and OUTMBR parameters. The OUTFILE parameter requires a qualified file and library name. As a result of running the command, the specified output file will be used. If the file does not exist, it will automatically be created.
Note: If a new file is created for CMPFILA, for example, the record format used is from the Lakeview-supplied model database file MXCMPFILA, found in the installation library. The text description of the created file is "Output file for CMPFILA". The file cannot reside in the product library.
The Outmember (OUTMBR) parameter allows you to specify which member to use in the output file. If no member exists, the default value of *FIRST will create a member with the same name as the file. A second element on the Outmember parameter indicates the way in which information is stored for an existing member. A value of *REPLACE will clear the current contents of the member and add the new records. A value of *ADD will append the new records to the existing data.
Expand support: Expand support was developed specifically as a feature for data group configuration entries that support generic specifications. Data group object entries, IFS entries, and DLO entries can all be configured using generic name values. If you specify an object entry with an object name of ABC* in library XYZ and accept the default values for all other fields, for example, all objects in library XYZ whose names begin with ABC are replicated. Specifying EXPAND(*NO) will write the specific configuration entries to the output files. Using EXPAND(*YES) will list all objects from the local system that match the configuration specified. Thus, if object name ABC* for library XYZ represented 1000 actual objects on the system, EXPAND(*YES) would add 1000 rows to the output file; EXPAND(*NO) would add a single generic entry.
Note: EXPAND(*YES) support locates all objects on the local system.
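For example, the following sketch directs compare results to an output file that can later be queried with SQL. OUTPUT, OUTFILE, and OUTMBR are the parameters described above; the three-part data group name form is an assumption, and MYLIB/FILADIF is an example file name.

   CMPFILA DGDFN(ACCOUNTS SYSTEMA SYSTEMB)         +
           OUTPUT(*OUTFILE)                        +
           OUTFILE(MYLIB/FILADIF)                  +
           OUTMBR(*FIRST *REPLACE)

If MYLIB/FILADIF does not exist, it is created using the record format of the model file MXCMPFILA described in the note above.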


General batch considerations


MIMIX functions that are identified as long-running processes typically allow you to submit the requests to batch and avoid the unnecessary use of interactive resources. Parameters typically associated with the Batch (BATCH) parameter include Job description (JOBD) and Job name (JOB).

Batch (BATCH) parameter


Values supported on the Batch (BATCH) parameter include *YES and *NO. A value of *YES indicates that the request will be submitted to batch. A value of *NO will cause the request to run interactively. The default value varies from command to command and is based on the general usage of the command. If a command usually requires significant resources to run, the default will likely be *YES. Some commands, such as Start Data Group (STRDG), perform a number of interactive tasks and start numerous jobs by submitting the requests to batch. Likewise, some jobs, such as the data group apply process, run on a continuous basis and do not end until specifically requested. These jobs represent the various processes required to support an active data group. Commands of this type do not provide a Batch (BATCH) parameter since batch is the only method available. For commands that are called from other programs, it is important to understand the difference between BATCH(*YES) and BATCH(*NO). Implementing automatic audit detection and correction support is easier to accomplish using BATCH(*NO). Assume, for example, that you are running the Compare File Attributes (CMPFILA) command as part of an audit. If differences are detected, specifying BATCH(*NO) allows you to monitor for specific exceptions and implement automatic correction procedures. This capability would not be available if you submitted the request with BATCH(*YES).
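A minimal CL sketch of that technique follows, assuming CMPFILA signals escape message LVE3E05 when differences are detected, as listed in Completion and escape messages for comparison commands on page 514. The three-part data group name form is an assumption, and MYLIB/FIXFILES is a hypothetical user-written recovery program.

   PGM
     /* Run the compare in this job so escape messages */
     /* return directly to this program                */
     CMPFILA DGDFN(ACCOUNTS SYSTEMA SYSTEMB) BATCH(*NO)
     /* Differences detected: start automated correction */
     MONMSG MSGID(LVE3E05) EXEC(CALL PGM(MYLIB/FIXFILES))
   ENDPGM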

Job description (JOBD) parameter


The Job Description (JOBD) parameter allows the user of the command to specify which job description to use when submitting the batch request. Newer MIMIX commands use the job descriptions MXAUDIT, MXSYNC, and MXDFT, which are automatically created in the MIMIX installation library when MIMIX is installed. Jobs and related output are associated with the user profile submitting the request. Older commands that provided job description support for batch processing have not been altered. Refer to individual commands for default values.

Job name (JOB) parameter


The Job name (JOB) parameter allows the user of the command to specify the job name used for the submitted job request. The job name defaults to the name of the command. The job name parameter is intended to make it easier to identify the active job as well as the spooled files generated as a result of running the command. For spooled files, the job name is also used for the user data information. Only newer features provide this capability.


Displaying a list of commands in a library


You can use the IBM Select Command (SLTCMD) command to display a list of all commands contained within a particular library on the system. This list includes any commands you have added to the associated library, including copies of other commands.

Note: This list does not indicate whether you are licensed to the command or whether authority to the command exists.

Do the following:
1. From the library you want, access the MIMIX Intermediate Main Menu.
2. Select option 13 (Utilities menu) and press Enter.
3. When the MIMIX Utilities Menu is displayed, select option 1 (Select all commands).
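Alternatively, if you prefer the command line, the IBM SLTCMD command can be run directly. The parameter form shown here is an assumption for illustration; prompt the command (F4) to verify it on your system.

/* Sketch: list all commands in the installation library. */
SLTCMD CMD(installation-library/*ALL)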


Running commands on a remote system


The Run Command (RUNCMD) and Run Commands (RUNCMDS) commands provide a convenient way to run a single command or multiple commands on a remote system. The RUNCMD and RUNCMDS commands replace and extend the capabilities available in the IBM commands Submit Remote Command (SBMRMTCMD) and Run Remote Command (RUNRMTCMD).

The MIMIX commands provide a protocol-independent way of running commands using MIMIX constructs such as system definitions, data group definitions, and transfer definitions. The MIMIX commands enable you to run commands and receive messages from the remote system. In addition, the RUNCMD and RUNCMDS commands use the current data group direction to determine where the command is to be run. This capability simplifies automation by eliminating the need to manually enter source and target information at the time a command is run.

Note: Do not change the RUNCMD or RUNCMDS commands to PUBLIC(*EXCLUDE) without giving MIMIXOWN proper authority.
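As a simple hedged illustration, a single command might be run on the system named by an existing system definition. The system definition name BACKUP and the command shown are placeholders for this sketch.

/* Sketch: display a library on the remote system identified by the */
/* system definition named BACKUP.                                  */
RUNCMD CMD(DSPLIB LIB(MYLIB)) PROTOCOL(*SYSDFN) SYSDFN(BACKUP)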

Benefits - RUNCMD and RUNCMDS commands


Individually, the RUNCMD command can be used as a convenient tool to debug base communications problems. The RUNCMD command also provides the ability to prompt on any command. The RUNCMDS command, while supporting up to 300 commands, does not allow command prompting. When multiple commands are run on a single RUNCMDS command, only one communications session is established, and the target program environment, including QTEMP and the local data area, is kept intact. Additionally, the RUNCMDS command has options for monitoring escape and completion messages. All messages are sent to the same program level as the program or command line running the command, enabling you to program remote commands in the same manner as local commands.

Both RUNCMD and RUNCMDS allow you to specify commands to be sent through the journal stream and run by the database apply process. This support is implemented as a MIMIX request sent through the journal stream using the U-MX journal entry codes. The value *DGJRN on the Protocol prompt enables this capability, thereby replacing conventional U-EX support. In addition, the When to run (RUNOPT) prompt can be used to specify when the journal entry associated with the command is processed by the target system for the specified data group. See "Procedures for running commands RUNCMD, RUNCMDS" on page 530 for additional details about the RUNOPT parameter.

Benefits of the RUNCMD and RUNCMDS commands also include the following:
- Provides a convenient and consistent interface to automate tasks across a network.
- Centralizes the management and control of networked systems.
- Enables protocol-independent testing and verification of MIMIX communications setups.

- Supports sending and receiving local data area (LDA) data.
- Allows commands to be run under other user profiles as long as the user ID and password are the same on both systems. The password is validated before the command is run on the remote system, so the user must have authority to the user profile being used.
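A hedged sketch of a multiple-command request follows. The element syntax for each command is illustrative only; prompt RUNCMDS (F4) for the exact format of the Command and Monitor for messages elements, and substitute your own transfer definition name.

/* Sketch: run two commands over one communications session on the */
/* remote system of a transfer definition.                         */
RUNCMDS CMD((STRSBS SBSD(QBATCH)) (STRTCPSVR SERVER(*FTP))) PROTOCOL(*TFRDFN) TFRDFN(PRIMARY SYS1 SYS2)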

Procedures for running commands RUNCMD, RUNCMDS


There are two ways to use the RUNCMD or RUNCMDS commands. You can use them with a specific protocol, or you can use them by specifying a protocol through existing MIMIX configuration elements. To use the commands with a specific protocol, use the procedure Running commands using a specific protocol on page 530. To use the commands using an existing MIMIX configuration, use the procedure Running commands using a MIMIX configuration element on page 532.

Running commands using a specific protocol


1. From the MIMIX Main Menu, select option 13 (Utilities menu). The MIMIX Utilities Menu appears.
2. From the MIMIX Utilities Menu, select option 1 (Select all commands). The Select Command display appears.
3. Page down and do one of the following:
   - To run a single command on a remote system, type a 1 next to RUNCMD. The Run Command (RUNCMD) display appears.
   - To run multiple commands on a remote system, type a 1 next to RUNCMDS. The Run Commands (RUNCMDS) display appears.

4. Specify the commands to run or messages to monitor for the command as follows:
   a. At the Command prompt, specify the command to run on the remote system. When using the RUNCMDS command, you can specify up to 300 commands.
   b. If you are using the RUNCMDS command, you can specify as many as ten escape, notify, or status messages to be monitored for each command. Specify these at the Monitor for messages prompt.
5. Specify the protocol and protocol-specific implementation using Table 69.
Table 69. Specific protocols and specifications used for RUNCMD and RUNCMDS

How to run (protocol): Run on local system
Specify: At the Protocol prompt, specify *LOCAL.

How to run (protocol): Run using TCP/IP
Specify: Do the following:
1. At the Protocol prompt, specify *TCP to run the commands using Transmission Control Protocol/Internet Protocol (TCP/IP) communications. Press Enter for additional prompts.
2. At the Host name or address prompt, specify the host alias or address of the TCP protocol.
3. At the Port number or alias prompt, specify the port number or port alias on the local system used to communicate with the remote system. This value is a 14-character mixed-case TCP port alias or port number.

How to run (protocol): Run using SNA
Specify: Do the following:
1. At the Protocol prompt, specify *SNA to run the commands using System Network Architecture (SNA) communications. Press Enter for additional prompts.
2. At the Remote location prompt, specify the name or address of the remote location.
3. At the Local location prompt, specify the unique location name that identifies the system to remote devices.
4. At the Remote network identifier prompt, specify the network identifier of the remote location.
5. At the Mode prompt, specify the name of the mode description used for communications. The product default for this parameter is MIMIX.

How to run (protocol): Run using OptiConnect
Specify: Do the following:
1. At the Protocol prompt, specify *OPTI to run the commands using OptiConnect fiber optic network communications. Press Enter for additional prompts.
2. At the Remote location prompt, specify the name or address of the remote location.

6. Do one of the following:
   - To access additional options, skip to Step 7.
   - To run the commands or monitor for messages, press Enter.

7. Press F10 (Additional parameters).
8. At the Check syntax prompt, specify whether to check the syntax of the command only. If *YES is specified, the syntax is checked but the command is not run.
9. At the Local data area length prompt, specify the amount of the current local data area (LDA) to copy. This is useful for automating application processing that is dependent on the local data area and for passing binary information to command programs.
10. At the Return LDA prompt, specify whether to return the contents of the local data area (LDA) from the remote system after the commands are run. The value specified in the Local data area length prompt in Step 9 determines how much data is returned.

11. At the User prompt, specify the user profile to use when the command is run on the remote system.
12. To run the commands or monitor for messages, press Enter.

Running commands using a MIMIX configuration element


To use RUNCMD or RUNCMDS using a MIMIX configuration element, do the following:
1. From the MIMIX Main Menu, select option 13 (Utilities menu). The MIMIX Utilities Menu appears.
2. From the MIMIX Utilities Menu, select option 1 (Select all commands). The Select Command display appears.
3. Page down and do one of the following:
   - To run a single command on a remote system, type a 1 next to RUNCMD. The Run Command (RUNCMD) display appears.
   - To run multiple commands on a remote system, type a 1 next to RUNCMDS. The Run Commands (RUNCMDS) display appears.

4. Specify the commands to run or messages to monitor for the command as follows:
   a. At the Command prompt, specify the command to run on the remote system. When using the RUNCMDS command, you can specify up to 300 commands.
   b. If you are using the RUNCMDS command, you can specify as many as ten escape, notify, or status messages to be monitored for each command. Specify these at the Monitor for messages prompt.
5. Specify the MIMIX configuration element using Table 70.
Table 70. MIMIX configuration protocols and specifications

Protocol using MIMIX configuration element: Run on system defined by the default transfer definition
Protocol prompt value: *SYSDFN
Also specify: System definition prompt: Specify the name of the system definition or press F4 for a list of valid definitions. Press Enter for additional prompts.

Protocol using MIMIX configuration element: Run on the system specified in the transfer definition (TFRDFN parameter) that is not the local system
Protocol prompt value: *TFRDFN
Also specify: Transfer definition prompt: Press F1 (Help) for assistance in specifying the three-part qualified name of the transfer definition. Press Enter for additional prompts.

Protocol using MIMIX configuration element: Run on the system specified in the data group definition that is not the local system
Protocol prompt value: *DGDFN
Also specify: Data group definition prompt: Press F1 (Help) for assistance in specifying the three-part qualified name of the data group definition.

Protocol using MIMIX configuration element: Run on the current source system defined for the data group
Protocol prompt value: *DGSRC
Also specify: Data group definition prompt: Press F1 (Help) for assistance in specifying the three-part qualified name of the data group definition.

Protocol using MIMIX configuration element: Run on the current target system defined for the data group
Protocol prompt value: *DGTGT
Also specify: Data group definition prompt: Press F1 (Help) for assistance in specifying the three-part qualified name of the data group definition.

Protocol using MIMIX configuration element: Run by the database apply process when the journal entry is processed
Protocol prompt value: *DGJRN
Also specify: Data group definition prompt: Press F1 (Help) for assistance in specifying the three-part qualified name of the data group definition.

Protocol using MIMIX configuration element: Run on the system defined as System 1 for the data group
Protocol prompt value: *DGSYS1
Also specify: Data group definition prompt: Press F1 (Help) for assistance in specifying the three-part qualified name of the data group definition.

Protocol using MIMIX configuration element: Run on the system defined as System 2 for the data group
Protocol prompt value: *DGSYS2
Also specify: Data group definition prompt: Press F1 (Help) for assistance in specifying the three-part qualified name of the data group definition.

6. Do one of the following:
   - To access additional options, skip to Step 7.
   - To run the commands or monitor for messages, press Enter.

7. Press F10 (Additional parameters).
8. At the Check syntax only prompt, specify whether to check the syntax of the command only. If *YES is specified, the syntax is checked but the command is not run.
9. At the Local data area length prompt, specify the amount of the current local data area (LDA) to copy. This is useful for automating application processing that is dependent on the local data area and for passing binary information to command programs.
10. At the Return LDA prompt, specify whether to return the contents of the local data area (LDA) from the remote system after the commands are run. The value specified in the Local data area length prompt in Step 9 determines how much data is returned.
11. At the User prompt, specify the user profile to use when the command is run on the remote system.
12. If you specified *DGJRN for the Protocol prompt, you will see the File prompts. Do the following:
   a. At the File name prompt, specify the name of the file to use when the journal entry generated by the commands is sent.
      Note: Use these prompts if you want the command to run in the database apply job associated with the named file. If a file is not specified, database apply (DBAPY) session A is selected.
   b. At the Library prompt, specify the name of the library associated with the file.
13. If you specified a file name for the File prompt, you will see the When to run prompt. Using Table 71, specify when the journal entry associated with the command is processed by the target system for the specified data group.
14. To run the commands or monitor for messages, press Enter.


Table 71. Options for processing journal entries with MIMIX *DGJRN protocol

When to run (RUNOPT): Run when the database apply job for the specified file receives the journal entry
Specify: Do the following:
1. At the Protocol prompt, specify *DGJRN.
2. At the When to run prompt, specify *RCV.

When to run (RUNOPT): Run in sequence with all other entries for the file
Specify: Do the following:
1. At the Protocol prompt, specify *DGJRN.
2. At the When to run prompt, specify *APY.

Using lists of retrieve commands


The following additional commands make working with retrieve commands easier.

Note: Although the current retrieve commands will be supported indefinitely, they will not be enhanced. You are encouraged to use the extensive outfile support that is now available. Outfile support provides the means to generate a list of entries; the retrieve commands are primarily intended to retrieve information for a specific entry only. For more information, see "Output and batch guidelines" on page 523.

Open MIMIX List (OPNMMXLST) - This command allows you to open a list of specified MIMIX definitions or data group entries for use with the MIMIX retrieve commands. You specify the type of definitions or data group entries to include in the list, a CL variable to receive the list identifier, and a data group definition. The CL variable for the list identifier is needed for the MIMIX retrieve commands.

Close MIMIX List (CLOMMXLST) - This command allows you to close a list of specified MIMIX definitions or data group entries opened by the Open MIMIX List (OPNMMXLST) command. A close is necessary in order to free resources. You specify the list identifier to close.
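A hedged CL fragment of the open/close pattern follows. The parameter keywords shown (TYPE, LSTID, DGDFN) are assumptions for illustration only; prompt the commands (F4) for the actual keywords in your installation.

/* Sketch: open a list of data group entries, capture the list      */
/* identifier for use with the retrieve commands, then close the    */
/* list to free resources.                                          */
DCL VAR(&LSTID) TYPE(*CHAR) LEN(10)
OPNMMXLST TYPE(*DGOBJE) LSTID(&LSTID) DGDFN(MYDGDFN)
/* ... call the appropriate retrieve command for each entry ...     */
CLOMMXLST LSTID(&LSTID)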


Changing command defaults


Nearly all MIMIX processes are based on commands that are shipped with default values that reflect best-practices recommendations. This ensures the easiest and best use of each command. MIMIX implements named configuration definitions through which you can customize your configuration by using options on commands without resorting to changing command defaults. If you wish to customize command defaults to fit a specific business need, use the IBM Change Command Default (CHGCMDDFT) command. Be aware that by changing a command default, you may affect the operation of other MIMIX processes. Also, each update of MIMIX software will cause any changes to be lost.
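For example, the IBM command syntax for changing a default is shown below. The choice of command and value is illustrative only, and any such change is lost at the next MIMIX software update.

/* Sketch: make batch submission the default for a command. */
CHGCMDDFT CMD(installation-library/CMPFILA) NEWDFT('BATCH(*YES)')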


Chapter 22

Customizing with exit point programs


The MIMIX family of products provides a variety of exit points to enable you to extend and customize your operations. The topics in this chapter include:
- "Summary of exit points" on page 538 provides tables that summarize the exit points available for use.
- "Working with journal receiver management user exit points" on page 541 describes how to use user exit points safely.

Summary of exit points


The following tables summarize the exit points available for use.

MIMIX user exit points


MIMIX provides the exit points identified in Table 72 for journal receiver management. For additional information, see "Working with journal receiver management user exit points" on page 541.
Table 72. MIMIX exit points for journal receiver management

Type: Journal receiver management exit points
Exit point names:
- Receiver change management pre-change
- Receiver change management post-change
- Receiver delete management pre-check
- Receiver delete management pre-delete
- Receiver delete management post-delete

MIMIX also supports a generic interface to existing database and object replication process exit points that provides enhanced filtering capability on the source system. This generic user exit capability is only available through a Certified MIMIX Consultant.

MIMIX Monitor user exit points


Table 73 identifies the user exit points available in MIMIX Monitor. You can use the exit points through programs controlled by a monitor. Monitors can be set up to operate with other products, including MIMIX. You can also use the MIMIX Monitor User Access API (MMUSRACCS) for all interfaces to MIMIX Monitor. MIMIX Monitor also contains the MIMIX Model Switch Framework. This support provides powerful customization opportunities through a set of programs and commands that are designed to provide a consistent switch framework for you to use in your switching environment.

The Using MIMIX Monitor book documents the user exit points, the API, and MIMIX Model Switch Framework.
Table 73. MIMIX Monitor exit points

Type: Interface exit points
Exit point names: Pre-create, Post-create, Pre-change, Post-change, Pre-copy, Post-copy, Pre-delete, Post-delete, Pre-display, Post-display, Pre-print, Post-print, Pre-rename, Post-rename, Pre-start, Post-start, Pre-end, Post-end, Pre-work with information, Post-work with information, Pre-hold, Post-hold, Pre-release, Post-release, Pre-status, Post-status, Pre-change status, Post-change status, Pre-run, Post-run, Pre-export, Post-export, Pre-import, Post-import

Type: Condition program exit point
Exit point name: After pre-defined condition check

Type: Event program exit point
Exit point name: After condition check (pre-defined and user-defined)

MIMIX Promoter user exit points


Table 74 identifies the exit points within MIMIX Promoter. If you perform concurrent operations between MIMIX Promoter and MIMIX, you might consider using these exit points within automation.
Table 74. MIMIX Promoter exit points

Type: Control exit points (The control exit service program supports these exit points.)
Exit point names: Transfer complete, Lock failure, After lock, Copy failure, Copy finalize, After temporary journal delete

Type: Data exit points (The data exit service program supports these exit points.)
Exit point names: Data initialize, Data transfer, Data finalize

Requesting customized user exit programs


If you need a specialized user exit program designed for your applications, contact us at support@lakeviewtech.com or through the online tools at www.mimix.com/support. Our personnel will ask about your requirements and design a customized program to work with your applications.


Working with journal receiver management user exit points


User exit points in critical processing areas enable you to incorporate specialized processing with MIMIX to extend function to meet additional needs in your environment. Access to user exit processing is provided through an exit program that can be written in any language supported by i5/OS.

Because user exit programming allows user code to run within MIMIX processes, great care must be exercised to prevent the user code from interfering with the proper operation of MIMIX. For example, a user exit program that inadvertently causes an entry needed by MIMIX to be discarded could result in a file not being available in case of a switch. Use caution in designing a configuration for use with user exit programming. You can safely use user exit processing with proper design, programming, and testing. Lakeview services are also available to help customers implement specialized solutions.

Journal receiver management exit points


MIMIX includes support that allows user exit programming in the journal receiver change management and journal receiver delete management processes. With this support, you can customize change management and delete management of journal receivers according to the needs of your environment. Journal receiver management exit points are enabled when you specify an exit program to use in a journal definition.
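For example, an exit program might be named when changing a journal definition. The Change Journal Definition (CHGJRNDFN) command is referenced elsewhere in this book, but the EXITPGM keyword shown here is an assumption for illustration only; prompt the command (F4) for the actual parameter that names the exit program.

/* Sketch: enable the receiver management exit points by naming an */
/* exit program on the journal definition.                         */
CHGJRNDFN JRNDFN(MYJRNDFN) EXITPGM(MYLIB/DMJREXIT)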

Change management exit points


MIMIX can change journal receivers when a specified time is reached, when the receiver reaches a specified size, or when the sequence number reaches a specified threshold. You specify these values when you create a journal definition. MIMIX also changes the journal receiver at other times, such as during a switch and when a user requests a change with the Change Data Group Receiver (CHGDGRCV) command. The following user exit points are available for customizing change management processing:

Receiver Change Management Pre-Change User Exit Point. This exit point is located immediately before the point in processing where MIMIX changes a journal receiver. Either the user forced a journal receiver change (CHGDGRCV command) or MIMIX processing determined that the journal receiver needs to change. The return code from the exit program can prevent MIMIX from changing the journal receiver, which can be useful when the exit program changes the receiver itself.

Receiver Change Management Post-Change User Exit Point. This exit point is located immediately after the point in processing where MIMIX changes a journal receiver. MIMIX ignores the return code from the exit program. This exit point is useful for processing that does not affect MIMIX processing, such as saving the journal receiver to media. (The example program in Table 75 on page 545 shows how you can determine the name of the previously attached journal receiver by retrieving the name of the first entry in the currently attached journal receiver.)

Restrictions for Change Management Exit Points: The following restrictions apply when the exit program is called from either of the change management exit points:
- Do not include the Change Data Group Receiver (CHGDGRCV) command in your exit program.
- Do not submit batch jobs for journal receiver change or delete management from the exit program. Submitting a batch job would allow the in-line exit point processing to continue and potentially return to normal MIMIX journal management processing, thereby conflicting with journal manager operations. By not submitting journal receiver change management to a batch job, you prevent a potential problem where the journal receiver is locked when it is accessed by a batch program.

Delete management exit points


MIMIX can delete journal receivers when the send process has completed processing the journal receiver and other configurable conditions are met. When you create a journal definition you specify whether unsaved journal receivers can be deleted, the number of receivers that must be retained, and how many days to retain the receivers. The following user exit points are available for customizing delete management processing:

Receiver Delete Management Pre-Check User Exit Point. This exit point is located before MIMIX determines whether to delete a journal receiver. When called at this exit point, actions specified in a user exit program can affect conditions that MIMIX processing checks before the pre-delete exit point. For example, an exit program that saves the journal receiver may make the journal receiver eligible for deletion by MIMIX processing. The return code from the exit program can prevent MIMIX from deleting the journal receiver and any other journal receiver in the chain.

Receiver Delete Management Pre-Delete User Exit Point. This exit point is located immediately before the point in processing where MIMIX deletes a journal receiver. MIMIX processing has determined that the journal receiver is eligible for deletion. The return code from the exit program can prevent MIMIX from deleting the journal receiver, which is useful when the receiver is being used by another application.

Receiver Delete Management Post-Delete User Exit Point. This exit point is located immediately after the point in processing where MIMIX deletes a journal receiver. The return code from the exit program can prevent MIMIX from deleting any other (newer) journal receivers attached to the journal.

Requirements for journal receiver management exit programs


This exit program allows you to include specialized processing in your MIMIX environment at the points that handle journal receiver management. The exit program runs with the authority of the user profile that owns the exit program. If your exit program fails and signals an exception to MIMIX, MIMIX processing continues as if the exit program was not specified.

Attention: It is possible to cause undesirable, long delays in MIMIX processing when you use this exit program. When the exit program is called, MIMIX passes control to the exit program and will not continue change management or delete management processing until the exit program returns. Consider placing long-running processes that will not affect journal management in a batch job that is called by the exit program.

Return Code
  OUTPUT; CHAR(1)
  This value indicates how to continue processing the journal receiver when the exit program returns control to the MIMIX process. This parameter must be set. When the exit program is called from Function C2, the value of the return code is ignored. Possible values are:
    0  Do not continue with MIMIX journal management processing for this journal receiver.
    1  Continue with MIMIX journal management processing.

Function
  INPUT; CHAR(2)
  The exit point from which this exit program is called. Possible values are:
    C1  Pre-change exit point for receiver change management.
    C2  Post-change exit point for receiver change management.
    D0  Pre-check exit point for receiver delete management.
    D1  Pre-delete exit point for receiver delete management.
    D2  Post-delete exit point for receiver delete management.
  Note: Restrictions for exit programs called from the C1 and C2 exit points are described within topic "Change management exit points" on page 541.

Journal Definition
  INPUT; CHAR(10)
  The name that identifies the journal definition.

System
  INPUT; CHAR(8)
  The name of the system defined to MIMIX on which the journal is defined.

Reserved1
  INPUT; CHAR(10)
  This field is reserved and contains blank characters.

Journal Name
  INPUT; CHAR(10)
  The name of the journal that MIMIX is processing.

Journal Library
  INPUT; CHAR(10)
  The name of the library in which the journal is located.

Receiver Name
  INPUT; CHAR(10)
  The name of the journal receiver associated with the specified journal. This is the journal receiver on which journal management functions will operate. For receiver change management functions, this always refers to the currently attached journal receiver. For receiver delete management functions, this always refers to the same journal receiver.

Receiver Library
  INPUT; CHAR(10)
  The library in which the journal receiver is located.

Sequence Option
  INPUT; CHAR(6)
  The value of the Sequence option (SEQOPT) parameter on the CHGJRN command that MIMIX processing would have used to change the journal receiver. Lakeview Technology recommends that you specify this parameter to prevent synchronization problems if you change the journal receiver. This parameter is only used when the exit program is called at the C1 (pre-change) exit point. Possible values are:
    *CONT   The journal sequence number of the next journal entry created is 1 greater than the sequence number of the last journal entry in the currently attached journal receiver.
    *RESET  The journal sequence number of the first journal entry in the newly attached journal receiver is reset to 1. The exit program should either reset the sequence number or set the return code to 0 to allow MIMIX to change the journal receiver and reset the sequence number.

Threshold Value
  INPUT; DECIMAL(15, 5)
  The value to use for the THRESHOLD parameter on the CRTJRNRCV command. This parameter is only used when the exit program is called at the C1 (pre-change) exit point. Possible values are:
    0      Do not change the threshold value. The exit program must not change the threshold size for the journal receiver.
    value  The exit program must create a journal receiver with this threshold value, specified in kilobytes. The exit program must also change the journal to use that receiver, or send a return code value of 0 so that MIMIX processing can change the journal receiver.

Reserved2
  INPUT; CHAR(1)
  This field is reserved and contains blank characters.

Reserved3
  INPUT; CHAR(1)
  This field is reserved and contains blank characters.

Journal receiver management exit program example


The following example shows how an exit program can customize changing and deleting journal receivers. This exit program only processes journal receivers when it is called at the pre-change exit point (C1), the post-change exit point (C2), or the pre-check exit point (D0). When called at the pre-change exit point, the sample exit program handles changing any journal receiver in library MYLIB; for any other journal library, MIMIX handles change management processing. When called at the post-change exit point, the exit program saves the recently detached journal receiver if the journal is in library ABCLIB. (The recently detached journal receiver was the attached receiver at the pre-change exit point.) When called at the pre-check exit point, if the journal library is TEAMLIB, the exit program saves the journal receiver to tape and allows MIMIX receiver delete management to continue processing.
Table 75. Sample journal receiver management exit program

/*--------------------------------------------------------------*/
/* Program....: DMJREXIT                                        */
/* Description: Example user exit program using CL              */
/*--------------------------------------------------------------*/
PGM        PARM(&RETURN &FUNCTION &JRNDEF &SYSTEM +
             &RESERVED1 &JRNNAME &JRNLIB &RCVNAME +
             &RCVLIB &SEQOPT &THRESHOLD &RESERVED2 +
             &RESERVED3)

DCL        VAR(&RETURN)    TYPE(*CHAR) LEN(1)
DCL        VAR(&FUNCTION)  TYPE(*CHAR) LEN(2)
DCL        VAR(&JRNDEF)    TYPE(*CHAR) LEN(10)
DCL        VAR(&SYSTEM)    TYPE(*CHAR) LEN(8)
DCL        VAR(&RESERVED1) TYPE(*CHAR) LEN(10)
DCL        VAR(&JRNNAME)   TYPE(*CHAR) LEN(10)
DCL        VAR(&JRNLIB)    TYPE(*CHAR) LEN(10)
DCL        VAR(&RCVNAME)   TYPE(*CHAR) LEN(10)
DCL        VAR(&RCVLIB)    TYPE(*CHAR) LEN(10)
DCL        VAR(&SEQOPT)    TYPE(*CHAR) LEN(6)
DCL        VAR(&THRESHOLD) TYPE(*DEC)  LEN(15 5)
DCL        VAR(&RESERVED2) TYPE(*CHAR) LEN(1)
DCL        VAR(&RESERVED3) TYPE(*CHAR) LEN(1)

/*--------------------------------------------------------------*/
/* Constants and misc. variables                                */
/*--------------------------------------------------------------*/
DCL        VAR(&STOP)     TYPE(*CHAR) LEN(1) VALUE('0')
DCL        VAR(&CONTINUE) TYPE(*CHAR) LEN(1) VALUE('1')
DCL        VAR(&PRECHG)   TYPE(*CHAR) LEN(2) VALUE('C1')
DCL        VAR(&POSTCHG)  TYPE(*CHAR) LEN(2) VALUE('C2')
DCL        VAR(&PRECHK)   TYPE(*CHAR) LEN(2) VALUE('D0')
DCL        VAR(&PREDLT)   TYPE(*CHAR) LEN(2) VALUE('D1')
DCL        VAR(&POSTDLT)  TYPE(*CHAR) LEN(2) VALUE('D2')
DCL        VAR(&RTNJRNE)  TYPE(*CHAR) LEN(165)
DCL        VAR(&PRVRCV)   TYPE(*CHAR) LEN(10)
DCL        VAR(&PRVRLIB)  TYPE(*CHAR) LEN(10)

/*--------------------------------------------------------------*/
/* MAIN                                                         */
/*--------------------------------------------------------------*/
CHGVAR     &RETURN &CONTINUE  /* Continue processing receiver */

/*--------------------------------------------------------------*/
/* Handle processing for the pre-change exit point.             */
/*--------------------------------------------------------------*/
IF         (&FUNCTION *EQ &PRECHG) THEN(DO)

/*--------------------------------------------------------------*/
/* If the journal library is my library (MYLIB), exit program   */
/* will do the changing of the receivers.                       */
/*--------------------------------------------------------------*/
IF         (&JRNLIB *EQ 'MYLIB') THEN(DO)
IF         (&THRESHOLD *GT 0) THEN(DO)
CRTJRNRCV  JRNRCV(&RCVLIB/NEWRCV0000) +
             THRESHOLD(&THRESHOLD)
CHGJRN     JRN(&JRNLIB/&JRNNAME) +
             JRNRCV(&RCVLIB/NEWRCV0000) SEQOPT(&SEQOPT)
ENDDO      /* There has been a threshold change */
ELSE       (CHGJRN JRN(&JRNLIB/&JRNNAME) JRNRCV(*GEN) +
             SEQOPT(&SEQOPT))  /* No threshold change */
CHGVAR     &RETURN &STOP  /* Stop processing entry */
ENDDO      /* &JRNLIB is MYLIB */
ENDDO      /* &FUNCTION *EQ &PRECHG */

/*--------------------------------------------------------------*/
/* At the post-change user exit point if the journal library is */
/* ABCLIB, save the just detached journal receiver.             */
/*--------------------------------------------------------------*/
ELSE       IF (&FUNCTION *EQ &POSTCHG) THEN(DO)
IF         COND(&JRNLIB *EQ 'ABCLIB') THEN(DO)
RTVJRNE    JRN(&JRNLIB/&JRNNAME) +
             RCVRNG(&RCVLIB/&RCVNAME) FROMENT(*FIRST) +
             RTNJRNE(&RTNJRNE)

/*----------------------------------------------------------*/
/* Retrieve the journal entry, extract the previous receiver*/
/* name and library to do the save with.                    */
/*----------------------------------------------------------*/
CHGVAR     &PRVRCV (%SUBSTRING(&RTNJRNE 126 10))
CHGVAR     &PRVRLIB (%SUBSTRING(&RTNJRNE 136 10))
SAVOBJ     OBJ(&PRVRCV) LIB(&PRVRLIB) DEV(TAP02) +
             OBJTYPE(*JRNRCV)  /* Save detached receiver */
ENDDO      /* &JRNLIB is ABCLIB */
ENDDO      /* &FUNCTION is &POSTCHG */

/*--------------------------------------------------------------*/
/* Handle processing for the pre-check exit point.              */
/*--------------------------------------------------------------*/
ELSE       IF (&FUNCTION *EQ &PRECHK) THEN(DO)
IF         (&JRNLIB *EQ 'TEAMLIB') THEN( +
             SAVOBJ OBJ(&RCVNAME) LIB(&RCVLIB) DEV(TAP01) +
             OBJTYPE(*JRNRCV))
ENDDO      /* &FUNCTION is &PRECHK */
ENDPGM


Appendix A

Supported object types for system journal replication


This list identifies IBM i object types and indicates whether MIMIX can replicate these through the system journal. Note: Not all object types exist in all releases of IBM i.
Object Type  Description  Replicated
*ALRTBL   Alert table   Yes
*AUTL     Authorization list   Yes
*BLKSF    Block special file   No
*BNDDIR   Binding directory   Yes
*CFGL     Configuration list   No (6)
*CHTFMT   Chart format   No (9)
*CLD      C locale description   Yes
*CLS      Class   Yes
*CMD      Command   Yes
*CNNL     Connection list   Yes
*COSD     Class-of-service description   Yes
*CRG      Cluster resource group   No (9)
*CRQD     Change request description   Yes
*CSI      Communications side information   Yes
*CTLD     Controller description   Yes (1)
*DDIR     Distributed file directory   No (2)
*DEVD     Device description   Yes (1, 13)
*DIR      Directory   Yes (2)
*DOC      Document   Yes
*DSTMF    Distributed stream file   No (2)
*DTAARA   Data area   Yes
*DTADCT   Data dictionary   No
*DTAQ     Data queue   Yes
*EDTD     Edit description   Yes
*EXITRG   Exit registration   Yes
*FCT      Forms control table   Yes
*FILE     File   Yes (3, 11)
*FLR      Folder   Yes
*FNTRSC   Font resource   Yes
*FNTTBL   Font mapping table   No (9)
*FORMDF   Form definition   Yes
*FTR      Filter   Yes
*GSS      Graphics symbol set   Yes
*IGCDCT   Double-byte character set conversion dictionary   No (9)
*IGCSRT   Double-byte character set sort table   No (9)
*IGCTBL   Double-byte character set font table   No (9)
*IPXD     Internetwork packet exchange description   Yes
*JOBD     Job description   Yes
*JOBQ     Job queue   Yes (4)
*JOBSCD   Job schedule   Yes
*JRN      Journal   No (7)
*JRNRCV   Journal receiver   No (7)
*LIB      Library   Yes (4)
*LIND     Line description   Yes (1)
*LOCALE   Locale space   Yes
*M36      AS/400 Advanced 36 machine   No (8)
*M36CFG   AS/400 Advanced 36 machine configuration   No (8)
*MEDDFN   Media definition   Yes
*MENU     Menu   Yes
*MGTCOL   Management collection   Yes
*MODD     Mode description   Yes
*MODULE   Module   Yes
*MSGF     Message file   Yes
*MSGQ     Message queue   Yes (4)
*NODGRP   Node group   No (9)
*NODL     Node list   Yes
*NTBD     NetBIOS description   Yes
*NWID     Network interface description   Yes (1)
*NWSD     Network server description   Yes
*OOPOOL   Persistent pool (for OO objects)   No
*OUTQ     Output queue   Yes (4, 5)
*OVL      Overlay   Yes
*PAGDFN   Page definition   Yes
*PAGSEG   Page segment   Yes
*PDG      Print descriptor group   Yes
*PGM      Program   Yes (12)
*PNLGRP   Panel group   Yes
*PRDAVL   Product availability   No (6)
*PRDDFN   Product definition   No (6)
*PRDLOD   Product load   No (6)
*PSFCFG   Print Services Facility (PSF) configuration   Yes
*QMFORM   Query management form   Yes
*QMQRY    Query management query   Yes
*QRYDFN   Query definition   Yes
*RCT      Reference code translate table   No (9)
*S36      System/36 machine description   No (9)
*SBSD     Subsystem description   Yes
*SCHIDX   Search index   Yes
*SOCKET   Local socket   No
*SOMOBJ   System Object Model (SOM) object   No
*SPADCT   Spelling aid dictionary   Yes
*SPLF     Spool file   Yes
*SQLPKG   Structured query language package   Yes
*SQLUDT   User-defined SQL type   Yes
*SRVPGM   Service program   Yes
*SSND     Session description   Yes
*STMF     Bytestream file   Yes (2)
*SVRSTG   Server storage space   No (8)
*SYMLNK   Symbolic link   Yes (2)
*TBL      Table   Yes
*USRIDX   User index   Yes
*USRPRF   User profile   Yes
*USRQ     User queue   Yes (4)
*USRSPC   User space   Yes (10)
*VLDL     Validation list   Yes (13)
*WSCST    Workstation customizing object   Yes

Notes:
1. Replicating configuration objects to a previous version of IBM i may cause unpredictable results.
2. Objects in QDLS, QSYS.LIB, QFileSvr.400, QLANSrv, QOPT, QNetWare, QNTC, QSR, and QFPNWSSTG file systems are not currently supported via Data Group IFS Entries. Objects in QSYS.LIB and QDLS are supported via Data Group Object Entries and Data Group DLO Entries. Excludes stream files associated with a server storage space.
3. File attribute types include: DDMF, DSPF, DSPF36, DSPF38, ICFF, LF, LF38, MXDF38, PF-DTA, PF-SRC, PF38-DTA, PF38-SRC, PRTF, PRTF38, and SAVF.
4. Content is not replicated.
5. Spooled files are replicated separately from the output queue.
6. These objects are system specific. Duplicating them could cause unpredictable results on the target system.
7. Duplicating these objects can potentially cause problems on the target system.
8. These objects are not duplicated due to size and IBM recommendation.
9. These object types can be supported by MIMIX for replication through the system journal, but are not currently included. Contact Lakeview Technology Support if you need support for these object types.
10. Changes made through external interfaces such as APIs and commands are replicated. Direct update of the content through a pointer is not supported.
11. The SQL field type of DATALINK is not supported. Files containing these types of fields must be excluded from replication.
12. To replicate *PGM objects to an earlier release of IBM i you must be able to save them to that earlier release of IBM i.
13. Device description attributes include: APPC, ASC, ASP, BSC, CRP, DKT, DSPLCL, DSPRMT, DSPVRT, FNC, HOST, INTR, MLB, NET, OPT, PRTLAN, PRTLCL, PRTRMT, PRTVRT, RTL, SNPTUP, SNPTDN, SNUF, and TAP.

Appendix B

Copying configurations
This section provides information about how you can copy configuration data between systems.
- "Supported scenarios" on page 552 identifies the scenarios supported in version 5 of MIMIX.
- "Checklist: copy configuration" on page 553 directs you through the correct order of steps for copying a configuration and completing the configuration.
- "Copying configuration procedure" on page 558 documents how to use the Copy Configuration Data (CPYCFGDTA) command.

Supported scenarios
The Copy Configuration Data (CPYCFGDTA) command supports copying configuration data from one library to another library on the same system. After MIMIX is installed, you can use the CPYCFGDTA command. The supported scenarios are as follows:

Table 76. Supported scenarios for copying configuration

From                  To
MIMIX version 5       MIMIX version 5 (1)
MIMIX version 4 (2)   MIMIX version 5

Notes:
1. The installation you are copying to must be at the same or a higher level service pack.
2. V4R4 service pack SPC070.00.0 or higher must be installed.

Checklist: copy configuration


Use this checklist when you have installed MIMIX in a new library and you want to copy an existing configuration into the new library. To configure MIMIX with configuration information copied from one or more existing product libraries, do the following:

1. Review "Supported scenarios" on page 552.
2. Use the procedure "Copying configuration procedure" on page 558 to copy the configuration information from one or more existing libraries.
3. Verify that the system definitions created by the CPYCFGDTA command have the correct message queue, output queues, and job descriptions required. Be sure to check system definitions for the management system and all of the network systems.
4. Verify that the transfer definitions created have the correct three-part name and that the values specified for each transfer protocol are correct. For *TCP, verify the port number. For *SNA, verify that the SNA mode is what is defined for SNA configuration.
   Note: One of the transfer definitions should be named PRIMARY if you intend to create additional data group definitions or system definitions that will use the default value PRIMARY for the Primary transfer definition (PRITFRDFN) parameter.
5. Verify that the journal definitions created have the information you want for the journal receiver prefix name, auxiliary storage pool, and journal receiver change management and delete management. The default journal receiver prefix for the user journal is generated; for the system journal, the default journal receiver prefix is AUDRCV. If you want to use a prefix other than these defaults, you will need to modify the journal definition using topic "Changing a journal definition" on page 217.
6. If you change the names of any of the system, transfer, or journal definitions created by the copy configuration command, ensure that you also update that name in the other locations within the configuration shown in Table 77.
Table 77. Changing named definitions after copying a configuration

If you change this name: System definition, SYSDFN parameter
Also change the name in this location: Transfer definition, TFRDFN parameter; Data group definition, DGDFN parameter

If you change this name: Transfer definition, TFRDFN parameter
Also change the name in this location: System definition, PRITFRDFN and SECTFRDFN parameters; Data group definition, PRITFRDFN and SECTFRDFN parameters

If you change this name: Journal definition, JRNDFN parameter
Also change the name in this location: Data group definition, JRNDFN1 and JRNDFN2 parameters

7. Verify that the data group definitions created have the correct job descriptions, and that the values of parameters for job descriptions are what you want to use. MIMIX provides default job descriptions that are tailored for their specific tasks.
   Note: You may have multiple data groups created that you no longer need. Consider whether or not you can combine information from multiple data groups into one data group. For example, it may be simpler to have both database files and objects for an application be controlled by one data group.
8. Verify that the options which control data group file entries are set appropriately.
   a. For data group definitions, ensure that the values for file entry options (FEOPT) are what you want as defaults for the data group.
   b. Check the file entry options specified in each data group file entry. Any file entry options (FEOPT) specified in a data group file entry will override the default FEOPT values specified in the data group definition. You may need to modify individual data group file entries.
9. Check the data group entries for each data group. Ensure that all of the files and objects that you need to replicate are represented by entries for the data group. Be certain that you have checked the data group entries for your critical files and objects. Use the procedures in the Using MIMIX book to verify your configuration.
10. Check how the apply sessions are mapped for data group file entries. You may need to adjust the apply sessions.
11. Use Table 78 to create entries for any additional database files or objects that you need to add to the data group.
Table 78. How to configure data group entries for the preferred configuration

Class: Library-based objects
Do the following:
1. Create object entries using "Creating data group object entries" on page 267.
2. After creating object entries, load file entries for LF and PF (source and data) *FILE objects using "Loading file entries from a data group's object entries" on page 273.
   Note: If you cannot use MIMIX Dynamic Apply for logical files or PF data files, you should still create file entries for PF source files to ensure that legacy cooperative processing can be used.
3. After creating object entries, load object tracking entries for *DTAARA and *DTAQ objects that are journaled to a user journal. Use "Loading object tracking entries" on page 285.
Planning and requirements information: "Identifying library-based objects for replication" on page 100; "Identifying logical and physical files for replication" on page 105; "Identifying data areas and data queues for replication" on page 112.

Class: IFS objects
Do the following:
1. Create IFS entries using "Creating data group IFS entries" on page 282.
2. After creating IFS entries, load IFS tracking entries for IFS objects that are journaled to a user journal. Use "Loading IFS tracking entries" on page 284.
Planning and requirements information: "Identifying IFS objects for replication" on page 118.

Class: DLOs
Do the following: Create DLO entries using "Creating data group DLO entries" on page 287.
Planning and requirements information: "Identifying DLOs for replication" on page 124.

12. Use the #DGFE audit to confirm and automatically correct any problems found in file entries associated with data group object entries. Do the following:
   a. Type WRKAUD RULE(#DGFE) and press Enter.
   b. Next to the data group you want to confirm, type 9 (Run rule) and press Enter.
   c. The results are placed in an outfile. For additional information, see "Interpreting results for configuration data - #DGFE audit" on page 580.
13. If you anticipate a delay between configuring and starting the data group and the data group contains object information, you should set object auditing to ensure that any transactions that occur during the delay will be replicated. Use the procedure "Setting data group auditing values manually" on page 297.
14. Verify that system-level communications are configured correctly.
   a. If you are using SNA as a transfer protocol, verify that the MIMIX mode exists and that the communications entries are added to the MIMIXSBS subsystem.
   b. If you are using TCP as a transfer protocol, verify that the MIMIX TCP server is started on each system (on each "side" of the transfer definition). You can use the WRKACTJOB command for this. Look for a job under the MIMIXSBS subsystem with a function of LV-SERVER.
   c. Use the Verify Communications Link (VFYCMNLNK) command to ensure that a MIMIX installation on one system can communicate with a MIMIX installation on another system. Refer to topic "Verifying the communications link for a data group" on page 195.
15. Ensure that there are no users on the system that will be the source for replication for the rest of this procedure. Do not allow users onto the source system until you have successfully completed the last step of this procedure.
16. Start journaling using the following procedures as needed for your configuration:
   - For user journal replication, use "Journaling for physical files" on page 326 to start journaling on both source and target systems.
   - For IFS objects configured for advanced journaling, use "Journaling for IFS objects" on page 330.

   - For data areas or data queues configured for advanced journaling, use "Journaling for data areas and data queues" on page 334.

17. Synchronize the database files and objects on the systems between which replication occurs. Topic "Performing the initial synchronization" on page 483 includes instructions for how to establish a synchronization point and identifies the options available for synchronizing.
18. Start the system managers using topic "Starting the system and journal managers" on page 296.
19. Clear pending entries when you start the data groups. Use topic "Starting Selected Data Group Processes" in the Using MIMIX book.


Copying configuration procedure


This procedure addresses only some of the tasks needed to complete your configuration. Use this procedure only when directed from "Checklist: copy configuration" on page 553.

Note: By default, the CPYCFGDTA command replaces all MIMIX configuration data in the current product library with the information from the specified library. Any configuration created in the product library will be replaced with data from the specified library. This may not be desirable.

To copy existing configuration data to the new MIMIX product, do the following:
1. The products in the installation library that will receive the copied configuration data must be shut down for the duration of this procedure. Use topic "Choices when ending replication" in the Using MIMIX book to end activity for the appropriate products.
2. Sign on to the system with the security officer (QSECOFR) user profile or with a user profile that has security officer class and all special authorities.
3. Access the MIMIX Basic Main Menu in the product library that will receive the copied configuration data. From the command line, type the command CPYCFGDTA and press F4 (Prompt).
4. At the Copy from library prompt, specify the name of the library from which you want to copy data.
5. To start copying configuration data, press Enter.
6. When the copy is complete, return to topic "Checklist: copy configuration" on page 553 to verify your configuration.
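For reference, a hedged command-line equivalent of steps 3 through 5 follows. The FROMLIB keyword is an assumption based on the Copy from library prompt, and OLDMIMIX is a placeholder library name; prompting the command (F4) as described above avoids guessing the keyword.

/* Sketch: copy configuration data from an existing product library */
/* into the current product library.                                */
CPYCFGDTA FROMLIB(OLDMIMIX)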


Appendix C

Configuring Intra communications


The MIMIX set of products supports a unique configuration called Intra. Intra is a special configuration that allows the MIMIX products to function fully within a single-system environment. Intra support replicates database and object changes to other libraries on the same system by using system facilities that allow communications to be routed back to the same system. This provides an excellent way to have a test environment on a single machine that is similar to a multiple-system configuration. The Intra environment can also be used to perform backups while the system remains active.

In an Intra configuration, the product is installed into two libraries on the same system and configured in a special way. An Intra configuration uses these libraries to replicate data to additional disk storage on the same system. The second library in effect becomes a "backup" library. By using an Intra configuration you can reduce or eliminate your downtime for routine operations such as performing daily and weekly backups. When replicating changes to another library, you can suspend the application of the replicated changes. This enables you to concurrently back up the copied library to tape while your application remains active. When the backup completes, you can resume operations that apply replicated changes to the "backup" library.

An Intra configuration enables you to have a "live" copy of data or objects that can be used to offload queries and report generation. You can also use an Intra configuration as a test environment prior to installing MIMIX on another system or connecting your applications to another System i5. Because both libraries exist on the same system, an Intra configuration does not provide protection from disaster.

Database replication within an Intra configuration requires that the source and target files either have different names or reside in different libraries. Similarly, objects cannot be replicated to the same named object in the same named library, folder, or directory.

Note: Newly created data groups use remote journaling as the default configuration. Remote journaling is not compatible with Intra communications, so you must use a source send configuration when configuring for Intra communications.

This section includes the following procedures:
- "Manually configuring Intra using SNA" on page 559
- "Manually configuring Intra using TCP" on page 561

Manually configuring Intra using SNA


In an Intra environment, MIMIX communicates between two product libraries on the same system instead of between a local system and a remote system. If you manually configure the communications necessary for Intra, consider the default product library (MIMIX) to be the local system and the second product library (in this example, MIMIXI) to be the remote system. If you need to manually configure SNA communications for an Intra environment, do the following:

1. Create the system definitions for the product libraries used for Intra as follows:
   a. For the MIMIX library (local system), use the local location name in the following command:
      CRTSYSDFN SYSDFN(local-location-name) TYPE(*MGT) TEXT('Manual creation')
   b. For the MIMIXI library (remote system), use the following command:
      CRTSYSDFN SYSDFN(INTRA) TYPE(*NET) TEXT('Manual creation')
2. Create the transfer definition between the two product libraries with the following command:
   CRTTFRDFN TFRDFN(PRIMARY INTRA local-location-name) PROTOCOL(*SNA) LOCNAME1(INTRA1) LOCNAME2(INTRA2) NETID1(*LOC) TEXT('Manual creation')
3. Create the MIMIX mode description using the following command:
   CRTMODD MODD(MIMIX) MAXSSN(100) MAXCNV(100) LCLCTLSSN(12) TEXT('MIMIX INTRA MODE DESCRIPTION Manual creation.')
4. Create a controller description for MIMIX Intra using the following command:
   CRTCTLAPPC CTLD(MIMIXINTRA) LINKTYPE(*LOCAL) TEXT('MIMIX INTRA Manual creation.')
5. Create a local device description for MIMIX using the following command:
   CRTDEVAPPC DEVD(MIMIX) RMTLOCNAME(INTRA1) LCLLOCNAME(INTRA2) CTL(MIMIXINTRA) MODE(MIMIX) APPN(*NO) SECURELOC(*YES) TEXT('MIMIX INTRA Manual creation.')
6. Create a remote device description for MIMIX using the following command:
   CRTDEVAPPC DEVD(MIMIXI) RMTLOCNAME(INTRA2) LCLLOCNAME(INTRA1) CTL(MIMIXINTRA) MODE(MIMIX) APPN(*NO) SECURELOC(*YES) TEXT('MIMIX REMOTE INTRA SUPPORT.')
7. Add a communications entry to the MIMIXSBS subsystem for the local location using the following command:
   ADDCMNE SBSD(MIMIXQGPL/MIMIXSBS) RMTLOCNAME(INTRA2) JOBD(MIMIXQGPL/MIMIXCMN) DFTUSR(MIMIXOWN) MODE(MIMIX)
8. Add a communications entry to the MIMIXSBS subsystem for the remote location using the following command:
   ADDCMNE SBSD(MIMIXQGPL/MIMIXSBS) RMTLOCNAME(INTRA1) JOBD(MIMIXQGPL/MIMIXCMN) DFTUSR(MIMIXOWN) MODE(MIMIX)
9. Vary on the controller, local device, and remote device using the following commands:
   VRYCFG CFGOBJ(MIMIXINTRA) CFGTYPE(*CTL) STATUS(*ON)
   VRYCFG CFGOBJ(MIMIX) CFGTYPE(*DEV) STATUS(*ON)
   VRYCFG CFGOBJ(MIMIXI) CFGTYPE(*DEV) STATUS(*ON)
10. Start the MIMIX system manager in both product libraries using the following commands:
   MIMIX/STRMMXMGR SYSDFN(*INTRA) MGR(*ALL)
   MIMIX/STRMMXMGR SYSDFN(*LOCAL) MGR(*JRN)

Note: You still need to configure journal definitions and data group definitions.

Manually configuring Intra using TCP


In an Intra environment, MIMIX communicates between two product libraries on the same system instead of between a local system and a remote system. The libraries for the MIMIX installations need to have the same name with the Intra library having an 'I' appended to the end of the library name. In this example, the MIMIX library is the management system and the MIMIXI library is the network system. If you manually configure the communications necessary for Intra, consider the MIMIX library as the local system and the MIMIXI library as the remote system. You may already have a management system defined and need to add an Intra network system. All the configuration should be done in the MIMIX library on the management system. Note: If you have multiple network systems, you need to configure your transfer definitions to have the same name with system1 and system2 being different. For more information, see Multiple network system considerations on page 172. To add an entry in the host name table, use the command Configure TCP/IP (CFGTCP) command to access the Configure TCP/IP menu. Select option 10 (Work with TCP/IP Host Table Entries) from the menu. From the Work with TCP/IP Host Table display, type a 2 (Change) next to the LOOPBACK entry and add 'INTRA' to that entry. For this example, the host name of the management system is Source and the host name for the network or target system is Intra. 1. Create the system definitions for the product libraries used for Intra as follows: a. For the MIMIX library (local system) enter the following command: MIMIX/CRTSYSDFN SYSDFN(source) TYPE(*MGT) TEXT(management system) Note: You may have already configured this system. b. For the MIMIXI library (remote system), use the following command: MIMIX/CRTSYSDFN SYSDFN(INTRA) TYPE(*NET) TEXT(network system)


2. Create the transfer definition between the two product libraries with the following command. Note that the values for PORT1 and PORT2 must be unique.
   MIMIX/CRTTFRDFN TFRDFN(PRIMARY SOURCE INTRA) HOST1(SOURCE) HOST2(INTRA) PORT1(55501) PORT2(55502)
3. Create auto-start jobs in the MIMIX subsystem for the port associated with each library so that the MIMIX TCP server is started automatically when the subsystem is started.
   a. Within the MIMIX library, use the commands:
      CRTDUPOBJ OBJ(MIMIXCMN) FROMLIB(MIMIXQGPL) OBJTYPE(*JOBD) TOLIB(MIMIX) NEWOBJ(PORT55501)
      CHGJOBD JOBD(MIMIX/PORT55501) RQSDTA('MIMIX/STRSVR HOST(SOURCE) PORT(55501) JOBD(MIMIX/PORT55501)')
      ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(PORT55501) JOBD(MIMIX/PORT55501)
   b. Within the MIMIXI library, use the commands:
      CRTDUPOBJ OBJ(MIMIXCMN) FROMLIB(MIMIXQGPL) OBJTYPE(*JOBD) TOLIB(MIMIXI) NEWOBJ(PORT55502)
      CHGJOBD JOBD(MIMIXI/PORT55502) RQSDTA('MIMIXI/STRSVR HOST(INTRA) PORT(55502) JOBD(MIMIXI/PORT55502)')
      ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(PORT55502) JOBD(MIMIXI/PORT55502)
4. Start the server for the management system (Source) by entering the following command:
   MIMIX/STRSVR HOST(SOURCE) PORT(55501) JOBD(MIMIX/PORT55501)
5. Start the server for the network system (Intra) by entering the following command:
   MIMIXI/STRSVR HOST(INTRA) PORT(55502) JOBD(MIMIXI/PORT55502)
6. Start the system managers from the management system by entering the following command:
   MIMIX/STRMMXMGR SYSDFN(INTRA) MGR(*ALL) RESET(*YES)
   Start the remaining managers normally.

Note: You will still need to configure journal definitions and data group definitions on the management system. You may want to add service table entries for ports 55501 and 55502 to ensure that other applications will not try to use these ports; a sketch follows this procedure.
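The host table change described earlier and the optional service table entries can also be made directly from a command line. This is a minimal sketch using the standard IBM i CHGTCPHTE and ADDSRVTBLE commands; it assumes the loopback entry is at 127.0.0.1 with LOOPBACK as its only existing name, and the service names shown are placeholders of your choosing.

   CHGTCPHTE INTNETADR('127.0.0.1') HOSTNAME((LOOPBACK) (INTRA))  /* add INTRA to the loopback entry */
   ADDSRVTBLE SERVICE('mimixin1') PORT(55501) PROTOCOL('tcp')     /* reserve port 55501              */
   ADDSRVTBLE SERVICE('mimixin2') PORT(55502) PROTOCOL('tcp')     /* reserve port 55502              */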

Appendix D

MIMIX support for independent ASPs
MIMIX has always supported replication of library-based objects and IFS objects to and from the system auxiliary storage pool (ASP 1) and basic storage pools (ASPs 2-32). Now, MIMIX also supports replication of library-based objects and IFS objects, including journaled IFS objects, data areas, and data queues, located in independent ASPs1 (33-255). The system ASP and basic ASPs are collectively known as SYSBAS. Figure 32 shows that MIMIX supports replication to and from SYSBAS and to and from independent ASPs. Figure 33 shows that MIMIX also supports replication from SYSBAS to an independent ASP and from an independent ASP to SYSBAS.
Figure 32. MIMIX supports replication to and from an independent ASP as well as standard replication to and from SYSBAS (the system ASP and basic ASPs).

Figure 33. MIMIX also supports replication between SYSBAS and an independent ASP.

1. An independent ASP is an iSeries construct introduced by IBM in V5R1 and extended in V5R2 of i5/OS.


Restrictions: There are several permanent and temporary restrictions that pertain to replication when an independent ASP is included in the MIMIX configuration. See Requirements for replicating from independent ASPs on page 567 and Limitations and restrictions for independent ASP support on page 567.

Benefits of independent ASPs


The key characteristic of an independent ASP is its ability to function independently from the rest of the storage on a server. Independent ASPs can also be made available and unavailable at the time of your choosing. The benefits of using independent ASPs in your environment can be significant. You can isolate infrequently used data that does not always need to be available when the system is up and running. If you have a lot of data that is unnecessary for day-to-day business operations, for example, you can isolate it and leave it offline until it is needed. This allows you to shorten processing time for other tasks, such as IPLs, reclaim storage operations, and system startup.

Independent ASPs also allow you to do the following:
• Consolidate applications and data from multiple servers into a single System i5, allowing for simpler system management and application maintenance.
• Decrease downtime, enabling data on your system to be made available or unavailable without an IPL.
• Add storage as necessary, without having to make the system unavailable.
• Avoid the need to recover all data in the event of a system failure, since the data is isolated.
• Streamline naming conventions, since multiple instances of data with the same object and library names can coexist on a single System i5 in separate independent ASPs.
• Protect data that is unique to a specific environment by isolating data associated with specific applications from other groups of users.

Using MIMIX provides a robust solution for high availability and disaster recovery for data stored in independent ASPs.

Auxiliary storage pool concepts at a glance


An independent ASP is actually a part of the larger construct of an auxiliary storage pool (ASP). Each ASP on your system is a group of disk units that can be used to organize data within single-level storage to limit the impact of storage device failures and to reduce recovery time. The system spreads data across the disk units within an ASP. Figure 34 shows the types and subtypes of ASPs. The system ASP (ASP 1) is defined by the system and consists of disk unit 1 and any other configured storage not assigned to a basic or independent ASP. The system ASP contains the system objects for the operating system and any user objects not defined to a basic or independent ASP.


User ASPs are additional ASPs defined by the user. A user ASP can either be a basic ASP or an independent ASP.

One type of user ASP is the basic ASP. Data that resides in a basic ASP is always accessible whenever the server is running. Basic ASPs are identified as ASPs 2 through 32. Attributes, such as those for spooled files, authorization, and ownership of an object, stored in a basic ASP reside in the system ASP. When storage for a basic ASP is filled, the data overflows into the system ASP. Collectively, the system ASP and the basic ASPs are called SYSBAS.

Another type of user ASP is the independent ASP. Identified by device name and numbered 33 through 255, an independent ASP can be made available or unavailable to the server without restarting the system. Unlike basic ASPs, data in an independent ASP cannot overflow into the system ASP. Independent ASPs are configured using iSeries Navigator.
Figure 34. Types of auxiliary storage pools.

Subtypes of independent ASPs consist of primary, secondary, and user-defined file system (UDFS) independent ASPs1. Subtypes can be grouped together to function as a single entity known as an ASP group. An ASP group consists of a primary independent ASP and zero or more secondary independent ASPs. For example, if you make one independent ASP unavailable, the others in the ASP group are made unavailable at the same time. A primary independent ASP defines a collection of directories and libraries and may have associated secondary independent ASPs. A primary independent ASP also defines a database for itself and other independent ASPs belonging to its ASP group. The primary independent ASP name is always the name of the ASP group in which it resides. A secondary independent ASP defines a collection of directories and libraries and must be associated with a primary independent ASP. One common use for a secondary independent ASP is to store the journal receivers for the objects being journaled in the primary independent ASP.
1. MIMIX does not support UDFS independent ASPs. UDFS independent ASPs contain only user-defined file systems and cannot be a member of an ASP group unless they are converted to a primary or secondary independent ASP.


Before an independent ASP is made available (varied on), all primary and secondary independent ASPs in the ASP group undergo a process similar to a server restart. While this processing occurs, the ASP group is in an active state and recovery steps are performed. The primary independent ASP is synchronized with any secondary independent ASPs in the ASP group, and journaled objects are synchronized with their associated journal. While being varied on, several server jobs are started in the QSYSWRK subsystem to support the independent ASP. To ensure that their names remain unique on the server, server jobs that service the independent ASP are given their own job name when the independent ASP is made available. Once the independent ASP is made available, it is ready to use. Completion message CPC2605 (vary on completed for device name) is sent to the history log.
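An independent ASP device is made available or unavailable with the standard IBM i Vary Configuration (VRYCFG) command. This is a minimal sketch; the device name WILLOW is a placeholder for your ASP device.

   VRYCFG CFGOBJ(WILLOW) CFGTYPE(*DEV) STATUS(*ON)   /* make the ASP group available; recovery steps run now */
   VRYCFG CFGOBJ(WILLOW) CFGTYPE(*DEV) STATUS(*OFF)  /* make the ASP group unavailable again                 */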


Requirements for replicating from independent ASPs


The following requirements must be met before MIMIX can support your independent ASP environment:
• License Program 5722-SS1 option 12 (Host Server) must be installed in order for MIMIX to properly replicate objects in an independent ASP on the source and target systems (a hedged verification sketch follows this list).
• Any PTFs for i5/OS that are identified as being required need to be installed on both the source and target systems. Log in to Support Central and check the Technical Documents page for a list of i5/OS PTFs that may be required.
• MIMIX product libraries, the LAKEVIEW library, and the MIMIXQGPL library must be installed into SYSBAS.
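To verify the Host Server requirement from a command line, you can use the standard IBM i Check Product Option (CHKPRDOPT) command; this is a minimal sketch and assumes the 5722-SS1 product ID named in the requirement above.

   CHKPRDOPT PRDID(5722SS1) OPTION(12)   /* sends a diagnostic message if option 12 (Host Server) is not correctly installed */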

Limitations and restrictions for independent ASP support


Limitations: Before using independent ASP support, be aware that independent ASPs do not protect against disk failure. If the disks in the independent ASP are damaged and the data is unrecoverable, data is available only up to the last backup copy. A replication solution such as MIMIX is still required for high availability and disaster recovery. In addition, be aware of the following limitations:
• Although you can use the same library name between independent ASPs, an independent ASP cannot share a library name with a library in the system ASP or basic ASPs (SYSBAS). SYSBAS is a component of every name space, so the presence of a library name in SYSBAS precludes its use in any independent ASP. This will affect how you configure objects for replication with MIMIX, especially for IFS objects. See Configuring library-based objects when using independent ASPs on page 569.
• Unlike basic ASPs, when an independent ASP fills, no new objects can be created in the device. Also, updates to existing objects in the independent ASP, such as adding records to a file, may not be successful. If an independent ASP attached to the target system fills, your high-availability and disaster recovery solutions are compromised.
• IBM restricts the object types that can be stored in an independent ASP. For example, DLOs cannot reside in an independent ASP.

Restrictions in MIMIX support for independent ASPs include the following:
• MIMIX supports the replication of objects in primary and secondary independent ASPs only. Replication of IFS objects that reside in user-defined file system (UDFS) independent ASPs is not supported.
• You should not place libraries in independent ASPs within the system portion of a library list. MIMIX commands automatically call the IBM command SETASPGRP, which can result in significant changes to the library list for the associated user job. See Avoiding unexpected changes to the library list on page 570.


• MIMIX product libraries, the LAKEVIEW library, and the MIMIXQGPL library must be installed into SYSBAS. These libraries cannot exist in an independent ASP.
• Any *MSGQ libraries, *JOBD libraries, and *OUTFILE libraries specified on MIMIX commands must reside in SYSBAS.
• For successful replication, ASP devices in ASP groups that are configured in data group definitions must be made available (varied on). Objects in independent ASPs attached to the source system cannot be journaled if the device is not available. Objects cannot be applied to an independent ASP on the target system if the device is not available.
• Planned switchovers of data groups that include an ASP group must take place while the ASP devices on both the source and target systems are available. If the ASP device for the data group on either the source or target system is unavailable at the time the planned switchover is attempted, the switchover will not complete.
• To support an unplanned switch (failover), the independent ASP device on the backup system (which will become the temporary production system) must be available in order for the failover to complete successfully.
• You must run the Set ASP Group (SETASPGRP) command on the local system before running the Send Network Object (SNDNETOBJ) command if the object you are attempting to send to a remote system is located in an independent ASP (a sketch follows this list).
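This is a minimal sketch of the SETASPGRP requirement in the last item, assuming an ASP group named WILLOW. SETASPGRP is the standard IBM i command; the SNDNETOBJ parameters are not shown because they depend on your configuration.

   SETASPGRP ASPGRP(WILLOW)   /* set the job's ASP group so objects in WILLOW can be resolved */
   /* ... then run MIMIX/SNDNETOBJ for the object located in the independent ASP ... */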

Also be aware of the following temporary restrictions:
• MIMIX does not perform validity checking to determine if the ASP group specified in the data group definition actually exists on the systems. This may cause error conditions when running commands.
• Any monitors configured for use with MIMIX must specify the ASP group. Monitors of type *JRN or *MSGQ that watch for events in an independent ASP must specify the name of the ASP group where the journal or message queue exists. This is done with the ASPGRP parameter of the CRTMONOBJ command (a hedged sketch follows this list).
• Information regarding independent ASPs is not provided on the following displays: Display Data Group File Entry (DSPDGFE), Display Data Group Data Area Entry (DSPDGDAE), Display Data Group Object Entry (DSPDGOBJE), and Display Data Group Activity Entry (DSPDGACTE). To determine the independent ASP in which the object referenced in these displays resides, see the data group definition.
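A minimal sketch of specifying the ASP group on a monitor, assuming a journal monitor named MYMON watching for events in ASP group WILLOW. Only the ASPGRP parameter is confirmed by this document; treat the other parameter names as assumptions to be checked against your CRTMONOBJ prompt.

   CRTMONOBJ MONITOR(MYMON) TYPE(*JRN) ASPGRP(WILLOW)   /* MONITOR and TYPE are assumed parameter names; ASPGRP identifies where the journal exists */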

Configuration planning tips for independent ASPs


A job can only reference one independent ASP at a time. Storing applications and programs in SYSBAS ensures that they are accessible by any job. Data stored in an independent ASP is not accessible for replication when the independent ASP is varied off.

For database replication and replication of objects through Advanced Journaling support, due to the requirement for one user journal per data group, it is not possible for a single data group to replicate both SYSBAS data and ASP group data.


For object replication of library-based objects through the system journal, you should configure related objects in SYSBAS and an ASP group to be replicated by the same data group. Objects in SYSBAS and an ASP group that are not related should be separated into different data groups. This precaution ensures that the data group will start and that objects residing in SYSBAS will be replicated when the independent ASP is not available.

Note: To avoid replicating an object by more than one data group, carefully plan what generic library names you use when configuring data group object entries in an environment that includes independent ASPs. Make every attempt to avoid replicating both SYSBAS data and independent ASP data for objects within the same data group. See the example in Configuring library-based objects when using independent ASPs on page 569.

Journal and journal receiver considerations for independent ASPs


For database replication and replication of objects through Advanced Journaling support, the data to be replicated and the journal used for its replication must exist in the same ASP. When you configure replication for an independent ASP, consider what data you store there and the location of the journal and journal receivers needed to replicate the data. With independent ASPs, you have the option of placing journal receivers in an associated secondary independent ASP. When you create an independent ASP, an ASP group is automatically created that uses the same name you gave the primary independent ASP. A minimal sketch of this receiver placement follows.
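The following is a minimal sketch using the standard IBM i journal commands, assuming library APPLIB exists in the primary independent ASP and library RCVLIB exists in an associated secondary independent ASP; both library names are placeholders. Because a library's ASP determines where its objects reside, creating the receiver in RCVLIB places it in the secondary independent ASP.

   CRTJRNRCV JRNRCV(RCVLIB/APPRCV0001)                    /* receiver created in the secondary independent ASP      */
   CRTJRN JRN(APPLIB/APPJRN) JRNRCV(RCVLIB/APPRCV0001)    /* journal in the primary ASP, attached to that receiver  */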

Configuring IFS objects when using independent ASPs


Replication of IFS objects in an independent ASP is supported through default replication processes and through MIMIX Advanced Journaling support. However, there are differences in how to configure for these different environments. For IFS replication by default object replication processes, you do not need to identify an ASP group in a data group definition because an IFS object's path includes the independent ASP device name. However, for IFS replication through Advanced Journaling support, you must specify the ASP group name in the data group definition so that MIMIX can locate the appropriate user journal. If you are using Advanced Journaling support and want to limit a data group to only replicate IFS objects from SYSBAS, specify *NONE for the ASP group parameters in the data group definition.

Configuring library-based objects when using independent ASPs


Use care when creating generic data group object entries; otherwise you can create situations where the same object is replicated by multiple data groups. This applies for replication between independent ASPs as well as replication between an independent ASP and SYSBAS.


For example, data group APP1 defines replication between ASP groups named WILLOW on each system. Similarly, data group APP2 defines replication between ASP groups named OAK on each system. Both data groups have a generic data group object entry that includes object XYZ from library names beginning with LIB*. If object LIBASP/XYZ exists in both independent ASPs and matches the generic data group object entry defined in each data group, both data groups replicate the corresponding object. This is considered normal behavior for replication between independent ASPs, as shown in Figure 35. However, in this example, if SYSBAS contains an object that matches the generic data group object entry defined for each data group, the same object is replicated by both data groups. Figure 35 shows that object LIBBAS/XYZ meets the criteria for replication by both data groups, which is not desirable.
Figure 35. Object XYZ in library LIBBAS is replicated by both data groups APP1 and APP2 because the data groups contain the same generic data group object entry. As a result, this presents a problem if you need to perform a switch.

Avoiding unexpected changes to the library list


It is recommended that the system portion of your library list does not include any libraries that exist in an ASP group. Whenever you run a MIMIX command, MIMIX automatically determines whether the job requires a call to the IBM command Set ASP Group (SETASPGRP). The SETASPGRP command changes the current job's ASP group environment and enables MIMIX to access objects that reside in independent ASP libraries. MIMIX resets the job's ASP group to its initial value as needed before processing is completed.

The SETASPGRP command may modify the library list of the current job. If the library list contains libraries for ASP groups other than those used by the ASP group for which the command was called, the SETASPGRP command removes the extra libraries from the library list. This can affect the system and user portions of the library list as well as the current library in the library list. When a MIMIX command runs the SETASPGRP command during processing, MIMIX resets the user portion of the library list and the current library in the library list to their initial values. The system portion of the library list is not restored to its initial value. Figure 36, Figure 37, and Figure 38 show how the system portion of the library list is affected on the Display Library List (DSPLIBL) display when the SETASPGRP command is run.
Figure 36. Before a MIMIX command runs. The library list contains three independent ASP libraries, including a library in independent ASP WILLOW in the system portion of the library list.
[Display Library List screen for system CHICAGO: libraries LIBSYS1, LIBSYS2, and LIBSYS3 in the system portion, current library LIBCUR1, and user libraries LIBUSR1 and LIBUSR2; the ASP device column shows WILLOW and OAK for the independent ASP libraries.]

Figure 37. During the running of a MIMIX command. The independent ASP libraries are removed from the library list.
[Display Library List screen for system CHICAGO: the ASP device column is empty; the independent ASP libraries no longer appear in the list.]

Figure 38. After the MIMIX command runs. The library in independent ASP WILLOW in the system portion of the library list is removed. The libraries in independent ASP OAK in the user portion of the library list and the current library are restored.
[Display Library List screen for system CHICAGO: the WILLOW library is gone from the system portion; the OAK libraries in the user portion and the current library appear again.]

The SETASPGRP command can return escape message LVE3786 if License Program 5722-SS1 option 12 (Host Server) is not installed.

Detecting independent ASP overflow conditions


You can take advantage of the independent ASP threshold monitor to detect independent ASP overflow conditions that put your high availability solution at risk due to insufficient storage. The independent ASP threshold monitor, MMIASPTHLD, monitors the QSYSOPR message queue in library QSYS for messages indicating that the amount of storage used by an independent ASP exceeds a defined threshold. When this condition is detected, the monitor sends a warning notification that the threshold is exceeded. The status of warning notifications is incorporated into overall MIMIX status. Notifications can be displayed from MIMIX Availability Manager or with the Work with Notifications (WRKNFY) command.

Each ASP defaults to 90% as the threshold value. To change the threshold value, you must use IBM's iSeries Navigator.

The independent ASP threshold monitor is shipped with MIMIX. The monitor is not automatically started after MIMIX is installed. If you want to use this monitor, you must start it. The monitor is controlled by the master monitor.


Appendix E

Interpreting audit results

Audits use commands that compare and synchronize data. The results of the audits are placed in output files associated with the commands. The following topics provide supporting information for interpreting data returned in the output files:
• Interpreting audit results - MIMIX Availability Manager on page 575 describes how to check the status of an audit and resolve any problems that occur from within MIMIX Availability Manager.
• Interpreting audit results - 5250 emulator on page 576 describes how to check the status of an audit and resolve any problems that occur from a 5250 emulator.
• Checking the job log of an audit on page 578 describes how to use an audit's job log to determine why an audit failed.
• Interpreting results for configuration data - #DGFE audit on page 580 describes the #DGFE audit, which verifies the configuration data defined to your configuration using the Check Data Group File Entries (CHKDGFE) command.
• Interpreting results of audits for record counts and file data on page 582 describes the audits and commands that compare file data or record counts.
• Interpreting results of audits that compare attributes on page 586 describes the Compare Attributes commands and their results.


Interpreting audit results - MIMIX Availability Manager


When viewing results of audits, the starting point is the Audit Summary window. You may also need to view the output file or the job log, which are only available from the system where the audits ran. In most cases, this is the management system. Do the following:
1. Ensure that you have selected the management system for the installation you want from the navigation bar. If you are not certain which system is the management system, you can select Services to check.
2. From the management system, select Audit Summary from the navigation bar.
3. In the Audit Summary window, check the State and Results columns for the values shown in Table 79. Audits with potential problems are at the top of the list.
4. For each audit, flyover text for the status icon identifies the appropriate action to take. Table 79 provides additional information.
Table 79. Addressing audit problems - MIMIX Availability Manager

State: Rule Failed
Results: (blank)
Action: Check the job log or run the rule for the audit again. To run the audit, select Run from the action list and click. To see the job log, refer to Checking the job log of an audit on page 578 for more information.

State: Rule Failed
Results: User journal replication is not active
Action: Confirm that data group processes are active and run the rule for the audit again.
1. Check the data group status. Select Data Groups from the navigation bar. Then select the data group from the list.
2. In the Summary area, confirm that replication processes are active. If necessary, select the Start action and click.
3. When processes are active, select Summary from the navigation area.
4. Locate the audit in question. Select the Run action and click.

State: Completed Successfully
Results: Differences detected, recovery disabled
Action: The detected differences must be manually resolved. Do the following:
1. Select Output File from the action list and click.
2. The detected differences are displayed. Look for items with a Difference Indicator value of *NC or *NE. You can display details about the error or attempt the possible recovery action available.
3. Select the action you want and click.
To have MIMIX recover differences on subsequent audits, change the value of the automatic audit recovery policy.

State: Completed Successfully
Results: Differences detected, some objects not recovered
Action: The remaining detected differences must be manually resolved.
Note: For audits using the #MBRRCDCNT rule, automatic recovery is not possible. Other audits, such as #FILDTA, may correct the detected differences.
Do the following:
1. Select Output File from the action list and click.
2. The detected differences are displayed. Look for items with a Difference Indicator value of *NE, *NC, or *RCYFAILED. If automatic audit recovery is disabled, you may see other values as well. For the #MBRRCDCNT results, also look for values of: *HLD, *LCK, *NF1, *NF2, *SJ, *UE, and *UN. You can display details about the error or attempt the possible recovery action available.
3. Select the action you want and click.

For more information about the values displayed in the audit results, see Interpreting results for configuration data - #DGFE audit on page 580, Interpreting results of audits for record counts and file data on page 582, and Interpreting results of audits that compare attributes on page 586.

Interpreting audit results - 5250 emulator


When viewing results of audits, the starting point is the Summary view of the Work with Audits display. You may also need to view the output file or the job log, which are only available from the system where the audits ran. In most cases, this is the management system. Do the following from the management system:
1. Do one of the following to access the Work with Audits display:
   • From a command line, enter WRKAUD VIEW(*AUDSTS)
   • From the MIMIX Availability Status display, use option 5 (Display details) next to Audits and notifications. Then, if necessary, use F10 to access the appropriate view.
2. Check the Audit Status column for values shown in Table 80. Audits with potential problems are at the top of the list. Take the action indicated in Table 80.
Table 80. Addressing audit problems - 5250 emulator

Compliance Status: *FAILED
Action: The audit failed for these possible reasons.
Reason 1: The rule called by the audit failed or ended abnormally. To run the rule for the audit again, select option 9 (Run rule). To check the job log, see Checking the job log of an audit on page 578.
Reason 2: The #FILDTA audit or the #MBRRCDCNT audit required replication processes that were not active.
1. From the MIMIX Availability Status display, check whether there are any problems indicated for replication processes.
2. If there are no problems with replication processes, use F20 to access a command line and type WRKAUD. Then skip to Step 6.
3. If there are replication problems, use option 9 (Troubleshoot) next to the Replication activity.
4. On the Work with Data Groups display, if processes for the data group show a red I, L, or P in the Source and Target columns, use option 9 (Start DG).
5. When processes are active, use F7 to view audits.
6. From the Work with Audits display, use option 9 (Run rule) to run the audit.

Compliance Status: *DIFNORCY
Action: The comparison performed by the audit detected differences. No recovery actions were attempted because automatic audit recovery is disabled.
1. Use option 7 to view notifications for the audit.
2. A subsetted list of the notifications for the audit appears. Use option 8 to view the results in the output file.
3. Check the Difference Indicator column for values of *NC and *NE. You will need to manually resolve these differences.
To have MIMIX recover differences on subsequent audits, change the value of the automatic audit recovery policy.

Compliance Status: *NOTRCVD
Action: The comparison performed by the audit detected differences. Some of the differences were not automatically recovered. The remaining detected differences must be manually resolved.
Note: For audits using the #MBRRCDCNT rule, automatic recovery is not possible. Other audits, such as #FILDTA, may correct the detected differences.
Do the following:
1. Use option 7 to view notifications for the audit.
2. A subsetted list of the notifications for the audit appears. Use option 8 to view the results in the output file.
3. Check the Difference Indicator column for values of *NC, *NE, and *RCYFAILED. If automatic audit recovery is disabled, you may see other values as well. For the #MBRRCDCNT results, also look for values of: *HLD, *LCK, *NF1, *NF2, *SJ, *UE, and *UN. You will need to manually resolve these differences.


Checking the job log of an audit


An audit's job log can provide more information about why an audit failed. The job log may be available from the system on which the notification was sent. Typically, this is the management system. From MIMIX Availability Manager, to check the job log for an audit, do the following:
1. For the audit in question, select the Job logs action and click. This choice is only available when viewing audits from the sending system.

2. The Job Log window opens. Look at the most recent messages to determine the cause of the audit failure.
Note: If you see "no data available" instead, you may still be able to view the job log from the 5250 emulator as described below.

From a 5250 emulator, you must display the notifications from an audit in order to view the job log. Do the following:
1. From the Work with Audits display, type 7 (Notification) next to the audit and press Enter.
2. The notifications associated with the audit are displayed on the Work with Notifications display. Use option 5 (Display) or F22 to view the description in the Notification column.
3. If the notification is not sufficient to determine the problem, use option 12 (Display job) next to the notification.
4. The Display Job menu opens. Select option 4 (Display spooled files). Then use option 5 (Display) from the Display Job Spooled Files display.
5. Look for a completion message from the rule with the text indicated from Step 2. Usually the most recent messages are at the bottom of the display.



Interpreting results for configuration data - #DGFE audit


The #DGFE audit verifies the configuration data that is defined for replication in your configuration. This audit invokes the Check Data Group File Entries (CHKDGFE) command for the audit's comparison phase. The CHKDGFE command collects data on the source system and generates a report in a spooled file or an outfile. One possible reason why actual configuration data in your environment may not match what is defined to your configuration is that a file was deleted but the associated data group file entries were left intact. Another reason is that a data group file entry was specified with a member name, but a member is no longer defined to that file. If you use the automatic scheduling and automatic audit recovery functions of MIMIX AutoGuard, these configuration problems can be automatically detected and recovered for you.

The report is available on the system where the command ran. The report displays values that indicate problems or whether a recovery was attempted. When the Check Data Group File Entries (CHKDGFE) command is run, the following values can be indicated in the report:
• No file entry exists (*NODGFE)
• An extra file entry exists (*EXTRADGFE)
• No file for the existing file entry exists (*NOFILE)
• No file member for the existing file entry exists (*NOMBR)
• File entries are in transition and cannot be compared (*UA)

When the #DGFE rule is called and a recovery is attempted, the following values can also be indicated in the report:
• Recovered by automatic recovery actions (*RECOVERED)
• Automatic audit recovery actions were attempted but failed to correct the detected error (*RCYFAILED)
A hedged example of running this check manually follows.
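This is a minimal sketch of running the check manually and directing the results to an output file. The data group name APP1 and the output file name are placeholders, and the parameter names beyond the command name itself are assumptions rather than confirmed syntax; prompt the command with F4 to verify.

   MIMIX/CHKDGFE DGDFN(APP1) OUTPUT(*OUTFILE) OUTFILE(MYLIB/DGFECHK)   /* DGDFN, OUTPUT, and OUTFILE are assumed parameter names */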

Table 81 provides examples of when various configuration errors might occur. Table 82 provides possible problem resolution actions for these errors:
Table 81. CHKDGFE - possible error conditions

Result       File exists   Member exists   DGFE exists   DGOBJE exists
*NODGFE      Yes           Yes             No            COOPDB(*YES)
*EXTRADGFE   Yes           Yes             Yes           COOPDB(*NO)
*NOFILE      No            No              Yes           Exclude
*NOMBR       Yes           No              Yes           No entry


Table 82. CHKDGFE - possible error resolution actions

Result: *NODGFE
Recovery Actions: Create the DGFE or change the DGOBJE to COOPDB(*NO) - applies to all objects using the object entry. If you do not want all objects changed to this value, copy the existing DGOBJE to a new, specific DGOBJE with the appropriate COOPDB value.

Result: *EXTRADGFE
Recovery Actions: Delete the DGFE or change the DGOBJE to COOPDB(*YES) - applies to all objects using the object entry. If you do not want all objects changed to this value, copy the existing DGOBJE to a new, specific DGOBJE with the appropriate COOPDB value.

Result: *NOFILE
Recovery Actions: Delete the DGFE, re-create the missing file, or restore the missing file.

Result: *NOMBR
Recovery Actions: Delete the DGFE for the member or add the member to the file.


Interpreting results of audits for record counts and file data


The audits and commands that compare file data or record counts are as follows:
• #FILDTA audit or Compare File Data (CMPFILDTA) command
• #MBRRCDCNT audit or Compare Record Count (CMPRCDCNT) command

Each record in the output files for these audits or commands identifies a file member that has been compared and indicates whether a difference was detected for that member. MIMIX Availability Manager displays only detected differences found by each compare command, using a subset of the fields from the output file. You can see the full set of fields in each output file by viewing it from a 5250 emulator.

The type of data included in the output file is determined by the report type specified on the compare command. When viewed from a 5250 emulator, the data included for each report type is as follows:
• Difference reports return information about detected differences. Difference reports are the default for these compare commands.
• Full reports return information about all objects and attributes compared. Full reports include both differences and objects that are considered synchronized.
• Relative record number reports return the relative record number of the first 1,000 records of a member that fail to compare. Relative record number reports apply only to the Compare File Data command.
A hedged example of requesting a report type follows.
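This is a minimal sketch of requesting one of these report types on a compare, assuming data group APP1. The parameter names shown, including RPTTYPE, are assumptions based on the report types described above rather than confirmed syntax; prompt the command with F4 to verify.

   MIMIX/CMPFILDTA DGDFN(APP1) RPTTYPE(*DIF) OUTPUT(*OUTFILE)   /* assumed parameters: report only detected differences to an output file */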

What differences were detected by #FILDTA


The Difference Indicator (DIFIND) field identifies the result of the comparison. Table 83 identifies values for the Compare File Data command that can appear in this field.
Table 83. Possible values for Compare File Data (CMPFILDTA) output file field Difference Indicator (DIFIND)

*APY - The database apply (DBAPY) job encountered a problem processing a U-MX journal entry for this member.
*CMT - Commit cycle activity on the source system prevents active processing from comparing records or record counts in the selected member.
*CO - Unable to process selected member. Cannot open file.
*CO (LOB) - Unable to process selected member containing a large object (LOB). The file or the MIMIX-created SQL view cannot be opened.
*DT - Unable to process selected member. The file uses an unsupported data type.
*EQ - Record counts match. No differences were detected. Global difference indicator.
*EQ (OMIT) - No difference was detected. However, fields with unsupported types were omitted.
*FF - The file feature is not supported for comparison. Examples of file features include materialized query tables.
*FMC - Matching entry not found in database apply table.
*FMT - Unable to process selected member. File formats differ between source and target files. Either the record length or the null capability is different.
*HLD - Indicates that a member is held or an inactive state was detected.
*IOERR - Unable to complete processing on selected member. Messages preceding LVE0101 may be helpful.
*NE - Indicates a difference was detected.
*REP - The file member is being processed for repair by another job running the Compare File Data (CMPFILDTA) command.
*SJ - The source file is not journaled, or is journaled to the wrong journal.
*SP - Unable to process selected member. See messages preceding message LVE3D42 in the job log.
*SYNC - The file or member is being processed by the Synchronize DG File Entry (SYNCDGFE) command.
*UE - Unable to process selected member. Reason unknown. Messages preceding message LVE3D42 in the job log may be helpful.
*UN - Indicates that the member's synchronization status is unknown.


Updated for 5.0.06.00.
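Because these results land in a database output file, you can subset them from a command line. This is a minimal sketch using the standard IBM i Run Query (RUNQRY) command, assuming the output file was written to MYLIB/CMPOUT (a placeholder name) and using the DIFIND field named in this appendix.

   RUNQRY QRY(*NONE) QRYFILE((MYLIB/CMPOUT)) QRYSLT('DIFIND *EQ ''*NE''')   /* show only members where a difference was detected */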

What differences were detected by #MBRRCDCNT


Table 84 identifies values for the Compare Record Count command that can appear in the Difference Indicator (DIFIND) field.
Table 84. Possible values for Compare Record Count (CMPRCDCNT) output file field Difference Indicator (DIFIND)

*CMT - Commit cycle activity on the source system prevents active processing from comparing records or record counts in the selected member.
*EQ - Record counts match. No difference was detected. Global difference indicator.
*FF - The file feature is not supported for comparison. Examples of file features include materialized query tables.
*HLD - Indicates that a member is held or an inactive state was detected.
*LCK - Lock prevented access to member.
*NE - Indicates a difference was detected.
*NF1 - Member not found on system 1.
*NF2 - Member not found on system 2.
*SJ - The source file is not journaled, or is journaled to the wrong journal.
*UE - Unable to process selected member. Reason unknown. Messages preceding LVE3D42 in the job log may be helpful.
*UN - Indicates that the member's synchronization status is unknown.

Updated for 5.0.06.00.



Interpreting results of audits that compare attributes


Each audit that compares attributes does so by calling a Compare Attributes1 command and places the results in an output file. Each row in an output file for a Compare Attributes command can contain either a summary record format or a detailed record format. Each summary row identifies a compared object and includes a prioritized object-level summary of whether differences were detected. Each detail row identifies a specific attribute compared for an object and the comparison results.

The type of data included in the output file is determined by the report type specified on the Compare Attributes command. When viewed from a 5250 emulator, the data included for each report type is as follows:
• Difference reports (RPTTYPE(*DIF)) return information about detected differences. Only summary rows for objects that had detected differences are included. Detail rows for all compared attributes are included. Difference reports are the default for the Compare Attributes commands.
• Full reports (RPTTYPE(*ALL)) return information about all objects and attributes compared. For each object compared there is a summary row as well as a detail row for each attribute compared. Full reports include both differences and objects that are considered synchronized.
• Summary reports (RPTTYPE(*SUMMARY)) return only a summary row for each object compared. Specific attributes compared are not included.
A hedged example follows.
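This is a minimal sketch of requesting a summary-only report with one of the Compare Attributes commands. The RPTTYPE values are documented above; the data group name APP1 and the remaining parameter names are assumptions, so prompt the command with F4 to verify.

   MIMIX/CMPFILA DGDFN(APP1) RPTTYPE(*SUMMARY) OUTPUT(*OUTFILE)   /* assumed DGDFN and OUTPUT parameters; one summary row per compared file */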

For difference and full reports of compare attribute commands, several of the attribute selectors return an indicator (*INDONLY) rather than an actual value. Attributes that return indicators are usually variable in length, so an indicator is returned to conserve space. In these instances, the attributes are checked thoroughly, but the report only contains an indication of whether the attribute is synchronized.

For example, an authorization list can contain a variable number of entries. When comparing authorization lists, the CMPOBJA command will first determine if both lists have the same number of entries. If the same number of entries exist, it will then determine whether both lists contain the same entries. If differences in the number of entries are found or if the entries within the authorization list are not equal, the report will indicate that differences are detected. The report will not provide the list of entries; it will only indicate that they are not equal in terms of count or content.

MIMIX Availability Manager displays only detected differences found by Compare Attributes commands, using a subset of the fields from the output file. MIMIX Availability Manager displays summary rows in the Summary List window and detail rows in the Details window for the Compare command type. You can see the full set of fields in the output file by viewing it from a 5250 emulator.

1. The Compare Attribute commands are: Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA).


What attribute differences were detected


The Difference Indicator (DIFIND) field identifies the result of the comparison. Table 85 identifies values that can appear in this field. Not all values may be valid for every Compare command. Within MIMIX Availability Manager, the value shown in the Summary List window is a prioritized summary of the status of all attributes checked for the object. This summary value is also presented along with other object-identifying information at the top of the Details window. For each attribute displayed on the Details window, the results of its comparison are shown.

When the output file is viewed from a 5250 emulator, the summary row is the first record for each compared object and is indicated by an asterisk (*) in the Compared Attribute (CMPATR) field. The summary row's Difference Indicator value is the prioritized summary of the status of all attributes checked for the object. When included, detail rows appear below the summary row for the object compared and show the actual result for the attributes compared. The Priority column in Table 85 indicates the order of precedence MIMIX uses when determining the prioritized summary value for the compared object (see note 2 following the table).
Table 85. Possible values for output file field Difference Indicator (DIFIND)

Each entry shows the value (see note 1), its description, and, where given, the summary record priority in parentheses (see note 2).

*CO (priority 1) - Unable to process selected member. Cannot open file.
*CO (LOB) (priority 1) - Unable to process selected member containing a large object (LOB). The MIMIX-created SQL view cannot be opened.
*CMT (priority N/A) - An open commit cycle on the source system prevents active processing from comparing one or more records in the selected member.
*DT (priority 1) - Unable to process selected member. The file uses an unsupported data type.
*EC (priority 5) - The values are equal based on the MIMIX configuration settings. The actual values may or may not be equal.
*EQ (priority 5) - Record counts match. No differences were detected. Global difference indicator.
*EQ (OMIT) (priority 5) - No differences were detected. However, fields with unsupported types were omitted.
*FMT (priority 1) - Unable to process selected member. File formats differ between source and target files. Either the record length or the null capability is different.
*HLD (priority N/A) - Indicates that a member is held or an inactive state was detected.
*IOERR (priority 1) - Unable to complete processing on selected member. Messages preceding LVE0101 may be helpful.
*LCK - Lock prevented access to member.
*NA (priority 5) - The values are not compared. The actual values may or may not be equal.
*NC (priority 3) - The values are not equal based on the MIMIX configuration settings. The actual values may or may not be equal.
*NE (priority 2) - Indicates differences were detected.
*NF1 - Member not found on system 1.
*NF2 - Member not found on system 2.
*NS - Indicates that the attribute is not supported on one of the systems. Will not cause a global not equal condition.
*RCYSBM - Indicates that MIMIX AutoGuard submitted an automatic audit recovery action that must be processed through the user journal replication processes. The database apply (DBAPY) will attempt the recovery and send an *ERROR or *INFO notification to indicate the outcome of the recovery attempt.
*RCYFAILED (priority 1) - Used to indicate that automatic recovery attempts via AutoGuard failed to recover the detected difference.
*RECOVERED (priority 1, see note 3) - Indicates that recovery for this object was successful.
*SJ (priority 1) - Unable to process selected member. The source file is not journaled.
*SP (priority 1) - Unable to process selected member. See messages preceding message LVE3D42 in the job log.
*SYNC (priority N/A) - Unable to process selected member. The file is being processed by the Synchronize DG File Entry (SYNCDGFE) command.
*UA (priority 2) - Object status is unknown due to object activity. If an object difference is found and the comparison has a value specified on the Maximum replication lag prompt, the difference is seen as unknown due to object activity. This status is only displayed in the summary record. Note: The Maximum replication lag prompt is only valid when a data group is specified on the command.
*UE (priority 1) - Unable to process selected member. Reason unknown. Messages preceding message LVE3D42 in the job log may be helpful.
*UN (priority 4) - Indicates that the object's synchronization status is unknown.

Notes:
1. Not all values may be possible for every Compare command.
2. Priorities are used to determine the value shown in output files for Compare Attribute commands.
3. The value *RECOVERED can only appear in an output file modified by a recovery action. The object was initially found to be *NE or *NC but MIMIX autonomic functions recovered the object.

For most attributes, when a detailed row contains blanks in either of the System 1 Indicator or System 2 Indicator fields, MIMIX determines the value of the Difference Indicator field according to Table 86. For example, if the System 1 Indicator is *NOTFOUND and the System 2 Indicator is blank (Object found), the resultant Difference Indicator is *NE.
Table 86. Difference Indicator values that are derived from System Indicator values

The cell where a row (System 2 Indicator) meets a column (System 1 Indicator) gives the resulting Difference Indicator. "Found" means the object was found and the indicator is blank.

System 2           System 1 Indicator
Indicator          Found (blank)                    *NOTCMPD    *NOTFOUND   *NOTSPT   *RTVFAILED   *DAMAGED
Found (blank)      *EQ / *EQ (LOB) / *NE / *UA /    *NA         *NE / *UA   *NS       *UN          *NE
                   *EC / *NC
*NOTCMPD           *NA                              *NA         *NE / *UA   *NS       *UN          *NE
*NOTFOUND          *NE / *UA                        *NE / *UA   *EQ         *NS       *UN          *NE
*NOTSPT            *NS                              *NS         *NS         *NS       *UN          *NE
*RTVFAILED         *UN                              *UN         *UN         *UN       *UN          *NE
*DAMAGED           *NE                              *NE         *NE         *NE       *NE          *NE

For a small number of specific attributes, the comparison is more complex. The results returned vary according to parameters specified on the compare request and MIMIX configuration values. For more information see the following topics:
• Comparison results for journal status and other journal attributes on page 608
• Comparison results for auxiliary storage pool ID (*ASP) on page 612
• Comparison results for user profile status (*USRPRFSTS) on page 615
• Comparison results for user profile password (*PRFPWDIND) on page 619

Where was the difference detected


The System 1 Indicator (SYS1IND) and System 2 Indicator (SYS2IND) fields show the status of the attribute on each system as determined by the compare request. Table 87 identifies the possible values. While these fields are available in both summary and detail rows in the output file, MIMIX Availability Manager only displays them in the Details window.
Table 87. Possible values for output file fields SYS1IND and SYS2IND

Each entry shows the value, its description, and the summary record priority in parentheses (see note 1).

<blank> (priority 5) - No special conditions exist for this object.
*DAMAGED (priority 3) - Object damaged condition.
*MBRNOTFND (priority 2) - Member not found.
*NOTCMPD (priority N/A, see note 2) - Attribute not compared. Due to MIMIX configuration settings, this attribute cannot be compared.
*NOTFOUND (priority 1) - Object not found.
*NOTSPT (priority N/A, see note 2) - Attribute not supported. Not all attributes are supported on all IBM i releases. This is the value that is used to indicate an unsupported attribute has been specified.
*RTVFAILED (priority 4) - Unable to retrieve the attributes of the object. Reason for failure may be a lock condition.

Notes:
1. The priority indicates the order of precedence MIMIX uses when setting the system indicator fields in the summary record.
2. This value is not used in determining the priority of summary level records.

For comparisons which include a data group, the Data Source (DTASRC) field identifies which system is configured as the source for replication. In MIMIX Availability Manager Details windows, the direction of the arrow shown in the data group field identifies the flow of replication.

What attributes were compared


In each detailed row, the Compared Attribute (CMPATR) field identifies a compared attribute. The following topics identify the attributes that can be compared by each command and the possible values returned:
• Attributes compared and expected results - #FILATR, #FILATRMBR audits on page 591
• Attributes compared and expected results - #OBJATR audit on page 596
• Attributes compared and expected results - #IFSATR audit on page 604
• Attributes compared and expected results - #DLOATR audit on page 606


Attributes compared and expected results - #FILATR, #FILATRMBR audits


The Compare File Attribute (CMPFILA) command supports comparisons at the file and member level. Most of the attributes supported are for file-level comparisons. The #FILATR audit and the #FILATRMBR audit each invoke the CMPFILA command for the comparison phase of the audit.

Some attributes are common file attributes such as owner, authority, and creation date. Most of the attributes, however, are file-specific attributes. Examples of file-specific attributes include triggers, constraints, database relationships, and journaling information. The Difference Indicator (DIFIND) returned after comparing file attributes may depend on whether the file is defined by file entries or object entries. For instance, an attribute could be equal (*EC) to the database configuration but not equal (*NC) to the object configuration. See What attribute differences were detected on page 587.

Table 88 lists the attributes that can be compared and the value shown in the Compared Attribute (CMPATR) field in the output file. The Returned Values column lists the values you can expect in the System 1 Value (SYS1VAL) and System 2 Value (SYS2VAL) columns as a result of running the comparison.
Table 88. Compare File Attributes (CMPFILA) attributes

Each entry shows the attribute, its description in parentheses, and the returned values (SYS1VAL, SYS2VAL). Bracketed numbers refer to the notes following the table.

*ACCPTH [1] (Access path): AR - Arrival sequence access path. EV - Encoded vector with a 1-, 2-, or 4-byte vector. KC - Keyed sequence access path with duplicate keys allowed; duplicate keys are accessed in first-changed-first-out (FCFO) order. KF - Keyed sequence access path with duplicate keys allowed; duplicate keys are accessed in first-in-first-out (FIFO) order. KL - Keyed sequence access path with duplicate keys allowed; duplicate keys are accessed in last-in-first-out (LIFO) order. KN - Keyed sequence access path with duplicate keys allowed; no order is guaranteed when accessing duplicate keys. KU - Keyed sequence access path with no duplicate keys allowed (UNIQUE).
*ACCPTHSIZ [1] (Access path size): *MAX4GB, *MAX1TB
*ALWDLT (Allow delete operation): *YES, *NO
*ALWOPS (Allow operations): Group which checks attributes *ALWDLT, *ALWRD, *ALWUPD, *ALWWRT
*ALWRD (Allow read operation): *YES, *NO
*ALWUPD (Allow update operation): *YES, *NO
*ALWWRT (Allow write operation): *YES, *NO
*ASP (Auxiliary storage pool ID): 1-16 (pre-V5R2), 1-255 (V5R2); 1 = System ASP. See Comparison results for auxiliary storage pool ID (*ASP) on page 612 for details.
*AUDVAL (Object audit value): *NONE, *CHANGE, *ALL
*AUT (File authorities): Group which checks attributes *AUTL, *PGP, *PRVAUTIND, *PUBAUTIND
*AUTL (Authority list name): *NONE, list name
*BASIC (Pre-determined set of basic attributes): Group which checks a pre-determined set of attributes. When *FILE is specified for the Comparison level (CMPLVL), these attributes are compared: *CST (group), *NBRMBR, *OBJATR, *RCDFMT, *TEXT, and *TRIGGER (group). When *MBR is specified for the Comparison level (CMPLVL), these attributes are compared: *CURRCDS, *EXPDATE, *NBRDLTRCD, *OBJATR, *SHARE, and *TEXT.
*CCSID [1] (Coded character set): 1-65535
*CST (Constraint attributes): Group which checks attributes *CSTIND, *CSTNBR
*CSTIND [2] (Constraint equal indicator): No value, indicator only [4]. When this attribute is returned in output, its Difference Indicator value indicates if the number of constraints, constraint names, constraint types, and the check pending attribute are equal. For referential and check constraints, the constraint state as well as whether the constraint status is enabled or disabled is also compared.
*CSTNBR [2] (Number of constraints): Numeric value
*CURRCDS (Current number of records): 0-4294967295
*DBCSCAP (DBCS capable): *YES, *NO
*DBR (Database relations): Group which checks *DBRIND, *OBJATR
*DBRIND [2] (Database relations equal indicator): No value, indicator only [4]. When this attribute is returned in output, its Difference Indicator value indicates if the number of database relations and the dependent file names are equal.
*EXPDATE [1] (Expiration date for member): Blank for *NONE, or date in CYYMMDD format, where C equals the century. Value 0 is 19nn and 1 is 20nn.
*EXTENDED (Pre-determined, extended set): Valid only for Comparison level of *FILE, this group compares the basic set of attributes (*BASIC) plus an extended set of attributes. The following attributes are compared: *ACCPTH, *AUT (group), *CCSID, *CST (group), *CURRCDS, *DBR (group), *MAXKEYL, *MAXMBRS, *MAXRCDL, *NBRMBR, *OBJATR, *OWNER, *PFSIZE (group), *RCDFMT, *REUSEDLT, *SELOMT, *SQLTYP, *TEXT, and *TRIGGER (group).
*FIRSTMBR [1] [3] (Name of member *FIRST): 10 character name. *NONE if the file has no members.
*FRCKEY [1] (Force keyed access path): *YES, *NO
*FRCRATIO [1] (Records to force a write): *NONE, 1-32767
*INCRCDS [1] (Increment number of records): 0-32767
*JOIN (Join logical file): *YES, *NO. Add, update, and delete authorities are not checked. Differences in these authorities do not result in an *NE condition.
*JOURNAL (Journal attributes): Group which checks *JOURNALED, *JRN, *JRNLIB, *JRNIMG, *JRNOMIT. Results are described in Comparison results for journal status and other journal attributes on page 608.
*JOURNALED (File is currently journaled): *YES, *NO
*JRN (Current or last journal): 10 character name, blank if never journaled
*JRNIMG (Record images): *AFTER, *BOTH
*JRNLIB (Current or last journal library): 10 character name, blank if never journaled
*JRNOMIT (Journal entries to be omitted): *OPNCLO, *NONE
*LANGID [1] (Language ID): 3 character ID
*LASTMBR [1] [3] (Name of member *LAST): 10 character name. *NONE if the file has no members.
*LVLCHK [1] (Record format level check): *YES, *NO
*MAINT [1] (Access path maintenance): *IMMED, *REBLD, *DLY
*MAXINC [1] (Maximum increments): 0-32767
*MAXKEYL [1] (Maximum key length): 1-2000
*MAXMBRS [1] (Maximum members): *NOMAX, 1-32767
*MAXPCT [1] (Max % deleted records allowed): *NONE, 1-100
*MAXRCDL [1] (Maximum record length): 1-32766
*NBRDLTRCD [1] (Current number of deleted records): 0-4294967295
*NBRMBR [1] (Number of members): 0-32767
*NBRRCDS [1] (Initial number of records): *NOMAX, 1-2147483646
*OBJCTLLVL [1] (Object control level): 8 character user-defined value
*OWNER (File owner): User profile name
*PFSIZE (File size attributes): Group which checks *CURRCDS, *INCRCDS, *MAXINC, *NBRDLTRCD, *NBRRCDS
*PGP (Primary group): *NONE, user profile name
*PRVAUTIND (Private authority indicator): No value, indicator only [4]. When this attribute is returned in output, its Difference Indicator value indicates if the number of private authorities and private authority values are equal.
*PUBAUTIND (Public authority indicator): No value, indicator only [4]. When this attribute is returned in output, its Difference Indicator value indicates if public authority values are equal.
*RCDFMT (Number of record formats): 1-32
*RECOVER [1] (Access path recovery): *IPL, *AFTIPL, *NO
*REUSEDLT [1] (Reuse deleted records): *YES, *NO
*SELOMT (Select / omit file): *YES, *NO
*SHARE [1] (Share open data path): *YES, *NO
*SQLTYP (SQL file type): PF types - NONE, TABLE; LF types - INDEX, VIEW, NONE
*TEXT [1] (Text description): 50 character value
*TRIGGER (Trigger attributes): Group which checks *TRGIND, *TRGNBR, *TRGXSTIND
*TRGIND [2] (Trigger equal indicator): No value, indicator only [4]. When this attribute is returned in output, its Difference Indicator value indicates whether it is enabled or disabled, and if the number of triggers, trigger names, trigger time, trigger event, and trigger condition with an event type of update are equal.
*TRGNBR [2] (Number of triggers): Numeric value
*TRGXSTIND [2] (Trigger existence indicator): No value, indicator only [4]. When this attribute is returned in output, its Difference Indicator value indicates if a trigger program exists on the system.
*USRATR (User-defined attribute): 10 character user-defined value
*WAITFILE [1] (Maximum file wait time): *IMMED, *CLS, 1-32767
*WAITRCD [1] (Maximum record wait time): *IMMED, *NOMAX, 1-32767

Notes:
1. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a data group and the object is configured for system journal replication with a configured object auditing value of *NONE.
2. This attribute cannot be specified as input for comparing, but it is included in a group attribute. When the group attribute is checked, this value may appear in the output.
3. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a data group and the file is configured for system journal replication with a configured Omit content (OMTDTA) value of *FILE.
4. If *PRINT is specified in the comparison, an indicator appears in the system 1 and system 2 columns. If *OUTFILE is specified, however, these values are blank.

Updated for 5.0.11.00.

595

Attributes compared and expected results - #OBJATR audit


The #OBJATR audit calls the Compare Object Attributes (CMPOBJA) command and places the results in an output file. Table 89 lists the attributes that can be compared by the CMPOBJA command and the value shown in the Compared Attribute (CMPATR) field in the output file. The command supports attributes that are common among most library-based objects as well as extended attributes which are unique to specific object types, such as subsystem descriptions, user profiles, and data areas. The Returned Values column lists the values you can expect in the System 1 Value (SYS1VAL) and System 2 Value (SYS2VAL) columns as a result of running the compare.
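As a minimal sketch of how such a compare might be requested, the following invocation writes the attributes for the objects of a data group to an outfile. The data group, library, and outfile names are placeholders, and the object selection and outfile parameter names (OBJ1, OBJTYPE, OUTFILE) are assumptions that should be verified with the CMPOBJA command prompter:

    CMPOBJA DGDFN(MYDGDFN) OBJ1(APPLIB/*ALL) OBJTYPE(*ALL) OUTPUT(*OUTFILE) OUTFILE(MYLIB/OBJAOUT)

The resulting rows can then be filtered on the Difference Indicator field to isolate attributes that did not compare as equal.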
Table 89. Compare Object Attributes (CMPOBJA) attributes. Each entry lists the attribute, its description, and the returned values (SYS1VAL, SYS2VAL).

*ACCPTHSIZ (1, 2): Access path size. Valid for logical files only. Returned values: *MAX4GB, *MAX1TB.
*AJEIND: Auto start job entries. Valid for subsystem descriptions only. No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates if the number of auto start job entries, job entry and associated job description, and library entry values are equal.
*ASP: Auxiliary storage pool ID. Returned values: 1-16 (pre-V5R1), 1-32 (V5R1), 1-255 (V5R2); 1 = system ASP. See Comparison results for auxiliary storage pool ID (*ASP) on page 612 for details.
*ASPNBR: Number of defined storage pools. Valid for subsystem descriptions only. Returned values: numeric value.
*ATTNPGM (2): Attention key handling program. Valid for user profiles only. Returned values: *SYSVAL, *NONE, *ASSIST, attention program name.
*AUDVAL: Object audit value. Returned values: *NONE, *USRPRF, *CHANGE, *ALL.
*AUT: Authority attributes. Group which checks *AUTL, *PGP, *PRVAUTIND, *PUBAUTIND.
*AUTCHK (2): Authority to check. Valid for job queues only. Returned values: *OWNER, *DTAAUT.
*AUTL: Authority list name. Returned values: *NONE, list name.
*BASIC: Pre-determined set of basic attributes. Group which checks a pre-determined set of attributes. These attributes are compared: *CRTTSP, *DOMAIN, *INFSTS, *OBJATR, *TEXT, and *USRATR.
*CCSID (2): Character identifier control. Valid for user profiles only. Returned values: *SYSVAL, ccsid-value.
*CNTRYID (2): Country ID. Valid for user profiles only. Returned values: *SYSVAL, country-id.
*COMMEIND: Communications entries. Valid for subsystem descriptions only. No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates if the number of communication entries, maximum number of active jobs, communication device, communication mode, associated job description and library, and the default user entry values are equal.
*CRTAUT (2): Authority given to users who do not have specific authority to the object. Valid for libraries only. Returned values: *SYSVAL, *CHANGE, *ALL, *USE, *EXCLUDE.
*CRTOBJAUD (2): Auditing value for objects created in this library. Valid for libraries only. Returned values: *SYSVAL, *NONE, *USRPRF, *CHANGE, *ALL.
*CRTOBJOWN: Profile that owns objects created by user. Valid for user profiles only. Returned values: *USRPRF, *GRPPRF, profile-name.
*CRTTSP: Object creation date. Returned values: YYYY-MM-DD-HH.MM.SS.mmmmmm.
*CURLIB: Current library. Valid for user profiles only. Returned values: *CRTDFT, current-library.
*DATACRC (2): Data cyclic redundancy check (CRC). Valid for data queues only. Returned values: 10 character value.
*DDMCNV (2): DDM conversation. Valid for job descriptions only. Returned values: *KEEP, *DROP.
*DECPOS: Decimal positions. Valid for data areas only. Returned values: 0-9.
*DOMAIN: Object domain. Returned values: *SYSTEM, *USER.
*DTAARAEXT: Data area extended attributes. Group which checks *DECPOS, *LENGTH, *TYPE, *VALUE.
*EXTENDED: Pre-determined, extended set. Group which compares the basic set of attributes (*BASIC) plus an extended set of attributes. The following attributes are compared: *AUT, *CRTTSP, *DOMAIN, *INFSTS, *OBJATR, *TEXT, and *USRATR.
*FRCRATIO (1, 2): Records to force a write. Valid for logical files only. Returned values: *NONE, 1-32767.
*GID: Group profile ID number. Valid for user profiles only. Returned values: 1-4294967294.
*GRPAUT: Group authority to created objects. Valid for user profiles only. Returned values: *NONE, *ALL, *CHANGE, *USE, *EXCLUDE.
*GRPAUTTYP: Group authority type. Valid for user profiles only. Returned values: *PGP, *PRIVATE.
*GRPPRF: Group profile name. Valid for user profiles only. Returned values: *NONE, profile-name.
*INFSTS: Information status. Returned values: *OK (no errors occurred), *RTVFAILED (no information returned - insufficient authority or object is locked), *DAMAGED (object is damaged or partially damaged).
*INLMNU: Initial menu. Valid for user profiles only. Returned values: menu - *SIGNOFF, menu name; library - *LIBL, library name.
*INLPGM: Initial program. Valid for user profiles only. Returned values: program - *NONE, program name; library - *LIBL, library name.
*JOBDEXT: Job description extended attributes. Group which checks *DDMCNV, *JOBQ, *JOBQLIB, *JOBQPRI, *LIBLIND, *LOGOUTPUT, *OUTQ, *OUTQLIB, *OUTQPRI, *PRTDEV.
*JOBQ (2): Job queue. Valid for job descriptions only. Returned values: 10 character name.
*JOBQEIND: Job queue entries. Valid for subsystem descriptions only. No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates if the number of job queue entries, job queue names, job queue libraries, and order of entries are the same.
*JOBQEXT: Job queue extended attributes. Group which checks *AUTCHK, *JOBQSBS, *JOBQSTS, *OPRCTL.
*JOBQLIB (2): Job queue library. Valid for job descriptions only. Returned values: 10 character name.
*JOBQPRI (2): Job queue priority. Valid for job descriptions only. Returned values: 1 (highest) - 9 (lowest).
*JOBQSBS (2): Subsystem that receives jobs from this queue. Valid for job queues only. Returned values: subsystem name.
*JOBQSTS (2): Job queue status. Valid for job queues only. Returned values: HELD, RELEASED.
*JOURNAL: Journal attributes. Group which checks *JOURNALED, *JRN, *JRNLIB, *JRNIMG, *JRNOMIT (4). Results are described in Comparison results for journal status and other journal attributes on page 608.
*JOURNALED: Object is currently journaled. Returned values: *YES, *NO.
*JRN: Current or last journal. Returned values: 10 character name.
*JRNIMG: Record images. Returned values: *AFTER, *BOTH.
*JRNLIB: Current or last journal library. Returned values: 10 character name.
*JRNOMIT: Journal entries to be omitted. Returned values: *OPNCLO, *NONE.
*LANGID (2): Language ID. Valid for user profiles only. Returned values: *SYSVAL, language-id.
*LENGTH: Data area length. Valid for data areas only. Returned values: 1-2000 (character), 1-24 (decimal), 1 (logical).
*LIBEXT: Extended library information attributes. Group which checks *CRTAUT, *CRTOBJAUD.
*LIBLIND: Initial library list. Valid for job descriptions only. No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates if the number of library list entries and entry list values are equal. The comparison is order dependent.
*LMTCPB: Limit capabilities. Valid for user profiles only. Returned values: *PARTIAL, *YES, *NO.
*LOGOUTPUT (2): Job log output. Valid for job descriptions only. Returned values: *SYSVAL, *JOBLOGSVR, *JOBEND, *PND.
*LVLCHK (1, 2): Record format level check. Valid for logical files only. Returned values: *YES, *NO.
*MAINT (1, 2): Access path maintenance. Valid for logical files only. Returned values: *DLY, *IMMED, *REBLD.
*MAXACT (2): Maximum active jobs. Valid for subsystem descriptions only. Returned values: numeric value, *NOMAX (32767).
*MAXMBRS (1, 2): Maximum members. Valid for logical files only. Returned values: *NOMAX, 1-32767.
*MSGQ (2): Message queue. Valid for user profiles only. Returned values: message queue - message queue name; library - *LIBL, library name.
*NBRMBR (1, 2): Number of logical file members. Valid for logical files only. Returned values: 0-32767.
*OBJATR: Object attribute. Returned values: 10 character object extended attribute.
*OBJCTLLVL (2): Object control level. Valid for object types that support this attribute (5). Returned values: 8 character user-defined value.
*OPRCTL (2): Operator controlled. Valid for job queues only. Returned values: *YES, *NO.
*OUTQ (2): Output queue. Valid for job descriptions only. Returned values: *USRPRF, *DEV, *WRKSTN, output queue name.
*OUTQLIB (2): Output queue library. Valid for job descriptions only. Returned values: 10 character name.
*OUTQPRI (2): Output queue priority. Valid for job descriptions only. Returned values: 1 (highest) - 9 (lowest).
*OWNER: Object owner. Returned values: 10 character name.
*PGP: Primary group. Returned values: *NONE, user profile name.
*PRESTIND: Pre-start job entries. Valid for subsystem descriptions only. No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates if the number of prestart jobs, program, user profile, start job, wait for job, initial jobs, maximum jobs, additional jobs, threshold, maximum users, job name, job description, first and second class, and number of first and second class jobs values are equal.
*PRFOUTQ (2): Output queue. Valid for user profiles only. Returned values: *LIBL/*WRKSTN, *DEV.
*PRFPWDIND: User profile password indicator. See Comparison results for user profile password (*PRFPWDIND) on page 619 for details.
*PRTDEV (2): Printer device. Valid for job descriptions only. Returned values: *USRPRF, *SYSVAL, *WRKSTN, printer device name.
*PRVAUTIND: Private authority indicator. No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates if the number of private authorities and private authority values are equal.
*PUBAUTIND: Public authority indicator. No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates if the public authority values are equal.
*PWDEXPITV: Password expiration interval. Valid for user profiles only. Returned values: *SYSVAL, *NOMAX, 1-366 days.
*PWDIND: No password indicator. Valid for user profiles only. Returned values: *YES (no password), *NO (password).
*QUEALCIND: Job queue allocation indicator. Valid for subsystem descriptions only. No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates if the job queue entries for a subsystem are in the same order and have the same queue names and queue library names. It also compares the allocation indicator values.
*RLOCIND: Remote location entries. Valid for subsystem descriptions only. No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates if the number of remote location entries, remote location, mode, job description and library, maximum active jobs, and default user entry values are equal.
*RTGEIND: Routing entries. Valid for subsystem descriptions only. No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates if the number of routing entries, sequence number, maximum active, steps, compare start, entry program, class, and compare entry values are equal.
*SBSDEXT: Subsystem description extended attributes. Group which checks *AJEIND, *ASPNBR, *COMMEIND, *JOBQEIND, *MAXACT, *PRESTIND, *RLOCIND, *RTGEIND, *SBSDSTS.
*SBSDSTS (2): Subsystem status. Valid for subsystem descriptions only. Returned values: *ACTIVE, *INACTIVE.
*SIZE: Object size. Returned values: numeric value.
*SPCAUTIND: Special authorities. Valid for user profiles only. No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates if special authority values are equal.
*SQLSP: SQL stored procedures. Valid for programs and service programs only. Returned values: *NONE, or indicator only (3). *NONE is returned when there are no stored procedures associated with the program or service program. When the indicator only is returned in output, the Difference Indicator value identifies whether SQL stored procedures associated with the object are equal.
*SQLUDF: SQL user defined functions. Valid for programs and service programs only. Returned values: *NONE, or indicator only (3). *NONE is returned when there are no user defined functions associated with the program or service program. When the indicator only is returned in output, the Difference Indicator value identifies whether SQL user defined functions associated with the object are equal.
*SUPGRPIND: Supplemental groups. Valid for user profiles only. No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates if supplemental group values are equal.
*TEXT (2): Text description. Returned values: 50 character description.
*TYPE: Data area type; data area types of DDM are resolved to actual data area types. Valid for data areas only. Returned values: *CHAR, *DEC, *LGL.
*UID: User profile ID number. Valid for user profiles only. Returned values: 1-4294967294.
*USRATR (2): User-defined attribute. Returned values: 10 character user-defined value.
*USRCLS: User class. Valid for user profiles only. Returned values: *SECOFR, *SECADM, *PGMR, *SYSOPR, *USER.
*USRPRFEXT: User profile extended attributes. Group which checks *ATTNPGM, *CCSID, *CNTRYID, *CRTOBJOWN, *CURLIB, *GID, *GRPAUT, *GRPAUTTYP, *GRPPRF, *INLMNU, *INLPGM, *LANGID, *LMTCPB, *MSGQ, *PRFOUTQ, *PWDEXPITV, *PWDIND, *SPCAUTIND, *SUPGRPIND, *USRCLS.
*USRPRFSTS: User profile status. Returned values: *ENABLED, *DISABLED (6). For details, see Comparison results for user profile status (*USRPRFSTS) on page 615.
*VALUE (2): Data area value. Valid for data areas only. Returned values: character value of data.

Notes:
1. This attribute only applies to logical files. Use the Compare File Attributes (CMPFILA) command to compare or omit physical file attributes.
2. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a data group and the object is configured for system journal replication with a configured object auditing value of *NONE.
3. If *PRINT is specified for the output format on the compare request, an indicator appears in the System 1 and System 2 columns. If *OUTFILE is specified, these values are blank.
4. These attributes are compared for object types of *FILE, *DTAQ, and *DTAARA. These are the only object types supported by IBM user journals.
5. The *OBJCTLLVL attribute is only supported on the following object types: *AUTL, *CNNL, *COSD, *CTLD, *DEVD, *DTAARA, *DTAQ, *FILE, *IPXD, *LIB, *LIND, *MODD, *NTBD, *NWID, *NWSD, and *USRPRF.
6. The profile status is only compared if no data group is specified or the USRPRFSTS has a value of *SRC for the specified data group. If a data group is specified on the CMPOBJA command and the USRPRFSTS value on the object entry has a value of *TGT, *ENABLED, or *DISABLED, the user profile status is not compared.

Updated for 5.0.03.00 and 5.0.07.00.

Attributes compared and expected results - #IFSATR audit


The #IFSATR audit calls the Compare IFS Attributes (CMPIFSA) command and places the results in an output file. Table 90 lists the attributes that can be compared by the CMPIFSA command and the value shown in the Compared Attribute (CMPATR) field in the output file. The Returned Values column lists the values you can expect in the System 1 Value (SYS1VAL) and System 2 Value (SYS2VAL) columns as a result of running the compare.
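A sketch of a possible invocation follows; the path, data group, and outfile names are placeholders, and the object selection and outfile parameter names (OBJ1, OUTFILE) are assumptions to verify with the CMPIFSA command prompter:

    CMPIFSA DGDFN(MYDGDFN) OBJ1('/home/payroll/*') OUTPUT(*OUTFILE) OUTFILE(MYLIB/IFSAOUT)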
Table 90. Compare IFS Attributes (CMPIFSA) attributes. Each entry lists the attribute, its description, and the returned values (SYS1VAL, SYS2VAL).

*ALWSAV (1): Allow save. Returned values: *YES, *NO.
*ASP: Auxiliary storage pool. Returned values: 1-16 (pre-V5R1), 1-255 (V5R1); 1 = system ASP. See Comparison results for auxiliary storage pool ID (*ASP) on page 612 for details.
*AUDVAL: Object auditing value. Returned values: *ALL, *CHANGE, *NONE, *USRPRF.
*AUT: Authority attributes. Group which checks attributes *AUTL, *PGP, *PUBAUTIND, *PRVAUTIND.
*AUTL: Authority list name. Returned values: *NONE, list name.
*BASIC: Pre-determined set of basic attributes. Group which checks a pre-determined set of attributes. The following set of attributes are compared: *CCSID, *DATASIZE, *OBJTYPE, and the group *PCATTR.
*CCSID (1): Coded character set. Returned values: 1-65535.
*CRTTSP (2): Create timestamp. Returned values: SAA format (YY-MM-DD-HH.MM.SS.mmmmmm).
*DATACRC: Data cyclic redundancy check (CRC). Returned values: 8 character value.
*DATASIZE (1): Data size. Returned values: 0-4294967295.
*EXTENDED: Pre-determined, extended set. Group which checks a pre-determined set of attributes; compares the basic set of attributes (*BASIC) plus an extended set of attributes. The following attributes are compared: *AUT (group), *CCSID, *DATASIZE, *OBJTYPE, *OWNER, and *PCATTR (group).
*JOURNAL: Journal information. Group which checks attributes *JOURNALED, *JRN, *JRNLIB, *JRNIMG, *JRNOPT. Results are described in Comparison results for journal status and other journal attributes on page 608.
*JOURNALED: File is currently journaled. Returned values: *YES, *NO.
*JRN: Current or last journal. Returned values: 10 character name.
*JRNIMG: Record images. Returned values: *AFTER, *BOTH.
*JRNLIB: Current or last journal library. Returned values: 10 character name.
*JRNOPT: Journal optional entries. Returned values: *YES, *NO.
*OBJTYPE: Object type. Returned values: *STMF, *DIR, *SYMLNK.
*OWNER: File owner. Returned values: 10 character name.
*PCARCHIVE (1): Archived file. Returned values: *YES, *NO.
*PCATTR: PC attributes. Group which checks *PCARCHIVE, *PCHIDDEN, *PCREADO, *PCSYSTEM.
*PCHIDDEN (1): Hidden file. Returned values: *YES, *NO.
*PCREADO (1): Read only attribute. Returned values: *YES, *NO.
*PCSYSTEM (1): System file. Returned values: *YES, *NO.
*PGP: Primary group. Returned values: *NONE, user profile name.
*PRVAUTIND: Private authority indicator. No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates if the number of private authorities and private authority values are equal.
*PUBAUTIND: Public authority indicator. No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates if the public authority values are equal.

Notes:
1. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a data group and the object is configured for system journal replication with a configured object auditing value of *NONE.
2. The *CRTTSP attribute does not compare directories (*DIR) or symbolic links (*SYMLNK). For stream files (*STMF), the #IFSATR audit omits the *CRTTSP attribute from comparison since creation timestamps are not preserved during replication. Running the CMPIFSA command will detect differences in the creation timestamps for stream files.
3. If *PRINT is specified in the comparison, an indicator appears in the system 1 and system 2 columns. If *OUTFILE is specified, these values are blank.

Updated for 5.0.07.00.

Attributes compared and expected results - #DLOATR audit


The #DLOATR audit calls the Compare DLO Attributes (CMPDLOA) command and places the results in an output file. Table 91 lists the attributes that can be compared by the CMPDLOA command and the value shown in the Compared Attribute (CMPATR) field in the output file. The Returned Values column lists the values you can expect in the System 1 Value (SYS1VAL) and System 2 Value (SYS2VAL) columns as a result of running the compare.
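A hypothetical invocation might look as follows; the DLO selection and outfile parameter names (DLO1, FLR1, OUTFILE) and all object names are illustrative assumptions, not confirmed syntax:

    CMPDLOA DGDFN(MYDGDFN) DLO1(*ALL) FLR1(PAYFLR) OUTPUT(*OUTFILE) OUTFILE(MYLIB/DLOAOUT)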
Table 91. Compare DLO Attributes (CMPDLOA) attributes. Each entry lists the attribute, its description, and the returned values (SYS1VAL, SYS2VAL).

*ASP: Auxiliary storage pool. Returned values: 1-16 (pre-V5R1), 1-32 (V5R1); 1 = system ASP. See Comparison results for auxiliary storage pool ID (*ASP) on page 612 for details.
*AUDVAL: Object audit value. Returned values: *NONE, *USRPRF, *CHANGE, *ALL.
*AUT: Authority attributes. Group which checks *AUTL, *PGP, *PUBAUTIND, *PRVAUTIND.
*AUTL: Authority list name. Returned values: *NONE, list name.
*BASIC: Pre-determined set of basic attributes. Group which checks a pre-determined set of attributes. The following set of attributes are compared: *CCSID, *DATASIZE, *OBJTYPE, *PCATTR, and *TEXT.
*CCSID: Coded character set. Returned values: 1-65535.
*CRTTSP: Create timestamp. Returned values: SAA format (YY-MM-DD-HH.MM.SS.mmmmmm).
*DATASIZE: Data size. Returned values: 0-4294967295 (1).
*EXTENDED: Pre-determined, extended set. Group which checks a pre-determined set of attributes; compares the basic set of attributes (*BASIC) plus an extended set of attributes. The following attributes are compared: *AUT, *CCSID, *DATASIZE, *OBJTYPE, *OWNER, *PCATTR, and *TEXT.
*MODTSP: Modify timestamp. Returned values: SAA format (YY-MM-DD-HH.MM.SS.mmmmmm).
*OBJTYPE: Object type. Returned values: *DOC, *FLR (2).
*OWNER: File owner. Returned values: 10 character name.
*PCARCHIVE: Archived file. Returned values: *YES, *NO.
*PCATTR: PC attributes. Group which checks *PCARCHIVE, *PCHIDDEN, *PCREADO, *PCSYSTEM.
*PCHIDDEN: Hidden file. Returned values: *YES, *NO.
*PCREADO: Read only attribute. Returned values: *YES, *NO.
*PCSYSTEM: System file. Returned values: *YES, *NO.
*PGP: Primary group. Returned values: *NONE, user profile name.
*PRVAUTIND: Private authority indicator. No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates if the number of private authorities and private authority values are equal.
*PUBAUTIND: Public authority indicator. No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates if the public authority values are equal.
*TEXT: Text description. Returned values: 50 character description.

Notes:
1. This attribute is not supported for DLOs with an object type of *FLR.
2. This attribute is always compared.
3. If *PRINT is specified in the comparison, an indicator appears in the system 1 and system 2 columns. If *OUTFILE is specified, these values are blank.

Comparison results for journal status and other journal attributes


The Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), and Compare IFS Attributes (CMPIFSA) commands support comparing the journaling attributes listed in Table 92 for objects replicated from the user journal. These commands function similarly when comparing journaling attributes. When a compare is requested, MIMIX determines the result displayed in the Difference Indicator field by considering whether the file is journaled, whether the request includes a data group, and the data group's configured settings for journaling. Regardless of which journaling attribute is specified on the command, MIMIX always checks the journaling status first (*JOURNALED attribute). If the file or object is journaled on both systems, MIMIX then considers whether the command specified a data group definition before comparing any other requested attribute.
Table 92. Journaling attributes. When specified on the CMPOBJA command, these values apply only to files, data areas, or data queues. When specified on the CMPFILA command, these values apply only to PF-DTA and PF38-DTA files.

*JOURNAL: Object journal information attributes. This value acts as a group selection, causing all other journaling attributes to be selected.
*JOURNALED: Journal status. Indicates whether the object is currently being journaled. This attribute is always compared when any of the other journaling attributes are selected.
*JRN (1): Journal. Indicates the name of the current or last journal. If blank, the object has never been journaled.
*JRNIMG (1, 2): Journal image. Indicates the kinds of images that are written to the journal receiver for changes to objects.
*JRNLIB (1): Journal library. Identifies the library that contains the journal. If blank, the object has never been journaled.
*JRNOMIT (1): Journal omit. Indicates whether file open and close journal entries are omitted.

Notes:
1. When these values are specified on a compare command, the journal status (*JOURNALED) attribute is always evaluated first. The result of the journal status comparison determines whether the command will compare the specified attribute.
2. Although *JRNIMG can be specified on the CMPIFSA command, it is not compared even when the journal status is as expected. The journal image status is reflected as not supported (*NS) because IBM i only supports after (*AFTER) images.

Compares that do not specify a data group - When no data group is specified on the compare request, MIMIX compares the journaled status (*JOURNALED attribute). Table 93 shows the result displayed in the Difference Indicator field. If the file or object is not journaled on both systems, the compare ends. If both source and target systems are journaled, MIMIX then compares any other specified journaling attribute.
Table 93. Difference indicator values for *JOURNALED attribute when no data group is specified

    Target journal     Source journal status (1)
    status (1)         Yes        No         *NOTFOUND
    Yes                *EQ        *NE        *NE
    No                 *NE        *EQ        *NE
    *NOTFOUND          *NE        *NE        *UN

    (1) The returned values for journal status found on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.
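For illustration, a compare of the journal attributes that does not specify a data group might be requested as shown below. This is a sketch only: DGDFN(*NONE), the file selection parameter (FILE1), and the attribute selection parameter (CMPATR) are assumed names to confirm with the command prompter; the *JOURNAL group value is the one described above:

    CMPFILA DGDFN(*NONE) FILE1(PAYLIB/*ALL) CMPATR(*JOURNAL) OUTPUT(*OUTFILE) OUTFILE(MYLIB/FILAJRN)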

Compares that specify a data group - When a data group is specified on the compare request, MIMIX compares the journaled status (*JOURNALED attribute) to the configuration values. If both source and target systems are journaled according to the expected configuration settings, then MIMIX compares any other specified journaling attribute against the configuration settings. The Compare commands vary slightly in which configuration settings are checked. For CMPFILA requests, if the journaled status is as configured, any other specified journal attributes are compared. Possible results from comparing the *JOURNALED attribute are shown in Table 94. For CMPOBJA and CMPIFSA requests, if the journaled status is as configured and the configuration specifies *YES for Cooperate with database (COOPDB), then any other specified journal attributes are compared. Possible results from comparing the *JOURNALED attribute are shown in Table 94 and Table 95. If the configuration specifies COOPDB(*NO), only the journaled status is compared; possible results are shown in Table 96.

Table 94, Table 95, and Table 96 show results for the *JOURNALED attribute that can appear in the Difference Indicator field when the compare request specified a data group and considered the configuration settings.


Table 94 shows results when the configured settings for Journal on target and Cooperate with database are both *YES.
Table 94. Difference indicator values for *JOURNALED attribute when a data group is specified and the configuration specifies *YES for JRNTGT and COOPDB

    Target journal     Source journal status (1)
    status (1)         Yes        No         *NOTFOUND
    Yes                *EC        *NC        *NE
    No                 *EC        *NC        *NE
    *NOTFOUND          *NE        *NE        *UN

    (1) The returned values for journal status found on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.

Table 95 shows results when the configured settings are *NO for Journal on target and *YES for Cooperate with database.

Table 95. Difference indicator values for *JOURNALED attribute when a data group is specified and the configuration specifies *NO for JRNTGT and *YES for COOPDB

    Target journal     Source journal status (1)
    status (1)         Yes        No         *NOTFOUND
    Yes                *NC        *NC        *NE
    No                 *EC        *NC        *NE
    *NOTFOUND          *NE        *NE        *UN

    (1) The returned values for journal status found on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.

Table 96 shows results when the configured setting for Cooperate with database is *NO. In this scenario, you may want to investigate further. Even though the Difference Indicator shows values marked as configured (*EC), the object can be not journaled on one or both systems. The actual journal status values are returned in the System 1 Value (SYS1VAL) and System 2 Value (SYS2VAL) fields.

Table 96. Difference indicator values for *JOURNALED attribute when a data group is specified and the configuration specifies *NO for COOPDB

    Target journal     Source journal status (1)
    status (1)         Yes        No         *NOTFOUND
    Yes                *EC        *EC        *NE
    No                 *EC        *EC        *NE
    *NOTFOUND          *NE        *NE        *UN

    (1) The returned values for journal status found on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.

How configured journaling settings are determined


When a data group is specified on a compare request, MIMIX also considers configuration settings when comparing journaling attributes. For comparison purposes, MIMIX assumes that the source system is journaled and that the target system is journaled according to configuration settings. Depending on the command used, there are slight differences in what configuration settings are checked. The CMPFILA, CMPOBJA, and CMPIFSA commands retrieve the following configurable journaling attributes from the data group definition:
- The Journal on target (JRNTGT) parameter identifies whether activity replicated through the user journal is journaled on the target system. The default value is *YES.
- The System 1 journal definition (JRNDFN1) and System 2 journal definition (JRNDFN2) values are retrieved and used to determine the source journal, source journal library, target journal, and target journal library.
- Values for the elements Journal image and Omit open/close entries specified in the File entry options (FEOPT) parameter are retrieved. The default values are *AFTER and *YES, respectively.

Because the data group's values for Journal image and Omit open/close entries can be overridden by a data group file entry or a data group object entry, the CMPFILA and CMPOBJA commands also retrieve these values from the entries. The values determined after the order of precedence is resolved, sometimes called the overall MIMIX configuration values, are used for the compare. For CMPOBJA and CMPIFSA requests, the value of the Cooperate with database (COOPDB) parameter is retrieved from the data group object entry or data group IFS entry. The default value in object entries is *YES, while the default value in IFS entries is *NO.


Comparison results for auxiliary storage pool ID (*ASP)


The Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA) commands support comparing the auxiliary storage pool (*ASP) attribute for objects replicated from the user journal. These commands function similarly. When a compare is requested, MIMIX determines the result displayed in the Difference Indicator field by considering whether a data group was specified on the compare request. Compares that do not specify a data group - When no data group is specified on the compare request, MIMIX compares the *ASP attribute for all files or objects that match the selection criteria specified in the request. The result is displayed in the Difference Indicator field; Table 97 shows the possible results.
Table 97. Difference Indicator values when no data group is specified

    Target ASP         Source ASP values (1)
    values (1)         ASP1       ASP2       *NOTFOUND
    ASP1               *EQ        *NE        *NE
    ASP2               *NE        *EQ        *NE
    *NOTFOUND          *NE        *NE        *EQ

    (1) The returned values for the *ASP attribute on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.
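For example, a request of the following form (a sketch; the object selection and attribute selection parameter names are assumptions to verify with the command prompter) would produce one row per object, with the *ASP value found on each system returned in the SYS1VAL and SYS2VAL fields and the Difference Indicator set as shown in Table 97:

    CMPOBJA OBJ1(APPLIB) OBJTYPE(*LIB) CMPATR(*ASP) OUTPUT(*OUTFILE) OUTFILE(MYLIB/ASPOUT)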

Compares that specify a data group - When a data group is specified on the compare request (CMPFILA, CMPDLOA, CMPIFSA commands), MIMIX does not compare the *ASP attribute. When a data group is specified on a CMPOBJA request which specifies any object type except libraries (*LIB), MIMIX does not compare the *ASP attribute. Table 98 shows the possible results in the Difference Indicator field.
Table 98. Difference Indicator values for non-library objects when the request specified a data group

    Target ASP         Source ASP values (1)
    values (1)         ASP1         ASP2         *NOTFOUND
    ASP1               *NOTCMPD     *NOTCMPD     *NE
    ASP2               *NOTCMPD     *NOTCMPD     *NE
    *NOTFOUND          *NE          *NE          *EQ

    (1) The returned values for the *ASP attribute on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.


For CMPOBJA requests which specify a data group and an object type of *LIB, MIMIX considers configuration settings for the library. Values for the System 1 library ASP number (LIB1ASP), System 1 library ASP device (LIB1ASPD), System 2 library ASP number (LIB2ASP), and System 2 library ASP device (LIB2ASPD) are retrieved from the data group object entry and used in the comparison. Table 99, Table 100, and Table 101 show the possible results in the Difference Indicator field. Note: For Table 99, Table 100, and Table 101, the results are the same even if the system roles are switched. Table 99 shows the expected values for the ASP attribute when the request specifies a data group and the configuration specifies *SRCLIB for the System 1 library ASP number and the data source is system 2.
Table 99. Difference Indicator values for libraries when a data group is specified and configured values are LIB1ASP(*SRCLIB) and DTASRC(*SYS2)

    Target ASP         Source ASP values (1)
    values (1)         ASP1       ASP2       *NOTFOUND
    ASP1               *EC        *NC        *NE
    ASP2               *NC        *EC        *NE
    *NOTFOUND          *NE        *NE        *EQ

    (1) The returned values for the *ASP attribute on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.

Table 100 shows the expected values for the ASP attribute when the request specifies a data group and the configuration specifies 1 for the System 1 library ASP number and the data source is system 2.
Table 100. Difference Indicator values for libraries when a data group is specified and configured values are LIB1ASP(1) and DTASRC(*SYS2)

    Target ASP         Source ASP values (1)
    values (1)         1          2          *NOTFOUND
    1                  *EC        *EC        *NE
    2                  *NC        *NC        *NE
    *NOTFOUND          *NE        *NE        *EQ

    (1) The returned values for the *ASP attribute on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.

Table 101 shows the expected values for the ASP attribute when the request specifies a data group and the configuration specifies *ASPDEV for the System 1 library ASP number, DEVNAME is specified for the System 1 library ASP device, and the data source is system 2.
Table 101. Difference Indicator values for libraries when a data group is specified and configured values are LIB1ASP(*ASPDEV), LIB1ASPD(DEVNAME) and DTASRC(*SYS2)

    Target ASP         Source ASP values (1)
    values (1)         1          2          *NOTFOUND
    DEVNAME            *EC        *EC        *NE
    2                  *NC        *NC        *NE
    *NOTFOUND          *NE        *NE        *EQ

    (1) The returned values for the *ASP attribute on the Source and Target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.


Comparison results for user profile status (*USRPRFSTS)


When comparing the attribute *USRPRFSTS (user profile status) with the Compare Object Attributes (CMPOBJA) command, MIMIX determines the result displayed in the Difference Indicator field by considering the following:
- The status values of the object on both the source and target systems
- Configured values for replicating user profile status, at the data group and object entry levels
- The value of the Data group definition (DGDFN) parameter specified on the CMPOBJA command

Compares that do not specify a data group - When the CMPOBJA command does not specify a data group, MIMIX compares the status values between source and target systems. The result is displayed in the Difference Indicator field, according to Table 85 in Interpreting results of audits that compare attributes on page 586.

Compares that specify a data group - When the CMPOBJA command specifies a data group, MIMIX checks the configuration settings and the values on one or both systems. (For additional information, see How configured user profile status is determined on page 616.) When the configured value is *SRC, the CMPOBJA command compares the values on both systems. The user profile status on the target system must be the same as the status on the source system, otherwise an error condition is reported. Table 102 shows the possible values.
Table 102. Difference Indicator values when configured user profile status is *SRC

    Target user          Source user profile status
    profile status       *ENABLED     *DISABLED    *NOTFOUND
    *ENABLED             *EC          *NC          *NE
    *DISABLED            *NC          *EC          *NE
    *NOTFOUND            *NE          *NE          *UN
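For illustration, a compare of this attribute for a replicated user profile might be requested as follows (a sketch; the object selection and attribute selection parameter names are assumptions to verify with the command prompter):

    CMPOBJA DGDFN(MYDGDFN) OBJ1(PAYUSER) OBJTYPE(*USRPRF) CMPATR(*USRPRFSTS) OUTPUT(*PRINT)

Because the configured value is honored, a profile that is *DISABLED on the target can still report *EC when the configuration expects *DISABLED, as Table 104 shows.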

When the configured value is *ENABLED or *DISABLED, the CMPOBJA command checks the target system value against the configured value. If the user profile status on the target system does not match the configured value, an error condition is reported. The source system user profile status is not relevant. Table 103 and Table 104 show the possible values when configured values are *ENABLED or *DISABLED, respectively.

Table 103. Difference Indicator values when configured user profile status is *ENABLED

    Target user          Source user profile status
    profile status       *ENABLED     *DISABLED    *NOTFOUND
    *ENABLED             *EC          *EC          *NE
    *DISABLED            *NC          *NC          *NE
    *NOTFOUND            *NE          *NE          *UN

Table 104. Difference Indicator values when configured user profile status is *DISABLED

    Target user          Source user profile status
    profile status       *ENABLED     *DISABLED    *NOTFOUND
    *ENABLED             *NC          *NC          *NE
    *DISABLED            *EC          *EC          *NE
    *NOTFOUND            *NE          *NE          *UN

When the configured value is *TGT, the CMPOBJA command does not compare the values because the result is indeterminate. Any differences in user profile status between systems are not reported. Table 105 shows the possible values.

Table 105. Difference Indicator values when configured user profile status is *TGT

    Target user          Source user profile status
    profile status       *ENABLED     *DISABLED    *NOTFOUND
    *ENABLED             *NA          *NA          *NE
    *DISABLED            *NA          *NA          *NE
    *NOTFOUND            *NE          *NE          *UN

How configured user profile status is determined


The data group definition determines the behavior for replicating user profile status unless it is explicitly overridden by a non-default value in a data group object entry. The value determined after the order of precedence is resolved is sometimes called the overall MIMIX configuration value. Unless specified otherwise in the data group or in an object entry, the default is to use the value *SRC from the data group definition. Table 106 shows the possible values at both the data group and object entry levels.
Table 106. Configuration values for replicating user profile status

*DGDFT: Only available for data group object entries, this value indicates that the value specified in the data group definition is used for the user profile status. This is the default value for object entries.
*DISABLE (1): The status of the user profile is set to *DISABLED when the user profile is created or changed on the target system.
*ENABLE (1): The status of the user profile is set to *ENABLED when the user profile is created or changed on the target system.
*SRC: This is the default value in the data group definition. The status of the user profile on the source system is always used when the user profile is created or changed on the target system.
*TGT: If a new user profile is created, the status is set to *DISABLED. If an existing user profile is changed, the status of the user profile on the target system is not altered.

Note:
1. Data group definitions use these values. In data group object entries, the values *DISABLED and *ENABLED are used but have the same meaning.


Comparison results for user profile password (*PRFPWDIND)


When comparing the attribute *PRFPWDIND (user profile password indicator) with the Compare Object Attributes (CMPOBJA) command, MIMIX assumes that the user profile names are the same on both systems. User profile passwords are only compared if the user profile name is the same on both systems and the user profile of the local system is enabled and has a defined password. If the local or remote user profile has a password of *NONE, or if the local user profile is disabled or expired, the user profile password is not compared. The System Indicator fields will indicate that the attribute was not compared (*NOTCMPD). The Difference Indicator field will also return a value of not compared (*NA).

The CMPOBJA command does not support name mapping while comparing the *PRFPWDIND attribute. If the user profile names are different, or if you attempt name mapping, the System Indicator fields will indicate that comparing the attribute is not supported (*NOTSPT). The Difference Indicator field will also return a value of not supported (*NS).

The following tables identify the expected results when the user profile password is compared. Note that the local system is the system on which the command is being run, and the remote system is defined as System 2. Table 107 shows the possible Difference Indicator values when the user profile passwords are the same on the local and remote systems and are not defined as *NONE.
Table 107. Difference Indicator values when user profile passwords are the same, but not *NONE

    Remote system            Local system user profile password
    user profile password    *ENABLED    *DISABLED    Expired    Not found
    *ENABLED                 *EQ         *NA          *NA        *NE
    *DISABLED                *EQ         *NA          *NA        *NE
    Expired                  *EQ         *NA          *NA        *NE
    Not found                *NE         *NE          *NE        *EQ


Table 108 shows the possible Difference Indicator values when the user profile passwords are different on the local and remote systems and are not defined as *NONE.
Table 108. Difference Indicator values when user profile passwords are different, but not *NONE

    Remote system            Local system user profile password
    user profile password    *ENABLED    *DISABLED    Expired    Not found
    *ENABLED                 *NE         *NA          *NA        *NE
    *DISABLED                *NE         *NA          *NA        *NE
    Expired                  *NE         *NA          *NA        *NE
    Not found                *NE         *NE          *NE        *EQ

Table 109 shows the possible Difference Indicator values when the user profile passwords are defined as *NONE on the local and remote systems.
Table 109. Difference Indicator values when user profile passwords are *NONE

    Remote system            Local system user profile password
    user profile password    *ENABLED    *DISABLED    Expired    Not found
    *ENABLED                 *NA         *NA          *NA        *NE
    *DISABLED                *NA         *NA          *NA        *NE
    Expired                  *NA         *NA          *NA        *NE
    Not found                *NE         *NE          *NE        *EQ


Appendix F

Outfile formats
This section contains the output file (outfile) formats for those MIMIX commands that provide outfile support. Lakeview Technology provides a model database file that defines the record format for each outfile. These database files can be found in the product installation library. Public authority to the created outfile is the same as the create authority of the library in which the file is created. Use the Display Library Description (DSPLIBD) command to see the create authority of the library. You can use the Run Query (RUNQRY) command to display outfiles with column headings and data type formatting if you have the licensed program 5722QU1, Query, installed. Otherwise, you can use the Display File Field Description (DSPFFD) command to see detailed outfile information, such as the field length, type, starting position, and number of bytes.
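For example, if an audit wrote its results to an outfile named AUDITOUT in library MYLIB (both names are illustrative), the following commands display the data with column headings and then show the field-level definitions:

    RUNQRY QRY(*NONE) QRYFILE((MYLIB/AUDITOUT))
    DSPFFD FILE(MYLIB/AUDITOUT)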

Outfile support in MIMIX Availability Manager


MIMIX Availability Manager provides enhanced MIMIX output file information for the compare commands used by audits. For these commands, MIMIX Availability Manager provides a subsetted view of the output file in a window unique to the command. Each output file window provides options for taking actions that are appropriate for the errors detected, including problems and recovered items. Note: Corrective action is only available for output files associated with Notifications. Recovery output files only display entries that have problems or have already been recovered. All other output files generated by other commands are shown in their entirety in the Output File Information window. The outfile display can be customized using Preferences. For more information about audit results, see Interpreting audit results - MIMIX Availability Manager on page 575 and Interpreting audit results - 5250 emulator on page 576.


Work panels with outfile support


The following table lists the work panels with outfile support.
Table 110. Work panels with outfile support

WRKDGDFN: Work with DG Definitions
WRKJRNDFN: Work with Journal Definitions
WRKTFRDFN: Work with Transfer Definitions
WRKSYSDFN: Work with System Definitions
WRKDGFE: Work with DG File Entries
WRKDGDAE: Work with DG Data Area Entries
WRKDGOBJE: Work with DG Object Entries
WRKDGDLOE: Work with DG DLO Entries
WRKDGIFSE: Work with DG IFS Entries
WRKDGACT: Work with DG Activity
WRKDGACTE: Work with DG Activity Entries
WRKDGIFSTE: Work with DG IFS Tracking Entries
WRKDGOBJTE: Work with DG Object Tracking Entries


MCAG outfile (WRKAG command)


The following fields are available if you specify *OUTFILE on the Output parameter of the Work with Application Groups (WRKAG) command.
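For example, the following request creates the outfile; OUTPUT(*OUTFILE) is the documented value, while the outfile name and the OUTFILE parameter name are shown as assumptions to confirm with the command prompter:

    WRKAG OUTPUT(*OUTFILE) OUTFILE(MYLIB/AGOUT)

The file MYLIB/AGOUT is then created with the MCAG record format described in Table 111.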
Table 111. MCAG outfile (WRKAG command). Each entry lists the field, its description, type and length, valid values, and the column heading.

AGDFN: Application group definition. CHAR(10). Valid values: user-defined name. Column heading: AGDFN NAME.
USRPRF: User profile. CHAR(10). Valid values: any valid user profile. Column heading: USER PROFILE.
APP: Application name. CHAR(10). Valid values: *AGDFN, user-defined name. Column heading: APP NAME.
APPLIB: Application library. CHAR(10). Valid values: *APP, user-defined name. Column heading: APP LIBRARY.
RLSLVL: Application release level. CHAR(10). Valid values: user-defined value. Column heading: APP RELEASE LEVEL.
PARENT: Parent application group. CHAR(10). Valid values: *AGDFN, *NONE, *PARENT, user-defined name. Column heading: PARENT APP GROUP.
EXITPGM: Application CRG exit program. CHAR(10). Valid values: user-defined name. Column heading: APP CRG EXIT PGM.
EXITPGMLIB: Application CRG exit program library. CHAR(10). Valid values: *APPLIB, user-defined name. Column heading: CRG EXIT PGM LIB.
JOB: Exit program job name. CHAR(10). Valid values: *APP, *JOBD, user-defined name. Column heading: CRG EXIT PGM JOB NAME.
EXITDTA: Exit program data. CHAR(256). Valid values: user-defined value. Column heading: CRG EXIT PGM DATA.
NBRRESTART: Number of restarts. PACKED(5 0). Valid values: 0-3. Column heading: NUMBER OF RESTARTS.
HOST: Takeover IP address. CHAR(256). Valid values: user-defined value. Column heading: TAKEOVER IP ADDRESS.
TEXT: Description. CHAR(50). Valid values: user-defined value. Column heading: DESCRIPTION.
UPDENV: Update cluster environment. CHAR(10). Valid values: *YES, *NO. Column heading: UPDATE CLUSTER ENV.
IDA: Input data area name. CHAR(10). Valid values: BLANK, name of the input data area. Column heading: INPUT DATA AREA NAME.
AGSTS: Application CRG status. CHAR(10). Valid values: BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL, *INDOUBT, *RESTORED, *ADDNODPND, *DLTPND, *DLTCMDPND, *CHGPND, *CRTPND, *ENDCRGPND, *RMVNODPND, *STRCRGPND, *SWTPND. Column heading: APP CRG STATUS.
AGNODS: Application CRG nodes status. CHAR(10). Valid values: BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL. Column heading: APP CRG NODES STATUS.
DCSTS: Data CRGs status. CHAR(10). Valid values: BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL, *INDOUBT, *RESTORED, *ADDNODPND, *DLTPND, *DLTCMDPND, *CHGPND, *CRTPND, *ENDCRGPND, *RMVNODPND, *STRCRGPND, *SWTPND. Column heading: DATA CRG STATUS.
DCNODS: Data CRG nodes status. CHAR(10). Valid values: BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL. Column heading: DATA CRG NODES STATUS.
REPSTS: Data group status. CHAR(10). Valid values: BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL. Column heading: DG STATUS.
FMSGQL: Failover message queue library. CHAR(10). Valid values: *NONE, user-defined name. Column heading: FAILOVER MSGQ LIBRARY.
FMSGQN: Failover message queue name. CHAR(10). Valid values: *NONE, user-defined name. Column heading: FAILOVER MSGQ NAME.
FWTIME: Failover wait time. PACKED(5 0). Valid values: *NOMAX, 1-32767. Column heading: FAILOVER WAIT TIME.
FDFTACT: Failover default action. PACKED(5 0). Valid values: *CANCEL, *PROCEED. Column heading: FAILOVER DFT ACTION.

MCDTACRGE outfile (WRKDTARGE command)


The following fields are available if you specify *OUTFILE on the Output parameter of the Work with Data CRG Entries (WRKDTARGE) command.
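For example, under the same assumptions as the other work commands (illustrative names; the OUTFILE parameter name is unverified):

    WRKDTARGE OUTPUT(*OUTFILE) OUTFILE(MYLIB/DTACRGOUT)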
Table 112. MCDTACRGE outfile (WRKDTARGE command). Each entry lists the field, its description, type and length, valid values, and the column heading.

DTACRG: Data CRG. CHAR(10). Valid values: user-defined name. Column heading: DATA CRG.
DGDFN: Data group name. CHAR(10). Valid values: *DTACRG, user-defined name. Column heading: DGDFN NAME.
AGDFN: Application group definition. CHAR(10). Valid values: user-defined name. Column heading: AGDFN NAME.
JRN: Journal name. CHAR(10). Valid values: *DGDFN, user-defined name. Column heading: JOURNAL.
JRNLIB: Journal library. CHAR(10). Valid values: user-defined name. Column heading: JOURNAL LIBRARY.
OSF: Object specifier file. CHAR(10). Valid values: *DTACRG, user-defined name. Column heading: OBJECT SPECIFIER FILE (OSF).
OSFLIB: Object specifier file library. CHAR(10). Valid values: *AGDFN, user-defined name. Column heading: OSF LIBRARY.
OSFMBR: Object specifier file member. CHAR(10). Valid values: *DTACRG, user-defined name. Column heading: OSF MEMBER.
DELIVERY: RJ mode. CHAR(10). Valid values: *NONE, *ASYNC, *SYNC. Column heading: RJ MODE (DELIVER).
EXITPGM: Data CRG exit program. CHAR(10). Valid values: MMXDTACRG, user-defined name. Column heading: DATA CRG EXIT PGM.
EXITPGMLIB: Data CRG exit program library. CHAR(10). Valid values: *MIMIX, user-defined name. Column heading: DATA CRG EXIT PGM LIBRARY.
DCSTS: Data CRGs status. CHAR(10). Valid values: BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL, *INDOUBT, *RESTORED, *ADDNODPND, *DLTPND, *DLTCMDPND, *CHGPND, *CRTPND, *ENDCRGPND, *RMVNODPND, *STRCRGPND, *SWTPND. Column heading: DATA CRG STATUS.
DCNODS: Data CRG nodes status. CHAR(10). Valid values: BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL. Column heading: DATA CRG NODES STATUS.
REPSTS: Data group status. CHAR(10). Valid values: BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL. Column heading: DG STATUS.
DEVCRG: Device CRG name. CHAR(10). Valid values: user-defined name. Column heading: DEVICE CRG.
ASPGRP: ASP group. CHAR(10). Valid values: *NONE, user-defined name. Column heading: ASP GROUP.
DTATYPE: Data resource group type. CHAR(10). Valid values: *DEV, *DTA, *PEER, *XSM. Column heading: DATA RESOURCE TYPE.
FMSGQL: Failover message queue library. CHAR(10). Valid values: *AGDFN, *NONE, user-defined name. Column heading: FAILOVER MSGQ LIBRARY.
FMSGQN: Failover message queue name. CHAR(10). Valid values: *AGDFN, *NONE, user-defined name. Column heading: FAILOVER MSGQ NAME.
FWTIME: Failover wait time. PACKED(5 0). Valid values: *AGDFN, *NOMAX, 1-32767. Column heading: FAILOVER WAIT TIME.
FDFTACT: Failover default action. PACKED(5 0). Valid values: *AGDFN, *CANCEL, *PROCEED. Column heading: FAILOVER DFT ACTION.
ADMDMN: Cluster administrative domain. CHAR(10). Valid values: *NONE, user-defined value. Column heading: CLUSTER ADMINISTRATIVE DOMAIN.
SYNCOPT: Synchronization option. PACKED(10 5). Valid values: *LASTCHG, *ACTDMN. Column heading: SYNCHRONIZATION DOMAIN.

Updated for 5.0.13.00.

MCNODE outfile (WRKNODE command)


The following fields are available if you specify *OUTFILE on the Output parameter of the Work with Node Entries (WRKNODE) command.
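For example (illustrative names; the OUTFILE parameter name is an assumption), the outfile can be created and then examined with Query:

    WRKNODE OUTPUT(*OUTFILE) OUTFILE(MYLIB/NODEOUT)
    RUNQRY QRY(*NONE) QRYFILE((MYLIB/NODEOUT))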
Table 113. MCNODE outfile (WRKNODE command). Each entry lists the field, its description, type and length, valid values, and the column heading.

AGDFN: Data CRG. CHAR(10). Valid values: user-defined name. Column heading: AGDFN NAME.
CRG: CRG name. CHAR(10). Valid values: *AGDFN, user-defined name. Column heading: CRG NAME.
NODE: System name. CHAR(8). Valid values: user-defined name. Column heading: NODE.
CURROLE: Current role. CHAR(10). Valid values: *PRIMARY, *BACKUP, *REPLICATE, *UNDEFINED. Column heading: CURRENT ROLE.
CURSEQ: Current sequence. PACKED(5 0). Valid values: -2, -1, 0-127 (-2 = *UNDEFINED, -1 = *REPLICATE, 0 = *PRIMARY, 1-127 = *BACKUP sequence). Column heading: CURRENT SEQUENCE.
CURDTAPVD: Current data provider. CHAR(10). Valid values: *PRIMARY, *BACKUP, *UNDEFINED, user-defined name. Column heading: CURRENT DATA PROVIDER.
PREFROLE: Preferred role. CHAR(10). Valid values: *PRIMARY, *BACKUP, *REPLICATE, *UNDEFINED. Column heading: PREFERRED ROLE.
PREFSEQ: Preferred sequence. PACKED(5 0). Valid values: -2, -1, 0-127 (-2 = *UNDEFINED, -1 = *REPLICATE, 0 = *PRIMARY, 1-127 = *BACKUP sequence). Column heading: PREFERRED SEQUENCE.
CFGROLE: Configured role. CHAR(10). Valid values: *PRIMARY, *BACKUP, *REPLICATE, *UNDEFINED. Column heading: CONFIGURED ROLE.
CFGSEQ: Configured sequence. PACKED(5 0). Valid values: -2, -1, 0-127 (-2 = *UNDEFINED, -1 = *REPLICATE, 0 = *PRIMARY, 1-127 = *BACKUP sequence). Column heading: CONFIGURED SEQUENCE.
CFGDTAPVD: Configured data provider. CHAR(10). Valid values: *PRIMARY, *BACKUP, *UNDEFINED, user-defined name. Column heading: CONFIGURED DATA PROVIDER.
STATUS: CRG node status. CHAR(10). Valid values: *ACTIVE, *INACTIVE, *ATTN, *NONE, *NOTAVAIL, *UNKNOWN. Column heading: CRG NODE STATUS.

MXCDGFE outfile (CHKDGFE command)


The following fields are available if you specify *OUTFILE on the Output parameter of the Check Data Group File Entries (CHKDGFE) command. The command is also called by audits which run the #DGFE rule. For additional information, see Interpreting results for configuration data - #DGFE audit on page 580.
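For illustration (the data group and outfile names are placeholders, and the DGDFN and OUTFILE parameter names are assumptions consistent with the fields described below):

    CHKDGFE DGDFN(MYDGDFN) OUTPUT(*OUTFILE) OUTFILE(MYLIB/DGFEOUT)

Each row in the outfile then carries a RESULT value, such as *NODGFE or *NOFILE, that identifies the condition detected for a file entry.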
Table 114. MXCDGFE outfile (CHKDGFE command) Field TIMESTAMP COMMAND DGSHRTNM DGDFN DGSYS1 DGSYS2 DTASRC FILE LIB MBR OBJTYPE RESULT Description Timestamp (YYYY-MMDD.HH.MM.SSmmmm) Command name Data group short name Data group definition name System 1 System 2 Data source System 1 file name System 1 library name System 1 member name Object type Result Type, length TIMESTAMP CHAR(10) CHAR(3) CHAR(10) CHAR(8) CHAR(8) CHAR(10) CHAR(10) CHAR(10) CHAR(10) CHAR(10) CHAR(10) Valid values SAA timestamp CHKDGFE Short data group name User-defined data group name User-defined system name User-defined system name *SYS1, *SYS2 User-defined name User-defined name User-defined name *FILE *NODGFE, *EXTRADGFE, *NOFILE, *NOMBR, *RCYFAILED, *RECOVERED, *UA *NONE, *NOFILECHK User-defined name User-defined name User-defined name Column headings TIMESTAMP COMMAND NAME DGDFN SHORT NAME DGDFN NAME SYSTEM 1 SYSTEM 2 DATA SOURCE SYSTEM 1 OBJECT SYSTEM 1 LIBRARY SYSTEM 1 MEMBER OBJECT TYPE RESULT

OPTION FILE2 LIB2 MBR2

Option System 2 file name System 2 library name System 2 member name

CHAR(100) CHAR(10) CHAR(10) CHAR(10)

OPTION SYSTEM 2 OBJECT SYSTEM 2 LIBRARY SYSTEM 2 MEMBER

237

MXCDGFE outfile (CHKDGFE command)

Table 114. MXCDGFE outfile (CHKDGFE command) Field ASPDEV Description Source ASP device Type, length CHAR(10) Valid values Column headings

*UNKNOWN - if object not found ASP DEVICE or an API error *SYSBAS - if object in ASP 1-32 User-defined name - if object in ASP 33-255 PF-DTA, PF-SRC, LF, PF38-DTA, PF38-SRC, LF38 OBJECT ATTRIBUTE

OBJATR

Object attribute

CHAR(10)

237
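As a sketch (the data group name MYDG and outfile MYLIB/DGFERSLT are placeholders), run CHKDGFE DGDFN(MYDG) OUTPUT(*OUTFILE) OUTFILE(MYLIB/DGFERSLT), then isolate the file entries that still need attention after any automatic recovery:

    -- Show only the problem results; *RECOVERED entries are excluded.
    SELECT LIB, FILE, MBR, RESULT, OPTION
      FROM MYLIB/DGFERSLT
     WHERE RESULT IN ('*NODGFE', '*EXTRADGFE', '*NOFILE',
                      '*NOMBR', '*RCYFAILED')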

MXCMPDLOA outfile (CMPDLOA command)

For additional supporting information, see Interpreting results of audits that compare attributes on page 586.

Table 115. CMPDLOA Output file (MXCMPDLOA)

Field | Description | Type, length | Valid values | Column headings
TIMESTAMP | Timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm) | CHAR(26) | SAA timestamp | TIMESTAMP
COMMAND | Command name | CHAR(10) | CMPDLOA | COMMAND NAME
DGSHRTNM | Data group short name | CHAR(3) | Short data group name | DGDFN SHORT NAME
DGNAME | Data group definition name | CHAR(10) | User-defined data group name; blank if no DG specified on the command | DGDFN NAME
SYSTEM1 | System 1 | CHAR(8) | User-defined system name; local system name if no DG specified | SYSTEM 1
SYSTEM2 | System 2 | CHAR(8) | User-defined system name; remote system name if no DG specified | SYSTEM 2
DTASRC | Data source | CHAR(10) | *SYS1, *SYS2 | DATA SOURCE
SYS1DLO | System 1 DLO name | CHAR(76) | User-defined name | SYSTEM 1 DLO
SYS2DLO | System 2 DLO name | CHAR(76) | User-defined name | SYSTEM 2 DLO
CCSID | DLO name CCSID | BIN(5) | User-defined name | CCSID
CNTRYID | DLO name country ID | CHAR(2) | System-defined name | CNTRYID
LANGID | DLO name language ID | CHAR(3) | System-defined name | LANGID
CMPATR | Compared attribute | CHAR(10) | See Attributes compared and expected results - #DLOATR audit on page 606 | COMPARED ATTRIBUTE
SYS1IND | System 1 file indicator | CHAR(10) | See Table 87 in Where was the difference detected on page 589 | SYSTEM 1 INDICATOR
SYS2IND | System 2 file indicator | CHAR(10) | See Table 87 in Where was the difference detected on page 589 | SYSTEM 2 INDICATOR
DIFIND | Differences indicator | CHAR(10) | See What attribute differences were detected on page 587 | DIFFERENCE INDICATOR
SYS1VAL | System 1 value of the specified attribute | VARCHAR(2048) MINLEN(50) | See Attributes compared and expected results - #DLOATR audit on page 606 | SYSTEM 1 VALUE
SYS1CCSID | System 1 value CCSID | BIN(5) | 1-65535 | SYSTEM 1 CCSID
SYS2VAL | System 2 value of the specified attribute | VARCHAR(2048) MINLEN(50) | See Attributes compared and expected results - #DLOATR audit on page 606 | SYSTEM 2 VALUE
SYS2CCSID | System 2 value CCSID | BIN(5) | 1-65535 | SYSTEM 2 CCSID
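For example, assuming the compare was run so that only differences are written to the outfile (the file name MYLIB/DLOATRS is a placeholder), a quick summary shows which DLO attributes differ most often:

    SELECT CMPATR, COUNT(*) AS NBRDIFF
      FROM MYLIB/DLOATRS
     GROUP BY CMPATR
     ORDER BY NBRDIFF DESC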

MXCMPFILA outfile (CMPFILA command)

For additional supporting information, see Interpreting results of audits that compare attributes on page 586.

Table 116. CMPFILA Output file (MXCMPFILA)

Field | Description | Type, length | Valid values | Column headings
TIMESTAMP | Timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm) | TIMESTAMP | SAA timestamp | TIMESTAMP
COMMAND | Command name | CHAR(10) | CMPFILA | COMMAND NAME
DGSHRTNM | Data group short name | CHAR(3) | Short data group name | DGDFN SHORT NAME
DGNAME | Data group definition name | CHAR(10) | User-defined data group name; blank if no DG specified on the command | DGDFN NAME
SYSTEM1 | System 1 | CHAR(8) | User-defined system name; local system name if no DG specified | SYSTEM 1
SYSTEM2 | System 2 | CHAR(8) | User-defined system name; remote system name if no DG specified | SYSTEM 2
DTASRC | Data source | CHAR(10) | *SYS1, *SYS2 | DATA SOURCE
SYS1OBJ | System 1 object name | CHAR(10) | User-defined name | SYSTEM 1 FILE
SYS1LIB | System 1 library name | CHAR(10) | User-defined name | SYSTEM 1 LIBRARY
MBR | Member name | CHAR(10) | User-defined name | MEMBER
SYS2OBJ | System 2 object name | CHAR(10) | System-defined name | SYSTEM 2 FILE
SYS2LIB | System 2 library name | CHAR(10) | System-defined name | SYSTEM 2 LIBRARY
OBJTYPE | Object type | CHAR(10) | *FILE | OBJECT TYPE
CMPATR | Compared attribute | CHAR(10) | See Attributes compared and expected results - #FILATR, #FILATRMBR audits on page 591 | COMPARED ATTRIBUTE
SYS1IND | System 1 file indicator | CHAR(10) | See Table 87 in Where was the difference detected on page 589 | SYSTEM 1 INDICATOR
SYS2IND | System 2 file indicator | CHAR(10) | See Table 87 in Where was the difference detected on page 589 | SYSTEM 2 INDICATOR
DIFIND | Differences indicator | CHAR(10) | See What attribute differences were detected on page 587 | DIFFERENCE INDICATOR
SYS1VAL | System 1 value of the specified attribute | VARCHAR(2048) MINLEN(50) | See Attributes compared and expected results - #FILATR, #FILATRMBR audits on page 591 | SYSTEM 1 VALUE
SYS1CCSID | System 1 value CCSID | BIN(5) | 1-65535 | SYSTEM 1 CCSID
SYS2VAL | System 2 value of the specified attribute | VARCHAR(2048) MINLEN(50) | See Attributes compared and expected results - #FILATR, #FILATRMBR audits on page 591 | SYSTEM 2 VALUE
SYS2CCSID | System 2 value CCSID | BIN(5) | 1-65535 | SYSTEM 2 CCSID
ASPDEV1 | System 1 ASP device | CHAR(10) | *NONE, user-defined name | SYSTEM 1 ASP DEVICE
ASPDEV2 | System 2 ASP device | CHAR(10) | *NONE, user-defined name | SYSTEM 2 ASP DEVICE
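A hedged sketch (the outfile name MYLIB/FILATRS is a placeholder): list differing attribute values side by side so the two systems can be compared directly.

    SELECT SYS1LIB, SYS1OBJ, MBR, CMPATR, SYS1VAL, SYS2VAL
      FROM MYLIB/FILATRS
     WHERE SYS1VAL <> SYS2VAL
     ORDER BY SYS1LIB, SYS1OBJ, MBR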

MXCMPFILD outfile (CMPFILDTA command)

For additional information for interpreting this outfile, see Interpreting results of audits for record counts and file data on page 582. The following fields require additional explanation:
- Major mismatches before - Indicates the number of mismatched records found. A value other than 0 (zero) indicates that there are either missing records or records whose data does not match.
- Major mismatches after - Indicates the number of mismatched records remaining. If repair was requested, this value should be 0 (zero); otherwise, the value should equal that shown in the Major mismatches before column.
- Minor mismatches after - Indicates the number of differences remaining that do not affect data integrity.
- Apply pending - Indicates the number of records for which the database apply process has not yet performed repair processing.
A query based on these fields is sketched below.
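For example (the outfile name MYLIB/FILDTA is a placeholder), members that still have integrity-affecting differences, or repairs still waiting on the database apply process, can be isolated with:

    SELECT SYS1LIB, SYS1OBJ, MBR, TOTRCDS,
           MAJMISMBEF, MAJMISMAFT, APYPENDING
      FROM MYLIB/FILDTA
     WHERE MAJMISMAFT > 0
        OR APYPENDING > 0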

Table 117. Compare File Data (CMPFILDTA) output file (MXCMPFILD)

Field | Description | Type, length | Valid values | Column headings
TIMESTAMP | Timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm) | TIMESTAMP | SAA timestamp | TIMESTAMP
COMMAND | Command name | CHAR(10) | CMPFILDTA | COMMAND NAME
DGSHRTNM | Data group short name | CHAR(3) | Short data group name | DGDFN SHORT NAME
DGNAME | Data group definition name | CHAR(10) | User-defined data group name; blank if no DG specified on the command | DGDFN NAME
SYSTEM1 | System 1 | CHAR(8) | User-defined system name; local system name if no DG specified | SYSTEM 1
SYSTEM2 | System 2 | CHAR(8) | User-defined system name; remote system name if no DG specified | SYSTEM 2
DTASRC | Data source | CHAR(10) | *SYS1, *SYS2 | DATA SOURCE
SYS1OBJ | System 1 object name | CHAR(10) | User-defined name | SYSTEM 1 OBJECT
SYS1LIB | System 1 library name | CHAR(10) | User-defined name | SYSTEM 1 LIBRARY
MBR | Member name | CHAR(10) | User-defined name | MEMBER
SYS2OBJ | System 2 object name | CHAR(10) | User-defined name | SYSTEM 2 OBJECT
SYS2LIB | System 2 library name | CHAR(10) | User-defined name | SYSTEM 2 LIBRARY
OBJTYPE | Object type | CHAR(10) | *FILE | OBJECT TYPE
DIFIND | Differences indicator | CHAR(10) | See What attribute differences were detected on page 587 | DIFFERENCE INDICATOR
REPAIRSYS | Repair system | CHAR(10) | *SYS1, *SYS2 | REPAIR SYSTEM
FILEREP | File repair successful | CHAR(10) | Blank, *YES, *NO | FILE REPAIR SUCCESSFUL
TOTRCDS | Total records compared | DECIMAL(20) | 0 - 99999999999999999999 | TOTAL RECORDS COMPARED
MAJMISMBEF | Major mismatches before processing | DECIMAL(20) | 0 - 99999999999999999999 | MAJOR MISMATCHES BEFORE PROCESSING
MAJMISMAFT | Major mismatches after processing | DECIMAL(20) | 0 - 99999999999999999999 | MAJOR MISMATCHES AFTER PROCESSING
MINMISMAFT | Minor mismatches after processing | DECIMAL(20) | 0 - 99999999999999999999 | MINOR MISMATCHES AFTER PROCESSING
APYPENDING | Apply pending records | DECIMAL(20) | 0 - 99999999999999999999 | ACTIVE RECORDS PENDING
ASPDEV1 | System 1 ASP device | CHAR(10) | *NONE, user-defined name | SYSTEM 1 ASP DEVICE
ASPDEV2 | System 2 ASP device | CHAR(10) | *NONE, user-defined name | SYSTEM 2 ASP DEVICE
TMPSQLVIEW | Temporary target system SQL view pathname | CHAR(33) | i5/OS-format path name or blanks | TEMPORARY TARGET SQL VIEW

MXCMPFILR outfile (CMPFILDTA command, RRN report)

This output file format is the result of specifying *RRN for the report type on the Compare File Data command. Output in this format enables you to see the relative record number (RRN) of the first 1,000 objects that failed to compare. This value is useful when resolving situations where a discrepancy is known to exist, but you are unsure which system contains the correct data. Viewing the RRN value provides information that enables you to display the specific records on the two systems and to determine the system on which the file should be repaired.

Table 118. Compare File Data (CMPFILDTA) relative record number (RRN) output file (MXCMPFILR)

Field | Description | Type, length | Valid values | Column headings
SYSTEM1 | System 1 | CHAR(8) | User-defined system name; local system name if no DG specified | SYSTEM 1
SYSTEM2 | System 2 | CHAR(8) | User-defined system name; remote system name if no DG specified | SYSTEM 2
SYS1OBJ | System 1 object name | CHAR(10) | User-defined name | SYSTEM 1 OBJECT
SYS1LIB | System 1 library name | CHAR(10) | User-defined name | SYSTEM 1 LIBRARY
MBR | Member name | CHAR(10) | User-defined name | MEMBER
SYS2OBJ | System 2 object name | CHAR(10) | User-defined name | SYSTEM 2 OBJECT
SYS2LIB | System 2 library name | CHAR(10) | User-defined name | SYSTEM 2 LIBRARY
RRN | Relative record number | DECIMAL(10) | Number | RRN
ASPDEV1 | System 1 ASP device | CHAR(10) | *NONE, user-defined name | SYSTEM 1 ASP DEVICE
ASPDEV2 | System 2 ASP device | CHAR(10) | *NONE, user-defined name | SYSTEM 2 ASP DEVICE
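A sketch of how the RRN values might be used (MYLIB/FILDRRN is a placeholder for the outfile, and APPLIB/ORDERS for the file under comparison):

    SELECT SYS1LIB, SYS1OBJ, MBR, RRN
      FROM MYLIB/FILDRRN
     ORDER BY SYS1LIB, SYS1OBJ, MBR, RRN

    -- Then display a suspect record on each system to decide which
    -- side holds the correct data (RRN is the DB2 for i function):
    SELECT * FROM APPLIB/ORDERS WHERE RRN(ORDERS) = 1234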

MXCMPRCDC outfile (CMPRCDCNT command)

For additional information for interpreting this outfile, see Interpreting results of audits for record counts and file data on page 582.

Table 119. Compare Record Count (CMPRCDCNT) output file (MXCMPRCDC)

Field | Description | Format | Valid values | Column headings
TIMESTAMP | Timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm) | TIMESTAMP | SAA timestamp | TIMESTAMP
COMMAND | Command name | CHAR(10) | CMPRCDCNT | COMMAND NAME
DGSHRTNM | Data group short name | CHAR(3) | Short data group name | DGDFN SHORT NAME
DGNAME | Data group definition name | CHAR(10) | User-defined data group name; blank if no DG specified on the command | DGDFN NAME
SYSTEM1 | System 1 | CHAR(8) | User-defined system name; local system name if no DG specified | SYSTEM 1
SYSTEM2 | System 2 | CHAR(8) | User-defined system name; remote system name if no DG specified | SYSTEM 2
DTASRC | Data source | CHAR(10) | *SYS1, *SYS2 | DATA SOURCE
SYS1OBJ | System 1 object name | CHAR(10) | User-defined name | SYSTEM 1 OBJECT
SYS1LIB | System 1 library name | CHAR(10) | User-defined name | SYSTEM 1 LIBRARY
MBR | Member name | CHAR(10) | User-defined name | MEMBER
DIFIND | Differences indicator | CHAR(10) | Refer to the differences indicator table | DIFFERENCE INDICATOR
SYS1CURCNT | System 1 current records | DECIMAL(20) | 0 - 99999999999999999999 | SYSTEM 1 CURRENT RECORDS
SYS2CURCNT | System 2 current records | DECIMAL(20) | 0 - 99999999999999999999 | SYSTEM 2 CURRENT RECORDS
SYS1DLTCNT | System 1 deleted records | DECIMAL(20) | 0 - 99999999999999999999 | SYSTEM 1 DELETED RECORDS
SYS2DLTCNT | System 2 deleted records | DECIMAL(20) | 0 - 99999999999999999999 | SYSTEM 2 DELETED RECORDS
ASPDEV1 | System 1 ASP device | CHAR(10) | *NONE, user-defined name | SYSTEM 1 ASP DEVICE
ASPDEV2 | System 2 ASP device | CHAR(10) | *NONE, user-defined name | SYSTEM 2 ASP DEVICE
ACTRCDPND | Active records pending | DECIMAL(20) | 0 - 99999999999999999999 | ACTIVE RECORDS PENDING
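For example (the outfile name MYLIB/RCDCNT is a placeholder), members whose current or deleted record counts do not match between the two systems:

    SELECT SYS1LIB, SYS1OBJ, MBR,
           SYS1CURCNT, SYS2CURCNT, SYS1DLTCNT, SYS2DLTCNT, ACTRCDPND
      FROM MYLIB/RCDCNT
     WHERE SYS1CURCNT <> SYS2CURCNT
        OR SYS1DLTCNT <> SYS2DLTCNT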

MXCMPIFSA outfile (CMPIFSA command)

For additional supporting information, see Interpreting results of audits that compare attributes on page 586.

Table 120. CMPIFSA Output file (MXCMPIFSA)

Field | Description | Type, length | Valid values | Column headings
TIMESTAMP | Timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm) | TIMESTAMP | SAA timestamp | TIMESTAMP
COMMAND | Command name | CHAR(10) | CMPIFSA | COMMAND NAME
DGSHRTNM | Data group short name | CHAR(3) | Short data group name | DGDFN SHORT NAME
DGNAME | Data group definition name | CHAR(10) | User-defined data group name; blank if no DG specified on the command | DGDFN NAME
SYSTEM1 | System 1 | CHAR(8) | User-defined system name; local system name if no DG specified | SYSTEM 1
SYSTEM2 | System 2 | CHAR(8) | User-defined system name; remote system name if no DG specified | SYSTEM 2
DTASRC | Data source | CHAR(10) | *SYS1, *SYS2 | DATA SOURCE
SYS1OBJ | System 1 object name | CHAR(10) | User-defined name | SYSTEM 1 OBJECT
SYS2OBJ | System 2 object name | CHAR(10) | User-defined name | SYSTEM 2 OBJECT
CCSID | IFS object name CCSID | BIN(5) | User-defined name | CCSID
CNTRYID | IFS object name country ID | CHAR(2) | System-defined name | CNTRYID
LANGID | IFS object name language ID | CHAR(3) | System-defined name | LANGID
CMPATR | Compared attribute | CHAR(10) | See Attributes compared and expected results - #IFSATR audit on page 604 | COMPARED ATTRIBUTE
SYS1IND | System 1 file indicator | CHAR(10) | See Table 87 in Where was the difference detected on page 589 | SYSTEM 1 INDICATOR
SYS2IND | System 2 file indicator | CHAR(10) | See Table 87 in Where was the difference detected on page 589 | SYSTEM 2 INDICATOR
DIFIND | Differences indicator | CHAR(10) | See What attribute differences were detected on page 587 | DIFFERENCE INDICATOR
SYS1VAL | System 1 value of the specified attribute | VARCHAR(2048) MINLEN(50) | See Attributes compared and expected results - #IFSATR audit on page 604 | SYSTEM 1 VALUE
SYS1CCSID | System 1 value CCSID | BIN(5) | 1-65535 | SYSTEM 1 CCSID
SYS2VAL | System 2 value of the specified attribute | VARCHAR(2048) MINLEN(50) | See Attributes compared and expected results - #IFSATR audit on page 604 | SYSTEM 2 VALUE
SYS2CCSID | System 2 value CCSID | BIN(5) | 1-65535 | SYSTEM 2 CCSID

MXCMPOBJA outfile (CMPOBJA command)

For additional supporting information, see Interpreting results of audits that compare attributes on page 586.

Table 121. CMPOBJA Output file (MXCMPOBJA)

Field | Description | Type, length | Valid values | Column headings
TIMESTAMP | Timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm) | TIMESTAMP | SAA timestamp | TIMESTAMP
COMMAND | Command name | CHAR(10) | CMPOBJA | COMMAND NAME
DGSHRTNM | Data group short name | CHAR(3) | Short data group name | DGDFN SHORT NAME
DGNAME | Data group definition name | CHAR(10) | User-defined data group name; blank if no DG specified on the command | DGDFN NAME
SYSTEM1 | System 1 | CHAR(8) | User-defined system name; local system name if no DG specified | SYSTEM 1
SYSTEM2 | System 2 | CHAR(8) | User-defined system name; remote system name if no DG specified | SYSTEM 2
DTASRC | Data source | CHAR(10) | *SYS1, *SYS2 | DATA SOURCE
SYS1OBJ | System 1 object name | CHAR(10) | User-defined name | SYSTEM 1 FILE
SYS1LIB | System 1 library name | CHAR(10) | User-defined name | SYSTEM 1 LIBRARY
MBR | Member name | CHAR(10) | User-defined name | MEMBER
SYS2OBJ | System 2 object name | CHAR(10) | User-defined name | SYSTEM 2 OBJECT
SYS2LIB | System 2 library name | CHAR(10) | User-defined name | SYSTEM 2 LIBRARY
OBJTYPE | Object type | CHAR(10) | User-defined name | OBJECT TYPE
CMPATR | Compared attribute | CHAR(10) | See Attributes compared and expected results - #OBJATR audit on page 596 | COMPARED ATTRIBUTE
SYS1IND | System 1 file indicator | CHAR(10) | See Table 87 in Where was the difference detected on page 589 | SYSTEM 1 INDICATOR
SYS2IND | System 2 file indicator | CHAR(10) | See Table 87 in Where was the difference detected on page 589 | SYSTEM 2 INDICATOR
DIFIND | Differences indicator | CHAR(10) | See What attribute differences were detected on page 587 | DIFFERENCE INDICATOR
SYS1VAL | System 1 value of the specified attribute | VARCHAR(2048) MINLEN(50) | See Attributes compared and expected results - #OBJATR audit on page 596 | SYSTEM 1 VALUE
SYS1CCSID | System 1 value CCSID | BIN(5) | 1-65535 | SYSTEM 1 CCSID
SYS2VAL | System 2 value of the specified attribute | VARCHAR(2048) MINLEN(50) | See Attributes compared and expected results - #OBJATR audit on page 596 | SYSTEM 2 VALUE
SYS2CCSID | System 2 value CCSID | BIN(5) | 1-65535 | SYSTEM 2 CCSID
ASPDEV1 | System 1 ASP device | CHAR(10) | *NONE, user-defined name | SYSTEM 1 ASP DEVICE
ASPDEV2 | System 2 ASP device | CHAR(10) | *NONE, user-defined name | SYSTEM 2 ASP DEVICE

MXDGACT outfile (WRKDGACT command)

Table 122. MXDGACT outfile (WRKDGACT command)

Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
DTASRC | Data source | CHAR(10) | *SYS1, *SYS2 | DATA SOURCE
STATUS | Object status category | CHAR(10) | *COMPLETED, *FAILED, *DELAYED, *ACTIVE | OBJECT STATUS CATEGORY
TYPE | Object type | CHAR(10) | Refer to the OM5100P file for the list of valid object types | OBJECT TYPE
OBJATR | Object attribute | CHAR(10) | Refer to the OM5200P file for the list of valid object attributes | OBJECT ATTRIBUTE
REASON | Failure reason | CHAR(11) | *INUSE, *RESTRICTED, *NOTFOUND, *OTHER, blank | FAILURE REASON
COUNT | Entry count | PACKED(5 0) | 0-9999 (9999 = maximum value supported) | ENTRY COUNT
OBJCAT | Object category | CHAR(10) | *DLO, *IFS, *SPLF, *LIB | OBJECT CATEGORY
OBJLIB | Object library | CHAR(10) | User-defined name, BLANK | OBJECT LIBRARY
OBJ | Object name | CHAR(10) | User-defined name, BLANK | OBJECT
OBJMBR | Member name | CHAR(10) | User-defined name, BLANK | MEMBER
DLO | DLO name | CHAR(12) | User-defined name, BLANK | DLO
FLR | Folder name | CHAR(63) | User-defined name, BLANK | FOLDER
SPLFJOB | Spooled file job name | CHAR(26) | Three part spooled file name, BLANK | SPLF JOB
SPLF | Spooled file name | CHAR(10) | User-defined name, BLANK | SPLF NAME
SPLFNBR | Spooled file number | PACKED(7 0) | 1-99999, BLANK | SPLF NUMBER
OUTQ | Output queue | CHAR(10) | User-defined name, *NONE, BLANK | OUTQ
OUTQLIB | Output queue library | CHAR(10) | User-defined name, *NONE, BLANK | OUTQ LIBRARY
IFS | Object IFS name | CHAR(1024) VARLEN(100) | User-defined name, BLANK | IFS OBJECT
CCSID | Object CCSID | BIN(5 0) | Default to job CCSID. If unable to convert to the job's CCSID, or if the job CCSID is 65535, related fields will be written in Unicode. | CCSID
IFSUCS | IFS Object (UNICODE) | GRAPHIC(512) VARLEN(75) CCSID(13488) | User-defined name (Unicode), BLANK | IFS OBJECT (UNICODE)
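A short sketch (the outfile name MYLIB/DGACT is a placeholder): after directing the WRKDGACT output to an outfile, failed activity can be summarized by object type and failure reason.

    SELECT TYPE, OBJATR, REASON, OBJCAT, OBJLIB, OBJ
      FROM MYLIB/DGACT
     WHERE STATUS = '*FAILED'
     ORDER BY TYPE, REASON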

MXDGACTE outfile (WRKDGACTE command)

Table 123. MXDGACTE outfile (WRKDGACTE command)

Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
DTASRC | Data source | CHAR(10) | *SYS1, *SYS2 | DATA SOURCE
STATUS | Object status category | CHAR(10) | *COMPLETED, *FAILED, *DELAYED, *ACTIVE | OBJECT STATUS CATEGORY
OBJSTATUS | Object status | CHAR(2) | Refer to on-line help for complete list | OBJECT STATUS
TYPE | Object type | CHAR(10) | Refer to the OM5100P file for the list of valid object types | OBJECT TYPE
OBJATR | Object attribute | CHAR(10) | Refer to the OM5200P file for the list of valid object attributes | OBJECT ATTRIBUTE
REASON | Failure reason | CHAR(11) | *INUSE, *RESTRICTED, *NOTFOUND, *OTHER, blank | FAILURE REASON
OBJCAT | Object category | CHAR(10) | *DLO, *IFS, *SPLF, *LIB | OBJECT CATEGORY
SEQNBR | Journal sequence number | PACKED(10 0) | 1-9999999999 | JOURNAL SEQUENCE NUMBER
JRNCODE | Journal entry code | CHAR(1) | Valid journal codes | JOURNAL ENTRY CODE
JRNTYPE | Journal entry type | CHAR(2) | Valid journal types | JOURNAL ENTRY TYPE
JRNTSP | Journal entry timestamp | TIMESTAMP | YYYY-MM-DD.HH.MM.SS.mmmmmm | JOURNAL ENTRY TIMESTAMP
JRNSNDTSP | Journal entry send timestamp | TIMESTAMP | YYYY-MM-DD.HH.MM.SS.mmmmmm | JOURNAL ENTRY SEND TIMESTAMP
JRNRCVTSP | Journal entry receive timestamp | TIMESTAMP | YYYY-MM-DD.HH.MM.SS.mmmmmm | JOURNAL ENTRY RCV TIMESTAMP
JRNRTVTSP | Journal entry retrieve timestamp | TIMESTAMP | YYYY-MM-DD.HH.MM.SS.mmmmmm | JOURNAL ENTRY RTV TIMESTAMP
CNRSNDTSP | Container send timestamp | TIMESTAMP | YYYY-MM-DD.HH.MM.SS.mmmmmm | CONTAINER SEND TIMESTAMP
JRNAPYTSP | Journal entry apply timestamp | TIMESTAMP | YYYY-MM-DD.HH.MM.SS.mmmmmm | JOURNAL ENTRY APY TIMESTAMP
REQCNRSND | Requires container send | CHAR(10) | *YES, *NO | REQUIRES CONTAINER SEND
RTYWAIT | Waiting for retry | CHAR(10) | *YES, *NO | WAITING FOR RETRY
RTYATTEMPT | Number of retries attempted | PACKED(5 0) | 0-1998 | NUMBER OF RETRIES ATTEMPTED
RTYREMAIN | Number of retries remaining | PACKED(5 0) | 0-1998 | NUMBER OF RETRIES REMAINING
DLYITV | Delay interval | PACKED(5 0) | 1-7200 | DELAY INTERVAL
NXTRTYTSP | Next retry timestamp | TIMESTAMP | YYYY-MM-DD.HH.MM.SS.mmmmmm | NEXT RETRY TIMESTAMP
MSGID | Message ID | CHAR(7) | Valid message ID, BLANK | MESSAGE ID
MSG | Message data | CHAR(256) VARLEN(50) | Valid message data, BLANK | MESSAGE DATA
FAILEDJOB | Failed job name | CHAR(26) | Job name, BLANK | FAILED JOB NAME
JRNENT | Journal entry | CHAR(400) | Journal entry | JOURNAL ENTRY
OBJLIB | Object library | CHAR(10) | User-defined name, BLANK | OBJECT LIBRARY
OBJ | Object name | CHAR(10) | User-defined name, BLANK | OBJECT
OBJMBR | Member name | CHAR(10) | User-defined name, BLANK | MEMBER
DLO | DLO name | CHAR(12) | User-defined name, BLANK | DLO
FLR | Folder name | CHAR(63) | User-defined name, BLANK | FOLDER
SPLFJOB | Spooled file job name | CHAR(26) | Three part spooled file name, BLANK | SPLF JOB
SPLF | Spooled file name | CHAR(10) | User-defined name, BLANK | SPLF NAME
SPLFNBR | Spooled file number | PACKED(7 0) | 1-99999, BLANK | SPLF NUMBER
OUTQ | Output queue | CHAR(10) | User-defined name, *NONE, BLANK | OUTQ
OUTQLIB | Output queue library | CHAR(10) | User-defined name, *NONE, BLANK | OUTQ LIBRARY
IFS | Object IFS name | CHAR(1024) VARLEN(100) | User-defined name, BLANK | IFS OBJECT
CCSID | Object CCSID | BIN(5 0) | Default to job CCSID. If unable to convert to the job's CCSID, or if the job CCSID is 65535, related fields will be written in Unicode. | CCSID
TGTOBJLIB | Target system object library name | CHAR(10) | User-defined name, BLANK | TARGET OBJECT LIBRARY
TGTOBJ | Target system object name | CHAR(10) | User-defined name, BLANK | TARGET OBJECT
TGTOBJMBR | Target system object member name | CHAR(10) | User-defined name, BLANK | TARGET MEMBER
TGTDLO | Target system DLO name | CHAR(12) | User-defined name, BLANK | TARGET DLO
TGTFLR | Target system object folder name | CHAR(63) | User-defined name, BLANK | TARGET FOLDER
TGTSPLFJOB | Target system spooled file job name | CHAR(26) | Three part spooled file name, BLANK | TARGET SPLF JOB
TGTSPLF | Target system spooled file name | CHAR(10) | User-defined name, BLANK | TARGET SPLF NAME
TGTSPLFNBR | Target system spooled file job number | PACKED(7 0) | 1-999999, BLANK | TARGET SPLF NUMBER
TGTOUTQ | Target system output queue | CHAR(10) | User-defined name, BLANK | TARGET OUTQ
TGTOUTQLIB | Target system output queue library | CHAR(10) | User-defined name, BLANK | TARGET OUTQ LIBRARY
TGTIFS | Target system IFS name | CHAR(1024) VARLEN(100) | User-defined name, BLANK | TARGET IFS OBJECT
RNMOBJLIB | Renamed object library name | CHAR(10) | User-defined name, BLANK | RENAMED OBJECT LIBRARY
RNMOBJ | Renamed object name | CHAR(10) | User-defined name, BLANK | RENAMED OBJECT
RNMOBJMBR | Renamed object member name | CHAR(10) | User-defined name, BLANK | RENAMED MEMBER
RNMDLO | Renamed DLO name | CHAR(12) | User-defined name, BLANK | RENAMED DLO
RNMFLR | Renamed object folder name | CHAR(63) | User-defined name, BLANK | RENAMED FOLDER
RNMSPLFJOB | Renamed spooled file job name | CHAR(26) | Three part spooled file name, BLANK | RENAMED SPLF JOB
RNMSPLF | Renamed spooled file name | CHAR(10) | User-defined name, BLANK | RENAMED SPLF NAME
RNMSPLFNBR | Renamed spooled file number | PACKED(7 0) | 1-999999, BLANK | RENAMED SPLF NUMBER
RNMOUTQ | Renamed output queue | CHAR(10) | User-defined name, BLANK | RENAMED OUTQ
RNMOUTQLIB | Renamed output queue library | CHAR(10) | User-defined name, BLANK | RENAMED OUTQ LIBRARY
RNMIFS | Renamed IFS object name | CHAR(1024) VARLEN(100) | User-defined name, BLANK | RENAMED IFS OBJECT
RNMOBJLIB | Renamed target object library name | CHAR(10) | User-defined name, BLANK | RENAMED TGT OBJECTS LIBRARY
RNMTGTOBJ | Renamed target object name | CHAR(10) | User-defined name, BLANK | RENAMED TARGET OBJECT
RNMTOBJMBR | Renamed target object member name | CHAR(10) | User-defined name, BLANK | RENAMED TARGET OBJ MEMBER
RNMTGTDLO | Renamed target object DLO name | CHAR(12) | User-defined name, BLANK | RENAMED TARGET DLO
RNMTGTFLR | Renamed target object folder name | CHAR(63) | User-defined name, BLANK | RENAMED TARGET FOLDER
RNMTSPLFJ | Renamed target spooled file job name | CHAR(26) | Three part spooled file name, BLANK | RENAMED TARGET SPLF JOB
RNTTGTSPLF | Renamed target spooled file name | CHAR(10) | User-defined name, BLANK | RENAMED TARGET SPLF NAME
RNMTSPLFN | Renamed target spooled file number | PACKED(7 0) | 1-999999, BLANK | RENAMED TARGET SPLF NUMBER
RNMTGTOUTQ | Renamed target output queue | CHAR(10) | User-defined name, BLANK | RENAMED TARGET OUTQ
RNMTOUTQL | Renamed target output queue library | CHAR(10) | User-defined name, BLANK | RENAMED TARGET OUTQ LIBRARY
RNMTGTIFS | Renamed target object IFS name | CHAR(1024) VARLEN(100) | User-defined name, BLANK | RENAMED TARGET IFS OBJECT
COOPDB | Cooperate with DB | CHAR(10) | *YES, *NO, BLANK | COOPERATE WITH DATABASE
OBJFID | IFS object file identifier (binary format) | BIN(16) | Binary representation of file identifier | IFS OBJECT FID (BINARY)
OBJFIDHEX | IFS object file identifier (character format) | CHAR(32) | Character representation of file identifier | IFS OBJECT FID (HEX)
IFSUCS | IFS Object (UNICODE) | GRAPHIC(512) VARLEN(75) CCSID(13488) | User-defined name (Unicode), BLANK | IFS OBJECT (UNICODE)
TGTIFSUCS | TGT IFS Object (UNICODE) | GRAPHIC(512) VARLEN(75) CCSID(13488) | User-defined name (Unicode), BLANK | TGT IFS OBJECT (UNICODE)
RNMIFSUCS | RNM IFS Object (UNICODE) | GRAPHIC(512) VARLEN(75) CCSID(13488) | User-defined name (Unicode), BLANK | RNM IFS OBJECT (UNICODE)
RNMTGTIFSU | RNM TGT IFS Object (UNICODE) | GRAPHIC(512) VARLEN(75) CCSID(13488) | User-defined name (Unicode), BLANK | RNM TGT IFS OBJECT (UNICODE)
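For example (the outfile name MYLIB/DGACTE is a placeholder), the activity entries that failed and have exhausted their retries, together with the failing message, can be listed with:

    SELECT TYPE, OBJLIB, OBJ, MSGID, RTYREMAIN, NXTRTYTSP
      FROM MYLIB/DGACTE
     WHERE STATUS = '*FAILED'
       AND RTYREMAIN = 0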

MXDGDAE outfile (WRKDGDAE command)

Table 124. MXDGDAE outfile (WRKDGDAE command)

Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
DTAARA1 | System 1 data area | CHAR(10) | User-defined name, *ALL | SYSTEM 1 DATA AREA
DTAARALIB1 | System 1 data area library | CHAR(10) | User-defined name | SYSTEM 1 DATA AREA LIBRARY
DTAARA2 | System 2 data area | CHAR(10) | User-defined name, *ALL | SYSTEM 2 DATA AREA
DTAARALIB2 | System 2 data area library | CHAR(10) | User-defined name | SYSTEM 2 DATA AREA LIBRARY
TEXT | Description | CHAR(50) | User-defined text | DESCRIPTION
RTVERR | Retrieve error field | CHAR(10) | *NO, *YES | RETRIEVE ERROR FIELD

MXDGDFN outfile (WRKDGDFN command)

Table 125. MXDGDFN outfile (WRKDGDFN command)

Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group definition name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 (Data group definition) | CHAR(8) | User-defined system name | DGDFN NAME SYSTEM 1
DGSYS2 | System 2 (Data group definition) | CHAR(8) | User-defined system name | DGDFN NAME SYSTEM 2
DGSHRTNM | Data group short name | CHAR(3) | Short data group name | DGDFN SHORT NAME
DTASRC | Data source | CHAR(10) | *SYS1, *SYS2 | DATA SOURCE
ALWSWT | Allow to be switched | CHAR(10) | *YES, *NO | ALLOW SWITCH
DGTYPE | Data group type | CHAR(10) | *ALL, *OBJ, *DB | DG TYPE
PRITFRDFN | Configured primary transfer definition | CHAR(10) | User-defined name, *DGDFN | CONFIGURED PRITFRDFN
SECTFRDFN | Secondary transfer definition | CHAR(10) | User-defined name, *NONE | CONFIGURED SECTFRDFN
RDRWAIT | Reader wait time (seconds) | PACKED(5 0) | 0-600 | DB READER WAITTIME
JRNTGT | Journal on target | CHAR(10) | *YES, *NO | JOURNAL ON TARGET
JRNDFN1 | Configured system 1 journal definition | CHAR(10) | *DGDFN, user-defined name, *NONE | CONFIGURED SYSTEM 1 JRNDFN
JRNDFN1NM | Actual system 1 journal definition name | CHAR(10) | User-defined name, blank | ACTUAL SYSTEM 1 JRNDFN
JRNDFN1SYS | System 1 journal definition system name | CHAR(8) | User-defined name, blank | JRNDFN SYSTEM 1
JRNDFN2 | Configured system 2 journal definition | CHAR(10) | *DGDFN, user-defined name, *NONE | CONFIGURED SYSTEM 2 JRNDFN
JRNDFN2NM | Actual system 2 journal definition name | CHAR(10) | User-defined name, blank | ACTUAL SYSTEM 2 JRNDFN
JRNDFN2SYS | System 2 journal definition system name | CHAR(8) | User-defined name, blank | JRNDFN SYSTEM 2
RJLNK | Use remote journal link | CHAR(10) | *YES, *NO | RJ LINK
NBRDBAPY | Number of DB apply sessions | PACKED(3 0) | 1-6 | CURRENT NUMBER OF DB APPLIES
RQSDBAPY | Requested number of DB apply sessions | PACKED(3 0) | 1-6 | REQUESTED NUMBER OF DB APPLIES
DBBFRIMG | Before images (DB journal entry processing) | CHAR(10) | *IGNORE, *SEND | DBJRNPRC BEFORE IMAGES
DBNOTINDG | For files not in data group (DB journal entry processing) | CHAR(10) | *IGNORE, *SEND | DBJRNPRC FILES NOT IN DG
DBMMXGEN | Generated by MIMIX activity (DB journal entry processing) | CHAR(10) | *IGNORE, *SEND | DBJRNPRC GEND BY MIMIX ACT
DBNOTUSED | Not used by MIMIX (DB journal entry processing) | CHAR(10) | *IGNORE, *SEND | DBJRNPRC NOT USED BY MIMIX
TEXT | Description | CHAR(50) | *BLANK, user-defined text | DESCRIPTION
SYNCCHKITV | Synchronization check interval | PACKED(5 0) | 0-999999 (0 = *NONE) | SYNC CHECK INTERVAL
TSPITV | Time stamp interval | PACKED(5 0) | 0-999999 (0 = *NONE) | TIME STAMP INTERVAL
VFYITV | Verify interval | PACKED(5 0) | 1000-999999 | VERIFICATION INTERVAL
DTAARAITV | Data area polling interval | PACKED(5 0) | 1-7200 | DATA AREA POLLING INTERVAL
RTYNBR | Number of times to retry | PACKED(3 0) | 0-999 | NUMBER OF RETRIES
RTYDLYITV1 | First retry delay interval | PACKED(5 0) | 1-3600 | FIRST RETRY INTERVAL
RTYDLYITV2 | Second retry delay interval | PACKED(5 0) | 10-7200 | SECOND RETRY INTERVAL
ADPCHE | Adaptive cache | CHAR(10) | *YES, *NO | USE ADAPTIVE CACHE
DATACRG | Data cluster resource group | CHAR(10) | User-defined name, blank, *NONE | DATA CRG
DFTJRNIMG | Journal image (File entry options) | CHAR(10) | *AFTER, *BOTH | FEOPT JOURNAL IMAGES
DFTOPNCLO | Omit open/close entries (File entry options) | CHAR(10) | *NO, *YES | FEOPT OMIT OPEN CLOSE
DFTREPTYPE | Replication type (File entry options) | CHAR(10) | *POSITION, *KEYED | FEOPT REPLICATION TYPE
DFTAPYLOCK | Lock member during apply (File entry options) | CHAR(10) | *YES, *NO | FEOPT LOCK MBR ON APPLY
DFTAPYSSN | Configured apply session (File entry options) | CHAR(10) | *ANY, A-F | FEOPT CFG APPY SESSION
DFTCRCLS | Collision resolution (File entry options) | CHAR(10) | *HLDERR, *AUTOSYNC, user-defined name | FEOPT COLLISION RESOLUTION
DFTSBTRG | Disable triggers during apply (File entry options) | CHAR(10) | *YES, *NO | FEOPT DISABLE TRIGGERS
DFTPRCCST | Process constraint entries (File entry options) | CHAR(10) | *YES | FEOPT PROCESS CONSTRAINT
DBFRCITV | Force data interval (Database apply processing) | PACKED(5 0) | 1-99999 | DBAPYPRC FORCE DATA
DBMAXOPN | Maximum open members (Database apply processing) | PACKED(5 0) | 50-32767 | DBAPYPRC MAX OPEN MEMBERS
DBAPYTWRN | Threshold warning (Database apply processing) | PACKED(7 0) | 0, 100-9999999 | DBAPYPRC THRESHOLD WARNING
DBAPYHST | Apply history log spaces (Database apply processing) | PACKED(5 0) | 0-9999 | DBAPYPRC HISTORY
DBKEEPLOG | Keep journal log spaces (Database apply processing) | PACKED(5 0) | 0-9999 | DBAPYPRC KEEP JRN
DBLOGSIZE | Size of log spaces (MB) (Database apply processing) | PACKED(5 0) | 1-16 | DBAPYPRC SIZE OF LOG SPACES
OBJDFTOWN | Object default owner (Object processing) | CHAR(10) | User-defined name | OBJPRC DEFAULT OWNER
OBJDLOMTH | DLO transmission method (Object processing) | CHAR(10) | *OPTIMIZED, *SAVRST | OBJPRC DLO TRANSFER METHOD
OBJIFSMTH | IFS transmission method (Object processing) | CHAR(10) | *SAVRST, *OPTIMIZED | OBJPRC IFS TRANSFER METHOD
OBJUSRSTS | User profile status (Object processing) | CHAR(10) | *SRC, *TGT, *ENABLE, *DISABLE | OBJPRC USER PROFILE STATUS
OBJKEEPSPL | Keep deleted spooled files (Object processing) | CHAR(10) | *YES, *NO | OBJPRC KEEP DELETED SPLF
OBJKEEPDLO | Keep DLO system name (Object processing) | CHAR(10) | *YES, *NO | OBJPRC KEEP DLO SYS NAME
OBJRTVDLY | Retrieve delay (Object retrieve processing) | PACKED(3 0) | 0-999 | OBJRTVPRC DELAY
OBJRTVMINJ | Minimum number of jobs (Object retrieve processing) | PACKED(3 0) | 1-99 | OBJRTVPRC MIN NUMBER OF JOBS
OBJRTVMAXJ | Maximum number of jobs (Object retrieve processing) | PACKED(3 0) | 1-99 | OBJRTVPRC MAX NUMBER OF JOBS
OBJRTVTHLD | Threshold for more jobs (Object retrieve processing) | PACKED(5 0) | 1-99999 | OBJRTVPRC THLD FOR MORE JOBS
CNRSNDMINJ | Minimum number of jobs (Container send processing) | PACKED(3 0) | 1-99 | CNRSNDPRC MIN NUMBER OF JOBS
CNRSNDMAXJ | Maximum number of jobs (Container send processing) | PACKED(3 0) | 1-99 | CNRSNDPRC MAX NUMBER OF JOBS
CNRSNDTHLD | Threshold for more jobs (Container send processing) | PACKED(5 0) | 1-99999 | CNRSNDPRC THLD FOR MORE JOBS
OBJAPYMINJ | Minimum number of jobs (Object apply processing) | PACKED(3 0) | 1-99 | OBJAPYPRC MIN NUMBER OF JOBS
OBJAPYMAXJ | Maximum number of jobs (Object apply processing) | PACKED(3 0) | 1-99 | OBJAPYPRC MAX NUMBER OF JOBS
OBJAPYTHLD | Threshold for more jobs (Object apply processing) | PACKED(5 0) | 1-99999 | OBJAPYPRC THLD FOR MORE JOBS
OBJAPYTWRN | Threshold for warning messages (Object apply processing) | PACKED(5 0) | 0, 50-99999 (0 = *NONE) | OBJAPYPRC THLD FOR WARNING MSGS
SBMUSR | User profile for submit job | CHAR(10) | *JOBD, *CURRENT | USRPRF FOR SUBMIT JOB
SNDJOBD | Send job description | CHAR(10) | Job description name | SEND JOBD
SNDJOBDLIB | Send job description library | CHAR(10) | Job description library | SEND JOBD LIBRARY
APYJOBD | Apply job description | CHAR(10) | Job description name | APPLY JOBD
APYJOBDLIB | Apply job description library | CHAR(10) | Job description library | APPLY JOBD LIBRARY
RGZJOBD | Reorganize job description | CHAR(10) | Job description name | REORGANIZE JOBD
RGZJOBDLIB | Reorganize job description library | CHAR(10) | Job description library | REORGANIZE JOBD LIBRARY
SYNJOBD | Synchronize job description | CHAR(10) | Job description name | SYNC JOBD
SYNJOBDLIB | Synchronize job description library | CHAR(10) | Job description library | SYNC JOBD LIBRARY
SAVACT | Save while active (seconds) | PACKED(5 0) | -1, 0, 1-999999 (0 = save while active for files only with a 120 second wait time; -1 = no save while active; 1-99999 = save while active for all object types with specified wait time) | SAVE WHILE ACTIVE (SEC)
RSTARTTIME | Restart time | CHAR(8) | 000000-235959, *NONE, *SYSDFN1, *SYSDFN2 (000000 = midnight, the default) | RESTART TIME
ASPGRP1 | System 1 ASP group | CHAR(10) | *NONE, user-defined name | SYSTEM 1 ASP GROUP
ASPGRP2 | System 2 ASP group | CHAR(10) | *NONE, user-defined name | SYSTEM 2 ASP GROUP
COOPJRN | Cooperative journal | CHAR(10) | *SYSJRN, *USRJRN | COOPERATIVE JOURNAL
RCYWINPRC | Recovery window process | CHAR(7) | *NONE, *ALLAPY | RECOVERY PROCESS
RCHWINDUR | Recovery window duration | PACKED(5 0) | 0-99999 | RECOVERY DURATION
JRNATCRT | Journal at creation | CHAR(10) | *DFT, *YES, *NO | JOURNAL AT CREATION
RJLNKTHLDM | RJ link threshold (time in minutes) | PACKED(4 0) | 0-9999 (0 = *NONE) | RJLNK THRESHOLD (TIME IN MIN)
RJLNKTHLDE | RJ link threshold (number of journal entries) | PACKED(7 0) | 0, 1000-9999999 (0 = *NONE) | RJLNK THRESHOLD (NBR OF JRNE)
DBSNDTHLDM | DB send/reader threshold (time in minutes) | PACKED(4 0) | 0-9999 (0 = *NONE) | DBSND/DBRDR THRESHOLD (TIME IN MIN)
DBSNDTHLDE | DB send/reader threshold (number of journal entries) | PACKED(7 0) | 0, 1000-9999999 (0 = *NONE) | DBSND/DBRDR THRESHOLD (NBR OF JRNE)
OBJSNDTHDM | Object send threshold (time in minutes) | PACKED(4 0) | 0-9999 (0 = *NONE) | OBJSND THRESHOLD (TIME IN MIN)
OBJSNDTHDE | Object send threshold (number of journal entries) | PACKED(7 0) | 0, 1000-9999999 (0 = *NONE) | OBJSND THRESHOLD (NBR OF JRNE)
OBJRTVTHDE | Object retrieve threshold (number of activity entries) | PACKED(5 0) | 0, 50-99999 (0 = *NONE) | OBJRTV THRESHOLD
CNRSNDTHDE | Container send threshold (number of activity entries) | PACKED(5 0) | 0, 50-99999 (0 = *NONE) | CNRSND THRESHOLD

Updated for 5.0.08.00 and 5.0.13.00.
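A configuration-audit sketch (the outfile name MYLIB/DGDFNS is a placeholder): flag data groups that are not switch-ready or that do not journal on the target.

    SELECT DGDFN, DGSYS1, DGSYS2, DGTYPE, ALWSWT, JRNTGT, NBRDBAPY
      FROM MYLIB/DGDFNS
     WHERE ALWSWT = '*NO'
        OR JRNTGT = '*NO'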

MXDGDLOE outfile (WRKDGDLOE command)

Table 126. MXDGDLOE outfile (WRKDGDLOE command)

Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
FLR1 | System 1 folder | CHAR(63) | User-defined name | SYSTEM 1 FOLDER
DOC1 | System 1 document | CHAR(12) | User-defined name, *ALL | SYSTEM 1 DLO
OWNER | Owner | CHAR(10) | User-defined name, *ALL | OWNER
FLR2 | System 2 folder | CHAR(63) | *FLR1, user-defined name | SYSTEM 2 FOLDER
DOC2 | System 2 document | CHAR(12) | *DOC1, user-defined name | SYSTEM 2 DLO
OBJAUD | Object auditing value | CHAR(10) | *CHANGE, *ALL, *NONE | OBJECT AUDITING VALUE
PRCTYPE | Process type | CHAR(10) | *INCLD, *EXCLD | PROCESS TYPE
OBJRTVDLY | Retrieve delay (Object retrieve processing) | PACKED(3 0) | 0-999, *DGDFT | OBJRTVPRC DELAY

MXDGFE outfile (WRKDGFE command)

Table 127. MXDGFE outfile (WRKDGFE command)

Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
FILE1 | System 1 file name | CHAR(10) | User-defined name | SYSTEM 1 FILE
LIB1 | System 1 library name | CHAR(10) | User-defined name | SYSTEM 1 LIBRARY
MBR1 | System 1 member name | CHAR(10) | User-defined name | SYSTEM 1 MEMBER
FILE2 | System 2 file name | CHAR(10) | User-defined name | SYSTEM 2 FILE
LIB2 | System 2 library name | CHAR(10) | User-defined name | SYSTEM 2 LIBRARY
MBR2 | System 2 member name | CHAR(10) | User-defined name | SYSTEM 2 MEMBER
TEXT | Description | CHAR(50) | User-defined text | DESCRIPTION
JRNIMG | Journal image (File entry options) | CHAR(10) | *AFTER, *BOTH, *DGDFT | FEOPT JOURNAL IMAGE
OPNCLO | Omit open/close entries (File entry options) | CHAR(10) | *YES, *NO, *DGDFT | FEOPT OMIT OPEN CLOSE
REPTYPE | Replication type (File entry options) | CHAR(10) | *POSITION, *KEYED, *DGDFT | FEOPT REPLICATION TYPE
APYLOCK | Lock member during apply (File entry options) | CHAR(10) | *YES, *NO, *DGDFT | FEOPT LOCK MBR ON APPLY
FTRBFRIMG | Filter before image (File entry options) | CHAR(10) | *YES, *NO, *DGDFT | FEOPT FILTER BFR IMAGE
APYSSN | Current apply session (File entry options) | CHAR(10) | A-F, *DGDFT | FEOPT CURRENT APYSSN
RQSAPYSSN | Configured or requested apply session (File entry options) | CHAR(10) | A-F, *DGDFT | FEOPT REQUESTED APYSSN
CRCLS | Collision resolution class (File entry options) | CHAR(10) | *HLDERR, *AUTOSYNC, user-defined name | FEOPT COLLISION RESOLUTION
DSBTRG | Disable triggers during apply (File entry options) | CHAR(10) | *YES, *NO, *DGDFT | FEOPT DISABLE TRIGGERS
PRCTRG | Process trigger entries (File entry options) | CHAR(10) | *YES, *NO, *DGDFT | FEOPT PROCESS TRIGGERS
PRCCST | Process constraint entries (File entry options) | CHAR(10) | *YES | FEOPT PROCESS CONSTRAINTS
STATUS | File status | CHAR(10) | *ACTIVE, *RLSWAIT, *RLSCLR, *HLD, *HLDIGN, *RLS, *HLDRGZ, *HLDPRM, *HLDRNM, *HLDSYNC, *HLDRTY, *HLDERR, *HLDRLTD, *CMPACT, *CMPRLS, *CMPRPR | CURRENT STATUS
RQSSTS | Requested file status | CHAR(10) | *ACTIVE, *HLD, *HLDIGN, *RLS, *RLSWAIT | REQUESTED STATUS
JRN1STS | System 1 journaled | CHAR(10) | *YES, *NO, *NA | SYSTEM 1 JOURNALED
JRN2STS | System 2 journaled | CHAR(10) | *YES, *NO, *NA | SYSTEM 2 JOURNALED
ERRCDE | Error code | CHAR(2) | Valid error codes | ERROR CODE
JECDE | Journal entry code | CHAR(1) | Valid journal entry code | JOURNAL ENTRY CODE
JETYPE | Journal entry type | CHAR(2) | Valid journal entry type | JOURNAL ENTRY TYPE

Updated for 5.0.07.00 and 5.0.08.00.
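For example (the outfile name MYLIB/DGFES is a placeholder), all file entries currently in any held state can be pulled with one query, since every held status begins with *HLD:

    SELECT LIB1, FILE1, MBR1, STATUS, ERRCDE
      FROM MYLIB/DGFES
     WHERE STATUS LIKE '*HLD%'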

MXDGIFSE outfile (WRKDGIFSE command)

Table 128. MXDGIFSE outfile (WRKDGIFSE command)

Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
OBJ1 | System 1 object | CHAR(1024) | User-defined name | SYSTEM 1 IFS OBJECT
OBJ2 | System 2 object | CHAR(1024) | *OBJ1, user-defined name | SYSTEM 2 IFS OBJECT
CCSID | Object CCSID | BIN(5 0) | Defaults to job CCSID. If the job CCSID is 65535 or the data cannot be converted to the job CCSID, the OBJ1 and OBJ2 values remain in Unicode. | CCSID
PRCTYPE | Process type | CHAR(10) | *INCLD, *EXCLD | PROCESS TYPE
TYPE | Object type | CHAR(10) | *DIR, *STMF, *SYMLNK | OBJECT TYPE
OBJRTVDLY | Retrieve delay (Object retrieve processing) | CHAR(10) | 0-999, *DGDFT | OBJRTVPRC DELAY
COOPDB | Cooperate with database | CHAR(10) | *YES, *NO, blank | COOPERATE WITH DATABASE
OBJAUD | Object auditing | CHAR(10) | *NONE, *CHANGE, *ALL | OBJECT AUDITING VALUE

MXDGSTS outfile (WRKDG command)

The MXDGSTS outfile contains status information which corresponds to fields shown in the following interfaces:
- MIMIX Availability Manager: the data group detail status displays
- 5250 emulator: the Work with Data Groups (WRKDG) command

The Work with Data Groups (WRKDG) command generates new outfiles based on the MXDGSTSF record format from the MXDGSTS model database file supplied by Lakeview Technology. The content of the outfile is based on the criteria specified on the command. If there are no differences found, the file is empty.

Usage notes:
- When the value *UNKNOWN is returned for either the Data group source system status (DTASRCSTS) field or the Data group target system status (DTATGTSTS) field, status information is not available from the system that is remote relative to where the request was made. For example, if you requested the report from the target system and the value returned for DTASRCSTS is *UNKNOWN, the WRKDG request could not communicate with the source system. Fields which rely on data collected from the remote system will be blank.
- If a data group is configured for only database or only object replication, any fields associated with processes not used by the configured type of replication will be blank.
- See WRKDG outfile SELECT statement examples on page 696 for examples of how to query the contents of this output file. A minimal sketch follows these notes.
- You can automate the process of gathering status. If you use MIMIX Monitor to create a synchronous interval monitor, the monitor can specify the command to generate the outfile. Through exit programs, you can program the monitor to take action based on the status returned in the outfile. For information about creating interval monitors, see the Using MIMIX Monitor book.
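A minimal monitoring sketch: generate the outfile with WRKDG OUTPUT(*OUTFILE) OUTFILE(MYLIB/DGSTATUS), then select the data groups needing attention. The outfile name MYLIB/DGSTATUS is a placeholder, and an interval monitor could run these same two steps on a schedule.

    -- Data groups in error or warning, with their apply/send backlogs.
    SELECT DGDFN, DGSTS, DBAPYBKLG, DBAPBKTIMF, OBJSNDBKLG
      FROM MYLIB/DGSTATUS
     WHERE DGSTS IN ('*ERROR', '*WARNING')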

Table 129. MXDGSTS outfile (WRKDG command) Field ENTRYTSP DGDFN DGSYS1 Description Entry timestamp Data group definition name (Data group definition) System 1 (Data group definition) Type, length TIMESTAMP CHAR(10) CHAR(8) Valid values SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu User-defined data group name User-defined system name Column headings TIME REQUEST PROCESSED DGDFN NAME DGDFN SYSTEM 1

676

MXDGSTS outfile (WRKDG command)

Table 129. MXDGSTS outfile (WRKDG command) Field DGSYS2 STSTIME STSTIMF STSAVAIL Description System 2 (Data group definition) Elapsed time for data group status (seconds) Elapsed time for data group status (HHH:MM:SS) Data group status retrieved from these systems Data group source system Data group source system status Data group target system Data group target system status Switch mode status for system 1 Type, length CHAR(8) PACKED(10 0) CHAR(10) CHAR(10) Valid values User-defined system name Calculated, 0-9999999999 Calculated, 0-9999999 *ALL, *SOURCE, *TARGET, *NONE Column headings DGDFN SYSTEM 2 ELAPSED TIME ELAPSED TIME (HHH:MM:SS) SYS STATUS RETRIEVED FROM DG SOURCE SYSTEM DG SOURCE STATUS DG TARGET SYSTEM DG TARGET STATUS SYSTEM 1 SWITCH STATUS SYSTEM 2 SWITCH STATUS OVERALL DG STATUS CONFIGURED FOR DB REPLICATION CONFIGURED FOR OBJECT REPLICATION

DTASRC DTASRCSTS DTATGT DTATGTSTS SWTSTS1

CHAR(8) CHAR(10) CHAR(8) CHAR(10) CHAR(10)

User-defined system name *ACTIVE, *INACTIVE, *UNKNOWN User-defined system name *ACTIVE, *INACTIVE, *UNKNOWN *NONE, *SWITCH

SWTSTS2

Switch mode status for system 2

CHAR(10)

*NONE, *SWITCH

DGSTS DBCFG

Data group status summary Data group configured for data base replication Data group configured for object replication

CHAR(10) CHAR(10)

BLANK, *ERROR, *WARNING, *DISABLED *YES, *NO

OBJCFG

CHAR(10)

*YES, *NO

677

MXDGSTS outfile (WRKDG command)

Table 129. MXDGSTS outfile (WRKDG command) Field SRCSYSSTS Description Source system manager status summation (system manager Database send process status summation (DBSNDPRC) Object send process status summation (OBJSNDPRC) Data area polling process status (DTAPOLLPRC) Target System manager status summation (system manager plus journal manager status) Database apply status summation (Apply sessions A-F) Object apply status summation Total database file entries Active database file entries (FEACT) Inactive database file entries Database file entries not journaled on source Database file entries not journaled on target Database file entries held due to error Type, length CHAR(10) Valid values *ACTIVE, *INACTIVE, *UNKNOWN Column headings SOURCE MANAGER SUMMATION DB SEND STATUS OBJECT SEND STATUS DATA AREA POLLER STATUS TARGET MANAGER SUMMATION DB APPLY SUMMATION OBJECT APPLY SUMMATION TOTAL DB FILE ENTRIES ACTIVE DB FILE ENTRIES INACTIVE DB FILE ENTRIES FILES NOT JOURNALED ON SOURCE FILES NOT JOURNALED ON TARGET FILES HELD FOR ERRORS

DBSNDSTS OBJSNDSTS DTAPOLLSTS

CHAR(10) CHAR(10) CHAR(10)

*ACTIVE, *INACTIVE, *UNKNOWN, *NONE, *THRESHOLD *ACTIVE, *INACTIVE, *UNKNOWN, *NONE, *THRESHOLD *ACTIVE, *INACTIVE, *UNKNOWN, *NONE

TGTSYSSTS

CHAR(10)

*ACTIVE, *INACTIVE, *UNKNOWN

DBAPYSTS OBJAPYSTS FECNT FEACTIVE FENOTACT FENOTJRNS

CHAR(10) CHAR(10) PACKED(5 0) PACKED(5 0) PACKED(5 0) PACKED(5 0)

*ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD 0-99999 0-99999 0-99999 0-99999

FENOTJRNT

PACKED(5 0)

0-99999

FEHLDERR

PACKED(5 0)

0-99999

678

MXDGSTS outfile (WRKDG command)

Table 129. MXDGSTS outfile (WRKDG command) Field FEHLDOTHR OBJPENDSRC Description Database file entries held for other reasons (FEHLD) Objects in pending status, source system Type, length PACKED(5 0) PACKED(5 0) Valid values 0-99999 0-99999 Column headings FILES HELD FOR OTHER OBJECTS PENDING ON SOURCE SYSTEM OBJECTS PENDING ON TARGET SYSTEM TOTAL OBJECTS DELAYED TOTAL OBJECTS IN ERROR DLO CONFIG CHANGED IFS CONFIG CHANGED OBJECT CONFIG CHANGED PRIMARY TFRDFN SECONDARY TFRDFN LAST USED TFRDFN

OBJPENDAPY

Objects in pending status, target system PACKED(5 0)

0-99999

OBJDELAY

Objects in delayed status

PACKED(5 0)

0-99999

OBJERR

Objects in error

PACKED(5 0)

0-99999

DLOCFGCHG IFSCFGCHG OBJCFGCHG

DLO configuration changed IFS configuration changed Object configuration changed

CHAR(10) CHAR(10) CHAR(10)

*YES, *NO *YES, *NO *YES, *NO

PRITFRDFN SECTFRDFN TFRDFN

Primary transfer definition Secondary transfer definition Current transfer definition

CHAR(10) CHAR(10) CHAR(10)

User-defined transfer definition name User-defined transfer definition name User-defined transfer definition name

679

MXDGSTS outfile (WRKDG command)

Table 129. MXDGSTS outfile (WRKDG command) Field TFRSTS Description Current transfer definition communications status Source system manager status Type, length CHAR(10) Valid values *ACTIVE, *INACTIVE Column headings LAST USED TFRDFN STATUS SOURCE SYS MANAGER STATUS SOURCE JRN MANAGER STATUS CONTAINER SEND STATUS OBJECT RETRIEVE STATUS TARGET SYS MANAGER STATUS TARGET JRN MANAGER STATUS DB JRNRCV DB JRNRCV LIBRARY DB ENTRY TYPE AND CODE DB ENTRY SEQUENCE DB ENTRY TIMESTAMP

SRCMGRSTS

CHAR(10)

*ACTIVE, *INACTIVE, *UNKNOWN

SRCJRNSTS

Source journal manager status

CHAR(10)

*ACTIVE, *INACTIVE, *UNKNOWN

CNRSNDSTS OBJRTVSTS

Container send process status Object retrieve process status

CHAR(10) CHAR(10)

*ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD *ACTIVE, *INACTIVE, *UNKNOWN

TGTMGRSTS

Target system manager status

CHAR(10)

TGTJRNSTS

Target journal manager status

CHAR(10)

*ACTIVE, *INACTIVE, *UNKNOWN

CURDBRCV CURDBLIB CURDBCODE

Current database journal entry receiver name Current database journal entry receiver library name Current database journal code and entry type Current database journal entry sequence number Current database journal entry timestamp

CHAR(10) CHAR(10) CHAR(3)

User-defined value User-defined value Valid journal entry types and codes

CURDBSEQ CURDBTSP

PACKED(10 0) TIMESTAMP

0-9999999999 SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu

680

MXDGSTS outfile (WRKDG command)

Table 129. MXDGSTS outfile (WRKDG command) Field CURDBTPH RDDBRCV RDDBLIB Description Current database journal entry transactions per hour Last read database journal entry receiver name (DBSNTRCV) Last read database journal entry receiver library name Last read database journal code and entry type Last read database journal entry sequence number (DBSNTSEQ) Last read database journal entry timestamp (DBSNTDATE, DBSNTTIME) Last read database journal entry transactions per hour Number of database entries not sent Estimated time to process database entries not sent (seconds) Estimated time to process database entries not sent (HHH:MM:SS) Last received database journal entry receiver name Type, length PACKED(15 0) CHAR(10) CHAR(10) Valid values Calculated, 0-9999999999999 User-defined value User-defined value Column headings DB ARRIVAL RATE DB READER JRNRCV DB READER JRNRCV LIBRARY DB READER TYPE AND ENTRY CODE DB READER ENTRY SEQUENCE DB READER ENTRY TIMESTAMP DB READER READ RATE DB SEND BACKLOG DB SEND BACKLOG SECONDS DB SEND BACKLOG HHH:MM:SS DB LAST RECEIVED JRNRCV

RDDBCODE

CHAR(3)

Valid journal entry types and codes

RDDBSEQ

PACKED(10 0)

0-9999999999

RDDBTSP

TIMESTAMP

SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu

RDDBTPH DBSNDBKLG DBSNBKTIME

PACKED(15 0) PACKED(15 0) PACKED(10 0)

Calculated, 0-999999999999999 Calculated, 0-999999999999999 Calculated, 0-9999999999

DBSNBKTIMF

CHAR(10)

Calculated, 0-999:99:99

RCVDBRCV

CHAR(10)

User-defined value

681

MXDGSTS outfile (WRKDG command)

Table 129. MXDGSTS outfile (WRKDG command) Field RCVDBLIB Description Last received database journal entry receiver library name Last received database journal code and entry type Last received database journal entry sequence number Last received database journal entry timestamp Last received database journal entry transactions per hour Number of database apply sessions requested Number of database apply sessions configured Number of database apply session currently active (DBAPYPRC) Type, length CHAR(10) Valid values User-defined value Column headings DB LAST RECEIVED JRNRCV LIB DB LAST RCV TPE AND ENTRY DB LAST RECEIVED SEQUENCE DB LAST RECEIVED TIMESTAMP DB RECEIVE ARRIVAL RATE REQUESTED DB APPLY SESSIONS CONFIGURED DB APPLY SESSIONS ACTIVE DB APPLY SESSIONS DB APPLY BACKLOG DB APPLY TIME SECONDS DB APPLY TIME HHH:MM:SS

RCVDBCODE

CHAR(3)

See the IBM OS/400 Backup and Recovery Guide for journal and entry types 0-9999999999

RCVDBSEQ

PACKED(10 0)

RCVDBTSP

TIMESTAMP

SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu

RCVDBTPH DBAPYREQ

PACKED(15 0) PACKED(5 0)

Calculated, 0-999999999999999 1-6

DBAPYMAX

PACKED(5 0)

1-6

DBAPYACT

PACKED(5 0)

1-6

DBAPYBKLG DBAPBKTIME DBAPBKTIMF

Number of database entries not applied PACKED(15 0) Estimated time to process database entries not applied (seconds) Estimated time to process database entries not applied (HHH:MM:SS) PACKED(10 0) CHAR(10)

Calculated, 0-999999999999999 Calculated, 0-9999999999 Calculated, 0-999:99:99

682

MXDGSTS outfile (WRKDG command)

Table 129. MXDGSTS outfile (WRKDG command) Field DBAPYTPH Description Database apply total transactions per hour Database apply session A status Database apply session A last received sequence number Database apply session A last processed sequence number Database apply session A number of unprocessed entries Database apply session A estimated time to apply unprocessed transactions (seconds) Database apply session A estimated time to apply unprocessed transactions (HHH:MM:SS) Database apply session A number of transactions per hour Database apply session A open commit indicator Database apply session A oldest open commit ID Database apply session A last applied journal code and entry type Type, length PACKED(15 0) Valid values Calculated, 0-999999999999999 Column headings DB APPLY PROCESSING RATE DB APPLY A STATUS DB APPLY A LAST RECEIVED DB APPLY A LAST PROCESSED DB APPLY A BACKLOG DB APPLY A TIME SECONDS DB APPLY A TIME HHH:MM:SS DB APPLY A PROCESSING RATE DB APPLY A COMMIT INDICATOR DB APPLY A CURRENT COMMIT ID DB APPLY A TYPE AND ENTRY

DBASTS DBARCVSEQ

CHAR(10) PACKED(10 0)

*ACTIVE, *INACTIVE, *THRESHOLD, *UNKNOWN 0-9999999999

DBAPRCSEQ

PACKED(10 0)

0-9999999999

DBABKLG DBABKTIME

PACKED(15 0) PACKED(10 0)

Calculated, 0-999999999999999 Calculated, 0-9999999999

DBABKTIMF

CHAR(10)

Calculated, 0-999:99:99

DBATPH

PACKED(15 0)

Calculated, 0-999999999999999

DBAOPNCMT

CHAR(10)

*YES, *NO

DBACMTID

CHAR(10)

Journal-defined commit ID

DBAAPYCODE

CHAR(3)

See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types.

683

MXDGSTS outfile (WRKDG command)

DBAAPYSEQ | Database apply session A last applied sequence number | PACKED(10 0) | 0-9999999999 | DB APPLY A LAST APPLIED
DBAAPYTSP | Database apply session A last applied journal entry timestamp | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | DB APPLY A LAST TIMESTAMP
DBAAPYOBJ | Database apply session A object to which last transaction was applied | CHAR(10) | User-defined object name | DB APPLY A OBJECT NAME
DBAAPYLIB | Database apply session A library of object to which last transaction was applied | CHAR(10) | User-defined object library name | DB APPLY A LIBRARY NAME
DBAAPYMBR | Database apply session A member of object to which last transaction was applied | CHAR(10) | User-defined object member name | DB APPLY A MEMBER NAME
DBAAPYTIME | Database apply session A last applied journal entry clock time difference (seconds) | PACKED(10 0) | Calculated, 0-9999999999 | DB APPLY A TIME DIFF SECONDS
DBAAPYTIMF | Database apply session A last applied journal entry clock time difference (HHH:MM:SS) | CHAR(10) | Calculated, 0-999:99:99 | DB APPLY A TIME DIFF HHH:MM:SS
DBAHLDSEQ | Database apply session A hold MIMIX log sequence number | PACKED(10 0) | 0-9999999999 | DB APPLY A HOLD SEQUENCE
DBxnnnnnnn | Repeated database apply information for the five other apply sessions, with values of x from B-F (all DBx fields match the session A fields) | | All DBx field values match the DBA field values. | All DBx headings match the DBA headings, with x replacing A
CUROBJRCV | Current object journal entry receiver name | CHAR(10) | User-defined value | OBJECT JRNRCV
CUROBJLIB | Current object journal entry receiver library name | CHAR(10) | User-defined value | OBJECT JRNRCV LIBRARY

CUROBJCODE | Current object journal code and entry type | CHAR(3) | See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types. | OBJECT TYPE AND ENTRY CODES
CUROBJSEQ | Current object journal entry sequence number | PACKED(10 0) | 0-9999999999 | OBJECT JOURNAL SEQUENCES
CUROBJTSP | Current object journal entry timestamp | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | OBJECT JRN ENTRY TIMESTAMP
CUROBJTPH | Current object journal entry transactions per hour | PACKED(15 0) | 0-999999999999999 | OBJECT ARRIVAL PER HOUR
RDOBJRCV | Last read object journal entry receiver name (OBJSNTRCV) | CHAR(10) | User-defined value | OBJRDRPRC JRNRCV
RDOBJLIB | Last read object journal entry receiver library name | CHAR(10) | User-defined value | OBJRDRPRC JRNRCV LIBRARY
RDOBJCODE | Last read object journal code and entry type | CHAR(3) | See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types. | OBJRDRPRC TYPE AND ENTRY CODE
RDOBJSEQ | Last read object journal entry sequence number (OBJSNTSEQ) | PACKED(10 0) | 0-9999999999 | OBJRDRPRC JOURNAL SEQUENCE
RDOBJTSP | Last read object journal entry timestamp (OBJSNTDATE, OBJSNTTIME) | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | OBJRDRPRC JRN ENTRY TIMESTAMP
RDOBJTPH | Last read object journal entry transactions per hour | PACKED(15 0) | Calculated, 0-999999999999999 | OBJRDRPRC READ RATE
OBJSNDBKLG | Object entries not processed | PACKED(15 0) | Calculated, 0-999999999999999 | OBJSNDPRC BACKLOG

OBJSNDNUM | Number of object entries sent | PACKED(15 0) | Calculated, 0-999999999999999 | OBJSNDPRC SENT IN TIME SLICE
OBJSBKTIME | Estimated time to process object entries not sent (seconds) | PACKED(10 0) | Calculated, 0-9999999999 | OBJSNDPRC BACKLOG SECONDS
OBJSBKTIMF | Estimated time to process object entries not sent (HHH:MM:SS) | CHAR(10) | Calculated, 0-999:99:99 | OBJSNDPRC BACKLOG HHH:MM:SS
RCVOBJRCV | Last received object journal entry receiver name | CHAR(10) | User-defined value | OBJRCVPRC LAST RCVD JRNRCV
RCVOBJLIB | Last received object journal entry receiver library name | CHAR(10) | User-defined value | OBJRCVPRC LAST RCVD JRNRCV LIB
RCVOBJCODE | Last received object journal code and entry type | CHAR(3) | See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types. | OBJRCVPRC LAST TYPE AND ENTRY
RCVOBJSEQ | Last received object journal entry sequence number | PACKED(10 0) | 0-9999999999 | OBJRCVPRC LAST ENTRY SEQUENCE
RCVOBJTSP | Last received object journal entry timestamp | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | OBJRCVPRC LAST ENTRY TIMESTAMP
RCVOBJTPH | Last received object journal entry transactions per hour | PACKED(15 0) | 0-999999999999999 | OBJRCVPRC RECEIVE RATE
OBJRTVMIN | Minimum number of object retriever processes | PACKED(3 0) | 1-99 | OBJRTVPRC MIN NUMBER OF JOBS
OBJRTVACT | Active number of object retriever processes (OBJRTVPRC) | PACKED(3 0) | 1-99 | OBJRTVPRC NUMBER OF JOBS

OBJRTVMAX | Maximum number of object retriever processes | PACKED(3 0) | 1-99 | OBJRTVPRC MAX NUMBER OF JOBS
OBJRTVBKLG | Number of object retriever entries not processed | PACKED(15 0) | 0-999999999999999 | OBJRTVPRC BACKLOG
OBJRTVCODE | Last processed object retrieve journal code and entry type | CHAR(3) | See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types. | OBJRTVPRC LAST TYPE AND ENTRY
OBJRTVSEQ | Last processed object retrieve journal sequence number | PACKED(10 0) | 0-9999999999 | OBJRTVPRC LAST SEQUENCE
OBJRTVTSP | Last processed object retrieve journal entry timestamp (OBJRTVDATE, OBJRTVTIME) | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | OBJRTVPRC LAST TIMESTAMP
OBJRTVTYPE | Type of object last processed by object retrieve | CHAR(10) | Object type of user-defined object | OBJRTVPRC LAST OBJ TYPE
OBJRTVOBJ | Qualified name of object last processed by object retrieve | CHAR(1024) Note: Variable length of 75. | User-defined object name and path | OBJRTVPRC LAST OBJ NAME
CNRSNDMIN | Minimum number of container send processes | PACKED(3 0) | 1-99 | CNRSNDPRC MIN NUMBER OF JOBS
CNRSNDACT | Active number of container send processes (CNRSNDPRC) | PACKED(3 0) | 1-99 | CNRSNDPRC NUMBER OF JOBS
CNRSNDMAX | Maximum number of container send processes | PACKED(3 0) | 1-99 | CNRSNDPRC MAX NUMBER OF JOBS
CNRSNDBKLG | Number of container send entries not processed | PACKED(15 0) | 0-999999999999999 | CNRSNDPRC BACKLOG

CNRSNDNUM | Number of containers sent | PACKED(15 0) | 0-999999999999999 | CNRSNDPRC NUMBER SENT
CNRSNDCPH | Containers per hour | PACKED(15 0) | 0-999999999999999 | CNRSNDPRC RATE
CNRSNDCODE | Last processed container send journal code and entry type | CHAR(3) | See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types. | CNRSNDPRC LAST TYPE AND ENTRY
CNRSNDSEQ | Last processed container send journal sequence number (CNRSNTSEQ) | PACKED(10 0) | 0-9999999999 | CNRSNDPRC LAST SEQUENCE
CNRSNDTSP | Last processed container send journal entry timestamp (CNRSNTDATE, CNTRSNTTIME) | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | CNRSNDPRC LAST TIMESTAMP
CNRSNDTYPE | Type of object last processed by container send | CHAR(10) | Object type of user-defined object | CNRSNDPRC LAST OBJ TYPE
CNRSNDOBJ | Qualified name of object last processed by container send | CHAR(1024) Note: Variable length of 75. | User-defined object name and path | CNRSNDPRC LAST OBJ NAME
OBJAPYMIN | Minimum number of object apply processes | PACKED(3 0) | 1-99 | OBJAPYPRC MIN NUMBER OF JOBS
OBJAPYACT | Active number of object apply processes (OBJAPYPRC) | PACKED(3 0) | 1-99 | OBJAPYPRC NUMBER OF JOBS
OBJAPYMAX | Maximum number of object apply processes | PACKED(3 0) | 1-99 | OBJAPYPRC MAX NUMBER OF JOBS
OBJAPYBKLG | Number of object apply entries not processed | PACKED(15 0) | Calculated, 0-999999999999999 | OBJAPYPRC BACKLOG

OBJAPYACTA | Number of active objects | PACKED(15 0) | Calculated, 0-999999999999999 | OBJAPYPRC ACTIVE BACKLOG
OBJAPYNUM | Number of object entries applied | PACKED(15 0) | Calculated, 0-999999999999999 | OBJAPYPRC APPLIED IN TIME SLICE
OBJABKTIME | Estimated time to process object entries not applied (seconds) | PACKED(10 0) | Calculated, 0-9999999999 | OBJAPYPRC BACKLOG SECONDS
OBJABKTIMF | Estimated time to process object entries not applied (HHH:MM:SS) | CHAR(10) | Calculated, 0-999:99:99 | OBJAPYPRC BACKLOG HHH:MM:SS
OBJAPYTPH | Number of object entries applied per hour | PACKED(15 0) | Calculated, 0-999999999999999 | OBJAPYPRC RATE
OBJAPYCODE | Last applied object journal code and entry type | CHAR(3) | See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types. | OBJAPYPRC LAST TYPE AND ENTRY
OBJAPYSEQ | Last applied object journal sequence number (OBJAPYSEQ) | PACKED(10 0) | 0-9999999999 | OBJAPYPRC LAST SEQUENCE
OBJAPYTSP | Last applied object journal entry timestamp (OBJAPYDATE, OBJAPYTIME) | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | OBJAPYPRC LAST TIMESTAMP
OBJAPYTYPE | Type of object last processed by object apply | CHAR(10) | Object type of user-defined object | OBJAPYPRC LAST OBJ TYPE
OBJAPYOBJ | Qualified name of object last processed by object apply | CHAR(1024) Note: Variable length of 75. | User-defined object name and path | OBJAPYPRC LAST OBJ NAME
RJINUSE | Remote journal (RJ) link used by data group | CHAR(10) | *YES, *NO | RJ LINK USED BY DG

RJSRCDFN | RJ link source journal definition | CHAR(10) | User-defined journal definition name | RJ LINK SOURCE JRNDFN
RJSRCSYS | RJ link source system | CHAR(8) | User-defined system name | RJ LINK SOURCE SYSTEM
RJTGTDFN | RJ link target journal definition | CHAR(10) | User-defined journal definition name | RJ LINK TARGET JRNDFN
RJTGTSYS | RJ link target system | CHAR(8) | User-defined system name | RJ LINK TARGET SYSTEM
RJPRIRDB | RJ link primary RDB entry | CHAR(18) | User-defined or MIMIX generated RDB name | RJ PRIMARY RDB ENTRY
RJPRITFR | RJ link primary transfer definition name | CHAR(10) | User-defined transfer definition name | RJ PRIMARY TFRDFN
RJSECRDB | RJ link secondary RDB entry | CHAR(18) | User-defined or MIMIX generated RDB name | RJ SECONDARY RDB ENTRY
RJSECTFR | RJ link secondary transfer definition name | CHAR(10) | User-defined transfer definition name | RJ SECONDARY TFRDFN
RJSTATE | RJ link state | CHAR(10) | BLANK, *FAILED, *CTLINACT, *INACTPEND, *ASYNC, *SYNC, *ASYNPEND, *SYNCPEND, *NOTBUILT, *UNKNOWN | RJ LINK STATE
RJDLVRY | RJ link delivery mode | CHAR(10) | *ASYNC, *SYNC, BLANK | RJ LINK DELIVERY MODE
RJSNDPTY | RJ link send task priority | PACKED(3 0) | 0-99, 0 = *SYSDFT | RJ LINK SEND PRIORITY

RJRDRSTS | RJ reader task status | CHAR(10) | BLANK, *UNKNOWN, *ACTIVE, *INACTIVE, *THRESHOLD | RJREADER STATUS
RJSMONSTS | RJ link source monitor status | CHAR(10) | BLANK, *UNKNOWN, *ACTIVE, *INACTIVE | RJ SOURCE MONITOR
RJTMONSTS | RJ link target monitor status | CHAR(10) | BLANK, *UNKNOWN, *ACTIVE, *INACTIVE | RJ TARGET MONITOR
ITECNT | Total IFS tracking entries | PACKED(10 0) | 0-999999 | TOTAL IFS TRACKING ENTRIES
ITEACTIVE | Active IFS tracking entries | PACKED(10 0) | 0-999999 | ACTIVE IFS TRACKING ENTRIES
ITENOTACT | Inactive IFS tracking entries | PACKED(10 0) | 0-999999 | INACT IFS TRACKING ENTRIES
ITENOTJRNS | IFS tracking entries not journaled on source | PACKED(10 0) | 0-999999 | IFS TE NOT JOURNALED ON SOURCE
ITENOTJRNT | IFS tracking entries not journaled on target | PACKED(10 0) | 0-999999 | IFS TE NOT JOURNALED ON TARGET
ITEHLDERR | IFS tracking entries held due to error | PACKED(10 0) | 0-999999 | IFS TE HELD FOR ERRORS
ITEHLDOTHR | IFS tracking entries held for other reasons | PACKED(10 0) | 0-999999 | IFS TE HELD FOR OTHER
OTECNT | Total object tracking entries | PACKED(10 0) | 0-999999 | TOTAL OBJ TRACKING ENTRIES
OTEACTIVE | Active object tracking entries | PACKED(10 0) | 0-999999 | ACTIVE OBJ TRACKING ENTRIES

OTENOTACT | Inactive object tracking entries | PACKED(10 0) | 0-999999 | INACT OBJ TRACKING ENTRIES
OTENOTJRNS | Object tracking entries not journaled on source | PACKED(10 0) | 0-999999 | OBJ TE NOT JOURNALED ON SOURCE
OTENOTJRNT | Object tracking entries not journaled on target | PACKED(10 0) | 0-999999 | OBJ TE NOT JOURNALED ON TARGET
OTEHLDERR | Object tracking entries held due to error | PACKED(10 0) | 0-999999 | OBJ TE HELD FOR ERRORS
OTEHLDOTHR | Object tracking entries held for other reasons | PACKED(10 0) | 0-999999 | OBJ TE HELD FOR OTHER
JRNCACHETA | Journal cache target | CHAR(10) | *YES, *NO, *UNKNOWN | JOURNAL CACHE TARGET
JRNCACHESA | Journal cache source | CHAR(10) | *YES, *NO, *UNKNOWN | JOURNAL CACHE SOURCE
JRNSTATETA | Journal state target | CHAR(10) | *ACTIVE, *STANDBY, *INACTIVE | JOURNAL STATE TARGET
JRNSTATESA | Journal state source | CHAR(10) | *ACTIVE, *STANDBY, *INACTIVE | JOURNAL STATE SOURCE
JRNCACHETS | Journal cache status - target | CHAR(10) | *ERROR, *NONE, *OK, *WARNING, *NOFEATURE, *UNKNOWN | JRN CACHE TARGET STATUS
JRNCACHESS | Journal cache status - source | CHAR(10) | *ERROR, *NONE, *OK, *WARNING, *NOFEATURE, *UNKNOWN | JRN CACHE SOURCE STATUS

JRNSTATETS | Journal state target status | CHAR(10) | *ERROR, *NONE, *OK, *WARNING, *NOFEATURE, *UNKNOWN | JOURNAL STATE TARGET
JRNSTATESS | Journal state source status | CHAR(10) | *ERROR, *NONE, *OK, *WARNING, *NOFEATURE, *UNKNOWN | JOURNAL STATE SOURCE
RJTGTRCV | Last RJ target journal entry receiver name | CHAR(10) | User-defined value | RJ TGT JRNRCV
RJTGTLIB | Last RJ target journal entry receiver library name | CHAR(10) | User-defined value | RJ TGT JRNRCV LIBRARY
RJTGTCOCDE | Last RJ target journal code and entry type | CHAR(3) | Valid journal entry types and codes | RJTGT TYPE AND ENTRY CODE
RJTGTSEQ | Last RJ target journal entry sequence number | PACKED(10 0) | 0-9999999999 | RJ TGT ENTRY SEQUENCE
RJTGTTSP | Last RJ target journal entry timestamp | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | RJ TGT ENTRY TIMESTAMP
OBJRTVUCS | Qualified name of object last processed by object retrieve - Unicode | GRAPHIC(512) VARLEN(75) CCSID(13488) | User-defined object name and path | LAST OBJ RETRIEVED (UNICODE)
CNRSNDUCS | Qualified name of object last processed by container send - Unicode | GRAPHIC(512) VARLEN(75) CCSID(13488) | User-defined object name and path | LAST OBJ SENT (UNICODE)
OBJAPYUCS | Qualified name of object last processed by object apply - Unicode | GRAPHIC(512) VARLEN(75) CCSID(13488) | User-defined object name and path | LAST OBJ APPLIED (UNICODE)
FECNT2 | Total database file entries | PACKED(10 0) | 0-9999999999 | TOTAL DB FILE ENTRIES2

FEACTIVE2 | Active database file entries (FEACT) | PACKED(10 0) | 0-9999999999 | ACTIVE DB FILE ENTRIES2
FENOTACT2 | Inactive database file entries | PACKED(10 0) | 0-9999999999 | INACTIVE DB FILE ENTRIES2
FENOTJRNS2 | Database file entries not journaled on source | PACKED(10 0) | 0-9999999999 | FILES NOT JOURNALED ON SOURCE2
FENOTJRNT2 | Database file entries not journaled on target | PACKED(10 0) | 0-9999999999 | FILES NOT JOURNALED ON TARGET2
FEHLDERR2 | Database file entries held due to error | PACKED(10 0) | 0-9999999999 | FILES HELD FOR ERRORS2
FEHLDOTHR2 | Database file entries held for other reasons (FEHLD) | PACKED(10 0) | 0-9999999999 | FILES HELD FOR OTHERS2
FECMPRPR2 | Database file entries being repaired | PACKED(10 0) | 0-9999999999 | FILES BEING REPAIRED2
RJLNKTHLDM | RJ link threshold exceeded (time in minutes) | PACKED(4 0) | 0-9999 | RJLNK THRESHOLD (TIME IN MIN)
RJLNKTHLDE | RJ link threshold exceeded (number of journal entries) | PACKED(7 0) | 0-9999999 | RJLNK THRESHOLD (NBR OF JRNE)
DBRDRTHLDM | DB send/reader threshold exceeded (time in minutes) | PACKED(4 0) | 0-9999 | DBSND/DBRDR THRESHOLD (TIME IN MIN)
DBRDRTHLDE | DB send/reader threshold exceeded (number of journal entries) | PACKED(7 0) | 0-9999999 | DBSND/DBRDR THRESHOLD (NBR OF JRNE)

DBAPYATHLD | DB apply A threshold exceeded (number of journal entries) | PACKED(5 0) | 0-99999 | DB APPLY A THRESHOLD
DBAPYBTHLD | DB apply B threshold exceeded (number of journal entries) | PACKED(5 0) | 0-99999 | DB APPLY B THRESHOLD
DBAPYCTHLD | DB apply C threshold exceeded (number of journal entries) | PACKED(5 0) | 0-99999 | DB APPLY C THRESHOLD
DBAPYDTHLD | DB apply D threshold exceeded (number of journal entries) | PACKED(5 0) | 0-99999 | DB APPLY D THRESHOLD
DBAPYETHLD | DB apply E threshold exceeded (number of journal entries) | PACKED(5 0) | 0-99999 | DB APPLY E THRESHOLD
DBAPYFTHLD | DB apply F threshold exceeded (number of journal entries) | PACKED(5 0) | 0-99999 | DB APPLY F THRESHOLD
OBJSNDTHDM | Object send threshold exceeded (time in minutes) | PACKED(4 0) | 0-9999 | OBJSND THRESHOLD (TIME IN MIN)
OBJSNDTHDE | Object send threshold exceeded (number of journal entries) | PACKED(7 0) | 0-9999999 | OBJSND THRESHOLD (NBR OF JRNE)
OBJRTVTHDE | Object retrieve threshold exceeded (number of activity entries) | PACKED(5 0) | 0-99999 | OBJRTV THRESHOLD
CNRSNDTHDE | Container send threshold exceeded (number of activity entries) | PACKED(5 0) | 0-99999 | CNRSND THRESHOLD
OBJAPYTHDE | Object apply threshold exceeded (number of activity entries) | PACKED(5 0) | 0-99999 | OBJAPY THRESHOLD
RJBKLG | RJ backlog | PACKED(15 0) | Calculated, 0-999999999999 | RJ BACKLOG

Updated for 5.0.13.00.

WRKDG outfile SELECT statement examples


The following example SELECT statements query a WRKDG outfile and produce various reports. The first three examples show how to use wildcard characters to produce reports about specific data groups in the outfile. The last example adds a few fields and a time-ordered sequence to produce a report with additional data group information. These are basic examples; there may be additional formatting options that you want to apply to your output.

WRKDG outfile example 1


This SELECT statement uses a single wildcard character to retrieve and display all of the data group names that start with an A and have zero or more characters following the A. The records are listed in record arrival order. The statement would be entered as follows:
SELECT DGDFN, DGSYS1, DGSYS2
   FROM library/filename
   WHERE DGDFN LIKE 'A%'
The outfile report produced follows:
DGN        SYS      SYS
ACCTPAY    CHICAGO  LONDON
ACCTREC    CHICAGO  LONDON
APP1       CHICAGO  LONDON
APP2       CHICAGO  LONDON

WRKDG outfile example 2


This SELECT statement uses wildcard characters to retrieve all data group names that are in the outfile. The records are listed in record arrival order. The statement would be entered as follows:
SELECT DGDFN, DGSYS1, DGSYS2
   FROM library/filename
   WHERE DGDFN LIKE '%%'
The outfile report produced follows:
DGN        SYS      SYS
INVENTORY  CHICAGO  LONDON
PAYROLL    CHICAGO  LONDON
ACCTPAY    CHICAGO  LONDON
ORDERS     CHICAGO  LONDON
ACCTREC    CHICAGO  LONDON
APP1       CHICAGO  LONDON
APP2       CHICAGO  LONDON
SUPERAPP   CHICAGO  LONDON

WRKDG outfile example 3


This SELECT statement uses wildcard characters to find all data groups with names that contain an A. The records are listed in record arrival order. The statement would be entered as follows:
SELECT DGDFN, DGSYS1, DGSYS2
   FROM library/filename
   WHERE DGDFN LIKE '%A%'
The outfile report produced follows:
DGN        SYS      SYS
PAYROLL    CHICAGO  LONDON
ACCTPAY    CHICAGO  LONDON
ACCTREC    CHICAGO  LONDON
APP1       CHICAGO  LONDON
APP2       CHICAGO  LONDON
SUPERAPP   CHICAGO  LONDON

WRKDG outfile example 4


This SELECT statement selects all records that have a data group name containing an A. The records are listed in data group name order, with duplicate data group names listed in ascending order by the time the entry was placed in the outfile. Additionally, the timestamp when the entry was placed in the file and the current top sequence number of the object journal are listed with each entry. The statement would be entered as follows:
SELECT DGDFN, DGSYS1, DGSYS2, ENTRYTSP, CUROBJSEQ
   FROM library/filename
   WHERE DGDFN LIKE '%A%'
   ORDER BY DGDFN, DGSYS1, DGSYS2, ENTRYTSP
The outfile report produced follows:
DGN        SYS      SYS     ENTRYTSP                     SEQN
PAYROLL    CHICAGO  LONDON  2001-02-06-11.09.59.842000   29,034,877
ACCTPAY    CHICAGO  LONDON  2001-02-06-11.24.05.851000   29,035,093
ACCTREC    CHICAGO  LONDON  2001-02-06-11.09.59.842000   29,034,879
APP1       CHICAGO  LONDON  2001-02-06-11.24.05.851000   29,035,095
APP2       CHICAGO  LONDON  2001-02-06-14.24.49.793000   29,051,130
SUPERAPP   CHICAGO  LONDON  2001-02-06-11.09.59.842000   0
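
WRKDG outfile example 5

The status fields in this outfile can also be used for simple backlog monitoring. The following statement is a hypothetical example; it assumes the same library/filename outfile used above and an arbitrary limit of 10,000 entries, and lists any data group whose database apply or object apply backlog exceeds that limit:
SELECT DGDFN, DGSYS1, DGSYS2, DBABKLG, OBJAPYBKLG
   FROM library/filename
   WHERE DBABKLG > 10000 OR OBJAPYBKLG > 10000
   ORDER BY DGDFN
Any of the threshold fields described in Table 129, such as RJLNKTHLDE or DBRDRTHLDE, can be substituted in the WHERE clause to focus on a specific process.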

MXDGOBJE outfile (WRKDGOBJE command)


Table 130. MXDGOBJE outfile (WRKDGOBJE command)

Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
OBJ1 | System 1 object | CHAR(10) | User-defined name, *ALL | SYSTEM 1 OBJECT
LIB1 | System 1 library | CHAR(10) | User-defined name, generic* | SYSTEM 1 LIBRARY
TYPE | Object type | CHAR(10) | Refer to the OM5100P file for the list of valid values | OBJECT TYPE
OBJATR | Object attribute | CHAR(10) | Refer to the OM5200P file for the list of valid object attributes | OBJECT ATTRIBUTE
OBJ2 | System 2 object | CHAR(10) | User-defined name, *ALL, generic*, *OBJ1 | SYSTEM 2 OBJECT
LIB2 | System 2 library | CHAR(10) | User-defined name, generic*, *LIB1 | SYSTEM 2 LIBRARY
OBJAUD | Object auditing value (configured value) | CHAR(10) | *CHANGE, *ALL, *NONE | OBJECT AUDITING VALUE
PRCTYPE | Process type | CHAR(10) | *INCLD, *EXCLD | PROCESS TYPE
COOPDB | Cooperate with database | CHAR(10) | *YES, *NO | COOPERATE WITH DATABASE
REPSPLF | Replicate spooled files | CHAR(10) | *YES, *NO | REPLICATE SPOOLED FILES

KEEPSPLF | Keep deleted spooled files | CHAR(10) | *YES, *NO | KEEP DLTD SPOOLED FILES
OBJRTVDLY | Retrieve delay (Object retrieve processing) | CHAR(10) | 0-999, *DGDFT | OBJRTVPRC DELAY
USRPRFSTS | User profile status | CHAR(10) | *DGDFT, *DISABLED, *ENABLED, *SRC, *TGT | USER PROFILE STATUS
JRNIMG | Journal image (File entry options) | CHAR(10) | *DGDFT, *AFTER, *BOTH | FEOPT JOURNAL IMAGE
OPNCLO | Omit open and close entries (File entry options) | CHAR(10) | *DGDFT, *YES, *NO | FEOPT OMIT OPEN CLOSE
REPTYPE | Replication type (File entry options) | CHAR(10) | *DGDFT, *POSITION, *KEYED | FEOPT REPLICATION TYPE
APYLOCK | Lock member during apply (File entry options) | CHAR(10) | *DGDFT, *YES, *NO | FEOPT LOCK MBR ON APPLY
APYSSN | Apply session (File entry options) | CHAR(10) | A-F, *DGDFT, *ANY | FEOPT CURRENT APYSSN
CRCLS | Collision resolution (File entry options) | CHAR(10) | User-defined name, *DGDFT, *HLDERR, *AUTOSYNC | FEOPT COLLISION RESOLUTION
DSBTRG | Disable triggers during apply (File entry options) | CHAR(10) | *YES, *NO, *DGDFT | FEOPT DISABLE TRIGGERS
PRCTRG | Process trigger entries (File entry options) | CHAR(10) | *YES, *NO, *DGDFT | FEOPT PROCESS TRIGGERS
PRCCST | Process constraint entries (File entry options) | CHAR(10) | *YES | FEOPT PROCESS CONSTRAINTS
LIB1ASP | System 1 library ASP number | PACKED(3 0) | 0 = *SRCLIB, 1-32, -1 = *ASPDEV | SYSTEM 1 LIBRARY ASP

LIB1ASPD | System 1 library ASP device (File entry options) | CHAR(10) | *LIB1ASP, user-defined name | SYSTEM 1 LIBRARY ASP DEV
LIB2ASP | System 2 library ASP number | PACKED(3 0) | 0 = *SRCLIB, 1-32, -1 = *ASPDEV | SYSTEM 2 LIBRARY ASP
LIB2ASPD | System 2 library ASP device (File entry options) | CHAR(10) | *LIB2ASP, user-defined name | SYSTEM 2 LIBRARY ASP DEV
NBROMTDTA | Number of omit content (OMTDTA) values | PACKED(3 0) | 1-10 | NUMBER OF OMIT CONTENT VALUES
OMTDTA | Omit content values (File entry options) | CHAR(100) | *NONE, *FILE, *MBR (10 characters each) | OMIT CONTENT
SPLFOPT | Spooled file options | CHAR(10) | *NONE, *HLD, *HLDONSAV | SPOOLED FILE OPTIONS
NUMCOOPTYP | Number of cooperating object types | PACKED(3 0) | 0-999 | NUMBER OF COOPERATING OBJECT TYPES
COOPTYPE | Cooperating object types | CHAR(100) | *FILE, *DTAARA, *DTAQ | COOPERATING OBJECT TYPES
NBRATROPT | Number of attribute options | PACKED(3 0) | -1, 1-50 | NUMBER OF ATTRIBUTE
ATROPT | Attribute options | CHAR(500) | *ALL | ATTRIBUTE

Updated for 5.0.08.00.
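
SELECT statements like those shown for the WRKDG outfile can also be run against this outfile. The following hypothetical example, which assumes an outfile created in library/filename and uses the data group name APP1 from the earlier examples, lists the object entries that are excluded from replication:
SELECT DGDFN, OBJ1, LIB1, TYPE, PRCTYPE
   FROM library/filename
   WHERE DGDFN = 'APP1' AND PRCTYPE = '*EXCLD'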

MXDGTSP outfile (WRKDGTSP command)


Table 131. MXDGTSP outfile (WRKDGTSP command)

Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
DTASRC | Data source | CHAR(10) | *SYS1, *SYS2 | DATA SOURCE
APYSSN | Apply session | CHAR(10) | A-F | APPLY SESSION
CRTTSP | Create timestamp (YYYY-MM-DD-HH.MM.SS.mmmmmm) | TIMESTAMP | SAA timestamp, normalized to the target system (timestamp when the journal entry is created) | CREATE TIMESTAMP
SNDTSP | Send timestamp (YYYY-MM-DD-HH.MM.SS.mmmmmm) | TIMESTAMP | SAA timestamp, normalized to the target system (the timestamp value is set equal to the create timestamp (CRTTSP) when using remote journaling; for non-remote journaling, this is the time the journal entry is read on the source system and sent by the MIMIX send process) | SEND TIMESTAMP
RCVTSP | Receive timestamp (YYYY-MM-DD-HH.MM.SS.mmmmmm) | TIMESTAMP | SAA timestamp, normalized to the target system (timestamp when the journal entry is received by the journal reader on the target system when using remote journaling, or received on the target system from the MIMIX send process for non-remote journaling) | RECEIVE TIMESTAMP
APYTSP | Apply timestamp (YYYY-MM-DD-HH.MM.SS.mmmmmm) | TIMESTAMP | SAA timestamp, normalized to the target system (timestamp when the journal entry is applied on the target system) | APPLY TIMESTAMP

CRTSNDET | Elapsed time between create and send process (milliseconds) | PACKED(10 0) | Calculated, 0-9999999999 (elapsed time between generation of the timestamps and the time the entry is received from the MIMIX send process on the target system for non-remote journaling; for remote journaling, the create and send times are set equal, so the elapsed time is 0) | SEND ELAPSED TIME
SNDRCVET | Elapsed time between send and receive process (milliseconds) | PACKED(10 0) | Calculated, 0-9999999999 (elapsed time between the send time and the receive time) | RECEIVE ELAPSED TIME
RCVAPYET | Elapsed time between receive and apply process (milliseconds) | PACKED(10 0) | Calculated, 0-9999999999 (elapsed time between the receive time and the apply time) | APPLY ELAPSED TIME
CRTAPYET | Elapsed time between create and apply timestamps (milliseconds) | PACKED(10 0) | Calculated, 0-9999999999 (elapsed time between generation of the timestamp and the time when the journal entry is applied on the target system) | TOTAL ELAPSED TIME
SYSTDIFF | The time differential between the source and target systems, where time differential = source time - target time | PACKED(10 0) | -9999999999-0, 0-9999999999 | TIME DIFFERENCE
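
Because the elapsed-time fields are expressed in milliseconds, a query can summarize end-to-end replication latency by apply session. The following is a hypothetical example, assuming an outfile created in library/filename:
SELECT DGDFN, APYSSN, MAX(CRTAPYET), AVG(CRTAPYET)
   FROM library/filename
   GROUP BY DGDFN, APYSSN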

MXJRNDFN outfile (WRKJRNDFN command)


Table 132. MXJRNDFN outfile (WRKJRNDFN command)

Field | Description | Type, length | Valid values | Column headings
JRNDFN | Journal definition name (Journal definition) | CHAR(10) | User-defined journal definition name | JRNDFN NAME
JRNSYS | System name (Journal definition) | CHAR(8) | User-defined system name | JRNDFN SYSTEM
JRN | Journal name (Journal) | CHAR(10) | Journal, *JRNDFN | JOURNAL
JRNLIB | Journal library (Journal) | CHAR(10) | Journal library | JOURNAL LIBRARY
JRNLIBASP | Journal library ASP | PACKED(3 0) | Numeric value: 0 = *CRTDFT, 1-32, -1 = *ASPDEV | JOURNAL LIBRARY ASP
JRNRCVPFX | Journal receiver prefix (Journal receiver prefix) | CHAR(10) | *GEN, user-defined name | JRNRCV PREFIX
JRNRCVLIB | Journal receiver library (Journal receiver prefix) | CHAR(10) | User-defined name, *JRNLIB | JRNRCV LIBRARY
RCVLIBASP | Journal receiver library ASP | PACKED(3 0) | Numeric value: 0 = *CRTDFT, 1-32, -1 = *ASPDEV | JRNRCV LIBRARY ASP
CHGMGT | Receiver change management | CHAR(20) | 2 x CHAR(10): *NONE, *TIME, *SIZE, *SYSTEM. The only valid combinations are: *TIME, *SIZE, *TIME *SIZE, *SYSTEM | RECEIVER CHANGE MANAGEMENT
THRESHOLD | Receiver threshold size (MB) | PACKED(7 0) | 10-1000000 | RECEIVER THRESHOLD SIZE (MB)

RCVTIME | Time of day to change receiver | ZONED(6 0) | Time | RECEIVER CHANGE TIME
RESETTHLD | Reset sequence threshold | PACKED(5 0) | 10-1000000 | RESET SEQUENCE THRESHOLD
DLTMGT | Receiver delete management | CHAR(10) | *YES, *NO | RECEIVER DELETE MANAGEMENT
KEEPUNSAV | Keep unsaved journal receivers | CHAR(10) | *YES, *NO | KEEP UNSAVED JRNRCV
KEEPRCVCNT | Keep journal receiver count | PACKED(3 0) | 0-999 | KEEP JRNRCV COUNT
KEEPJRNRCV | Keep journal receivers (days) | PACKED(3 0) | 0-999 | KEEP JRNRCV (DAYS)
TEXT | Description | CHAR(50) | *BLANK, user-defined text | DESCRIPTION
JRNRCVASP | Journal receiver ASP | PACKED(3 0) | Numeric value (0 = *LIBASP) | JRNRCV ASP
MSGQ | Threshold message queue | CHAR(10) | User-defined name, *JRNDFN | THRESHOLD MSGQ
MSGQLIB | Threshold message queue library | CHAR(10) | *JRNLIB, user-defined name (see field JRNLIB if this field contains *JRNLIB) | THRESHOLD MSGQ LIBRARY
RJLNK | Remote journal link | CHAR(10) | *NONE, *SOURCE, *TARGET | RJ LINK
EXITPGM | Exit program | CHAR(10) | *NONE, user-defined name | EXIT PROGRAM
EXITPGMLIB | Exit program library | CHAR(10) | User-defined name | EXIT PROGRAM LIBRARY

MINENTDTA | Minimal journal entry data | CHAR(100) | Array of 10 CHAR(10) fields: *DTAARA, *FLDBDY, *FILE, *NONE | MIN JRN ENTRY DATA
REQTHLDSIZ | Requested threshold size | PACKED(7 0) | Numeric value | REQUESTED THRESHOLD SIZE
SAVTYPE | Save type | CHAR(10) | | SAVE TYPE
JRNLAGLMT | Journaling lag limit (seconds) | PACKED(3 0) | | JOURNALING LAG LIMIT (SEC)
JRNLIBASPD | Journal library ASP device | CHAR(10) | *JRNLIBASP, user-defined name | JOURNAL LIBRARY ASP DEV
RCVLIBASPD | Journal receiver library ASP device | CHAR(10) | *RCVLIBASP, user-defined name | JRNRCV LIBRARY ASP DEV
TGTSTATE | Target journal state | CHAR(10) | *ACTIVE, *STANDBY | TARGET JOURNAL STATE
JRNCACHE | Journal cache option | CHAR(10) | *SRC, *TGT, *BOTH, *NONE | JOURNAL CACHING

Updated for 5.0.02.00.
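
A query over this outfile can identify journal definitions whose receivers MIMIX does not delete automatically. The following is a hypothetical example, assuming an outfile created in library/filename:
SELECT JRNDFN, JRNSYS, JRN, JRNLIB, THRESHOLD
   FROM library/filename
   WHERE DLTMGT = '*NO'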

MXRJLNK outfile (WRKRJLNK command)


Table 133. MXRJLNK outfile (WRKRJLNK command)

Field | Description | Type, length | Valid values | Column headings
SRCJRNDFN | Journal definition name on source | CHAR(10) | Journal definition name | SOURCE JOURNAL DEFINITION
SRCSYS | Source system name of journal definition | CHAR(8) | System name | SOURCE SYSTEM
SRCJEJRNA | Source journal library ASP | DEC(3) | 0 = *CRTDFT, -1 = *ASPDEV | SRC JRN LIBRARY ASP
SRCJEJLAD | Source journal library ASP device | CHAR(10) | *JRNLIBASP, *ASPDEV, ASP primary group name | SRC JRN LIBRARY ASP DEV
SRCJERCVA | Source journal receiver library ASP | DEC(3) | 0 = *CRTDFT, -1 = *ASPDEV | SRC JRNRCV LIBRARY ASP
SRCJERLAD | Source journal receiver library ASP device | CHAR(10) | *RCVLIBASP, *ASPDEV, ASP primary group name | SRC JRNRCV LIBRARY ASP DEV
TGTJRNDFN | Journal definition name on target | CHAR(10) | Journal definition name | TARGET JOURNAL DEFINITION
TGTSYS | Target system name of journal definition | CHAR(8) | System name | TARGET SYSTEM
TGTJEJRNA | Target journal library ASP | DEC(3) | 0 = *CRTDFT, -1 = *ASPDEV | TGT JRN LIBRARY ASP

TGTJEJLAD | Target journal library ASP device | CHAR(10) | *JRNLIBASP, *ASPDEV, ASP primary group name | TGT JRN LIBRARY ASP DEV
TGTJERCVA | Target journal receiver library ASP | DEC(3) | 0 = *CRTDFT, -1 = *ASPDEV | TGT JRNRCV LIBRARY ASP
TGTJERLAD | Target journal receiver library ASP device | CHAR(10) | *RCVLIBASP, *ASPDEV, ASP primary group name | TGT JRNRCV LIBRARY ASP DEV
RJMODE | Delivery mode of remote journaling | CHAR(10) | *ASYNC, *SYNC, blank | RJ MODE (DELIVERY)
RJSTATE | Remote journal state | CHAR(10) | *ASYNC, *ASYNCPEND, *SYNC, *SYNCPEND, *INACTIVE, *CTLINACT, *FAILED, *NOTBUILT, *UNKNOWN | STATE
PRITFRDFN | Primary transfer definition | CHAR(10) | Transfer definition name, *SYSDFN | PRIMARY TFRDFN
SECTFRDFN | Secondary transfer definition | CHAR(10) | Transfer definition name, *SYSDFN, *NONE | SECONDARY TFRDFN
PRIORITY | Async process priority | PACKED(3 0) | 0 = *SYSDFN, 1-99 | PRIORITY
TEXT | Text description | CHAR(50) | Plain text | TEXT
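
The RJSTATE field can be queried to find remote journal links that are not actively delivering journal entries. The following is a hypothetical example, assuming an outfile created in library/filename:
SELECT SRCJRNDFN, SRCSYS, TGTJRNDFN, TGTSYS, RJSTATE
   FROM library/filename
   WHERE RJSTATE NOT IN ('*ASYNC', '*SYNC')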

MXSYSDFN outfile (WRKSYSDFN command)


Table 134. MXSYSDFN outfile (WRKSYSDFN command)

Field | Description | Type, length | Valid values | Column headings
SYSDFN | System definition | CHAR(8) | User-defined name | SYSDFN NAME
TYPE | System type | CHAR(10) | *MGT, *NET | SYSTEM TYPE
PRITFRDFN | Configured primary transfer definition | CHAR(10) | User-defined name | CONFIGURED PRITFRDFN
SECTFRDFN | Configured secondary transfer definition | CHAR(10) | User-defined name | CONFIGURED SECTFRDFN
CLUMBR | Cluster member | CHAR(10) | *YES, *NO | CLUSTER MEMBER
CLUTFRDFN | Cluster transfer definition | CHAR(20) | User-defined name, *PRITFRDFN, *SECTFRDFN (refer to the PRITFRNAME, PRITFRSYS1, and PRITFRSYS2 fields if this field contains *PRITFRDFN) | CLUSTER TFRDFN
PRIMSGQ | Primary message queue (Primary message handling) | CHAR(10) | User-defined name | PRIMARY MSGQ
PRIMSGQLIB | Primary message queue library (Primary message handling) | CHAR(10) | User-defined name, *LIBL | PRIMARY MSGQ LIB
PRISEV | Primary message queue severity (Primary message handling) | CHAR(10) | *SEVERE, *INFO, *WARNING, *ERROR, *TERM, *ALERT, *ACTION, 0-99 | PRIMARY MSGQ SEV
PRISEVNBR | Primary message queue severity number (Primary message handling) | PACKED(3 0) | 0-99 | PRIMARY MSGQ SEV NBR
PRIINFLVL | Primary message queue information level (Primary message handling) | CHAR(10) | *SUMMARY, *ALL | PRIMARY MSGQ INFO LEVEL
SECMSGQ | Secondary message queue (Secondary message handling) | CHAR(10) | User-defined name | SECONDARY MSGQ

SECMSGQLIB | Secondary message queue library (Secondary message handling) | CHAR(10) | User-defined name, *LIBL | SECONDARY MSGQ LIB
SECSEV | Secondary message queue severity (Secondary message handling) | CHAR(10) | *SEVERE, *INFO, *WARNING, *ERROR, *TERM, *ALERT, *ACTION, 0-99 | SECONDARY MSGQ SEV
SECSEVNBR | Secondary message queue severity number (Secondary message handling) | PACKED(3 0) | 0-99 | SECONDARY MSGQ SEV NBR
SECINFLVL | Secondary message queue information level (Secondary message handling) | CHAR(10) | *SUMMARY, *ALL | SECONDARY MSGQ INFO LEVEL
TEXT | Description | CHAR(50) | *BLANK, user-defined text | DESCRIPTION
JRNMGRDLY | Journal manager delay (seconds) | PACKED(3 0) | 5-900 | JRNMGR DELAY (SEC)
SYSMGRDLY | System manager delay (seconds) | PACKED(3 0) | 5-900 | SYSMGR DELAY (SEC)
OUTQ | Output queue (Output queue) | CHAR(10) | User-defined name | OUTQ
OUTQLIB | Output queue library (Output queue) | CHAR(10) | User-defined name | OUTQ LIBRARY
HOLD | Hold on output queue | CHAR(10) | *YES, *NO | HOLD ON OUTQ
SAVE | Save on output queue | CHAR(10) | *YES, *NO | SAVE ON OUTQ
KEEPSYSHST | Keep system history (days) | PACKED(3 0) | 1-365 | KEEP SYS HISTORY (DAYS)
KEEPDGHST | Keep data group history (days) | PACKED(3 0) | 1-365 | KEEP DG HISTORY (DAYS)
KEEPMMXDTA | Keep MIMIX data (days) | PACKED(3 0) | 1-365, 0 = *NOMAX | KEEP MIMIX DATA (DAYS)
DTALIBASP | MIMIX data library ASP | PACKED(3 0) | Numeric value, 0 = *CRTDFT | MIMIX DATA LIB ASP

DSKSTGLMT | Disk storage limit (GB) | PACKED(5 0) | 1-9999, 0 = *NOMAX | DISK STORAGE LIMIT (GB)
SBMUSR | User profile for submit job | CHAR(10) | *JOBD, *CURRENT | USRPRF FOR SUBMIT JOB
MGRJOBD | Manager job description (Manager job description) | CHAR(10) | User-defined name | MANAGER JOBD
MGRJOBDLIB | Manager job description library (Manager job description) | CHAR(10) | User-defined name | MANAGER JOBD LIBRARY
DFTJOBD | Default job description (Default job description) | CHAR(10) | User-defined name | DEFAULT JOBD
DFTJOBDLIB | Default job description library (Default job description) | CHAR(10) | User-defined name | DEFAULT JOBD LIBRARY
PRDLIB | MIMIX product library | CHAR(10) | User-defined name | MIMIX PRODUCT LIBRARY
RSTARTTIME | Job restart time | CHAR(8) | 000000-235959, *NONE (values are returned left-justified) | RESTART TIME
KEEPNEWNFY | Keep new notifications (days) | PACKED(3 0) | 1-365, 0 = *NOMAX | KEEP NEW NFY (DAYS)
KEEPACKNFY | Keep acknowledged notifications (days) | PACKED(3 0) | 1-365, 0 = *NOMAX | KEEP ACK NFY (DAYS)
ASPGRP | ASP group | CHAR(10) | *NONE, user-defined name | ASP GROUP
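
A query over this outfile can list the network system definitions and their configured transfer definitions. The following is a hypothetical example, assuming an outfile created in library/filename:
SELECT SYSDFN, PRITFRDFN, SECTFRDFN, PRDLIB
   FROM library/filename
   WHERE TYPE = '*NET'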

MXTFRDFN outfile (WRKTFRDFN command)


The Work with Transfer Definitions (WRKTFRDFN) command generates new outfiles based on the MXTFRDFN record format.

Table 135. MXTFRDFN outfile (WRKTFRDFN command)

Field | Description | Type, length | Valid values | Column headings
TFRDFN | Transfer definition name (Transfer definition) | CHAR(10) | User-defined transfer definition name | TFRDFN NAME
TFRSYS1 | System 1 name (Transfer definition) | CHAR(8) | User-defined system name | TFRDFN NAME SYSTEM 1
TFRSYS2 | System 2 name (Transfer definition) | CHAR(8) | User-defined system name | TFRDFN NAME SYSTEM 2
PROTOCOL | Transfer protocol | CHAR(10) | *TCP, *SNA, *OPTI | TRANSFER PROTOCOL
HOST1 | System 1 host name or address | CHAR(256) | *SYS1, user-defined name (refer to the TFRSYS1 field if this field contains *SYS1) | SYSTEM 1 HOST OR ADDRESS
HOST2 | System 2 host name or address | CHAR(256) | *SYS2, user-defined name (refer to the TFRSYS2 field if this field contains *SYS2) | SYSTEM 2 HOST OR ADDRESS
PORT1 | System 1 port number or alias | CHAR(14) | User-defined port number | SYSTEM 1 PORT NBR OR ALIAS
PORT2 | System 2 port number or alias | CHAR(14) | User-defined port number | SYSTEM 2 PORT NBR OR ALIAS
LOCNAME1 | System 1 location name | CHAR(8) | *SYS1, user-defined name | SYSTEM 1 LOCATION
LOCNAME2 | System 2 location name | CHAR(8) | *SYS2, user-defined name | SYSTEM 2 LOCATION
NETID1 | System 1 network identifier | CHAR(8) | *LOC, user-defined name, *NETATR, *NONE | SYSTEM 1 NETWORK IDENTIFIER
NETID2 | System 2 network identifier | CHAR(8) | *LOC, user-defined name, *NETATR, *NONE | SYSTEM 2 NETWORK IDENTIFIER

MODE | SNA mode | CHAR(8) | User-defined name, *NETATR | SNA MODE
TEXT | Description | CHAR(50) | *BLANK, user-defined text | DESCRIPTION
THLDSIZE | Reset sequence threshold | PACKED(7 0) | 0-9999999 | THRESHOLD SIZE
RDB | Relational database | CHAR(18) | *GEN, user-defined name | RELATIONAL DATABASE
RDBSYS1 | System 1 relational database name | CHAR(18) | *SYS1, user-defined name | RELATIONAL DATABASE
RDBSYS2 | System 2 relational database name | CHAR(18) | *SYS2, user-defined name | RELATIONAL DATABASE
MNGRDB | Manage RDB directory entries indicator | CHAR(10) | *DFT, *YES, *NO | MANAGE DIRECTORY ENTRIES
TFRSHORTN | Transfer definition short name | CHAR(4) | Name | TFRDFN SHORT NAME
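
A query over this outfile can list the transfer definitions that use native TCP/IP along with their host and port values. The following is a hypothetical example, assuming an outfile created in library/filename:
SELECT TFRDFN, TFRSYS1, HOST1, PORT1, TFRSYS2, HOST2, PORT2
   FROM library/filename
   WHERE PROTOCOL = '*TCP'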

MZPRCDFN outfile (WRKPRCDFN command)


Table 136. MZPRCDFN outfile (WRKPRCDFN command)

Field | Description | Type, length | Valid values | Column headings
PRCDFN | Process definition name (Process definition) | CHAR(10) | *ANY, user-defined name | PRCDFN NAME
PRCSYS | System name (Process definition) | CHAR(10) | *ANY, *BACKUP, *PRIMARY, *REPLICATE, user-defined name | PRCDFN SYSTEM
TYPE | Process type | CHAR(10) | *ANY, *CRGADDNOD, *CRGCHG, *CRGCRT, *CRGDLT, *CRGDLTCMD, *CRGEND, *CRGENDNOD, *CRGFAIL, *CRGREJOIN, *CRGRESTR, *CRGRMVNOD, *CRGSTR, *CRGSWT, *CRGUNDO, user-defined value | PROCESS TYPE
PRDLIB | Product library | CHAR(10) | User-defined name | PRODUCT LIBRARY
TEXT | Description | CHAR(50) | User-defined value | DESCRIPTION

MZPRCE outfile (WRKPRCE command)


Table 137. MZPRCE outfile (WRKPRCE command)

Field | Description | Type, length | Valid values | Column headings
PRCDFN | Process definition name (Process definition) | CHAR(10) | *ANY, user-defined name | PRCDFN NAME
PRCSYS | System name (Process definition) | CHAR(10) | *ANY, *BACKUP, *PRIMARY, *REPLICATE, user-defined name | PRCDFN SYSTEM
TYPE | Process type | CHAR(10) | *ANY, *CRGADDNOD, *CRGCHG, *CRGCRT, *CRGDLT, *CRGDLTCMD, *CRGEND, *CRGENDNOD, *CRGFAIL, *CRGREJOIN, *CRGRESTR, *CRGRMVNOD, *CRGSTR, *CRGSWT, *CRGUNDO, user-defined value | PROCESS TYPE
SEQNBR | Sequence number | PACKED(6 0) | 1-999999 | SEQUENCE NUMBER
LABEL | Label | CHAR(10) | User-defined name | LABEL
MSGID | Message identifier | CHAR(10) | *ANY, user-defined value | MESSAGE ID
ACTION | Action | CHAR(10) | *CMD, *CMDPMT, *CMP, *CMT, *GOTO, *RTN | ACTION

OPERAND1 | Compare operand 1 | CHAR(10) | BLANK, *ACTCODE, *APPCRGSTS, *BCKNOD1, *BCKNOD2, *BCKNOD3, *BCKNOD4, *BCKNOD5, *BCKSTS1, *BCKSTS2, *BCKSTS3, *BCKSTS4, *BCKSTS5, *CHGNOD, *CHGROLE, *CLUNAME, *CRGNAME, *CRGTYPE, *DTACRGSTS, *ENDOPT, *LCLNOD, *LCLPRVROL, *LCLPRVSTS, *LCLROLE, *LCLSTS, *NODCNT, *PRDLIB, *PRINOD, *PRIPRVROL, *PRIPRVSTS, *PRISTS, *PRVACTCDE, *PRVROL1, *PRVROL2, *PRVROL3, *PRVROL4, *PRVROL5, *PRVSTS1, *PRVSTS2, *PRVSTS3, *PRVSTS4, *PRVSTS5, *REPNOD1, *REPNOD2, *REPNOD3, *REPNOD4, *REPNOD5, *REPSTS1, *REPSTS2, *REPSTS3, *REPSTS4, *REPSTS5, *ROLETYPE, user-defined type | COMPARE OPERAND1
OPERATOR | Compare operator | CHAR(10) | | COMPARE OPERATOR
OPERAND2 | Compare operand 2 | CHAR(10) | BLANK, *ACTCODE, *APPCRGSTS, *BCKNOD1, *BCKNOD2, *BCKNOD3, *BCKNOD4, *BCKNOD5, *BCKSTS1, *BCKSTS2, *BCKSTS3, *BCKSTS4, *BCKSTS5, *CHGNOD, *CHGROLE, *CLUNAME, *CRGNAME, *CRGTYPE, *DTACRGSTS, *ENDOPT, *LCLNOD, *LCLPRVROL, *LCLPRVSTS, *LCLROLE, *LCLSTS, *NODCNT, *PRDLIB, *PRINOD, *PRIPRVROL, *PRIPRVSTS, *PRISTS, *PRVACTCDE, *PRVROL1, *PRVROL2, *PRVROL3, *PRVROL4, *PRVROL5, *PRVSTS1, *PRVSTS2, *PRVSTS3, *PRVSTS4, *PRVSTS5, *REPNOD1, *REPNOD2, *REPNOD3, *REPNOD4, *REPNOD5, *REPSTS1, *REPSTS2, *REPSTS3, *REPSTS4, *REPSTS5, *ROLETYPE, user-defined type | COMPARE OPERAND2
CMD | Command details | CHAR(1000) | BLANK, user-defined value | COMMAND DETAILS

ACTLBL | Action label | CHAR(10) | BLANK, user-defined value | ACTION LABEL
RTNVAL | Return value | CHAR(10) | *FAIL, *SUCCESS | RETURN VALUE
COMMENT | Comment text | CHAR(50) | BLANK, user-defined value | COMMENT TEXT
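
A query over this outfile can list the entries of a single process definition in execution order. The following is a hypothetical example, assuming an outfile created in library/filename; the process definition name MYPRCDFN is a placeholder:
SELECT SEQNBR, LABEL, ACTION, CMD
   FROM library/filename
   WHERE PRCDFN = 'MYPRCDFN'
   ORDER BY SEQNBR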

MXDGIFSTE outfile (WRKDGIFSTE command)


Table 138. MXDGIFSTE outfile (WRKDGIFSTE command)

Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
OBJ1 | System 1 object name (Unicode) | GRAPHIC(512) VARLEN(75) | User-defined name | SYSTEM 1 IFS OBJECT (UNICODE)
FID1 | System 1 file identifier (binary) | BIN(16 0) | i5/OS-defined file identifier | SYSTEM 1 FILE ID (BINARY)
FID1HEX | System 1 file identifier (hexadecimal-readable) | CHAR(32) | i5/OS-defined file identifier | SYSTEM 1 FILE ID (HEX)
OBJ2 | System 2 object name (Unicode) | GRAPHIC(512) VARLEN(75) | User-defined name | SYSTEM 2 IFS OBJECT (UNICODE)
FID2 | System 2 file identifier (binary) | BIN(16 0) | i5/OS-defined file identifier | SYSTEM 2 FILE ID (BINARY)
FID2HEX | System 2 file identifier (hexadecimal-readable) | CHAR(32) | i5/OS-defined file identifier | SYSTEM 2 FILE ID (HEX)
CCSID | Object CCSID | BIN(5 0) | Defaults to the job CCSID. If the job CCSID is 65535 or the data cannot be converted to the job CCSID, the OBJ1 and OBJ2 values remain in Unicode. | CCSID

OBJ1CVT | System 1 object name (converted to job CCSID) | CHAR(512) VARLEN(75) | User-defined name converted using the CCSID value; zero length if conversion is not possible | SYSTEM 1 IFS OBJECT CONVERTED
OBJ2CVT | System 2 object name (converted to job CCSID) | CHAR(512) VARLEN(75) | User-defined name converted using the CCSID value; zero length if conversion is not possible | SYSTEM 2 IFS OBJECT CONVERTED
TYPE | Object type | CHAR(10) | *DIR, *STMF, *SYMLNK | OBJECT TYPE
STSVAL | Entry status | CHAR(10) | *ACTIVE, *HLD, *HLDERR, *HLDIGN, *HLDRNM, *RLSWAIT | CURRENT STATUS
JRN1STS | Journaled on system 1 | CHAR(10) | *YES, *NO | SYSTEM 1 JOURNALED
JRN2STS | Journaled on system 2 | CHAR(10) | *YES, *NO | SYSTEM 2 JOURNALED
APYSSN | Apply session | CHAR(10) | A (only supported apply session) | APPLY SESSION
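
A query over this outfile can flag IFS tracking entries that need attention because they are held or are not journaled on one of the systems. The following is a hypothetical example, assuming an outfile created in library/filename:
SELECT OBJ1CVT, TYPE, STSVAL, JRN1STS, JRN2STS
   FROM library/filename
   WHERE STSVAL <> '*ACTIVE' OR JRN1STS = '*NO' OR JRN2STS = '*NO'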

MXDGOBJTE outfile (WRKDGOBJTE command)


Table 139. MXDGOBJTE outfile (WRKDGOBJTE command)

Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
OBJ1 | System 1 object | CHAR(10) | User-defined name | SYSTEM 1 OBJECT
LIB1 | System 1 library | CHAR(10) | User-defined name | SYSTEM 1 LIBRARY
TYPE | Object type | CHAR(10) | *DTAARA, *DTAQ | OBJECT TYPE
OBJ2 | System 2 object | CHAR(10) | User-defined name | SYSTEM 2 OBJECT
LIB2 | System 2 library | CHAR(10) | User-defined name | SYSTEM 2 LIBRARY
STSVAL | Entry status | CHAR(10) | *ACTIVE, *HLD, *HLDERR, *HLDIGN, *RLSWAIT | CURRENT STATUS
JRN1STS | Journaled on system 1 | CHAR(10) | *YES, *NO | SYSTEM 1 JOURNALED
JRN2STS | Journaled on system 2 | CHAR(10) | *YES, *NO | SYSTEM 2 JOURNALED
APYSSN | Current apply session | CHAR(10) | A (only supported apply session) | CURRENT APYSSN
RQSAPYSSN | Requested apply session | CHAR(10) | A (only supported apply session) | REQUESTED APYSSN

OBJ1APY | System 1 object (known by apply) | CHAR(10) | User-defined name | SYSTEM 1 OBJECT (APPLY)
LIB1APY | System 1 library (known by apply) | CHAR(10) | User-defined name | SYSTEM 1 LIBRARY (APPLY)
OBJ2APY | System 2 object (known by apply) | CHAR(10) | User-defined name | SYSTEM 2 OBJECT (APPLY)
LIB2APY | System 2 library (known by apply) | CHAR(10) | User-defined name | SYSTEM 2 LIBRARY (APPLY)
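
A query over this outfile can list the object tracking entries that are held due to error. The following is a hypothetical example, assuming an outfile created in library/filename:
SELECT OBJ1, LIB1, TYPE, STSVAL
   FROM library/filename
   WHERE STSVAL = '*HLDERR'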

Notices
Copyright 1999, 2008, Lakeview Technology Inc., All rights reserved. This document may not be copied, reproduced, translated, or transmitted in whole or part, except under license of Lakeview Technology Inc.

MIMIX is a registered trademark of Lakeview Technology Inc. MIMIX AutoGuard, MIMIX AutoNotify, MIMIX Availability Manager, MIMIX ha1, MIMIX ha Lite, MIMIX DB2 Replicator, MIMIX Object Replicator, MIMIX Monitor, MIMIX Promoter, IntelliStart, RJ Link, and MIMIX Switch Assistant are trademarks of Lakeview Technology Inc. AS/400, DB2, eServer, i5/OS, IBM, iSeries, OS/400, Power, System i, and WebSphere are trademarks of International Business Machines Corporation. All other trademarks are the property of their respective owners. Lakeview Technology Inc. is an IBM Business Partner.

If you are an entity of the U.S. government, you agree that this documentation and the program(s) referred to in this document are Commercial Computer Software, as defined in the Federal Acquisition Regulations (FAR), and the DoD FAR Supplement, and are delivered with only those rights set forth within the license agreement for such documentation and program(s). Use, duplication or disclosure by the Government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFAR 252.227-7013 (48 CFR) or subparagraphs (c)(1) & (2) of the Commercial Computer Software - Restricted Rights clause at FAR 52.227-19.

The information in this document is subject to change without notice. Lakeview Technology Inc. makes no warranty of any kind regarding this material and assumes no responsibility for any errors that may appear in this document.

The program(s) referred to in this document are not specifically developed, or licensed, for use in any nuclear, aviation, mass transit, or medical application or in any other inherently dangerous applications, and any such use shall remove Lakeview Technology Inc. from liability. Lakeview Technology Inc. shall not be liable for any claims or damages arising from such use of the program(s) for any such applications.

Examples and Example Programs: This book contains examples of reports and data used in daily operation. To illustrate them as completely as possible, the examples may include names of individuals, companies, brands, and products. All of these names are fictitious. Any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

This book contains small programs that are furnished by Lakeview Technology Inc. as simple examples to provide an illustration. These examples have not been thoroughly tested under all conditions. Lakeview Technology, therefore, cannot guarantee or imply reliability, serviceability, or function of these example programs. All programs contained herein are provided to you AS IS. THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE EXPRESSLY DISCLAIMED.

Lakeview Technology Inc.
1901 South Meyers, Suite 600
Oakbrook Terrace, IL 60181 USA
www.lakeviewtech.com
Phone: 630-282-8100
Fax: 630-282-8500

Index
Symbols
*FAILED activity entry 43 *HLD, files on hold 103 *HLDERR, held due to error 381 *HLDERR, hold error status 77 *MSGQ, maintaining private authorities 104 group 565 independent 565 independent, benefits 564 independent, configuration tips 568 independent, configuring 568 independent, configuring IFS objects 569 independent, configuring library-based objects 569 independent, effect on library list 570 independent, journal receiver considerations 569 independent, limitations 567 independent, primary 565 independent, replication 563 independent, requirements 567 independent, restrictions 567 independent, secondary 565 SYSBAS 563 system 564 user 565 asynchronous delivery 65 attributes, supported CMPDLOA command 606 CMPFILA command 591 CMPIFSA command 604 CMPOBJA command 596 audit results #DGFE rule 580, 630 #DLOATR rule 606, 632 #DLOATR rule, ASP attributes 612 #FILATR rule 591, 634 #FILATR rule, ASP attributes 612 #FILATR rule, journal attributes 608 #FILATRMBR rule 591, 634 #FILATRMBR rule, ASP attributes 612 #FILATRMBR rule, journal attributes 608 #FILDTA rule 582, 636 #IFSATR rule 604, 644 #IFSATR rule, ASP attributes 612 #IFSATR rule, journal attributes 608 #MBRRCDCNT rule 582, 640 #OBJATR rule 596, 647 #OBJATR rule, ASP attributes 612 #OBJATR rule, journal attributes 608 #OBJATR rule, user profile password attribute 619 #OBJATR rule, user profile status attribute 615 interpreting 573, 575, 576 interpreting, attribute comparisons 586

A
access paths, journaling 220 access types (file) for T-ZC entries 387 accessing MIMIX Main Menu 91 active server technology 440 additional resources 17 advanced journaling add to existing data group 85 apply session balancing 87 benefits 72 conversion examples 86 convert data group to 85 ending journaling 331, 335 loading tracking entries 284 planning for 85 replication process 73 serialized transactions with database 85 starting journaling 330, 334 advanced journaling, data areas and data queues synchronizing 505 verifying journaling 336 advanced journaling, IFS objects file IDs (FIDs) 312 journal receiver size 213 restrictions 121 synchronizing 505 verifying journaling 332 advanced journaling, large objects (LOBs) journal receiver size 213 synchronizing 476 APPC/SNA, configuring 163 apply session constraint induced changes 371 default value 240 specifying 236 apply session, database load balancing 87 ASP basic 565 concepts 564

732

interpreting, file data comparisons 582 timestamp difference 129 troubleshoot 578 auditing and reporting, compare commands DLO attributes 434 file and member attributes 425 file data using active processing 464 file data using subsetting options 467 file data with repair capability 458 file data without active processing 455 files on hold 461 IFS object attributes 431 object attributes 428 auditing value, i5/OS object set by MIMIX 58 auditing, i5/OS object 25 performed by MIMIX 297 audits 487 job log 578 authorities, private 104 automation 510 autostart job entry 190 changing 191 configuring 190 identifying 191

B
backlog comparing file data restriction 442 backup system 23 restricting access to files 240 basic ASP 565 batch output 527 benefits independent ASPs 564 LOB replication 107 bi-directional data flow 361 broadcast configuration 68

C
candidate objects defined 400 cascade configuration 68 cascading distributions, configuring 365 catchup mode 63 change management journal receivers 202 overview 37 remote journal environment 37

changing RJ link 227 startup programs, remote journaling 305 changing from RJ to MIMIX processing permanently 229 temporarily 228 checklist convert *DTAARA, *DTAQ to user journaling 154 convert IFS objects to user journaling 154 converting to remote journaling 147 copying configuration data 553 legacy cooperative processing 157 manual configuration (source-send) 143 MIMIX Dynamic Apply 150 new preferred configuration 139 pre-configuration 81 collision points 511 collision resolution 511 default value 240 requirements 382 working with 381 commands changing defaults 537 displaying a list of 528 commands, by mnemonic ADDDGDAE 290 ADDMSGLOGE 521 ADDRJLNK 225 CHGDGDAE 290 CHGJRNDFN 217 CHGRJLNK 227 CHGSYSDFN 171 CHGTFRDFN 186 CHKDGFE 303, 580 CLOMMXLST 536 CMPDLOA 420 CMPFILA 420 CMPFILDTA 440, 455 CMPIFSA 420 CMPOBJA 420 CMPRCDCNT 437 CPYCFGDTA 552 CPYDGDAE 291 CPYDGFE 291 CPYDGIFSE 291 CRTCRCLS 383 CRTDGDFN 247, 251 CRTJRNDFN 215 CRTSYSDFN 170

733

CRTTFRDFN 184 DLTCRCLS 384 DLTDGDFN 256 DLTJRNDFN 256 DLTSYSDFN 256 DLTTFRDFN 256 DSPDGDAE 293 DSPDGFE 293 DSPDGIFSE 293 ENDJRNFE 327 ENDJRNIFSE 331 ENDJRNOBJE 335 ENDJRNPF 327 LODDGDAE 289 LODDGFE 272 LODDGOBJE 268 MIMIX 91 OPNMMXLST 536 RMVDGDAE 292 RMVDGFE 292 RMVDGFEALS 292 RMVDGIFSE 292 RMVRJCNN 231 RUNCMD 529 RUNCMDS 529 SETDGAUD 297 SETIDCOLA 373 SNDNETDLO 509 SNDNETIFS 508 SNDNETOBJ 475, 506 STRJRNFE 326 STRJRNIFSE 330 STRJRNOBJE 334 STRMMXMGR 296 STRSVR 189 SWTDG 25 SYNCDFE 473 SYNCDGACTE 473, 479 SYNCDGFE 480, 489 SYNCDLO 472, 478, 499 SYNCIFS 472, 478, 495, 505 SYNCOBJ 472, 478, 491, 505 VFYCMNLNK 194, 195 VFYJRNFE 328 VFYJRNIFSE 332 VFYJRNOBJE 336 VFYKEYATR 359 WRKCRCLS 383 WRKDGDAE 289, 291 WRKDGDFN 255

WRKDGDLOE 291 WRKDGFE 291 WRKDGIFSE 291 WRKDGOBJE 291 WRKJRNDFN 255 WRKRJLNK 310 WRKSYSDFN 255 WRKTFRDFN 255 commands, by name Add Data Group Data Area Entry 290 Add Message Log Entry 521 Add Remote Journal Link 225 Change Data Group Data Area Entry 290 Change Journal Definition 217 Change RJ Link 227 Change System Definition 171 Change Transfer Definition 186 Check Data Group File Entries 303, 580 Close MIMIX List 536 Compare DLO Attributes 420 Compare File Attributes 420 Compare File Data 440, 455 Compare IFS Attributes 420 Compare Object Attributes 420 Compare Record Counts 437 Copy Configuration Data 552 Copy Data Group Data Area Entry 291 Copy Data Group File Entry 291 Copy Data Group IFS Entry 291 Create Collision Resolution Class 383 Create Data Group Definition 247, 251 Create Journal Definition 215 Create System Definition 170 Create Transfer Definition 184 Delete Collision Resolution Class 384 Delete Data Group Definition 256 Delete Journal Definition 256 Delete System Definition 256 Delete Transfer Definition 256 Display Data Group Data Area Entry 293 Display Data Group File Entry 293 Display Data Group IFS Entry 293 End Journal Physical File 327 End Journaling File Entry 327 End Journaling IFS Entries 331 End Journaling Obj Entries 335 Load Data Group Data Area Entries 289 Load Data Group File Entries 272 Load Data Group Object Entries 268 MIMIX 91

734

  Open MIMIX List 536
  Remove Data Group Data Area Entry 292
  Remove Data Group File Entry 292
  Remove Data Group IFS Entry 292
  Remove Remote Journal Connection 231
  Run Command 529
  Run Commands 529
  Send Network DLO 509
  Send Network IFS 508
  Send Network Object 506
  Send Network Objects 475
  Set Data Group Auditing 297
  Set Identity Column Attribute 373
  Start Journaling File Entry 326
  Start Journaling IFS Entries 330
  Start Journaling Obj Entries 334
  Start Lakeview TCP Server 189
  Start MIMIX Managers 296
  Switch Data Group 25
  Synchronize Data Group Activity Entry 479
  Synchronize Data Group File Entry 480, 489
  Synchronize DG Activity Entry 473
  Synchronize DG File Entry 473
  Synchronize DLO 472, 478, 499
  Synchronize IFS 478
  Synchronize IFS Object 472, 495, 505
  Synchronize Object 472, 478, 491, 505
  Verify Communications Link 194, 195
  Verify Journaling File Entry 328
  Verify Journaling IFS Entries 332
  Verify Journaling Obj Entries 336
  Verify Key Attributes 359
  Work with Collision Resolution Classes 383
  Work with Data Group Data Area Entries 289, 291
  Work with Data Group Definition 255
  Work with Data Group DLO Entries 291
  Work with Data Group File Entries 291
  Work with Data Group IFS Entries 291
  Work with Data Group Object Entries 291
  Work with Journal Definition 255
  Work with RJ Links 310
  Work with System Definition 255
  Work with Transfer Definition 255
commands, run on remote system 529
commit cycles
  effect on audit comparison 582, 583
  effect on audit results 587
  policy effect on compare record count 351
commitment control 107
  #MBRRCDCNT audit performance 351
  journal standby state, journal cache 341, 344
  journaled IFS objects 73
communications
  APPC/SNA 163
  configuring system level 159
  job names 48
  native TCP/IP 159
  OptiConnect 163
  protocols 159
  starting TCP server 189
compare commands
  completion and escape messages 514
  outfile formats 419
  report types and outfiles 418
  spooled files 418
comparing
  DLO attributes 434
  file and member attributes 425
  IFS object attributes 431
  object attributes 428
  when file content omitted 389
comparing attributes
  attributes to compare 422
  overview 420
  supported object attributes 421, 445
comparing file data 440
  active server technology 440
  advanced subsetting 451
  allocated and not allocated records 442
  comparing a random sample 451
  comparing a range of records 448
  comparing recently inserted data 448
  comparing records over time 451
  data correction 440
  first and last subset 453
  interleave factor 451
  keys, triggers, and constraints 443
  multi-threaded jobs 441
  number of subsets 451
  parallel processing 441
  processing with DBAPY 441, 461
  referential integrity considerations 444
  repairing files in *HLDERR 441
  restrictions 441
  security considerations 442
  thread groups 450
  transfer definition 450
  transitional states 441
  using active processing 464
  using subsetting options 467
  wait time 450
  with repair capability 458
  with repair capability when files are on hold 461
  without active processing 455
comparing file record counts 437
configuration
  additional supporting tasks 294
  auditing 580
  copying existing data 558
configuring
  advanced replication techniques 353
  bi-directional data flow 361
  cascading distributions 365
  choosing the correct checklist 137
  classes, collision resolution 383
  data areas and data queues 112
  DLO documents and folders 124
  file routing, file combining 363
  for improved performance 337
  IFS objects 118
  independent ASP 568
  Intra communications 560, 561
  job restart time 313
  keyed replication 356
  library-based objects 100
  message queue objects for user profiles 104
  omitting T-ZC journal entry content 388
  spooled file replication 102
  to replicate SQL stored procedures 393
  unique key replication 356
configuring, collision resolution 382
confirmed journal entries 64
considerations
  journal for independent ASP 569
  what to not replicate 83
constraints
  *CST attribute for CMPFILA 591
  apply session for dependent files 371
  auditing with CMPFILA 420
  CMPFILA file-specific attribute 591
  comparing file data 443
  omit content and legacy cooperative processing 389
  referential integrity considerations 444
  requirements 370
  requirements when synchronizing 481
  restrictions with high availability journal performance enhancements 344
  support 370
  when journal is in standby state 341
constraints, physical files with
  apply session ignored 111
  configuring 107
  legacy cooperative processing 111
constraints, referential 111
contacting Lakeview Technology 19
container send process 56
  defaults 243
  description 54
  threshold 243
contextual transfer definitions
  considerations 183
  RJ considerations 182
continuous mode 63
conventions
  product 14
  publications 14
convert data group to advanced journaling 154
COOPDB (Cooperate with database) 113, 120
cooperative journal (COOPJRN)
  behavior 106
cooperative processing
  and omitting content 389
  configuring files 105
  file, preferred method for 50
  introduction 50
  journaled objects 51
  legacy 51
  legacy limitations 111
  MIMIX Dynamic Apply limitations 110
cooperative processing, legacy
  limitations 111
  requirements and limitations 111
COOPJRN 106
COOPJRN (Cooperative journal) 236
COOPTYPE (Cooperating object types) 113
copying
  data group entries 291
  definitions 255
create operation, how replicated 129
customer support 19
customizing 510
  replication environment 511

D
data area
  restrictions of journaled 113
data areas
  journaling 72
  polling interval 238
  polling process 77
  synchronizing an object tracking entry 505
data distribution techniques 361
data group 24
  convert to remote journaling 147
  database only 110
  determining if RJ link used 310
  ending 40, 67
  RJ link differences 67
  sharing an RJ link 66
  short name 234
  starting 40
  switching 24
  switching, RJ link considerations 70
  timestamps, automatic 237
  type 235
data group data area entry 289
  adding individual 290
  loading from a library 289
data group definition 35, 233
  creating 247
  parameter tips 234
data group DLO entry 287
  adding individual 288
  loading from a folder 287
data group entry 401
  defined 93
  description 24
  object 267
  procedures for configuring 265
data group file entry 272
  adding individual 278
  changing 279
  loading from a journal definition 276
  loading from a library 275, 276
  loading from FEs from another data group 277
  loading from object entries 273
  sources for loading 272
data group IFS entry 282
  with independent ASPs 569
data group object entry
  adding individual 268
  custom loading 267
  independent ASP 569
  with independent ASP 569
data library 34, 168
data management techniques 361
data queue
  restrictions of journaled 113
data queues
  journaling 72
  synchronizing journaled objects 505
data source 234
database apply
  serialization 85
  with compare file data (CMPFILDTA) 441, 461
database apply process 76
  description 66
  threshold warning 241
database reader process 66
  description 66
  threshold 241
database receive process 76
database send process 76
  description 76
  filtering 236
  threshold 241
DDM
  password validation 306
  server in startup programs 305
  server, starting 308
defaults, command 537
definitions
  data group 35
  journal 35
  named 34
  remote journal link 35
  renaming 258
  RJ link 35
  system 35
  transfer 35
delay times 167
delay/retry processing
  first and second 238
  third 239
delete management
  journal receivers 203
  overview 37
  remote journal environment 38
delete operations
  journaled *DTAARA, *DTAQ, IFS objects 134
  legacy cooperative processing 134
deleting
  data group entries 292
  definitions 256
delivery mode
  asynchronous 65
  synchronous 63
detail report 525
detected differences
  viewing and resolving 575, 576
directory entries
  managing 178
  RDB 178
display output 524
displaying
  data group entries 293
  definitions 257
distribution request, data-retrieval 55
DLOs
  example, entry matching 125
  generic name support 124
  keeping same name 242
  object processing 124
duplicate identity column values 373
dynamic updates
  adding data group entries 278
  removing data group entries 292

E
end journaling
  data areas and data queues 335
  files 327
  IFS objects 331
  IFS tracking entry 331
  object tracking entry 335
ending CMPFILDTA jobs 454
examples
  convert to advanced journaling 86
  DLO entry matching 125
  IFS object selection, subtree 415
  job restart time 316
  journal definitions for multimanagement environment 209
  journal definitions for switchable data group 207
  journal receiver exit program 545
  load file entries for MIMIX Dynamic Apply 273
  object entry matching 102
  object retrieval delay 391
  object selection process 407
  object selection, order precedence in 408
  object selection, subtree 410
  port alias, complex 161
  port alias, simple 160
  querying content of an output file 696
  SETIDCOLA command increment values 377
  WRKDG SELECT statements 696
exit points 511
  journal receiver management 538, 541
  MIMIX Monitor 538
  MIMIX Promoter 539
exit programs
  journal receiver management 204, 542
  requesting customized programs 540
expand support 526
extended attribute cache 345
  configuring 345

F
failed request resolution 43
FEOPT (file and tracking entry options) 239
file id (FID) 75
files
  combining 363
  omitting content 387
  output 526
  routing 364
  sharing 361
  synchronizing 480
filtering
  database replication 76
  messages 45
  on database send 236
  on source side 237
  remote journal environment 66
firewall, using CMPFILDTA with 442
folder path names 124

G
generic name support 402
  DLOs 124
generic user exit 538

H
help, accessing 14
history retention 168
hot backup 21

I
IBM i5/OS option 42 341
IBM OS/400 objects
  to not replicate 83
IFS directory, created during installation 29
IFS file systems 118
  unsupported 118
IFS object selection
  examples, subtree 415
  subtree 405
IFS objects 118
  file id (FID) use with journaling 75
  journaled entry types, commitment control and 73
  journaling 72
  not supported 118
  path names 119
  supported object types 118
IFS objects, journaled
  restrictions 121
  supported operations 130
  synchronizing 482, 505
independent ASP 565
  limitations 567
  primary 565
  replication 563
  requirements 567
  restrictions 567
  secondary 565
  synchronizing data within an 477
information and additional resources 17
installations, multiple MIMIX 23
interleave factor 451
Intra configuration 559
IPL, journal receiver change 37

J
job classes 30
job description parameter 527
job descriptions 30, 168
  in data group definition 243
  in product library 30
  list of MIMIX 30
job log for audit 578
job name parameter 527
job names 47
job restart time 313
  data group definition procedure 319
  examples 315
  overview 313
  parameter 168, 244

  system definition procedure 319
jobs, restarted automatically 313
journal 25
  improving performance of 337
  maximum number of objects in 26
  security audit 53
  system 53
journal analysis 43
journal at create 127, 238
  requirements 323
  requirements and restrictions 324
journal caching 202, 342
journal definition 35
  configuring 197
  created by other processes 200
  creating 215
  fields on data group definition 235
  parameter tips 201
  remote journal environment considerations 205
  remote journal naming convention 206
  remote journal naming convention, multimanagement 208
  remote journaling example 207
journal entries 25
  confirmed 64
  filtering on database send 236
  minimized data 339
  OM journal entry 130
  receive journal entry (RCVJRNE) 346
  unconfirmed 64, 70
journal entry codes
  for data area and data queues 114
  supported by MIMIX user journal processing 122
journal image 239, 355
journal manager 33
journal receiver 25
  change management 37, 202
  delete management 37, 38, 203
  prefix 202
  RJ processing earlier receivers 38
  size for advanced journaling 213
  starting point 26
  stranded on target 39
journal receiver management
  interaction with other products 38
  recommendations 37
journal sequence number, change during IPL 37
journal standby state 341
journaled data areas, data queues
  planning for 85
journaled IFS objects
  planning for 85
journaled object types
  user exit program considerations 87
journaling 25
  cannot end 327
  data areas and data queues 72
  ending for data areas and data queues 335
  ending for IFS objects 331
  ending for physical files 327
  IFS objects 72
  IFS objects and commitment control 73
  implicitly started 323
  requirements for starting 323
  starting for data areas and data queues 334
  starting for IFS objects 330
  starting for physical files 326
  starting, ending, and verifying 322
  verifying 487
  verifying for data areas and data queues 336
  verifying for IFS objects 332
  verifying for physical files 328
journaling environment
  automatically creating 236
  building 219
  removing 231
  source for values (JRNVAL) 219
journaling on target, RJ environment considerations 39
journaling status
  data areas and data queues 334
  files 326
  IFS objects 330
journaling, starting
  files 326

K
keyed replication 355
  comparing file data restriction 442
  file entry option defaults 239
  preventing before-image filtering 237
  restrictions 356
  verifying file attributes 359

L
large object (LOB) support
  user exit program 108
large objects (LOBs)
  minimized journal entry data 339
legacy cooperative processing
  configuring 108
  limitations 111
  requirements 111
libraries to not replicate 83
library list
  adding QSOC to 164
library list, effect of independent ASP 570
library-based objects, configuring 100
limitations
  database only data group 110
list detail report 525
list summary report 525
load leveling 57
loading tracking entries 284
LOB replication 107
local-remote journal pair 63
log space 26
logical files 105, 106
long IFS path names 119

M
manage directory entries 178
management system 24
maximum size transmitted 177
MAXOPT2 value 213
menu
  MIMIX Configuration 295
  MIMIX Main 91
message handling 167
message log 521
message queues
  associated with user profiles 104
  journal-related threshold 204
messages 44
  CMPDLOA 516
  CMPFILA 514
  CMPFILDTA 517
  CMPIFSA 515
  CMPOBJA 515
  CMPRCDCNT 516
  comparison completion and escape 514
MIMIX AutoGuard 487
MIMIX Dynamic Apply
  configuring 105, 108
  recommended for files 105
  requirements and limitations 110
MIMIX environment 29
MIMIX installation 23
MIMIX jobs, restart time for 313
MIMIX Model Switch Framework 538
MIMIX performance, improving 337
MIMIX Retry Monitor 43
MIMIXOWN user profile 31, 306
MIMIXQGPL library 34
MIMIXSBS subsystem 34, 90
minimized journal entry data 339
  LOBs 107
MMNFYNEWE monitor 127
monitor
  new objects not configured to MIMIX 127
move/rename operations
  system journal replication 130
  user journal replication 131
multimanagement
  journal definition naming 208
multi-threaded jobs 441

N
name pattern 405
name space 53
names, displaying long 119
naming conventions
  data group definitions 234
  journal definitions 201, 206, 208
  multi-part 27
  transfer definitions 176
  transfer definitions, contextual (*ANY) 183
  transfer definitions, multiple network systems 172
network systems 24
  multiple 172
new objects
  automatically journal 238
  automatically replicate 127
  files 127
  files processed by legacy cooperative processing 128
  files processed with MIMIX Dynamic Apply 127
  IFS object journal at create requirements 323
  IFS objects, data areas, data queues 128
  journal at create selection criteria 324
notification of objects not in configuration 127
notification retention 168

O
object apply process
  defaults 243
  description 54
  threshold 243
object attributes, comparing 422
object auditing 323
object auditing level, i5/OS
  manually set for a data group 297
  set by MIMIX 58, 297
object auditing value
  data areas, data queues 112
  DLOs 124
  IFS objects 120
  library-based objects 98
  omit T-ZC entry considerations 388
object entry, data group
  creating 267
object locking retry interval 238
object processing
  data areas, data queues 112
  defaults 241
  DLOs 124
  high volume objects 350
  IFS objects 118
  retry interval 238
  spooled files 102
object retrieval delay
  considerations 391
  examples 391
  selecting 391
object retrieve process 56
  defaults 243
  description 53
  threshold 243
  with high volume objects 350
object selection 399
  commands which use 399
  examples, order precedence 408
  examples, process 407
  examples, subtree 410
  name pattern 405
  order precedence 401
  parameter 401
  process 399
  subtree 404
object selector elements 401
  by function 402
object selectors 401
object send process 54
  description 53
  threshold 242
object types supported 96, 549
Omit content (OMTDTA) parameter 388
  and comparison commands 389
  and cooperative processing 389
open commit cycles
  audit results 582, 583, 587
OptiConnect, configuring 163
outfiles 621
  MCAG 623
  MCDTACRGE 626
  MCNODE 628
  MXCDGFE 630
  MXCMPDLOA 632
  MXCMPFILA 634
  MXCMPFILD 636
  MXCMPFILR 639
  MXCMPIFSA 644
  MXCMPOBJA 647
  MXCMPRCDC 640
  MXDGACT 649
  MXDGACTE 651
  MXDGDAE 659
  MXDGDFN 660
  MXDGDLOE 668
  MXDGFE 670
  MXDGIFSE 674, 726, 728
  MXDGIFSTE 726
  MXDGOBJE 703
  MXDGOBJTE 728
  MXDGSTS 676
  MXDGTSP 706
  MXJRNDFN 709
  MXSYSDFN 716
  MXTFRDFN 720
  MZPRCDFN 722
  MZPRCE 723
  user profile password 619
  user profile status 615
  WRKRJLNK 713
outfiles, supporting information
  record format 621
  work with panels 622
output
  batch 527
  considerations 523
  display 524
  expand support 526
  file 526
  parameter 523
  print 524
output file
  querying content, examples of 696
output file fields
  Difference Indicator 582, 587
  System 1 Indicator field 589
  System 2 Indicator field 589
output queues 168
overview
  MIMIX operations 40
  remote journal support 61
  starting and ending replication 40
  support for resolving problems 42
  support for switching 24, 44
  working with messages 44

P
parallel processing 441
path names, IFS 119
policy, CMPRCDCNT commit threshold 351
polling interval 238
port alias 160
  complex example 161
  creating 162
  simple example 160
print output 524
printing
  controlling characteristics of 168
  data group entries 293
  definitions 257
private authorities, *MSGQ replication of 104
problems, journaling
  data areas and data queues 334
  files 326
  IFS objects 330
process
  container send and receive 56
  database apply 76
  database reader 66
  database receive 76
  database send 76
  names 47
  object apply 56
  object retrieve 56
  object send 54
process, object selection 399
processing defaults
  container send 243
  database apply 241
  file entry options 239
  object apply 243
  object retrieve 243
  user journal entry 236
production system 23
publications
  conventions 14
  formatting used in 15
  IBM 17

Q
QAUDCTL system value 53
QAUDLVL system value 53, 103
QDFTJRN data area 238
  restrictions 324
  role in processing new objects 324
QSOC
  library 164
  subsystem 305

R
RCVJRNE (Receive Journal Entry) 346
  configuring values 347
  determining whether to change the value of 347
  understanding its values 346
RDB 178
  directory entries 178
RDB directory entry 188
reader wait time 235
receiver library, changing for RJ target journal 222
receivers
  change management 202
  delete management 203
recommendation
  multimanagement journal definitions 208
relational database (RDB) 178
  entries 178, 186
remote journal
  benefits 61
  i5/OS function 25, 61
  i5/OS function, asynchronous delivery 65
  i5/OS function, synchronous delivery 63
  MIMIX support 61
  relational database 178
remote journal environment
  changing 222
  contextual transfer definitions 182
  receiver change management 37
  receiver delete management 38
  restrictions 62
  RJ link 66
  security implications 306
  switch processing changes 44
remote journal link 35, 66
remote journal link, See also RJ link
remote journaling
  data group definition 236
repairing
  file data 458
  files in *HLDERR 441
  files on hold 461
replicating
  user profiles 476
  what to not replicate 83
replication
  advanced topic parameters 237
  by object type 96
  configuring advanced techniques 353
  constraint-induced modifications 371
  data area 77
  defaults for object types 96
  direction of 23
  ending data group 40
  ending MIMIX 40
  independent ASP 563
  maximum size threshold 177
  positional vs. keyed 355
  process, remote journaling environment 66
  retrieving extended attributes 345
  spooled files 102
  SQL stored procedures 393
  starting data group 40
  starting MIMIX 40
  system journal process 53
  unit of work for 24
  user-defined functions 393
  what to not replicate 83
replication path 46
reports
  detail 525
  list detail 525
  list summary 525
  types for compare commands 418
requirement
  objects and journal in same ASP 26
requirements
  independent ASP 567
  journal at create 323
  keyed replication 355
  legacy cooperative processing 111
  MIMIX Dynamic Apply 110
  standby journaling 343
  user journal replication of data areas and data queues 112
restarted 313
restore operations, journaled *DTAARA, *DTAQ, IFS objects 134
restrictions
  comparing file data 441
  data areas and data queues 113
  independent ASP 567
  journal at create 324
  journal receiver management 38
  journaled *DTAARA, *DTAQ objects 113
  journaled IFS objects 121
  keyed replication (unique key) 356
  legacy cooperative processing 111
  LOBs 108
  MIMIX Dynamic Apply 110
  number of objects in journal 26
  QDFTJRN data area 324
  remote journaling 62
  standby journaling 343
retrying, data group activity entries 43
RJ link 35
  adding 225
  changing 227
  data group definition parameter 236
  description 66
  end options 67
  identifying data groups that use 310
  sharing among data groups 66
  switching considerations 70
  threshold 237
RJ link monitors
  description 68
  displaying status of 68
  ending 68
  not installed, status when 68
  operation 68

S
save-while-active 396
  considerations 396
  examples 397
  options 397
  wait time 396
search process, *ANY transfer definitions 181
security
  considerations, CMPFILDTA command 442
  general information 80
  remote journaling implications 306
security audit journal 53
sending
  DLOs 509
  IFS objects 508
  library-based objects 506
serialization
  database files and journaled objects 85
  object changes with database 72
servers
  starting DDM 308
  starting TCP 189
short transfer definition name 176
source physical files 105, 106
spooled files 102
  compare commands 418
  keeping deleted 103
  options 103
  retaining on target system 242
SQL stored procedures 393
  replication requirements 393
SQL table identity columns 373
  alternatives to SETIDCOLA 375
  check for replication of 378
  problem 373
  SETIDCOLA command details 376
  SETIDCOLA command examples 377
  SETIDCOLA command limitations 374
  SETIDCOLA command usage notes 377
  setting attribute 378
  when to use SETIDCOLA 374
standby journaling
  IBM i5/OS option 42 341
  journal caching 342
  journal standby state 341
  MIMIX processing with 342
  overview 341
  requirements 343
  restrictions 343
start journaling
  data areas and data queues 334
  file entry 326
  files 326
  IFS objects 330
  IFS tracking entry 330
  object tracking entry 334
starting
  system and journal managers 296
  TCP server 189
  TCP server automatically 190
startup programs
  changes for remote journaling 305
  MIMIX subsystem 90
  QSOC subsystem 305
status, values affecting updates to 238
storage, data libraries 168
stranded journal on target, journal entries 39
subsystem
  MIMIXSBS, starting 90
  QSOC 305
subtree 404
  IFS objects 405
switching
  allowing 234
  data group 24
  enabling journaling on target system 235
  example RJ journal definitions for 207
  independent ASP restriction 568
  MIMIX Model Switch Framework with RJ link 70
  preventing identity column problems 373
  remote journaling changes to 44
  removing stranded journal receivers 39
  RJ link considerations 70
synchronization check, automatic 237
synchronizing 472
  activity entries overview 479
  commands for 474
  considerations 474
  data group activity entries 503
  database files 489
  database files overview 480
  DLOs 499
  DLOs in a data group 499
  DLOs without a data group 500
  establish a start point 483
  file entry overview 480
  files with triggers 480
  IFS objects 495
  IFS objects by path name only 496
  IFS objects in a data group 495
  IFS objects without a data group 496
  IFS tracking entries 505
  including logical files 481
  independent ASP, data in an 477
  initial 484
  initial configuration 483
  initial configuration MQ environment 483
  limit maximum size 474
  LOB data 476
  object tracking entries 505
  object, IFS, DLO overview 478
  objects 491
  objects in a data group 491
  objects without a data group 492
  related file 481
  resources for 483
  status changes caused by 476
  tracking entries 482
  user profiles 474, 476
synchronous delivery 63
  unconfirmed entries 64
SYSBAS 563, 565
system ASP 564
system definition 35, 166
  changing 171
  creating 170
  parameter tips 167
system journal 53
system journal replication
  advanced techniques 353
  omitting content 387
system library list 163, 570
system manager 32
system user profiles
  to not replicate 83
system value
  QAUDCTL 53
  QAUDLVL 53, 103
  QSYSLIBL 164
system, roles 23

T
target journal state 202
target system 23
TCP/IP
  adding to startup program 305
  configuring native 159
  creating port aliases for 160
temporary files to not replicate 83
thread groups 450
threshold, backlog
  adjusting 251
  container send 243
  database apply 241
  database reader/send 241
  object apply 243
  object retrieve 243
  object send 242
  remote journal link 237
threshold, CMPRCDCNT commit 351
timestamps, automatic 237
tracking entries
  loading 284
  loading for data areas, data queues 285
  loading for IFS objects 284
  purpose 74
tracking entry
  file identifiers (FIDs) 312
transfer definition 35, 174, 450
  changing 186
  contextual system support (*ANY) 28, 181
  fields in data group definition 235
  fields in system definition 167
  multiple network system environment 172
  other uses 174
  parameter tips 176
  short name 176
transfer protocols
  OptiConnect parameters 177
  SNA parameters 177
  TCP parameters 176
trigger programs
  defined 368
  synchronizing files 369
triggers
  avoiding problems 444
  comparing file data 443
  disabling during synchronization 480
  read 443
  update, insert, and delete 443
T-ZC journal entries
  access types 387
  configuring to omit 388
  omitting 387

U
unconfirmed journal entries 64, 70
unique key
  comparing file data restriction 442
  file entry options for replicating 239
  replication of 355
user ASP 565
user exit points 541
user exit program
  data areas and data queues 87
  IFS objects 87
  large objects (LOBs) 108
user exit, generic 538
user journal replication
  advanced techniques 353
  requirements for data areas and data queues 112
  supported journal entries for data areas, data queues 114
  tracking entry 74
user profile
  MIMIXOWN 306
  password 619
  status 615
user profiles
  default 168
  MIMIX 31
  replication of 104
  specifying status 242
  synchronizing 474
  system distribution directory entries 476
  to not replicate 83
user-defined functions 393

V
verifying
  communications link 194, 195
  initial synchronization 487
  journaling, IFS tracking entries 332
  journaling, object tracking entries 336
  journaling, physical files 328
  key attributes 359
  send and receive processes automatically 238

W
wait time
  comparing file data 450
  reader 235
WRKDG SELECT statement 696