MIMIX® Reference
Processing data-retrieval activity entries ............................................................. 55
Processes with multiple jobs ............................................................................... 57
Tracking object replication................................................................................... 57
Managing object auditing .................................................................................... 57
User journal replication.............................................................................................. 61
What is remote journaling?.................................................................................. 61
Benefits of using remote journaling with MIMIX .................................................. 61
Restrictions of MIMIX Remote Journal support ................................................... 62
Overview of IBM processing of remote journals .................................................. 63
Synchronous delivery .................................................................................... 63
Asynchronous delivery .................................................................................. 65
User journal replication processes ...................................................................... 66
The RJ link .......................................................................................................... 66
Sharing RJ links among data groups............................................................. 66
RJ links within and independently of data groups ......................................... 67
Differences between ENDDG and ENDRJLNK commands .......................... 67
RJ link monitors ................................................................................................... 68
RJ link monitors - operation........................................................................... 68
RJ link monitors in complex configurations ................................................... 68
Support for unconfirmed entries during a switch ................................................. 70
RJ link considerations when switching ................................................................ 70
User journal replication of IFS objects, data areas, data queues.............................. 72
Benefits of advanced journaling .......................................................................... 72
Replication processes used by advanced journaling .......................................... 73
Tracking entries ................................................................................................... 74
IFS object file identifiers (FIDs) ........................................................................... 75
Lesser-used processes for user journal replication................................................... 76
User journal replication with source-send processing ......................................... 76
The data area polling process ............................................................................. 77
Chapter 3 Preparing for MIMIX 80
Checklist: pre-configuration....................................................................................... 81
Data that should not be replicated............................................................................. 83
Planning for journaled IFS objects, data areas, and data queues............................. 85
Is user journal replication appropriate for your environment? ............................. 85
Serialized transactions with database files.......................................................... 85
Converting existing data groups .......................................................................... 85
Conversion examples .................................................................................... 86
Database apply session balancing ...................................................................... 87
User exit program considerations........................................................................ 87
Starting the MIMIXSBS subsystem ........................................................................... 90
Accessing the MIMIX Main Menu.............................................................................. 91
Chapter 4 Planning choices and details by object class 93
Replication choices by object type ............................................................................ 96
Configured object auditing value for data group entries............................................ 98
Identifying library-based objects for replication ....................................................... 100
How MIMIX uses object entries to evaluate journal entries for replication ........ 101
Identifying spooled files for replication .............................................................. 102
Additional choices for spooled file replication.............................................. 103
Replicating user profiles and associated message queues .............................. 104
Identifying logical and physical files for replication.................................................. 105
Considerations for LF and PF files .................................................................... 105
Files with LOBs............................................................................................ 107
Configuration requirements for LF and PF files................................................. 108
Requirements and limitations of MIMIX Dynamic Apply.................................... 110
Requirements and limitations of legacy cooperative processing....................... 111
Identifying data areas and data queues for replication............................................ 112
Configuration requirements - data areas and data queues ............................... 112
Restrictions - user journal replication of data areas and data queues .............. 113
Supported journal code E and Q entry types............................................... 114
Identifying IFS objects for replication ...................................................................... 118
Supported IFS file systems and object types .................................................... 118
Considerations when identifying IFS objects..................................................... 119
MIMIX processing order for data group IFS entries..................................... 119
Long IFS path names .................................................................................. 119
Upper and lower case IFS object names..................................................... 119
Configured object auditing value for IFS objects ......................................... 120
Configuration requirements - IFS objects .......................................................... 120
Restrictions - user journal replication of IFS objects ......................................... 121
Supported journal code B entry types ......................................................... 122
Identifying DLOs for replication ............................................................................... 124
How MIMIX uses DLO entries to evaluate journal entries for replication .......... 124
Sequence and priority order for documents ................................................ 124
Sequence and priority order for folders ....................................................... 125
Processing of newly created files and objects......................................................... 127
Newly created files ............................................................................................ 127
New file processing - MIMIX Dynamic Apply............................................... 127
New file processing - legacy cooperative processing.................................. 128
Newly created IFS objects, data areas, and data queues ................................. 128
Determining how an activity entry for a create operation was replicated .... 129
Processing variations for common operations ........................................................ 130
Move/rename operations - system journal replication ....................................... 130
Move/rename operations - user journaled data areas, data queues, IFS objects ... 131
Delete operations - files configured for legacy cooperative processing ............ 134
Delete operations - user journaled data areas, data queues, IFS objects ........ 134
Restore operations - user journaled data areas, data queues, IFS objects ...... 134
Chapter 5 Configuration checklists 137
Checklist: New remote journal (preferred) configuration ......................................... 139
Checklist: New MIMIX source-send configuration................................................... 143
Checklist: Converting to remote journaling.............................................................. 147
Converting to MIMIX Dynamic Apply....................................................................... 150
Converting using the Convert Data Group command ....................................... 150
Checklist: manually converting to MIMIX Dynamic Apply.................................. 151
Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling .................... 154
Checklist: Converting to legacy cooperative processing ......................................... 157
Chapter 6 System-level communications 159
Configuring for native TCP/IP.................................................................................. 159
Port aliases - simple example ............................................................................. 160
Port aliases - complex example .......................................................................... 161
Creating port aliases ......................................................................................... 162
Configuring APPC/SNA........................................................................................... 163
Configuring OptiConnect ......................................................................................... 163
Chapter 7 Configuring system definitions 166
Tips for system definition parameters ..................................................................... 167
Creating system definitions ..................................................................................... 170
Changing a system definition .................................................................................. 171
Multiple network system considerations.................................................................. 172
Chapter 8 Configuring transfer definitions 174
Tips for transfer definition parameters..................................................................... 176
Using contextual (*ANY) transfer definitions ........................................................... 181
Search and selection process ........................................................................... 181
Considerations for remote journaling ................................................................ 182
Considerations for MIMIX source-send configurations...................................... 182
Naming conventions for contextual transfer definitions ..................................... 183
Additional usage considerations for contextual transfer definitions................... 183
Creating a transfer definition ................................................................................... 184
Changing a transfer definition ................................................................................. 186
Changing a transfer definition to support remote journaling.............................. 186
Finding the system database name for RDB directory entries ................................ 188
Using i5/OS commands to work with RDB directory entries.............................. 188
Starting the Lakeview TCP/IP server ...................................................................... 189
Using autostart job entries to start the TCP server ................................................. 190
Adding an autostart job entry ............................................................................ 190
Identifying the autostart job entry in the MIMIXSBS subsystem........................ 191
Changing the job description for an autostart job entry ..................................... 191
Verifying a communications link for system definitions ........................................... 194
Verifying the communications link for a data group................................................. 195
Verifying all communications links..................................................................... 195
Chapter 9 Configuring journal definitions 197
Journal definitions created by other processes ....................................................... 200
Tips for journal definition parameters ...................................................................... 201
Journal definition considerations ............................................................................. 205
Naming convention for remote journaling environments with 2 systems........... 206
Example journal definitions for a switchable data group ............................. 207
Naming convention for multimanagement environments .................................. 208
Example journal definitions for three management nodes .......................... 209
Journal receiver size for replicating large object data ............................................. 213
Verifying journal receiver size options .............................................................. 213
Changing journal receiver size options ............................................................. 213
Creating a journal definition..................................................................................... 215
Changing a journal definition................................................................................... 217
Building the journaling environment ........................................................................ 219
Changing the remote journal environment .............................................................. 222
Adding a remote journal link.................................................................................... 225
Changing a remote journal link................................................................................ 227
Temporarily changing from RJ to MIMIX processing .............................................. 228
Changing from remote journaling to MIMIX processing .......................................... 229
Removing a remote journaling environment............................................................ 231
Chapter 10 Configuring data group definitions 233
Tips for data group parameters ............................................................................... 234
Additional considerations for data groups ......................................................... 244
Creating a data group definition .............................................................................. 247
Changing a data group definition ............................................................................ 251
Fine-tuning backlog warning thresholds for a data group ....................................... 251
Chapter 11 Additional options: working with definitions 255
Copying a definition................................................................................................. 255
Deleting a definition................................................................................................. 256
Displaying a definition ............................................................................................. 257
Printing a definition.................................................................................................. 257
Renaming definitions............................................................................................... 258
Renaming a system definition ........................................................................... 258
Renaming a transfer definition .......................................................................... 261
Renaming a journal definition with considerations for RJ link ........................... 262
Renaming a data group definition ..................................................................... 263
Chapter 12 Configuring data group entries 265
Creating data group object entries .......................................................................... 267
Loading data group object entries ..................................................................... 267
Adding or changing a data group object entry................................................... 268
Creating data group file entries ............................................................................... 272
Loading file entries ............................................................................................ 272
Loading file entries from a data group’s object entries ................................ 273
Loading file entries from a library ................................................................ 275
Loading file entries from a journal definition ................................................ 276
Loading file entries from another data group’s file entries........................... 277
Adding a data group file entry ........................................................................... 278
Changing a data group file entry ....................................................................... 279
Creating data group IFS entries .............................................................................. 282
Adding or changing a data group IFS entry....................................................... 282
Loading tracking entries .......................................................................................... 284
Loading IFS tracking entries.............................................................................. 284
Loading object tracking entries.......................................................................... 285
Creating data group DLO entries ............................................................................ 287
Loading DLO entries from a folder .................................................................... 287
Adding or changing a data group DLO entry ..................................................... 288
Creating data group data area entries..................................................................... 289
Loading data area entries for a library............................................................... 289
Adding or changing a data group data area entry ............................................. 290
Additional options: working with DG entries ............................................................ 291
Copying a data group entry ............................................................................... 291
Removing a data group entry ............................................................................ 292
Displaying a data group entry............................................................................ 293
Printing a data group entry ................................................................................ 293
Chapter 13 Additional supporting tasks for configuration 294
Accessing the Configuration Menu.......................................................................... 295
Starting the system and journal managers.............................................................. 296
Setting data group auditing values manually........................................................... 297
Examples of changing an IFS object’s auditing value ......................................... 298
Checking file entry configuration manually.............................................................. 303
Changes to startup programs.................................................................................. 305
Checking DDM password validation level in use..................................................... 306
Option 1. Enable MIMIXOWN user profile for DDM environment...................... 306
Option 2. Allow user profiles without passwords ............................................... 307
Starting the DDM TCP/IP server ............................................................................. 308
Identifying data groups that use an RJ link ............................................................. 310
Using file identifiers (FIDs) for IFS objects .............................................................. 312
Configuring restart times for MIMIX jobs ................................................................. 313
Configurable job restart time operation ............................................................. 313
Considerations for using *NONE ................................................................. 315
Examples: job restart time ................................................................................. 315
Restart time examples: system definitions .................................................. 316
Restart time examples: system and data group definition combinations..... 316
Configuring the restart time in a system definition ............................................ 319
Configuring the restart time in a data group definition....................................... 319
Chapter 14 Starting, ending, and verifying journaling 322
What objects need to be journaled.......................................................................... 323
Authority requirements for starting journaling.................................................... 324
MIMIX commands for starting journaling................................................................. 325
Journaling for physical files ..................................................................................... 326
Displaying journaling status for physical files .................................................... 326
Starting journaling for physical files ................................................................... 326
Ending journaling for physical files .................................................................... 327
Verifying journaling for physical files ................................................................. 328
Journaling for IFS objects........................................................................................ 330
Displaying journaling status for IFS objects ...................................................... 330
Starting journaling for IFS objects ..................................................................... 330
Ending journaling for IFS objects ...................................................................... 331
Verifying journaling for IFS objects.................................................................... 332
Journaling for data areas and data queues............................................................. 334
Displaying journaling status for data areas and data queues............................ 334
Starting journaling for data areas and data queues .......................................... 334
Ending journaling for data areas and data queues............................................ 335
Verifying journaling for data areas and data queues ......................................... 336
Chapter 15 Configuring for improved performance 337
Minimized journal entry data ................................................................................... 339
Restrictions of minimized journal entry data...................................................... 339
Configuring for minimized journal entry data ..................................................... 340
Configuring for high availability journal performance enhancements...................... 341
Journal standby state ........................................................................................ 341
Minimizing potential performance impacts of standby state ........................ 342
Journal caching ................................................................................................. 342
MIMIX processing of high availability journal performance enhancements....... 342
Requirements of high availability journal performance enhancements ............. 343
Restrictions of high availability journal performance enhancements................. 343
Caching extended attributes of *FILE objects ......................................................... 345
Increasing data returned in journal entry blocks by delaying RCVJRNE calls ........ 346
Understanding the data area format.................................................................. 346
Determining if the data area should be changed............................................... 347
Configuring the RCVJRNE call delay and block values .................................... 347
Configuring high volume objects for better performance......................................... 350
Improving performance of the #MBRRCDCNT audit .............................................. 351
Chapter 16 Configuring advanced replication techniques 353
Keyed replication..................................................................................................... 355
Keyed vs positional replication .......................................................................... 355
Requirements for keyed replication ................................................................... 355
Restrictions of keyed replication........................................................................ 356
Implementing keyed replication ......................................................................... 356
Changing a data group configuration to use keyed replication.................... 356
Changing a data group file entry to use keyed replication........................... 357
Verifying key attributes ...................................................................................... 359
Data distribution and data management scenarios ................................................. 361
Configuring for bi-directional flow ...................................................................... 361
Bi-directional requirements: system journal replication ............................... 361
Bi-directional requirements: user journal replication.................................... 362
Configuring for file routing and file combining ................................................... 363
Configuring for cascading distributions ............................................................. 365
Trigger support ........................................................................................................ 368
How MIMIX handles triggers ............................................................................. 368
Considerations when using triggers .................................................................. 368
Enabling trigger support .................................................................................... 369
Synchronizing files with triggers ........................................................................ 369
Constraint support ................................................................................................... 370
Referential constraints with delete rules............................................................ 370
Replication of constraint-induced modifications .......................................... 371
Handling SQL identity columns ............................................................................... 373
The identity column problem explained ............................................................. 373
When the SETIDCOLA command is useful....................................................... 374
SETIDCOLA command limitations .................................................................... 374
Alternative solutions .......................................................................................... 375
SETIDCOLA command details .......................................................................... 376
Usage notes ................................................................................................ 377
Examples of choosing a value for INCREMENTS....................................... 377
Checking for replication of tables with identity columns .................................... 378
Setting the identity column attribute for replicated files ..................................... 378
Collision resolution .................................................................................................. 381
Additional methods available with CR classes .................................................. 381
Requirements for using collision resolution ....................................................... 382
Working with collision resolution classes .......................................................... 383
Creating a collision resolution class ............................................................ 383
Changing a collision resolution class........................................................... 384
Deleting a collision resolution class............................................................. 384
Displaying a collision resolution class ......................................................... 384
Printing a collision resolution class.............................................................. 385
Omitting T-ZC content from system journal replication ........................................... 387
Configuration requirements and considerations for omitting T-ZC content ....... 388
Omit content (OMTDTA) and cooperative processing................................. 389
Omit content (OMTDTA) and comparison commands ................................ 389
Selecting an object retrieval delay........................................................................... 391
Object retrieval delay considerations and examples ......................................... 391
Configuring to replicate SQL stored procedures and user-defined functions.......... 393
Requirements for replicating SQL stored procedure operations ....................... 393
To replicate SQL stored procedure operations ................................................. 393
Using Save-While-Active in MIMIX.......................................................................... 396
Considerations for save-while-active................................................................. 396
Types of save-while-active options ................................................................... 397
Example configurations ..................................................................................... 397
Chapter 17 Object selection for Compare and Synchronize commands 399
Object selection process ......................................................................................... 399
Order precedence ............................................................................................. 401
Parameters for specifying object selectors.............................................................. 402
Object selection examples ...................................................................................... 407
Processing example with a data group and an object selection parameter ...... 407
Example subtree ............................................................................................... 410
Example Name pattern...................................................................................... 414
Example subtree for IFS objects ....................................................................... 415
Report types and output formats ............................................................................. 418
Spooled files ...................................................................................................... 418
Outfiles .............................................................................................................. 419
Chapter 18 Comparing attributes 420
About the Compare Attributes commands .............................................................. 420
Choices for selecting objects to compare.......................................................... 421
Unique parameters ...................................................................................... 421
Choices for selecting attributes to compare ...................................................... 422
CMPFILA supported object attributes for *FILE objects .............................. 423
CMPOBJA supported object attributes for *FILE objects ............................ 423
Comparing file and member attributes .................................................................... 425
Comparing object attributes .................................................................................... 428
Comparing IFS object attributes.............................................................................. 431
Comparing DLO attributes....................................................................................... 434
Chapter 19 Comparing file record counts and file member data 437
Comparing file record counts .................................................................................. 437
To compare file record counts ........................................................................... 438
Significant features for comparing file member data ............................................... 440
Repairing data ................................................................................................... 440
Active and non-active processing...................................................................... 440
Processing members held due to error ............................................................. 441
Additional features............................................................................................. 441
Considerations for using the CMPFILDTA command ............................................. 441
Recommendations and restrictions ................................................................... 441
Using the CMPFILDTA command with firewalls................................................ 442
Security considerations ..................................................................................... 442
Comparing allocated records to records not yet allocated ................................ 442
Comparing files with unique keys, triggers, and constraints ............................. 443
Avoiding issues with triggers ....................................................................... 444
Referential integrity considerations ............................................................. 444
Job priority .................................................................................................... 444
Specifying CMPFILDTA parameter values.............................................................. 445
Specifying file members to compare ................................................................. 445
Tips for specifying values for unique parameters .............................................. 446
Specifying the report type, output, and type of processing ............................... 449
System to receive output ............................................................................. 449
Interactive and batch processing................................................................. 449
Using the additional parameters........................................................................ 449
Advanced subset options for CMPFILDTA.............................................................. 451
Ending CMPFILDTA requests ................................................................................. 454
Comparing file member data - basic procedure (non-active) .................................. 455
Comparing and repairing file member data - basic procedure ................................ 458
Comparing and repairing file member data - members on hold (*HLDERR) .......... 461
Comparing file member data using active processing technology .......................... 464
Comparing file member data using subsetting options ........................................... 467
Chapter 20 Synchronizing data between systems 472
Considerations for synchronizing using MIMIX commands..................................... 474
Limiting the maximum sending size .................................................................. 474
Synchronizing user profiles ............................................................................... 474
Synchronizing user profiles with SYNCnnn commands .............................. 475
Synchronizing user profiles with the SNDNETOBJ command ................... 475
Missing system distribution directory entries automatically added .............. 476
Synchronizing large files and objects ................................................................ 476
Status changes caused by synchronizing ......................................................... 476
Synchronizing objects in an independent ASP.................................................. 477
About MIMIX commands for synchronizing objects, IFS objects, and DLOs .......... 478
About synchronizing data group activity entries (SYNCDGACTE).......................... 479
About synchronizing file entries (SYNCDGFE command) ...................................... 480
About synchronizing tracking entries....................................................................... 482
Performing the initial synchronization...................................................................... 483
Establish a synchronization point ...................................................................... 483
Resources for synchronizing ............................................................................. 483
Using SYNCDG to perform the initial synchronization ............................................ 484
To perform the initial synchronization using the SYNCDG command defaults . 485
Verifying the initial synchronization ......................................................................... 487
Synchronizing database files................................................................................... 489
Synchronizing objects ............................................................................................. 491
To synchronize library-based objects associated with a data group ................. 491
To synchronize library-based objects without a data group .............................. 492
Synchronizing IFS objects....................................................................................... 495
To synchronize IFS objects associated with a data group ................................ 495
To synchronize IFS objects without a data group ............................................. 496
Synchronizing DLOs................................................................................................ 499
To synchronize DLOs associated with a data group ......................................... 499
To synchronize DLOs without a data group ...................................................... 500
Synchronizing data group activity entries................................................................ 503
Synchronizing tracking entries ................................................................................ 505
To synchronize an IFS tracking entry ................................................................ 505
To synchronize an object tracking entry ............................................................ 505
Sending library-based objects ................................................................................. 506
Sending IFS objects ................................................................................................ 508
Sending DLO objects .............................................................................................. 509
Chapter 21 Introduction to programming 510
Support for customizing........................................................................................... 511
User exit points.................................................................................................. 511
Collision resolution ............................................................................................ 511
Completion and escape messages for comparison commands ............................. 514
CMPFILA messages ......................................................................................... 514
CMPOBJA messages........................................................................................ 515
CMPIFSA messages ......................................................................................... 515
CMPDLOA messages ....................................................................................... 516
CMPRCDCNT messages .................................................................................. 516
CMPFILDTA messages..................................................................................... 517
Adding messages to the MIMIX message log ......................................................... 521
Output and batch guidelines.................................................................................... 523
General output considerations .......................................................................... 523
Output parameter ........................................................................................ 523
Display output.............................................................................................. 524
Print output .................................................................................................. 524
File output.................................................................................................... 526
General batch considerations............................................................................ 527
Batch (BATCH) parameter .......................................................................... 527
Job description (JOBD) parameter .............................................................. 527
Job name (JOB) parameter ......................................................................... 527
Displaying a list of commands in a library ............................................................... 528
Running commands on a remote system................................................................ 529
Benefits - RUNCMD and RUNCMDS commands ............................................. 529
Procedures for running commands RUNCMD, RUNCMDS.................................... 530
Running commands using a specific protocol ................................................... 530
Running commands using a MIMIX configuration element ............................... 532
Using lists of retrieve commands ............................................................................ 536
Changing command defaults................................................................................... 537
Chapter 22 Customizing with exit point programs 538
Summary of exit points............................................................................................ 538
MIMIX user exit points ....................................................................................... 538
MIMIX Monitor user exit points .......................................................................... 538
MIMIX Promoter user exit points ....................................................................... 539
Requesting customized user exit programs ...................................................... 540
Working with journal receiver management user exit points ................................... 541
Journal receiver management exit points.......................................................... 541
Change management exit points................................................................. 541
Delete management exit points ................................................................... 542
Requirements for journal receiver management exit programs................... 542
Journal receiver management exit program example ................................. 545
Appendix A Supported object types for system journal replication 549
Appendix B Copying configurations 552
Supported scenarios ............................................................................................... 552
Checklist: copy configuration................................................................................... 553
Copying configuration procedure ............................................................................ 558
Appendix C Configuring Intra communications 559
Manually configuring Intra using SNA ..................................................................... 559
Manually configuring Intra using TCP ..................................................................... 561
Appendix D MIMIX support for independent ASPs 563
Benefits of independent ASPs................................................................................. 564
Auxiliary storage pool concepts at a glance ............................................................ 564
Requirements for replicating from independent ASPs ............................................ 567
Limitations and restrictions for independent ASP support....................................... 567
Configuration planning tips for independent ASPs.................................................. 568
Journal and journal receiver considerations for independent ASPs .................. 569
Configuring IFS objects when using independent ASPs ................................... 569
Configuring library-based objects when using independent ASPs .................... 569
Avoiding unexpected changes to the library list ................................................ 570
Detecting independent ASP overflow conditions..................................................... 572
Appendix E Interpreting audit results 573
Interpreting audit results - MIMIX Availability Manager ........................................... 575
Interpreting audit results - 5250 emulator................................................................ 576
Checking the job log of an audit .............................................................................. 578
Interpreting results for configuration data - #DGFE audit........................................ 580
Interpreting results of audits for record counts and file data ................................... 582
What differences were detected by #FILDTA.................................................... 582
What differences were detected by #MBRRCDCNT ......................................... 583
Interpreting results of audits that compare attributes .............................................. 586
What attribute differences were detected .......................................................... 587
Where was the difference detected................................................................... 589
What attributes were compared ........................................................................ 590
Attributes compared and expected results - #FILATR, #FILATRMBR audits.... 591
Attributes compared and expected results - #OBJATR audit ............................ 596
Attributes compared and expected results - #IFSATR audit ............................. 604
Attributes compared and expected results - #DLOATR audit ........................... 606
Comparison results for journal status and other journal attributes .................... 608
How configured journaling settings are determined .................................... 611
Comparison results for auxiliary storage pool ID (*ASP)................................... 612
Comparison results for user profile status (*USRPRFSTS) .............................. 615
How configured user profile status is determined........................................ 616
Comparison results for user profile password (*PRFPWDIND)......................... 619
Appendix F Outfile formats 621
Outfile support in MIMIX Availability Manager......................................................... 621
Work panels with outfile support ............................................................................. 622
MCAG outfile (WRKAG command) ......................................................................... 623
MCDTACRGE outfile (WRKDTARGE command) ................................................... 626
MCNODE outfile (WRKNODE command)............................................................... 628
MXCDGFE outfile (CHKDGFE command) .............................................................. 630
MXCMPDLOA outfile (CMPDLOA command)......................................................... 632
MXCMPFILA outfile (CMPFILA command) ............................................................. 634
MXCMPFILD outfile (CMPFILDTA command) ........................................................ 636
MXCMPFILR outfile (CMPFILDTA command, RRN report).................................... 639
MXCMPRCDC outfile (CMPRCDCNT command)................................................... 640
MXCMPIFSA outfile (CMPIFSA command) ............................................................ 644
MXCMPOBJA outfile (CMPOBJA command) ......................................................... 647
MXDGACT outfile (WRKDGACT command)........................................................... 649
MXDGACTE outfile (WRKDGACTE command)...................................................... 651
MXDGDAE outfile (WRKDGDAE command) .......................................................... 659
MXDGDFN outfile (WRKDGDFN command) .......................................................... 660
MXDGDLOE outfile (WRKDGDLOE command) ..................................................... 668
MXDGFE outfile (WRKDGFE command)................................................................ 670
MXDGIFSE outfile (WRKDGIFSE command) ......................................................... 674
MXDGSTS outfile (WRKDG command) .................................................................. 676
WRKDG outfile SELECT statement examples .................................................. 696
WRKDG outfile example 1........................................................................... 696
WRKDG outfile example 2........................................................................... 696
WRKDG outfile example 3........................................................................... 697
WRKDG outfile example 4........................................................................... 697
MXDGOBJE outfile (WRKDGOBJE command) ...................................................... 703
MXDGTSP outfile (WRKDGTSP command) ........................................................... 706
MXJRNDFN outfile (WRKJRNDFN command) ....................................................... 709
MXRJLNK outfile (WRKRJLNK command) ............................................................. 713
MXSYSDFN outfile (WRKSYSDFN command)....................................................... 716
MXTFRDFN outfile (WRKTFRDFN command) ....................................................... 720
MZPRCDFN outfile (WRKPRCDFN command) ...................................................... 722
MZPRCE outfile (WRKPRCE command) ................................................................ 723
MXDGIFSTE outfile (WRKDGIFSTE command)..................................................... 726
MXDGOBJTE outfile (WRKDGOBJTE command).................................................. 728
Index 732
Product conventions
The conventions described here apply to all Lakeview products unless otherwise
noted.
Publication conventions
This book uses typography and specialized formatting to help you quickly identify the
type of information you are reading. For example, specialized styles and techniques
distinguish information you see on a display from information you enter on a display or
command line. In text, bold type identifies a new term whereas an underlined word
highlights its importance. Notes and Attentions are specialized formatting techniques
that are used, respectively, to highlight a fact or to warn you of the potential for
damage. The following topics illustrate formatting techniques that may be used in this
book.
14
Formatting for displays and commands
Table 1 shows the formatting used for the information you see on displays and
command interfaces:
monospace Text that you enter into a 5250 emulator Type the command MIMIX and press Enter.
font command line. In instructions, the DGDFN(name system1 system2)
conventions of italic and UPPERCASE CHGVAR &RETURN &CONTINUE
also apply.
Examples showing programming code.
Sources for additional information
This book refers to other published information. The following information, plus
additional technical information, can be located in the IBM System i and i5/OS
Information Center.
From the Information Center you can access these IBM Power™ Systems topics,
books, and redbooks:
• Backup and Recovery
• Journal management
• DB2 Universal Database for IBM Power™ Systems Database Programming
• Integrated File System Introduction
• Independent disk pools
• OptiConnect for OS/400
• TCP/IP Setup
• IBM redbook Striving for Optimal Journal Performance on DB2 Universal
Database for iSeries, SG24-6286
• IBM redbook AS/400 Remote Journal Function for High Availability and Data
Replication, SG24-5189
• IBM redbook Power™ Systems iASPs: A Guide to Moving Applications to
Independent ASPs, SG24-6802
The following information may also be helpful if you use advanced journaling:
• DB2 UDB for iSeries SQL Programming Concepts
• DB2 Universal Database for iSeries SQL Reference
• IBM redbook AS/400 Remote Journal Function for High Availability and Data
Replication, SG24-5189
How to contact us
For contact information, visit our Contact CustomerCare web page.
If you are current on maintenance, support for MIMIX products is also available when
you log in to Support Central.
It is important to include product and version information whenever you report
problems. If you use MIMIX Availability Manager, you should also include the version
information provided at the bottom of each MIMIX Availability Manager window.
MIMIX overview
This book provides concepts, configuration procedures, and reference information for
MIMIX ha1 and MIMIX ha Lite. For simplicity, this book uses the term MIMIX to refer
to the functionality provided by either product unless a more specific name is
necessary.
MIMIX version 5 provides high availability for your critical data in a production
environment on IBM Power™ Systems through real-time replication of changes.
MIMIX continuously captures changes to critical database files and objects on a
production system, sends the changes to a backup system, and applies the changes
to the appropriate database file or object on the backup system. The backup system
stores exact duplicates of the critical database files and objects from the production
system.
MIMIX uses two replication paths to address different pieces of your replication
needs. These paths operate with configurable levels of cooperation or can operate
independently.
• The user journal replication path captures changes to critical files and objects
configured for replication through a user journal. When configuring this path,
shipped defaults use the IBM i remote journaling function to simplify sending data
to the remote system. In previous versions, MIMIX DB2 Replicator provided this
function.
• The system journal replication path handles replication of critical system objects
(such as user profiles or spooled files), integrated file system (IFS) objects, and
document library objects (DLOs) using the IBM i system journal. In previous
versions, MIMIX Object Replicator provided this function.
Configuration choices determine the degree of cooperative processing used between
the system journal and user journal replication paths when replicating database files,
IFS objects, data areas, and data queues.
One common use of MIMIX is to support a hot backup system to which operations
can be switched in the event of a planned or unplanned outage. If a production
system becomes unavailable, its backup is already prepared for users. In the event of
an outage, you can quickly switch users to the backup system where they can
continue using their applications. MIMIX captures changes on the backup system for
later synchronization with the original production system. When the original
production system is brought back online, MIMIX assists you with analysis and
synchronization of the database files and other objects.
You can view the replicated data on the backup system at any time without affecting
productivity. This allows you to generate reports, submit (read-only) batch jobs, or
perform backups to tape from the backup system. In addition to real-time backup
capability, replicated databases and objects can be used for distributed processing,
allowing you to off-load applications to a backup system.
Typically MIMIX is used among systems in a network. Simple environments have one
production system and one backup system. More complex environments have
multiple production systems or backup systems. MIMIX can also be used on a single
system.
MIMIX automatically monitors your replication environment to detect and correct
potential problems that could be detrimental to maintaining high availability.
MIMIX also provides a means of verifying that the files and objects being replicated
are what is defined to your configuration. This can help ensure the integrity of your
MIMIX configuration.
The topics in this chapter include:
• “MIMIX concepts” on page 23 describes concepts and terminology that you need
to know about MIMIX.
• “The MIMIX environment” on page 29 describes components of the MIMIX
operating environment.
• “Journal receiver management” on page 37 describes how MIMIX performs
change management and delete management for replication processes.
• “Operational overview” on page 40 provides information about day-to-day MIMIX
operations.
MIMIX concepts
This topic identifies concepts and terminology that are fundamental to how MIMIX
performs replication. You should be familiar with the relationships between systems,
the concepts of data groups and switching, and the role of the i5/OS journaling
function in replication.
The terms management system and network system define the role of a system
relative to how the products interact within a MIMIX installation. These roles remain
associated with the system within the MIMIX installation to which they are defined.
Typically one system in the MIMIX installation is designated as the management
system and the remaining one or more systems are designated as network systems.
A management system is the system in a MIMIX installation that is designated as the
control point for all installations of the product within the MIMIX installation. The
management system is the location from which work to be performed by the product
is defined and maintained. Often the system defined as the management system also
serves as the backup system during normal operations. A network system is any
system in a MIMIX installation that is not designated as the management system
(control point) of that MIMIX installation. Work definitions are automatically distributed
from the management system to a network system. Often a system defined as a
network system also serves as the production system during normal operations.
MIMIX provides support for switching due to planned and unplanned events. At the
data group level, the Switch Data Group (SWTDG) command will switch the direction
in which replication occurs between systems.
Note: A switchable data group is different from bi-directional data flow. Bi-directional
data flow is a data sharing technique described in “Configuring advanced
replication techniques” on page 353.
Journal entries deposited into the system journal (on behalf of an audited object)
contain only an indication of a change to an object. Some entry types contain enough
information for MIMIX to apply the change directly to the replicated object on the
target system; however, many entry types require MIMIX to gather additional
information about the object from the source system before it can apply the change
to the replicated object on the target system.
Journal entries deposited into a user journal (on behalf of a journaled file, data area,
data queue, or IFS object) contain images of the data that was changed. This
information is needed by MIMIX in order to apply the change directly to the replicated
object on the target system.
When replication is started, the start request (STRDG command) identifies a
sequence number within a journal receiver at which MIMIX processing begins. In data
groups configured with remote journaling, the specified sequence number and
receiver name is the starting point for MIMIX processing from the remote journal. The
i5/OS remote journal function controls where it starts sending entries from the source
journal receiver to the remote journal receiver.
The i5/OS operating system requires that journaled objects reside in the same
auxiliary storage pool (ASP) as the user journal. The journal receivers can be in a
different ASP. If the journal is in a primary independent ASP, the journal receivers
must reside in the same primary independent ASP or a secondary independent ASP
within the same ASP group.
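As a hedged sketch only, these placement rules can be restated in code. The class, field, and function names below are invented for illustration; they are not IBM i or MIMIX interfaces.

```python
# Illustrative sketch of the ASP placement rules described above.
# The Asp class and function names are invented for this example; they are
# not IBM i or MIMIX interfaces.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Asp:
    number: int
    kind: str                        # "system", "basic", "primary", or "secondary"
    asp_group: Optional[str] = None  # only independent ASPs belong to an ASP group

def object_placement_ok(object_asp, journal_asp):
    """Journaled objects must reside in the same ASP as the user journal."""
    return object_asp.number == journal_asp.number

def receiver_placement_ok(journal_asp, receiver_asp):
    """Receivers may be in a different ASP, except when the journal is in a
    primary independent ASP: then the receivers must be in the same primary
    independent ASP or a secondary independent ASP of the same ASP group."""
    if journal_asp.kind != "primary":
        return True
    return (receiver_asp.kind in ("primary", "secondary")
            and receiver_asp.asp_group == journal_asp.asp_group)
```

For example, a journal in primary independent ASP 33 of group IASP01 may keep its receivers in a secondary ASP of IASP01, but not in basic ASP 2.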
The i5/OS operating system (V5R4 and higher releases) allows journaling a
maximum of 10,000,000 objects to one user journal. MIMIX can use existing journals
configured with this maximum. Journals created by MIMIX have a maximum of 250,000 objects. User
journaling will not start if the number of objects associated with the journal exceeds
the journal maximum. The maximum includes:
• Objects for which changes are currently being journaled
• Objects for which journaling was ended while the current receiver is attached
• Journal receivers that are, or were, associated with the journal while the current
journal receiver is attached.
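The limit check described above can be sketched as follows. The function and constant names are invented for this example; they are not MIMIX or IBM i interfaces.

```python
# Illustrative sketch of the journal object-maximum rule described above.
# Function and constant names are invented for this example; they are not
# MIMIX or IBM i interfaces.
OS_JOURNAL_MAXIMUM = 10_000_000    # i5/OS V5R4 and higher releases
MIMIX_JOURNAL_MAXIMUM = 250_000    # journals created by MIMIX

def can_start_journaling(currently_journaled,
                         ended_while_receiver_attached,
                         receivers_while_attached,
                         journal_maximum):
    """All three categories listed above count toward the journal maximum;
    user journaling will not start if the total exceeds that maximum."""
    total = (currently_journaled
             + ended_while_receiver_attached
             + receivers_while_attached)
    return total <= journal_maximum
```

Note that objects whose journaling already ended, and receivers associated while the current receiver is attached, still consume capacity toward the maximum.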
Remote journaling requires unique considerations for journaling and journal receiver
management. For additional information, see “Journal receiver management” on
page 37.
Log spaces
Based on System i5 user space objects, a log space is a MIMIX object that provides
an efficient storage and manipulation mechanism for replicated data that is
temporarily stored on the target system during the receive and apply processes. All
internal structures and objects that make up a log space are created and manipulated
by MIMIX.
Multi-part naming convention
MIMIX uses named definitions to identify related user-defined configuration
information. A multi-part, qualified naming convention uniquely describes certain
types of definitions. This includes a two-part name for journal definitions and a
three-part name for transfer definitions and data group definitions. Newly created data
groups use remote journaling as the default configuration, which has unique
requirements for naming data group definitions. For more information, see “Naming
convention for remote journaling environments with 2 systems” on page 206.
The multi-part name consists of a name followed by one or two participating system
names (actually, names of system definitions). Together the elements of the multi-
part name define the entire environment for that definition. As a whole unit, a fully-
qualified two-part or three-part name must be unique. The first element, the name,
does not need to be unique. In a three-part name, the order of the system names is
also important, since two valid definitions may share the same three elements but
with the system names in different orders.
For example, MIMIX automatically creates a journal definition for the security audit
journal when you create a system definition. Each of these journal definitions is
named QAUDJRN, so the name alone is not unique. The name must be qualified with
the name of the system to which the journal definition applies, such as QAUDJRN
CHICAGO or QAUDJRN NEWYORK. Similarly, the data group definitions
INVENTORY CHICAGO HONGKONG and INVENTORY HONGKONG CHICAGO
are unique because of the order of the system names.
When using command interfaces that require a data group definition, MIMIX can
derive the fully-qualified name of a data group definition if the partial name provided
is sufficient to determine the unique name. If the first part of the name is unique, it can
be used by itself to designate the data group definition. For example, if the data group
definition INVENTORY CHICAGO HONGKONG is the only data group with the name
INVENTORY, then specifying INVENTORY on any command requiring a data group
name is sufficient. However, if a second data group named INVENTORY NEWYORK
LONDON is created, the name INVENTORY by itself no longer describes a unique
data group. INVENTORY CHICAGO would be the minimum portion of the name of the
first data group definition necessary to identify it uniquely. If a third data group named
INVENTORY CHICAGO LONDON was added, then the fully qualified name would be
required to uniquely identify the data group. The order in which the systems are
identified is also important. The system HONGKONG appears in only one of the data
group definitions. However, specifying INVENTORY HONGKONG will generate a
“not found” error because HONGKONG is not the first system in any of the data group
definitions. This applies to all external interfaces that reference multi-part definition
names.
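The prefix-matching rules above can be modeled with a short sketch (Python; the function and data shapes are illustrative assumptions, not MIMIX APIs). Matching is positional, so the order of the system names matters, exactly as in the INVENTORY examples:

```python
# Illustrative model of partial-name resolution for multi-part
# definition names. The function and data shapes are hypothetical,
# not MIMIX APIs; the matching rules come from the text above.

def resolve_data_group(partial, definitions):
    """Return the unique fully-qualified definition whose leading
    elements match `partial`; raise LookupError otherwise."""
    matches = [d for d in definitions
               if list(d[:len(partial)]) == list(partial)]
    if len(matches) != 1:
        raise LookupError("not found" if not matches else "ambiguous")
    return matches[0]

definitions = [
    ("INVENTORY", "CHICAGO", "HONGKONG"),
    ("INVENTORY", "NEWYORK", "LONDON"),
]

# INVENTORY CHICAGO is the minimum unique name for the first definition.
assert resolve_data_group(["INVENTORY", "CHICAGO"], definitions) == \
    ("INVENTORY", "CHICAGO", "HONGKONG")
# INVENTORY HONGKONG fails even though HONGKONG appears in a definition,
# because HONGKONG is never the *first* system name.
```

Note how "INVENTORY" alone raises an ambiguity error once a second INVENTORY data group exists, mirroring the behavior described above.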
MIMIX can also derive a fully qualified name for a transfer definition. Data group
definitions and system definitions include parameters that identify associated transfer
definitions. When a subsequent operation requires the transfer definition, MIMIX uses
the context of the operation to determine the fully qualified name. For example, when
starting a data group, MIMIX uses information in the data group definition, the
systems specified in the data group name, and the specified transfer definition name
to derive the fully qualified transfer definition name. If MIMIX cannot find the transfer
definition, it reverses the order of the system names and checks again, avoiding the
need for redundant transfer definitions.
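The reversed-order lookup for transfer definitions can be sketched as follows (hypothetical names, not a MIMIX API): if the definition is not found as (name, sysA, sysB), the system names are reversed and the lookup is tried again before failing.

```python
# Sketch of the reversed-order transfer definition lookup described
# above. Names and data shapes are illustrative, not MIMIX APIs.

def find_transfer_definition(name, sys_a, sys_b, transfer_defs):
    for key in ((name, sys_a, sys_b), (name, sys_b, sys_a)):
        if key in transfer_defs:
            return key
    return None

transfer_defs = {("PRIMARY", "CHICAGO", "HONGKONG")}

# Found after reversing the system names, so a second, redundant
# transfer definition is unnecessary:
assert find_transfer_definition("PRIMARY", "HONGKONG", "CHICAGO",
                                transfer_defs) == \
    ("PRIMARY", "CHICAGO", "HONGKONG")
```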
You can also use contextual system support (*ANY) to configure transfer definitions.
When you specify *ANY in a transfer definition, MIMIX uses information from the
context in which the transfer definition is called to resolve to the correct system.
Unlike the conventional configuration case, a specific search order is used if MIMIX is
still unable to find an appropriate transfer definition. For more information, see “Using
contextual (*ANY) transfer definitions” on page 181.
The MIMIX environment
A variety of product-defined operating elements and user-defined configuration
elements collectively form an operational environment on each system. A MIMIX
environment can comprise one or more MIMIX installations. Each system that
participates in the same MIMIX environment must have the same operational
environment. This topic describes each of the components of the MIMIX operating
environment.
IFS directories
A default IFS directory structure is used in conjunction with the library-based objects
of the MIMIX family of products. The IFS directory structure is associated with the
product library for the MIMIX installation and is created during the installation process
for License Manager and MIMIX. Over time, the installation processes for products
and fixes will restore objects to the IFS directory structure as well as to the QSYS
library.
The directories created when License Manager is installed or upgraded follow these
guidelines:
/LakeviewTech This is the root directory for all IFS-based objects.
/LakeviewTech/system-based-area This directory structure contains
system-based objects that need to exist only once on a system. The system-
based-area represents a unique directory for each set of objects. Two structures
that you should be aware of are:
/LakeviewTech/Service/MIMIX/VvRrMm/ is the recommended location
for users to place fixes downloaded from the Lakeview website. The VvRrMm
value is the same as the release of License Manager on the system. Multiple
VvRrMm directories will exist as the release of License Manager changes.
/LakeviewTech/Upgrades/ is where the MIMIX Installation Wizard places
software packages that it uploads to the System i5.
/LakeviewTech/UserData/ is available to users to store product-related
data.
The directories created when MIMIX is installed or upgraded follow these guidelines.
The requirements of your MIMIX environment determine the structure of these
directories:
/LakeviewTech/MIMIX/product-installation-library There is a
unique directory structure for each installation of MIMIX.
/LakeviewTech/MIMIX/product-installation-library/product-
area There is a unique directory structure for each installation of MIMIX. The
structure is determined by the set of objects needed by an area of the product and
the product installation library.
Table 2. Job descriptions used by MIMIX
MIMIXDFT    MIMIX Default. Used for all MIMIX jobs that do not have a specific job
            description.
MIMIXSND    MIMIX Send. Used for database send, object send, object retrieve,
            container send, and status send jobs in MIMIX.
User profiles
All of the MIMIX job descriptions are configured to run jobs using the MIMIXOWN user
profile. This profile owns all MIMIX objects, including the objects in the MIMIX product
libraries and in the MIMIXQGPL library. The profile is created with sufficient authority
to run all MIMIX products and perform all the functions provided by the MIMIX
products. The authority of this user profile can be reduced, if business practices
require, but this is not recommended. Reducing the authority of the MIMIXOWN
requires significant effort by the user to ensure that the products continue to function
properly and to avoid adversely affecting the performance of MIMIX products. See the
License and Availability Manager book for additional security information for the
MIMIXOWN user profile.
system manager job and a receiver side system manager job. These jobs must be
active to enable replication.
Once started, the system manager monitors for configuration changes and
automatically moves any configuration changes to the network system. Dynamic
status changes are also collected and returned to the management system. The
system manager also gathers messages and timestamp information from the network
system and places them in a message log and timestamp file on the management
system. In addition, the system manager performs periodic maintenance tasks,
including cleanup of the system and data group history files.
Figure 1 shows a MIMIX installation with a management system and two network
systems. In this installation, there are four pairs of system manager jobs; two between
the first network system and the management system and two between the second
network system and the management system. Each arrow represents a pair of
system manager jobs. Since each pair has a send side system manager job and a
receiver side system manager job, there are eight total system manager jobs in this
installation.
Figure 1. System manager jobs in a MIMIX installation with one management system and
two network systems.
The System manager delay parameter in the system definition determines how
frequently the system manager looks for work. Other parameters in the system
definition control other aspects of system manager operation.
System manager jobs are included in a group of jobs that MIMIX automatically
restarts daily to maintain the MIMIX environment. The default operation of MIMIX is to
restart these MIMIX jobs at midnight (12:00 a.m.). MIMIX determines when to restart
the system managers based on the value of the Job restart time parameter in the
system definitions for the network and management systems. For more information,
see the section “Configuring restart times for MIMIX jobs” on page 313.
have three journal manager jobs, one on each system. For more information, see
“Journal definition considerations” on page 205.
By default, MIMIX performs both change management and delete management for
journal receivers used by the replication process. Parameters in a journal definition
allow you to customize details of how the change and delete operations are
performed. The Journal manager delay parameter in the system definition determines
how frequently the journal manager looks for work.
Journal manager jobs are included in a group of jobs that MIMIX automatically
restarts daily to maintain the MIMIX environment. The default operation of MIMIX is to
restart these MIMIX jobs at midnight (12:00 a.m.). The Job restart time parameter in
the system definition determines when the journal manager for that system restarts.
For more information, see the section “Configuring restart times for MIMIX jobs” on
page 313.
MIMIXSBS subsystem
The MIMIXSBS subsystem is the default subsystem used by nearly all MIMIX-related
processing. This subsystem is shipped with the proper job queue entries and routing
entries for correct operation of the MIMIX jobs.
Data libraries
MIMIX uses the concept of data libraries. Currently there are two series of data
libraries:
• MIMIX uses data libraries for storing the contents of the object cache. MIMIX
creates the first data library when needed and may create additional data libraries.
The names of data libraries are of the form product-library_n (where n is a number
starting at 1).
• For system journal replication, MIMIX creates libraries named product-library_x,
where x is derived from the ASP. For example, A for ASP 1, B for ASP 2. These
ASP-specific data libraries are created when needed and are not deleted until the
product is uninstalled.
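The two naming patterns above can be sketched in a few lines (Python; the helper names are hypothetical, not MIMIX APIs; only the naming rules come from the text):

```python
# Illustrative sketch of the data library naming patterns described
# above. Helper names are hypothetical, not MIMIX APIs.

def cache_library(product_library, n):
    """Object cache data libraries: product-library_n, n starting at 1."""
    return f"{product_library}_{n}"

def asp_library(product_library, asp_number):
    """ASP-specific data libraries: the suffix letter is derived from
    the ASP number (A for ASP 1, B for ASP 2, and so on)."""
    return f"{product_library}_{chr(ord('A') + asp_number - 1)}"

assert cache_library("MIMIX", 1) == "MIMIX_1"
assert asp_library("MIMIX", 1) == "MIMIX_A"
assert asp_library("MIMIX", 2) == "MIMIX_B"
```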
Named definitions
MIMIX uses named definitions to identify related user-defined configuration
information. You can create named definitions for system information, communication
(transfer) information, journal information, and replication (data group) information.
Any definitions you create can be used by both user journal and system journal
replication processes.
One or more of each of the following definitions is required to perform replication:
A system definition identifies to MIMIX the characteristics of a system that
participates in a MIMIX installation.
A transfer definition identifies to MIMIX the communications path and protocol to be
used between two systems. MIMIX supports Systems Network Architecture (SNA),
OptiConnect, and Transmission Control Protocol/Internet Protocol (TCP/IP) protocols.
A journal definition identifies to MIMIX a journal environment on a particular system.
MIMIX uses the journal definition to manage the journal receiver environment used by
the replication process.
A data group definition identifies to MIMIX the characteristics of how replication
occurs between two systems. A data group definition determines the direction in
which replication occurs between the systems, whether that direction can be
switched, and the default processing characteristics to use when processing the
database and object information associated with the data group.
A remote journal link (RJ link) is a MIMIX configuration element that identifies an
i5/OS remote journaling environment. Newly created data groups use remote
journaling as the default configuration. An RJ link identifies journal definitions that
define the source and target journals, primary and secondary transfer definitions for
the communications path used by MIMIX, and whether the i5/OS remote journal
function sends journal entries asynchronously or synchronously. When a data group
is added, the ADDRJLNK command is run automatically, using the transfer definition
defined in the data group.
The naming conventions used within definitions are described in “Multi-part naming
convention” on page 27.
• Data group IFS entries This type of entry allows you to identify integrated file
system (IFS) objects for replication. IFS objects include directories and stream
files. They reside in directories, similar to DOS or Unix files. You can select IFS
objects for replication by specific or generic path name.
• Data group DLO entries This type of entry allows you to identify document
library objects (DLOs) for replication. DLOs are documents and folders. They are
contained in folders (except for first-level folders). You can select individual DLOs
for replication by specific or generic folder and DLO name, and by owner.
• Data group data area entries This type of entry allows you to define a data area
for replication by the data area polling process. However, the preferred way to
replicate data areas is to use advanced journaling.
A single data group can contain any combination of these types of data group entries.
If your license is for only one of the MIMIX products rather than for MIMIX ha1 or
MIMIX ha Lite, only the entries associated with the product to which you are licensed
will be processed for replication.
Journal receiver management
Parameters in journal definition commands determine how change management and
delete management are performed on the journal receivers used by the replication
process. Shipped default values result in the recommended behavior of allowing
MIMIX to perform change management and delete management.
Change management - The Receiver change management (CHGMGT) parameter
controls how the journal receivers are changed. The recommended value *TIMESIZE
results in MIMIX changing the journal receiver by both threshold size and time of day.
Additional parameters in the journal definition control the size at which to change
(THRESHOLD), the time of day to change (TIME), and when to reset the receiver
sequence number (RESETTHLD). The conditions specified in these parameters
must be met before change management can occur. For additional information, see
“Tips for journal definition parameters” on page 201.
If you do not use the recommended value *TIMESIZE for CHGMGT, consider the
following:
• When you specify *TIMESYS, the system manages the receiver by size and
during IPLs, and MIMIX changes the receiver at a specified time.
Note: The value *TIME can be specified with *SIZE or *SYSTEM to achieve the
same results as *TIMESIZE or *TIMESYS, respectively.
• When you specify *NONE, MIMIX does not handle changing the journal receivers.
You must ensure that the system or another application performs change
management to prevent the journal receivers from overflowing.
• When you allow the system to perform change management (*SYSTEM) and the
attached journal receiver reaches its threshold, the system detaches the journal
receiver and creates and attaches a new journal receiver. During an initial
program load (IPL), the system creates and attaches a new journal receiver.
During normal IPLs and most abnormal IPLs, the journal sequence number may
be reset.
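The CHGMGT behaviors above can be summarized as a decision sketch (Python; the function and argument shapes are illustrative, and the assumption that under *TIMESIZE either trigger independently causes a change is an interpretation of the text, not a documented MIMIX internal):

```python
# Illustrative decision sketch for the CHGMGT values described above.
# Assumption: under *TIMESIZE, reaching the size threshold or reaching
# the change time each independently causes MIMIX to change the receiver.

def mimix_changes_receiver(chgmgt, size_reached, time_arrived):
    if chgmgt == "*TIMESIZE":      # MIMIX handles both size and time
        return size_reached or time_arrived
    if chgmgt == "*TIMESYS":       # system handles size; MIMIX handles time
        return time_arrived
    return False                   # *SYSTEM / *NONE: MIMIX does not change it

assert mimix_changes_receiver("*TIMESIZE", True, False)
assert not mimix_changes_receiver("*TIMESYS", True, False)
assert not mimix_changes_receiver("*NONE", True, True)
```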
In a remote journaling configuration, MIMIX recognizes remote journals and ignores
change management for the remote journals. The remote journal receiver is changed
automatically by the i5/OS remote journal function when the receiver on the source
system is changed. You can specify in the source journal definition whether to have
receiver change management performed by the system or by MIMIX. Any change
management values you specify for the target journal definition are ignored.
You can also customize how MIMIX performs journal receiver change management
through the use of exit programs. For more information, see “Working with journal
receiver management user exit points” on page 541.
Delete management - The Receiver delete management (DLTMGT) parameter
controls how the journal receivers used for replication are deleted. It is strongly
recommended that you use the value *YES to allow MIMIX to perform delete
management.
When MIMIX performs delete management, the journal receivers are only deleted
after MIMIX is finished with them and all other criteria specified on the journal
definition are met. The criteria includes how long to retain unsaved journal receivers
(KEEPUNSAV), how many detached journal receivers to keep (KEEPRCVCNT), and
how long to keep detached journal receivers (KEEPJRNRCV).
Note: If more than one MIMIX installation uses the same journal, the journal
manager for each installation can delete a journal receiver regardless of whether
the other installations are finished with it. If you have this scenario, you need to
use the journal receiver delete management exit points to control deleting the
journal receiver. For more information, see “Working with journal receiver
management user exit points” on page 541.
Delete management of the source and target receivers occurs independently. It is
highly recommended that you configure the journal definitions to have
MIMIX perform journal delete management. The i5/OS remote journal function does
not allow a receiver to be deleted until it is replicated from the local journal (source) to
the remote journal (target). When MIMIX manages deletion, a target journal receiver
cannot be deleted until it is processed by the database reader (DBRDR) process and
it meets the other criteria defined in the journal definition.
If you choose to manage journal receivers yourself, you need to ensure that journals
are not removed before MIMIX has finished processing them. MIMIX operations can
be affected if you allow the system to handle delete management. For example, the
system may delete a journal receiver before MIMIX has completed its use.
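The delete-management rule described above — a receiver is deleted only after MIMIX is finished with it and every configured retention criterion is met — can be sketched as follows (Python; the field and configuration shapes are hypothetical, only the KEEPUNSAV, KEEPRCVCNT, and KEEPJRNRCV criteria names come from the text):

```python
# Illustrative model of the delete-management criteria described above.
# Data shapes are hypothetical; criteria names come from the text.

def receiver_deletable(rcv, cfg):
    if not rcv["mimix_done"]:                 # MIMIX must be finished with it
        return False
    if not rcv["saved"] and rcv["age_days"] < cfg["KEEPUNSAV"]:
        return False                          # retain unsaved receivers this long
    if rcv["newer_detached"] < cfg["KEEPRCVCNT"]:
        return False                          # keep this many detached receivers
    if rcv["age_days"] < cfg["KEEPJRNRCV"]:
        return False                          # keep detached receivers this long
    return True

cfg = {"KEEPUNSAV": 7, "KEEPRCVCNT": 2, "KEEPJRNRCV": 1}
old = {"mimix_done": True, "saved": True, "age_days": 5, "newer_detached": 3}
busy = {"mimix_done": False, "saved": True, "age_days": 30, "newer_detached": 9}

assert receiver_deletable(old, cfg)
assert not receiver_deletable(busy, cfg)      # MIMIX is not finished with it
```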
For example, refer to Figure 2. Replication ended while processing journal entries in
target receiver 2. Target journal receiver 1 is deleted through the configured delete
management options. If the data group is started (STRDG) with a starting journal
sequence number for an entry that is in journal receiver 1, the remote journal function
attempts to retransmit source journal receivers 1 through 4, beginning with receiver 1.
However, receiver 2 already exists on the target system. When the operating system
encounters receiver 2, an error occurs and the transmission to the target system
ends.
You can prevent this situation before starting that data group if you delete any target
journal receivers following the receiver that will be used as the starting point. If you
encounter the problem, recovery is simply to remove the target journal receivers and
let remote journaling resend them. In this example, deleting target receiver 2 would
prevent or resolve the problem.
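The recovery rule in this example can be sketched simply (Python; the function and list shapes are hypothetical): before restarting, delete every target receiver after the one containing the starting sequence number, so the remote journal function can retransmit without colliding with an existing receiver.

```python
# Sketch of the recovery rule described above; shapes are hypothetical.

def target_receivers_to_delete(target_receivers, start_receiver):
    """Return the target receivers that follow the receiver used as
    the starting point; these must be deleted before retransmission."""
    idx = target_receivers.index(start_receiver)
    return target_receivers[idx + 1:]

# Replication stopped in target receiver 2; restarting from receiver 1
# requires deleting target receiver 2, as in the example above.
assert target_receivers_to_delete([1, 2], 1) == [2]
```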
Figure 2. Journal receivers in a remote journaling environment: source receivers 1
through 4 and target receivers 1 and 2.
Operational overview
Before replication can begin, the following requirements must be met through the
installation and configuration processes:
• MIMIX software must be installed on each system in the MIMIX installation.
• At least one communication link must be in place for each pair of systems
between which replication will occur.
• The MIMIX operating environment must be configured and be available on each
system.
• Journaling must be active for the database files and objects configured for user
journal replication.
• For objects to be replicated from the system journal, the object auditing
environment must be set up.
• The files and objects must be initially synchronized between the systems
participating in replication.
Once MIMIX is configured and files and objects are synchronized, day-to-day
operations for MIMIX can be performed from either the web-based MIMIX Availability
Manager or from a 5250 emulator for a System i5.
MIMIX Availability Manager is easy to use and preferable for daily operations. Newer
MIMIX functions may be available only through this user interface. Through
preferences, individuals can customize which systems, installations, and data
groups to monitor.
Support for checking installation status
Only MIMIX Availability Manager provides the ability to monitor multiple installations
of MIMIX at once from a single interface. Status from each installation ‘bubbles up’ to
the Enterprise View, where you can quickly see whether a problem exists on the
systems you are monitoring. Status icons and flyover text start the problem resolution
process by guiding you to the appropriate action for the most severe problem present.
In the 5250 emulator, the MIMIX Availability Status display reports the prioritized
status of a single installation. Status from the installation is reported in three areas:
Replication, Audits and Notification, and Services. Color and informational messages
identify the most severe problem present in an area and identify the action to take to
start problem isolation.
When you choose to display detailed status for a data group from MIMIX Availability
Manager, the highest priority problem that exists for the data group determines which
of several possible views of the Data Group Details window will be displayed. You can
often take action to resolve problems directly from these detailed status windows.
Data Group Details - Status This window identifies all of the replication jobs and
services jobs needed by the data group and provides their status. Similar
information is available from the merged view of the Data Group Status display.
Data Group Details - User Journal This window represents replication
performed by user journal replication processes, including journaled files, IFS
objects, data areas, and data queues. It includes information about the replication
of user journal transactions, including journal progress, performance, and recent
activity. Similar information is available from database views of the Data Group
Status display.
Data Group Details - System Journal This window represents replication
performed by system journal replication processes, including journal progress,
performance, and recent activity. Similar information is available from object views
of the Data Group Status display.
Data Group Details - Activity This window summarizes activity for the selected
data group that is experiencing replication problems. Problems are grouped by
type of activity: File, Object, IFS Tracking, or Object Tracking. This window
displays only one type of problem at a time, based on the activity type selected
from the navigation bar. Similar information is available in the 5250 emulator
when you use the following options from the Work with Data Groups display:
12=Files not active, 13=Objects in error, 51=IFS trk entries not active, and 53=Obj
trk entries not active.
Activity, and Object Activity Details. Default filtering options in MIMIX Availability
Manager only display problems with replicating objects from the system journal.
Failed requests: During normal processing, system journal replication processes
may encounter object requests that cannot be processed due to an error. Often the
error is due to a transient condition, such as when an object is in use by another
process at the time the object retrieve process attempts to gather the object data.
Although MIMIX will attempt some automatic retries, requests may still result in a
Failed status. In many cases, failed entries can be resubmitted and they will succeed.
Some errors may require user intervention, such as a never-ending process that
holds a lock on the object.
MIMIX is shipped with the MIMIX Retry Monitor (#RTYDGACTE), which runs
periodically and automatically resubmits all failed activity entries for all data groups.
To use this monitor, it must be manually enabled and then started, using options on
the Work with monitors (WRKMON) display. If your environment produces numerous
transient failed entries, it is recommended that you use the #RTYDGACTE monitor.
You can manually request that MIMIX retry processing for a data group activity entry
that has a status of *FAILED. These entries can be viewed using the Work with Data
Group Activity (WRKDGACT) command. From the Work with Data Group Activity or
Work with Data Group Activity Entries displays, you can use the retry option to
resubmit individual failed entries or all of the entries for an object. This option calls the
Retry Data Group Activity Entries (RTYDGACTE) command. From the Work with
Data Group Activity display, you can also specify a time at which to start the request,
thereby delaying the retry attempt until a time when it is more likely to succeed.
MIMIX Availability Manager supports manually retrying activities from appropriate
windows by providing Retry as an available action in the Action List.
Files on hold: When the database apply process detects a data synchronization
problem, it places the file (individual member) on “error hold” and logs an error. File
entries are in held status when an error is preventing them from being applied to the
target system. You need to analyze the cause of the problem in order to determine
how to correct and release the file and ensure that the problem does not occur again.
An option on the Work with Data Groups display provides quick access to the subset
of file entries that are in error for a data group. From the Work with DG File Entries
display, you can see the status of an entry and use a number of options to assist in
resolving the error. An alternative view shows the database error code and journal
code. Available options include access to the Work with DG Files on Hold
(WRKDGFEHLD) command. The WRKDGFEHLD command allows you to work with
file entries that are in a held status. You can view and work with the entry for which
the error was detected and work with all other entries following the entry in error.
MIMIX Availability Manager provides similar capabilities to those of WRKDGFEHLD
from the following windows: Data Group Details - User Journal, Data Group Details -
Activity, and File Activity Details. Default filtering options in MIMIX Availability
Manager only display problems with replicating objects from the user journal.
Journal analysis: With user journal replication, when the system that is the source of
replicated data fails, it is possible that some of the generated journal entries may not
have been transmitted to or received by the target system. However, it is not always
possible to determine this until the failed system has been recovered. Even if the
failed system is recovered, damage to a disk unit or to the journal itself may prevent
an accurate analysis of any missed data. Once the source system is available again,
if there is no damage to the disk unit or journal and its associated journal receivers,
you can use the journal analysis function to help determine what journal entries may
have been missed and to which files the data belongs. You can only perform journal
analysis on the system where a journal resides.
These messages are sent to both the primary and secondary message queues that
are specified for the system definition.
In addition to these message queues, message entries are recorded in a MIMIX
message log file. The MIMIX message log provides a powerful tool for problem
determination. Maintaining a message log file allows you to keep a record of
messages issued by MIMIX as an audit trail. In addition, the message log provides
robust subset and filter capabilities, the ability to locate and display related job logs,
and a powerful debug tool. When messages are issued, they are initially sent to the
specified primary and secondary message queues. If those message queues are
cleared, the message log file preserves a second record of MIMIX operations.
The message log on the management system contains messages from the
management system and each network system defined within the installation. The
system manager is responsible for collecting messages from all network systems. On
a network system, the message log contains only those messages generated by
MIMIX activity on that system.
MIMIX automatically performs cleanup of the message log on a regular basis. The
system manager deletes entries from the message log file based on the value of the
Keep system history parameter in the system definition. However, if you process an
unusually high volume of replicated data, you may also want to delete unnecessary
message log entries periodically, since the file grows with the number of messages
issued each day.
Chapter 2 Replication process overview
Replication job and supporting job names
The replication path for database information includes the i5/OS remote journal
function, the MIMIX database reader process, and one or more database apply
processes. If MIMIX source-send processes are used instead of remote journaling,
then the processes include the database send process, the database receive
process, and one or more database apply processes.
The replication path for object information includes the object send process, the
object receive process, and the object apply process. When a data retrieval request is
replicated, the replication path also includes the object retrieve, container send, and
container receive processes. A data retrieval request is an operation that creates or
changes the content of an object. A self-contained request is an operation that
deletes, moves, or renames an object, or that changes the authority or ownership of
an object.
Table 3 identifies the job names for each of the processes that make up the
replication path. Except as noted, MIMIX automatically restarts the jobs in Table 3 to
maintain the MIMIX environment. The default is to restart these MIMIX jobs daily at
midnight (12:00 a.m.). If this time conflicts with scheduled workloads, you can
configure a different time to restart the jobs. For more information, see “Configuring
restart times for MIMIX jobs” on page 313.
Cooperative processing introduction
relationships by assigning them to the same or appropriate apply sessions. It is also
much better at maintaining data integrity of replicated objects which previously
needed legacy cooperative processing in order to replicate some operations such as
creates, deletes, moves, and renames. Another benefit of MIMIX Dynamic Apply is
more efficient hold log processing by enabling multiple files to be processed through a
hold log instead of just one file at a time.
New data groups created with the shipped default configuration values are configured
to use MIMIX Dynamic Apply. This configuration requires data group object entries
and data group file entries.
For more information, see “Identifying logical and physical files for replication” on
page 105 and “Requirements and limitations of MIMIX Dynamic Apply” on page 110.
Advanced journaling
The term advanced journaling refers to journaled IFS objects, data areas, or data
queues that are configured for cooperative processing. When these objects are
configured for cooperative processing, replication of changed bytes of the journaled
objects’ data occurs through the user journal. This is more efficient than replicating an
entire object through the system journal each time changes occur.
Such a configuration also allows for the serialization of updates to IFS objects, data
areas, and data queues with database journal entries. In addition, processing time for
these object types may be reduced, even for equal amounts of data, as user journal
replication eliminates the separate save, send, and restore processes necessary for
system replication.
Frequently you will see the phrase “user journal replication of IFS objects, data areas,
and data queues” used interchangeably with the term advanced journaling. These
terms are the same.
For more information, see “User journal replication of IFS objects, data areas, data
queues” on page 72 and “Planning for journaled IFS objects, data areas, and data
queues” on page 85.
System journal replication
The system journal replication path is designed to handle the object-related
availability needs of your system. You identify the critical system objects that you
want to replicate, such as user profiles, programs, and DLOs. MIMIX uses the journal
entries generated by the operating system’s object auditing function to identify the
changes to objects on production systems and replicates the changes to backup
systems.
If you are not already using the system’s security audit journal (QAUDJRN, or
system journal), when you use MIMIX commands to build the journaling environment,
MIMIX creates the journal and correctly sets system values related to auditing. MIMIX
checks the settings of the following system values, making changes as necessary:
• QAUDLVL (Security auditing level) system value. MIMIX sets the values
*CREATE, *DELETE, *OBJMGT, and *SAVRST. MIMIX checks for values
*SECURITY, *SECCFG, *SECRUN, and *SECVLDL and will set them only if the
value *SECURITY is not already set. If any data group is configured to replicate
spooled files, MIMIX also sets *SPLFDTA and *PRTDTA.
• QAUDCTL (Auditing control) system value. MIMIX sets the values *OBJAUD and
*AUDLVL.
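The QAUDLVL rules above can be sketched in code. The following Python function is only an illustration of the documented rules; the function name and the set representation are assumptions, not part of MIMIX:

```python
# Sketch of the documented QAUDLVL adjustment rules. The system-value
# names are real i5/OS auditing values; the logic below is an
# illustrative assumption, not the MIMIX implementation.

REQUIRED = {"*CREATE", "*DELETE", "*OBJMGT", "*SAVRST"}
SECURITY_FAMILY = {"*SECURITY", "*SECCFG", "*SECRUN", "*SECVLDL"}
SPOOL_VALUES = {"*SPLFDTA", "*PRTDTA"}

def adjust_qaudlvl(current, replicates_spooled_files=False):
    """Return the QAUDLVL value set after applying the documented rules."""
    values = set(current)
    values |= REQUIRED
    # The security-related values are set only when *SECURITY is not
    # already present.
    if "*SECURITY" not in values:
        values |= SECURITY_FAMILY
    if replicates_spooled_files:
        values |= SPOOL_VALUES
    return values
```

For example, a system that already has *SECURITY set gains only the four required values, while a system replicating spooled files also gains *SPLFDTA and *PRTDTA.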
These system value settings, along with the object audit value of each object, control
what journal entries are created in the system journal (QAUDJRN) for an object.
If an operation on an object is not represented by an entry in the system journal,
MIMIX is not aware of the operation and cannot replicate it.
The system objects you want to replicate are defined to a data group through data
group object entries, data group DLO entries, and data group IFS entries. The term
name space refers to this collection of objects that are identified for replication by
MIMIX using the system journal replication processes.
An object is replicated when it is created, restored, moved, or renamed into the MIMIX
name space. While in the MIMIX name space, changes to the object or to the
authority settings of the object are also replicated.
Replication through the system journal is event-driven. When a data group is started,
each process used in the replication path waits for its predetermined event to occur
then begins its activity. The processes are interdependent and run concurrently. The
system journal replication path in MIMIX uses the following processes:
• Object send process: alternates between identifying objects to be replicated and
transmitting control information about objects ready for replication to the target
system.
• Object receive process: receives control information and waits for notification that
additional source system processing, if any, is complete before passing the
control information to the object apply process.
• Object retrieve process: if any additional information is needed for replication,
obtains it and places it in a holding area. This process is also used when
additional processing is required on the source system prior to transmission to the
target system.
• Container send process: transmits any additional information from a holding area
to the target system and notifies the control process of that action.
• Container receive process: receives any additional information and places it into a
holding area on the target system.
• Object apply process: replicates objects according to the control information and
any required additional information that is retrieved from the holding area.
• Status send process: notifies the source system of the status of the replication.
• Status receive process: updates the status on the source system and, if
necessary, passes control information back to the object send process.
MIMIX uses a collection of structures and customized functions for controlling these
structures during replication. Collectively the customized functions and structures are
referred to as the work log. The structures in the work log consist of log spaces, work
lists (implemented as user queues), and a distribution status file.
When a data group is started, MIMIX uses the security audit journal to monitor for
activity on objects within the name space. When activity occurs on the object, such as
it is being accessed or changed, a corresponding journal entry is created in the
security audit journal. As journal entries are added to the journal receiver on the
source system, the object send process reads journal entries and determines if they
represent operations to objects that are within the name space. For each journal entry
for an object within the name space, the object send process creates an activity
entry in the work log. Creation of an activity entry includes adding the entry to the log
space and adding a record to the distribution status file. An activity entry includes a
copy of the journal entry and any related information associated with a replication
operation for an object, including the status of the entry. User interaction with activity
entries is through the Work with Data Group Activity display and the Work with DG
Activity Entries display.
There are two categories of activity entries: those that are self-contained and those
that require the retrieval of additional information. “Processing self-contained activity
entries” on page 54 describes the simplest object replication scenario. “Processing
data-retrieval activity entries” on page 55 describes the object replication scenario in
which additional data must be retrieved from the source system and sent to the target
system.
• Transmits the activity entry to a corresponding object receive process job on the
target system.
The object receive process adds the “received” date and time to the activity entry,
writes the activity entry to the log space, adds a record to the distribution status file,
and places the activity entry on the object apply work list. Now each system has a
copy of the activity entry.
The next available object apply process job for the data group retrieves the activity
entry from the object apply work list and replicates the operation represented by the
entry. The object apply process adds the “applied” date and time to the activity entry,
changes the status of the entry to CP (completed processing), and adds the entry to
the status send work list.
The status send process retrieves the activity entry from the status send work list
and transmits the updated entry to a corresponding status receive process on the
source system. The status receive process updates the activity entry in the work log
and the distribution status file.
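The flow of a self-contained activity entry through these work lists can be modeled as a simple pipeline. The Python below is an illustrative sketch, not the MIMIX implementation; the queue names, entry fields, and initial status value are simplified assumptions:

```python
from collections import deque

# Simplified model of a self-contained activity entry moving through
# the target-system work lists described above.
object_apply_list = deque()
status_send_list = deque()

def receive(entry):
    """Object receive: timestamp the entry and queue it for apply."""
    entry["received"] = True
    object_apply_list.append(entry)

def apply_entry():
    """Object apply: replicate the operation, mark CP, queue status."""
    entry = object_apply_list.popleft()
    entry["applied"] = True
    entry["status"] = "CP"  # completed processing
    status_send_list.append(entry)

def send_status():
    """Status send: report the updated entry back to the source system."""
    return status_send_list.popleft()

entry = {"journal_entry": "T-CO", "status": "PA"}
receive(entry)
apply_entry()
final = send_status()
```

The point of the model is the hand-off: each process only consumes from one work list and produces to the next, which is why each system ends up with its own copy of the entry and its final status.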
Concurrently, the object send process reads the object send work list. When the
object send process finds an activity entry in the object send work list, the object send
process performs one or more of the following additional steps on the entry:
• If an object retrieve job packaged the object, the activity entry is routed to the
container send work list.
• The activity entry is transmitted to the target system, its status is updated, and a
“retrieved” date and time is added to the activity entry.
On the source system the next available object retrieve process for the data group
retrieves the activity entry from the object retrieve work list and processes the
referenced object. In addition to retrieving additional information for the activity entry,
additional processing may be required on the source system. The object retrieve
process may perform some or all of the following steps:
• Retrieve the extended attribute of the object. This may be one step in retrieving
the object or it may be the primary function required of the retrieve process.
• If necessary, cooperative processing activities, such as adding or removing a data
group file entry, are performed.
• The object identified by the activity entry is packaged into a container in the data
library. The object retrieve process adds the “retrieved” date and time to the
activity entry and changes the status of the entry to “pending send.”
• The activity entry is added to the object send work list. From there the object send
job takes the appropriate action for the activity, which may be to send the entry to
the target system, add the entry to the container send work list, or both.
The container send and receive processes are only used when an activity entry
requires information in addition to what is contained within the journal entry. The next
available job for the container send process for the data group retrieves the activity
entry from the container send work list and retrieves the container for the packaged
object from the data library. The container send job transmits the container to a
corresponding job of the container receive process on the target system. The
container receive process places the container in a data library on the target system.
The container send process waits for confirmation from the container receive job, then
adds the “container sent” date and time to the activity entry, changes the status of the
activity entry to PA (pending apply), and adds the entry to the object send work list.
The next available object apply process job for the data group retrieves the activity
entry from the object apply work list, locates the container for the object in the data
library, and replicates the operation represented by the entry. The object apply
process adds the “applied” date and time to the activity entry, changes the status of
the entry to CP (completed processing), and adds the entry to the status send work
list.
The status send process retrieves the activity entry from the status send work list
and transmits the updated entry to a corresponding job for status receive process
on the source system. The status receive process updates the activity entry in the log
space and the distribution status file. If the activity entry requires further processing,
such as if an updated container is needed on the target system, the status receive job
adds the entry to the object send work list.
Processes with multiple jobs
The object retrieve, container send and receive, and object apply processes all
consist of one or more asynchronous jobs. You can specify the minimum and
maximum number of asynchronous jobs you want to allow MIMIX to run for each
process and a threshold for activating additional jobs. The minimum number indicates
how many permanent jobs should be started for the process. These jobs stay active
as long as the data group is active.
During periods of peak activity, if more requests are backlogged than the specified
threshold allows, additional temporary jobs, up to the maximum number, may be
started. This load-leveling feature allows system journal replication processes to
react automatically to periodic heavy workloads so that replication stays current with
production system activity. When system activity returns to a reduced level, the
temporary jobs end after a period of inactivity.
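The load-leveling behavior can be sketched as a simple sizing rule. This Python function is an illustrative assumption about how a target job count might be derived from the configured minimum, maximum, and threshold; the actual MIMIX algorithm is not described here:

```python
def jobs_needed(backlog, active, minimum, maximum, threshold):
    """Illustrative sketch of the load-leveling rule described above:
    keep `minimum` permanent jobs, and start temporary jobs up to
    `maximum` when the backlog exceeds `threshold`."""
    if backlog > threshold:
        # Backlog exceeds the threshold: add a temporary job, capped
        # at the configured maximum.
        return min(maximum, max(active + 1, minimum))
    # No backlog pressure: temporary jobs are allowed to end later,
    # after a period of inactivity, down to the permanent minimum.
    return max(active, minimum)
```

For example, with a minimum of 2, a maximum of 6, and a threshold of 10, a backlog of 50 requests would prompt one more job, while an empty backlog keeps only the permanent jobs.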
The system journal replication path within MIMIX relies on entries placed in the
system journal by i5/OS object auditing functions. To ensure that objects configured
for this replication path retain an object auditing value that supports replication, MIMIX
evaluates and changes the objects’ auditing value when necessary.
To do this, MIMIX employs a configuration value that is specified on the Object
auditing value (OBJAUD) parameter of data group entries (object, IFS, DLO)
configured for the system journal replication path. When MIMIX determines that an
object’s auditing value is lower than the configured value, it changes the object to
have the higher configured value specified in the data group entry that is the closest
match to the object. The OBJAUD parameter supports object audit values of *ALL,
*CHANGE, or *NONE.
MIMIX evaluates and may change an object’s auditing value when specific conditions
exist during object replication or during processing of a Start Data Group (STRDG)
request. This evaluation process can also be invoked manually for all objects
identified for replication by a data group.
During replication - MIMIX may change the auditing value during replication when
an object is replicated because it was created, restored, moved, or renamed into the
MIMIX name space (the group of objects defined to MIMIX).
While starting a data group - MIMIX may change the auditing value while
processing a STRDG request if the request specified processes that cause object
send (OBJSND) jobs to start and the request occurred after a data group switch or
after a configuration change to one or more data group entries (object, IFS, or DLO).
Shipped command defaults for the STRDG command allow MIMIX to set object
auditing if necessary. If you would rather set the auditing level for replicated objects
yourself, you can specify *NO for the Set object auditing level (SETAUD) parameter
when you start data groups.
Invoking manually - The Set Data Group Auditing (SETDGAUD) command provides
the ability to manually set the object auditing level of existing objects identified for
replication by a data group. When the command is invoked, MIMIX checks the audit
value of existing objects identified for system journal replication. Shipped default
values on the command cause MIMIX to change the object auditing value of objects
to match the configured value when an object’s actual value is lower than the
configured value.
The SETDGAUD command is used during initial configuration of a data group.
Otherwise, it is not necessary for normal operations and should only be used under
the direction of a trained MIMIX support representative.
The SETDGAUD command also supports optionally forcing a change to a configured
value that is lower than the existing value through its Force audit value (FORCE)
parameter.
Evaluation processing - Regardless of how the object auditing evaluation is
invoked, MIMIX may find that an object is identified by more than one data group
entry within the same class of object (IFS, DLO, or library-based). It is important to
understand the order of precedence for processing data group entries.
Data group entries are processed in order from most generic to most specific. IFS
entries are processed using the Unicode character set; object entries and DLO entries
are processed using the EBCDIC character set. The first (more generic) entry that
matches the object is used until a more specific match is found.
The entry that most specifically matches the object is used to process the object. If
the object's audit value is lower than the configured value in that entry, the object is
set to the configured auditing value.
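The precedence rule can be illustrated with a small sketch. Here the "most specific match" is approximated by the longest matching pattern, which is an assumption for illustration only; actual MIMIX matching uses generic name rules and the EBCDIC or Unicode ordering described above:

```python
from fnmatch import fnmatch

# Ranking of the audit values supported by the OBJAUD parameter.
AUDIT_RANK = {"*NONE": 0, "*CHANGE": 1, "*ALL": 2}

def resolve_audit(object_path, entries, current_value):
    """Illustrative sketch: pick the data group entry that most
    specifically matches the object (approximated here by the longest
    matching pattern), then raise the object's audit value if the
    configured value is higher. Never lowers the value."""
    matches = [(pat, val) for pat, val in entries if fnmatch(object_path, pat)]
    if not matches:
        return current_value
    _, configured = max(matches, key=lambda m: len(m[0]))
    if AUDIT_RANK[configured] > AUDIT_RANK[current_value]:
        return configured
    return current_value
```

With entries `("/home/*", "*CHANGE")` and `("/home/app/*", "*ALL")`, an object under `/home/app/` takes *ALL because that entry is the more specific match, and an object already at *ALL is never lowered.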
When MIMIX processes a data group IFS entry and changes the auditing level of
objects which match the entry, all of the directories in the object’s directory path are
checked and, if necessary, changed to the new auditing value. In the case of an IFS
entry with a generic name, all descendents of the IFS object may also have their
auditing value changed.
When you change a data group entry, MIMIX updates all objects identified by the
same type of data group entry in order to ensure that auditing is set properly for
objects identified by multiple entries with different configured auditing values. For
example, if a new DLO entry is added to a data group, MIMIX sets object auditing for
all objects identified by the data group’s DLO entries, but not for its object entries or
IFS entries.
For more information and examples of setting auditing values with the SETDGAUD
command, see “Setting data group auditing values manually” on page 297.
User journal replication
MIMIX Remote Journal support enables MIMIX to take advantage of the cross-journal
communications capabilities provided by the i5/OS remote journal function instead of
using internal communications. Newly created data groups use remote journaling as
the default configuration.
Overview of IBM processing of remote journals
Several key concepts within the i5/OS remote journal function are important to
understanding its impact on MIMIX replication.
A local-remote journal pair refers to the relationship between a configured source
journal and target journal. The key point about a local-remote journal pair is that data
flows only in one direction within the pair, from source to target.
When the remote journal function is activated and all journal entries from the source
are requested, existing journal entries for the specified journal receiver on the source
system which have not already been replicated are replicated as quickly as possible.
This is known as catchup mode. Once the existing journal entries are delivered to
the target system, the system begins sending new entries in continuous mode
according to the delivery mode specified when the remote journal function was
started. New journal entries can be delivered either synchronously or asynchronously.
Synchronous delivery
In synchronous delivery mode the target system is updated in real time with journal
entries as they are generated by the source applications. The source applications do
not continue processing until the journal entries are sent to the target journal.
Each journal entry is first replicated to the target journal receiver in main memory on
the target system (1 in Figure 3). When the source system receives notification of the
delivery to the target journal receiver, the journal entry is placed in the source journal
receiver (2) and the source database is updated (3).
With synchronous delivery, journal entries that have been written to memory on the
target system are considered unconfirmed entries until they have been written to
auxiliary storage on the source system and confirmation of this is received on the
target system (4).
Figure 3. Synchronous mode sequence of activity in the IBM remote journal feature.
[Diagram: (1) a journal entry is replicated from the source applications to the target
journal receiver in main memory; (2) the entry is placed in the source journal
receiver; (3) the production database is updated; (4) confirmation of the I/O on the
source system is sent to the target system. Each journal has an associated journal
message queue.]
Unconfirmed journal entries are entries that have been replicated to the target
system but for which the completion of the I/O to auxiliary storage for the same
journal entries on the source system is not yet known. Unconfirmed entries pertain
only to remote journals that are maintained synchronously. They are held in the data
portion of the target journal receiver. These
entries are not processed with other journal entries unless specifically requested or
until confirmation of the I/O for the same entries is received from the source system.
Confirmation typically is not immediately sent to the target system for performance
reasons.
Once the confirmation is received, the entries are considered confirmed journal
entries. Confirmed journal entries are entries that have been replicated to the target
system and the I/O to auxiliary storage for the same journal entries on the source
system is known to have completed.
With synchronous delivery, the most recent copy of the data is on the target system. If
the source system becomes unavailable, you can recover using data from the target
system.
Since delivery is synchronous to the application layer, there are application
performance and communications bandwidth considerations. There is some
performance impact to the application when it is moved from asynchronous mode to
synchronous mode for high availability purposes. This impact can be minimized by
ensuring efficient data movement. In general, a minimum of a dedicated 100 megabit
Ethernet connection is recommended for synchronous remote journaling.
MIMIX includes special switch processing for unconfirmed entries to ensure that the
most recent transactions are preserved in the event of a source system failure. For
more information, see “Support for unconfirmed entries during a switch” on page 70.
Asynchronous delivery
In asynchronous delivery mode, the journal entries are placed in the source journal
first (A in Figure 4) and then applied to the source database (B). An independent job
sends the journal entries from a buffer (C) to the target system journal receiver (D) at
some time after control is returned to the source applications that generated the
journal entries.
Because the journal entries on the target system may lag behind the source system’s
database, in the event of a source system failure, entries may become trapped on the
source system.
Figure 4. Asynchronous mode sequence of activity in the IBM remote journal feature.
[Diagram: (A) a journal entry is placed in the source journal receiver; (B) the
production database is updated; (C) an independent job sends the entry from a buffer;
(D) the entry is received into the target journal receiver. The target journal has an
associated journal message queue.]
With asynchronous delivery, the most recent copy of the data is on the source
system. Performance critical applications frequently use asynchronous delivery.
Default values used in configuring MIMIX for remote journaling use asynchronous
delivery. This delivery mode is most similar to the MIMIX database send and receive
processes.
User journal replication processes
Data groups created using default values are configured to use remote journaling
support for user journal replication.
The replication path for database information includes the i5/OS remote journal
function, the MIMIX database reader process, and one or more database apply
processes.
The i5/OS remote journal function transfers journal entries to the target system.
The database reader (DBRDR) process reads journal entries from the
target journal receiver of a remote journal configuration and places those journal
entries that match replication criteria for the data group into a log space.
Remote journaling does not allow entries to be filtered before they are sent to the
remote system. All entries deposited into the source journal are transmitted to the
target system. The database reader process performs the filtering that is identified in the
data group definition parameters and file and tracking entry options.
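The filtering step can be sketched as follows. The entry shape and selection criterion below are illustrative assumptions; the real selection is driven by data group definition parameters and file and tracking entry options:

```python
def dbrdr_pass(journal_entries, selected_objects):
    """Illustrative sketch of the DBRDR filtering step: every entry
    arrives from the remote journal (nothing is filtered on the source
    side), and only entries matching the data group's replication
    criteria are placed in the log space."""
    log_space = []
    for entry in journal_entries:
        if entry["object"] in selected_objects:
            log_space.append(entry)
    return log_space
```

This placement of the filter matters for capacity planning: communications bandwidth carries every journal entry, while the apply side sees only the selected subset.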
The database apply process applies the changes stored in the target log space to the
target system’s database. MIMIX uses multiple apply processes in parallel for
maximum efficiency. Transactions are applied in real-time to generate a duplicate
image of the journaled objects being replicated from the source system.
The RJ link
To simplify tasks associated with remote journaling, MIMIX implements the concept of
a remote journal link. A remote journal link (RJ link) is a configuration element that
identifies an i5/OS remote journaling environment. An RJ link identifies:
• A “source” journal definition that identifies the system and journal which are the
source of journal entries being replicated from the source system.
• A “target” journal definition that defines a remote journal.
• Primary and secondary transfer definitions for the communications path for use by
MIMIX.
• Whether the i5/OS remote journal function sends journal entries asynchronously
or synchronously.
Once an RJ link is defined and other configuration elements are properly set, user
journal replication processes use the i5/OS remote journaling environment within
their replication path.
The concept of an RJ link is integrated into existing commands. The Work with RJ
Links display makes it easy to identify the state of the i5/OS remote journaling
environment defined by the RJ link.
journal entries for database operations to be routed back to their originating system.
See “Support for unconfirmed entries during a switch” on page 70 and “RJ link
considerations when switching” on page 70 for more details.
Table 4. End option values on the End Remote Journal Link (ENDRJLNK) command.
*IMMED The target journal is deactivated immediately. Journal entries that are already
queued for transmission are not sent before the target journal is deactivated.
The next time the remote journal function is started, the journal entries that
were queued but not sent are prepared again for transmission to the target
journal.
*CNTRLD Any journal entries that are queued for transmission to the target journal will
be transmitted before the i5/OS remote journal function is ended. At any time,
the remote journal function may have one or more journal entries prepared for
transmission to the target journal. If an asynchronous delivery mode is used
over a slow communications line, it may take a significant amount of time to
transmit the queued entries before actually ending the target journal.
RJ link monitors
User journal replication processes monitor the journal message queues of the
journals identified by the RJ link. Two RJ link monitors are created automatically, one
on the source system and one on the target system. These monitors provide added
value by allowing MIMIX to automatically monitor the state of the remote journal link,
to notify the user of problems, and to automatically recover the link when possible.
originated the replication and holds the source journal definition for the next system in
the cascade.
For more information about configuring for these environments, see “Data distribution
and data management scenarios” on page 361.
Support for unconfirmed entries during a switch
The MIMIX Remote Journal support implements synchronous mode processing in a
way that reduces data latency in the movement of journal entries from the source to
the target system. This reduces the potential for and the degree of manual
intervention when an unplanned outage occurs.
Whenever an RJ link failure is detected MIMIX saves any unconfirmed entries on the
target system so they can be applied to the backup database if an unplanned switch
is required. The unconfirmed entries are the most recent changes to the data.
Maintaining this data on the target system is critical to your managed availability
solution.
In the event of an unplanned switch, the unconfirmed entries are routed to the MIMIX
database apply process to be applied to the backup database. As a result, you will
see the database apply process jobs run longer than they would under standard
switch processing. If the apply process is ended by a user before the switch, MIMIX
will restart the apply jobs to preserve these entries.
As part of the unplanned switch processing, MIMIX checks whether the apply jobs are
caught up. Then, unconfirmed entries are applied to the target database and added to
a journal that will be transferred to the source system when that system is brought
back up. When the backup system is brought online as the temporary source
system, the unconfirmed entries are processed before any new journal entries
generated by the application are processed. Furthermore, to ensure full data integrity,
once the original source system is operational these unconfirmed entries are the first
entries replicated back to that system.
used during a planned switch cause the RJ link to remain active. You may need to
end the RJ link after a planned switch.
User journal replication of IFS objects, data areas, data queues
the hotel risks reserving too many or too few rooms. Without advanced journaling,
serialization of these transactions cannot be guaranteed on the target system due
to inherent differences in MIMIX processing from the user journal (database file) and
the system journal (default for objects). With advanced journaling, MIMIX serializes
these transactions on the target system by updating both the file and the data area
through user journal processing. Thus, as long as the database file and data area are
configured to be processed by the same apply session, updates occur on the target
system in the same order they were originally made on the source system.
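The hotel example can be sketched as a single ordered queue of updates. All names and values below are assumptions for illustration; the point is only that one apply session replays both objects' updates in source order:

```python
from collections import deque

# Illustrative model of the serialization guarantee: the reservation
# file and the room-count data area share one apply session, so their
# updates replay on the target in the order they occurred on the source.
apply_session = deque([
    ("RESERVATIONS", "add reservation for room 101"),
    ("ROOMCOUNT", "decrement available rooms"),
])

target_state = {"reservations": [], "available_rooms": 10}
while apply_session:
    obj, op = apply_session.popleft()
    if obj == "RESERVATIONS":
        target_state["reservations"].append(op)
    else:
        target_state["available_rooms"] -= 1
```

Had the two objects traveled through separate replication paths, the room count could have been decremented on the target before (or long after) the reservation existed, which is exactly the inconsistency the same-apply-session configuration avoids.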
Additional benefits of replicating IFS objects, data areas, and data queues from the
user journal include:
• Replication is less intrusive. In traditional object replication, the save/restore
process places locks on the replicated object on the source system. Database
replication touches the user journal only, leaving the source object alone.
• Changes to objects replicated from the user journal may be replicated to the target
system in a more timely manner. In traditional object replication, system journal
replication processes must contend with potential locks placed on the objects by
user applications.
• Processing time may be reduced, even for equal amounts of data. Database
replication eliminates the separate save, send, and restore processes necessary
for object replication.
• The objects replicated from the user journal can reduce burden on object
replication processes when there is a lot of activity being replicated through the
system journal.
• Commitment control is supported for B journal entry types for IFS objects
journaled to a user journal.
• Advanced journaling can be used in configurations that use either remote
journaling or MIMIX source-send processes for user journal replication.
Restrictions and configuration requirements vary for IFS objects and data area or
data queue objects. If one or more of the configuration requirements are not met, the
system journal replication path is used. For detailed information, including supported
journal entry types, see “Identifying data areas and data queues for replication” on
page 112 and “Identifying IFS objects for replication” on page 118.
1. Data groups can also be configured for MIMIX source-send processing instead of MIMIX RJ support.
Tracking entries
A unique tracking entry is associated with each IFS object, data area, and data queue
that is replicated using advanced journaling.
The collection of data group IFS entries for a data group determines the subset of
existing IFS objects on the source system that are eligible for replication using
advanced journaling techniques. Similarly, the collection of data group object entries
determines the subset of existing data areas and data queues on the source system
that are eligible for replication using advanced journaling techniques. MIMIX requires
a tracking entry for each of the eligible objects to identify how it is defined for
replication and to assist with tracking status when it is replicated. IFS tracking entries
identify IFS stream files, including the source and target file ID (FID), while object
tracking entries identify data areas or data queues.
When you initially configure a data group you must load tracking entries, start
journaling for the objects which they identify, and synchronize the objects with the
target system. The same is true when you add new or change existing data group IFS
entries or object entries.
It is also possible for tracking entries to be automatically created. After creating or
changing data group IFS entries or object entries that are configured for advanced
journaling, tracking entries are created the next time the data group is started.
However, this method has disadvantages. This can significantly increase the amount
of time needed to start a data group. If the objects you intend to replicate with
advanced journaling are not journaled before the start request is made, MIMIX places
the tracking entries in *HLDERR state. Error messages indicate that journaling must
be started and the objects must be synchronized between systems.
Once a tracking entry exists, it remains until one of the following occurs:
• The object identified by the tracking entry is deleted from the source system and
replication of the delete action completes on the target system.
• The data group configuration changes so that an object is no longer identified for
replication using advanced journaling.
Figure 5 shows an IFS user directory structure, the include and exclude processing
selected for objects within that structure, and the resultant list of tracking entries
created by MIMIX.
Viewing tracking entries is supported in both 5250 emulator and MIMIX Availability
Manager interfaces. Their status is included with other data group status. You also
can see what objects they identify, whether the objects are journaled, and their
replication status. You can also perform operations on tracking entries, such as
holding and releasing, to address replication problems.
Lesser-used processes for user journal replication
and begins reading entries from the next journal receiver. This eliminates excessive
use of disk storage and allows valuable system resources to be available for other
processing.
Besides indicating the mapping between source and target file names, data group file
entries identify additional information used by database processes. The data group
file entry can also specify a particular apply session to use for processing on the
target system.
A status code in the data group file entry also stores the status of the file or member in
the MIMIX process. If a replication problem is detected, MIMIX puts the member in
hold error (*HLDERR) status so that no further transactions are applied. Files can
also be put on hold (*HLD) manually.
Putting a file on hold causes MIMIX to retain all journal entries for the file in log
spaces on the target system. If you expect to synchronize files at a later time, it is
better to put the file in an ignored state. By setting files to an ignored state, journal
entries for the file in the log spaces are deleted and additional entries received from
the target system are discarded. This keeps the log spaces to a minimal size and
improves efficiency for the apply process.
The file entry option Lock member during apply indicates whether to restrict access
to the file on the backup system to read-only. This file entry option can be specified
on the data group definition or on individual data group entries.
Table 5. Data area types supported by the data area polling process.
You define a data group data area entry for each data area that you want MIMIX to
manage. The data group definition determines how frequently the polling programs
check for changes to data areas.
The data area polling process runs on the source system. This process retrieves each
data area defined to a data group at the interval you specify and determines whether
or not a data area has changed. MIMIX checks for changes to the data area type and
length as well as to the contents of the data area. If a data area has changed, the data
area polling process retrieves the data area and converts it into a journal entry. This
journal entry is sent through the normal user journal replication processing and is
used to update the data area on the target system.
For example, if a data area that is defined to MIMIX is deleted and recreated with new
attributes, the data area polling process will capture the new attributes and recreate
the data area on the target system.
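The polling cycle just described (retrieve each configured data area at an interval, compare its type, length, and contents against the last known state, and forward a change record) can be sketched as follows. This Python illustration is not MIMIX code; the fetch callback and snapshot dictionary are assumptions made for the example.

```python
def poll_once(fetch, names, snapshot):
    """One polling pass over the configured data areas.

    fetch(name) -> {'type': ..., 'length': ..., 'value': ...} for a data area.
    Compares each result against the previous snapshot and returns the list of
    change records to forward; the snapshot is updated in place. The first pass
    seeds the snapshot, so every data area is reported once initially.
    """
    changes = []
    for name in names:
        current = fetch(name)
        previous = snapshot.get(name)
        if current != previous:           # type, length, or contents changed
            changes.append({"name": name, **current})
            snapshot[name] = current
    return changes
```

A deleted and recreated data area simply shows up as changed attributes on the next pass, which matches the behavior described above.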
Chapter 3 Preparing for MIMIX
This chapter outlines what you need to do to prepare for using MIMIX.
Preparing for the installation and use of MIMIX is a very important step towards
meeting your availability management requirements. Because of their shared
functions and their interaction with other MIMIX products, it is best to determine
System i5 requirements for user journal and system journal processing in the context
of your total MIMIX environment.
Give special attention to planning and implementing security for MIMIX. General
security considerations for all MIMIX products can be found in the License and
Availability Manager book. In addition, you can make your systems more secure with
MIMIX product-level and command-level security. Each product has its own product-level
security, but you must also consider the security implications of common
functions used by each product. Information about setting security for common
functions is also found in the License and Availability Manager book.
The topics in this chapter include:
• “Checklist: pre-configuration” on page 81 provides a procedure to follow to
prepare to configure MIMIX on each system that participates in a MIMIX
installation.
• “Data that should not be replicated” on page 83 identifies the types of data that
should not be replicated.
• “Planning for journaled IFS objects, data areas, and data queues” on page 85
describes considerations when planning to use advanced journaling for IFS
objects, data areas, or data queues.
• “Starting the MIMIXSBS subsystem” on page 90 describes how to start the
MIMIXSBS subsystem which all MIMIX products run in.
• “Accessing the MIMIX Main Menu” on page 91 describes the MIMIX Main Menu
and its two assistance levels, basic and intermediate, which provide options to
help simplify daily interactions with MIMIX.
Checklist: pre-configuration
You need to configure MIMIX on each system that participates in a MIMIX installation.
Do the following:
1. By now, you should have completed the following tasks:
• The checklist for installing MIMIX software in the License and Availability
Manager book
• Turning on product-level security and granting authority to user profiles to
control access to the MIMIX products
2. At this time, you should review the information in “Data that should not be
replicated” on page 83.
3. Decide what replication choices are appropriate for your environment. For
detailed information see the chapter “Planning choices and details by object class”
on page 93.
4. If it is not already active, start the MIMIXSBS subsystem using topic “Starting the
MIMIXSBS subsystem” on page 90.
5. Configure each system in the MIMIX installation, beginning with the management
system. The chapter “Configuration checklists” on page 137 identifies the primary
options you have for configuring MIMIX.
6. Once you complete the configuration process you choose, you may also need to
do one or more of the following:
• If you plan to use MIMIX Monitor in conjunction with MIMIX, you may need to
write exit programs for monitoring activity and you may want to ensure that
your monitor definitions are replicated. See the Using MIMIX book for more
information.
• Verify the configuration.
• Verify any exit programs that are called by MIMIX.
• Update any automation programs you use with MIMIX and verify their
operation.
• If you plan to use switching support, you or your Certified MIMIX Consultant
may need to take additional action to set up and test switching. In order to use
MIMIX Switch Assistant, a default model switch framework must be configured
and identified in MIMIX policies. For more information about MIMIX Model
Switch Framework, see the Using MIMIX Monitor book. For more information
about switching and policies, see the Using MIMIX book.
Data that should not be replicated
There are some considerations to keep in mind when defining data for replication. Not
only do you need to determine what is critical to replicate, but you also need to
consider data that should not be replicated.
As you identify your critical data, consider the following:
• You may not need to replicate temporary files, work files, and temporary objects,
including DLOs and stream files. Evaluate how your applications use such files to
determine if they need to be replicated.
You should not replicate the following:
• LAKEVIEW, MIMIXQGPL, or any MIMIX installation libraries.
• The LAKEVIEW or MIMIXOWN user profiles.
• System user profiles from one system to another. For example, QSYSOPR and
QSECOFR should not be replicated.
• IBM i5/OS objects from one system to another. IBM-supplied libraries, files, and
other objects for i5/OS typically begin with the prefix letter Q.
Planning for journaled IFS objects, data areas, and data
queues
You can choose to use the cooperative processing support within MIMIX to replicate
any combination of journaled IFS objects, data area objects, or data queue objects
using user journal replication processes.
In addition to configuration and journaling requirements and the restrictions that
apply, you need to address several other considerations when planning to replicate
journaled IFS objects, data areas, or data queues. These considerations affect
whether journals should be shared, whether objects should be replicated in a data
group shared with database files, whether configuration changes are needed to
change apply sessions for database files, and whether exit programs need to be
updated.
• You may have previously used data groups with a Data group type (TYPE) value
of *OBJ to separate replication of IFS, data area, or data queue objects from other
activity. Converting these data groups to use advanced journaling will not cause
problems with the data group. The data group definition and existing data group
entries must be changed to the values required for advanced journaling.
• When converting an existing data group to use advanced journaling, all objects in
the IFS path or the library specified that match the selection criteria are selected.
You may need to create additional data group IFS or object entries in order to
achieve the desired results. This may include creating entries that exclude objects
from replication.
• Adding IFS, data area, or data queue objects configured for advanced journaling
to an existing database replication environment may increase replication activity
and affect performance. If a large amount of data is to be replicated, consider the
overall replication performance and throughput requirements when choosing a
configuration.
• Changing the replication mechanism of IFS objects, data areas, or data queues
from system journal replication to user journal replication generally reduces
bandwidth consumption, improves replication latency, and eliminates the locking
contention associated with the save and restore process. However, if these
objects have never been replicated, the addition of IFS byte stream files, data
areas, or data queues to the replication environment will increase bandwidth
consumption and processing workload.
Conversion examples
To illustrate a simple conversion, assume that the systems defined to data group
KEYAPP are running on IBM i V5R4. You use this data group for system journal
replication of the objects in library PRODLIB. The data group has one data group
object entry which has the following values:
LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD)
COOPDB(*YES) COOPTYPE(*FILE)
Example 1 - You decide to use advanced journaling for all *DTAARA and *DTAQ
objects replicated with data group KEYAPP. You have confirmed that the data group
definition specifies TYPE(*ALL) and does not need to change. After performing a
controlled end of the data group, you change the data group object entry to have the
following values:
LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD)
COOPDB(*YES) COOPTYPE(*FILE *DTAARA *DTAQ)
When the data group is started, object tracking entries are loaded and journaling is
started for the data area and data queue objects in PRODLIB. Those objects will now
be replicated from a user journal. Any other object types in PRODLIB continue to be
replicated from the system journal.
Example 2 - You want to use advanced journaling for data group KEYAPP but one
data area, XYZ, must remain replicated from the system journal. You will need the
data group object entry described in Example 1:
LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD)
COOPDB(*YES) COOPTYPE(*FILE *DTAARA *DTAQ)
You will also need a new data group object entry that specifies the following so that
data area XYZ can be replicated from the system journal:
LIB1(PRODLIB) OBJ1(XYZ) OBJTYPE(*DTAARA) PRCTYPE(*INCLD)
COOPDB(*NO)
• For incomplete journal entries, MIMIX provides two or more journal entries with
duplicate journal entry sequence numbers and journal codes and types to the user
exit program when the data for the incomplete entry is retrieved. Programs need
to correctly handle these duplicate entries representing the single, original journal
entry.
• Journal entries for journaled IFS objects, data areas, and data queues will be
routed to the user exit program. This may be a performance consideration relative
to user exit program design.
Contact your Certified MIMIX Consultant for assistance with user exit programs.
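To make the duplicate-entry behavior concrete, the following sketch groups consecutive physical entries that share a sequence number, journal code, and journal type back into one logical entry. This is illustrative only; the entry field names are assumptions for the example, not the MIMIX user exit interface.

```python
def group_segments(entries):
    """Collapse consecutive entries sharing (sequence number, journal code,
    journal type) into one logical entry with their data concatenated.

    A user exit that treats each physical entry as independent would process
    the same logical transaction more than once; grouping avoids that.
    """
    groups = []
    for entry in entries:
        key = (entry["seq"], entry["code"], entry["type"])
        if groups and groups[-1][0] == key:
            groups[-1][1].append(entry["data"])   # another segment of the same entry
        else:
            groups.append((key, [entry["data"]]))
    return [(key, "".join(parts)) for key, parts in groups]
```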
Starting the MIMIXSBS subsystem
Accessing the MIMIX Main Menu
The MIMIX command accesses the main menu for a MIMIX installation. The MIMIX
Main Menu has two assistance levels, basic and intermediate. The command defaults
to the basic assistance level, shown in Figure 6, with its options designed to simplify
day-to-day interaction with MIMIX. Figure 7 shows the intermediate assistance level.
The options on the menu vary with the assistance level. In either assistance level, the
available options also depend on the MIMIX products installed in the installation
library and their licensing. The products installed and the licensing also affect
subsequent menus and displays.
Accessing the menu - If you know the name of the MIMIX installation you want, you
can use the name to library-qualify the command, as follows:
Type the command library-name/MIMIX and press Enter. The default name
of the installation library is MIMIX.
If you do not know the name of the library, do the following:
1. Type the command LAKEVIEW/WRKPRD and press Enter.
2. Type a 9 (Display product menu) next to the product in the library you want on the
Lakeview Technology Installed Products display and press Enter.
Changing the assistance level - The F21 key (Assistance level) on the main menu
toggles between basic and intermediate levels of the menu. You can also specify
the Assistance Level (ASTLVL) parameter on the MIMIX command.
Note: Procedures are written assuming you are using the MIMIX Availability Status
(WRKMMXSTS) display, which can only be selected from the MIMIX Basic Main
Menu. We recommend you use the MIMIX Basic Main Menu unless you must
access the MIMIX Intermediate Main Menu.
Figure 6. MIMIX Main Menu, basic assistance level
Figure 7. MIMIX Main Menu, intermediate assistance level
Chapter 4 Planning choices and details by object class
This chapter describes the replication choices available for objects and identifies
critical requirements, limitations, and configuration considerations for those choices.
Many MIMIX processes are customized to provide optimal handling for certain
classes of related object types and differentiate between database files, library-based
objects, integrated file system (IFS) objects, and document library objects (DLOs).
Each class of information is identified for replication by a corresponding class of data
group entries. A data group can have any combination of data group entry classes.
Some classes even support multiple choices for replication.
In each class, a data group entry identifies a source of information that can be
replicated by a specific data group. When you configure MIMIX, each data group
entry you create identifies one or more objects to be considered for replication or to
be explicitly excluded from replication. When determining whether to replicate a
journaled transaction, MIMIX evaluates all of the data group entries for the class to
which the object belongs. If the object is within the name space determined by the
existing data group entries, the transaction is replicated.
The topics in this chapter include:
• “Replication choices by object type” on page 96 identifies the available replication
choices for each object class.
• “Configured object auditing value for data group entries” on page 98 describes
how MIMIX uses a configured object auditing value that is identified in data group
entries and when MIMIX will change an object’s auditing value to match this
configuration value.
• “Identifying library-based objects for replication” on page 100 includes information
that is common to all library-based objects, such as how MIMIX interprets the data
group object entries defined for a data group. This topic also provides examples
and additional detail about configuring entries to replicate spooled files and user
profiles.
• “Identifying logical and physical files for replication” on page 105 identifies the
replication choices and considerations for *FILE objects with logical or physical file
extended attributes. This topic identifies the requirements, limitations, and
configuration requirements of MIMIX Dynamic Apply and legacy cooperative
processing.
• “Identifying data areas and data queues for replication” on page 112 identifies the
replication choices and configuration requirements for library-based objects of
type *DTAARA and *DTAQ. This topic also identifies restrictions for replication of
these object types when user journal processes (advanced journaling) is used.
• “Identifying IFS objects for replication” on page 118 identifies supported and
unsupported file systems, replication choices, and considerations such as long
path names and case sensitivity for IFS objects. This topic also identifies
restrictions and configuration requirements for replication of these object types
when user journal processes (advanced journaling) is used.
• “Identifying DLOs for replication” on page 124 describes how MIMIX interprets the
data group DLO entries defined for a data group and includes examples for
documents and folders.
• “Processing of newly created files and objects” on page 127 describes how new
IFS objects, data areas, data queues, and files that have journaling implicitly
started are replicated from the user journal.
• “Processing variations for common operations” on page 130 describes
configuration-related variations in how MIMIX replicates move/rename, delete,
and restore operations.
Replication choices by object type
Objects of type *FILE with extended attributes PF (data, source) and LF:
– Default: user journal with MIMIX Dynamic Apply¹, identified by object entries
and file entries.
– Other: for PF data files, legacy cooperative processing², identified by object
entries and file entries; for PF source and LF files, system journal.
– See “Identifying logical and physical files for replication” on page 105.
Objects of type *FILE with other extended attributes:
– Default: system journal, identified by object entries.
– See “Identifying library-based objects for replication” on page 100.
Objects of type *DTAARA and *DTAQ:
– Default: system journal, identified by object entries.
– Other: advanced journaling², identified by object entries and object tracking
entries.
– See “Identifying data areas and data queues for replication” on page 112.
IFS objects:
– Default: system journal, identified by IFS entries.
– Other: advanced journaling², identified by IFS entries and IFS tracking entries.
– See “Identifying IFS objects for replication” on page 118.
Configured object auditing value for data group entries
When a compare request includes an object with a configured object auditing value of
*NONE, any differences found for attributes that could generate T-ZC or T-YC journal
entries are reported as *EC (equal configuration).
You may also want to read the following:
• For more information about when MIMIX sets an object’s auditing value, see
“Managing object auditing” on page 57.
• For more information about manually setting values and examples, see “Setting
data group auditing values manually” on page 297.
• To see what attributes can be compared and replicated, see the following topics:
– “Attributes compared and expected results - #FILATR, #FILATRMBR audits”
on page 591
– “Attributes compared and expected results - #OBJATR audit” on page 596
– “Attributes compared and expected results - #DLOATR audit” on page 606.
– “Attributes compared and expected results - #IFSATR audit” on page 604
Identifying library-based objects for replication
How MIMIX uses object entries to evaluate journal entries for replication
The following information and example can help you determine whether the objects
you specify in data group object entries will be selected for replication. MIMIX
determines which replication process will be used only after it determines whether the
library-based object will be replicated.
When determining whether to process a journal entry for a library-based object,
MIMIX looks for a match between the object information in the journal entry and one
of the data group object entries. The data group object entries are checked from the
most specific to the least specific. The library name is the first search element,
followed by the object type, the attribute (for files and device descriptions), and the object
name. The most significant match found (if any) is checked to determine whether to
include or exclude the journal entry in replication.
Table 7 shows how MIMIX checks a journal entry for a match with a data group object
entry. The columns are arranged to show the priority of the elements within the object
entry, with the most significant (library name) at left and the least significant (object
name) at right.
When configuring data group object entries, the flexibility of the generic support
allows a variety of include and exclude combinations for a given library or set of
libraries. But, generic name support can also cause unexpected results if it is not well
planned. Consider the search order shown in Table 7 when configuring data group
object entries to ensure that objects are not unexpectedly included or excluded in
replication.
Example - Say that you have a data group configured with data
group object entries like those shown in Table 9. The journal entries MIMIX is
evaluating for replication are shown in Table 8.
A transaction is received from the system journal for program BOOKKEEP in library
FINANCE. MIMIX will replicate this object since it fits the criteria of the first data group
object entry shown in Table 9.
A transaction for file ACCOUNTG in library FINANCE would also be replicated since it
fits the third entry.
A transaction for data area BALANCE in library FINANCE would not be replicated
since it fits the second entry, an Exclude entry.
Table 9. Sample of data group object entries, arranged in order from most to least specific
Entry  Source Library  Object Type  Object Name  Attribute  Process Type
1      FINANCE         *PGM         *ALL         *ALL       *INCLD
2      FINANCE         *DTAARA      *ALL         *ALL       *EXCLD
3      FINANCE         *ALL         ACC*         *ALL       *INCLD
Likewise, a transaction for data area ACCOUNT1 in library FINANCE would not be
replicated. Although the transaction fits both the second and third entries shown in
Table 9, the second entry determines whether to replicate because it provides a more
significant match in the second criteria checked (object type). The second entry
provides an exact match for the library name, an exact match for object type, and an
object name match to *ALL.
In order for MIMIX to process the data area ACCOUNT1, an additional data group
object entry with process type *INCLD could be added for object type of *DTAARA
with an exact name of ACCOUNT1 or a generic name ACC*.
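The precedence rules shown in this example can be sketched as follows. This is an illustration of the selection logic described above, not MIMIX's actual implementation; only the library, object type, and object name elements are modeled (the attribute element is omitted for brevity), and the entry tuples are assumptions made for the example.

```python
# Illustrative sketch of object entry matching: an exact value outranks a
# generic name, which outranks *ALL, and library outranks type outranks name.
from fnmatch import fnmatch

def specificity(entry, obj):
    """Rank a matching entry as a tuple, or return None if it does not match.
    Exact values rank 2, generic names (e.g. ACC*) rank 1, *ALL ranks 0."""
    def rank(pattern, value):
        if pattern == "*ALL":
            return 0
        if pattern.endswith("*"):                  # generic name
            return 1 if fnmatch(value, pattern) else None
        return 2 if pattern == value else None
    ranks = []
    for pattern, value in zip(entry[:3], obj):     # (library, type, name), priority order
        r = rank(pattern, value)
        if r is None:
            return None                            # entry does not apply at all
        ranks.append(r)
    return tuple(ranks)

def is_replicated(entries, library, obj_type, obj_name):
    """entries: (library, object type, object name, process type) tuples.
    The most significant match decides include or exclude."""
    best = None
    for entry in entries:
        s = specificity(entry, (library, obj_type, obj_name))
        if s is not None and (best is None or s > best[0]):
            best = (s, entry[3])
    return best is not None and best[1] == "*INCLD"
```

With the three entries of Table 9, program BOOKKEEP is included, data area BALANCE is excluded, and data area ACCOUNT1 is excluded because the exact *DTAARA type match outranks the generic ACC* name match, as described above.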
queue that is identified by an object entry with the appropriate settings, all spooled
files for the output queue (*OUTQ) are replicated by system journal replication
processes.
Table 10. Data group object entry parameter values for spooled file replication
Parameter Value
It is important to consider which spooled files must be replicated and which should
not. Some output queues contain a large number of non-critical spooled files and
probably should not be replicated. Most likely, you want to limit the spooled files that
you replicate to mission-critical information. It may be useful to direct important
spooled files that should be replicated to specific output queues instead of defining a
large number of output queues for replication.
When an output queue is selected for replication and the data group object entry
specifies *YES for Replicate spooled files, MIMIX ensures that the values *SPLFDTA
and *PRTDTA are included in the system value for the security auditing level
(QAUDLVL). This causes the system to generate spooled file (T-SF) entries in the
system journal. When a spooled file is created, moved, deleted, or its attributes are
changed, the resulting entries in the system journal are processed by a MIMIX object
send job and are replicated.
program that automatically prints spooled files, you may want to use one of these
values to control what is printed after replication when printer writers are active.
If you move a spooled file between output queues which have different configured
values for the SPLFOPT parameter, consider the following:
• Spooled files moved from an output queue configured with SPLFOPT(*NONE) to
an output queue configured with SPLFOPT(*HLD) are placed in a held state on
the target system.
• Spooled files moved from an output queue configured with SPLFOPT(*HLD) to an
output queue configured with SPLFOPT(*NONE) or SPLFOPT(*HLDONSAV)
remain in a held state on the target system until you take action to release them.
Table 11. Sample data group object entries for maintaining private authorities of message
queues associated with user profiles
Entry  Source Library  Object Type  Object Name  Process Type
1      QSYS            *USRPRF      A*           *INCLD
2      QUSRSYS         *MSGQ        A*           *INCLD
3      QSYS            *USRPRF      ABC          *EXCLD
4      QUSRSYS         *MSGQ        ABC          *EXCLD
Identifying logical and physical files for replication
MIMIX supports multiple ways of replicating *FILE objects with extended attributes of
LF, PF-DTA, PF38-DTA, PF-SRC, PF38-SRC. MIMIX configuration data determines
the replication method used for these logical and physical files. The following
configurations are possible:
• MIMIX Dynamic Apply - MIMIX Dynamic Apply is strongly recommended. In this
configuration, logical files and physical files (source and data) are replicated
primarily through the user (database) journal. This configuration is the most
efficient way to replicate LF, PF-DTA, PF38-DTA, PF-SRC, and PF38-SRC files.
In this configuration, files are identified by data group object entries and file
entries.
• Legacy cooperative processing - Legacy cooperative processing supports only
data files (PF-DTA and PF38-DTA). It does not support source physical files or
logical files. In legacy cooperative processing, record data and member data
operations are replicated through user journal processes, while all other file
transactions such as creates, moves, renames, and deletes are replicated
through system journal processes. The database processes can use either
remote journaling or MIMIX source-send processes, making legacy cooperative
processing the recommended choice for physical data files when the remote
journaling environment required by MIMIX Dynamic Apply is not possible. In this
configuration, files are identified by data group object entries and file entries.
• User journal (database) only configurations - Environments that do not meet
MIMIX Dynamic Apply requirements but which have data group definitions that
specify TYPE(*DB) can only replicate data changes to physical files. These
configurations may not be able to replicate other operations such as creates,
restores, moves, renames, and some copy operations. In this configuration, files
are identified by data group file entries.
• System journal (object) only configurations - Data group definitions which
specify TYPE(*OBJ) are less efficient at processing logical and physical files. The
entire member is updated with each replicated transaction. Members must be
closed in order for replication to occur. In this configuration, files are identified by
data group object entries.
You should be aware of common characteristics of replicating library-based objects,
such as when the configured object auditing value is used and how MIMIX interprets
data group entries to identify objects eligible for replication. For this information, see
“Configured object auditing value for data group entries” on page 98 and “How MIMIX
uses object entries to evaluate journal entries for replication” on page 101.
Some advanced techniques may require specific configurations. See “Configuring
advanced replication techniques” on page 353 for additional information.
For detailed procedures, see “Creating data group object entries” on page 267.
defaults are used. With this configuration, logical and physical files are processed
primarily from the user journal.
Cooperative journal - The value specified for the Cooperative journal (COOPJRN)
parameter in the data group definition is critical to determining how files are
cooperatively processed. When creating a new data group, you can explicitly specify
a value or you can allow MIMIX to automatically change the default value (*DFT) to
either *USRJRN or *SYSJRN based on whether operating system and configuration
requirements for MIMIX Dynamic Apply are met. When requirements are met, MIMIX
changes the value *DFT to *USRJRN. When the MIMIX Dynamic Apply requirements
are not met, MIMIX changes *DFT to *SYSJRN.
Note: Data groups created prior to upgrading to version 5 continue to use their
existing configuration. The installation process sets the value of COOPJRN to
*SYSJRN and this value remains in effect until you take action as described in
“Converting to MIMIX Dynamic Apply” on page 150.
When a data group definition meets the requirements for MIMIX Dynamic Apply, any
logical files and physical (source and data) files properly identified for cooperative
processing will be processed via MIMIX Dynamic Apply unless a known restriction
prevents it.
When a data group definition does not meet the requirements for MIMIX Dynamic
Apply but still meets legacy cooperative processing requirements, any PF-DTA or
PF38-DTA files properly configured for cooperative processing will be replicated using
legacy cooperative processing. All other types of files are processed using system
journal replication.
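The resolution of the *DFT cooperative journal value described above amounts to a simple rule, sketched below. This is an illustration of the documented behavior, not actual MIMIX code; the boolean parameter stands in for the operating system and configuration checks MIMIX performs.

```python
def resolve_coopjrn(coopjrn, meets_dynamic_apply_requirements):
    """Resolve the Cooperative journal (COOPJRN) value for a new data group.

    An explicitly specified value is kept as-is; *DFT becomes *USRJRN when the
    MIMIX Dynamic Apply requirements are met, and *SYSJRN otherwise.
    """
    if coopjrn != "*DFT":
        return coopjrn
    return "*USRJRN" if meets_dynamic_apply_requirements else "*SYSJRN"
```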
system journal and should not have any corresponding data group file entries.
• Physical files with referential constraints require a field in another physical file to
be valid. All physical files in a referential constraint structure must be in the same
database apply session. See “Requirements and limitations of MIMIX Dynamic
Apply” on page 110 and “Requirements and limitations of legacy cooperative
processing” on page 111 for additional information. For more information about
load balancing apply sessions, see “Database apply session balancing” on
page 87.
Commitment control - This database technique allows multiple updates to one or
more files to be considered a single transaction. When used, commitment control
maintains database integrity by not exposing a part of a database transaction until the
whole transaction completes. This ensures that there are no partial updates when the
process is interrupted prior to the completion of the transaction. This technique is also
useful in the event that a partially updated transaction must be removed, or rolled
back, from the files or when updates identified as erroneous need to be removed.
MIMIX fully simulates commitment control on the target system. When commitment
control is used on a source system in a MIMIX environment, MIMIX maintains the
integrity of the database on the target system by preventing partial transactions from
being applied until the whole transaction completes. If the source system becomes
unavailable, MIMIX will not have applied incomplete transactions on the target
system. In the event of an incomplete (or uncommitted) commitment cycle, the
integrity of the database is maintained.
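The hold-until-commit behavior can be sketched conceptually. This is an illustration of the documented guarantee, not the actual MIMIX implementation; the class and names are hypothetical:

```python
class CommitCycleBuffer:
    """Buffer journaled updates per commit cycle; release only on commit.

    Sketch of the documented behavior: entries for a commit cycle are
    held until the cycle commits; a rollback discards them, so partial
    transactions are never applied on the target system.
    """
    def __init__(self):
        self.pending = {}   # commit cycle id -> buffered entries
        self.applied = []   # entries released to the apply process

    def add(self, cycle_id, entry):
        self.pending.setdefault(cycle_id, []).append(entry)

    def commit(self, cycle_id):
        # Whole transaction completes: release its entries for apply.
        self.applied.extend(self.pending.pop(cycle_id, []))

    def rollback(self, cycle_id):
        # Transaction removed at the source: discard buffered entries.
        self.pending.pop(cycle_id, None)
```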
If your application dynamically creates database files that are subsequently used in a
commitment control environment, use MIMIX Dynamic Apply for replication.
Without MIMIX Dynamic Apply, replication of the create operation may fail if a commit
cycle is open when MIMIX tries to save the file. The save operation will be delayed
and may fail if the file being saved has uncommitted transactions.
Identifying logical and physical files for replication
User exit programs may be affected when journaled LOB data is added to an existing
data group. Non-minimized LOB data produces incomplete entries. For incomplete
journal entries, two or more entries with duplicate journal sequence numbers and
journal codes and types will be provided to the user exit program when the data for
the incomplete entry is retrieved and segmented. Programs must correctly handle
these duplicate entries, which represent a single original journal entry.
You should also be aware of the following restrictions:
• Copy Active File (CPYACTF) and Reorganize Active File (RGZACTF) do not work
against database files with LOB fields.
• There is no collision detection for LOB data. Most collision detection classes
compare the journal entries with the content of the record on the target system.
Although you can compare the actual content of the record, you cannot compare
the content of the LOBs.
Table 12. Key configuration values required for MIMIX Dynamic Apply and legacy cooperative processing
Corresponding data group file entries - Both MIMIX Dynamic Apply and legacy
cooperative processing require that existing files identified by a data group object
entry which specifies *YES for the Cooperate with DB (COOPDB) parameter must
also be identified by data group file entries.
When a file is identified by both a data group object entry and a data group file entry,
the following are also required:
• The object entry must enable the cooperative processing of files by specifying
configured for replication
Files created by these actions can be added to the MIMIX configuration by running
the #DGFE audit. The audit recovery will synchronize the file as part of adding the file
entry to the configuration. In data groups that specify TYPE(*ALL), the above actions
are fully supported.
Referential constraints - The following restrictions apply:
• If using referential constraints with *CASCADE or *SETNULL actions you must
specify *YES for the Journal on target (JRNTGT) parameter in the data group
definition.
• Physical files with referential constraints require a field in another physical file to
be valid. All physical files in a referential constraint structure must be in the same
database apply session. If a particular preferred apply session has been specified
in file entry options (FEOPT), MIMIX may ignore the specification in order to
satisfy this restriction.
Positional replication only - Keyed replication is not supported by MIMIX Dynamic
Apply. Data group definitions, data group object entries, and data group file entries
must specify *POSITION for the Replication type element of the file and tracking entry
options (FEOPT) parameter. The value *KEYED cannot be used.
Identifying data areas and data queues for replication
identified by object tracking entries.
Table 13. Critical configuration parameters for replicating *DTAARA and *DTAQ objects
from a user journal
Additionally, see “Planning for journaled IFS objects, data areas, and data queues” on
page 85 for details if any of the following apply:
• Converting existing configurations - When converting an existing data group to
use or add advanced journaling, you must consider whether journals should be
shared and whether data area or data queue objects should be replicated in a
data group that also replicates database files.
• Serialized transactions - If you need to serialize transactions for database files
and data area or data queue objects replicated from a user journal, you may need
to adjust the configuration for the replicated files.
• Apply session load balancing - One database apply session, session A, is used
for all data area and data queue objects that are replicated from a user journal. Other
replication activity can use this apply session, and may cause it to become
overloaded. You may need to adjust the configuration accordingly.
• User exit programs - If you use user exit programs that process user journal
entries, you may need to modify your programs.
When considering replicating data areas and data queues using MIMIX user journal
replication processes, be aware of the following restrictions:
• For V5R3 operating systems, only a static environment of data areas and data
queues is replicated. For V5R3 systems, while changes to the actual data are
recognized and replicated, attribute changes are not. MIMIX AutoGuard™ must be
used to detect attribute changes that occur on the source objects and correct the
Table 14. Journal entry types supported by MIMIX for data areas
E ZA Change authority 1
E ZO Ownership change 1
E ZT Auditing change 1
Notes:
1. The indicated journal entry type is only supported for i5/OS V5R4 and higher.
Table 15 shows the currently supported journal entry types for data queues.
Table 15. Journal entry types supported by MIMIX for data queues
Q ZA Change authority 1
Q ZO Ownership change 1
Q ZT Auditing change 1
Notes:
1. The indicated journal entry type is only supported for i5/OS V5R4 and higher.
For more information about journal entries, see Journal Entry Information (Appendix
D) in the iSeries Backup and Recovery guide in the IBM eServer iSeries Information
Center.
Identifying IFS objects for replication
Table 16. IFS file systems that are not supported by MIMIX
Journaling is not supported for files in network server storage spaces (NWSS), which
are used as virtual disks by IXS and IXA technology. Therefore, IFS objects
configured to be replicated from a user journal must be in the Root (‘/’) or QOpenSys
file systems.
Refer to the IBM book OS/400 Integrated File System Introduction for more
information about IFS.
Considerations when identifying IFS objects
The following considerations for IFS objects apply regardless of whether replication
occurs through the system journal or user journal.
When character case does matter (QOpenSys file system), MIMIX presents path
names in the appropriate case. For example, the WRKDGACTE display and the
WRKDGIFSE display would show /QOpenSys/AbCd, if that is the actual object path.
Names must be entered in the appropriate character case. For example, subsetting
the WRKDGACTE display by /QOpenSys/ABCD will not find /QOpenSys/AbCd.
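The character-case behavior described above can be illustrated with a small sketch. The helper below is hypothetical, not a MIMIX API:

```python
def subset_matches(filter_path, object_path):
    """Match a path filter the way a display subset does (sketch).

    Paths in the case-sensitive QOpenSys file system compare exactly;
    for other file systems the comparison ignores character case.
    Illustration of the documented behavior only, not MIMIX code.
    """
    if filter_path.startswith("/QOpenSys"):
        return object_path == filter_path
    return object_path.lower() == filter_path.lower()
```

For example, subsetting by /QOpenSys/ABCD does not find /QOpenSys/AbCd, while the same mismatch of case in the Root file system would still match.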
Table 17. Critical configuration parameters for replicating IFS objects from a user journal
Additionally, see “Planning for journaled IFS objects, data areas, and data queues” on
page 85 for details if any of the following apply:
• Converting existing configurations - When converting an existing data group to
use or add advanced journaling, you must consider whether journals should be
shared and whether IFS objects should be replicated in a data group that also
replicates database files.
• Serialized transactions - If you need to serialize transactions for database files
and IFS objects replicated from a user journal, you may need to adjust the
configuration for the replicated files.
• Apply session load balancing - One database apply session, session A, is used
for all IFS objects that are replicated from a user journal. Other replication activity
can use this apply session, and may cause it to become overloaded. You may
need to adjust the configuration accordingly.
• User exit programs - If you use user exit programs that process user journal
entries, you may need to modify your programs.
When considering replicating IFS objects using MIMIX user journal replication
processes, be aware of the following restrictions:
• The operating system does not support before-images for data updates to IFS
objects. As such, MIMIX cannot perform data integrity checks on the target
system to ensure that data being replaced on the target system is an exact match
to the data replaced on the source system. MIMIX will check the integrity of the
IFS data through the use of regularly scheduled audits, specifically the #IFSATR
audit.
• The apply of IFS objects is restricted to a single database apply job (DBAPYA). If
a data group has too much replication activity, this job may fall behind in the
processing of journal entries. If this occurs, you should load-level the apply
sessions by moving some or all of the database files to another database apply
job.
• Pre-existing IFS objects to be selected for replication must have journaling started
on both the source and target systems before the data group is started.
• A physical object, such as an IFS object, is identified by a hard link. Typically, an
unlimited number of hard links can be created as identifiers for one object. For
journaled IFS objects, MIMIX does not support the replication of additional hard
links because doing so causes the same FID to be used for multiple names for the
same IFS object.
• The ability to “lock on apply” IFS objects in order to prevent unauthorized updates
from occurring on the target system is not supported when advanced journaling is
configured.
• The ability to use the Remove Journaled Changes (RMVJRNCHG) command for
removing journaled changes for IFS tracking entries is not supported.
• It is recommended that option 14 (Remove related) on the Work with Data Group
Activity (WRKDGACT) display not be used for failed activity entries representing
actions against cooperatively processed IFS objects. Because this option does
not remove the associated tracking entries, orphan tracking entries can
accumulate on the system.
Table 18. IFS entry types supported by MIMIX
B B3 Move/rename object 1
B FR Restore object 1
B FW Start of save-while-active
B WA Write after-image
Note:
1. The actions identified in these entries are replicated cooperatively through the security
audit journal.
Identifying DLOs for replication
How MIMIX uses DLO entries to evaluate journal entries for replication
How items are specified within a DLO determines whether MIMIX selects or omits
them from processing. This information can help you understand what is included or
omitted.
When determining whether to process a journal entry for a DLO, MIMIX looks for a
match between the DLO information in the journal entry and one of the data group
DLO entries. The data group DLO entries are checked from the most specific to the
least specific. The folder path is the most significant search element, followed by the
document name, then the owner. The most significant match found (if any) is checked
to determine whether to process the entry.
An exact or generic folder path name in a data group DLO entry applies to folder
paths that match the entry as well as to any unnamed child folders of that path which
are not covered by a more explicit entry. For example, a data group DLO entry with a
folder path of “ACCOUNT” would also apply to a transaction for a document in folder
path ACCOUNT/JANUARY. If a second data group DLO entry with a folder path of
“ACCOUNT/J*” were added, it would take precedence because it is more specific.
For a folder path with multiple elements (for example, A/B/C/D), the exact checks and
generic checks against data group DLO entries are performed on the path. If no
match is found, the lowest path element is removed and the process is repeated. For
example, A/B/C/D is reduced to A/B/C and is rechecked. This process continues until
a match is found or until all elements of the path have been removed. If there is still no
match, then checks for folder path *ALL are performed.
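The search sequence can be sketched in Python. This is an illustrative model, not MIMIX code: the function name and entry representation are hypothetical, and the entries list is assumed to be pre-sorted from most to least specific:

```python
import fnmatch

def find_dlo_entry_match(folder_path, entries):
    """Search data group DLO entries from most to least specific (sketch).

    entries: list of (folder_pattern, process_type) tuples, pre-sorted
    most-specific first, with generic patterns written as 'NAME*' and
    the catch-all as '*ALL'. After each failed pass the lowest path
    element is dropped (A/B/C/D -> A/B/C -> A/B -> A); finally the
    folder path *ALL entries are checked.
    """
    path = folder_path
    while path:
        for pattern, process_type in entries:
            if pattern != "*ALL" and fnmatch.fnmatchcase(path, pattern):
                return pattern, process_type
        path = "/".join(path.split("/")[:-1])  # drop lowest element
    for pattern, process_type in entries:
        if pattern == "*ALL":
            return pattern, process_type
    return None
```

With entries for ACCOUNT/J* and ACCOUNT, a document in ACCOUNT/JANUARY matches the more specific ACCOUNT/J* entry first, while a document in ACCOUNT/FEB falls back to the ACCOUNT entry after the lowest path element is removed.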
Table 19. Matching order for document names
Search Order Folder Path Document Name Owner
1 Exact Exact Exact
2 Exact Exact *ALL
3 Exact Generic* Exact
4 Exact Generic* *ALL
5 Exact *ALL Exact
6 Exact *ALL *ALL
7 Generic* Exact Exact
8 Generic* Exact *ALL
9 Generic* Generic* Exact
10 Generic* Generic* *ALL
11 Generic* *ALL Exact
12 Generic* *ALL *ALL
13 *ALL Exact Exact
14 *ALL Exact *ALL
15 *ALL Generic* Exact
16 *ALL Generic* *ALL
17 *ALL *ALL Exact
18 *ALL *ALL *ALL
Document example - Table 20 illustrates some sample data group DLO entries. For
example, a transaction for any document in a folder named FINANCE would be
blocked from replication because it matches entry 6. A transaction for document
ACCOUNTS in FINANCE1 owned by JONESB would be replicated because it
matches entry 4. If SMITHA owned ACCOUNTS in FINANCE1, the transaction would
be blocked by entry 3. Likewise, documents LEDGER.JUL and LEDGER.AUG in
FINANCE1 would be blocked by entry 2 and document PAYROLL in FINANCE1
would be blocked by entry 1. A transaction for any document in FINANCE2 would be
blocked by entry 6. However, transactions for documents in FINANCE2/Q1, or in a
child folder of that path, such as FINANCE2/Q1/FEB, would be replicated because of
entry 5.
Table 20. Sample data group DLO entries, arranged in order from most to least specific
Entry Folder Path Document Owner Process Type
1 FINANCE1 PAYROLL *ALL *EXCLD
2 FINANCE1 LEDGER* *ALL *EXCLD
3 FINANCE1 *ALL SMITHA *EXCLD
4 FINANCE1 *ALL *ALL *INCLD
5 FINANCE2/Q1 *ALL *ALL *INCLD
6 FIN* *ALL *ALL *EXCLD
There is one exception to the requirement of replicating folders to satisfy the folder
path for an include entry. A folder will not be replicated when the only include entry
that would cause its replication specifies *ALL for its folder path and the folder
matches an exclude entry with an exact or a generic folder path name, a document
value of *ALL and an owner of *ALL.
Table 20 and Table 21 illustrate the differences in matching folders to be replicated.
In Table 20, above, a transaction for a folder named FINANCE would be blocked from
replication because it matches entry 6. This would also affect all folders within
FINANCE. A transaction for folder FINANCE1 would be replicated because of entry 4.
Likewise, a transaction for folder FINANCE2 would be replicated because of entry 5.
Note that any transactions for documents in FINANCE2 or any child folders other than
those in the path that includes Q1 would be blocked by entry 6; only FINANCE2 itself
must exist to satisfy entry 5.
In Table 21, although entry 5 is an include entry, a transaction for folder ACCOUNT
would be blocked from replication because it matches entry 2. This is because of the
exception described above. ACCOUNT matches an exclude entry with an exact folder
path, document value of *ALL, and an owner of *ALL, and the only include entry that
would cause it to be replicated specifies folder path *ALL. The exception also affects
all child folders in the ACCOUNT folder path. Note that the exception holds true even
if ACCOUNT is owned by user profile JONESB (entry 4) because the more specific
folder name match takes precedence.
Processing of newly created files and objects
Your production environment is dynamic. New objects continue to be created after
MIMIX is configured and running. When properly configured, MIMIX automatically
recognizes entries in the user journal that identify new create operations and
replicates any that are eligible for replication. Optionally, MIMIX can also notify you of
newly created objects not eligible for replication so that you can choose whether to
add them to the configuration.
Configurations that replicate files, data areas, data queues, or IFS objects from user
journal entries require journaling to be started on the objects before replication can
occur. When a configuration enables journaling to be implicitly started on new objects,
a newly created object is already journaled. When the journaled object falls within the
group of objects identified for replication by a data group, MIMIX replicates the create
operation. Processing variations exist based on how the data group and the data
group entry with the most specific match to the object are configured. These
variations are described in the following subtopics.
The MMNFYNEWE monitor is a shipped journal monitor that watches the security
audit journal (QAUDJRN) for newly created libraries, folders, or directories that are
not already included or excluded for replication by a data group and sends warning
notifications when its conditions are met. This monitor is shipped disabled. User
action is required to enable this monitor on the source system within your MIMIX
environment. Once enabled, the monitor will automatically start with the master
monitor. For more information about the conditions that are checked, see topic
‘Notifications for newly created objects’ in the Using MIMIX book.
For more information about requirements and restrictions for implicit starting of
journaling as well as examples of how MIMIX determines whether to replicate a new
object, see “What objects need to be journaled” on page 323.
For more information about requirements for implicit starting of journaling, see “What
objects need to be journaled” on page 323.
If the object is journaled to the user journal, MIMIX user journal replication processes
can fully replicate the create operation. The user journal entries contain all the
information necessary for replication without needing to retrieve information from the
object on the source system. MIMIX creates a tracking entry for the newly created
object and an activity entry representing the T-CO (create) journal entry.
If the object is not journaled to the user journal, then the create of the object is
processed with system journal processing.
If the specified values in the data group entry that identified the object as eligible for
replication do not allow the object type to be cooperatively processed, the create of
the object and subsequent operations are replicated through system journal
processes.
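The routing described above can be summarized in a sketch. The function name is hypothetical, and the returned strings merely label the documented outcomes:

```python
def route_create_operation(journaled_to_user_journal, entry_allows_cooperative):
    """Decide which replication path handles a create operation (sketch).

    Mirrors the documented behavior: only an object that is journaled
    to the user journal, and whose matching data group entry permits
    cooperative processing of its object type, is replicated through
    user journal processes. All other creates use the system journal.
    """
    if journaled_to_user_journal and entry_allows_cooperative:
        return "user journal replication (tracking entry + T-CO activity entry)"
    return "system journal replication"
```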
When MIMIX replicates a create operation through the user journal, the create
timestamp (*CRTTSP) attribute may differ between the source and target systems.
Processing variations for common operations
1. If the source system object is not defined to MIMIX or if it is defined by an Exclude entry,
it is not guaranteed that an object with the same name exists on the backup system or
that it is really the same object as on the source system. To ensure the integrity of the
target (backup) system, a copy of the source object must be brought over from the
source system.
2. If the target object is not defined to MIMIX or if it is defined by an Exclude entry, there is
no guarantee that the target library exists on the target system. Further, since the object
is not defined with an Include entry, it is assumed that it does not need to be preserved,
so deleting the object is the most straightforward approach.
Move/rename operations - user journaled data areas, data queues, IFS
objects
IFS, data area, and data queue objects replicated by user journal replication
processes can be moved or renamed while maintaining the integrity of the data. If the
new location or new name on the source system remains within the set of objects
identified as eligible for replication, MIMIX will perform the move or rename operation
on the object on the target system.
When a move or rename operation starts with or results in an object that is not within
the name space for user journal replication, MIMIX may need to perform additional
operations in order to replicate the operation. MIMIX may use a create or delete
operation and may need to add or remove tracking entries.
Each row in Table 23 summarizes a move/rename scenario and identifies the action
taken by MIMIX.
Table 23. MIMIX actions when processing moves or renames of objects when user journal replication processes are involved

Original object: Identified for replication with user journal processing
New name or location: Within name space of objects to be replicated with user journal processing
MIMIX action: Moves or renames the object on the target system and renames the associated tracking entry. See example 1.

Original object: Identified for replication with user journal processing
New name or location: Not identified for replication
MIMIX action: Deletes the target object and deletes the associated tracking entry. The object will no longer be replicated. See example 3.

Original object: Identified for replication with user journal processing
New name or location: Within name space of objects to be replicated with system journal processing
MIMIX action: Moves or renames the object using system journal processes and removes the associated tracking entry. See example 4.

Original object: Identified for replication with system journal processing
New name or location: Within name space of objects to be replicated with user journal processing
MIMIX action: Creates a tracking entry for the object using the new name or location and moves or renames the object using user journal processes. If the object is a library or directory, MIMIX creates tracking entries for those objects within the library or directory that are also within the name space for user journal replication and synchronizes those objects. See example 5.

Original object: Not identified for replication
New name or location: Within name space of objects to be replicated with user journal processing
MIMIX action: Creates a tracking entry for the object using the new name or location. If the object is a library or directory, MIMIX creates tracking entries for those objects within the library or directory that are also within the name space for user journal replication. Synchronizes all of the objects identified by these new tracking entries. See example 6.
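The decision logic in Table 23 can be modeled with a small sketch. The function name is hypothetical and the last two rows of the table are simplified into one branch:

```python
def move_rename_action(old_in_user_ns, new_in_user_ns, new_in_system_ns=False):
    """Summarize Table 23: MIMIX's action for a move/rename (sketch).

    old_in_user_ns:   object was replicated with user journal processing
    new_in_user_ns:   new name falls in the user journal name space
    new_in_system_ns: new name falls in the system journal name space
    """
    if old_in_user_ns and new_in_user_ns:
        return "rename target object and rename tracking entry"
    if old_in_user_ns and new_in_system_ns:
        return "rename via system journal processes, remove tracking entry"
    if old_in_user_ns:
        return "delete target object and tracking entry"
    if new_in_user_ns:
        return "create tracking entry (and children), synchronize, rename"
    return "handled outside user journal replication"
```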
The following examples use IFS objects and directories to illustrate the MIMIX
operations in move/rename scenarios that involve user journal replication (advanced
journaling). The MIMIX behavior described is the same as that for data areas and
data queues that are within the configured name space for advanced journaling.
Table 24 identifies the initial set of source system objects, data group IFS entries, and
IFS tracking entries before the move/rename operation occurs.
Table 24. Initial data group IFS entries, IFS tracking entries, and source IFS objects for
examples
/TEST/dir1/doc1
Table 25. Results of move/rename operations within name space for advanced journaling
Resulting Target IFS objects Resulting data group IFS tracking entries
/TEST/stmf2 /TEST/stmf2
/TEST/dir2/doc1 /TEST/dir2
/TEST/dir2/doc1
but the new name is not. MIMIX treats this as a delete operation during replication
processing. MIMIX deletes the IFS directory and IFS stream file from the target
system. MIMIX also deletes the associated IFS tracking entries.
Example 4, moves/renames from advanced journaling to system journal name
space: In this example, MIMIX encounters user journal entries indicating that the
source system IFS directory /TEST/dir1 was renamed to /TEST/notajdir1 and that IFS
stream file /TEST/stmf1 was renamed to /TEST/notajstmf1. MIMIX is aware that both
the old names and new names are eligible for replication as indicated in Table 23.
However, the new names fall within the name space for replication through the
system journal. As a result, MIMIX removes the tracking entries associated with the
original names and performs the rename operation on the objects on the target system.
Table 26 shows these results.
Table 26. Results of move/rename operations from advanced journaling to system journal
name space
Resulting target IFS objects Resulting data group IFS tracking entries
/TEST/notajstmf1 (removed)
/TEST/notajdir1/doc1 (removed)
Table 27. Results of move/rename operations from system journal to advanced journaling
name space
/TEST/stmf1 /TEST/stmf1
/TEST/dir1/doc1 /TEST/dir1
/TEST/dir1/doc1
the name space for advanced journaling as indicated in Table 23. Because the
objects were not previously replicated, MIMIX processes the operations as creates
during replication. See “Newly created files” on page 127.
MIMIX also creates tracking entries for any objects that reside within the moved or
renamed IFS directory (or library in the case of data areas or data queues). The
objects identified by these tracking entries are individually synchronized from the
source to the target system. Table 28 illustrates the results.
Table 28. Results of move/rename operations from outside to within advanced journaling
name space
/TEST/stmf1 /TEST/stmf1
/TEST/dir1/doc1 /TEST/dir1
/TEST/dir1/doc1
Delete operations - user journaled data areas, data queues, IFS objects
When a T-DO (delete) journal entry for an IFS, data area, or data queue object is
encountered in the system journal, MIMIX system journal replication processes
generate an activity entry representing the delete operation and handle the delete of
the object from the target system. The user journal replication processes remove the
corresponding tracking entry.
Restore operations - user journaled data areas, data queues, IFS objects
When an IFS, data area, or data queue object is restored, the pre-existing object is
replaced by a backup copy on the source system. With user journal replication,
restores of IFS, data area, and data queue objects on the source system are
supported through cooperative processing between MIMIX system journal and user
journal replication processes.
Provided the object was journaled when it was saved, a restored IFS, data area, or
data queue object is also journaled.
During cooperative processing, system journal replication processes generate an
activity entry representing the T-OR (restore) journal entry from the system journal
and perform a save and restore operation on the IFS, data area, or data queue object.
Meanwhile, user journal replication processes handle the management of the
corresponding IFS or object tracking entry. MIMIX may also start journaling, or end
and restart journaling on the object so that the journaling characteristics of the IFS,
data area, or data queue object match the data group definition.
Chapter 5
Configuration checklists
MIMIX can be configured in a variety of ways to support your replication needs. Each
configuration requires a combination of definitions and data group entries. Definitions
identify systems, journals, communications, and data groups that make up the
replication environment. Data group entries identify what to replicate and the
replication option to be used. For available options, see “Replication choices by object
type” on page 96. Also, advanced techniques, such as keyed replication, have
additional configuration requirements. For additional information see “Configuring
advanced replication techniques” on page 353.
New installations: Before you start configuring MIMIX, system-level configuration
for communications (lines, controllers, IP interfaces) must already exist between the
systems that you plan to include in the MIMIX installation. Choose one of the following
checklists to configure a new installation of MIMIX.
• “Checklist: New remote journal (preferred) configuration” on page 139 uses
shipped default values to create a new installation. Unless you explicitly configure
them otherwise, new data groups will use the i5/OS remote journal function as
part of user journal replication processes.
• “Checklist: New MIMIX source-send configuration” on page 143 configures a new
installation and is appropriate when your environment cannot use remote
journaling. New data groups will use MIMIX source-send processes in user journal
replication.
• To configure a new installation that is to use the integrated MIMIX support for IBM
WebSphere MQ (MIMIX for MQ), refer to the MIMIX for IBM WebSphere MQ
book.
Upgrades and conversions: You can use any of the following topics, as
appropriate, to change a configuration:
• “Checklist: Converting to remote journaling” on page 147 changes an existing
data group to use remote journaling within user journal replication processes.
• “Converting to MIMIX Dynamic Apply” on page 150 provides checklists for two
methods of changing the configuration of an existing data group to use MIMIX
Dynamic Apply for logical and physical file replication. Data groups that existed
prior to installing version 5 must use this information in order to use MIMIX
Dynamic Apply.
• “Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling” on
page 154 changes the configuration of an existing data group to use user journal
replication processes for these objects.
• To add integrated MIMIX support for IBM WebSphere MQ (MIMIX for MQ) to an
existing installation, use topic ‘Choosing the correct checklist for MIMIX for MQ’ in
the MIMIX for IBM WebSphere MQ book.
• “Checklist: Converting to legacy cooperative processing” on page 157 changes
the configuration of an existing data group so that logical and physical source files
are processed from the system journal and physical data files use legacy
cooperative processing.
Other checklists: The following configuration checklist employs less frequently used
configuration tools and is not included in this chapter.
• Use “Checklist: copy configuration” on page 553 if you need to copy configuration
data from an existing product library into another MIMIX installation.
Checklist: New remote journal (preferred) configuration
Use this checklist to configure a new installation of MIMIX. This checklist creates the
preferred configuration that uses i5/OS remote journaling and uses MIMIX Dynamic
Apply to cooperatively process logical and physical files.
To configure your system manually, perform the following steps on the system that
you want to designate as the management system of the MIMIX installation:
1. Communications between the systems must be configured and operational
before you start configuring MIMIX.
a. If communications is not configured, refer to Chapter 6, “System-level
communications for more information.
b. If you have TCP configured and plan to use it for your transfer protocol, verify
that it is operational using the PING command.
2. Create system definitions for the management system and each of the network
systems for the MIMIX installation. Use topic “Creating system definitions” on
page 170.
3. Create transfer definitions to define the communications protocol used between
pairs of systems. A pair of systems consists of a management system and a
network system. Use topic “Creating a transfer definition” on page 184.
4. If you have implemented DDM password validation, you need to verify that your
environment will allow MIMIX RJ support to work properly. Use topic “Checking
DDM password validation level in use” on page 306.
5. If you are using the TCP protocol, ensure that the Lakeview TCP server is running
on each system defined in the transfer definition. You can use the Work with
Active Jobs (WRKACTJOB) command to look for a job under the MIMIXSBS
subsystem with a function of PGM-LVSERVER. If the Lakeview TCP server is not
active on a system, use topic “Starting the Lakeview TCP/IP server” on page 189.
Note: You can optionally configure the Lakeview TCP server to start
automatically. Use the procedure in topic “Using autostart job entries to
start the TCP server” on page 190.
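For example, you can narrow the active-job display to the MIMIXSBS subsystem and scan the Function column for PGM-LVSERVER:

```
/* Show only jobs active in the MIMIXSBS subsystem. */
WRKACTJOB SBS(MIMIXSBS)
```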
6. If you are using the TCP protocol, ensure that the DDM TCP server is running
using topic “Starting the DDM TCP/IP server” on page 308.
7. Verify that the communications link defined in each transfer definition is
operational using topic “Verifying a communications link for system definitions” on
page 194.
8. Start the MIMIX managers using topic “Starting the system and journal managers”
on page 296. When the system manager is running, configuration information for
data groups will be automatically replicated to the other system as you create it.
9. Create the data group definitions that you need using topic “Creating a data group
definition” on page 247. The referenced topic creates a data group definition with
appropriate values to support MIMIX Dynamic Apply.
10. Verify all potential communications links that can be used by this configuration
using topic “Verifying the communications link for a data group” on page 195.
11. Use Table 29 to create data group entries for this configuration. This configuration
requires object entries and file entries for LF and PF files. For other object types or
classes, any replication options identified in planning topic “Replication choices by
object type” on page 96 are supported.
Table 29. How to configure data group entries for the remote journal (preferred) configuration.

Library-based objects
1. Create object entries using “Creating data group object entries” on page 267.
2. After creating object entries, load file entries for LF and PF (source and data) *FILE objects using “Loading file entries from a data group’s object entries” on page 273.
   Note: If you cannot use MIMIX Dynamic Apply for logical files or PF data files, you should still create file entries for PF data files to ensure that legacy cooperative processing can be used.
3. After creating object entries, load object tracking entries for any *DTAARA and *DTAQ objects to be replicated from a user journal. Use “Loading object tracking entries” on page 285.
Related topics: “Identifying library-based objects for replication” on page 100; “Identifying logical and physical files for replication” on page 105; “Identifying data areas and data queues for replication” on page 112.

IFS objects
1. Create IFS entries using “Creating data group IFS entries” on page 282.
2. After creating IFS entries, load IFS tracking entries for IFS objects to be replicated from a user journal. Use “Loading IFS tracking entries” on page 284.
Related topic: “Identifying IFS objects for replication” on page 118.

DLOs
Create DLO entries using “Creating data group DLO entries” on page 287.
Related topic: “Identifying DLOs for replication” on page 124.
12. Use the #DGFE audit to confirm and automatically correct any problems found in
file entries associated with data group object entries. Do the following:
a. Type WRKAUD RULE(#DGFE) and press Enter.
b. Next to the data group you want to confirm, type 9 (Run rule) and press Enter.
c. The results are placed in an outfile. For additional information, see “Interpreting
results for configuration data - #DGFE audit” on page 580.
13. If you anticipate a delay between configuring data group entries (object, DLO, or
IFS) and starting the data group, you should use the SETDGAUD command
before synchronizing data between systems. Doing so will ensure that replicated
objects will be properly audited and that any transactions for the objects that occur
between configuration and starting the data group will be replicated. Use the
procedure “Setting data group auditing values manually” on page 297.
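As a sketch, the SETDGAUD command for this step might be run as follows. The data group name MYDG and system names SYSTEM1 and SYSTEM2 are placeholders, and all other parameters are left at their defaults:

```
/* Set auditing for objects identified by the data group's entries. */
/* MYDG, SYSTEM1, and SYSTEM2 are placeholder names.                */
SETDGAUD DGDFN(MYDG SYSTEM1 SYSTEM2)
```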
14. Ensure that there are no batch jobs or users on the system that will be the source
for replication for the rest of this procedure. Do not allow users or batch
processing onto the source system until you have successfully completed Step 18.
15. Start journaling using the following procedures as needed for your configuration.
• For user journal replication, use “Journaling for physical files” on page 326 to
start journaling on both source and target systems.
• For IFS objects configured for advanced journaling, use “Journaling for IFS
objects” on page 330.
• For data areas or data queues configured for advanced journaling, use
“Journaling for data areas and data queues” on page 334.
16. Synchronize the database files and objects on the systems between which
replication occurs. Topic “Performing the initial synchronization” on page 483
includes instructions for how to establish a synchronization point and identifies the
options available for synchronizing.
17. Verify your configuration. Topic “Verifying the initial synchronization” on page 487
identifies the additional aspects of your configuration that are necessary for
successful replication.
18. Start the data groups. You should use the procedure “Starting Selected Data
Group Processes” in the Using MIMIX book.
Checklist: New MIMIX source-send configuration
Best practices for MIMIX are to use MIMIX Remote Journal support for database
replication. However, in cases where you cannot use remote journaling, this checklist
will configure a new installation that uses MIMIX source-send processes for database
replication. System journal replication is also configured.
To configure a source-send environment, perform the following steps on the system
that you want to designate as the management system of the MIMIX installation:
1. Communications between the systems must be configured and operational
before you start configuring MIMIX.
a. If communications is not configured, refer to Chapter 6, “System-level
communications” for more information.
b. If you have TCP configured and plan to use it for your transfer protocol, verify
that it is operational using the PING command.
2. Create system definitions for the management system and each of the network
systems for the MIMIX installation. Use topic “Creating system definitions” on
page 170.
3. Create transfer definitions to define the communications protocol used between
pairs of systems. A pair of systems consists of a management system and a
network system. Use topic “Creating a transfer definition” on page 184.
4. If you are using the TCP protocol, ensure that the Lakeview TCP server is running
on each system defined in the transfer definition. You can use the Work with
Active Jobs (WRKACTJOB) command to look for a job under the MIMIXSBS
subsystem with a function of PGM-LVSERVER. If the Lakeview TCP server is not
active on a system, use topic “Starting the Lakeview TCP/IP server” on page 189.
Note: You can optionally configure the Lakeview TCP server to start
automatically. Use the procedure in topic “Using autostart job entries to
start the TCP server” on page 190.
5. Verify that the communications link defined in each transfer definition is
operational using topic “Verifying a communications link for system definitions” on
page 194.
6. Start the MIMIX managers using topic “Starting the system and journal managers”
on page 296. When the system manager is running, configuration information for
data groups will be automatically replicated to the other system as you create it.
7. Create the data group definitions that you need using topic “Creating a data group
definition” on page 247.
8. If the journaling environment does not exist, use topic “Building the journaling
environment” on page 219 to create the journaling environment.
9. Verify all potential communications links that can be used by this configuration
using topic “Verifying the communications link for a data group” on page 195.
10. Use Table 30 to create data group entries for this configuration. This configuration
requires object entries and file entries for legacy cooperative processing of PF
data files. For other object types or classes, any replication options identified in
planning topic “Replication choices by object type” on page 96 are supported.
Table 30. How to configure data group entries for a new MIMIX source-send configuration.
Library-based objects
1. Create object entries using “Creating data group object entries” on page 267.
2. After creating object entries, load file entries for PF (data) *FILE objects using “Loading file entries from a data group’s object entries” on page 273.
3. After creating object entries, load object tracking entries for *DTAARA and *DTAQ objects to be replicated from a user journal. Use “Loading object tracking entries” on page 285.
Related topics: “Identifying library-based objects for replication” on page 100; “Identifying logical and physical files for replication” on page 105; “Identifying data areas and data queues for replication” on page 112.

IFS objects
1. Create IFS entries using “Creating data group IFS entries” on page 282.
2. After creating IFS entries, load IFS tracking entries for IFS objects to be replicated from a user journal. Use “Loading IFS tracking entries” on page 284.
Related topic: “Identifying IFS objects for replication” on page 118.

DLOs
Create DLO entries using “Creating data group DLO entries” on page 287.
Related topic: “Identifying DLOs for replication” on page 124.
11. Use the #DGFE audit to confirm and automatically correct any problems found in
file entries associated with data group object entries. Do the following:
a. Type WRKAUD RULE(#DGFE) and press Enter.
b. Next to the data group you want to confirm, type 9 (Run rule) and press Enter.
c. The results are placed in an outfile. For additional information, see “Interpreting
results for configuration data - #DGFE audit” on page 580.
12. If you anticipate a delay between configuring data group entries (object, DLO, or
IFS) and starting the data group, you should use the SETDGAUD command
before synchronizing data between systems. Doing so will ensure that replicated
objects will be properly audited and that any transactions for the objects that occur
between configuration and starting the data group will be replicated. Use the
procedure “Setting data group auditing values manually” on page 297.
13. Ensure that there are no batch jobs or users on the system that will be the source
for replication for the rest of this procedure. Do not allow users or batch
processing onto the source system until you have successfully completed Step 17.
14. Start journaling using the following procedures as needed for your configuration.
• For user journal replication, use “Journaling for physical files” on page 326 to
start journaling on both source and target systems.
• For IFS objects configured for advanced journaling, use “Journaling for IFS
objects” on page 330.
• For data areas or data queues configured for advanced journaling, use
“Journaling for data areas and data queues” on page 334.
15. Synchronize the database files and objects on the systems between which
replication occurs. Topic “Performing the initial synchronization” on page 483
includes instructions for how to establish a synchronization point and identifies the
options available for synchronizing.
16. Verify your configuration. Topic “Verifying the initial synchronization” on page 487
identifies the additional aspects of your configuration that are necessary for
successful replication.
17. Start the data groups. You should use the procedure “Starting Selected Data
Group Processes” in the Using MIMIX book.
Checklist: Converting to remote journaling
Use this checklist to convert an existing data group from using MIMIX source-send
processes to using MIMIX Remote Journal support for user journal replication.
Note: This checklist does not change values specified in data group entries that
affect how files are cooperatively processed or how data areas, data queues,
and IFS objects are processed. For example, files configured for legacy
processing prior to this conversion will continue to be replicated with legacy
cooperative processing.
Perform these tasks from the MIMIX management system unless these instructions
indicate otherwise.
1. If you use a startup program, make the modifications to the program described in
“Changes to startup programs” on page 305.
2. If you have implemented DDM password validation, you need to verify that your
environment will allow MIMIX RJ support to work properly. Use topic “Checking
DDM password validation level in use” on page 306.
3. Do the following to ensure that you have a functional transfer definition:
a. Modify the transfer definition to identify the RDB directory entry. Use topic
“Changing a transfer definition to support remote journaling” on page 186.
b. Verify the communication link using “Verifying the communications link for a
data group” on page 195.
4. If you are using the TCP protocol, ensure that the DDM TCP server is running
using topic “Starting the DDM TCP/IP server” on page 308.
5. Connect the journal definitions for the local and remote journals using “Adding a
remote journal link” on page 225. This procedure also creates the target journal
definition.
6. Build the journaling environment on each system defined by the RJ pair using
“Building the journaling environment” on page 219.
7. Modify the data group definition as follows:
a. From the Work with DG Definitions display, type a 2 (Change) next to the data
group you want and press Enter.
b. The Change Data Group Definition (CHGDGDFN) display appears. Press
Enter to see additional prompts.
c. Specify *YES for the Use remote journal link prompt.
d. When you are ready to accept the changes, press Enter.
8. To make the configuration changes effective, you need to end the data group you
are converting to remote journaling and start it again as follows:
a. Perform a controlled end of the data group (ENDDG command), specifying
*ALL for Process and *CNTRLD for End process. Refer to topic “Ending all
replication in a controlled manner” in the Using MIMIX book.
b. Start data group replication using the procedure “Starting selected data group
processes” in the Using MIMIX book. Be sure to specify *ALL for the Start
processes prompt (PRC parameter) and *LASTPROC as the value for the
Database journal receiver and Database sequence number prompts.
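The controlled end and restart in this step can be sketched in CL as follows. The data group name MYDG and system names SYSTEM1 and SYSTEM2 are placeholders, and the ENDPRC keyword for the End process prompt is an assumption; prompt the commands with F4 to confirm parameter names in your environment.

```
/* Controlled end of all processes for the data group.            */
/* MYDG, SYSTEM1, and SYSTEM2 are placeholders; the ENDPRC        */
/* keyword is assumed -- prompt ENDDG with F4 to confirm.         */
ENDDG DGDFN(MYDG SYSTEM1 SYSTEM2) PRC(*ALL) ENDPRC(*CNTRLD)

/* Restart all processes; specify *LASTPROC on the Database       */
/* journal receiver and Database sequence number prompts.         */
STRDG DGDFN(MYDG SYSTEM1 SYSTEM2) PRC(*ALL)
```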
Converting to MIMIX Dynamic Apply
Checklist: manually converting to MIMIX Dynamic Apply
Perform the following steps from the management system to enable an existing data
group to use MIMIX Dynamic Apply:
1. Verify the environment meets the requirements and restrictions. See
“Requirements and limitations of MIMIX Dynamic Apply” on page 110.
2. Apply any IBM PTFs (or their supersedes) associated with i5/OS releases as they
pertain to your environment. Log in to Support Central and refer to the Technical
Documents page for a list of required and recommended IBM PTFs.
3. Verify that the System Manager jobs are active. See “Starting the system and
journal managers” on page 296.
4. Verify that the data group is synchronized by running the MIMIX audits. See “Verifying
the initial synchronization” on page 487.
5. Use the Work with Data Groups display to ensure that there are no files on hold
and no failed or delayed activity entries. Refer to topic “Preparing for a controlled
end of a data group” in the Using MIMIX book.
Note: Topic “Ending a data group in a controlled manner” in the Using MIMIX
book includes subtask “Preparing for a controlled end of a data group” and
the other subtasks needed for Step 6 and Step 7.
6. Perform a controlled end of the data group you are converting. Follow the
procedure for “Performing the controlled end” in the Using MIMIX book.
7. Ensure that there are no open commit cycles for the database apply process.
Follow the steps for “Confirming the end request completed without problems” in
the Using MIMIX book.
8. From the management system, change the data group definition so that the
Cooperative journal (COOPJRN) parameter specifies *USRJRN. Use the
command:
CHGDGDFN DGDFN(name system1 system2) COOPJRN(*USRJRN)
9. Ensure that you have one or more data group object entries that specify the
required values. These entries identify the items within the name space for
replication. You may need to create additional entries to achieve desired results.
For more information, see “Identifying logical and physical files for replication” on
page 105.
10. To ensure that new files created while the data group is inactive are automatically
journaled, create the QDFTJRN data areas in the libraries configured for
replication of cooperatively processed files by running the following command
from the source system:
SETDGAUD DGDFN(name system1 system2) OBJTYPE(*AUTOJRN)
11. From the management system, use the following command to load the data group
file entries from the target system. Ensure that the value you specify (*SYS1 or
*SYS2) for the LODSYS parameter identifies the target system.
LODDGFE DGDFN(name system1 system2) CFGSRC(*DGOBJE)
UPDOPT(*ADD) LODSYS(value) SELECT(*NO)
For additional information about loading file entries, see “Loading file entries from
a data group’s object entries” on page 273.
12. Start journaling for all files not previously journaled. See “Starting journaling for
physical files” on page 326.
13. Start the data group specifying the command as follows:
STRDG DGDFN(name system1 system2) CRLPND(*YES)
14. Verify that data groups are synchronized by running the MIMIX audits. See
“Verifying the initial synchronization” on page 487.
Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling
8. Start journaling using the following procedures as needed for your configuration. If
you ever plan to switch the data groups, you must also start journaling on the
target system.
• For IFS objects, use “Starting journaling for IFS objects” on page 330
• For data areas or data queues, use “Starting journaling for data areas and data
queues” on page 334
9. Verify that journaling is started correctly. This step is important to ensure the IFS
objects, data areas and data queues are actually replicated. For IFS objects, see
“Verifying journaling for IFS objects” on page 332. For data areas and data
queues, see “Verifying journaling for data areas and data queues” on page 336.
10. If you anticipate a delay between configuring data group IFS, object, or file entries
and starting the data group, use the SETDGAUD command before synchronizing
data between systems. Doing so will ensure that replicated objects are properly
audited and that any transactions for the objects that occur between configuration
and starting the data group are replicated. Use the procedure “Setting data group
auditing values manually” on page 297.
11. Synchronize the IFS objects, data areas and data queues between the source
and target systems. For IFS objects, follow the Synchronize IFS Object
(SYNCIFS) procedures. For data areas and data queues, follow the Synchronize
Object (SYNCOBJ) procedures. Refer to chapter “Synchronizing data between
systems” on page 472 for additional information.
12. If you are replicating large amounts of data, you should specify i5/OS journal
receiver size options that provide large journal receivers and large journal entries.
Journals created by MIMIX are configured to allow maximum amounts of data.
Journals that already exist may need to be changed.
a. After IFS objects are configured, perform the steps in “Verifying journal
receiver size options” on page 213 to ensure journaling is configured
appropriately.
b. Change any journal receiver size options necessary using “Changing journal
receiver size options” on page 213.
13. If you have database replication user exit programs, changes may need to be
made. See “User exit program considerations” on page 87.
14. Once you have completed the preceding steps, start the data groups. For more
information about starting data groups, see the Using MIMIX book.
Checklist: Converting to legacy cooperative processing
If you find that you cannot use MIMIX Dynamic Apply for logical and physical files, use
this checklist to change the configuration of an existing data group so that user journal
replication (MIMIX Dynamic Apply) is no longer used. This checklist changes the
configuration so that physical data files can be processed using legacy cooperative
processing. Logical files and physical source files will be processed using the system
journal. For more information, see “Requirements and limitations of legacy
cooperative processing” on page 111.
Important! Before you use this checklist, consider the following:
• As of version 5, newly created data groups are configured for MIMIX Dynamic
Apply when default values are taken and configuration requirements are met.
• This checklist does not convert user journal replication processes from using
remote journaling to MIMIX source-send processing.
• This checklist only affects the configuration of *FILE objects. The configuration of
any other *DTAARA, *DTAQ, or IFS objects that are replicated through the user
journal are not affected.
Perform the following steps to enable legacy cooperative processing and system
journal replication:
1. Verify that the data group is synchronized by running the MIMIX audits. See “Verifying
the initial synchronization” on page 487.
2. Use the Work with Data Groups display to ensure that there are no files on hold
and no failed or delayed activity entries. Refer to topic “Preparing for a controlled
end of a data group” in the Using MIMIX book.
Note: Topic “Ending a data group in a controlled manner” in the Using MIMIX
book includes subtask “Preparing for a controlled end of a data group” and
the subtask needed for Step 3.
3. End the data group you are converting by performing a controlled end. Follow the
procedure for “Performing the controlled end” in the Using MIMIX book.
4. From the management system, change the data group definition so that the
Cooperative journal (COOPJRN) parameter specifies *SYSJRN. Use the
command:
CHGDGDFN DGDFN(name system1 system2) COOPJRN(*SYSJRN)
5. From the management system, use the following command to load the data group
file entries from the target system. Ensure that the value you specify (*SYS1 or
*SYS2) for the LODSYS parameter identifies the target system.
LODDGFE DGDFN(name system1 system2) CFGSRC(*DGOBJE)
UPDOPT(*REPLACE) LODSYS(value) SELECT(*NO)
For additional information about loading file entries, see “Loading file entries from
a data group’s object entries” on page 273.
6. Optional step: Delete the QDFTJRN data areas. These data areas automatically
start journaling for newly created files. This may not be desired because the
journal image (JRNIMG) value for these files may be different than the value
specified in the MIMIX configuration. Such a difference will be detected by the file
attributes (#FILATR) audit. To delete these data areas, run the following
command from each system:
DLTDTAARA DTAARA(library/QDFTJRN)
7. Start the data group specifying the command as follows:
STRDG DGDFN(name system1 system2) CRLPND(*YES)
Chapter 6
System-level communications
Configuring for native TCP/IP
MIMIX users can also continue to use IBM ANYNET support to run SNA protocols
over TCP networks.
Preparing your system to use TCP/IP communications with MIMIX requires the
following:
1. Configure both systems to use TCP/IP. The procedure for configuring a system to
use TCP/IP is documented in the information included with the i5/OS software.
Refer to the IBM TCP/IP Fastpath Setup book, SC41-5430, and follow the
instructions to configure the system to use TCP/IP communications.
2. If you need to use port aliases, do the following:
a. Refer to the examples “Port aliases-simple example” on page 160 and “Port
aliases-complex example” on page 161.
b. Create the port aliases for each system using the procedure in topic “Creating
port aliases” on page 162.
3. Once the system-level communication is configured, you can begin the MIMIX
configuration process.
Figure 8. Creating Ports. In this example, the MIMIX installation consists of two systems.
Figure 9. Creating Ports. In this example, the MIMIX installation consists of three systems,
In both Figure 8 and Figure 9, if you need to use port aliases for port 50410, you need
to have a service table entry on each system that equates the port number to the port
alias. For example, you might have a service table entry on system LONDON that
defines an alias of MXMGT for port number 50410. Similarly, you might have service
table entries on systems HONGKONG and CHICAGO that define an alias of MXNET
for port 50410. You would use these aliases in the PORT1 and PORT2 parameters in
the transfer definition.
Figure 10. Creating Port Aliases. In this example, the system CHICAGO participates in two
MIMIX installations and uses a separate port for each MIMIX installation.
If you need to use port aliases in an environment such as Figure 10, you need to have
a service table entry on each system that equates the port number to the port alias. In
this example, CHICAGO would require two port aliases and two service table entries.
For example, you might use a port alias of LIBAMGT for port 50410 on LONDON and
an alias of LIBANET for port 50410 on both HONGKONG and CHICAGO. You might
use an alias of LIBBMGT for port 50411 on CHICAGO and an alias of LIBBNET for
port 50411 on both CAIRO and MEXICITY. You would use these port aliases in the
PORT1 and PORT2 parameters on the transfer definitions.
3. The Configure Related Tables display appears. Select option 1 (Work with
service table entries) and press Enter.
4. The Work with Service Table Entries display appears. Do the following:
a. Type a 1 in the Opt column next to the blank lines at the top of the list.
b. In the blank at the top of the Service column, use uppercase characters to
specify the alias that the System i5 will use to identify this port as a MIMIX
native TCP port.
Note: Port alias names are case sensitive and must be unique to the system
on which they are defined. For environments that have only one MIMIX
installation, Lakeview Technology recommends that you use the same
port number or same port alias on each system in the MIMIX
installation.
c. In the blank at the top of the Port column, specify the number of an unused port
ID to be associated with the alias. The port ID can be any number greater than
1024 and less than 55534 that is not being used by another application. You
can page down through the list to ensure that the number is not being used by
the system.
d. In the blank at the top of the Protocol column, type TCP to identify this entry as
using TCP/IP communications.
e. Press Enter.
5. The Add Service Table Entry (ADDSRVTBLE) display appears. Verify that the
information shown for the alias and port is what you want. At the Text 'description'
prompt, type a description of the port alias, enclosed in apostrophes, and then
press Enter.
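Steps 4 and 5 can also be performed directly with the Add Service Table Entry command. This sketch uses the alias MXMGT and port 50410 from the earlier example; substitute your own alias and an unused port:

```
/* Add a service table entry equating the case-sensitive alias    */
/* MXMGT to port 50410 for TCP. Values are illustrative;          */
/* substitute your own alias and port number.                     */
ADDSRVTBLE SERVICE('MXMGT') PORT(50410) PROTOCOL('tcp')
           TEXT('MIMIX native TCP port alias')
```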
Configuring APPC/SNA
Before you create a transfer definition that uses the SNA protocol, a functioning SNA
(APPN or APPC) line, controller, and device must exist between the systems that will
be identified by the transfer definition. If a line, controller, and device do not exist,
consult your network administrator before continuing.
Configuring OptiConnect
If you plan to use the OptiConnect protocol, a functioning OptiConnect line must exist
between the two systems that you identify in the transfer definition.
You can use the OptiConnect® product from IBM for all communication for most¹
MIMIX processes. Use the IBM book OptiConnect for OS/400 to install and verify
OptiConnect communications. Then you can do the following:
• Ensure that the QSOC library is in the system portion of the library list. Use the
command DSPSYSVAL SYSVAL(QSYSLIBL) to verify whether QSOC is present.
If it is not, use the CHGSYSVAL command to add this library to the system
library list.
• When you create the transfer definition, specify *OPTI for the transfer protocol.
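The library-list check in the first bullet can be sketched as follows. The library list shown on CHGSYSVAL is illustrative only, since CHGSYSVAL replaces the entire QSYSLIBL value:

```
/* Display the system portion of the library list; check for QSOC. */
DSPSYSVAL SYSVAL(QSYSLIBL)

/* If QSOC is missing, specify the complete list of existing       */
/* system libraries plus QSOC (this list is illustrative only).    */
CHGSYSVAL SYSVAL(QSYSLIBL) VALUE('QSYS QSYS2 QHLPSYS QUSRSYS QSOC')
```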
1. The #FILDTA audit and the Compare File Data (CMPFILDTA) command require TCP/IP
communications.
Chapter 7
Tips for system definition parameters
This topic provides tips for using the more common options for system definitions.
Context-sensitive help is available online for all options on the system definition
commands.
System definition (SYSDFN) This parameter is a single-part name that represents a
system within a MIMIX installation. This name is a logical representation and does not
need to match the system name that it represents.
Note: In the first part of the name, the first character must be either A - Z, $, #, or @.
The remaining characters can be alphanumeric and can contain a $, #, @, a
period (.), or an underscore (_).
System type (TYPE) This parameter indicates the role of this system within the
MIMIX installation. A system can be a management (*MGT) system or a network
(*NET) system. Only one system in the MIMIX installation can be a management
system.
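For example, a network system definition might be created as follows. The name CHICAGO is a placeholder, and all other parameters are left at their defaults:

```
/* Create a system definition for a network (*NET) system.       */
/* CHICAGO is a logical name and need not match the system name. */
CRTSYSDFN SYSDFN(CHICAGO) TYPE(*NET)
```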
Transfer definitions (PRITFRDFN, SECTFRDFN) These parameters identify the
primary and secondary transfer definitions used for communicating with the system.
The communications path and protocol are defined in the transfer definitions. For
MIMIX to be operational, the transfer definition names you specify must exist. MIMIX
does not automatically create transfer definitions. If you accept the default value
primary for the Primary transfer definition, create a transfer definition by that name.
If you specify a Secondary transfer definition, it will be used by MIMIX if the
communications path specified by the primary transfer definition is not available.
Cluster member (CLUMBR) You can specify whether you want this system definition
to be a member of a cluster. The system (node) will not be added to the cluster until the
system manager is started the first time.
Cluster transfer definition (CLUTFRDFN) You can specify the transfer definition
that cluster resource services will use to communicate to the node and for the node to
communicate with other nodes in the cluster. You must specify *TCP as the transfer
protocol.
Message handling (PRIMSGQ, SECMSGQ) MIMIX uses the centralized message
log facility which is common to all MIMIX products. These parameters provide
additional flexibility by allowing you to identify the message queues associated with
the system definition and define the message filtering criteria for each message
queue. By default, the primary message queue, MIMIX, is located in the MIMIXQGPL
library. You can specify a different message queue or optionally specify a secondary
message queue. You can also control the severity and type of messages that are sent
to each message queue.
Manager delay times (JRNMGRDLY, SYSMGRDLY) Two parameters define the
delay times used for all journal management and system management jobs. The
value of the journal manager delay parameter determines how often the journal
manager process checks for work to perform. The value of the system manager delay
parameter determines how often the system manager process checks for work to
perform.
Output queue values (OUTQ, HOLD, SAVE) These parameters identify an output
queue used by this system definition and define characteristics of how the queue is
handled. Any MIMIX functions that generate reports use this output queue. You can
hold spooled files on the queue and save spooled files after they are printed.
Keep history (KEEPSYSHST, KEEPDGHST) Two parameters specify the number of
days to retain MIMIX system history and data group history. MIMIX system history
includes the system message log. Data group history includes time stamps and
distribution history. You can keep both types of history information on the system for
up to a year.
Keep notifications (KEEPNEWNFY, KEEPACKNFY) Two parameters specify the
number of days to retain new and acknowledged notifications. The Keep new
notifications (days) parameter specifies the number of days to retain new notifications
in the MIMIX data library. The Keep acknowledged notifications (days) parameter
specifies the number of days to retain acknowledged notifications in the MIMIX data
library.
MIMIX data library, storage limit (KEEPMMXDTA, DTALIBASP, DSKSTGLMT)
Three parameters define information about MIMIX data libraries on the system. The
Keep MIMIX data (days) parameter specifies the number of days to retain objects in
the MIMIX data library, including the container cache used by system journal
replication processes. The MIMIX data library ASP parameter identifies the auxiliary
storage pool (ASP) from which the system allocates storage for the MIMIX data
library. For libraries created in a user ASP, all objects in the library must be in the
same ASP as the library. The Disk storage limit (GB) parameter specifies the
maximum amount of disk storage that may be used for the MIMIX data libraries.
User profile and job descriptions (SBMUSR, MGRJOBD, DFTJOBD) MIMIX runs
under the MIMIXOWN user profile and uses several job descriptions to optimize
MIMIX processes. The default job descriptions are stored in the MIMIXQGPL library.
Job restart time (RSTARTTIME) System-level MIMIX jobs, including the system
manager and journal manager, restart daily to maintain the MIMIX environment. You
can change the time at which these jobs restart. The management or network role of
the system affects the results of the time you specify on a system definition. Changing
the job restart time is considered an advanced technique.
Printing (CPI, LPI, FORMLEN, OVRFLW, COPIES) These parameters control
characteristics of printed output.
Product library (PRDLIB) This parameter is used for installing MIMIX into a
switchable independent ASP, and allows you to specify a MIMIX installation library
that does not match the library name of the other system definitions. The only time
this parameter should be used is in the case of an INTRA system (which is handled by
the default value) or in replication environments where it is necessary to have extra
MIMIX system definitions that will “switch locations” along with the switchable
independent ASP. Due to its complexity, changing the product library is considered
an advanced technique and should not be attempted without the assistance of a
Certified MIMIX Consultant.
ASP group (ASPGRP) This parameter is used for installing MIMIX into a switchable
independent ASP, and defines the ASP group (independent ASP) in which the
product library exists. Again, this parameter should only be used in replication
environments involving a switchable independent ASP. Due to its complexity,
changing the ASP group is considered an advanced technique and should not be
attempted without the assistance of a Certified MIMIX Consultant.
Creating system definitions
Changing a system definition
To change a system definition, do the following:
1. From the MIMIX Configuration Menu, select option 1 (Work with system
definitions) and press Enter.
2. The Work with System Definitions display appears. Type a 2 (Change) next to the
system definition you want and press Enter.
3. The Change System Definition (CHGSYSDFN) display appears. Press F10
(Additional parameters).
4. Locate the prompt for the parameter you need to change and specify the value
you want. Press F1 (Help) for more information about the values for each
parameter.
5. To save the changes, press Enter.
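The steps above prompt the same command that can be run directly from a command
line. The following is a sketch only: the system definition name SYSA and the manager
delay values are illustrative, and the SYSDFN parameter name is assumed from the
command name rather than stated in this manual:

    CHGSYSDFN SYSDFN(SYSA) JRNMGRDLY(5) SYSMGRDLY(60)

This example changes the manager delay times described in “Tips for system
definition parameters.”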
Multiple network system considerations
Figure 11. Example of system definition values in a multiple network system environment.
Figure 12. Example of a contextual (*ANY) transfer definition in use for a multiple
network system environment.

         ---------Definition---------              Threshold
Opt  Name        System 1   System 2   Protocol   (MB)
__   PRIMARY     *ANY       *ANY       *TCP       *NOMAX
Chapter 8
By creating a transfer definition, you identify to MIMIX the communications path and
protocol to be used between two systems. You need at least one transfer definition for
each pair of systems between which you want to perform replication. A pair of
systems consists of a management system and a network system. If you want to be
able to use different transfer protocols between a pair of systems, create a transfer
definition for each protocol.
System-level communication must be configured and operational before you can use
a transfer definition.
You can also define an additional communications path in a secondary transfer
definition. If configured, MIMIX can automatically use a secondary transfer definition if
the path defined in your primary transfer definition is not available.
In an Intra environment, a transfer definition defines a communications path and
protocol to be used between the two product libraries used by Intra. For detailed
information about configuring an Intra environment, refer to “Configuring Intra
communications” on page 559.
Once transfer definitions exist for MIMIX, they can be used for other functions, such
as the Run Command (RUNCMD), or by other MIMIX products for their operations.
The topics in this chapter include:
• “Tips for transfer definition parameters” on page 176 provides tips for using the
more common options for transfer definitions.
• “Using contextual (*ANY) transfer definitions” on page 181 describes using the
value (*ANY) when configuring transfer definitions.
• “Creating a transfer definition” on page 184 provides the steps to follow for
creating a transfer definition.
• “Changing a transfer definition” on page 186 provides the steps to follow for
changing a transfer definition. This topic also includes a sub-task for changing a
transfer definition when converting to a remote journaling environment.
• “Finding the system database name for RDB directory entries” on page 188
provides the steps to follow for finding the system database name for RDB
directory entries.
• “Starting the Lakeview TCP/IP server” on page 189 provides the steps to follow if
you need to start the Lakeview TCP/IP server.
• “Using autostart job entries to start the TCP server” on page 190 provides the
steps to configure the Lakeview TCP server to start automatically every time the
MIMIX subsystem is started.
• “Verifying a communications link for system definitions” on page 194 provides the
steps to verify that the communications link defined for each system definition is
operational.
Configuring transfer definitions
• “Verifying the communications link for a data group” on page 195 provides a
procedure to verify the primary transfer definition used by the data group.
Tips for transfer definition parameters
with a range from 1000 through 55534. Lakeview Technology recommends using
values between 40000 and 55500 to avoid potential conflicts with designations
made by the operating system. By default, the PORT1 parameter uses port
50410. For the PORT2 parameter, the default special value *PORT1 indicates
that the value specified on the System 1 port number or alias (PORT1) parameter
is used. If you configured TCP using port aliases in the service table, specify the
alias name instead of the port number.
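For example, a transfer definition for two systems using *TCP and a port in the
recommended range might be created with a command like the following. This is a
sketch only: the three-part name and port value are illustrative, and any parameters
not shown (such as the host names or addresses) would be taken from their defaults
or prompted:

    CRTTFRDFN TFRDFN(PRIMARY SYSA SYSB) PROTOCOL(*TCP) PORT1(50410)
      PORT2(*PORT1)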
For the *SNA protocol the following parameters apply:
• System x location name (LOCNAME1, LOCNAME2) These two parameters
specify the location name or address of system 1 and system 2, respectively. The
value of each parameter is the unique location name that identifies the system to
remote devices. For the LOCNAME1 parameter, the special value *SYS1
indicates that the location name is the same as the name specified for System 1
on the Transfer definition (TFRDFN) parameter. Similarly, for the LOCNAME2
parameter, the special value *SYS2 indicates that the location name is the same
as the name specified for System 2 on the Transfer definition (TFRDFN)
parameter.
• System x network identifier (NETID1, NETID2) These two parameters specify the
name of the network for system 1 and system 2, respectively. The default value
*LOC indicates that the network identifier for the location name associated with
the system is used. The special value *NETATR indicates that the value specified
in the system network attributes is used. The special value *NONE indicates that
the network has no name. For the NETID2 parameter, the special value *NETID1
indicates that the network identifier specified on the System 1 network identifier
(NETID1) parameter is used.
• SNA mode (MODE) This parameter specifies the name of the mode description
used for communication. The default name is MIMIX. The special value *NETATR
indicates that the value specified in the system network attributes is used.
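Putting these together, an *SNA transfer definition that takes its location names from
the three-part name and uses the MIMIX mode description might look like the
following sketch (the three-part name is illustrative):

    CRTTFRDFN TFRDFN(PRIMARY SYSA SYSB) PROTOCOL(*SNA) LOCNAME1(*SYS1)
      LOCNAME2(*SYS2) NETID1(*LOC) NETID2(*NETID1) MODE(MIMIX)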
The following parameters apply for the *OPTI protocol:
• System x location name (LOCNAME1, LOCNAME2) These two parameters
specify the location name or address of system 1 and system 2, respectively. The
value of each parameter is the unique location name that identifies the system to
remote devices. For the LOCNAME1 parameter, the special value *SYS1
indicates that the location name is the same as the name specified for System 1
on the Transfer definition (TFRDFN) parameter. Similarly, for the LOCNAME2
parameter, the special value *SYS2 indicates that the location name is the same
as the name specified for System 2 on the Transfer definition (TFRDFN)
parameter.
Threshold size (THLDSIZE) This parameter is accessible when you press F10
(Additional parameters). It specifies the maximum size of files and objects that are
sent; if a file or object exceeds the threshold, it is not sent. Valid values range from
1 through 9999999. The special
value *NOMAX indicates that no maximum value is set. Transmitting large files and
objects can consume excessive communications bandwidth and negatively impact
communications performance, especially for slow communication lines.
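For example, assuming the threshold is specified in megabytes as shown on the
Work with Transfer Definitions display, an existing transfer definition could be limited
to 500 MB transmissions with a change such as this sketch (the three-part name is
illustrative):

    CHGTFRDFN TFRDFN(PRIMARY SYSA SYSB) THLDSIZE(500)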
Relational database (RDB) This parameter is accessible when you press F10
(Additional parameters) and is valid when the default remote journaling configuration is
used. The parameter consists of four relational database values, which identify the
communications path used by the i5/OS remote journal function to transport journal
entries: a relational database directory entry name, two system database names, and
a management indicator for directory entries. This parameter creates two RDB
directory entries, one on each system identified in the transfer definition. Each entry
identifies the other system’s relational database.
Note: If you use the value *ANY for both system 1 and system 2 on the transfer
definition, *NONE is used for the directory entry name, and no directory entry
is generated.
If MIMIX is managing your RDB directory entries, a directory entry is
generated if you use the value *ANY for only one of the systems on the
transfer definition. This directory entry is generated for the system that is
specified as something other than *ANY. For more information about the use
of the value *ANY on transfer definitions, see “Using contextual (*ANY)
transfer definitions” on page 181.
The four elements of the relational database parameter are:
• Directory entry This element specifies the name of the relational database entry.
The default value *GEN causes MIMIX to create an RDB entry and add it to the
relational database. The generated name is in the format MX_nnnnnnnnnn_ssss,
where nnnnnnnnnn is the 10-character installation name, and ssss is the transfer
definition short name. If you specify a value for the RDB parameter, it is
recommended that you limit its length to 18 characters. When you specify the
special value *NONE, the directory entry is not added or changed by MIMIX.
• System 1 relational database This element specifies the name of the relational
database for System 1. The default value *SYSDB specifies that MIMIX will
determine the relational database name. If you are managing the RDB directory
entries and you need to determine the system database name, refer to “Finding
the system database name for RDB directory entries” on page 188.
Note: For remote journaling that uses an independent ASP, specify the database
name for the independent ASP.
• System 2 relational database This element specifies the name of the relational
database for System 2. The default value *SYSDB specifies that MIMIX will
determine the relational database name. If you are managing the RDB directory
entries and you need to determine the system database name, refer to “Finding
the system database name for RDB directory entries” on page 188.
Note: For remote journaling that uses an independent ASP, specify the database
name for the independent ASP.
• Manage directory entries This element specifies whether MIMIX manages the
relational database directory entries associated with the transfer definition,
regardless of whether the directory entry name is specified or generated by
MIMIX. Management of the relational database directory entries
consists of adding, changing, and deleting the directory entries on both systems,
as needed, when the transfer definition is created, changed, or deleted. The
special value *DFT indicates that MIMIX manages the relational database
directory entries only when the name is generated using the special value *GEN
on the Directory entry element of this parameter. The special value *YES
indicates that the directory entries on each system are managed by MIMIX. If the
relational database directory entries do not exist, MIMIX adds them. If they do
exist, MIMIX changes them to match the values specified by the Relational
database (RDB) parameter. When any of the transfer definition relational
database values change, the directory entry is also changed. When the transfer
definition is deleted, the directory entries are also deleted.
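Taken together, a transfer definition that lets MIMIX generate the directory entry
name and manage the entries on both systems would specify the four elements in the
order described above, for example (a sketch; the three-part name is illustrative):

    CHGTFRDFN TFRDFN(PRIMARY SYSA SYSB) RDB(*GEN *SYSDB *SYSDB *DFT)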
Using contextual (*ANY) transfer definitions
When the three-part name of a transfer definition specifies the value *ANY for System 1
or System 2 instead of a system name, MIMIX uses information from the context in
which the transfer definition is called to resolve to the correct system. Such a transfer
definition is called a contextual transfer definition.
For remote journaling environments, best practice is to use transfer definitions that
identify specific system definitions in the three-part transfer definition name. Although
you can use contextual transfer definitions with remote journaling, they are not
recommended. For more information, see “Considerations for remote journaling” on
page 182.
In MIMIX source-send configurations, a contextual transfer definition can be an aid to
configuration. For example, you can create a transfer definition named PRIMARY
SYSA *ANY. This definition can be used to provide the necessary parameters for
establishing communications between SYSA and any other system.
The *ANY value represents several transfer definitions, one for each system
definition. For example, a transfer definition PRIMARY SYSA *ANY in an installation
that has three system definitions (SYSA, SYSB, INTRA) represents three transfer
definitions:
• PRIMARY SYSA SYSA
• PRIMARY SYSA SYSB
• PRIMARY SYSA INTRA
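A contextual transfer definition like this is created once instead of once per system
pair. As a sketch (the protocol and port values are illustrative):

    CRTTFRDFN TFRDFN(PRIMARY SYSA *ANY) PROTOCOL(*TCP) PORT1(50410)
      PORT2(*PORT1)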
transfer definition that matches the transfer definition that you specified, for example,
(PRIMARY SYSA SYSB).
Naming conventions for contextual transfer definitions
The following suggested naming conventions make the contextual (*ANY) transfer
definitions more useful in your environment.
*TCP protocol: The MIMIX system definition names should correspond to DNS or
host table entries that tie the names to a specific TCP address.
*SNA protocol: The MIMIX system definition names must match the SNA environment
(controller names) for the respective systems. The MIMIX system definition names
should also match the system name in the network attributes (DSPNETA). For
example, with two MIMIX
systems called SYSA and SYSB, on the SYSA system there would have to be a
controller called SYSB that is used for SYSA to SYSB communications. Conversely,
on SYSB, a SYSA controller would be necessary.
*OPTI protocol: The MIMIX system definition names must match the OptiConnect
names for the systems (DSPOPCLNK).
Creating a transfer definition
Changing a transfer definition
See “Finding the system database name for RDB directory entries” on page 188 for
special considerations when changing your transfer definitions that are configured to
use RDB directory entries.
Finding the system database name for RDB directory entries
Starting the Lakeview TCP/IP server
Use this procedure if you need to start the Lakeview TCP/IP server. You can also
start the TCP/IP server automatically.
Once the TCP communication connections have been defined in a transfer definition,
the Lakeview TCP server must be started on each of the systems identified by the
transfer definition.
Note: Use the host name and port number (or port alias) defined in the transfer
definition for the system on which you are running this command.
From a 5250 emulator, do the following on the system on which you want to start the
TCP server:
1. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and
press Enter.
2. The Utilities Menu appears. Select option 51 (Start TCP server) and press Enter.
3. The Start Lakeview TCP Server display appears. At the Host name or address
prompt, specify the host name for the local system as defined in the transfer
definition.
4. At the Port number or alias prompt, verify that the value shown is correct. If
necessary, change the value.
Note: If you specify an alias, you must have an entry in the service table on this
system that equates the alias to the port number.
5. Press Enter.
6. Verify that the Lakeview server job is running under the MIMIX subsystem on that
system. You can use the Work with Active Jobs (WRKACTJOB) command to look
for a job under the MIMIXSBS subsystem with a function of PGM-LVSERVER.
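The display in this procedure prompts the STRSVR command. A command-line
equivalent would look something like the following sketch, where the host name and
port are the values defined in the transfer definition and MIMIXLIB stands for the
name of the MIMIX installation library:

    MIMIXLIB/STRSVR HOST(SYSA) PORT(50410)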
Using autostart job entries to start the TCP server
b. Press Enter. The job description is changed.
7. Type the command ADDAJE and press Enter.
8. The Add Autostart Job Entry (ADDAJE) display appears. Specify the following
values to configure the job description to start each time the MIMIXSBS
subsystem is started:
a. At the Subsystem description prompt specify MIMIXSBS.
b. At the Library prompt, specify MIMIXQGPL.
c. At the Job name prompt specify a name to describe the job being processed.
Lakeview Technology suggests that you use the value you specified in Step 4.
d. At the Job description prompt specify the name of the job description you just
changed in Step 4.
e. At the Library prompt specify MIMIXQGPL.
f. Press Enter. The job description is added to the automatic start procedures
within the MIMIXSBS subsystem. Each time the MIMIXSBS subsystem is
started, this TCP server is also started.
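Steps 7 and 8 correspond to an Add Autostart Job Entry (ADDAJE) command such
as the following sketch, where STRMXSVR is an illustrative name for the job and for
the job description changed in Step 4:

    ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(STRMXSVR)
      JOBD(MIMIXQGPL/STRMXSVR)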
The Request data (RQSDTA) parameter in the job description determines which
program or command is run when the MIMIXSBS subsystem is started. Use the
following command to change the job description so that the autostart job entry calls
the STRSVR command with the new system name or port number when the
MIMIXSBS subsystem is started:
CHGJOBD JOBD(MIMIXLIB/STRMXSVR) RQSDTA('MIMIXLIB/STRSVR
HOST(System name) PORT(nnnnn) JOBD(MIMIXQGPL/MIMIXCMN)')
• where System name is the system host name for the system where the
autostart job entry is defined in the MIMIX transfer definition.
• where nnnnn is either the port number or the port alias (in the form
PORTnnnnn) of the system where the autostart job entry is defined in the
MIMIX transfer definition.
Verifying a communications link for system definitions
Verifying the communications link for a data group
Before you synchronize data between systems, ensure that the communications link
for the data group is active. This procedure verifies the primary transfer definition
used by the data group. If your configuration requires multiple data groups, be sure to
check communications for each data group definition.
Do the following:
1. From the Work with Data Group Definitions display, type an 11 (Verify
communications link) next to the data group you want and press F4.
2. The Verify Communications Link display appears. Ensure that the values shown
for the prompts are what you want.
3. To start the check, press Enter.
4. You should see a message "VFYCMNLNK command completed successfully."
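Option 11 prompts the Verify Communications Link (VFYCMNLNK) command. From
a command line, the same check might be requested with something like the
following sketch; the parameter name DGDFN and the three-part data group name
are assumptions used for illustration only:

    VFYCMNLNK DGDFN(INVENTORY SYSA SYSB)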
If your data group definition specifies a secondary transfer definition, use the following
procedure to check all communications links.
Chapter 9
By creating a journal definition you identify to MIMIX a journal environment that can
be used in the replication process. MIMIX uses the journal definition to manage the
journaling environment, including journal receiver management.
A journal definition does not automatically build the underlying journal environment
that it defines. If the journal environment does not exist, it must be built. This can be
done after the journal definition is created. Configuration checklists indicate when to
build the journal environment.
The topics in this chapter include:
• “Journal definitions created by other processes” on page 200 describes the
security audit journal (QAUDJRN) and other journal definitions that are
automatically created by MIMIX.
• “Tips for journal definition parameters” on page 201 provides tips for using the
more common options for journal definitions.
• “Journal definition considerations” on page 205 provides things to consider when
creating journal definitions for remote journaling.
• “Journal receiver size for replicating large object data” on page 213 provides
procedures to verify that a journal receiver is large enough to accommodate large
IFS stream files and files containing LOB data, and if necessary, to change the
receiver size options.
• “Creating a journal definition” on page 215 provides the steps to follow for creating
a journal definition.
• “Changing a journal definition” on page 217 provides the steps to follow for
changing a journal definition.
• “Building the journaling environment” on page 219 describes the journaling
environment and provides the steps to follow for building it.
• “Changing the remote journal environment” on page 222 provides steps to follow
when changing an existing remote journal configuration. The procedure is
appropriate for changing a journal receiver library for the target journal in a remote
journaling environment or for any other changes that affect the target journal.
• “Adding a remote journal link” on page 225 describes how to create a MIMIX RJ
link, which will in turn create a target journal definition with appropriate values to
support remote journaling. In most configurations, the RJ link is automatically
created for you when you follow the steps of the configuration checklists.
• “Changing a remote journal link” on page 227 describes how to change an
existing RJ link.
• “Temporarily changing from RJ to MIMIX processing” on page 228 describes how
to change a data group configured for remote journaling to temporarily use MIMIX
send processing.
• “Changing from remote journaling to MIMIX processing” on page 229 describes
how to change a data group that uses remote journaling so that it uses MIMIX
send processing. Remote journaling is preferred.
• “Removing a remote journaling environment” on page 231 describes how to
remove a remote journaling environment that you no longer need.
Configuring journal definitions
Journal definitions created by other processes
Tips for journal definition parameters
This topic provides tips for using the more common options for journal definitions.
Context-sensitive help is available online for all options on the journal definition
commands.
Journal definition (JRNDFN) This parameter is a two-part name that identifies a
journaling environment on a system. The first part of the name identifies the journal
definition. When a journal definition for the security audit journal (system journal) is
automatically created as a result of creating a system definition, the first part of the
name is QAUDJRN. The second part of the name identifies a system definition which
represents the system on which you want the journal to reside.
Note: In the first part of the name, the first character must be either A - Z, $, #, or @.
The remaining characters can be alphanumeric and can contain a $, #, @, a
period (.), or an underscore (_). Journal definition names cannot be UPSMON
or begin with the characters MM. If the journal definition is configured by
MIMIX for use with MIMIX RJ support, the name is the first eight characters
from the name of the source journal definition followed by the characters @R.
If a journal definition name is already in use, the name may include @S, @T,
@U, @V, or @W. There are additional specific naming conventions for journal
definitions that are used with remote journaling.
MIMIX uses the first six characters of the journal definition name to generate
the journal receiver prefix. MIMIX restricts the last character of the prefix from
being numeric. If the last character of a prefix resulting from the journal
definition name is numeric, it can become part of the receiver number and no
longer match the journal name.
Journal (JRN) This parameter specifies the qualified name of a journal to which
changes to files or objects to be replicated are journaled. For the journal name, the
default value *JRNDFN uses the name of the journal definition for the name of the
journal.
For the journal library, the default value *DFT allows MIMIX to determine the library
name based on the ASP in which the journal library is allocated, as specified in the
Journal library ASP parameter. If that parameter specifies *ASPDEV, MIMIX uses
#MXJRNIASP for the default journal library name; otherwise, the default library name
is #MXJRN.
Journal library ASP (JRNLIBASP) This parameter specifies the auxiliary storage
pool (ASP) from which the system allocates storage for the journal library. You can
use the default value *CRTDFT or you can specify the number of an ASP in the range
1 through 32.
The value *CRTDFT indicates that the command default value for the i5/OS Create
Library (CRTLIB) command is used to determine the auxiliary storage pool (ASP)
from which the system allocates storage for the library.
For libraries that are created in a user ASP, all objects in the library must be in the
same ASP as the library.
The following parameters specify conditions that must be met before change
management can occur.
• Receiver threshold size (MB) (THRESHOLD) You can specify the size, in
megabytes, of the journal receiver at which it is changed. The default value is
6600 MB. This value is used when MIMIX or the system changes the receivers.
If you decide to decrease the Receiver threshold size, you will need to manually
change your journal receiver to reflect this change.
If you change the journal receiver threshold size in the journal definition, the
change is effective with the next receiver change.
• Time of day to change receiver (TIME) You can specify the time of day at which
MIMIX changes the journal receiver. The time is based on a 24 hour clock and
must be specified in HHMMSS format.
• Reset sequence threshold (RESETTHLD) You can specify the sequence number
(in millions) at which to reset the receiver sequence number. When the threshold
is reached, the next receiver change resets the sequence number to 1.
For information about how change management occurs in a remote journal
environment and about using other change management choices, see “Journal
receiver management” on page 37.
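For example, the change management conditions above might be set on a journal
definition with a command such as this sketch (the journal definition name and all
values are illustrative; the Change Journal Definition command name follows the
“Changing a journal definition” topic):

    CHGJRNDFN JRNDFN(QAUDJRN SYSA) THRESHOLD(6600) TIME(030000)
      RESETTHLD(9000)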
Receiver delete management (DLTMGT, KEEPUNSAV, KEEPRCVCNT,
KEEPJRNRCV) Four parameters control how MIMIX handles deleting the journal
receivers associated with the replication process.
The Receiver delete management (DLTMGT) parameter specifies whether or not
MIMIX performs delete management for the journal receivers. By default, MIMIX
performs the delete management operations. MIMIX operations can be adversely
affected if you allow the system or another process to handle delete management.
For example, if another process deletes a journal receiver before MIMIX is finished
with it, replication can be adversely affected.
All of the requirements that you specify in the following parameters must be met
before MIMIX deletes a journal receiver:
• Keep unsaved journal receivers (KEEPUNSAV) You can specify whether or not to
have MIMIX retain any unsaved journal receivers. Retaining unsaved receivers
allows you to back out (rollback) changes in the event that you need to recover
from a disaster. The default value *YES causes MIMIX to keep unsaved journal
receivers until they are saved.
• Keep journal receiver count (KEEPRCVCNT) You can specify the number of
detached journal receivers to retain. For example, if you specify 2 and there are
10 journal receivers including the attached receiver (which is number 10), MIMIX
retains two detached receivers (8 and 9) and deletes receivers 1 through 7.
• Keep journal receivers (days) (KEEPJRNRCV) You can specify the number of
days to retain detached journal receivers. For example, if you specify to keep the
journal receiver for 7 days and the journal receiver is eligible for deletion, it will be
deleted after 7 days have passed from the time of its creation. The exact time of
the deletion may vary. For example, the deletion may occur within a few hours
after the 7 days have passed.
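As an illustration, the delete management requirements above might be expressed
together as follows. This is a sketch only; in particular, the value shown for DLTMGT
is an assumption for “MIMIX performs delete management,” and the journal definition
name is illustrative:

    CHGJRNDFN JRNDFN(QAUDJRN SYSA) DLTMGT(*YES) KEEPUNSAV(*YES)
      KEEPRCVCNT(2) KEEPJRNRCV(7)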
Journal definition considerations
Consider the following as you create journal definitions for remote journaling:
• The source journal definition identifies the local journal and the system on
which the local journal exists. Similarly, the target journal definition identifies
the remote journal and the system on which the remote journal exists.
Therefore, the source journal definition identifies the source system of the
remote journal process and the target journal definition identifies the target
system of the remote journal process.
• You can use an existing journal definition as the source journal definition to
identify the local journal. However, using an existing journal definition as the
target journal definition is not recommended. The existing definition is likely
to be used for journaling and therefore is not appropriate as the target journal
definition for a remote journal link.
• MIMIX recognizes the receiver change management parameters (CHGMGT,
THRESHOLD, TIME, RESETTHLD) specified in the source journal definition
and ignores those specified in the target journal definition. When a new
receiver is attached to the local journal, a new receiver with the same name is
automatically attached to the remote journal. The receiver prefix specified in
the target journal definition is ignored.
• Each remote journal link defines a local-remote journal pair that functions in
only one direction. Journal entries flow from the local journal to the remote
journal. The direction of a defined pair of journals cannot be switched. If you
want to use the RJ process in both directions for a switchable data group, you
need to create journal definitions for two remote journal links (four journal
definitions). For more information, see “Example journal definitions for a
switchable data group” on page 207.
• MIMIX will try to create *TYPE2 journals when possible and *TYPE1 journals
when a *TYPE2 journal cannot be created. MIMIX creates the environment
that is appropriate for the type of journal created. Refer to the IBM book,
Backup and Recovery, for information about save and restore considerations
for *TYPE2 and *TYPE1 journals in a remote journaling environment.
• After the journal environment is built for a target journal definition, MIMIX
cannot change the value of the target journal definition’s Journal receiver prefix
(JRNRCVPFX) or Threshold message queue (MSGQ), and several other
values. To change these values see the procedure in the IBM topic “Library
Redirection with Remote Journals” in the IBM eServer iSeries Information
Center.
• If you are configuring MIMIX for a scenario in which you have more than one
target system, there are additional considerations for the names of journal
receivers. Each source journal definition must specify a unique value for the
Journal receiver prefix (JRNRCVPFX) parameter. MIMIX ensures that the
same prefix is not used more than once on the same system but cannot
determine if the prefix is used on a target journal while it is being configured. If
the prefix defined by the source journal definition is reused by target journals
that reside in the same library and ASP, attempts to start the remote journals
will fail with message CPF699A (Unexpected journal receiver found).
If you create a target journal definition yourself instead of having it generated
by the Add Remote Journal Link (ADDRJLNK) command, use the default value
*GEN for the Journal receiver prefix (JRNRCVPFX) parameter. The receiver
names for the source and target journals will then be the same on both systems,
even though the prefixes shown in the two journal definitions differ. In the target
journal, the prefix used is the one specified in the source journal definition.
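As a hedged illustration, manually creating such a target journal definition might look like the following. Only the CRTJRNDFN command name and the JRNRCVPFX parameter are taken from this chapter; the definition name and system are examples only.

```
/* Sketch only: manually create a target journal definition, letting MIMIX */
/* generate the receiver prefix so it matches the source journal           */
/* definition. Names are illustrative; verify keywords by prompting (F4).  */
CRTJRNDFN JRNDFN(PAYABLES@R NEWYORK) JRNRCVPFX(*GEN)
```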
Example journal definitions for a switchable data group
To support a switchable data group in a remote journaling environment, you need to
have four journal definitions configured: two for the RJ link used for normal
production-to-backup operations, and two for the RJ link used for replication in the
opposite direction.
In this example, a switchable data group named PAYABLES is created between
systems CHICAGO and NEWYORK. System 1 (CHICAGO) is the data source. The
data group definition specifies *YES to Use remote journal link. Command defaults
create the data group using a generated short data group name and using the data
group name for the system 1 and system 2 journal definitions.
To create the RJ link and associated journal definitions for normal operations, option
10 (Add RJ link) on the Work with Journal Definitions display is used on an existing
journal definition named PAYABLES CHICAGO (the first entry listed in Figure 13).
This is the source journal definition for normal operations. The process of adding the
link creates the target journal definition PAYABLES@R NEWYORK (the last entry
listed in Figure 13).
To create the RJ link and associated definitions for replication in the opposite
direction, a new source journal definition, PAYABLES NEWYORK, is created (the
second entry listed in Figure 13). Then that definition is used to create a second RJ link,
which in turn generates the target journal definition PAYABLES@R CHICAGO (the
third entry listed in Figure 13).
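The two links above could also be added from the command line. This is a sketch only: ADDRJLNK and the *GEN target name are described in this book, but the parameter keywords shown here are assumptions and should be verified by prompting the command.

```
/* Normal direction: CHICAGO (source) -> NEWYORK (target).                 */
/* Generates the target journal definition PAYABLES@R NEWYORK.             */
ADDRJLNK SRCJRNDFN(PAYABLES CHICAGO) TGTJRNDFN(*GEN NEWYORK)

/* Opposite direction, used after a switch: NEWYORK -> CHICAGO.            */
/* Generates the target journal definition PAYABLES@R CHICAGO.             */
ADDRJLNK SRCJRNDFN(PAYABLES NEWYORK) TGTJRNDFN(*GEN CHICAGO)
```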
Figure 13. Work with Journal Definitions display
Identifying the correct journal definition on the Work with Journal Definitions display
can be confusing. Fortunately, the Work with RJ Links display (Figure 14) shows the
association between journal definitions much more clearly.
Figure 14. Work with RJ Links display
• Manually create journal definitions (CRTJRNDFN command) using the library
name-mapping convention. Journal definitions created when a data group is
created may not have unique names and will not create all the necessary target
journal definitions.
• Once the appropriately named journal definitions are created for source and
target systems, manually create the remote journal links between them
(ADDRJLNK command).
Figure 15. Library-mapped journal definitions - three node environment. All nodes are management systems
Figure 16 shows the RJ links needed for this example.
Figure 16. Library-mapped names shown in RJ links for three node environment
Journal receiver size for replicating large object data
For potentially large IFS stream files and files containing LOB data, the journal
receiver must be large enough to accommodate the data; you may need to increase
the journal receiver size.
For data groups that can be switched, the journal receivers on both the source and
target systems must be large enough to accommodate the data.
Creating a journal definition
Do the following to create a journal definition:
1. Access the Work with Journal Definitions display. From the MIMIX Configuration
Menu select option 3 (Work with journal definitions) and press Enter.
2. The Work with Journal Definitions display appears. Type 1 (Create) next to the
blank line at the top of the list area and press Enter.
3. The Create Journal Definition display appears. At the Journal definition prompts,
specify a two-part name.
Note: Journal definition names cannot be UPSMON or begin with the characters
MM.
4. Verify that the following prompts contain the values that you want. If you have not
journaled before, the default values are appropriate. If you need to identify an
existing journaling environment to MIMIX, specify values that match that environment.
Journal
Library
Journal library ASP
Journal receiver prefix
Library
Journal receiver library ASP
5. At the Target journal state prompt, specify the requested status of the target
journal. The default value is *ACTIVE. This value can be used with active
journaling support or journal standby state.
6. At the Journal caching prompt, specify whether the system should cache journal
entries in main storage before writing them to disk. The recommended default
value is *BOTH.
7. Set the values you need to manage changing journal receivers, as follows:
a. At the Receiver change management prompt, specify the value you want.
Lakeview recommends that you use the default values. For more information
about valid combinations of values, press F1 (Help).
b. Press Enter.
c. One or more additional prompts related to receiver change management
appear on the display. Verify that the values shown are what you want and, if
necessary, change the values.
Receiver threshold size (MB)
Time of day to change receiver
Reset sequence threshold
d. Press Enter.
8. Set the values you need to manage deleting journal receivers, as follows:
a. Lakeview recommends that you accept the default value *YES for the Receiver
delete management prompt to allow MIMIX to perform delete management.
b. Press Enter.
c. One or more additional prompts related to receiver delete management appear
on the display. If necessary, change the values.
Keep unsaved journal receivers
Keep journal receiver count
Keep journal receivers (days)
9. At the Description prompt, type a brief text description of the journal definition.
10. This step is optional. If you want to access additional parameters that are
considered advanced functions, press F10 (Additional parameters). Make any
changes you need to the additional prompts that appear on the display.
11. To create the journal definition, press Enter.
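The procedure above can also be performed in a single CRTJRNDFN command. In this sketch, only the command name and the guidance about defaults come from the text; the TEXT keyword and the names used are assumptions.

```
/* Sketch only: create a journal definition named PAYABLES for system      */
/* CHICAGO, accepting the shipped defaults for journaling, receiver        */
/* change management, and delete management (appropriate if you have not   */
/* journaled before). Prompt with F4 to review every parameter.            */
CRTJRNDFN JRNDFN(PAYABLES CHICAGO) TEXT('Payables journal definition')
```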
Changing a journal definition
To change a journal definition, do the following:
1. Access the Work with Journal Definitions display according to your configuration
needs:
• In a clustering environment, from the MIMIX Cluster Menu select option 20
(Work with system definitions) and press Enter. When the Work with System
Definitions display appears, type 12 (Journal Definitions) next to the system
name you want and press Enter.
• In a standard MIMIX environment, from the MIMIX Configuration Menu select
option 3 (Work with journal definitions) and press Enter.
2. The Work with Journal Definitions display appears. Type 2 (Change) next to the
definition you want and press Enter.
3. The Change Journal Definition (CHGJRNDFN) display appears. Press Enter twice
to see all prompts for the display.
4. Make any changes you need to the prompts. Press F1 (Help) for more information
about the values for each parameter.
5. If you need to access advanced functions, press F10 (Additional parameters).
When the additional parameters appear on the display, make the changes you
need.
6. To accept the changes, press Enter.
Note: Changes to the Receiver threshold size (MB) (THRESHOLD) are effective
with the next receiver change. Before a change to any other parameter is
effective, you must rebuild the journal environment. Rebuilding the journal
environment ensures that it matches the journal definition and prevents
problems starting the data group.
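For example, a threshold-only change might be made as follows. This is a sketch: THRESHOLD is the parameter named in the note above, while the JRNDFN keyword, definition name, and value are illustrative. Because only the receiver threshold changes, it takes effect at the next receiver change with no rebuild required.

```
/* Sketch only: raise the receiver threshold to 6600 MB. A change to any   */
/* other parameter would require rebuilding the journal environment.       */
CHGJRNDFN JRNDFN(PAYABLES CHICAGO) THRESHOLD(6600)
```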
Building the journaling environment
Before replication for a data group can occur, the journal environment for all journal
definitions used by that data group must be created on each system. A journaling
environment includes the following objects: library, journal, journal receiver, and
threshold message queue on the system specified in the journal definition. The Build
Journal Environment (BLDJRNENV) command is used to build the journal
environment objects for a journal definition. When the BLDJRNENV command is run,
if the objects do not exist, they are created based on what is specified in the journal
definition. If the journal exists, the Source for values (JRNVAL) parameter of the
BLDJRNENV command is used to determine the source for the values of these
objects. The journal receiver prefix and library, message queue and library, and
threshold parameters are updated from the source specified in the JRNVAL
parameter.
Specifying *JRNENV for the JRNVAL parameter changes the values of the objects in
the journal definition to match the values in the existing journal environment objects.
Specifying *JRNDFN for the JRNVAL parameter changes the values of the journal
environment objects to match the values of the objects in the journal definition. In a
remote journal environment, the values specified in the journal definition (*JRNDFN)
are only applicable to the source journal.
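The two JRNVAL choices can be sketched as follows. JRNVAL and its values come from the text above; the JRNDFN keyword and the names used are assumptions.

```
/* Adopt the values of the existing journal environment objects into the   */
/* journal definition.                                                     */
BLDJRNENV JRNDFN(PAYABLES CHICAGO) JRNVAL(*JRNENV)

/* Or push the journal definition's values out to the existing journal     */
/* environment objects. In a remote journal environment this applies only  */
/* to the source journal.                                                  */
BLDJRNENV JRNDFN(PAYABLES CHICAGO) JRNVAL(*JRNDFN)
```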
If the data group definition specifies to journal on the target system, the journal
environment must be built on each system that will be a target system for replication
of that data group. If you do not build either source or target journal environments, the
first time the data group starts MIMIX will automatically build the journal environments
for you.
Note: When building a journal environment, ensure the journal receiver prefix in the
specified library is not already used. If the journal receiver prefix in the
specified library is already used, you must change it to an unused value.
For switchable data groups not specified to journal on the target system, it is
recommended to build the source journaling environments for both directions of
replication so the environments exist for data group replication after switching.
All previous steps in your configuration checklist must be complete before you use
this procedure.
To build the journaling environment, do the following:
Note: If you are journaling on the target system, perform this procedure for both
the source and target systems.
1. From the MIMIX Main Menu, select 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, select one of the following and press Enter:
a. Select 8 (Work with remote journal links) to build the journaling environments
for remote journaling.
b. Select 3 (Work with journal definitions) to build all other journaling
environments.
3. From the Work with display, type 14 (Build) next to the journal definition you want
and press Enter.
Changing the remote journal environment
b. A confirmation display appears. To continue deleting the journal, its associated
message queue, and the journal receiver, press Enter.
6. Make the changes you need for the target journal.
For example, to change the target (remote) journal definition to a new receiver
library, do the following:
a. Press F12 to return to the Work with Journal Definitions display.
b. Type option 2 (Change) next to the journal definition for the target system you
want and press Enter.
7. From the Work with Journal Definitions display, type a 14 (Build) next to the target
journal definition and press Enter.
Note: The target journal definition will end with @R.
8. Return to the Work with Data Groups display. Then do the following:
a. Type an 8 (Display status) next to the data group you want and press Enter.
b. Locate the name of the receiver in the Last Read field for the Database
process.
9. Do the following to start the RJ link:
a. From the Work with Data Groups display, type a 44 (RJ links) next to the data
group you want and press Enter.
b. Locate the link you want based on the name in the Target Jrn Def column.
Type a 9 (Start) next to the link with the target journal definition and press F4
(Prompt).
c. The Start Remote Journal Link (STRRJLNK) display appears. Specify the receiver
name from Step 8b as the value for the Starting journal receiver (STRRCV)
and press Enter.
10. Start the data group using default values. Refer to topic “Starting selected data
group processes” in the Using MIMIX book.
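Steps 9 and 10 might look like the following from the command line. Only the STRRJLNK command, the STRRCV parameter, and the PRC parameter of the start command are named in this book; the remaining keywords and the names used are illustrative.

```
/* Sketch only: restart the RJ link from the receiver noted in Step 8b,    */
/* then start the data group with default values.                          */
STRRJLNK TGTJRNDFN(PAYABLES@R NEWYORK) STRRCV(PAYABLES0001)
STRDG DGDFN(PAYABLES) PRC(*ALL)
```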
Adding a remote journal link
This procedure requires that a source journal definition exists. The process of creating
an RJ link will create the target journal definition with appropriate values for remote
journaling.
Before you create the RJ link you should be familiar with the “Journal definition
considerations” on page 205.
To create a link between journal definitions, do the following:
1. From the MIMIX Configuration menu, select option 3 (Work with journal
definitions) and press Enter.
2. The Work with Journal Definitions display appears. Type a 10 (Add RJ link) next
to the journal definition you want and press Enter.
3. The Add Remote Journal Link (ADDRJLNK) display appears. The journal
definition you selected in the previous step appears in the prompts for the Source
journal definition. Verify that this is the definition you want as the source for RJ
processing.
4. At the Target journal definition prompts, specify *GEN as the Name and specify
the value you want for System.
Note: If you specify the name of a journal definition, the definition must exist and
you are responsible for ensuring that its values comply with the
recommended values. Refer to the related topic on considerations for
creating journal definitions for remote journaling for more information.
5. Verify that the values for the following prompts are what you want. If necessary,
change the values.
• Delivery
• Sending task priority
• Primary transfer definition
• Secondary transfer definition
• If you are using an independent ASP in this configuration you also need to
identify the auxiliary storage pools (ASPs) from which the journal and journal
receiver used by the remote journal are allocated. Verify and change the
values for Journal library ASP, Journal library ASP device, Journal receiver
library ASP, and Journal receiver lib ASP dev as needed.
6. At the Description prompt, type a text description of the link, enclosed in
apostrophes.
7. To create the link between journal definitions, press Enter.
Changing a remote journal link
Changes to the delivery and sending task priority take effect only after the remote
journal link has been ended and restarted.
To change characteristics of the link between source and target journal definitions, do
the following:
1. Before you change a remote journal link, end activity for the link. The Using MIMIX
book describes how to end only the RJ link.
Note: If you plan to change the primary transfer definition or secondary transfer
definition to a definition that uses a different RDB directory entry, you also
need to remove the existing connection between objects. Use topic
“Removing a remote journaling environment” on page 231 before
changing the remote journal link.
2. From the Work with RJ Links display, type a 2 (Change) next to the entry you
want and press Enter.
3. The Change Remote Journal Link (CHGRJLNK) display appears. Specify the
values you want for the following prompts:
• Delivery
• Sending task priority
• Primary transfer definition
• Secondary transfer definition
• Description
4. When you are ready to accept the changes, press Enter.
5. To make the changes effective, do the following:
a. If you removed the RJ connection in Step 1, you need to use topic “Building the
journaling environment” on page 219.
b. Start the data group which uses the RJ link.
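A hedged sketch of Step 3, changing a link's delivery mode and transfer definitions, is shown below. The prompt names come from the text; the keyword spellings and the way the link is identified are assumptions to verify by prompting CHGRJLNK.

```
/* Sketch only: with the link ended (Step 1), change it to asynchronous    */
/* delivery and assign a secondary transfer definition.                    */
CHGRJLNK TGTJRNDFN(PAYABLES@R NEWYORK) DELIVERY(*ASYNC)
         PRITFRDFN(PRIMARY) SECTFRDFN(BACKUP)
```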
Changing from remote journaling to MIMIX processing
Use this procedure when you no longer want to use remote journaling for a data
group and want to permanently change the data group to use MIMIX send
processing.
Important! If the data group is configured for MIMIX Dynamic Apply, you must
complete the procedure in “Checklist: Converting to legacy cooperative
processing” on page 157 before you remove remote journaling.
Perform these tasks from the MIMIX management system unless these instructions
indicate otherwise.
1. Perform a controlled end for the data group that you want to change using topic
“Ending a data group in a controlled manner” in the Using MIMIX book. On the
ENDDG command, specify the following:
• *ALL for the Process prompt
• *CNTRLD for the End process prompt
Note: Do not end the RJ link at this time. Step 2 verifies that the RJ link is not
in use by any other processes or data groups before ending and
removing the RJ environment.
2. Perform the procedure in topic “Removing a remote journaling environment” on
page 231.
3. Modify the data group definition as follows:
a. From the Work with DG Definitions display, type a 2 (Change) next to the data
group you want and press Enter.
b. The Change Data Group Definition (CHGDGDFN) display appears. Press
Enter to see additional prompts.
c. Specify *NO for the Use remote journal link prompt.
d. To accept the change, press Enter.
4. Start data group replication using the procedure “Starting selected data group
processes” in the Using MIMIX book and specify *ALL for the Start processes
prompt (PRC parameter).
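The sequence above, excluding the removal procedure on page 231, might be sketched as follows. The ENDDG, CHGDGDFN, and start commands and the PRC and RJLNK parameters appear in this book; the DGDFN and ENDOPT keywords are assumptions.

```
/* Step 1: controlled end of all processes for the data group.             */
ENDDG DGDFN(PAYABLES) PRC(*ALL) ENDOPT(*CNTRLD)

/* Step 2: remove the remote journaling environment (see page 231).        */

/* Step 3: change the data group to use MIMIX send processing.             */
CHGDGDFN DGDFN(PAYABLES) RJLNK(*NO)

/* Step 4: restart replication with all processes.                         */
STRDG DGDFN(PAYABLES) PRC(*ALL)
```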
Removing a remote journaling environment
Use this procedure when you want to remove a remote journaling environment that
you no longer need. This procedure removes configuration elements and system
objects necessary for data group replication with remote journaling.
1. Verify that the remote journal link is not used by any data group. Use “Identifying
data groups that use an RJ link” on page 310.
If you identify a data group that uses the remote journal link, check with your
MIMIX administrator and determine how to proceed. Possible courses of action
are:
• If the data group is being converted to use MIMIX send processing or if the
data group will no longer be used, perform a controlled end of the data group.
When the data group is ended, continue with Step 2 of this procedure.
• If the data group needs to remain operable using remote journaling, do not
continue with this procedure.
2. End the remote journal link and verify that it has a state value of *INACTIVE
before you continue. Refer to topics “Ending a remote journal link independently”
and “Checking status of a remote journal link” in the Using MIMIX book.
3. From the management system, do the following to remove the connection to the
remote journal:
a. Access the journal definitions for the data group whose environment you want
to change. From the Work with Data Groups display, type a 45 (Journal
definitions) next to the data group that you want and press Enter.
b. Type a 12 (Work with RJ links) next to either journal definition you want and
press Enter. You can select either the source or target journal definition.
c. From the Work with RJ Links display, type a 15 (Remove RJ connection) next
to the link that you want and press Enter.
Note: If more than one RJ link is available for the data group, ensure that you
choose the link you want.
d. A confirmation display appears. To continue removing the connections for the
selected links, press Enter.
4. From the Work with RJ Links display, do the following to delete the target system
objects associated with the RJ link:
a. Type a 24 (Delete target jrn environment) next to the link that you want and
press Enter.
Chapter 10
By creating a data group definition, you identify to MIMIX the characteristics of how
replication occurs between two systems. You must have at least one data group
definition in order to perform replication.
In an Intra environment, a data group definition defines how replication occurs
between the two product libraries used by INTRA.
Once data group definitions exist for MIMIX, they can also be used by the MIMIX
Promoter product.
The topics in this chapter include:
• “Tips for data group parameters” on page 234 provides tips for using the more
common options for data group definitions.
• “Creating a data group definition” on page 247 provides the steps to follow for
creating a data group definition.
• “Changing a data group definition” on page 251 provides the steps to follow for
changing a data group definition.
• “Fine-tuning backlog warning thresholds for a data group” on page 251 describes
what to consider when adjusting the values at which the backlog warning
thresholds are triggered.
Tips for data group parameters
similar attributes in which the roles of source and target are reversed in order to
support high availability.
Data group type (TYPE) The default value *ALL indicates that the data group can be
used by both user journal and system journal replication processes. This enables you
to use the same data group for all of the replicated data for an application. The value
*ALL is required for user journal replication of IFS objects, data areas, and data
queues. MIMIX Dynamic Apply also supports the value *DB. For additional
information, see “Requirements and limitations of MIMIX Dynamic Apply” on
page 110.
Note: In Clustering environments only, the data group value of *PEER is available.
This provides you with support for system values and other system attributes
that MIMIX currently does not support.
Transfer definitions (PRITFRDFN, SECTFRDFN) These parameters identify the
transfer definitions used to communicate between the systems defined by the data
group. The name you specify in these parameters must match the first part of a
transfer definition name. By default, MIMIX uses the name PRIMARY for a value of
the primary transfer definition (PRITFRDFN) parameter and for the first part of the
name of a transfer definition.
If you specify a secondary transfer definition (SECTFRDFN), it is used if the
communications path specified in the primary transfer definition is not available.
Once MIMIX starts using the secondary transfer definition, it continues to use it even
after the primary communication path becomes available again.
Reader wait time (seconds) (RDRWAIT) You can specify the maximum number of
seconds that the send process waits when there are no entries available to process.
Jobs go into a delay state when there are no entries to process. Jobs wait for the time
you specify even when new entries arrive in the journal. A value of 0 uses more
system resources.
Common database parameters (JRNTGT, JRNDFN1, JRNDFN2, ASPGRP1,
ASPGRP2, RJLNK, COOPJRN, NBRDBAPY, DBJRNPRC) These parameters
apply to data groups that can include database files or tracking entries. Data group
types of *ALL or *DB include database files. Data group types of *ALL may also
include tracking entries.
Journal on target (JRNTGT) The default value *YES enables journaling on the
target system, which allows you to switch the direction of a data group more
quickly. Replication of files with some types of referential constraint actions may
require a value of *YES. For more information, see “Considerations for LF and PF
files” on page 105.
If you specify *NO, you must ensure that, in the event of a switch to the direction
of replication, you manually start journaling on the target system before allowing
users to access the files. Otherwise, activity against those files may not be
properly recorded for replication.
System 1 journal definition (JRNDFN1) and System 2 journal definition
(JRNDFN2) parameters identify the user journal definitions associated with the
systems defined as System 1 and System 2, respectively, of the data group. The
value *DGDFN indicates that the journal definition has the same name as the data
group definition.
The DTASRC, ALWSWT, JRNTGT, JRNDFN1, and JRNDFN2 parameters
interact to automatically create as much of the journaling environment as possible.
The DTASRC parameter determines whether system 1 or system 2 is the source
system for the data group. When you create the data group definition, if the
journal definition for the source system does not exist, a journal definition is
created. If you specify to journal on the target system and the journal definition for
the target system does not exist, that journal definition is also created. The
names of journal definitions created in this way are taken from the values of the
JRNDFN1 and JRNDFN2 parameters according to which system is considered
the source system at the time they are created. You may need to build the
journaling environment for these journal definitions.
System 1 ASP group (ASPGRP1) and System 2 ASP group (ASPGRP2)
parameters identify the name of the primary auxiliary storage pool (ASP) device
within an ASP group on each system. The value *NONE allows replication from
libraries in the system ASP and basic user ASPs 2-32. Specify a value when you
want to replicate IFS objects from a user journal or when you want to replicate
objects from ASPs 33 or higher. For more information see “Benefits of
independent ASPs” on page 564.
Use remote journal link (RJLNK) This parameter identifies how journal entries
are moved to the target system. The default value, *YES, uses remote journaling
to transfer data to the target system. This value results in the automatic creation of
the journal definitions (CRTJRNDFN command) and the RJ link (ADDRJLNK
command), if needed. The RJ link defines the source and target journal definitions
and the connection between them. When ADDRJLNK is run during the creation of
a data group, the data group transfer definition names are used for the
ADDRJLNK transfer definition parameters.
MIMIX Dynamic Apply requires the value *YES. The value *NO is appropriate
when MIMIX source-send processes must be used.
Cooperative journal (COOPJRN) This parameter determines whether
cooperatively processed operations for journaled objects are performed primarily
by user (database) journal replication processes or system (audit) journal
replication processes. Cooperative processing through the user journal is
recommended and is called MIMIX Dynamic Apply. For data groups created on
version 5, the shipped default value *DFT resolves to *USRJRN (user journal)
when configuration requirements for MIMIX Dynamic Apply are met. If those
requirements are not met, *DFT resolves to *SYSJRN and cooperative processing
is performed through system journal replication processes.
Number of DB apply sessions (NBRDBAPY) You can specify the number of
apply sessions allowed to process the data for the data group.
DB journal entry processing (DBJRNPRC) This parameter allows you to
specify several criteria that MIMIX will use to filter user journal entries before they
reach the database apply (DBAPY) process. Each element of the parameter
identifies a criteria that can be set to either *SEND or *IGNORE.
The value *SEND causes the journal entries meeting the criteria to be processed
and sent to the database apply process. For data groups configured to use
MIMIX source-send processes, *SEND can minimize the amount of data that is
sent over a communications path. The value *IGNORE prevents the entries from
being sent to the database apply process. Certain database techniques, such as
keyed replication, may require that an element be set to a specific value.
The following available elements describe how journal entries are handled by the
database reader (DBRDR) or the database send (DBSND) processes.
• Before images This criteria determines whether before-image journal entries
are filtered out before reaching the database apply process. If you use keyed
replication, the before-images are often required and you should specify
*SEND. *SEND is also required for the IBM RMVJRNCHG (Remove Journal
Change) command. See “Additional considerations for data groups” on
page 244 for more information.
• For files not in data group This criteria determines whether journal entries for
files not defined to the data group are filtered out.
• Generated by MIMIX activity This criteria determines whether journal entries
resulting from the MIMIX database apply process are filtered out.
• Not used by MIMIX This criteria determines whether journal entries not used by
MIMIX are filtered out.
Additional parameters: Use F10 (Additional parameters) to access the following
parameters. These parameters are considered advanced configuration topics.
Remote journaling threshold (RJLNKTHLD) This parameter specifies the backlog
threshold criteria for the remote journal function. When the backlog reaches any of the
specified criteria, the threshold exceeded condition is indicated in the status of the
RJ link. The threshold can be specified as a time difference, a number of journal
entries, or both. When a time difference is specified, the value is the amount of time, in
minutes, between the timestamp of the last source journal entry and the timestamp of
the last remote journal entry. When a number of journal entries is specified, the value
is the number of journal entries that have not been sent from the local journal to the
remote journal. If *NONE is specified for a criterion, that criterion is not considered
when determining whether the backlog has reached the threshold.
Synchronization check interval (SYNCCHKITV) This parameter, which is only valid
for database processing, allows you to specify how many before-image entries to
process between synchronization checks. For MIMIX to use this feature, the journal
image file entry option (FEOPT parameter) must allow before-image journaling
(*BOTH). When you specify a value for the interval, a synchronization check entry is
sent to the apply process on the target system. The apply process compares the
before-image to the image in the file (the entire record, byte for byte). If there is a
synchronization problem, MIMIX puts the data group file entry on hold and stops
applying journal entries. The synchronization check transactions still occur even if
you specify to ignore before-images in the DB journal entry processing (DBJRNPRC)
parameter.
Time stamp interval (TSPITV) This parameter, which is only valid for database
processing, allows you to specify the number of entries to process before MIMIX
creates a time stamp entry. Time stamps are used to evaluate performance.
Note: The TSPITV parameter does not apply for remote journaling (RJ) data groups.
Verify interval (VFYITV) This parameter allows you to specify the number of journal
transactions (entries) to process before MIMIX performs additional processing.
When the value specified is reached, MIMIX verifies that the communications path
between the source system and the target system is still active and that the send and
receive processes are successfully processing transactions. A higher value uses less
system resources. A lower value provides more timely reaction to error conditions.
Larger, high-volume systems should have higher values. This value also affects how
often the status is updated with the "Last read" entries. A lower value results in more
accurate status information.
Data area polling interval (DTAARAITV) This parameter specifies the number of
seconds that the data area poller waits between checks for changes to data areas.
The poller process is only used when configured data group data area entries exist.
The preferred methods of replicating data areas require that data group object entries
be used to identify data areas. When object entries identify data areas, the value
specified in them for cooperative processing (COOPDB) determines whether the data
areas are processed through the user journal with advanced journaling, or through
the system journal.
Journal at creation (JRNATCRT) This parameter allows you to specify whether to
start journaling when objects are created in the libraries replicated by the data group.
This applies to new objects of type *FILE, *DTAARA, and *DTAQ that are
cooperatively processed. All new objects of the same type are journaled, including
those not replicated by the data group. If multiple data groups include the same library
in their configurations, allow only one data group to use journal at object creation
(*YES or *DFT). The default for this parameter is *DFT, which allows MIMIX to
determine the objects to journal at creation.
For example, a data group is configured to cooperatively process only file ABC from
library APPDTA. The library also contains data areas and temporary files that are not
configured for replication. Specifying a value that permits journaling of newly created
objects (*YES or *DFT) will result in all newly created files in library APPDTA being
journaled. Newly created data areas in this library would not be journaled.
Note: There are operating system restrictions and some IBM library restrictions. For
more information, see the requirements for implicit starting of journaling in
“What objects need to be journaled” on page 323. For additional information,
see “Processing of newly created files and objects” on page 127.
Parameters for automatic retry processing: MIMIX may use delay retry cycles
when performing system journal replication to automatically retry processing an object
that failed due to a locking condition or an in-use condition. It is normal for some
pending activity entries to undergo delay retry processing—for example, when a
conflict occurs between replicated objects in MIMIX and another job on the system.
The following parameters define the scope of two retry cycles:
Number of times to retry (RTYNBR) This parameter specifies the number of
attempts to make during a delay retry cycle.
First retry delay interval (RTYDLYITV1) This parameter specifies the amount of
time, in seconds, to wait before retrying a process in the first (short) delay retry
cycle.
Second retry delay interval (RTYDLYITV2) This parameter specifies the amount of
time, in seconds, to wait before retrying a process in the second (long) delay retry
cycle. This cycle is only used after all the retries for the RTYDLYITV1 parameter have
been attempted.
After the initial failed save attempt, MIMIX delays for the number of seconds specified
for the First retry delay interval (RTYDLYITV1) before retrying the save operation.
This is repeated for the specified number of times (RTYNBR).
If the object cannot be saved after all attempts in the first cycle, MIMIX enters the
second retry cycle. In the second retry cycle, MIMIX uses the number of seconds
specified in the Second retry delay interval (RTYDLYITV2) parameter and repeats the
save attempt for the specified number of times (RTYNBR).
If the object identified by the entry is in use (*INUSE) after the first and second retry
cycle attempts have been exhausted, a third retry cycle is attempted if the Automatic
object recovery policy is enabled. The values in effect for the Number of third
delay/retries policy and the Third retry interval (min.) policy determine the scope of the
third retry cycle. After all attempts have been performed, if the object still cannot be
processed because of contention with other jobs, the status of the entry will be
changed to *FAILED.
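As a sketch of how the two cycles interact, assume the hypothetical values RTYNBR(5), RTYDLYITV1(5), and RTYDLYITV2(300):

```
/* Hypothetical retry settings for a data group.                  */
CHGDGDFN DGDFN(INVENTORY CHICAGO NEWYORK)
         RTYNBR(5) RTYDLYITV1(5) RTYDLYITV2(300)

/* First (short) cycle:  5 retries, 5 seconds apart   =   25 sec. */
/* Second (long) cycle:  5 retries, 300 seconds apart = 1500 sec. */
/* Only after both cycles (and any configured third cycle) are    */
/* exhausted does the entry status change to *FAILED.             */
```

The same RTYNBR value governs both cycles; only the delay between attempts differs.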
Adaptive cache (ADPCHE) This parameter enables adaptive caching for a data
group. Adaptive caching is a technique by which MIMIX caches data into memory
before it is needed by user journal replication processes. Using adaptive caching
provides greater elapsed time performance by using additional memory.
File and tracking entry options (FEOPT) This parameter specifies default options
that determine how MIMIX handles file entries and tracking entries for the data group.
All database file entries, object tracking entries, and IFS tracking entries defined to
the data group use these options unless they are explicitly overridden by values
specified in data group file or object entries. File entry options in data group object
entries enable you to set values for files and tracking entries that are cooperatively
processed.
The options are as follows:
• Journal image This option allows you to control the kinds of record images that
are written to the journal when data updates are made to database file records,
IFS stream files, data areas or data queues. The default value *AFTER causes
only after-images to be written to the journal. The value *BOTH causes both
before-images and after-images to be written to the journal. Some database
techniques, such as keyed replication, may require the use of both before-images
and after-images. *BOTH is also required for the IBM RMVJRNCHG (Remove
Journal Change) command. See “Additional considerations for data groups” on
page 244 for more information.
• Omit open/close entries This option allows you to specify whether open and close
entries are omitted from the journal. The default value *YES indicates that open
and close operations on file members or IFS tracking entries defined to the data
group do not create open and close journal entries and are therefore omitted from
the journal. If you specify *NO, journal entries are created for open and close
operations and are placed in the journal.
• Replication type This option allows you to specify the type of replication to use for
database files defined to the data group. The default value *POSITION indicates
that each file is replicated based on the position of the record within the file.
Positional replication uses the values of the relative record number (RRN) found
in the journal entry header to locate a database record that is being updated or
deleted. MIMIX Dynamic Apply requires the value *POSITION.
The value *KEYED indicates that each file is replicated based on the value of the
primary key defined to the database file. The value of the key is used to locate a
database record that is being deleted or updated. MIMIX strongly recommends
that any file configured for keyed replication also be enabled for both before-
image and after-image journaling. Files defined using keyed replication must have
at least one unique access path defined. For additional information, see “Keyed
replication” on page 355.
• Lock member during apply This option allows you to choose whether you want the
database apply process to lock file members when they are being updated during
the apply process. This prevents inadvertent updates on the target system that
can cause synchronization errors. Members are locked only when the apply
process is active.
• Apply session With this option, you can assign a specific apply session for
processing files defined to the data group. The default value *ANY indicates that
MIMIX determines which apply session to use and performs load balancing.
Notes:
• Any changes made to the apply session option are not effective until the data
group is started with *YES specified for the clear pending and clear error
parameters.
• For IFS and object tracking entries, only apply session A is valid. For additional
information see “Database apply session balancing” on page 87.
• Collision resolution This option determines how data collisions are resolved. The
default value *HLDERR indicates that a file is put on hold if a collision is detected.
The value *AUTOSYNC indicates that MIMIX will attempt to automatically
synchronize the source and target file. You can also specify the name of the
collision resolution class (CRCLS) to use. A collision resolution class allows you to
specify how to handle a variety of collision types, including calling exit programs to
handle them. See the online help for the Create Collision Resolution Class
(CRTCRCLS) command for more information.
Note: The *AUTOSYNC value should not be used if the Automatic database
recovery policy is enabled.
• Disable triggers during apply This option determines if MIMIX should disable any
triggers on physical files during the database apply process. The default value
*YES indicates that triggers should be disabled by the database apply process
while the file is opened.
• Process trigger entries This option determines if MIMIX should process any
journal entries that are generated by triggers. The default value *YES indicates
that journal entries generated by triggers should be processed.
Database reader/send threshold (DBRDRTHLD) This parameter specifies the
backlog threshold criteria for the database reader (DBRDR) process. When the
backlog reaches any of the specified criteria, the threshold exceeded condition is
indicated in the status of the DBRDR process. If the data group is configured for
MIMIX source-send processing instead of remote journaling, this threshold applies to
the database send (DBSND) process. The threshold can be specified as time, journal
entries, or both. When time is specified, the value is the amount of time, in minutes,
between the timestamp of the last journal entry read by the process and the
timestamp of the last journal entry in the journal. When a journal entry quantity is
specified, the value is the number of journal entries that have not been read from the
journal. If *NONE is specified for a criterion, that criterion is not considered when
determining whether the backlog has reached the threshold.
Database apply processing (DBAPYPRC) This parameter allows you to specify
defaults for operations associated with the database apply processes. Each
configured apply session uses the values specified in this parameter. The areas for
which you can specify defaults are as follows:
• Force data interval You can specify the number of records that are processed
before MIMIX forces the apply process information to disk from cache memory. A
lower value provides easier recovery for major system failures. A higher value
provides for more efficient processing.
• Maximum open members You can specify the maximum number of members
(with journal transactions to be applied) that the apply process can have open at
one time. Once the limit specified is reached, the apply process selectively closes
one file before opening a new file. A lower value reduces disk usage by the apply
process. A higher value provides more efficient processing because MIMIX does
not open and close files as often.
• Threshold warning You can specify the number of entries the apply process can
have waiting to be applied before a warning message is sent. When the threshold
is reached, the threshold exceeded condition is indicated in the status of the
database apply process and a message is sent to the primary and secondary
message queues.
• Apply history log spaces You can specify the maximum number of history log
spaces that are kept after the journal entries are applied. Any value other than
zero (0) affects performance of the apply processes.
• Keep journal log user spaces You can specify the maximum number of journal log
spaces to retain after the journal entries are applied. Log user spaces are
automatically deleted by MIMIX. Only the number of user spaces you specify are
kept.
• Size of log user spaces (MB) You can specify the size of each log space (in
megabytes) in the log space chain. Log spaces are used as a staging area for
journal entries before they are applied. Larger log spaces provide better
performance.
Object processing (OBJPRC) This parameter allows you to specify defaults for
object replication. The areas for which you can specify defaults are as follows:
• Object default owner You can specify the name of the default owner for objects
whose owning user profile does not exist on the target system. The product
default uses QDFTOWN for the owner user profile.
• DLO transmission method You can specify the method used to transmit the DLO
content and attributes to the target system. The value *OPTIMIZED uses i5/OS
APIs. The value *SAVRST uses i5/OS save and restore commands.
• IFS transmission method You can specify the method used to transmit IFS object
content to the target system. The value *SAVRST uses i5/OS save and restore
commands. The value *OPTIMIZED uses i5/OS APIs.
Note: It is recommended that you use the *OPTIMIZED method of IFS
transmission only in environments in which the high volume of IFS activity
results in persistent replication backlogs. The i5/OS save and restore
method guarantees that all attributes of an IFS object are replicated. The
IFS optimization method does not currently replicate digital signatures or
other attributes that have been added in i5/OS V5R2 or later.
• User profile status You can specify the user profile Status value for user profiles
when they are replicated. This allows you to replicate user profiles with the same
status as the source system in either an enabled or disabled status for normal
operations. If operations are switched to the backup system, user profiles can
then be enabled or disabled as needed as part of the switching process.
• Keep deleted spooled files You can specify whether to retain replicated spooled
files on the target system after they have been deleted from the source system.
When you specify *YES, the replicated spooled files are retained on the target
system after they are deleted from the source system. MIMIX does not perform
any clean-up of these spooled files. You must delete them manually when they
are no longer needed. If you specify *NO, the replicated spooled files are deleted
from the target system when they are deleted from the source system.
• Keep DLO system object name You can specify whether the DLO on the target
system is created with the same system object name as the DLO on the source
system. The system object name is only preserved if the DLO is not being
redirected during the replication process. If the DLO from the source system is
being directed to a different name or folder on the target system, then the system
object name will not be preserved.
• Object retrieval delay You can specify the amount of time, in seconds, to wait after
an object is created or updated before MIMIX packages the object. This delay
provides time for your applications to complete their access of the object before
MIMIX begins packaging the object.
Object send threshold (OBJSNDTHLD) This parameter specifies the backlog
threshold criteria for the object send (OBJSND) process. When the backlog reaches
any of the specified criteria, the threshold exceeded condition is indicated in the
status of the OBJSND process. The threshold can be specified as time, journal
entries, or both. When time is specified, the value is the amount of time, in minutes,
between the timestamp of the last journal entry read by the process and the
timestamp of the last journal entry in the journal. When a journal entry quantity is
specified, the value is the number of journal entries that have not been read from the
journal. If *NONE is specified for a criterion, that criterion is not considered when
determining whether the backlog has reached the threshold.
Object retrieve processing (OBJRTVPRC) This parameter allows you to specify the
minimum and maximum number of jobs allowed to handle object retrieve requests
and the threshold at which the number of pending requests queued for processing
causes additional temporary jobs to be started. The specified minimum number of
jobs will be started when the data group is started. During periods of peak activity, if
the number of pending requests exceeds the backlog jobs threshold, additional jobs,
up to the maximum, are started to handle the extra work. When the backlog is
handled and activity returns to normal, the extra jobs will automatically end. If the
backlog reaches the warning message threshold, the threshold exceeded condition is
indicated in the status of the object retrieve (OBJRTV) process. If *NONE is specified
for the warning message threshold, the process status will not indicate that a backlog
exists.
Container send processing (CNRSNDPRC) This parameter allows you to specify
the minimum and maximum number of jobs allowed to handle container send
requests and the threshold at which the number of pending requests queued for
processing causes additional temporary jobs to be started. The specified minimum
number of jobs will be started when the data group is started. During periods of peak
activity, if the number of pending requests exceeds the backlog jobs threshold,
additional jobs, up to the maximum, are started to handle the extra work. When the
backlog is handled and activity returns to normal, the extra jobs will automatically end.
If the backlog reaches the warning message threshold, the threshold exceeded
condition is indicated in the status of the container send (CNRSND) process. If
*NONE is specified for the warning message threshold, the process status will not
indicate that a backlog exists.
Object apply processing (OBJAPYPRC) This parameter allows you to specify the
minimum and maximum number of jobs allowed to handle object apply requests and
the threshold at which the number of pending requests queued for processing triggers
additional temporary jobs to be started. The specified minimum number of jobs will be
started when the data group is started. During periods of peak activity, if the number
of pending requests exceeds the backlog threshold, additional jobs, up to the
maximum, are started to handle the extra work. When the backlog is handled and
activity returns to normal, the extra jobs will automatically end. You can also
specify a warning message threshold that indicates the number of pending
requests that can wait in the queue for processing before a warning message is sent. When
the threshold is reached, the threshold exceeded condition is indicated in the status of
the object apply process and a message is sent to the primary and secondary
message queues.
User profile for submit job (SBMUSR) This parameter allows you to specify the
name of the user profile used to submit jobs. The default value *JOBD indicates that
the user profile named in the specified job description is used for the job being
submitted. The value *CURRENT indicates that the same user profile used by the job
that is currently running is used for the submitted job.
Send job description (SNDJOBD) This parameter allows you to specify the name
and library of the job description used to submit send jobs. The product default uses
MIMIXSND in library MIMIXQGPL for the send job description.
Apply job description (APYJOBD) This parameter allows you to specify the name
and library of the job description used to submit apply requests. The product default
uses MIMIXAPY in library MIMIXQGPL for the apply job description.
Reorganize job description (RGZJOBD) This parameter, used by database
processing, allows you to specify the name and library of the job description used to
submit reorganize jobs. The product default uses MIMIXRGZ in library MIMIXQGPL
for the reorganize job description.
Synchronize job description (SYNCJOBD) This parameter, used by database
processing, allows you to specify the name and library of the job description used to
submit synchronize jobs. The product default uses MIMIXSYNC in library MIMIXQGPL
for the synchronize job description. This is valid for any synchronize command that
does not have a JOBD parameter on the display.
Job restart time (RSTARTTIME) MIMIX data group jobs restart daily to maintain the
MIMIX environment. You can change the time at which these jobs restart. The source
or target role of the system affects the results of the time you specify on a data group
definition. Results may also be affected if you specify a value that uses the job restart
time in a system definition defined to the data group. Changing the job restart time is
considered an advanced technique.
1. Recovery windows and recovery points are supported with the MIMIX CDP™ feature, which
requires an additional access code.
• File and tracking entry options (FEOPT)
Journal image *BOTH
For each data group file entry, the following must be specified:
• File entry options
Journal image *DGDFT or *BOTH
Finally, if you are changing an existing data group to have these values, you must end
and restart the data group. Once you have these values specified, you will be able to
use the RMVJRNCHG command if needed.
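A hedged sketch of that change sequence follows. The data group name is hypothetical, the FEOPT element order is illustrative, and the end/start commands are shown as assumptions; prompt each command to confirm its parameters:

```
/* Change the data group default to journal both before-images  */
/* and after-images. (Hypothetical data group name; FEOPT       */
/* element order illustrative -- prompt the command to verify.) */
CHGDGDFN DGDFN(INVENTORY CHICAGO NEWYORK) FEOPT(*BOTH)

/* End and restart the data group so the change takes effect.   */
ENDDG DGDFN(INVENTORY CHICAGO NEWYORK)
STRDG DGDFN(INVENTORY CHICAGO NEWYORK)
```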
Creating a data group definition
Shipped default values for the Create Data Group Definition (CRTDGDFN) command
result in data groups configured for MIMIX Dynamic Apply. These data groups use
remote journaling as an integral part of the user journal replication processes. For
additional information see Table 12 in “Considerations for LF and PF files” on
page 105. For information about command parameters, see “Tips for data group
parameters” on page 234.
To create a data group, do the following:
1. To access the appropriate command, do the following:
a. From the MIMIX Basic Main Menu, type 11 (Configuration menu) and press
Enter.
b. From the MIMIX Configuration Menu, select option 4 (Work with data group
definitions) and press Enter.
c. From the Work with Data Group Definitions display, type a 1 (Create) next to
the blank line at the top of the list area and press Enter.
2. The Create Data Group Definition (CRTDGDFN) display appears. Specify a valid
three-part name at the Data group definition prompts.
Note: Data group names cannot be UPSMON or begin with the characters MM.
3. For the remaining prompts on the display, verify the values shown are what you
want. If necessary, change the values.
a. If you want a specific prefix to be used for jobs associated with the data group,
specify a value at the Short data group name prompt. Otherwise, MIMIX will
generate a prefix.
b. Ensure that the value of the Data source prompt represents the system that
you want to use as the source of data to be replicated.
c. Verify that the value of the Allow to be switched prompt is what you want.
d. Verify that the value of the Data group type prompt is what you need. MIMIX
Dynamic Apply requires either *ALL or *DB. Legacy cooperative processing
and user journal replication of IFS objects, data areas, and data queues
require *ALL.
e. Verify that the value of the Primary transfer definition prompt is what you want.
f. If you want MIMIX to have access to an alternative communications path,
specify a value for the Secondary transfer definition prompt.
g. Verify that the value of the Reader wait time (seconds) prompt is what you
want.
h. Press Enter.
4. If you specified *OBJ for the Data group type, skip to Step 9.
5. The Journal on target prompt appears on the display. Verify that the value shown
is what you want and press Enter.
Note: If you specify *YES and you require that the status of journaling on the
target system is accurate, you should perform a save and restore
operation on the target system prior to loading the data group file entries. If
you are performing your initial configuration, however, it is not necessary
to perform a save and restore operation. You will synchronize as part of
the configuration checklist.
6. More prompts appear on the display that identify journaling information for the
data group. You may need to use the Page Down key to see the prompts. Do the
following:
a. Ensure that the values of System 1 journal definition and System 2 journal
definition identify the journal definitions you need.
Notes:
• If you have not journaled before, the value *DGDFN is appropriate. If you
have an existing journaling environment that you have identified to MIMIX in
a journal definition, specify the name of the journal definition.
• If you only see one of the journal definition prompts, you have specified *NO
for both the Allow to be switched prompt and the Journal on target prompt.
The journal definition prompt that appears is for the source system as
specified in the Data source prompt.
b. If any objects to replicate are located in an auxiliary storage pool (ASP) group
on either system, specify values for System 1 ASP group and System 2 ASP
group as needed. The ASP group name is the name of the primary ASP device
within the ASP group.
c. The default for the Use remote journal link prompt is *YES, which is required
for MIMIX Dynamic Apply and preferred for other configurations. MIMIX creates a
transfer definition and an RJ link, if needed. To create a data group definition
for a source-send configuration, change the value to *NO.
d. At the Cooperative journal (COOPJRN) prompt, specify the journal for
cooperative operations. For new data groups, the value *DFT automatically
resolves to *USRJRN when Data group type is *ALL or *DB and Remote
journal link is *YES. The value *USRJRN processes through the user
(database) journal while the value *SYSJRN processes through the system
(audit) journal.
7. At the Number of DB apply sessions prompt, specify the number of apply sessions
you want to use.
8. Verify that the values shown for the DB journal entry processing prompts are what
you want.
Note: *SEND is required for the IBM RMVJRNCHG (Remove Journal Change)
command. See “Additional considerations for data groups” on page 244
for more information.
9. At the Description prompt, type a text description of the data group definition,
enclosed in apostrophes.
10. Do one of the following:
• To accept the basic data group configuration, press Enter. Most users can
accept the default values for the remaining parameters. The data group is
created when you press Enter.
• To access prompts for advanced configuration, press F10 (Additional
Parameters) and continue with the next step.
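Taken together, steps 2 through 10 amount to a single command. The following sketch creates a data group using the shipped defaults for MIMIX Dynamic Apply; the three-part name, the apply session count, and the description are hypothetical, and only parameters named in this procedure are shown:

```
/* Create a data group that takes the shipped defaults for       */
/* MIMIX Dynamic Apply (remote journaling; COOPJRN(*DFT)         */
/* resolves to *USRJRN). Hypothetical names and values; the      */
/* TEXT keyword for the description is assumed.                  */
CRTDGDFN DGDFN(INVENTORY CHICAGO NEWYORK)
         COOPJRN(*DFT) NBRDBAPY(2)
         TEXT('Inventory application data group')
```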
Advanced Data Group Options: The remaining steps of this procedure are only
necessary if you need to access options for advanced configuration topics. The
prompts are listed in the order they appear on the display. Because i5/OS does not
allow additional parameters to be prompt-controlled, you will see all parameters
regardless of the value specified for the Data group type prompt.
11. Specify the values you need for the following prompts associated with user journal
replication:
• Remote journaling threshold
• Synchronization check interval
• Time stamp interval
• Verify interval
• Data area polling interval
• Journal at creation
12. Specify the values you need for the following prompts associated with system
journal replication:
• Number of times to retry
• First retry delay interval
• Second retry delay interval
13. Accept the value *YES for the Adaptive cache prompt unless the system is
memory constrained.
14. Specify the values you need for each of the prompts on the File and tracking ent.
opts (FEOPT) parameter.
Notes:
• Replication type must be *POSITION for MIMIX Dynamic Apply.
• Apply session A is used for IFS objects, data areas, and data queues that are
configured for user journal replication. For more information see “Database
apply session balancing” on page 87.
• The journal image value *BOTH is required for the IBM RMVJRNCHG
(Remove Journal Change) command. See “Additional considerations for data
groups” on page 244 for more information.
15. Specify the values you need for each element of the following parameters:
• Database reader/send threshold
• Database apply processing
• Object processing
Changing a data group definition
For information about command parameters, see “Tips for data group parameters” on
page 234.
To change a data group definition, do the following:
1. From the Work with DG Definitions display, type a 2 (Change) next to the data
group you want and press Enter.
2. The Change Data Group Definition (CHGDGDFN) display appears. Press Enter to
see additional prompts.
3. Make any changes you need for the values of the prompts. Page Down to see
more of the prompts.
Note: If you change the Number of DB apply sessions prompt (NBRDBAPY),
you need to start the data group specifying *YES for the Clear pending
prompt (CLRPND).
4. If you need to access advanced functions, press F10 (Additional parameters).
Make any changes you need for the values of the prompts.
5. When you are ready to accept the changes, press Enter.
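For example, changing the number of apply sessions and then restarting the data group with Clear pending might look like the following. The data group name is hypothetical, and the ENDDG/STRDG restart commands are shown as assumptions; the CLRPND keyword comes from the note in step 3:

```
/* Change the number of database apply sessions.                */
CHGDGDFN DGDFN(INVENTORY CHICAGO NEWYORK) NBRDBAPY(4)

/* Restart the data group, specifying *YES for Clear pending    */
/* as the note in step 3 requires. (Restart commands assumed.)  */
ENDDG DGDFN(INVENTORY CHICAGO NEWYORK)
STRDG DGDFN(INVENTORY CHICAGO NEWYORK) CLRPND(*YES)
```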
Fine-tuning backlog warning thresholds for a data group
threshold conditions would have on RTO and your tolerance for data loss in the
event of a failure.
Table 31 lists the shipped values for thresholds available in a data group definition,
identifies the risk associated with a backlog for each replication process, and
identifies available options to address a persistent threshold condition. For each data
group, you may need to use multiple options or adjust one or more threshold values
multiple times before finding an appropriate setting.
Table 31. Shipped threshold values for replication processes and the risk associated
with a backlog

• Remote journaling threshold (shipped value: 10 minutes; applicable options: 3, 4)
All journal entries in the backlog for the remote journaling function exist only in the
source system journal and are waiting to be transmitted to the remote journal.
These entries cannot be processed by MIMIX user journal replication processes
and are at risk of being lost if the source system fails. After the source system
becomes available again, journal analysis may be required.

• Database reader/send threshold (shipped value: 10 minutes; applicable options:
2, 3, 4)
For data groups that use remote journaling, all journal entries in the database
reader backlog are physically located on the target system but MIMIX has not
started to replicate them. If the source system fails, these entries need to be read
and applied before switching.
For data groups that use MIMIX source-send processing, all journal entries in the
database send backlog are waiting to be read and to be transmitted to the target
system. The backlogged journal entries exist only in the source system and are at
risk of being lost if the source system fails. After the source system becomes
available again, journal analysis may be required.

• Database apply warning message threshold (shipped value: 100,000 entries;
applicable options: 2, 3, 4)
All of the entries in the database apply backlog are waiting to be applied to the
target system. If the source system fails, these entries need to be applied before
switching. A large backlog can also affect performance.

• Object send threshold (shipped value: 10 minutes; applicable options: 2, 3, 4)
All of the journal entries in the object send backlog exist only in the system journal
on the source system and are at risk of being lost if the source system fails. MIMIX
may not have determined all of the information necessary to replicate the objects
associated with the journal entries. As this backlog clears, subsequent processes
may have backlogs as replication progresses.

• Object retrieve warning message threshold (shipped value: 100 entries;
applicable options: 1, 2, 3, 4)
All of the objects associated with journal entries in the object retrieve backlog are
waiting to be packaged so they can be sent to the target system. The latest
changes to these objects exist only in the source system and are at risk of being
lost if the source system fails. As this backlog clears, subsequent processes may
have backlogs as replication progresses.

• Container send warning message threshold (shipped value: 100 entries;
applicable options: 1, 2, 3, 4)
All of the packaged objects associated with journal entries in the container send
backlog are waiting to be sent to the target system. The latest changes to these
objects exist only in the source system and are at risk of being lost if the source
system fails. As this backlog clears, subsequent processes may have backlogs as
replication progresses.

• Object apply warning message threshold (shipped value: 100 requests;
applicable options: 1, 2, 3, 4)
All of the entries in the object apply backlog are waiting to be applied to the target
system. If the source system fails, these entries need to be applied before
switching. Any related objects for which an automatic recovery action was
collecting data may be lost.
The following options are available, listed in order of preference. Some options are
not available for all thresholds.
Option 1 - Adjust the number of available jobs. This option is available only for the
object retrieve, container send, and object apply processes. Each of these processes
has a configurable minimum and maximum number of jobs, a threshold at which
more jobs are started, and a warning message threshold. If the number of entries in a
backlog divided by the number of active jobs exceeds the job threshold, extra jobs are
automatically started in an attempt to address the backlog. If the backlog reaches the
higher value specified in the warning message threshold, the process status reflects
the threshold condition. If the process frequently shows a threshold status, the
maximum number of jobs may be too low or the job threshold value may be too high.
Adjusting either value in the data group configuration can result in more throughput.
Option 2 - Temporarily increase job performance. This option is available for all
processes except the RJ link. Use work management functions to increase the
resources available to a job by increasing its run priority or its timeslice (CHGJOB
command). These changes are effective only for the current instance of the job. The
changes do not persist if the job is ended manually or by nightly cleanup operations
resulting from the configured job restart time (RESTARTTIME) on the data group
definition.
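For example, if a database apply job is backlogged, you might temporarily raise its run priority and timeslice with the IBM i CHGJOB command. The qualified job name below is illustrative; identify the actual MIMIX job first (for example, with WRKACTJOB SBS(MIMIXSBS)):

```
/* Illustrative only: the qualified job name must match the actual   */
/* MIMIX job on your system. These changes last only for the current */
/* instance of the job.                                              */
CHGJOB JOB(123456/MIMIXOWN/APYJOB) RUNPTY(15) TIMESLICE(2000)
```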
Option 3 - Change threshold values or add criteria. All processes support
changing the threshold value. In addition, if the quantity of entries is more of a
concern than time, some processes support specifying additional threshold criteria
not used by shipped default settings. For the remote journal, database reader (or
database send), and object send processes, you can adjust the threshold so that a
number of journal entries is used as criteria instead of, or in conjunction with, a time
value. If both time and entries are specified, the first criterion reached will trigger the
threshold condition. Changes to threshold values are effective the next time the
process status is requested.
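As an illustrative sketch, such a change could be made with the Change Data Group Definition (CHGDGDFN) command. The threshold parameter keyword and element values shown are placeholders, not documented MIMIX keywords; prompt the command (F4) on your installation to see the threshold parameters and criteria your MIMIX level actually supports:

```
/* Hypothetical sketch: DGDFN takes the three-part data group name.  */
/* DBRDRTHLD and its elements (10 minutes, 500000 entries) are       */
/* placeholders; prompt CHGDGDFN with F4 for the real keywords.      */
CHGDGDFN DGDFN(MYDG SYSTEM1 SYSTEM2) DBRDRTHLD(10 500000)
```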
Option 4 - Get assistance. If you tried the other options and threshold conditions
persist, contact your Certified MIMIX Consultant for assistance. It may be necessary
to change configurations to adjust what is defined to each data group or to make
permanent work management changes for specific jobs.
Chapter 11 Additional options: working with definitions
The procedures for performing common functions, such as copying, displaying, and
renaming, are very similar for all types of definitions used by MIMIX. The generic
procedures in this topic can be used for copying, deleting, displaying, and printing
definitions. Specific procedures are included for renaming each type of definition and
for swapping system definition names.
The topics in this chapter include:
• “Copying a definition” on page 255 provides a procedure for copying a system
definition, transfer definition, journal definition, or a data group definition.
• “Deleting a definition” on page 256 provides a procedure for deleting a system
definition, transfer definition, journal definition, or a data group definition.
• “Displaying a definition” on page 257 provides a procedure for displaying a system
definition, transfer definition, journal definition, or a data group definition.
• “Printing a definition” on page 257 provides a procedure for creating a spooled file,
which you can print, that identifies a system definition, transfer definition, journal
definition, or a data group definition.
• “Renaming definitions” on page 258 provides procedures for renaming definitions,
such as renaming a system definition, which is typically done as a result of a
change in software.
Copying a definition
Use this procedure on a management system to copy a system definition, transfer
definition, journal definition, or a data group definition.
Notes for data group definitions:
• The data group entries associated with a data group definition are not copied.
• Before you copy a data group definition, ensure that activity is ended for the
definition to which you are copying.
Notes for journal definitions:
• The journal definition identified in the From journal definition prompt must exist
before it can be copied. The journal definition identified in the To journal definition
prompt cannot exist when you specify *NO for the Replace definition prompt.
• If you specify *YES for the Replace definition prompt, the To journal definition
prompt must identify an existing definition. It is possible to introduce conflicts in
your configuration when replacing an existing journal definition. These conflicts are
automatically resolved or an error message is sent when the journal environment
for the definition is built.
To copy a definition, do the following:
Note: The following procedure includes using MIMIX menus. See “Accessing the
MIMIX Main Menu” on page 91 for information about using these.
Deleting a definition
Use this procedure on a management system to delete a system definition, transfer
definition, journal definition, or a data group definition.
To delete a definition, do the following:
Note: The following procedure includes using MIMIX menus. See “Accessing the
MIMIX Main Menu” on page 91 for information about using these.
1. Ensure that all activity is ended for the definition you are deleting:
a. From the MIMIX Main Menu, select option 2 (Work with systems) and press
Enter.
b. Type an 8 (Work with data groups) next to the system you want and press
Enter.
c. The result is a list of data groups for the system you selected. Type a 17 (File
entries) next to the data group you want and press Enter.
d. On the Work with DG File Entries display, verify that the status of the file
entries is *INACTIVE. If necessary, use option 10 (End journaling).
e. On the Work with Data Groups display, use option 10 (End data group).
f. Before deleting a system definition, on the Work with Systems display, use
option 10 (End managers).
2. From the MIMIX Main Menu, select option 11 (Configuration menu) and press
Enter.
3. From the MIMIX Configuration Menu, select the option for the type of definition
you want and press Enter.
4. The "Work with" display for the definition type appears. Type a 4 (Delete) next to
the definition you want and press Enter.
5. A confirmation display appears with a list of definitions to be deleted. To delete the
definitions press Enter.
Displaying a definition
Use this procedure to display a system definition, transfer definition, journal definition,
or a data group definition.
To display a definition, do the following:
Note: The following procedure includes using MIMIX menus. See “Accessing the
MIMIX Main Menu” on page 91 for information about using these.
1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu, select the option for the type of definition
you want and press Enter.
3. The "Work with" display for the definition type appears. Type a 5 (Display) next
to the definition you want and press Enter.
4. The definition display appears. Page Down to see all of the values.
Printing a definition
Use this procedure to create a spooled file, which you can print, that identifies a
system definition, transfer definition, journal definition, or a data group definition.
To print a definition, do the following:
Note: The following procedure includes using MIMIX menus. See “Accessing the
MIMIX Main Menu” on page 91 for information about using these.
1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu, select the option for the type of definition
you want and press Enter.
3. The "Work with" display for the definition type appears. Type a 6 (Print) next to
the definition you want and press Enter.
4. A spooled file is created with a name of MX***DFN, where *** indicates the type of
definition. You can print the spooled file according to your standard print
procedures.
Renaming definitions
The procedures for renaming a system definition, transfer definition, journal
definition, or data group definition must be run from a management system.
Attention: Before you rename any definition, ensure that all other
configuration elements related to it are not active.
When swapping system definition names, a temporary system definition name must
be used because there cannot be two system definitions with the same name.
To rename system definitions, do the following for each system whose definition you
are renaming from the management system unless noted otherwise:
Note: The following procedure includes using MIMIX menus. See “Accessing the
MIMIX Main Menu” on page 91 for information about using these.
1. Perform a controlled end of the MIMIX installation. See the Using MIMIX book for
procedures for ending MIMIX.
2. End the MIMIXSBS subsystem on all systems. See the Using MIMIX book for
procedures for ending the MIMIXSBS subsystem.
3. From the MIMIX Intermediate Main Menu, select option 2 (Work with systems)
and press Enter.
4. From the Work with Systems display, select option 8 (Work with data groups) on
the system whose definition you are renaming, and press Enter.
5. For each data group listed, do the following:
a. From the Work with Data Groups display, select option 8 (Display status) and
press Enter.
b. Record the Last Read Receiver name and Sequence # for both database and
object.
6. If changing the host name or IP address, do the following steps. Otherwise,
continue with Step 7.
a. From the MIMIX Intermediate Main Menu, select option 11 (Configuration
menu) and press Enter.
b. From the MIMIX Configuration Menu, select option 2 (Work with transfer
definitions) and press Enter.
c. The Work with Transfer Definitions display appears. Select option 2 (Change)
for each transfer definition that includes the system whose definition you are
renaming and press Enter.
d. The Change Transfer Definition (CHGTFRDFN) display appears. Press F10 to
access additional parameters.
e. Specify the new host name or IP address for the System 1 host name or
address and System 2 host name or address and press Enter.
Note: Many installations have an autostart entry for the STRSVR command.
Review autostart entries for possible updates to a new system name or IP
address. For more information, see “Identifying the autostart job entry in
the MIMIXSBS subsystem” on page 191 and “Changing the job description
for an autostart job entry” on page 191.
7. Start the MIMIXSBS subsystem and the port jobs on all systems. If you changed
host names or IP addresses, use the values specified in Step 6.
8. For all systems, ensure communications before continuing. Follow the steps in
topic “Verifying all communications links” on page 195.
9. From the Work with System Definitions (WRKSYSDFN) display, type a 7
(Rename) next to the system whose definition is being renamed and press Enter.
10. The Rename System Definitions (RNMSYSDFN) display appears. At the To
system definition prompt, specify the new name for the system whose definition is
being renamed and press Enter.
11. The Confirm Rename System Definition display appears. Press Enter.
12. From the MIMIX Intermediate Main Menu, select option 2 (Work with systems)
and press Enter.
13. The Work with Systems display appears. Type a 9 (Start) next to the management
system you want and press Enter.
14. The Start MIMIX Managers (STRMMXMGR) display appears. Do the following:
a. At the Manager prompt, specify *ALL.
b. Press F10 to access additional parameters.
c. In the Reset configuration prompt, specify *YES.
d. Press Enter.
15. The Work with Systems display appears. For each network system, do the
following:
a. Type a 9 (Start) next to each network system you want and press Enter.
b. The Start MIMIX Managers (STRMMXMGR) display appears. Press Enter.
Wait for the MIMIX Managers to start before continuing.
16. From the Work with Systems display, select option 8 (Work with data groups) on
the system whose definitions you have renamed and press Enter.
17. For each data group listed, do the following:
a. From the Work with Data Groups display, select option 9 (Start DG) and press
Enter.
b. The Start Data Group (STRDG) display appears. Press F10 to display
additional parameters.
c. Type the Receiver names and Sequence #s that were recorded in Step 5b for
both database and object, adding 1 to each sequence number. Press Enter.
18. From the Work with Systems display, select option 8 (Work with data groups) on
the system whose definition you have renamed and ensure all data groups are
active. You should see the letter ‘A’, highlighted blue, in the database source
column. Refer to the Using MIMIX book for more information.
19. Press F3 to return to the Work with Systems display.
20. From the Work with Systems display, select option 8 (Work with data groups) on
the management system and press Enter.
21. From the Work with Data Groups display, select option 9 (Start DG) for data
groups (highlighted red) that are not active and press Enter.
22. The Start Data Group (STRDG) display appears. Press Enter. Additional
parameters are displayed. Press Enter again to start the data groups.
23. The Work with data groups display appears. Ensure all data groups are active.
You should see the letter ‘A’, highlighted blue, in the database source column.
Refer to the Using MIMIX book for more information. Press F5 to refresh data.
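The rename performed in Steps 9 through 11 can also be sketched in command form. The parameter keywords and system names below are illustrative; prompt the Rename System Definition (RNMSYSDFN) command (F4) to confirm the keywords on your installation:

```
/* Illustrative sketch: keywords and system names are placeholders.  */
/* RNMSYSDFN is normally reached with option 7 on the WRKSYSDFN      */
/* display.                                                          */
RNMSYSDFN SYSDFN(OLDSYS) TOSYSDFN(NEWSYS)
```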
12. From the Change Data Group Definition display, specify the new name for the
transfer definition and press Enter until the Work with DG Definitions display
appears.
13. Press F12 to return to the MIMIX Configuration Menu.
14. From the MIMIX Configuration Menu, select option 8 (Work with remote journal
links) and press Enter.
15. From the Work with RJ Links menu, press F11 to display the transfer definitions.
16. Type a 2 (Change) next to the RJ link where you changed the transfer definition
and press Enter.
17. From the Change Remote Journal Link display, specify the new name for the
transfer definition and press Enter.
1. Ensure that the data group is ended. If the data group is active, end it using the
procedure “Ending a data group in a controlled manner” in the Using MIMIX book.
2. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu)
and press Enter.
3. From the MIMIX Configuration Menu, select option 4 (Work with data group
definitions) and press Enter.
4. From the Work with DG Definitions menu, type a 7 (Rename) next to the data
group name you want to rename and press Enter.
5. From the Rename Data Group Definition display, specify the new name for the
data group definition and press Enter.
Chapter 12
Data group entries can identify one or many objects to be replicated or excluded from
replication. You can add individual data group entries, load entries from an existing
source, and change entries as needed.
The topics in this chapter include:
• “Creating data group object entries” on page 267 describes data group object
entries which are used to identify library-based objects for replication. Procedures
for creating these are included.
• “Creating data group file entries” on page 272 describes data group file entries
which are required for user journal replication of *FILE objects. Procedures for
creating these are included.
• “Creating data group IFS entries” on page 282 describes data group IFS entries
which identify IFS objects for replication. Procedures for creating these are
included.
• “Loading tracking entries” on page 284 describes how to manually load tracking
entries for IFS objects, data areas, and data queues that are configured for user
journal replication.
• “Creating data group DLO entries” on page 287 describes data group DLO entries
which identify document library objects (DLOs) for replication by MIMIX system
journal replication processes. Procedures for creating these are included.
• “Creating data group data area entries” on page 289 describes data group data
area entries which identify data areas to be replicated by the data area poller
process. Procedures for creating these are included.
• “Additional options: working with DG entries” on page 291 provides procedures for
performing common data group entry functions, such as copying, removing, and
displaying entries.
The appendix “Supported object types for system journal replication” on page 549
lists i5/OS object types and indicates whether each object type is replicated by
MIMIX.
Creating data group object entries
Data group object entries are used to identify library-based objects for replication.
How replication is performed for the objects identified depends on the object type and
configuration settings. For object types that cannot be journaled to a user journal,
system journal replication processes are used. For object types that can be journaled
(*FILE, *DTAARA, and *DTAQ), values specified in the object entry and other
configuration information determine whether the object is replicated through the
system journal or is cooperatively processed with the user journal. For *FILE objects,
several configuration options are available, some of which also require data group file
entries to be configured.
For detailed concepts and requirements for supported configurations, see the
following topics:
• “Identifying library-based objects for replication” on page 100
• “Identifying logical and physical files for replication” on page 105
• “Identifying data areas and data queues for replication” on page 112
When you configure MIMIX, you can create data group object entries by adding
individual object entries or by using the custom load function for library-based objects.
The custom load function can simplify creating data group entries. This function
generates a list of objects that match your specified criteria, from which you can
selectively create data group object entries. For example, if you want to replicate all
but a few of the data areas in a specific library, you could use the Add Data Group
Object Entry (ADDDGOBJE) command to create a single data group object entry that
includes all data areas in the library. Then, using the same object selection criteria
with the custom load function, you can select from a list of data areas in the library to
create exclude entries for the objects you do not want replicated.
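In command form, a single include entry covering every data area in a library might look like the following sketch. The ADDDGOBJE command and the *INCLD value appear in this topic, but the parameter keywords and the library name are assumptions; prompt the command (F4) to confirm the keywords on your installation:

```
/* Illustrative sketch: include all data areas in library APPLIB.    */
/* Keywords shown (LIB1, OBJ1, OBJTYPE, PRCTYPE) are assumed; verify */
/* them by prompting ADDDGOBJE with F4.                              */
ADDDGOBJE DGDFN(MYDG SYSTEM1 SYSTEM2) LIB1(APPLIB) OBJ1(*ALL)
          OBJTYPE(*DTAARA) PRCTYPE(*INCLD)
```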
Once you have created data group object entries, you can tailor them to meet your
requirements. You can also use the #DGFE audit or the Check Data Group File
Entries (CHKDGFE) command to ensure that the correct file entries exist for the
object entries configured for the specified data group.
To add or change a data group object entry, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 20 (Object entries) next to the
data group you want and press Enter.
3. The Work with DG Object Entries display appears. Do one of the following:
• To add a new entry, type a 1 (Add) next to the blank line at the top of the list
and press Enter.
• To change an existing entry, type a 2 (Change) next to the entry you want and
press Enter.
4. The appropriate Data Group Object Entry display appears. When adding an entry,
you must specify values for the System 1 library and System 1 object prompts.
Note: When changing an existing object entry to enable replication of data areas
or data queues from a user journal (COOPDB(*YES)), make sure that you
specify only the objects you want to enable for the System 1 object
prompt. Otherwise, all objects in the library specified for System 1 library
will be enabled.
5. If necessary, specify a value for the Object type prompt.
6. Press F9 (All parameters).
7. If necessary, specify values for the Attribute, System 2 library, System 2 object,
and Object auditing value prompts.
8. At the Process type prompt, specify whether resulting data group object entries
should include (*INCLD) or exclude (*EXCLD) the identified objects.
9. Specify appropriate values for the Cooperate with database and Cooperating
object types prompts.
Note: To ensure that journaled files, data areas, or data queues will be replicated
from the user journal, you must specify *YES for Cooperate with database
and you must specify the appropriate object types for Cooperating object
types.
10. Ensure that the remaining prompts contain the values you want for the data group
object entries that will be created. Press Page Down to see more prompts.
11. To specify file entry options that will override those set in the data group definition,
do the following:
a. If necessary, press Page Down to locate the File entry options prompt.
b. Specify the values you need on the elements of the File entry options prompt.
12. Press Enter.
13. For object entries configured for user journal replication of data areas or data
queues, return to Step 7 in procedure “Checklist: Change *DTAARA, *DTAQ, IFS
objects to user journaling” on page 154 to complete additional steps necessary to
complete the conversion.
Synchronize the objects identified by data group entries before starting replication
processes or running MIMIX audits. The entries will be available to replication
processes after the data group is ended and restarted. This includes after the nightly
restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an
audit runs.
Creating data group file entries
Values specified on the File entry options (FEOPT) parameter override the values
loaded from the FEOPTSRC parameter for all data group file entries created by a
load request.
Regardless of where the configuration source and file entry option source are located,
the Load Data Group File Entries (LODDGFE) command must be used from a system
designated as a management system.
Note: The Load Data Group File Entries (LODDGFE) command performs a journal
verification check on the file entries using the Verify Journal File Entries
(VFYJRNFE) command. In order to accurately determine whether files are
being journaled to the target system, you should first perform a save and
restore operation to synchronize the files to the target system before loading
the data group file entries.
Procedure: Use this procedure to create data group file entries from the object
entries defined to a data group.
Note: The data group must be ended before using this procedure. Configuration
changes resulting from loading file entries are not effective until the data group
is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data
group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears. The name of the
data group for which you are creating file entries and the Configuration source
value of *DGOBJE are pre-selected. Press Enter.
5. The following prompts appear on the display. Specify appropriate values.
a. From data group definition - To load from entries defined to a different data
group, specify the three-part name of the data group.
b. Load from system - Ensure that the value specified is appropriate. For most
environments, files should be loaded from the source system of the data group
you are loading. (This value should be the same as the value specified for Data
source in the data group definition.)
c. Update option - If necessary, specify the value you want.
d. Default FE options source - Specify the source for loading values for default file
entry options. Each element in the file entry options is loaded from the
specified location unless you explicitly specify a different value for an element
in Step 6.
6. Optionally, you can specify a file entry option value to override those loaded from
the configuration source. Do the following:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.
7. Press Enter. The LODDGFE Entry Selection List display appears with a list of the
files identified by the specified configuration source.
8. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
9. To create the file entries, press Enter.
All selected files identified from the configuration source are represented in the
resulting file entries. Each generated file entry includes all members of the file. If
necessary, you can use “Changing a data group file entry” on page 279 to customize
values for any of the data group file entries.
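In command form, the load performed by this procedure might look like the following sketch; the data group name is illustrative, and CFGSRC(*DGOBJE) loads from the object entries already defined to that data group:

```
/* Illustrative: create file entries from the data group's own       */
/* object entries. DGDFN1 is a placeholder data group name.          */
LODDGFE DGDFN(DGDFN1) CFGSRC(*DGOBJE)
```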
Loading file entries from a library
Example: The data group file entries are created by loading from a library named
TESTLIB on the source system. This example assumes the configuration is set up so
that system 1 in the data group definition is the source for replication.
LODDGFE DGDFN(DGDFN1) CFGSRC(*NONE) LIB1(TESTLIB)
Since the FEOPT parameter was not specified, the resulting data group file entries
are created with a value of *DFT for all of the file entry options. Because there is no
MIMIX configuration source specified, the value *DFT results in the file entry options
specified in the data group definition being used.
Procedure: Use this procedure to create data group file entries from a library on
either the source system or the target system.
Note: The data group must be ended before using this procedure. Configuration
changes resulting from loading file entries are not effective until the data group
is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data
group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears with the name of
the data group for which you are creating file entries. At the Configuration source
prompt, specify *NONE and press Enter.
5. Identify the location of the files to be used for loading. For common
configurations, you can accomplish this by specifying a library name at the
System 1 library prompt and accepting the default values for the System 2 library,
Load from system, and File prompts.
If you are using system 2 as the data source for replication or if you want the
library name to be different on each system, then you need to modify these values
to appropriately reflect your data group defaults.
6. If necessary, specify the values you want for the following:
Update option prompt
Add entry for each member prompt
7. The value of the Default FE options source prompt is ignored when loading from a
library. To optionally specify file entry options, do the following:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.
8. Press Enter. The LODDGFE Entry Selection List display appears with a list of the
files identified by the specified configuration source.
9. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
10. To create the file entries, press Enter.
All selected files identified from the configuration source are represented in the
resulting file entries. If necessary, you can use “Changing a data group file entry” on
page 279 to customize values for any of the data group file entries.
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.
8. Press Enter. The LODDGFE Entry Selection List display appears with a list of the
files identified by the specified configuration source.
9. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
10. To create the file entries, press Enter.
All selected files identified from the configuration source are represented in the
resulting file entries. Each generated file entry includes all members of the file. If
necessary, you can use “Changing a data group file entry” on page 279 to customize
values for any of the data group file entries.
5. At the Production library prompt, either accept *CURRENT or specify the name of
an installation library from which the data group you are copying is located.
6. At the From data group definition prompts, specify the three-part name of the data
group from which you are loading.
7. If necessary, specify the value you want for the Update option prompt.
8. Specify the source for loading values for default file entry options at the Default FE
options source prompt. Each element in the file entry options is loaded from the
specified location unless you explicitly specify a different value for an element in
Step 9.
9. If necessary, do the following to specify file entry option values that override those
loaded from the configuration source:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.
10. Press Enter. The LODDGFE Entry Selection List display appears with a list of the
files identified by the specified configuration source.
11. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
12. To create the file entries, press Enter.
All selected files identified from the configuration source are represented in the
resulting file entries. Each generated file entry includes all members of the file. If
necessary, you can use “Changing a data group file entry” on page 279 to customize
values for any of the data group file entries.
3. The Work with DG File Entries display appears. Type a 1 (Add) next to the blank line
at the top of the list and press Enter.
4. The Add Data Group File Entry (ADDDGFE) display appears. At the System 1 File
and Library prompts, specify the file that you want to replicate.
5. By default, all members in the file are replicated. If you want to replicate only a
specific member, specify its name at the Member prompt.
Note: All replicated members of a file must be in the same database apply
session. For data groups configured for multiple apply sessions, specify
the apply session on the File entry options prompt. See Step 7.
6. Verify that the values of the remaining prompts on the display are what you want.
If necessary, change the values as needed.
Notes:
• If you change the value of the Dynamically update prompt to *NO, you need to
end and restart the data group before the addition is recognized.
• If you change the value of the Start journaling of file prompt to *NO and the file
is not already journaled, MIMIX will not be able to replicate changes until you
start journaling the file.
7. Optionally, you can specify file entry options that will override those defined for the
data group. Do the following:
a. Press F10 (Additional parameters), then press Page Down.
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.
8. Press Enter to create the data group file entry.
• All replicated members of a file must be in the same database apply session.
For data groups configured for multiple apply sessions, specify the apply
session on the File entry options prompt.
5. To accept your changes, press Enter.
The replication processes do not recognize the change until the data group has been
ended and restarted.
Creating data group IFS entries
5. If necessary, specify values for the System 2 object and Object auditing value
prompts.
6. At the Process type prompt, specify whether resulting data group object entries
should include (*INCLD) or exclude (*EXCLD) the identified objects.
7. Specify the appropriate value for the Cooperate with database prompt. To ensure
that journaled IFS objects can be replicated from the user journal, specify *YES.
To replicate from the system journal, specify *NO.
8. If necessary, specify a value for the Object retrieval delay prompt.
9. Ensure that the remaining prompts contain the values you want for the data group
object entries that will be created. Press Page Down to see more prompts.
10. Press Enter to create the IFS entry.
11. For IFS entries configured for user journal replication, return to Step 7 in the
procedure “Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling”
on page 154 to perform the additional steps necessary to complete the conversion.
Synchronize the objects identified by data group entries before starting replication
processes or running MIMIX audits. The entries will be available to replication
processes after the data group is ended and restarted. This includes after the nightly
restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an
audit runs.
Loading tracking entries
9. You should receive message LVI3E2B indicating the number of tracking entries
loaded for the data group.
Note: The command used in this procedure does not start journaling on the tracking
entries. Start journaling for the tracking entries when indicated by your
configuration checklist.
Creating data group DLO entries
Data group DLO entries identify document library objects (DLOs) for replication by
MIMIX system journal replication processes.
When you configure MIMIX, you can create data group DLO entries by loading from a
generic entry and selecting from documents in the list, or by creating individual DLO
entries. Once you have created the DLO entries, you can tailor them to meet your
requirements.
For detailed concepts and requirements, see “Identifying DLOs for replication” on
page 124.
Creating data group data area entries
This procedure creates data group data area entries that identify data areas to be
replicated by the data area poller process.
Note: The data area poller method is not the preferred way to replicate data
areas. The preferred method of replicating data areas is with user journal
replication processes using advanced journaling. The next best method is
identifying them with data group object entries for system journal replication
processes.
For detailed concepts and requirements for supported configurations, see the
following topics:
• “Identifying library-based objects for replication” on page 100
• “Identifying data areas and data queues for replication” on page 112
You can load all data group data area entries from a library or you can add individual
data area entries. Once the data group data area entries are created, you can tailor
them to meet your requirements by adding, changing, or deleting entries. You must
define data group data area entries from the management system. The data area
entries can be created from libraries on either system. If the system manager is
configured and running, all created and changed data group data area entries are
sent to the network systems automatically.
finished.
Additional options: working with DG entries
The procedures for performing common functions, such as copying, removing, and
displaying, are very similar for all types of data group entries used by MIMIX. Each
generic procedure in this topic indicates the type of data group entry for which it can
be used.
Table 32. Values to specify for each type of data group entry.
5. The value *NO for the Replace definition prompt prevents you from replacing an
existing entry in the definition to which you are copying. If you want to replace an
existing entry, specify *YES.
6. To copy the entry, press Enter.
7. For file entries, end and restart the data group being copied.
3. For data group file entries, a display with additional prompts appears. Specify the
values you want and press Enter.
4. A confirmation display appears with a list of entries to be deleted. To delete the
entries, press Enter.
Chapter 13
This chapter provides supplemental configuration tasks. Always use the
configuration checklists to guide you through the steps of standard configuration
scenarios.
• “Accessing the Configuration Menu” on page 295 describes how to access the
menu of configuration options from a 5250 emulator.
• “Starting the system and journal managers” on page 296 provides procedures for
starting these jobs. System and journal manager jobs must be running before
replication can be started.
• “Setting data group auditing values manually” on page 297 describes when to
manually set the object auditing level for objects defined to MIMIX and provides a
procedure for doing so.
• “Checking file entry configuration manually” on page 303 provides a procedure
using the CHKDGFE command to check the data group file entries defined to a
data group.
Note: The preferred method of checking is to use MIMIX AutoGuard to
automatically schedule the #DGFE audit, which calls the CHKDGFE
command and can automatically correct detected problems. For additional
information, see “Interpreting results for configuration data - #DGFE audit”
on page 580.
• “Changes to startup programs” on page 305 describes changes that you may
need to make to your configuration to support remote journaling.
• “Checking DDM password validation level in use” on page 306 describes how to
check whether the DDM communications infrastructure used by MIMIX
Remote Journal support requires a password. This topic also describes options
for ensuring that systems in a MIMIX configuration have the same password and
the implications of these options.
• “Starting the DDM TCP/IP server” on page 308 describes how to start this server
that is required in configurations that use remote journaling.
• “Identifying data groups that use an RJ link” on page 310 describes how to
determine which data groups use a particular RJ link.
• “Using file identifiers (FIDs) for IFS objects” on page 312 describes the use of FID
parameters on commands for IFS tracking entries. When IFS objects are
configured for replication through the user journal, commands that support IFS
tracking entries can specify a unique FID for the object on each system. This topic
describes the processing resulting from combinations of values specified for the
object and FID prompts.
• “Configuring restart times for MIMIX jobs” on page 313 describes how to change
the time at which MIMIX jobs automatically restart. MIMIX jobs restart daily to
ensure that the MIMIX environment remains operational.
Accessing the Configuration Menu
The MIMIX Configuration Menu provides access to the options you need for
configuring MIMIX.
To access the MIMIX Configuration Menu, do the following:
1. Access the MIMIX Basic Main Menu. See “Accessing the MIMIX Main Menu” on
page 91.
2. From the MIMIX Basic Main Menu, select option 11 (Configuration menu) and
press Enter.
Starting the system and journal managers
Setting data group auditing values manually
Default behavior for MIMIX is to change the auditing value of IFS, DLO, and library-
based objects configured for system journal replication as needed when starting data
groups with the Start Data Group (STRDG) command.
To manually set the system auditing level of replicated objects, or to force a change to
a lower configured level, you can use the Set Data Group Auditing (SETDGAUD)
command.
The SETDGAUD command allows you to set the object auditing level for all existing
objects that are defined to MIMIX by data group object entries, data group DLO
entries, and data group IFS entries. The SETDGAUD command can be used for data
groups configured for replicating object information (type *OBJ or *ALL).
When to set object auditing values manually - If you anticipate a delay between
configuring data group entries and starting the data group, you should use the
SETDGAUD command before synchronizing data between systems. Doing so will
ensure that replicated objects will be properly audited and that any transactions for
the objects that occur between configuration and starting the data group will be
replicated.
You can also use the SETDGAUD command to reset the object auditing level for all
replicated objects if a user has changed the auditing level of one or more objects to a
value other than what is specified in the data group entries.
Processing options - MIMIX checks for existing objects identified by data group
entries for the specified data group. The object auditing level of an existing object is
set to the auditing value specified in the data group entry that most specifically
matches the object. Default behavior is that MIMIX only changes an object’s auditing
value if the configured value is higher than the object’s existing value. However, you
can optionally force a change to a configured value that is lower than the existing
value through the command’s Force audit value (FORCE) parameter.
• The default value *NO for the FORCE parameter prevents MIMIX from reducing
the auditing level of an object. For example, if the SETDGAUD command
processes a data group entry with a configured object auditing value of *CHANGE
and finds an object identified by that entry with an existing auditing value of *ALL,
MIMIX does not change the value.
• If you specify *YES for the FORCE parameter, MIMIX will change the auditing
value even if it is lower than the existing value.
For IFS objects, it is particularly important that you understand the ramifications of the
value specified for the FORCE parameter. For more information see “Examples of
changing of an IFS object’s auditing value” on page 298.
Procedure - To set the object auditing value for a data group, do the following on each
system defined to the data group:
1. Type the command SETDGAUD and press F4 (Prompt).
2. The Set Data Group Auditing (SETDGAUD) display appears. Specify the name of
the data group you want.
3. At the Object type prompt, specify the type of objects for which you want to set
auditing values.
4. If you want to allow MIMIX to force a change to a configured value that is lower
than the object’s existing value, specify *YES for the Force audit value prompt.
Note: This may affect the operation of your replicated applications. Lakeview
recommends that you force auditing value changes only when you have
specified *ALLIFS for the Object type.
5. Press Enter.
Simply ending and restarting the data group will not cause these configuration
changes to be effective. Because the change is to a lower auditing level, the change
must be forced with the SETDGAUD command. Similarly, running the SETDGAUD
command with FORCE(*NO) does not change the auditing values for this scenario.
Table 34 shows the intermediate and final results as each data group IFS entry is
processed by the force request.
Table 34. Intermediate audit values which occur during FORCE(*YES) processing for example 1.
Notes:
1. Because the first data group IFS entry excludes objects from replication, object auditing processing does
not apply.
2. This object’s auditing value is evaluated when the third data group IFS entry is processed but the entry
does not cause the value to change. The existing value is the same as the configured value of the third
entry at the time it is processed.
Example 2: Table 35 identifies a set of data group IFS entries and their configured
auditing values. The entries are listed in the order in which they are processed by the
SETDGAUD command. In this scenario there are multiple configured values.
For this scenario, running the SETDGAUD command with FORCE(*NO) does not
change the auditing values on any existing IFS objects because the configured values
from the data group IFS entries are the same or lower than the existing values.
Running the command with FORCE(*YES) does change the existing objects’ values.
Table 36 shows the intermediate values as each entry is processed by the force
request and the final results of the change. Data group IFS entry #3 in Table 35
Table 36. Intermediate audit values which occur during FORCE(*YES) processing for example 2.
Example 3: This scenario illustrates why you may need to force the configured values
to take effect after changing the existing data group IFS entries from *ALL to lower
values. Table 37 identifies a set of data group IFS entries and their configured
auditing values. The entries are listed in the order in which they are processed by the
SETDGAUD command.
For this scenario, running the SETDGAUD command with FORCE(*NO) does not
change the auditing values on any existing IFS objects because the configured values
from the data group IFS entries are lower than the existing values.
In this scenario, SETDGAUD FORCE(*YES) must be run to have the configured
auditing values take effect. Table 38 shows the intermediate values as each entry is
processed by the force request and the final results of the change.
Table 38. Intermediate audit values which occur during FORCE(*YES) processing for example 3.
Example 4: This example begins with the same set of data group IFS entries used in
example 3 (Table 37) and uses the results of the forced change in example 3 as the
auditing values for the existing objects in Table 39.
Table 39 shows how running the SETDGAUD command with FORCE(*NO) causes
changes to auditing values. This scenario is quite possible as a result of a normal
STRDG request. Complex data group IFS entries and multiple configured values
cause these potentially undesirable results.
Note: Any addition or change to the data group IFS entries can cause these results
to occur.
There is no way to maintain the existing values in Table 39 without ensuring that a
forced change occurs every time SETDGAUD is run, which may be undesirable. In
this example, the next time data groups are started, the objects’ auditing values will
be set to those shown in Table 39 for FORCE(*NO).
Any addition or change to the data group IFS entries can potentially cause similar
results the next time the data group is started. To avoid this situation, we recommend
that you configure a consistent auditing value of *CHANGE across data group IFS
entries which identify objects with common parent directories.
Example 5: This scenario illustrates the results of the SETDGAUD command when
the object’s auditing value is determined by the user profile that accesses the object
(value *USRPRF). Table 40 shows the configured data group IFS entry.
Table 41 compares the results of running the SETDGAUD command with
FORCE(*NO) and FORCE(*YES).
Running the command with FORCE(*NO) does not change the value. The value
*USRPRF is not in the range of valid values for MIMIX. Therefore, an object with an
auditing value of *USRPRF is not considered for change.
Running the command with FORCE(*YES) does force a change because the existing
value and the configured value are not equal.
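The FORCE behavior illustrated in examples 1 through 5 can be summarized as a small decision rule. The following Python sketch is illustrative only: the auditing value names are real, but the ranking, the function name, and the function itself are assumptions for illustration and are not part of MIMIX:

```python
# Hypothetical sketch of the SETDGAUD decision rules described above.
# The auditing value names are real; the ranking is an assumption.

AUDIT_RANK = {"*NONE": 0, "*CHANGE": 1, "*ALL": 2}  # assumed ordering

def should_change(existing, configured, force):
    """Return True if SETDGAUD would change the object's auditing value."""
    if force:
        # FORCE(*YES): change whenever the values differ. Example 5 shows
        # this also applies to an existing value of *USRPRF.
        return existing != configured
    # FORCE(*NO): *USRPRF is not in the range of values MIMIX considers,
    # so such objects are not changed (Example 5).
    if existing not in AUDIT_RANK:
        return False
    # FORCE(*NO) only raises the auditing level; it never lowers it.
    return AUDIT_RANK[configured] > AUDIT_RANK[existing]
```

For instance, an object at *ALL is not lowered to a configured value of *CHANGE unless FORCE(*YES) is specified.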
Checking file entry configuration manually
The Check DG File Entries (CHKDGFE) command provides a means to detect
whether the correct data group file entries exist with respect to the data group object
entries configured for a specified data group in your MIMIX configuration. When file
entries and object entries are not properly matched, your replication results can be
affected.
Note: The preferred method of checking is to use MIMIX AutoGuard to automatically
schedule the #DGFE audit, which calls the CHKDGFE command and can
automatically correct detected problems. For additional information, see
“Interpreting results for configuration data - #DGFE audit” on page 580.
To check your file entry configuration manually, do the following:
1. On a command line, type CHKDGFE and press Enter. The Check Data Group File
Entries (CHKDGFE) command appears.
2. At the Data group definition prompts, select *ALL to check all data groups or
specify the three-part name of the data group.
3. At the Options prompt, you can specify that the command be run with special
options. The default, *NONE, uses no special options. If you do not want an error
to be reported if a file specified in a data group file entry does not exist, specify
*NOFILECHK.
4. At the Output prompt, specify where the output from the command should be
sent—to print, to an outfile, or to both. See Step 6.
5. At the User data prompt, you can assign your own 10-character name to the
spooled file or choose not to assign a name to the spooled file. The default, *CMD,
uses the CHKDGFE command name to identify the spooled file.
6. At the File to receive output prompts, you can direct the output of the command to
the name and library of a specific database file. If the database file does not exist,
it will be created in the specified library with the name MXCDGFE.
7. At the Output member options prompts, you can direct the output of the command
to the name of a specific database file member. You can also specify how to
handle new records if the member already exists. Do the following:
a. At the Member to receive output prompt, accept the default *FIRST to direct
the output to the first member in the file. If it does not exist, a new member is
created with the name of the file specified in Step 6. Otherwise, specify a
member name.
b. At the Replace or add records prompt, accept the default *REPLACE if you
want to clear the existing records in the file member before adding new
records. To add new records to the end of existing records in the file member,
specify *ADD.
8. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to check data group file entries.
• To submit the job for batch processing, accept *YES. Press Enter and continue
with the next step.
9. At the Job description prompts, specify the name and library of the job description
used to submit the batch request. Accept MXAUDIT to submit the request using
Lakeview’s default job description, MXAUDIT.
10. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
11. To start the data group file entry check, press Enter.
Changes to startup programs
If you use startup programs, ensure that you include the following operations when
you configure for remote journaling:
• If you use TCP/IP as the communications protocol you need to start TCP/IP,
including the DDM server, before starting replication.
• If you use OptiConnect as the communications protocol, the QSOC subsystem
must be active.
Checking DDM password validation level in use
c. If you selected multiple transfer definitions, press Enter to advance to the next
selection and record its RDB value. Ensure that you record the values for all
transfer definitions you selected.
Note: If the RDB value was generated by MIMIX, it will be in the form of the
characters MX followed by the System1 definition, System2 definition,
and the name of the transfer definition, with up to 18 characters.
2. On the source system, change the MIMIXOWN user profile to have a password
and to prevent signing on with the profile. To do this, enter the following
command:
CHGUSRPRF USRPRF(MIMIXOWN) PASSWORD(user-defined-password)
INLMNU(*SIGNOFF)
Note: The password is case sensitive and must be the same on all systems in
the MIMIX network. If the password does not match on all systems, some
MIMIX functions will fail with security error message LVE0127.
3. You need a server authentication entry for the MIMIXOWN user profile for each
RDB entry you recorded in Step 1. To add a server authentication entry, type the
following command, using the password you specified in Step 2 and the RDB
value from Step 1. Then press Enter.
ADDSVRAUTE USRPRF(MIMIXOWN) SERVER(recorded-RDB-value)
PASSWORD(user-defined-password)
4. Repeat Step 2 and Step 3 on the target system.
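The generated RDB name format described in the note in Step 1 can be sketched as follows. This Python sketch is illustrative only; the function name is hypothetical and MIMIX does not expose such a function:

```python
# Hypothetical sketch of the MIMIX-generated RDB name rule described
# above: the characters "MX" followed by the System 1 definition, the
# System 2 definition, and the transfer definition name, with up to
# 18 characters in total.

def generated_rdb_name(system1, system2, transfer_dfn):
    return ("MX" + system1 + system2 + transfer_dfn)[:18]
```

For example, system definitions SYSA and SYSB with a transfer definition named PRIMARY would yield MXSYSASYSBPRIMARY.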
Starting the DDM TCP/IP server
Identifying data groups that use an RJ link
Using file identifiers (FIDs) for IFS objects
Configuring restart times for MIMIX jobs
Certain MIMIX jobs are restarted, or recycled, on a regular basis in order to maintain
the MIMIX environment. You can configure when the MIMIX jobs restart, which can
ease conflicts with your scheduled workload by choosing a time that is more
convenient for your environment.
The default operation of MIMIX is to restart MIMIX jobs at midnight (12:00 a.m.).
However, you can change the restart time by setting a different value for the Job
restart time parameter (RSTARTTIME) on system definitions and data group
definitions. The time is based on a 24 hour clock. The values specified in the system
definitions and data group definitions are retrieved at the time the MIMIX jobs are
started. Changes to the specified values have no effect on jobs that are currently
running. Changes are effective the next time the affected MIMIX jobs are started.
For a data group definition you can also specify either *SYSDFN1 or *SYSDFN2 for
the Job restart time (RSTARTTIME) parameter. Respectively, these values use
the restart time specified in the system definition identified as System 1 or System 2
for the data group.
Both system and data group definition commands support the special value *NONE,
which prevents the MIMIX jobs from automatically restarting. Be sure to read
“Considerations for using *NONE” on page 315 before using this value.
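The way a data group's Job restart time resolves through these special values can be sketched as follows. This Python sketch is illustrative; the function and parameter names are hypothetical:

```python
# Hypothetical sketch of RSTARTTIME resolution for a data group
# definition, per the description above. *SYSDFN1 and *SYSDFN2 use the
# restart time from the System 1 or System 2 system definition.

def resolve_restart_time(dg_value, sysdfn1_time, sysdfn2_time):
    if dg_value == "*SYSDFN1":
        return sysdfn1_time
    if dg_value == "*SYSDFN2":
        return sysdfn2_time
    # Otherwise the value is an HHMMSS time, or *NONE (no automatic
    # restart of data group-level jobs).
    return dg_value
```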
The system manager jobs are a pair of jobs that run between a network system and
the management system. The management and network systems both have journal
manager jobs, but the jobs operate independently. The job restart time specified in
the management system’s system definition determines when to restart the journal
manager on the management system. The job restart time specified in the network
system’s system definition determines when to restart the journal manager job on the
network system, when to restart the system manager jobs on both systems, and also
affects when cleanup jobs on both systems are submitted. Table 42 shows how the
role of the system affects the results of the specified job restart time.
Table 42. Effect of the system’s role on changing the job restart time in a system definition.

Management system:
• System managers and cleanup jobs - The specified value is not used to determine
the restart time. Restart is determined by the value specified for the network
system.

Network system:
• System managers - When a time is specified, jobs on both systems restart when
the time on the management system reaches the specified time. When *NONE is
specified, jobs are not restarted on either system.
• Cleanup jobs - When a time is specified, jobs are submitted on both systems by
the system manager jobs after they restart. When *NONE is specified, jobs are
submitted on both systems when midnight occurs on the management system.
• Journal managers and collector services - When a time is specified, the job on the
network system restarts at the specified time. When *NONE is specified, the job
on the network system is not restarted.
For MIMIX data group-level jobs, a delay of 2 to 35 minutes from the specified time is
built into the job restart processing. The actual delay is unique to each job. By
distributing the jobs within this range the load on systems and communications is
more evenly distributed, reducing bottlenecks caused by many jobs simultaneously
attempting to end, start, and establish communications. MIMIX determines the actual
restart time for the object apply (OBJAPY) jobs based on the timestamp of the system
on which the jobs run. For all other affected jobs, MIMIX determines the actual start
time for object or database jobs based on the timestamp of the system on which the
OBJSND or the DBSND job runs. Table 43 shows how these key jobs affect when
other data group-level jobs restart.
In each row, the highlighted job determines the restart time for all jobs in the row.
For more information about MIMIX jobs see “Replication job and supporting job
names” on page 47.
If you specify the value *NONE for the Job restart time in a data group definition, no
MIMIX data group-level jobs are automatically restarted.
If you specify the value *NONE for the Job restart time in a system definition, the
cleanup jobs started by the system manager will continue to be submitted based on
when midnight occurs on the management system. All other affected MIMIX system-
level jobs will not be restarted. Table 42 shows the effect of the value *NONE.
Example 5: You have a data group that operates between SYSTEMA and
SYSTEMB, which are both in the same time zone. Both the system definitions and the
data group definition use the default value 000000 (midnight) for the job restart time.
For both systems, the MIMIX system-level jobs restart at midnight. The data group
jobs on both systems restart between 2 and 35 minutes after midnight.
Example 6: At 10:30 Tuesday morning, you change data group definition APP1 to have
a job restart time value of 013500. The data group operates between SYSTEMA and
SYSTEMB, which are both in the same time zone. Both system definitions use the
default restart time of midnight. MIMIX jobs remain up and running. At midnight, the
system-level jobs on both systems restart using the values from the preexisting
configuration; the data group-level jobs restart on both systems between 0:02 and
0:35 a.m. On Wednesday and thereafter, APP1 data group-level jobs restart between
1:37 and 2:10 a.m. while the MIMIX system-level jobs and jobs for other data groups
restart at midnight.
Example 7: You have a data group that operates between SYSTEMA and SYSTEMB
which are both in the same time zone and are defined as the values of System 1 and
System 2, respectively. The data group definition specifies a job restart time value of
*SYSDFN2. The system definition for SYSTEMA specifies the default job restart time
of 000000 (midnight). SYSTEMB is the management system and its system definition
specifies the value *NONE for the job restart time. The journal manager on SYSTEMB
does not restart and the data group jobs do not restart on either system because of
the *NONE value specified for SYSTEMB. The journal manager on SYSTEMA
restarts at midnight. System manager jobs on both systems restart and submit
cleanup jobs at midnight as a result of the value in the network system and the fact
that the systems are in the same time zone.
Example 8A: You have a data group defined between CHICAGO and NEWYORK
(System 1 and System 2, respectively) and the data group’s job restart time is set to
030000 (3 a.m.). CHICAGO is the source system as well as a network system; its
system definition uses the default job restart time of midnight. NEWYORK is the
target system as well as the management system; its system definition uses a job
restart time of 020000 (2 a.m.). There is a one hour time difference between the two
systems; said another way, NEWYORK is an hour ahead of CHICAGO. Figure 17
shows the effect of the time zone difference on this configuration.
The journal manager on CHICAGO restarts at midnight Chicago time and the journal
manager on NEWYORK restarts at 2 a.m. New York time. The system manager jobs
on both systems restart when the management system (NEWYORK) reaches the
restart time specified for the network system (CHICAGO). The cleanup jobs are
submitted by the system manager jobs when they restart.
With the exception of the object apply jobs (OBJAPY), the data group jobs restart
during the same 2 to 35 minute timeframe based on Chicago time (between 2 and 35
minutes after 3 a.m. in Chicago; after 4 a.m. in New York). Because the OBJAPY jobs
are based on the time on the target system, which is an hour ahead of the source
system time used for the other jobs, the OBJAPY jobs restart between 3:02 and 3:35
a.m. New York time.
Figure 17. Results of Example 8A. This is configured as a standard MIMIX environment.
Example 8B: This scenario is the same as example 8A with one exception. In this
scenario, the MIMIX environment is configured to use MIMIX Remote Journal
support. Figure 18 shows that the database reader (DBRDR) job restarts based on
the time on the target system. Because the database send (DBSND) and database
receive (DBRCV) jobs are not used in a remote journaling environment, those jobs do
not restart.
Figure 18. Results of example 8B. This environment is configured to use MIMIX Remote
Journal support.
Configuring the restart time in a system definition
To configure the restart time for MIMIX system-level jobs in an existing environment,
do the following:
1. On the Work with System Definitions display, type a 2 (Change) next to the
system definition you want and press F4 (Prompt).
2. Press F10 (Additional parameters), then scroll down to the bottom of the display.
3. At the Job restart time prompt, specify the value you want. You need to consider
the role of the system definition (management or network system) and the effect
of any time zone differences between the management system and the network
system.
Notes:
• The time is based on a 24 hour clock, and must be specified in HHMMSS
format. Although seconds are ignored, the complete time format must be
specified. Valid values range from 000000 to 235959. The value 000000 is the
default and is equivalent to midnight (00:00:00 a.m.).
• If you specify *NONE, cleanup jobs are submitted on both the network and
management systems based on when midnight occurs on the management
system. System manager and journal manager jobs will not restart. The value
*NONE is not recommended. For more information, see “Considerations for
using *NONE” on page 315.
4. To accept the change, press Enter.
The change has no effect on jobs that are currently running. The value for the Job
restart time is retrieved from the system definition at the time the jobs are started.
The change is effective the next time the jobs are started.
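The same change can also be made from a command line with the Change System Definition (CHGSYSDFN) command. The following is a sketch only; the restart-time parameter name (shown here as JRSTTIME) is an assumption, so prompt the command with F4 to confirm the actual parameter name in your installation:

```
CHGSYSDFN SYSDFN(NEWYORK) JRSTTIME(030000)  /* parameter name assumed */
```

A value of 030000 requests that the jobs restart at 3:00 a.m. local time on the system where they run.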
Chapter 14: Starting, ending, and verifying journaling
This chapter describes procedures for starting and ending journaling. Journaling must
be active on all files, IFS objects, data areas and data queues that you want to
replicate through a user journal. Normally, journaling is started during configuration.
However, there are times when you may need to start or end journaling on items
identified to a data group.
The topics in this chapter include:
• “What objects need to be journaled” on page 323 describes, for supported
configuration scenarios, what types of objects must have journaling started before
replication can occur. It also describes when journaling is started implicitly, as well
as the authority requirements necessary for user profiles that create the objects to
be journaled when they are created.
• “MIMIX commands for starting journaling” on page 325 identifies the MIMIX
commands available for starting journaling and describes the checking performed
by the commands.
• “Journaling for physical files” on page 326 includes procedures for displaying
journaling status, starting journaling, ending journaling, and verifying journaling for
physical files identified by data group file entries.
• “Journaling for IFS objects” on page 330 includes procedures for displaying
journaling status, starting journaling, ending journaling, and verifying journaling for
IFS objects replicated cooperatively (advanced journaling). IFS tracking entries
are used in these procedures.
• “Journaling for data areas and data queues” on page 334 includes procedures for
displaying journaling status, starting journaling, ending journaling, and verifying
journaling for data area and data queue objects replicated cooperatively
(advanced journaling). Object tracking entries are used in these procedures.
322
What objects need to be journaled
A data group can be configured in a variety of ways that involve a user journal in the
replication of files, data areas, data queues and IFS objects. Journaling must be
started for any object to be replicated through a user journal or to be replicated by
cooperative processing between a user journal and the system journal.
Requirements for system journal replication - System journal replication
processes use a special journal, the security audit (QAUDJRN) journal. The IBM i
system logs events in this journal to create a security audit trail. When data group
object entries, IFS entries, and DLO entries are configured, each entry specifies an
object auditing value that determines the type of activity on the objects to be logged in
the journal. Object auditing is automatically set for all objects defined to a data group
when the data group is first started, or any time a change is made to the object
entries, IFS entries, or DLO entries for the data group. Because security auditing logs
the object changes in the system journal, no special action is needed.
Requirements for user journal replication - User journal replication processes
require that the journaling be started for the objects identified by data group file
entries. Both MIMIX Dynamic Apply and legacy cooperative processing use data
group file entries and therefore require journaling to be started. Configurations that
include advanced journaling for replication of data areas, data queues, or IFS objects
also require that journaling be started on the associated object tracking entries (for
data areas and data queues) and IFS tracking entries (for IFS objects). Starting
journaling ensures that changes to the
objects are recorded in the user journal, and are therefore available for MIMIX to
replicate.
During initial configuration, the configuration checklists direct you when to start
journaling for objects identified by data group file entries, IFS tracking entries, and
object tracking entries. The MIMIX commands STRJRNFE, STRJRNIFSE, and
STRJRNOBJE simplify the process of starting journaling. For more information about
these commands, see “MIMIX commands for starting journaling” on page 325.
Although MIMIX commands for starting journaling are preferred, you can also use
IBM commands (STRJRNPF, STRJRN, STRJRNOBJ) to start journaling if you have
the appropriate authority for starting journaling.
Requirements for implicit starting of journaling - Journaling can be automatically
started for newly created database files, data areas, data queues, or IFS objects
when certain requirements are met.
The user ID creating the new objects must have the required authority to start
journaling and the following requirements must be met:
• IFS objects - A new IFS object is automatically journaled if the directory in which it
is created is journaled as a result of a request that permitted journaling inheritance
for new objects. Typically, if MIMIX started journaling on the parent directory,
inheritance is permitted. If you manually start journaling on the parent directory
using the IBM command STRJRN, specify INHERIT(*YES). This will allow IFS
objects created within the journaled directory to inherit the journal options and
journal state of the parent directory.
• Database files created by SQL statements - A new file created by a CREATE
• If you use the IBM commands (STRJRNPF, STRJRN, STRJRNOBJ) to start
journaling, the user ID that performs the start journaling request must satisfy
the authority requirements below.
For journaling to be successfully started on an object, one of the following authority
requirements must be satisfied:
• The user profile of the user attempting to start journaling for an object must have
*ALLOBJ special authority.
• The user profile of the user attempting to start journaling for an object must have
explicit *ALL object authority for the journal to which the object is to be journaled.
• Public authority (*PUBLIC) must have *OBJALTER, *OBJMGT, and *OBJOPR
object authorities for the journal to which the object is to be journaled.
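For example, to manually start journaling on a parent directory so that newly created IFS objects inherit journaling (as described above), the IBM STRJRN command can be used. The directory and journal names below are hypothetical:

```
STRJRN OBJ(('/payroll'))
       JRN('/QSYS.LIB/PAYLIB.LIB/PAYJRN.JRN')
       SUBTREE(*ALL) INHERIT(*YES)
```

INHERIT(*YES) allows IFS objects later created within the journaled directory to inherit the journal options and journal state of the parent directory.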
Journaling for physical files
4. Specify the value you want for the Start journaling on system prompt. Press F4 to
see a list of valid values.
When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data
group is configured for journaling on the target system (JRNTGT) and starts or
prevents journaling from starting as required.
5. If you want to use batch processing, specify *YES for the Submit to batch prompt.
6. To start journaling for the physical file associated with the selected data group,
press Enter.
The system returns a message to confirm the operation was successful.
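The steps above can also be performed from a command line with the STRJRNFE command. The sketch below assumes the parameter names (DGDFN, JRNSYS, SBMJOB) and the data group name; only the values *DGDFN, *SRC, *TGT, and *YES are documented above, so prompt the command with F4 to verify:

```
STRJRNFE DGDFN(INVENTORY SYS1 SYS2) JRNSYS(*DGDFN) SBMJOB(*YES)
```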
Journaling for IFS objects
definition and IFS objects prompts identify the IFS object associated with the
tracking entry you selected. You cannot change the values shown for the IFS
objects prompts (see note 1).
5. Specify the value you want for the Start journaling on system prompt. Press F4 to
see a list of valid values.
When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data
group is configured for journaling on the target system (JRNTGT) and starts or
prevents journaling from starting as required.
6. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
7. The System 1 file identifier and System 2 file identifier prompts identify the file
identifier (FID) of the IFS object on each system. You cannot change the values
(see note 2).
8. To start journaling on the IFS objects specified, press Enter.
1. When the command is invoked from a command line, you can change values specified for the
IFS objects prompts. Also, you can specify as many as 300 object selectors by using the + for
more values prompt.
2. When the command is invoked from a command line, use F10 to see the FID prompts. Then you
can optionally specify the unique FID for the IFS object on either system. The FID values can be
used alone or in combination with the IFS object path name.
5. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
6. The System 1 file identifier and System 2 file identifier prompts identify the file
identifier (FID) of the IFS object on each system. You cannot change the values
shown (see note 2).
7. To end journaling on the IFS objects specified, press Enter.
Journaling for data areas and data queues
tracking entry you selected. Although you can change the values shown for these
prompts, it is not recommended unless the command was invoked from a
command line.
5. Specify the value you want for the Start journaling on system prompt. Press F4 to
see a list of valid values.
When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data
group is configured for journaling on the target system (JRNTGT) and starts or
prevents journaling from starting as required.
6. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
7. To start journaling on the objects specified, press Enter.
Chapter 15
This chapter describes how to modify your configuration to use advanced techniques
to improve journal performance and MIMIX performance.
Journal performance: The following topics describe how to improve journal
performance:
• “Minimized journal entry data” on page 339 describes benefits of and restrictions
for using minimized user journal entries for *FILE and *DTAARA objects. A
discussion of large object (LOB) data in minimized entries and configuration
information are included.
• “Configuring for high availability journal performance enhancements” on page 341
describes journal caching and journal standby state within MIMIX to support IBM’s
High Availability Journal Performance i5/OS option 42, Journal Standby feature
and Journal caching. Requirements and restrictions are included.
MIMIX performance: The following topics describe how to improve MIMIX
performance:
• “Caching extended attributes of *FILE objects” on page 345 describes how to
change the maximum size of the cache used to store extended attributes of *FILE
objects replicated from the system journal.
• “Increasing data returned in journal entry blocks by delaying RCVJRNE calls” on
page 346 describes how you can improve object send performance by changing
the size of the block of data from a receive journal entry (RCVJRNE) call and
delaying the next call based on a percentage of the requested block size.
• “Configuring high volume objects for better performance” on page 350 describes
how to change your configuration to improve system journal performance.
• “Improving performance of the #MBRRCDCNT audit” on page 351 describes how
to use the CMPRCDCNT commit threshold policy to limit comparisons and
thereby improve performance of this audit in environments which use commitment
control.
Minimized journal entry data
MIMIX supports the ability to process minimized journal entries placed in a user
journal for object types of file (*FILE) and data area (*DTAARA).
The i5/OS operating system provides the ability to create journal entries using an
internal format that minimizes the data specific to these object types that is stored in
the journal entry. This support is enabled in the MIMIX create or change journal
definitions commands and built using the Build Journal Environment (BLDJRNENV)
command.
When a journal entry for one of these object types is generated, the system compares
the size of the minimized format to the standard format and places whichever is
smaller in the journal. For database files, only update journal entries (R-UP and R-
UB) and rollback-type update entries (R-BR and R-UR) can be minimized.
If MINENTDTA(*FILE) or MINENTDTA(*FLDBDY) is in effect and a database record
includes LOB fields, LOB data is journaled only when that LOB is changed. Changes
to other fields in the record will not cause the LOB data to be journaled unless the
LOB is also changed. When database files have records with static LOB values,
minimized journal entries can produce considerable savings.
The benefit of using minimized journal entries is that less data is stored in the journal.
In a MIMIX replication environment, you also benefit by having less data sent over
communications lines and saved in MIMIX log spaces. Factors in your environment
such as the percentage of journal entries that are updates (R-UP), the size of
database records, the number of bytes typically changed in an update, may influence
how much benefit you achieve.
• Configuring for minimized journal entry data may affect your ability to use the
Work with Data Group File Entries on Hold (WRKDGFEHLD) command. For
example, using option 2 (Change) on WRKDGFEHLD to convert a minimized
record update (R-UP) to a record put (R-PT) will result in failure when the entry is
applied. An R-PT requires the presence of a full, non-minimized record.
See the IBM book, Backup and Recovery for restrictions and usage of journal entries
with minimized entry-specific data.
Configuring for high availability journal performance
enhancements
MIMIX supports IBM’s High Availability Journal Performance i5/OS option 42, Journal
Standby feature and Journal caching. These high availability performance
enhancements improve replication performance on the target system and provide
significant performance improvement by eliminating the need to start journaling at
switch time.
MIMIX support of IBM’s high availability performance enhancements consists of two
independent components: journal standby state and journal caching. These
components work individually or together, although when used together, each
component must be enabled separately. Journal standby state minimizes replication
impact on the target system by providing the benefits of an active journal without
writing the journal entries to disk. As such, journal standby state is particularly helpful
in saving disk space in environments that do not rely on journal entries for other
purposes. Moreover, journal standby state minimizes switch times by retaining the
journal relationship for replicated objects.
Journal caching provides a means by which to cache journal entries and their
corresponding database records into main storage and write to disks only as
necessary. Journal caching is particularly helpful during batch operations when large
numbers of add, update, and delete operations against journaled objects are
performed.
Journal standby state and journal caching can be used in source send configuration
environments as well as in environments where remote journaling is enabled. For
restrictions of MIMIX support of IBM’s high availability performance enhancements,
see “Restrictions of high availability journal performance enhancements” on
page 343.
Note: For more information, also see the topics on journal management and system
performance in the IBM eServer iSeries Information Center.
Journal caching
Journal caching is an attribute of the journal that is defined. When journal caching is
enabled, the system caches journal entries and their corresponding database records
into main storage. This means that neither the journal entries nor their corresponding
database records are written to disk until an efficient disk write can be scheduled. This
usually occurs when the buffer is full, or at the first commit, close, or file end of data.
Because most database transactions must no longer wait for a synchronous write of
the journal entries to disk, the performance gain can be significant.
For example, batch operations must usually wait for each new journal entry to be
written to disk. Journal caching can be helpful during batch operations when large
numbers of add, update, and delete operations against journaled objects are
performed.
The default value for journal caching is *BOTH. It is recommended that you use the
default value of *BOTH to perform journal caching on both the source and the target
systems.
For more information about journal caching, see the IBM Redbooks technote
“Journal Caching: Understanding the Risk of Data Loss”.
To enable journal standby state or journal caching in a MIMIX environment, two
parameters have been added to the Create Journal Definition (CRTJRNDFN) and
Change Journal Definition (CHGJRNDFN) commands: Target journal state
(TGTSTATE) and Journal caching (JRNCACHE). See “Creating a journal definition”
on page 215 and “Changing a journal definition” on page 217.
When journaling is used on the target system, the TGTSTATE parameter specifies
the requested status of the target journal. Valid values for the TGTSTATE parameter
are *ACTIVE and *STANDBY. When *ACTIVE is specified and the data group
associated with the journal definition is journaling on the target system
(JRNTGT(*YES)), the target journal state is set to active when the data group is
started. When *STANDBY is specified, objects are journaled on the target system, but
most journal entries are prevented from being deposited into the target journal. An
additional value, *SAME, is valid for the CHGJRNDFN command, which indicates the
TGTSTATE value should remain unchanged.
The JRNCACHE parameter specifies whether the system should cache journal
entries in main storage before writing them to disk. Valid values for the JRNCACHE
parameter are *TGT, *BOTH, *NONE, or *SRC. Although journal caching can be
configured on the target system, source system, or both, it is recommended to be
performed on both (*BOTH) the target system and source system. The recommended
value of *BOTH is the default. An additional value, *SAME, is valid for the
CHGJRNDFN command, which indicates the JRNCACHE value should remain
unchanged.
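For example, a change journal definition request that enables journal standby state on the target and journal caching on both systems might look like the following. The journal definition name and system shown are hypothetical, and the JRNDFN parameter form is an assumption; prompt the command with F4 to confirm:

```
CHGJRNDFN JRNDFN(PAYJRN SYSTEM2) TGTSTATE(*STANDBY) JRNCACHE(*BOTH)
```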
Table 44. Software requirements for MIMIX support of IBM’s high availability performance
enhancements
LPP installed and available: Product 5722SS1, option 42, feature 5117, i5/OS -
HA Journal Performance
Caching extended attributes of *FILE objects
In order to accurately replicate actions against *FILE objects, it is sometimes
necessary to retrieve the extended attribute of a *FILE object, such as PF, LF or
DSPF. Whenever large volumes of journal entries for *FILE objects are replicated
from the security audit journal (system journal), MIMIX caches this information for a
fixed set of *FILE objects to prevent unnecessary retrievals of the extended attribute.
The result is a potential reduction of CPU consumption by the object send job and a
significant performance improvement.
This function can be tailored to suit your environment. The maximum size of the
cache is controlled through the use of a data area in the MIMIX product library. The
cache size indicates the number of entries that can be contained in the cache. If the
data area is not created or does not exist in the MIMIX product library, the size of the
cache defaults to 15.
To configure the extended attribute cache, do the following:
1. Create the data area on the systems on which the object send jobs are running.
Type the following command:
CRTDTAARA DTAARA(installation_library/MXOBJSND) TYPE(*CHAR)
LEN(2)
2. Specify the cache size (xx). Valid cache values are numbers 00 through 99. Type
the following command:
CHGDTAARA DTAARA(installation_library/MXOBJSND) VALUE('xx,
RCVJRNE_delay_values')
Notes:
• The four RCVJRNE delay values are specified in this string along with the
cache size. See topic “Increasing data returned in journal entry blocks by
delaying RCVJRNE calls” on page 346 for more information.
• Using 00 for the cache size value disables the extended attribute cache.
Increasing data returned in journal entry blocks by delaying RCVJRNE calls
Note: Delays are not applied to blocks larger than the specified medium block
percentage. In the previous example, no delays will be applied to blocks larger
than 30 percent of the RCVJRNE block size, or 60,000 bytes.
LEN(20)
Note: Although you will see improvements from the file attribute cache with the
default character value (LEN(2)), enhancements are maximized by
recreating the MXOBJSND data area as LEN(20) to use the RCVJRNE
call delays.
2. Specify the RCVJRNE block size, percentages, and multipliers to be used for the
delay. Valid values for the RCVJRNE block size are 32Kb to 4000Kb. Valid values
for the percentages and multipliers are numbers 01 through 99. Lakeview
recommends typing the following as a starting point where cache size is the two
character number for the size of the file attribute cache:
CHGDTAARA DTAARA(installation_library/MXOBJSND)
VALUE(‘cache_size,10,02,30,01,0100’)
Note: For information about the cache size, see “Caching extended attributes of
*FILE objects” on page 345.
Configuring high volume objects for better performance
Improving performance of the #MBRRCDCNT audit
Environments that use commitment control may find that, in some conditions, a
request to run the #MBRRCDCNT audit or the Compare Record Count
(CMPRCDCNT) command can be extremely long-running. This is possible in
environments that use commitment control with long-running commit transactions that
include large numbers (tens of thousands) of record operations within one
transaction. In such an environment, the compare request can be long running when
the number of members to be compared is very large and there are uncommitted
changes present at the time of the request.
The Set MIMIX Policies (SETMMXPCY) command includes the policy CMPRCDCNT
commit threshold policy (CMPRCDCMT parameter) that provides the ability to specify
a threshold at which requests to compare record counts will no longer perform the
comparison due to commit cycle activity on the source system.
The shipped default values for this policy (CMPRCDCMT parameter) permit record
count comparison requests without regard to commit cycle activity on the source
system. These policy default values are suitable for environments that do not have
the commitment control environment indicated, or that can tolerate a long-running
comparison.
If your environment cannot tolerate a long-running request, you can specify a numeric
value for the CMPRCDCMT parameter for either the MIMIX installation or for a
specific data group. This will change the behavior of MIMIX by affecting what is
compared, and can improve performance of #MBRRCDCNT and CMPRCDCNT
requests.
Note: Equal record counts suggest but do not guarantee that files are synchronized.
When a threshold is specified for the CMPRCDCNT commit threshold policy,
record count comparisons can have a higher number of file members that are
not compared. This must be taken into consideration when using the
comparison results to gauge whether systems are synchronized.
A numeric value for the CMPRCDCMT parameter defines the maximum number of
uncommitted record operations that can exist for files waiting to be applied in an apply
session at the time a compare record count request is invoked. The number specified
must be representative of the number of uncommitted record operations.
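As a sketch, setting this threshold to 10,000 uncommitted record operations for a specific data group might look like the following. The DGDFN parameter form and the data group name are assumptions; only the CMPRCDCMT parameter is documented above:

```
SETMMXPCY DGDFN(INVENTORY SYS1 SYS2) CMPRCDCMT(10000)
```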
When a numeric value is specified, MIMIX recognizes whether the number of
uncommitted record operations for an apply session exceeds the threshold at the time
a compare request is invoked. If an apply session has not reached the threshold, the
comparison is performed. If the threshold is exceeded, MIMIX will not attempt to
compare members from that apply session. Instead, the results will display the *CMT
value for the difference indicator, indicating that commit cycle activity on the source
system prevented active processing from comparing counts of current records and
deleted records in the selected member.
Each database apply session is evaluated against the threshold independently. As a
result, it is possible for record counts to be compared for files in one apply session but
not be compared in another apply session, as illustrated in the following example.
Example: This example shows the result of setting the policy for a data group to a
value of 10,000. Table 45 shows the files replicated by each of the apply sessions
used by the data group and the result of comparison. Because of the number of
uncommitted record operations present at the time of the request, files processed by
apply sessions A and C are not compared.
Chapter 16
• “Using Save-While-Active in MIMIX” on page 396 describes how to change the
type of save-while-active option to be used when saving objects. You can view and
change these configuration values for a data group through an interface such as
SQL or DFU.
Keyed replication
By default, MIMIX user journal replication processes use positional replication. You
can change from positional replication to keyed replication for database files.
You can use the Verify Key Attributes (VFYKEYATR) command to determine whether
a physical file is eligible for keyed replication. See “Verifying key attributes” on
page 359.
• DB journal entry processing must specify *SEND for the Before images
element in source send configurations. When using remote journaling, all journal entries are sent.
• Verify that you have the value you need specified for the Journal image
element of the File and tracking ent. options. *BOTH is recommended.
• File and tracking ent. options must specify *KEYED for the Replication type
element.
3. The files identified by the data group file entries for the data group must be eligible
for keyed replication. See topic “Verifying Key Attributes” in the Using MIMIX
book.
4. If you have modified file entry options on individual data group file entries, you
need to ensure that the values used are compatible with keyed replication.
5. Start journaling for the file entries using “Starting journaling for physical files” on
page 326.
entries in this way, you should specify *UPDADD for the Update option
parameter.
• Use topic “Adding a data group file entry” on page 278 to create a new file
entry.
• Use topic “Changing a data group file entry” on page 279 to modify an
existing file entry.
5. The files identified by the data group file entries for the data group must be eligible
for keyed replication. See topic “Verifying Key Attributes” in the Using MIMIX
book.
6. After you have changed individual data group file entries, you need to start
journaling for the file entries using “Starting journaling for physical files” on
page 326.
Verifying key attributes
Before you configure for keyed replication, verify that the file or files for which you
want to use keyed replication are actually eligible.
Do the following to verify that the attributes of a file are appropriate for keyed
replication:
1. On a command line, type VFYKEYATR (Verify Key Attributes). The Verify Key
Attributes display appears.
2. Do one of the following:
• To verify a file in a library, specify a file name and a library.
• To verify all files in a library, specify *ALL and a library.
• To verify files associated with the file entries for a data group, specify
*MIMIXDFN for the File prompt and press Enter. Prompts for the Data group
definition appear. Specify the name of the data group that you want to check.
3. Press Enter.
4. A spooled file is created that indicates whether you can use keyed replication for
the files in the library or data group you specified. Display the spooled file
(WRKSPLF command) or use your standard process for printing. You can use
keyed replication for the file if *BOTH appears in the Replication Type Allowed
column. If a value appears in the Replication Type Defined column, the file is
already defined to the data group with the replication type shown.
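From a command line, the same checks might be requested as follows. The exact parameter forms are assumptions based on the prompts described above, and the library and data group names are hypothetical:

```
VFYKEYATR FILE(PAYLIB/*ALL)                           /* all files in a library */
VFYKEYATR FILE(*MIMIXDFN) DGDFN(INVENTORY SYS1 SYS2)  /* files defined to a data group */
```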
Data distribution and data management scenarios
MIMIX supports a variety of scenarios for data distribution and data management
including bi-directional data flow, file combining, file sharing, and file merging. MIMIX
also supports data distribution techniques such as broadcasting and cascading.
Often, this support requires a combination of advanced replication techniques as well
as customizing. These techniques require additional planning before you configure
MIMIX. You may need to consider the technical aspects of implementing a technique
as well as how your business practices may be affected. Consider the following:
• Can each system involved modify the data?
• Do you need to filter data before sending it to another system?
• Do you need to implement multiple techniques to accomplish your goal?
• Do you need customized exit programs?
• Do any potential collision points exist and how will each be resolved?
MIMIX user journal replication provides filtering options within the data group
definition. Also, MIMIX provides options within the data group definition and for
individual data group file entries for resolving most collision points. Additionally,
collision resolution classes allow you to specify different resolution methods for each
collision point.
• Configure two data group definitions between the two systems. In one data group,
specify *SYS1 for the Data source (DTASRC) parameter. In the other data group,
specify *SYS2 for this parameter.
• Each data group definition should specify *NO for the Allow to be switched
(ALWSWT) parameter.
Note: In system journal replication, MIMIX does not support simultaneous updates to
the same object on multiple systems and does not support conflict resolution
for objects. Once an object is replicated to a target system, system journal
replication processes prevent looping by not allowing the same object,
regardless of name mapping, to be replicated back to its original source
system.
Configuring for file routing and file combining
File routing and file combining are data management techniques supported by MIMIX
user journal replication processes. The way in which data is used can affect the
configuration requirements for a file routing or file combining operation. Evaluate the
needs for each pair of systems (source and target) separately. Consider the following:
• Does the data need to be updated in both directions between the systems? If you
need bi-directional data flow, see topic “Configuring for bi-directional flow” on
page 361.
• Will users update the data from only one or both systems? If users can update
data from both systems, you need to prevent the original data from being returned
to its original source system (recursion).
• Is the file routing or file combining scenario a complete solution or is it part of a
larger solution? Your complete solution may be a combination of multiple data
management and data distribution techniques. Evaluate the requirements for
each technique separately for a pair of systems (source and target). Each
technique that you need to implement may have different configuration
requirements.
File combining is a scenario in which all or partial information from files on multiple
systems can be sent to and combined in a single file on a target system. In its user
journal replication processes, MIMIX implements file combining between multiple
source systems and a target system that are defined to the same MIMIX installation.
MIMIX determines what data from the multiple source files is sent to the target system
based on the contents of a journal transaction. An example of file combining is when
many locations within an enterprise update a local file and the updates from all local
files are sent to one location to update a composite file. The example in Figure 20
shows file combining from multiple source systems onto a composite file on the
management system.
To enable file combining between two systems, MIMIX user journal replication must
be configured as follows:
• Configure the data group definition for keyed replication. See topic “Keyed
replication” on page 355.
• If only part of the information from the source system is to be sent to the target
system, you need an exit program to filter out transactions that should not be sent
to the target system.
• If you allow the data group to be switched (by specifying *YES for the Allow to be
switched (ALWSWT) parameter) and a switch occurs, the file combining operation
effectively becomes a file routing operation. To ensure that the data group will
perform file combining operations after a switch, you need an exit program that
allows the appropriate transactions to be processed regardless of which system is
acting as the source for replication.
• After the combining operation is complete, if the combined data will be replicated
or distributed again, you need to prevent it from returning to the system on which it
originated.
File routing is a scenario in which information from a single file can be split and sent
to files on multiple target systems. In user journal replication processes, MIMIX
implements file routing between a source system and multiple target systems that are
defined to the same MIMIX installation. To enable file routing, MIMIX calls a user exit
program that makes the file routing decision. The user exit program determines what
data from the source file is sent to each of the target systems based on the contents
of a journal transaction. An example of file routing is when one location within an
enterprise performs updates to a file for all other locations, but only updated
information relevant to a location is sent back to that location. The example in Figure
21 shows the management system routing only the information relevant to each
network system to that system.
To enable file routing, MIMIX user journal replication processes must be configured
as follows:
• Configure the data group definition for keyed replication. See topic “Keyed
replication” on page 355.
• The data group definition must call an exit program that filters transactions so that
only those transactions which are relevant to the target system are sent to it.
• If you allow the data group to be switched (by specifying *YES for the Allow to be
switched (ALWSWT) parameter) and a switch occurs, the file routing operation
effectively becomes a file combining operation. To ensure that the data group will
perform file routing operations after a switch, you need an exit program that allows
the appropriate transactions to be processed regardless of which system is acting
as the source for replication.
Data can pass through one intermediate system within a MIMIX installation.
Additional MIMIX installations will allow you to support cascading in scenarios that
require data to flow through two or more intermediate systems before reaching its
destination. Figure 22 shows the basic cascading configuration that is possible within
one MIMIX installation.
data groups acting between the management system and the destination systems
and need to prevent updates from flowing back to their system of origin.
Figure 23. Bi-directional example that implements cascading for file distribution.
Trigger support
A trigger program is a user exit program that is called by the database when a
database modification occurs. Trigger programs can be used to make other database
modifications, which are called trigger-induced database modifications.
This is because the database apply process checks each transaction before
processing to see if filtering is required, and firing the trigger adds additional
overhead to database processing.
Constraint support
A constraint is a restriction or limitation placed on a file. There are four types of
constraints: referential, unique, primary key and check. Unique, primary key and
check constraints are single file operations transparent to MIMIX. If a constraint is met
for a database operation on the source system, the same constraint will be met for the
replicated database operation on the target. Referential constraints, however, ensure
the integrity between multiple files. For example, you could use a referential constraint
to:
• Ensure when an employee record is added to a personnel file that it has an
associated department from a company organization file.
• Empty a shopping cart and remove the order records if an internet shopper exits
without placing an order.
When constraints are added, removed or changed on files replicated by MIMIX, these
constraint changes will be replicated to the target system. With the exception of files
that have been placed on hold, MIMIX always enables constraints and applies
constraint entries. MIMIX tolerates mismatched before images or minimized journal
entry data CRC failures when applying constraint-generated activity. Because the
parent record was already applied, entries with mismatched before images are
applied and entries with minimized journal entry data CRC failures are ignored. To
use this support:
• Ensure that your target system is at the same or a later release level than the
source system so that the target system is able to use all of the i5/OS function
that is available on the source system. If an earlier i5/OS level is installed on the
target system, the operation will be ignored.
• You must have your MIMIX environment configured for either MIMIX Dynamic
Apply or legacy cooperative processing.
Referential constraint handling for these dependent files is supported through the
replication of constraint-induced modifications.
MIMIX does not provide the ability to disable constraints because i5/OS would check
every record in the file to ensure constraints are met once the constraint is re-
enabled. This would cause a significant performance impact on large files and could
impact switch performance. If the need exists, this can be done through automation.
Handling SQL identity columns
If you replicate an SQL table that has an identity column using a switchable data
group, you may experience problems following a switch to the backup system. The
next identity column value generated on the backup system may not be what you expect.
In environments with both systems running i5/OS V5R4 or higher and MIMIX service
pack 5.0.09.00 or higher, MIMIX automatically checks for scenarios that can cause
duplicate identity column values and, if possible, attempts to prevent the problem
from occurring. Even in this environment, MIMIX cannot prevent all troublesome
scenarios from occurring.
As a result, the Set Identity Column Attribute (SETIDCOLA) command is available to
help support SQL tables with identity columns. This command is useful for handling
scenarios that would otherwise result in errors caused by duplicate identity column
values when inserting rows into tables.
other than the next expected value. The starting value for the value generator on the
backup system is used instead of the next expected value based on the table’s
content. This can result in the reuse of identity column values which in turn can cause
a duplicate key exception.
Detailed technical descriptions of all attributes are available in the IBM eServer
iSeries Information Center. Look in the Database section for the SQL Reference for
CREATE TABLE and ALTER TABLE statements.
chosen must be valid for all tables in the data group. See “Examples of choosing a
value for INCREMENTS” on page 377.
Not supported -The following scenarios are known to be problematic and are not
supported. If you cannot use the SETIDCOLA command in your environment,
consider the “Alternative solutions” on page 375.
• Columns that have cycled - If an identity column allows cycling and adding a row
increments its value beyond the maximum range, the restart value is reset to the
beginning of the range. Because cycles are allowed, the assumption is that
duplicate keys will not be a problem. However, unexpected behavior may occur
when cycles are allowed and old rows are removed from the table with a
frequency such that the identity column values never actually complete a cycle. In
this scenario, the ideal starting point would be wherever there is the largest gap
between existing values. The SETIDCOLA command cannot address this
scenario; it must be handled manually.
• Rows deleted on production table - An application may require that an identity
column value never be generated twice. For example, the value may be stored in
a different table, data area or data queue, given to another application, or given to
a customer. The application may also require that the value always locate either
the original row or, if the row is deleted, no row at all. If rows with values at the end
of the range are deleted and you perform a switch followed by the SETIDCOLA
command, the identity column values of the deleted rows will be re-generated for
newly inserted rows. The SETIDCOLA command is not recommended for this
environment. This must be handled manually.
• No rows in backup table - If there are no rows in the table on the backup system,
the restart value will be set to the initial start value. Running the SETIDCOLA
command on the backup system may result in re-generating values that were
previously used. The SETIDCOLA command cannot address this scenario; it
must be handled manually.
• Application generated values - Optionally, applications can supply identity column
values at the time they insert rows into a table. These application-generated
identity values may be outside the minimum and maximum values set for the
identity column. For example, a table’s identity column range may be from 1
through 100,000,000 but an application occasionally supplies values in the range
of 200,000,000 through 500,000,000. If cycling is permitted and the SETIDCOLA
command is run, the command would recognize the higher values from the
application and would cycle back to the minimum value of 1. Because the result
would be problematic, the SETIDCOLA command is not recommended for tables
which allow application-generated identity values. This must be handled
manually.
Alternative solutions
If you cannot use the SETIDCOLA command because of its known limitations, you
have these options.
Manually reset the identity column starting point: Following a switch to the
backup system, you can manually reset the restart value for tables with identity
columns. The SQL statement ALTER TABLE name ALTER COLUMN can be used for
this purpose.
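For example, assuming a hypothetical table ORDERS in library INVLIB whose identity
column ORDID should next generate the value 500,000, a statement similar to the
following could be used (the library, table, and column names here are illustrative
only):

ALTER TABLE INVLIB/ORDERS ALTER COLUMN ORDID RESTART WITH 500000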
Convert to SQL sequence objects: To overcome the limitations of identity column
switching and to avoid the need to use the SETIDCOLA command, SQL sequence
objects can be used instead of identity columns. Sequence objects are implemented
using a data area which can be replicated by MIMIX. The data area for the sequence
object must be configured for replication through the user journal (cooperatively
processed).
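As an illustrative sketch only (the object names are not from this book), a sequence
object can be created with an SQL statement such as the following. Applications then
obtain values with the NEXT VALUE FOR expression instead of relying on an identity
column:

CREATE SEQUENCE LIBX/ORDSEQ START WITH 1 INCREMENT BY 1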
Following a planned switch where tables are synchronized, you can usually use
*DFT.
number-of-increments-to-skip Specify the number of increments to skip. Valid
values are 1 through 2,147,483,647. Following an unplanned switch, use a larger
value to ensure that you skip any values used on the production system that may
not have been replicated to the backup system.
Usage notes
• The reason you are using this command determines which system you should run
it from. See “When the SETIDCOLA command is useful” on page 374 for details.
• The command can be invoked manually or as part of a MIMIX Model Switch
Framework custom switching program. Evaluation of your environment to
determine an appropriate increment value is highly recommended before using
the command.
• This command can be long running when many files defined for replication by the
specified data group contain identity columns. This is especially true when
affected identity columns do not have indexes over them or when they are
referenced by constraints. Specifying a higher number of jobs (JOBS) can reduce
this time.
• This command creates a work library named SETIDCOLA which is used by the
command. The SETIDCOLA library is not deleted so that it can be used for any
error analysis.
• Internally, the SETIDCOLA command builds RUNSQLSTM scripts (one for each
job specified) and uses RUNSQLSTM in spawned jobs to execute the scripts.
RUNSQLSTM produces spooled files showing the ALTER TABLE statements
executed, along with any error messages received. If any statement fails, the
RUNSQLSTM will also fail, returning the failing status to the job in which
SETIDCOLA is running, and an escape message will be issued.
For example, data group ORDERS contains tables A and B. Each row added to table
A increases the identity value by 1 and each row added to table B increases the
identify value by 1,000. Rows are inserted into table A at a rate of approximately 600
rows per hour. Rows are inserted into table B at a rate of approximately 20 rows per
hour. Prior to a switch, on the production system the latest value for table A was 75
and the latest value for table B was 30,000. Consider the following scenarios:
• Scenario 1. You performed a planned switch for test purposes. Because
replication of all transactions completed before the switch and no users have been
allowed on the backup system, the backup system has the same values as the
production. Before starting replication in the reverse direction you run the
SETIDCOLA command with an INCREMENTS value of 1. The next rows added
to table A and B will have values of 76 and 31,000, respectively.
• Scenario 2. You performed an unplanned switch. From previous experience, you
know that the latency of changes being transferred to the backup system is
approximately 15 minutes. Rows are inserted into Table A at the highest rate. In
15 minutes, approximately 150 rows will have been inserted into Table A (600
rows/hour * 0.25 hours). This suggests an INCREMENTS value of 150. However,
since all measurements are approximations or based on historical data, this
amount should be increased by at least 100%, to 300, to ensure that
duplicate identity column values are not generated on the backup system. The
next rows added to table A and B will have values of 75+(300*1) = 375 and 30,000
+ (300*1000)= 330,000 respectively.
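Using the data group from these scenarios, the scenario 2 adjustment could be made
with a command such as the following (the system names SYS1 and SYS2 are
illustrative):

SETIDCOLA DGDFN(ORDERS SYS1 SYS2) ACTION(*SET) INCREMENTS(300)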
limitations” on page 374.
3. Determine what increment value is appropriate for use for all tables replicated by
the data group. Consider the needs of each table. Also consider the MIMIX
backlog at the time you plan to use the command. See “Examples of choosing a
value for INCREMENTS” on page 377.
4. From the appropriate system, as defined in “When the SETIDCOLA command is
useful” on page 374 specify a data group and the number of increments to skip in
the command:
SETIDCOLA DGDFN(name system1 system2) ACTION(*SET)
INCREMENTS(number)
Collision resolution
Collision resolution is a function within MIMIX user journal replication that
automatically resolves detected collisions without user intervention. MIMIX supports
the following choices for collision resolution that you can specify in the file entry
options (FEOPT) parameter in either a data group definition or in an individual data
group file entry:
• Held due to error: (*HLDERR) This is the default value for collision resolution in
the data group definition and data group file entries. MIMIX flags file collisions as
errors and places the file entry on hold. Any data group file entry for which a
collision is detected is placed in a "held due to error" state (*HLDERR). This
results in the journal entries being replicated to the target system but they are not
applied to the target database. If the file entry specifies member *ALL, a
temporary file entry is created for the member in error and only that file entry is
held. Normal processing will continue for all other members in the file. You must
take action to apply the changes and return the file entry to an active state. When
held due to error is specified in the data group definition or the data group file
entry, it is used for all 12 of the collision points.
• Automatic synchronization: (*AUTOSYNC) MIMIX attempts to automatically
synchronize file members when an error is detected. The member is put on hold
while the database apply process continues with the next transaction. The file
member is synchronized using copy active file processing, unless the collision
occurred at the compare attributes collision point. In the latter case, the file is
synchronized using save and restore processing. When automatic
synchronization is specified in the data group definition or data group file entry, it
is used for all 12 of the collision points.
• Collision resolution class: A collision resolution class is a named definition
which provides more granular control of collision resolution. Some collision points
also provide additional methods of resolution that can only be accessed by using
a collision resolution class. With a defined collision resolution class, you can
specify how to handle collision resolution at each of the 12 collision points. You
can specify multiple methods of collision resolution to attempt at each collision
point. If the first method specified does not resolve the problem, MIMIX uses the
next method specified for that collision point.
• You must specify either *AUTOSYNC or the name of a collision resolution class
for the Collision resolution element of the File entry option (FEOPT) parameter.
Specify the value as follows:
– If you want to implement collision resolution for all files processed by a data
group, specify a value in the parameter within the data group definition.
– If you want to implement collision resolution for only specific files, specify a
value in the parameter within an individual data group file entry.
Note: Ensure that data group activity is ended before you change a data group
definition or a data group file entry.
• If you plan to use an exit program for collision resolution, you must first create a
named collision resolution class. In the collision resolution class, specify
*EXITPGM for each of the collision points that you want to be handled by the exit
program and specify the name of the exit program.
7. At the Number of retry attempts prompt, specify the number of times to try to
automatically synchronize a file. If this number is exceeded in the time specified in
the Retry time limit, the file will be placed on hold due to error.
8. At the Retry time limit prompt, specify the maximum number of hours to
retry a process if a failure occurs due to a locking condition or an in-use condition.
Note: If a file encounters repeated failures, an error condition that requires
manual intervention is likely to exist. Allowing excessive synchronization
requests can cause communications bandwidth degradation and
negatively impact communications performance.
9. To create the collision resolution class, press Enter.
Printing a collision resolution class
Use this procedure to create a spooled file of a collision resolution class which you
can print.
1. From the Work with CR Classes display, type a 6 (Print) next to the collision
resolution class you want and press Enter.
2. A spooled file is created with the name MXCRCLS on which you can use your
standard printing procedure.
Omitting T-ZC content from system journal replication
For logical and physical files configured for replication solely through the system
journal, MIMIX provides the ability to prevent replication of predetermined sets of T-
ZC journal entries associated with changes to object attributes or content changes.
Default T-ZC processing: Files that have an object auditing value of *CHANGE or
*ALL will generate T-ZC journal entries whenever changes to the object attributes or
contents occur. The access type field within the T-ZC journal entry indicates what
type of change operation occurred. Table 46 lists the T-ZC journal entry access types
that are generated by PF-DTA, PF38-DTA, PF-SRC, PF38-SRC, LF, and LF-38 file
types.
Table 46. T-ZC journal entry access types generated by file objects. These T-ZC journal entries are eligible
for replication through the system journal.
Columns: Access Type; Access Type Description; Operation Type (File, Member, Data); Operations that Generate T-ZC Access Type
By default, MIMIX replicates file attributes and file member data for all T-ZC entries
generated for logical and physical files configured for system journal replication. While
MIMIX recreates attribute changes on the target system, member additions and data
changes require MIMIX to replicate the entire object using save, send, and restore
processes. This can cause unnecessary replication of data and can impact
processing time, especially in environments where the replication of file data
transactions is not necessary.
Omitting T-ZC entries: Through the Omit content (OMTDTA) parameter on data
group object entry commands, you can specify a predetermined set of access types
for *FILE objects to be omitted from system journal replication. T-ZC journal entries
with access types within the specified set are omitted from processing by MIMIX.
The OMTDTA parameter is useful when a file or member’s data does not need to be
replicated. For example, when replicating work files and temporary files, it may be
desirable to replicate the file layout but not the file members or data. The OMTDTA
parameter can also help you reduce the number of transactions that require
substantial processing time to replicate, such as T-ZC journal entries with access type
30 (Open).
Each of the following values for the OMTDTA parameter define a set of access types
that can be omitted from replication:
*NONE - No T-ZCs are omitted from replication. All file, member, and data
operations in transactions for the access types listed in Table 46 are replicated.
This is the default value.
*MBR - Data operations are omitted from replication. File and member operations
in transactions for the access types listed in Table 46 are replicated. Access type
7 (Change) for both file and member operations are replicated.
*FILE - Member and data operations are omitted from replication. Only file
operations in transactions for the access types listed in Table 46 are replicated.
Only file operations in transactions with access type 7 (Change) are replicated.
continue to be journaled and replicated, the data group object entry should also
specify *CHANGE or *ALL for the Object auditing value (OBJAUD) parameter.
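For example, a data group object entry that omits member data for work files while
keeping the files at the *CHANGE auditing level might be added with a command
similar to the following (the library name WORKLIB is illustrative):

ADDDGOBJE DGDFN(name system1 system2) LIB1(WORKLIB) OBJ1(*ALL) OBJTYPE(*FILE) OMTDTA(*MBR) OBJAUD(*CHANGE)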
For all library-based objects, MIMIX evaluates the object auditing level when starting
a data group after a configuration change. If the configured value specified for the
OBJAUD parameter is higher than the object’s actual value, MIMIX will change the
object to use the higher value. If you use the SETDGAUD command to force the
object to have an auditing level of *NONE and the data group object entry also
specifies *NONE, any changes to the file will no longer generate T-ZC entries in the
system journal. For more information about object auditing, see “Managing object
auditing” on page 57.
Object attribute considerations - When MIMIX evaluates a system journal entry
and finds a possible match to a data group object entry which specifies an attribute in
its Attribute (OBJATR) parameter, MIMIX must retrieve the attribute from the object in
order to determine which object entry is the most specific match.
If the object attribute is not needed to determine the most specific match to a data
group object entry, it is not retrieved.
After determining which data group object entry has the most specific match, MIMIX
evaluates that entry to determine how to proceed with the journal entry. When the
matching object entry specifies *FILE or *MBR for OMTDTA, MIMIX does not need to
consider the object attribute in any other evaluations. As a result, the performance of
the object send job may improve.
replication. This may affect whether replicated files on the source and target systems
are identical.
For example, recall how a file with an object auditing attribute value of *NONE is
processed. After MIMIX replicates the initial creation of the file through the system
journal, the file on the target system reflects the original state of the file on the source
system when it was retrieved for replication. However, any subsequent changes to file
data are not replicated to the target system. According to the configuration
information, the files are synchronized between source and target systems, but the
files are not the same.
A similar situation can occur when OMTDTA is used to prevent replication of
predetermined types of changes. For example, if *MBR is specified for OMTDTA, the
file and member attributes are replicated to the target system but the member data is
not. The file is not identical between source and target systems, but it is synchronized
according to configuration. Comparison commands will report these attributes as *EC
(equal configuration) even though member data is different. MIMIX audits, which call
comparison commands with a data group specified, will have the same results.
Running a comparison command without specifying a data group will report all the
synchronized-but-not-identical attributes as *NE (not equal) because no configuration
information is considered.
Consider how the following comparison commands behave when faced with non-
identical files that are synchronized according to the configuration.
• The Compare File Attributes (CMPFILA) command has access to configuration
information from data group object entries for files configured for system journal
replication. When a data group is specified on the command, files that are
configured to omit data will report those omitted attributes as *EC (equal
configuration). When CMPFILA is run without specifying a data group, the
synchronized-but-not-identical attributes are reported as *NE (not equal).
• The Compare File Data (CMPFILDTA) command uses data group file entries for
configuration information. As a result, when a data group is specified on the
command, any file objects configured for OMTDTA will not be compared. When
CMPFILDTA is run without specifying a data group, the synchronized-but-not-
identical file member attributes are reported as *NE (not equal).
• The Compare Object Attributes (CMPOBJA) command can be used to check for
the existence of a file on both systems and to compare its basic attributes (those
which are common to all object types). This command never compares file-
specific attributes or member attributes and should not be used to determine
whether a file is synchronized.
Selecting an object retrieval delay
When replicating objects, particularly documents (*DOC) and stream files (*STMF),
MIMIX will obtain a lock on the object that can prevent your applications from
accessing the object in a timely manner.
Some of your applications may be unable to recover from this condition and may fail
in an unexpected manner.
You can reduce, or eliminate, contention for an object between MIMIX and your
applications if the object retrieval processing is delayed for a predetermined amount
of time before obtaining a lock on the object to retrieve it for replication.
You can use the Object retrieval delay element within the Object processing
parameter on the change or create data group definition commands to set the delay
time between the time the object was last changed on the source system and the time
MIMIX attempts to retrieve the object on the source system.
Although you can specify this value at the data group level, you can override the data
group value at the object level by specifying an Object retrieval delay value on the
commands for creating or changing data group entries.
You can specify a delay time from 0 through 999 seconds. The default is 0.
If the object retrieval latency time (the difference between when the object was last
changed and the current time) is less than the configured delay value, then MIMIX will
delay its object retrieval processing until the difference between the time the object
was last changed and the current time exceeds the configured delay value.
If the object retrieval latency time is greater than the configured delay value, MIMIX
will not delay and will continue with the object retrieval processing.
• The Object Retrieve job encounters the create/change journal entry at 10:45:52. It
retrieves the “last change date/time” attribute from the object and determines that
the delay time (object last changed date/time of 10:45:51 + configured delay value
of :02 = 10:45:53) exceeds the current date/time (10:45:52). Because the object
retrieval delay value has not been met or exceeded, the object retrieve job delays for
1 second to satisfy the configured delay value.
• After the delay (at time 10:45:53), the Object Retrieve job again retrieves the “last
change date/time” attribute from the object and determines that the delay time
(object last changed date/time of 10:45:51 + configured delay value of :02 =
10:45:53) is equal to the current date/time (10:45:53). Because the object retrieval
delay value has been met, the object retrieve job continues with normal
processing and attempts to package the object.
Example 3 - The object retrieval delay value is configured to be 4 seconds:
• Object A is created or changed at 13:20:26.
• The Object Retrieve job encounters the create/change journal entry at 13:20:27. It
retrieves the “last change date/time” attribute from the object and determines that
the delay time (object last changed date/time of 13:20:26 + configured delay value
of :04 = 13:20:30) exceeds the current date/time (13:20:27) and delays for 3
seconds to satisfy the configured delay value.
• While the object retrieve job is waiting to satisfy the configured delay value, the
object is changed again at 13:20:28.
• After the delay (at time 13:20:30), the Object Retrieve job again retrieves the “last
change date/time” attribute from the object and determines that the delay time
(object last changed date/time of 13:20:28 + configured delay value of :04 =
13:20:32) again exceeds the current date/time (13:20:30) and delays for 2
seconds to satisfy the configured delay value.
• After the delay (at time 13:20:32), the Object Retrieve job again retrieves the “last
change date/time” attribute from the object and determines that the delay time
(object last changed date/time of 13:20:28 + configured delay value of :04 =
13:20:32) is equal to the current date/time (13:20:32). Because the object retrieval
delay value has now been met, the object retrieve job continues with normal
processing and attempts to package the object.
Configuring to replicate SQL stored procedures and
user-defined functions
DB2 UDB for System i5 supports external stored procedures and SQL stored
procedures. This information is specifically for replicating SQL stored procedures and
user-defined functions. SQL stored procedures are defined entirely in SQL and may
contain SQL control statements. MIMIX can replicate operations related to stored
procedures that are written in SQL (SQL stored procedures), such as CREATE
PROCEDURE (create), DROP PROCEDURE (delete), GRANT PRIVILEGES ON
PROCEDURE (authority), and REVOKE PRIVILEGES ON PROCEDURE (authority).
An SQL procedure is a program created and linked to the database as the result of a
CREATE PROCEDURE statement that specifies the language SQL and is called using
the SQL CALL statement. For example, the following statement creates program
SQLPROC in LIBX and establishes it as a stored procedure associated with LIBX:
CREATE PROCEDURE LIBX/SQLPROC(OUT NUM INT) LANGUAGE SQL
SELECT COUNT(*) INTO NUM FROM FILEX
For SQL stored procedures, an independent program object is created by the system
and contains the code for the procedure. The program object usually shares the name
of the procedure and resides in the same library with which the procedure is
associated. A DROP PROCEDURE statement for an SQL procedure removes the
procedure from the catalog and deletes the external program object.
Procedures are associated with a particular library. Because information about the
procedure is stored in the database catalog and not the library, it cannot be seen by
looking at the library. Use System i5 Navigator to view the stored procedures
associated with a particular library (select Databases > Libraries).
2. Ensure that you have a data group object entry that includes the associated
program object. For example:
ADDDGOBJE DGDFN(name system1 system2) LIB1(library)
OBJ1(*ALL) OBJTYPE(*PGM)
Using Save-While-Active in MIMIX
value will also use save-while-active. All other attempts to save the object will use a
normal save.
Note: Although MIMIX has the capability to replicate DLOs using save/restore
techniques, it is recommended that DLOs be replicated using optimized
techniques, which can be configured using the DLO transmission method
under Object processing in the data group definition.
Example configurations
The following examples describe the SQL statements that could be used to view or
set the configuration settings for a data group definition (data group name, system 1
name, system 2 name) of MYDGDFN, SYS1, SYS2.
Example - Viewing: Use this SQL statement to view the values for the data group
definition:
SELECT DGDGN, DGSYS, DGSYS2, DGSWAT FROM MIMIX/DM0200P WHERE
DGDGN='MYDGDFN' AND DGSYS='SYS1' AND DGSYS2='SYS2'
Example - Disabling: If you want to modify the values for a data group definition to
disable use of save-while-active for a data group and use a normal save, you could
use the following statement:
UPDATE MIMIX/DM0200P SET DGSWAT=-1 WHERE DGDGN='MYDGDFN' AND
DGSYS='SYS1' AND DGSYS2='SYS2'
Example - Modifying: If you want to modify a data group definition to enable use of
save-while-active with a wait time of 30 seconds for files, DLOs and IFS objects, you
could use the following statement:
UPDATE MIMIX/DM0200P SET DGSWAT=30 WHERE DGDGN='MYDGDFN' AND
DGSYS='SYS1' AND DGSYS2='SYS2'
Note: You only have to make this change on the management system; the network
system will be automatically updated by MIMIX.
Chapter 17
Object selection for Compare and Synchronize commands
Many of the Compare and Synchronize commands, which provide underlying support
for MIMIX AutoGuard, use an enhanced set of common parameters and a common
processing methodology that is collectively referred to as ‘object selection.’ Object
selection provides powerful, granular capability for selecting objects by data group,
object selection parameter, or a combination.
The following commands use the MIMIX object selection capability:
• Compare File Attributes (CMPFILA)
• Compare Object Attributes (CMPOBJA)
• Compare IFS Attributes (CMPIFSA)
• Compare DLO Attributes (CMPDLOA)
• Compare File Data (CMPFILDTA)
• Compare Record Count (CMPRCDCNT)
• Synchronize Object (SYNCOBJ)
• Synchronize IFS Object (SYNCIFS)
• Synchronize DLO (SYNCDLO)
The topics in this chapter include:
• “Object selection process” on page 399 describes object selection which interacts
with your input from a command so that the objects you expect are selected for
processing.
• “Parameters for specifying object selectors” on page 402 describes object
selectors and elements which allow you to work with classes of objects
• “Object selection examples” on page 407 provides examples and graphics with
detailed information about object selection processing, object order precedence,
and subtree rules.
• “Report types and output formats” on page 418 describes the output of compare
commands: spooled files and output files (outfiles).
Object selection process
The object selection process takes a candidate group of objects, subsets them as
defined by a list of object selectors, and produces a list of objects to be processed.
Figure 24 illustrates the process flow for object selection.
Candidate objects are those objects eligible for selection. They are input to the
object selection process. Initially, candidate objects consist of all objects on the
system. Based on the command, the set of candidate objects may be narrowed down
to objects of a particular class (such as IFS objects).
The values specified on the command determine the object selectors used to further
refine the list of candidate objects in the class. An object selector identifies an object
or group of objects. Object selectors can come from the configuration information for
a specified data group, from items specified in the object selector parameter, or both.
MIMIX processing for object selection consists of two distinct steps. Depending on
what is specified on the command, one or both steps may occur.
The first major selection step is optional and is performed only if a data group
definition is entered on the command. In that case, data group entries are the
source for object selectors. Data group entries represent one of four classes of
objects: files, library-based objects, IFS objects, and DLOs. Only those entries that
correspond to the class associated with the command are used. The data group
entries subset the list of candidate objects for the class to only those objects that are
eligible for replication by the data group.
If the command specifies a data group and items on the object selection parameter,
the data group entries are processed first to determine an intermediate set of
candidate objects that are eligible for replication by the data group. That intermediate
set is input to the second major selection step. The second step then uses the input
specified on the object selection parameter to further subset the objects selected by
the data group entries.
If no data group is specified on the data group definition parameter, the object
selection parameter can be used independently to select from all objects on the
system.
The second major object selection step subsets the candidate objects based on
object selectors from the command’s object selector parameter (file, object, IFS
object, or DLO). Up to 300 object selectors may be specified on the parameter. If
none are specified, the default is to select all candidate objects.
Note: A single object selector can select multiple objects through the use of generic
names and special values such as *ALL, so the resulting object list can easily
exceed the limit of 300 object selectors that can be entered on a command.
The selection parameter is separate and distinct from the data group
configuration entries. If a data group is specified, the number of possible object
selectors is 1 to N, where N is defined by the number of data group entries. The
remaining candidate objects make up the resultant list of objects to be processed.
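The two-step flow described above can be sketched as follows. This is a simplified Python illustration; the list-of-patterns data structures and the generic_match helper are assumptions made for the sketch, not MIMIX internals:

```python
def generic_match(pattern, name):
    """Match a specific name, a generic name ending in '*', or *ALL."""
    if pattern == "*ALL":
        return True
    if pattern.endswith("*"):
        return name.startswith(pattern[:-1])
    return pattern == name

def select_objects(candidates, dg_entries=None, selectors=None):
    """Step 1 (optional): subset candidates to objects eligible for
    replication by the data group. Step 2: apply the object selectors;
    if none are specified, all remaining candidates are selected."""
    if dg_entries is not None:
        candidates = [o for o in candidates
                      if any(generic_match(p, o) for p in dg_entries)]
    if not selectors:
        return list(candidates)
    result = []
    for obj in candidates:
        # Selectors are processed last to first; the last match decides.
        for pattern, action in reversed(selectors):
            if generic_match(pattern, obj):
                if action == "include":
                    result.append(obj)
                break
    return result

# Data group entry A* narrows the candidates; the selectors then include
# A* but omit the specific object AB.
print(select_objects(["A", "AB", "D", "DE"],
                     dg_entries=["A*"],
                     selectors=[("A*", "include"), ("AB", "omit")]))  # ['A']
```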
Each object selector consists of multiple object selector elements, which serve as
filters on the object selector. The object selector elements vary by object class.
Elements provide information about the object such as its name, an indicator of
whether the objects should be included in or omitted from processing, and name
mapping for dual-system and single-system environments. See Table 47 for a list of
object selector elements by object class.
Order precedence
Object selectors are always processed in a well-defined sequence, which is important
when an object matches more than one selector.
Parameters for specifying object selectors
Selectors from a data group follow data group rules and are processed in most- to
least-specific order. Selectors from the object selection parameter are always
processed last to first. If a candidate object matches more than one object selector,
the last matching selector in the list is used.
As a general rule when specifying items on an object selection parameter, first specify
selectors that have a broad scope and then gradually narrow the scope in subsequent
selectors. In an IFS-based command, for example, include /A/B* and then omit /A/B1.
“Object selection examples” on page 407 illustrates the precedence of object
selection.
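The include-then-omit guidance above can be made concrete with a short sketch (Python; the simplified path matcher is an assumption for illustration):

```python
def path_match(pattern, path):
    """Simplified generic matching: a trailing '*' matches any suffix."""
    if pattern.endswith("*"):
        return path.startswith(pattern[:-1])
    return pattern == path

def resolve(selectors, path):
    """Selectors are processed last to first; the last matching selector
    in the entered list determines whether the object is selected."""
    for pattern, action in reversed(selectors):
        if path_match(pattern, path):
            return action
    return "omit"  # no selector matched; the object is not selected

# Broad include first, then a narrower omit:
selectors = [("/A/B*", "include"), ("/A/B1", "omit")]
print(resolve(selectors, "/A/B2"))  # include (only the broad entry matches)
print(resolve(selectors, "/A/B1"))  # omit (the later, more specific entry wins)
```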
For each object selector, the elements are checked according to a priority defined for
the object class. The most specific element is checked for a match first, then the
subsequent elements are checked according to their priority. For additional, detailed
information about order precedence and priority of elements, see the following topics:
• “How MIMIX uses object entries to evaluate journal entries for replication” on
page 101
• “Identifying IFS objects for replication” on page 118
• “How MIMIX uses DLO entries to evaluate journal entries for replication” on
page 124
• “Processing variations for common operations” on page 130
Name mapping elements by object class: files use the System 2 file and System 2
library elements (see note 1); library-based objects use System 2 object and
System 2 library; IFS objects and DLOs use System 2 path and System 2 name
pattern.
1. The Compare Record Count (CMPRCDCNT) command does not support elements for attributes or name mapping.
File name and object name elements: The File name and Object name elements
allow you to identify a file or object by name. These elements allow you to choose a
specific name, a generic name, or the special value *ALL.
Using a generic name, you can select a group of files or objects based on a common
character string. If you want to work with all objects beginning with the letter A, for
example, you would specify A* for the object name.
To process all files within the related selection criteria, select *ALL for the file or object
name. When a data group is also specified on the command, a value of *ALL results
in the selection of files and objects defined to that data group by the respective data
group file entries or data group object entries. When no data group is specified on
the command and *ALL is specified with a library name, only the objects that reside
within the given library are selected.
Library name element: The library name element specifies the name of the library
that contains the files or objects to be included or omitted from the resultant list of
objects. Like the file or object name, this element allows you to identify a library by a
specific name, a generic name, or the special value *ALL.
Note: The library value *ALL is supported only when a data group is specified.
Member element: For commands that support the ability to work with file members,
the Member element provides a means to select specific members. The Member
element can be a specific name, a generic name, or the special value *ALL.
Refer to the individual commands for detailed information on member processing.
Object path name (IFS) and DLO path name elements: The Object path name
(IFS) and DLO path name elements identify an object or DLO by path name. They
allow a specific path, a generic path, or the special value *ALL.
Traditionally, DLOs are identified by a folder path and a DLO name. Object selection
uses an element called DLO path, which combines the folder path and the DLO
name.
If you specify a data group, only those objects defined to that data group by the
respective data group IFS entries or data group DLO entries are selected.
Directory subtree and folder subtree elements: The Directory subtree and Folder
subtree elements allow you to expand the scope of selected objects and include the
descendants of objects identified by the given object or DLO path name. By default,
the subtree element is *NONE, and only the named objects are selected. However, if
*ALL is used, all descendants of the named objects are also selected.
Figure 25 illustrates the hierarchical structure of folders and directories prior to
processing, and is used as the basis for the path, pattern, and subtree examples
shown later in this document. For more information, see the graphics and examples
beginning with “Example subtree” on page 410.
Directory subtree elements for IFS objects: When selecting IFS objects, only
objects in the specified file system are included. Object selection does not cross file
system boundaries when processing subtrees of IFS objects. You do not need to
explicitly exclude objects from other file systems; however, you must explicitly
specify objects from other file systems if you want them included. For more
information, see the graphic and examples beginning with “Example subtree for IFS
objects” on page 415.
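The file system boundary rule can be illustrated with a hedged sketch. The /A/mnt mount point and the fs_of mapping are hypothetical examples invented for the illustration:

```python
def subtree_within_filesystem(candidates, path, fs_of):
    """Select the named path and its descendants, but do not cross file
    system boundaries: keep only objects in the same file system."""
    root_fs = fs_of(path)
    prefix = path.rstrip("/") + "/"
    return [c for c in candidates
            if (c == path or c.startswith(prefix)) and fs_of(c) == root_fs]

# Hypothetical layout: a different file system is mounted at /A/mnt.
fs = {"/A": "root", "/A/B": "root", "/A/mnt": "userfs", "/A/mnt/doc": "userfs"}
print(subtree_within_filesystem(list(fs), "/A", fs.get))  # ['/A', '/A/B']
```

The objects under /A/mnt are descendants of /A by path, but they are excluded because they belong to a different file system.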
Name pattern element: The Name pattern element provides a filter on the last
component of the object path name. The Name pattern element can be a specific
name, a generic name, or the special value *ALL.
If you specify a pattern of $*, for example, only those candidate objects with names
beginning with $ that reside in the named DLO path or IFS object path are selected.
Keep in mind that improper use of the Name pattern element can have undesirable
results. Let us assume you specified a path name of /corporate, a subtree of *NONE,
and pattern of $*. Since the path name, /corporate, does not match the pattern of $*,
the object selector will identify no objects. Thus, the Name pattern element is
generally most useful when subtree is *ALL.
For more information, see the “Example Name pattern” on page 414.
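The /corporate caution above can be demonstrated with a small sketch (Python; the matcher is a simplified assumption):

```python
import posixpath

def name_pattern_filter(paths, pattern):
    """Apply the Name pattern element to the last path component."""
    def match(name):
        if pattern == "*ALL":
            return True
        if pattern.endswith("*"):
            return name.startswith(pattern[:-1])
        return pattern == name
    return [p for p in paths if match(posixpath.basename(p))]

# Subtree *NONE: the only candidate is /corporate itself, which does not
# match the pattern $*, so no objects are selected.
print(name_pattern_filter(["/corporate"], "$*"))  # []

# Subtree *ALL: descendants are candidates too, so $-named objects match.
print(name_pattern_filter(
    ["/corporate", "/corporate/$123", "/corporate/rpt"], "$*"))  # ['/corporate/$123']
```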
Object type element: The Object type element provides the ability to filter objects
based on an object type. The object type is valid for library-based objects, IFS
objects, or DLOs, and can be a specific value or *ALL. The list of allowable values
varies by object class.
When you specify *ALL, only those object types which MIMIX supports for replication
are included. For a list of replicated object types, see “Supported object types for
system journal replication” on page 549.
Supported object types for CMPIFSA and SYNCIFS are listed in Table 48:
*ALL All directories, stream files, and symbolic links are selected
*DIR Directories
*STMF Stream files
*SYMLNK Symbolic links
Supported object types for CMPDLOA and SYNCDLO are listed in Table 49:
*DOC Documents
*FLR Folders
For unique object types supported by a specific command, see the individual
commands.
Object attribute element: The Object attribute element provides the ability to filter
based on extended object attribute. For example, file attributes include PF, LF, SAVF,
and DSPF, and program attributes include CLP and RPG. The attribute can be a
specific value, a generic value, or *ALL.
Although any value can be entered on the Object attribute element, a list of supported
attributes is available on the command. Refer to the individual commands for the list
of supported attributes.
Owner element: The Owner element allows you to filter DLOs based on DLO owner.
The Owner element can be a specific name or the special value *ALL. Only candidate
DLOs owned by the designated user profile are selected.
Include or omit element: The Include or omit element determines whether candidate
objects are included in or omitted from the resultant list of objects to be processed by
the command.
Included entries are added to the resultant list and become candidate objects for
further processing. Omitted entries are not added to the list and are excluded from
further processing.
System 2 file and system 2 object elements: The System 2 file and System 2
object elements provide support for name mapping. Name mapping is useful when
working with multiple sets of files or objects in a dual-system or single-system
environment.
This element may be a specific name or the special value *FILE1 for files or *OBJ1 for
objects. If the File or Object element is not a specific name, then you must use the
default value of *FILE1 or *OBJ1. This specification indicates that the name of the file
or object on system 2 is the same as on system 1 and that no name mapping occurs.
Generic values are not supported for the system 2 value if a generic value was
specified on the File or Object parameter.
System 2 library element: The System 2 library element allows you to specify a
system 2 library name that differs from the system 1 library name, providing name
mapping between files or objects in different libraries.
This element may be a specific name or the special value *LIB1. If the System 2
library element is not a specific name, then you must use the default value of *LIB1.
This specification indicates that the name of the library on system 2 is the same as on
system 1 and that no name mapping occurs. Generic values are not supported for the
system 2 value if a generic value was specified on the Library object selector.
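The defaulting behavior of the System 2 elements can be sketched as follows. This is an illustrative Python sketch; the function and the ORDERS/PRODLIB/TESTLIB names are hypothetical:

```python
def resolve_system2(file1, lib1, file2="*FILE1", lib2="*LIB1"):
    """Resolve system 2 names: the defaults *FILE1 and *LIB1 mean the
    system 2 name is the same as on system 1 (no name mapping)."""
    return (file1 if file2 == "*FILE1" else file2,
            lib1 if lib2 == "*LIB1" else lib2)

# No mapping: the defaults carry the system 1 names through.
print(resolve_system2("ORDERS", "PRODLIB"))                  # ('ORDERS', 'PRODLIB')
# Library mapping only: compare PRODLIB on system 1 to TESTLIB on system 2.
print(resolve_system2("ORDERS", "PRODLIB", lib2="TESTLIB"))  # ('ORDERS', 'TESTLIB')
```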
System 2 object path name and system 2 DLO path name elements: The System
2 object path name and System 2 DLO path name elements support name mapping
for the path specified in the Object path name or DLO path name element. Name
mapping is useful when working with two sets of IFS objects or DLOs in different
paths in either a dual-system or single-system environment.
Generic values are not supported for the system 2 value if you specified a generic
value for the IFS Object or DLO element. Instead, you must choose the default values
of *OBJ1 for IFS objects or *DLO1 for DLOs. These values indicate that the name of
the file or object on system 2 is the same as that value on system 1. The default
provides support for a two-system environment without name mapping.
System 2 name pattern element: The System 2 name pattern element provides
support for name mapping for the descendants of the path specified for the Object
path name or DLO path name element.
The System 2 name pattern element may be a specific name or the special value
*PATTERN1. If the Object path name or DLO path name element is not a specific
name, then you must use the default value of *PATTERN1. This specification
indicates that no name mapping occurs. Generic values are not supported for the
System 2 name pattern element if you specified a generic value for the Name pattern
element.
Table 50. Candidate objects on the system (object name, library, object type)
AB LIBX *SBSD
A LIBX *OUTQ
DE LIBX *DTAARA
D LIBX *CMD
Next, Table 51 represents the object selectors based on the data group object entry
configuration for data group DG1. Objects are evaluated against data group entries in
the same order of precedence used by replication processes.
Table 51. Object selectors from data group entries for data group DG1
Object selection examples
The object selectors from the data group subset the candidate object list, resulting in
the list of objects defined to the data group shown in Table 52. This list is internal to
MIMIX and not visible to users.
A LIBX *OUTQ
AB LIBX *SBSD
Note: Although job queue DEF in library LIBX did not appear in Table 50, it would be
added to the list of candidate objects when you specify a data group for some
commands that support object selection. These commands are required to
identify or report candidate objects that do not exist.
Perhaps you now want to include or omit specific objects from the filtered candidate
objects listed in Table 52. Table 53 shows the object selectors to be processed based
on the values specified on the object selection parameter. These object selectors
serve as an additional filter on the candidate objects.
The objects compared by the CMPOBJA command are shown in Table 54. These
are the result of the candidate objects selected by the data group (Table 52) that were
subsequently filtered by the object selectors specified for the Object parameter on the
CMPOBJA command (Table 53).
A LIBX *OUTQ
AB LIBX *SBSD
In this example, the CMPOBJA command is used to compare a set of objects. The
input source is a selection parameter. No data group is specified.
The data in the following tables show how candidate objects would be processed in
order to achieve a resultant list of objects.
Table 55 lists all the candidate objects on your system.
AB LIBX *SBSD
A LIBX *OUTQ
DE LIBX *DTAARA
D LIBX *CMD
Table 56 represents the object selectors chosen on the object selection parameter.
The sequence column identifies the order in which object selectors were entered. The
object selectors serve as filters to the candidate objects listed in Table 55.
The last object selector entered on the command is the first one used when
determining whether or not an object matches a selector. Thus, generic object
selectors with the broadest scope, such as A*, should be specified ahead of more
specific generic entries, such as ABC*. Specific entries should be specified last.
Table 58 represents the included objects from Table 57. This filtered set of candidate
objects is the resultant list of objects to be processed by the CMPOBJA command.
A LIBX *OUTQ
AB LIBX *SBSD
D LIBX *CMD
DE LIBX *DTAARA
Example subtree
In the following graphics, the shaded area shows the objects identified by the
combination of the Object path name and Subtree elements of the Object parameter
for an IFS command. Circled objects represent the final list of objects selected for
processing.
additional filtering is performed on the objects identified by the path and subtree. The
candidate objects selected consist of the specified objects only.
scenario, only those candidate objects which match the generic pattern value ($123,
$236, and $895) are selected for processing.
Figure 31 illustrates a directory with a subtree that contains IFS objects. The shaded
areas are the file systems. Table 59 contains examples showing what file systems
would be selected with the path names specified and a subtree specification of *ALL.
Table 59. Examples of specified paths and objects selected for Figure 31
Report types and output formats
Spooled files
The spooled output is generated when a value of *PRINT is specified on the Output
parameter. The spooled output consists of four main sections—the input or header
section, the object selection list section, the differences section, and the summary
section.
First, the header section of the spooled report includes all of the input values specified
on the command, including the data group value (DGDFN), comparison level
(CMPLVL), report type (RPTTYPE), attributes to compare (CMPATR), actual
attributes compared, number of files, objects, IFS objects or DLOs compared, and
number of detected differences. It also includes a legend that describes the special
values used throughout the report.
The second section of the report is the object selection list. This section lists all of the
object selection entries specified on the comparison command. Similar to the header
section, it provides details on the input values specified on the command.
The detail section is the third section of the report, and provides details on the objects
and attributes compared. The level of detail in this section is determined by the report
type specified on the command. A report type value of *ALL will list all objects
compared, and will begin with a summary status that indicates whether or not
differences were detected. The summary row indicates the overall status of the object
compared. Following the summary row, each attribute compared is listed—along with
the status of the attribute and the attribute value. In the event the attribute compared
is an indicator, a special value of *INDONLY will be displayed in the value columns.
A report type value of *DIF lists details only for those objects with detected attribute
differences. A value of *SUMMARY omits the detail section for all objects.
The fourth section of the report is the summary, which provides a one row summary
for each object compared. Each row includes an indicator that indicates whether or
not attribute differences were detected.
Outfiles
The output file is generated when a value of *OUTFILE is specified on the Output
parameter. Similar to the spooled output, the level of output in the output file is
dependent on the report type value specified on the Report type parameter.
Each command is shipped with an outfile template that uses a normalized database
to deliver a self-defined record, or row, for every attribute you compare. Key
information, including the attribute type, data group name, timestamp, command
name, and system 1 and system 2 values, helps define each row. A summary row
precedes the attribute rows. The normalized database feature ensures that new
object attributes can be added to the audit capabilities without disruption to current
automation processing.
The template files for the various commands are located in the MIMIX product library.
Chapter 18
Comparing attributes
This chapter describes the commands that compare attributes: Compare File
Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS
Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA). These commands
are designed to audit the attributes, or characteristics, of the objects within your
environment and report on the status of replicated objects. These commands are
collectively referred to as the compare attributes commands.
You may already be using the compare attributes commands when they are called by
audit functions within MIMIX AutoGuard. When used in combination with the
automatic recovery features in MIMIX AutoGuard, the compare attributes commands
provide robust functionality to help you determine whether your system is in a state to
ensure a successful rollover for planned events or failover for unplanned events.
The topics in this chapter include:
• “About the Compare Attributes commands” on page 420 describes the unique
features of the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA,
and CMPDLOA).
• “Comparing file and member attributes” on page 425 includes the procedure to
compare the attributes of files and members.
• “Comparing object attributes” on page 428 includes the procedure to compare
object attributes.
• “Comparing IFS object attributes” on page 431 includes the procedure to compare
IFS object attributes.
• “Comparing DLO attributes” on page 434 includes the procedure to compare DLO
attributes.
provides you with assurance that files are most likely synchronized.
• The CMPOBJA command supports many attributes important to other library-
based objects, including extended attributes. Extended attributes are attributes
unique to given objects, such as auto-start job entries for subsystems.
• The CMPIFSA and CMPDLOA commands provide enhanced audit capability for
IFS objects and DLOs, respectively.
Unique parameters
The following parameters for object selection are unique to the compare attributes
commands and allow you to specify an additional level of detail when comparing
objects or files.
Unique File and Object elements: The following are unique elements on the File
parameter (CMPFILA command) and Objects parameter (CMPOBJA command):
• Member: On the CMPFILA command, the value specified on the Member
element is only used when *MBR is also specified on the Comparison level
parameter.
• Object attribute: The Object attribute element enables you to select particular
characteristics of an object or file, and provides a level of filtering. For details, see
“CMPFILA supported object attributes for *FILE objects” on page 423 and
“CMPOBJA supported object attributes for *FILE objects” on page 423.
System 2: The System 2 parameter identifies the remote system name, and
represents the system to which objects on the local system are compared.
This parameter is ignored when a data group is specified, since the system 2
information is derived from the data group. A value is required if no data group is
specified.
Comparison level (CMPFILA only): The Comparison level parameter indicates
whether attributes are compared at the file level or at the member level.
System 1 ASP group and System 2 ASP group (CMPFILA and CMPOBJA only):
The System 1 ASP group and System 2 ASP group parameters identify the name of
the auxiliary storage pool (ASP) group where objects configured for replication may
reside. The ASP group name is the name of the primary ASP device within the ASP
group. This parameter is ignored when a data group is specified.
report, the auto-start job entry attribute is ignored for object types that are not of type
*SBSD.
If a data group is specified on a compare request, configuration data is used when
comparing objects that are identified for replication through the system journal. If an
object’s configured object auditing value (OBJAUD) is *NONE, its attribute changes
are not replicated. When differences are detected on attributes of such an object, they
are reported as *EC (equal configuration) instead of being reported as *NE (not
equal).
For *FILE objects configured for replication through the system journal and configured
to omit T-ZC journal entries, also see “Omit content (OMTDTA) and comparison
commands” on page 389.
*ALL All physical and logical file types are selected for processing
LF Logical file
Comparing file and member attributes
You can compare file attributes to ensure that files and members needed for
replication exist on both systems or any time you need to verify that files are
synchronized between systems. You can optionally specify that results of the
comparison are placed in an outfile.
Note: If you have automation programs monitoring escape messages for differences
in file attributes, be aware that differences due to active replication (Step 16)
are signaled via a new difference indicator (*UA) and escape message. See
the auditing and reporting topics in this book.
To compare the attributes of files and members, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 1
(Compare file attributes) and press Enter.
3. The Compare File Attributes (CMPFILA) command appears. At the Data group
definition prompts, do one of the following:
• To compare attributes for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 6.
• To compare files by name only, specify *NONE and continue with the next step.
• To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors
that either identify files to compare or that act as filters to the files defined to the
data group indicated in Step 3. For more information, see “Object selection for
Compare and Synchronize commands” on page 399.
You can specify as many as 300 object selectors by using the + for more prompt.
For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you
want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names
on system 2 are equal to system 1, accept the defaults. Otherwise, specify the
name of the file and library to which files on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data
group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to
a data group. If necessary, specify the name of the remote system to which files
on the local system are compared.
6. At the Comparison level prompt, accept the default to compare files at a file level
only. Otherwise, specify *MBR to compare files at a member level.
Note: If *FILE is specified, the Member prompt is ignored (see Step 4b).
7. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined
set of attributes based on whether the comparison is at a file or member level or
press F4 to see a valid list of attributes.
8. At the Attributes to omit prompt, accept *NONE to compare all attributes specified
in Step 7, or enter the attributes to exclude from the comparison. Press F4 to see
a valid list of attributes.
9. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
10. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
11. At the Report type prompt, specify the level of detail for the output report.
12. At the Output prompt, do one of the following:
• To generate print output, accept *PRINT and press Enter.
• To generate both print output and an outfile, specify *BOTH and press Enter.
Skip to Step 14.
• To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 14.
13. The User data prompt appears if you selected *PRINT or *BOTH in Step 12.
Accept the default to use the command name to identify the spooled output or
specify a unique name. Skip to Step 18.
14. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
15. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
16. At the Maximum replication lag prompt, specify the maximum amount of time
between when a file in the data group changes and when replication of the
change is expected to be complete, or accept *DFT to use the default maximum
time of 300 seconds (5 minutes). You can also specify *NONE, which indicates
that comparisons should occur without consideration for replication in progress.
Note: This parameter is only valid when a data group is specified in Step 3.
17. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
18. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
19. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
20. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
21. To start the comparison, press Enter.
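The grace-period behavior behind the Maximum replication lag value (Step 16) and the *UA difference indicator can be sketched roughly as follows. This is an illustration only: the inputs are hypothetical, and the *EQ and *NE labels are illustrative stand-ins for equal and not-equal results, not a transcription of MIMIX internals.

```python
import time

def classify_difference(changed_at: float, attrs_equal: bool,
                        max_lag: float = 300.0) -> str:
    """Classify a compared object, giving active replication a grace period.

    changed_at  -- epoch time of the last source-side change (hypothetical input)
    attrs_equal -- whether the compared attributes matched on both systems
    max_lag     -- Maximum replication lag; *DFT is 300 seconds (5 minutes)
    """
    if attrs_equal:
        return "*EQ"    # no difference detected (illustrative label)
    if time.time() - changed_at <= max_lag:
        # Change is recent enough that replication may still be in flight,
        # so the difference is reported as unconfirmed (*UA).
        return "*UA"
    return "*NE"        # a real difference (illustrative label)
```

Specifying *NONE for the lag would correspond to `max_lag = 0`, so any difference is reported immediately without regard for replication in progress.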
Comparing object attributes
group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing objects not defined
to a data group. If necessary, specify the name of the remote system to which
objects on the local system are compared.
6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined
set of attributes or press F4 to see a valid list of attributes.
7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified
in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see
a valid list of attributes.
8. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
9. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
10. At the Report type prompt, specify the level of detail for the output report.
11. At the Output prompt, do one of the following:
• To generate print output, accept *PRINT and press Enter.
• To generate both print output and an outfile, specify *BOTH and press Enter.
Skip to Step 13.
• To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 13.
12. The User data prompt appears if you selected *PRINT or *BOTH in Step 11.
Accept the default to use the command name to identify the spooled output or
specify a unique name. Skip to Step 17.
13. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
14. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
15. At the Maximum replication lag prompt, specify the maximum amount of time
between when an object in the data group changes and when replication of the
change is expected to be complete, or accept *DFT to use the default maximum
time of 300 seconds (5 minutes). You can also specify *NONE, which indicates
that comparisons should occur without consideration for replication in progress.
Note: This parameter is only valid when a data group is specified in Step 3.
16. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
17. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
18. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
19. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
20. To start the comparison, press Enter.
Comparing IFS object attributes
You can compare IFS object attributes to ensure that IFS objects needed for
replication exist on both systems or any time you need to verify that IFS objects are
synchronized between systems. You can optionally specify that results of the
comparison are placed in an outfile.
Note: If you have automation programs monitoring for differences in IFS object
attributes, be aware that differences due to active replication (Step 13) are
signaled via a new difference indicator (*UA) and escape message. See the
auditing and reporting topics in this book.
To compare the attributes of IFS objects, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 3
(Compare IFS attributes) and press Enter.
3. The Compare IFS Attributes (CMPIFSA) command appears. At the Data group
definition prompts, do one of the following:
• To compare attributes for all IFS objects defined by the data group IFS object
entries for a particular data group definition, specify the data group name and
skip to Step 6.
• To compare IFS objects by object path name only, specify *NONE and continue
with the next step.
• To compare a subset of IFS objects defined to a data group, specify the data
group name and continue with the next step.
4. At the IFS objects prompts, you can specify elements for one or more object
selectors that either identify IFS objects to compare or that act as filters to the IFS
objects defined to the data group indicated in Step 3. For more information, see
“Object selection for Compare and Synchronize commands” on page 399.
You can specify as many as 300 object selectors by using the + for more prompt.
For each selector, do the following:
a. At the Object path name prompt, accept *ALL or specify the name or the
generic value you want.
b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the
scope of IFS objects to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the IFS object path name.
Note: The *ALL default is not valid if a data group is specified on the Data
group definition prompts.
d. At the Object type prompt, accept *ALL or specify a specific IFS object type to
compare.
e. At the Include or omit prompt, specify the value you want.
f. At the System 2 object path name and System 2 name pattern prompts, if the
IFS object path name and name pattern on system 2 are equal to system 1,
accept the defaults. Otherwise, specify the name of the path name and pattern
to which IFS objects on the local system are compared.
Note: The System 2 object path name and System 2 name pattern values are
ignored if a data group is specified on the Data group definition
prompts.
g. Press Enter.
5. The System 2 parameter prompt appears if you are comparing IFS objects not
defined to a data group. If necessary, specify the name of the remote system to
which IFS objects on the local system are compared.
6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined
set of attributes or press F4 to see a valid list of attributes.
7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified
in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see
a valid list of attributes.
8. At the Report type prompt, specify the level of detail for the output report.
9. At the Output prompt, do one of the following:
• To generate print output, accept *PRINT and press Enter.
• To generate both print output and an outfile, specify *BOTH and press Enter.
Skip to Step 11.
• To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 11.
10. The User data prompt appears if you selected *PRINT or *BOTH in Step 9.
Accept the default to use the command name to identify the spooled output or
specify a unique name. Skip to Step 15.
11. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
12. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
13. At the Maximum replication lag prompt, specify the maximum amount of time
between when an IFS object in the data group changes and when replication of
the change is expected to be complete, or accept *DFT to use the default
maximum time of 300 seconds (5 minutes). You can also specify *NONE, which
indicates that comparisons should occur without consideration for replication in
progress.
Note: This parameter is only valid when a data group is specified in Step 3.
14. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
15. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
16. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
17. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
18. To start the comparison, press Enter.
Comparing DLO attributes
f. At the Include or omit prompt, specify the value you want.
g. At the System 2 DLO path name and System 2 DLO name pattern prompts, if
the DLO path name and name pattern on system 2 are equal to system 1,
accept the defaults. Otherwise, specify the name of the path name and pattern
to which DLOs on the local system are compared.
Note: The System 2 DLO path name and System 2 DLO name pattern values
are ignored if a data group is specified on the Data group definition
prompts.
h. Press Enter.
5. The System 2 parameter prompt appears if you are comparing DLOs not defined
to a data group. If necessary, specify the name of the remote system to which
DLOs on the local system are compared.
6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined
set of attributes or press F4 to see a valid list of attributes.
7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified
in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see
a valid list of attributes.
8. At the Report type prompt, specify the level of detail for the output report.
9. At the Output prompt, do one of the following:
• To generate print output, accept *PRINT and press Enter.
• To generate both print output and an outfile, specify *BOTH and press Enter.
Skip to Step 11.
• To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 11.
10. The User data prompt appears if you selected *PRINT or *BOTH in Step 9.
Accept the default to use the command name to identify the spooled output or
specify a unique name. Skip to Step 15.
11. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
12. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
13. At the Maximum replication lag prompt, specify the maximum amount of time
between when a DLO in the data group changes and when replication of the
change is expected to be complete, or accept *DFT to use the default maximum
time of 300 seconds (5 minutes). You can also specify *NONE, which indicates
that comparisons should occur without consideration for replication in progress.
Note: This parameter is only valid when a data group is specified in Step 3.
14. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
15. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
16. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
17. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
18. To start the comparison, press Enter.
Chapter 19 Comparing file record counts and file member data
This chapter describes the features and capabilities of the Compare Record Counts
(CMPRCDCNT) command and the Compare File Data (CMPFILDTA) command.
The topics in this chapter include:
• “Comparing file record counts” on page 437 describes the CMPRCDCNT
command and provides a procedure for performing the comparison.
• “Significant features for comparing file member data” on page 440 identifies
enhanced capabilities available for use when comparing file member data.
• “Considerations for using the CMPFILDTA command” on page 441 describes
recommendations and restrictions of the command. This topic also describes
considerations for security, use with firewalls, comparing records that are not
allocated, as well as comparing records with unique keys, triggers, and
constraints.
• “Specifying CMPFILDTA parameter values” on page 445 provides additional
information about the parameters for selecting file members to compare and using
the unique parameters of this command.
• “Advanced subset options for CMPFILDTA” on page 451 describes how to use
the capability provided by the Advanced subset options (ADVSUBSET)
parameter.
• “Ending CMPFILDTA requests” on page 454 describes how to end a CMPFILDTA
request that is in progress and describes the results of ending the job.
• “Comparing file member data - basic procedure (non-active)” on page 455
describes how to compare file data in a data group that is not active.
• “Comparing and repairing file member data - basic procedure” on page 458
describes how to compare and repair file data in a data group that is not active.
• “Comparing and repairing file member data - members on hold (*HLDERR)” on
page 461 describes how to compare and repair file members that are held due to
error using active processing.
• “Comparing file member data using active processing technology” on page 464
describes how to use active processing to compare file member data.
• “Comparing file member data using subsetting options” on page 467 describes
how to use the subset feature of the CMPFILDTA command to compare a portion
of member data at one time.
Comparing file record counts
The Compare Record Counts (CMPRCDCNT) command compares the number of
current records (*CURRDS) and the number of
deleted records (*NBRDLTRCDS) for members of physical files that are defined for
replication by an active data group. In resource-constrained environments, this
capability provides a less-intensive means to gauge whether files are likely to be
synchronized.
Note: Equal record counts suggest but do not guarantee that members are
synchronized. To check for file data differences, use the Compare File Data
(CMPFILDTA) command. To check for attribute differences, use the Compare
File Attributes (CMPFILA) command.
Members to be processed must be defined to a data group that permits replication
from a user journal. Journaling is required on the source system. User journal
replication processes must be active when this command is used.
Members on both systems can be actively modified by applications and by MIMIX
apply processes while this command is running.
For information about the results of a comparison, see “What differences were
detected by #MBRRCDCNT” on page 583.
The #MBRRCDCNT audit calls the CMPRCDCNT command during its compare phase.
Unlike other audits, the #MBRRCDCNT audit does not have an associated recovery
phase. Differences detected by this audit appear as not recovered in the Audit
Summary user interfaces. Any repairs must be undertaken manually in one of the
following ways:
• In MIMIX Availability Manager, repair actions are available for specific errors when
viewing the output file for the audit.
• Run the #FILDTA audit for the data group to detect and correct problems.
• Run the Synchronize DG File Entry (SYNCDGFE) command to correct problems.
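Conceptually, the record-count comparison reduces to checking two inexpensive counters per member. A minimal sketch, assuming the counts have already been retrieved into plain dictionaries; this is not the actual CMPRCDCNT implementation:

```python
def compare_record_counts(source: dict, target: dict) -> list:
    """Compare current and deleted record counts per member.

    Each dict maps a member name to a (current_records, deleted_records)
    tuple. Equal counts only *suggest* the members are synchronized;
    byte-level verification requires CMPFILDTA, and attribute
    verification requires CMPFILA.
    """
    differences = []
    for member, src_counts in source.items():
        tgt_counts = target.get(member)   # None if member missing on target
        if tgt_counts != src_counts:
            differences.append((member, src_counts, tgt_counts))
    return differences
```

Because only counters cross the communications link, this kind of check stays cheap even in resource-constrained environments, which is the point of the CMPRCDCNT approach.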
Comparing file record counts and file member data
Significant features for comparing file member data
Repairing data
You can optionally choose to have the CMPFILDTA command repair differences it
detects in member data between systems.
When files are not synchronized, the CMPFILDTA command provides the ability to
resynchronize the file at the record level by sending only the data for the incorrect
member to the target system. (In contrast, the Synchronize DG File Entry
(SYNCDGFE) command would resynchronize the file by transferring all data for the
file from the source system to the target system.)
Additional features
The CMPFILDTA command incorporates many other features to increase
performance and efficiency.
Subsetting and advanced subsetting options provide a significant degree of flexibility
for performing periodic checks of a portion of the data within a file.
Parallel processing uses multi-threaded jobs to break up file processing into smaller
groups for increased throughput. Rather than having a single-threaded job on each
system, multiple “thread groups” break up the file into smaller units of work. This
technology can benefit environments with multiple processors as well as systems with
a single processor.
Considerations for using the CMPFILDTA command
Keyed replication - Although you can run the CMPFILDTA command on keyed files,
the command only supports files configured for *POSITIONAL replication. The
CMPFILDTA command cannot compare files configured for *KEYED replication.
SNA environments - The CMPFILDTA command requires a TCP/IP transfer
definition; you cannot use SNA. Your environment can still be configured for SNA,
but you must then override the CMPFILDTA command to reference a TCP/IP
transfer definition. For more information, see “System-level
communications” on page 159.
Apply threshold and apply backlog - Do not compare data using active processing
technology if the apply process is 180 seconds or more behind, or has exceeded a
threshold limit.
Security considerations
You should take extra precautions when using CMPFILDTA’s repair function, as it is
capable of accessing and modifying data on your system.
To compare file data, you must have read access on both systems. When using the
repair function, write access on the system to be repaired may also be necessary
when active technology is not used.
CMPFILDTA builds upon the RUNCMD support in MIMIX. CMPFILDTA starts a
remote process using RUNCMD, which requires two conditions to be true. First, the
user profile of the job that is invoking CMPFILDTA must exist on the remote system
and have the same password on the remote system as it does on the local system.
Second, the user profile must have appropriate read or update access to the
members to be compared or repaired. If active processing and repair is requested,
only read access is needed. In this case, the repair processing would be done by the
database apply process.
If one or more members differ in the manner described above, a distinct escape
message is issued. If you use CMPFILDTA in a CL program, you may wish to monitor
these escape messages specifically.
Update, insert, and delete   *NEW   Any value other than *NONE   *NO    Not supported
Update, insert, and delete   *NEW   Any value other than *NONE   *YES   Supported
Job priority
When run, the remote CMPFILDTA job uses the run priority of the local CMPFILDTA
job. However, the run priority of either CMPFILDTA job is superseded if a
CMPFILDTA class object (*CLS) exists in the installation library of the system on
which the job is running.
Note: Use the Change Job (CHGJOB) command on the local system to modify the
run priority of the local job. CMPFILDTA uses the priority of the local job to set
the priority of the remote job, so that both jobs have the same run priority. To
set the remote job to run at a different priority than the local job, use the
Create Class (CRTCLS) command to create a *CLS object for the job you
want to change.
Specifying CMPFILDTA parameter values
When members in *HLDERR status are processed, the CMPFILDTA command works
cooperatively with the database apply (DBAPY) process to compare and repair
members held due to error—and when possible, restore them to an active state.
Valid values for the File entry status parameter are *ALL, *ACTIVE, and *HLDERR. A
data group must also be specified on the command or the parameter is ignored. The
default value, *ALL, indicates that all supported entry statuses (*ACTIVE and
*HLDERR) are included in compare and repair processing. The value *ACTIVE
processes only those members that are active.¹ When *HLDERR is specified, only
member-level entries being held due to error are selected for processing. To repair
members held due to error using *ALL or *HLDERR, you must also specify that the
repair be performed on the target system and request that active processing be used.
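The File entry status filtering described above can be sketched as a simple selection rule. The data shape (a name-to-status mapping) is assumed for illustration, and the repair-side conditions are omitted:

```python
def select_members(members: dict, file_entry_status: str = "*ALL") -> list:
    """Select member-level entries by File entry status (STATUS).

    members maps a member name to its status, "*ACTIVE" or "*HLDERR".
    *ALL (the default) includes both supported statuses; *ACTIVE and
    *HLDERR each select only the matching member-level entries.
    """
    wanted = {
        "*ALL": {"*ACTIVE", "*HLDERR"},
        "*ACTIVE": {"*ACTIVE"},
        "*HLDERR": {"*HLDERR"},
    }[file_entry_status]
    return sorted(name for name, status in members.items() if status in wanted)
```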
System 1 ASP group and System 2 ASP group: The System 1 ASP group and
System 2 ASP group parameters identify the name of the auxiliary storage pool (ASP)
group where objects configured for replication may reside. The ASP group name is
the name of the primary ASP device within the ASP group. This parameter is ignored
when a data group is specified. You must be running on OS V5R2 or greater to use
these parameters.
Subsetting option: The Subsetting option parameter provides a robust means by
which to compare a subset of the data within members. In some instances, the value
you select will determine which additional elements are used when comparing data.
Several options are available on this parameter: *ALL, *ADVANCED, *ENDDTA, or
*RANGE. If *ALL is specified, all data within all selected files is compared, and no
additional subsetting is performed. The other options compare only a subset of the
data.
The following are common scenarios in which comparing a subset of your data is
preferable:
• If you only need to check a specific range of records, use *RANGE.
• When a member, such as a history file, is primarily modified with insert operations,
only recently inserted data needs to be compared. In this situation, use *ENDDTA.
• If time does not permit a full comparison, you can compare a random sample
using *ADVANCED.
• If you do not have time to perform a full comparison all at once but you want all
data to be compared over a number of days, use *ADVANCED.
*RANGE indicates that the Subset range parameter will be used to specify the subset
of records to be compared. For more information, see the “Subset range” section.
If you select *ENDDTA, the Records at end of file parameter specifies how many
trailing records are compared. This value allows you to compare a selected number of
records at the end of all selected members. For more information, see the section
titled “Records at end of file.”
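For *ENDDTA, the compared region is simply the trailing portion of each member. A one-function sketch, assuming 1-based relative record numbers:

```python
def enddta_range(total_records: int, records_at_end: int) -> range:
    """Relative record numbers compared under *ENDDTA: the last N records.

    If the member holds fewer records than requested, the whole member
    is compared.
    """
    start = max(1, total_records - records_at_end + 1)
    return range(start, total_records + 1)
```

This matches the history-file scenario above: only recently inserted trailing records are examined, so the comparison cost stays proportional to the requested tail, not the member size.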
Advanced subsetting can be used to audit your entire database over a number of
days or to request that a random subset of records be compared. To specify
1. The File entry status parameter was introduced in V4R4 SPC05SP2. If you want to preserve
previous behavior, specify STATUS(*ACTIVE).
Transfer definition: The default for the Transfer definition parameter is *DFT. If a
data group was specified, the default uses the transfer definition associated with the
data group. If no data group was specified, the transfer definition associated with
system 2 is used.
The CMPFILDTA command requires that you have a TCP/IP transfer definition for
communication with the remote system. If your data group is configured for SNA,
override the SNA configuration by specifying the name of the transfer definition on the
command.
Number of thread groups: The Number of thread groups parameter indicates how
many thread groups should be used to perform the comparison. You can specify from
1 to 100 thread groups.
When using this parameter, it is important to balance the time required for processing
against the available resources. If you increase the number of thread groups in order
to reduce processing time, for example, you also increase processor and memory
use. The default, *CALC, will determine the number of thread groups automatically.
To maximize processing efficiency, the value *CALC does not calculate more than 25
thread groups.
The actual number of threads used in the comparison is based on the result of the
formula 2x + 1, where x is the value specified or the value calculated internally as the
result of specifying *CALC. When *CALC is specified, the CMPFILDTA command
displays a message showing the value calculated as the number of thread groups.
Note: Thread groups are created for primary compare processing only. During
setup, multiple threads may be utilized to improve performance, depending on
the number of members selected for processing. The number of threads used
during setup will not exceed the total number of threads used for primary
compare processing. During active processing, only one thread will be used.
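The relationship between the Number of thread groups value and the resulting thread count follows directly from the 2x + 1 formula. In this sketch, *CALC is modeled as a simple cap at 25 thread groups; the actual internal calculation is not documented here:

```python
def total_threads(thread_groups) -> int:
    """Threads used for primary compare processing: 2x + 1.

    thread_groups -- 1..100, or "*CALC". *CALC never calculates more
    than 25 thread groups; it is modeled here as that upper bound.
    """
    if thread_groups == "*CALC":
        x = 25                      # upper bound on the calculated value
    else:
        if not 1 <= thread_groups <= 100:
            raise ValueError("Number of thread groups must be 1-100")
        x = thread_groups
    return 2 * x + 1
```

So a single thread group still yields three threads, and *CALC tops out at 51 threads under this model; raising the value trades extra processor and memory use for shorter processing time.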
Wait time (seconds): The Wait time (seconds) value is only valid when active
processing is in effect and specifies the amount of time to wait for active processing to
complete. You can specify from 0 to 3600 seconds, or the default *NOMAX.
If active processing is enabled and a wait time is specified, CMPFILDTA processing
waits the specified time for all pending compare operations processed through the
MIMIX replication path to complete. In most cases, the *NOMAX default is highly
recommended.
DB apply threshold: The DB apply threshold parameter is only valid during active
processing and requires that a data group be specified. The parameter specifies
what action CMPFILDTA should take if the database apply session backlog exceeds
the threshold warning value configured for the database apply process. The default
value *END stops the requested compare and repair action when the database apply
threshold is reached; any repair actions that have not been completed are lost. The
value *NOMAX allows the compare and repair action to continue even when the
database apply threshold has been reached. Continuing processing when the apply
process has a large backlog may adversely affect performance of the CMPFILDTA
job and its ability to compare a file with an excessive number of outstanding entries.
Therefore, *NOMAX should only be used in exceptional circumstances.
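The effect of the two values can be summarized as a simple decision. The following is an illustrative model only; the function name and return values are ours, not MIMIX internals.

```python
def apply_threshold_action(backlog: int, threshold: int, option: str) -> str:
    """What CMPFILDTA does when it checks the database apply backlog.

    option is "*END" (the default) or "*NOMAX".
    """
    if backlog <= threshold:
        return "continue"
    if option == "*END":
        # Compare and repair stop; uncompleted repair actions are lost.
        return "end"
    if option == "*NOMAX":
        # Processing continues despite the backlog (use with caution).
        return "continue"
    raise ValueError(f"unknown option: {option}")
```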
Advanced subset options for CMPFILDTA
If you specify *NONE for the Interleave value, records in each member are divided among the subsets on a percentage basis.
Note that when the total number of records in a member changes, the mapping also
changes. Records that were once assigned to bin 2 may in the future be assigned to
bin 1. If you wish to compare all records over the course of a few days, the changing
mapping may cause you to miss records. A specific Interleave value is preferable in
this case.
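The percentage-based assignment, and why it shifts as a member grows, can be sketched as follows. This is an illustrative model; MIMIX's internal calculation may differ.

```python
def bin_by_percentage(rrn: int, total_records: int, subsets: int) -> int:
    """Assign relative record number `rrn` (1-based) to a bin on a
    percentage basis: each bin covers an equal slice of the member,
    so the mapping depends on the current total number of records."""
    return (rrn - 1) * subsets // total_records + 1


# With 4 subsets, record 30 of a 100-record member falls in bin 2,
# but after the member grows to 200 records it falls in bin 1.
```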
When the Interleave value is specified in bytes, it determines how many contiguous records are assigned to each bin before assignment moves on to the next bin. Once the last bin is filled, assignment restarts at the first bin. For example, assume you have specified an interleave value of 20 bytes for the example provided in Table 63; with the record length used there, an interleave of 20 bytes corresponds to an interleave of 2 records.
If the Interleave and Number of subsets are held constant, the mapping of relative record numbers to bins is maintained even as the member grows. Because every bin is eventually selected, comparisons made over several days will compare every record that existed on the first day.
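In contrast to the percentage-based mapping, an interleaved assignment can be sketched like this. Again, this is an illustrative model, not MIMIX internals.

```python
def bin_by_interleave(rrn: int, interleave_records: int, subsets: int) -> int:
    """Assign relative record number `rrn` (1-based) to a bin round-robin,
    `interleave_records` contiguous records at a time.

    The mapping does not depend on the member's total size, so a record
    stays in the same bin as the member grows.
    """
    return ((rrn - 1) // interleave_records) % subsets + 1


# With an interleave of 2 records and 4 subsets: records 1-2 go to bin 1,
# 3-4 to bin 2, 5-6 to bin 3, 7-8 to bin 4, then 9-10 to bin 1 again.
```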
In most circumstances, *CALC is recommended for the interleave specification. When
you select *CALC, the system determines how many contiguous bytes are assigned
to each bin before subsequent bytes are placed in the next bin. This calculated value
will not change due to member size changes.
Specifying *NONE or a very large interleave factor maximizes processing efficiency,
since data in each bin is processed sequentially. Specifying a very small interleave
factor can greatly reduce efficiency, as little sequential processing can be done before
the file must be repositioned. If you wish to compare a random sample, a smaller
interleave factor provides a more random, or scattered, sample to compare.
The next parameters, First subset and Last subset, let you specify which bins to process.
First and last subset: The First subset and Last subset values work in combination
to determine a range of bins to compare. For the First subset, the possible values are
*FIRST and subset-number. If you select *FIRST, the range to compare will start with
bin 1. Last subset has similar values, *LAST and subset-number. When you specify
*LAST, the highest numbered bin is the last one processed.
To compare a random sample of your data, specify a range of subsets that represent
the size of the sample. For example, suppose you wish to compare seven percent of
your data. If the number of subsets is 100, the first subset is 1, and the last subset is
7, seven percent of the data is compared. A first subset value of 21 and a last subset
value of 27 would also compare seven percent of your data, but it would compare a
different seven percent than the first example.
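The fraction of data selected by a subset range follows directly from the range width. The helper below is an illustrative sketch; the name is ours.

```python
def sample_fraction(first: int, last: int, subsets: int) -> float:
    """Fraction of the data compared when processing bins first..last
    out of `subsets` approximately equal-sized subsets."""
    return (last - first + 1) / subsets


# Both ranges below cover seven percent of the data, but they cover
# a different seven percent of the records.
low_range = sample_fraction(1, 7, 100)
high_range = sample_fraction(21, 27, 100)
```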
To compare all your data over the course of several days, choose a number of subsets and an interleave factor that let you size each day’s workload as your needs require. Keep the Number of subsets and interleave factor constant, and vary the First and Last subset values each day so that, over the course of a week, every subset is compared.
Note: You can automate these tasks using MIMIX Monitor. Refer to the MIMIX
Monitor documentation for more information.
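A rotating schedule of this kind might be generated as follows. The subset count and day count are illustrative values only; choose numbers that fit your own processing windows.

```python
def weekly_subset_ranges(subsets: int = 70, days: int = 7):
    """Split `subsets` bins into `days` consecutive (first, last) ranges,
    usable as First subset / Last subset values on successive days."""
    per_day = subsets // days  # assumes subsets divides evenly across days
    return [(d * per_day + 1, (d + 1) * per_day) for d in range(days)]


# weekly_subset_ranges() yields (1, 10), (11, 20), ..., (61, 70):
# seven daily ranges that together cover all 70 subsets exactly once.
```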
Comparing file member data - basic procedure (non-active)
You can use the CMPFILDTA command to ensure that the data required for replication exists on both systems, or any time you need to verify that files are synchronized between systems. You can optionally specify that the results of the comparison are placed in an outfile.
Before you begin, see the recommendations, restrictions, and security considerations
described in “Considerations for using the CMPFILDTA command” on page 441. You
should also read “Specifying CMPFILDTA parameter values” on page 445 for
additional information about parameters and values that you can specify.
To perform a basic data comparison, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7
(Compare file data) and press Enter.
3. The Compare File Data (CMPFILDTA) command appears. At the Data group
definition prompts, do one of the following:
• To compare data for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 6.
• To compare data by file name only, specify *NONE and continue with the next
step.
• To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors
that either identify files to compare or that act as filters to the files defined to the
data group indicated in Step 3. For more information, see “Object selection for
Compare and Synchronize commands” on page 399.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you
want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names
on system 2 are equal to system 1, accept the defaults. Otherwise, specify the
name of the file and library to which files on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data
group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to
a data group. If necessary, specify the name of the remote system to which files
on the local system are compared.
6. At the Repair on system prompt, accept *NONE to indicate that no repair action is
done.
7. At the Process while active prompt, specify *NO to indicate that active processing
technology should not be used in the comparison.
8. At the File entry status prompt, specify *ACTIVE to process only those file
members that are active.
9. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
10. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
11. At the Subsetting option prompt, specify *ALL to select all data and to indicate
that no subsetting is performed.
12. At the Report type prompt, do one of the following:
• If you want all compared objects to be included in the report, accept the
default.
• If you only want objects with detected differences to be included in the report,
specify *DIF.
• If you want to include the member details and relative record number (RRN) of
the first 1,000 objects that have differences, specify *RRN.
Notes:
• The *RRN value can only be used when *NONE is specified for the Repair
on system prompt and *OUTFILE is specified for the Output prompt.
• The *RRN value outputs to a unique outfile (MXCMPFILR). Specifying *RRN
can help resolve situations where a discrepancy is known to exist but you are
unsure which system contains the correct data. This value provides the
information that enables you to display the specific records on the two
systems and determine the system on which the file should be repaired.
13. At the Output prompt, do one of the following:
• To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 18.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
14. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
15. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
16. At the System to receive output prompt, specify the system on which the output
should be created.
Note: If *YES is specified on the Process while active prompt and *OUTFILE
was specified on the Outfile prompt, you must select *SYS2 for the
System to receive output prompt.
17. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
18. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and continue with the next step.
19. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
20. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
21. To start the comparison, press Enter.
Comparing and repairing file member data - basic procedure
5. The System 2 parameter prompt appears if you are comparing files not defined to
a data group. If necessary, specify the name of the remote system to which files
on the local system are compared.
6. At the Repair on system prompt, specify *SYS1, *SYS2, *LOCAL, *TGT, *SRC, or
the system definition name to indicate the system on which repair action should
be performed.
Note: *TGT and *SRC are only valid if you are comparing files defined to a data
group. *SRC is not valid if active processing is in effect.
7. At the Process while active prompt, specify *NO to indicate that active processing
technology should not be used in the comparison.
8. At the File entry status prompt, specify *ACTIVE to process only those file
members that are active.
9. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
10. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
11. At the Subsetting option prompt, specify *ALL to select all data and to indicate
that no subsetting is performed.
12. At the Report type prompt, do one of the following:
• If you want all compared objects to be included in the report, accept the
default.
• If you only want objects with detected differences to be included in the report,
specify *DIF.
13. At the Output prompt, do one of the following:
• To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 18.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
14. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
15. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
16. At the System to receive output prompt, specify the system on which the output
should be created.
Note: If *YES is specified on the Process while active prompt and *OUTFILE
was specified on the Outfile prompt, you must select *SYS2 for the
System to receive output prompt.
17. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
18. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter.
19. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
20. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
21. To start the comparison, press Enter.
Comparing and repairing file member data - members on hold (*HLDERR)
Members that are being held due to error (*HLDERR) can be repaired with the
Compare File Data (CMPFILDTA) command during active processing. When
members in *HLDERR status are processed, the CMPFILDTA command works
cooperatively with the database apply (DBAPY) process to compare and repair the
members—and when possible, restore them to an active state.
Before you begin, see the recommendations, restrictions, and security considerations
described in “Considerations for using the CMPFILDTA command” on page 441. You
should also read “Specifying CMPFILDTA parameter values” on page 445 for
additional information about parameters and values that you can specify.
The following procedure repairs a member without transmitting the entire member. As
such, this method is generally faster than other methods of repairing members in
*HLDERR status that transmit the entire member or file. However, if significant activity
has occurred on the source system that has not been replicated on the target system,
it may be faster to synchronize the member using the Synchronize Data Group File
Entry (SYNCDGFE) command.
To repair a member with a status of *HLDERR, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7
(Compare file data) and press Enter.
3. The Compare File Data (CMPFILDTA) command appears. At the Data group
definition prompts, you must specify a data group name.
Note: If you want to compare data for all files defined by the data group file
entries for a particular data group definition, skip to Step 5.
4. At the File prompts, you can optionally specify elements for one or more object
selectors that act as filters to the files defined to the data group indicated in
Step 3. For more information, see “Object selection for Compare and Synchronize
commands” on page 399.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you
want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. Press Enter.
Note: The System 2 file and System 2 library values are ignored when a data group is specified on the Data group definition prompts.
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter.
16. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
17. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
18. To compare and repair the file, press Enter.
Comparing file member data using active processing technology
6. At the Process while active prompt, specify *YES or *DFT to indicate that active
processing technology be used in the comparison. Since a data group is specified
on the Data group definition prompts, *DFT produces the same results as *YES.
7. At the File entry status prompt, specify *ACTIVE to process only those file
members that are active.
8. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
9. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
10. At the Subsetting option prompt, specify *ALL to select all data and to indicate
that no subsetting is performed.
11. At the Report type prompt, do one of the following:
• If you want all compared objects to be included in the report, accept the
default.
• If you only want objects with detected differences to be included in the report,
specify *DIF.
12. At the Output prompt, do one of the following:
• To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 17.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
13. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
14. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
15. At the System to receive output prompt, specify the system on which the output
should be created.
Note: If *OUTFILE was specified on the Outfile prompt, it is recommended that
you select *SYS2 for the System to receive output prompt.
16. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used when the command is invoked from outside of
shipped audits. When used as part of shipped audits, the default value is *OMIT
since the results are already placed in an outfile.
17. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and continue with the next step.
18. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
19. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
20. To start the comparison, press Enter.
Comparing file member data using subsetting options
You can use the CMPFILDTA command to audit your entire database over a number
of days.
Before you begin, see the recommendations, restrictions, and security considerations
described in “Considerations for using the CMPFILDTA command” on page 441. You
should also read “Specifying CMPFILDTA parameter values” on page 445 for
additional information about parameters and values that you can specify.
Note: Do not compare data using active processing technology if the apply process
is 180 seconds or more behind, or has exceeded a threshold limit.
To compare data using the subsetting options, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7
(Compare file data) and press Enter.
3. The Compare File Data (CMPFILDTA) command appears. At the Data group
definition prompts, do one of the following:
• To compare data for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 6.
• To compare data by file name only, specify *NONE and continue with the next
step.
• To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors
that either identify files to compare or that act as filters to the files defined to the
data group indicated in Step 3. For more information, see “Object selection for
Compare and Synchronize commands” on page 399.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you
want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names
on system 2 are equal to system 1, accept the defaults. Otherwise, specify the
name of the file and library to which files on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data group is specified on the Data group definition prompts.
a. At the First record prompt, specify the relative record number of the first record
to compare in the range.
b. At the Last record prompt, specify the relative record number of the last record
to compare in the range.
c. Skip to Step 12.
11. At the Advanced subset options prompts, do the following:
a. At the Number of subsets prompt, specify the number of approximately equal-sized subsets to establish. Subsets are numbered beginning with 1.
b. At the Interleave prompt, specify the interleave factor. In most cases, the
default *CALC is highly recommended.
c. At the First subset prompt, specify the first subset in the sequence of subsets
to compare.
d. At the Last subset prompt, specify the last subset in the sequence of subsets
to compare.
12. At the Records at end of file prompt, specify the number of records at the end of
the member to compare. These records are compared regardless of other
subsetting criteria.
Note: If *ENDDTA is specified on the Subsetting option prompt, you must specify
a value other than *NONE.
13. At the Report type prompt, do one of the following:
• If you want all compared objects to be included in the report, accept the
default.
• If you only want objects with detected differences to be included in the report,
specify *DIF.
• If you want to include the member details and relative record number (RRN) of
the first 1,000 objects that have differences, specify *RRN.
Notes:
• The *RRN value can only be used when *NONE is specified for the Repair
on system prompt and *OUTFILE is specified for the Output prompt.
• The *RRN value outputs to a unique outfile (MXCMPFILR). Specifying *RRN
can help resolve situations where a discrepancy is known to exist but you are
unsure which system contains the correct data. This value provides the
information that enables you to display the specific records on the two
systems and determine the system on which the file should be repaired.
14. At the Output prompt, do one of the following:
• To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 19.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
15. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
16. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
17. At the System to receive output prompt, specify the system on which the output
should be created.
Note: If *YES is specified on the Process while active prompt and *OUTFILE
was specified on the Outfile prompt, you must select *SYS2 for the
System to receive output prompt.
18. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
19. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and continue with the next step.
20. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
21. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
22. To start the comparison, press Enter.
Chapter 20: Synchronizing data between systems
This chapter contains information about support provided by MIMIX commands for
synchronizing data between two systems. The data that MIMIX replicates must be synchronized on several occasions.
• During initial configuration of a data group, you need to ensure that the data to be
replicated is synchronized between both systems defined in a data group.
• If you change the configuration of a data group to add new data group entries, the
objects must be synchronized.
• You may also need to synchronize a file or object if an error occurs that causes the two systems to fall out of synchronization.
• The automatic recovery features of MIMIX® AutoGuard™ also use synchronize
commands to recover differences detected during replication and audits. If
automatic recovery policies are disabled, you may need to use synchronize
commands to correct a file or object in error or to correct differences detected by
audits or compare commands.
The Lakeview-provided synchronize commands can be loosely grouped by common
characteristics and the level of function they provide. Topic “Considerations for
synchronizing using MIMIX commands” on page 474 describes subjects that apply to
more than one group of commands, such as the maximum size of an object that can
be synchronized, how large objects are handled, and how user profiles are
addressed.
Initial synchronization: Initial synchronization can be performed manually with a
variety of MIMIX and IBM commands, or by using the Synchronize Data Group
(SYNCDG) command. The SYNCDG command is intended especially for performing
the initial synchronization of one or more data groups and uses the auditing and
automatic recovery support provided by MIMIX AutoGuard. The command can be
long-running. For information about initial synchronization, see these topics:
• “Performing the initial synchronization” on page 483 describes how to establish a
synchronization point and identifies other key information.
• Environments using MIMIX support for IBM WebSphere MQ have additional
requirements for the initial synchronization of replicated queue managers. For
more information, see the MIMIX for IBM WebSphere MQ book.
Synchronize commands: The commands Synchronize Object (SYNCOBJ),
Synchronize IFS Object (SYNCIFS), and Synchronize DLO (SYNCDLO) provide
robust support in MIMIX environments, for synchronizing library-based objects, IFS
objects, and DLOs, as well as their associated object authorities. Each command has
considerable flexibility for selecting objects associated with or independent of a data
group. Additionally, these commands are often called by other functions, such as by
the automatic recovery features of MIMIX AutoGuard and by options to synchronize
objects identified in tracking entries used with advanced journaling. For additional
information, see:
• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on page 478
• “About synchronizing tracking entries” on page 482
Synchronize Data Group Activity Entry: The Synchronize DG Activity Entry
(SYNCDGACTE) command provides the ability to synchronize library-based objects,
IFS objects, and DLOs that are associated with data group activity entries which have
specific status values. The contents of the object and its attributes and authorities are
synchronized. For additional information, see “About synchronizing data group activity
entries (SYNCDGACTE)” on page 479.
Synchronize Data Group File Entry: The Synchronize DG File Entry (SYNCDGFE)
command provides the means to synchronize database files associated with a data
group by data group file entries. Additional options provide the means to address
triggers, referential constraints, logical files, and related files. For more information
about this command, see “About synchronizing file entries (SYNCDGFE command)”
on page 480.
Send Network commands: The Send Network Object (SNDNETOBJ), Send
Network IFS Object (SNDNETIFS), and Send Network DLO (SNDNETDLO)
commands provide fewer usage options and usability benefits than the Synchronize
commands. These commands may require multiple invocations per library, path, or
directory, respectively. These commands do not support synchronizing based on a
data group name.
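The practical difference described above can be sketched with a small, purely hypothetical helper: the Send Network commands need one invocation per library, path, or folder, while a Synchronize command can cover everything through a single data group name. These functions are illustrative only and are not part of MIMIX.

```python
# Hypothetical sketch: SNDNET commands need one request per container,
# while a SYNC command with a data group name needs only one request.

def sndnet_invocations(containers):
    """One SNDNET request is needed for each distinct library/path/folder."""
    return len(set(containers))

def sync_invocations(data_group):
    """A single SYNC request covers all entries the data group defines."""
    return 1 if data_group else 0
```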
Procedures: The procedures in this chapter are for commands that are accessible
from the MIMIX Compare, Verify, and Synchronize menu. Typically, when you need to
synchronize individual items in your configuration, the best approach is to use the
options provided on the displays where they are appropriate to use. The options call
the appropriate command and, in many cases, pre-select some of the fields. The
following procedures are included:
• “Synchronizing database files” on page 489
• “Synchronizing objects” on page 491
• “Synchronizing IFS objects” on page 495
• “Synchronizing DLOs” on page 499
• “Synchronizing data group activity entries” on page 503
• “Synchronizing tracking entries” on page 505
• “Sending library-based objects” on page 506
• “Sending IFS objects” on page 508
• “Sending DLO objects” on page 509
Considerations for synchronizing using MIMIX commands
1. To preserve behavior prior to changes made in V4R4 service pack SPC05SP4, specify
*TFRDFN.
implicitly or explicitly. The following information describes slight variations in
processing.
When synchronizing other object types, this command implicitly synchronizes user
profiles associated with the object if they do not exist on the target system. Although
only the requested object type, such as *PGM, is specified on the command, the
owning user profile, the primary group profile, and any user profiles that have private
authorities to the object are implicitly synchronized along with the object. The status
of each user profile created this way on the target system is set to *DISABLED.
The Synchronize commands (SYNCOBJ, SYNCIFS and SYNCDLO) do not change
the status of activity entries associated with the objects being synchronized. Activity
entries retain the same status after the command completes.
Note: The SYNCIFS command will change the status of an activity entry for an
IFS object configured for advanced journaling.
When advanced journaling is configured, each replicated activity has associated
tracking entries. When you use the SYNCOBJ or SYNCIFS commands to
synchronize an object that has a corresponding tracking entry, the status of the
tracking entry will change to *ACTIVE upon successful completion of the
synchronization request. If the synchronization is not successful, the tracking entry
either retains its original status or is set to *HLD. If the data
group is not active, the status of the tracking entry will be updated once the data
group is restarted.
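The transitions above can be modeled as a small state function. This is a hypothetical sketch: the status values come from the text, but the choice to hold a failed entry (rather than leave its original status) is only one of the two outcomes the text allows.

```python
# Hypothetical model of the tracking-entry status transitions described above.

def tracking_entry_status(original, sync_succeeded, dg_active=True):
    """Return a tracking entry's status after a SYNCOBJ/SYNCIFS request."""
    if not dg_active:
        return original        # updated only once the data group restarts
    if sync_succeeded:
        return "*ACTIVE"       # successful synchronization activates the entry
    return "*HLD"              # failed request leaves the entry held
```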
About MIMIX commands for synchronizing objects, IFS objects, and DLOs
Additional parameters: On each command, the following parameters provide
additional control of the synchronization process.
• The Save active parameter provides the ability to save the object in an active
environment using IBM's save while active support. Values supported are the
same as those used in related IBM commands.
• The Save active wait time parameter specifies the amount of time to wait for a
commit boundary or for a lock on an object. If a lock is not obtained in the
specified time, the object is not saved. If a commit boundary is not reached in the
specified time, the save operation ends and the synchronization attempt fails.
• The Maximum sending size (MB) parameter specifies the maximum size that an
object can be in order to be synchronized. For more information, see “Limiting the
maximum sending size” on page 474.
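The effect of the Maximum sending size (MB) ceiling can be illustrated with a simple partition: objects over the limit are skipped rather than synchronized. The object names and sizes here are sample data, not MIMIX output.

```python
# Illustrative sketch of the Maximum sending size (MB) parameter: objects
# larger than the ceiling are not synchronized.

def split_by_sending_size(object_sizes_mb, max_send_mb):
    """Partition objects into those that will be sent and those skipped."""
    sent = {name: size for name, size in object_sizes_mb.items()
            if size <= max_send_mb}
    skipped = {name: size for name, size in object_sizes_mb.items()
               if size > max_send_mb}
    return sent, skipped
```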
About synchronizing file entries (SYNCDGFE command)
*DATA (default) - Only the physical file data is replicated using MIMIX Copy
Active File processing. File attributes are not replicated using this method.
If the file exists on the target system, MIMIX refreshes its contents. If the
file format is different on the target system, the synchronization will fail. If
the file does not exist on the target system, MIMIX uses save and restore
operations to create the file on the target system and then uses copy
active file processing to fill it with data from the file on the source system.
*ATR (1) - Only the physical file attributes are replicated and synchronized.
*AUT (1) - Only the authorities for the physical file are replicated and synchronized.
*SAVRST - The content and attributes are replicated using the IBM i save and
restore commands. This method allows save-while-active operations.
This method also has the capability to save associated logical files.
(1) Available when service pack SP070.00.0 or higher is installed.
Files with triggers: The SYNCDGFE command provides the ability to optionally
disable triggers during synchronization processing and enable them again when
processing is complete. The Disable triggers on file (DSBTRG) parameter specifies
whether the database apply process (used for synchronization) disables triggers
when processing a file.
The default value *DGFE uses the data group file entry to determine whether triggers
should be disabled. The value *YES disables triggers on the target system during
synchronization.
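The resolution of the DSBTRG value can be sketched as follows. This is a hypothetical model: *DGFE defers to the setting stored on the data group file entry, while *YES and *NO override it directly.

```python
# Hypothetical resolution of the DSBTRG (Disable triggers on file) parameter.

def resolve_disable_triggers(dsbtrg, file_entry_setting="*NO"):
    """Return True when triggers should be disabled on the target system."""
    if dsbtrg == "*DGFE":
        # Defer to the data group file entry's own setting.
        return file_entry_setting == "*YES"
    return dsbtrg == "*YES"
```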
If configuration options for the data group (or, optionally, for a data group file entry)
allow MIMIX to replicate trigger-generated entries and disable triggers, you must
specify *DATA as the sending mode when synchronizing a file with triggers.
Including logical files: The Include logical files (INCLF) parameter allows you to
include any attached logical files in the synchronization request. This parameter is
only valid when *SAVRST is specified for the Sending mode prompt.
Physical files with referential constraints: Physical files with referential constraints
require a field in another physical file to be valid. When synchronizing physical files
with referential constraints, ensure all files in the referential constraint structure are
synchronized concurrently during a time of minimal activity on the source system.
Doing so will ensure the integrity of synchronization points.
Including related files: You can optionally choose whether the synchronization
request will include files related to the file specified by specifying *YES for the Include
related (RELATED) parameter. Related files are those physical files which have a
relationship with the selected physical file by means of one or more join logical files.
Join logical files are logical files attached to fields in two or more physical files.
The Include related (RELATED) parameter defaults to *NO. In some environments,
specifying *YES could result in a high number of files being synchronized and could
potentially strain available communications and take a significant amount of time to
complete.
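The fan-out behind RELATED(*YES) can be illustrated as a graph traversal: starting from one physical file, collect every physical file reachable through shared join logical files. The join map here is hypothetical sample data, not MIMIX configuration, but it shows why *YES can pull in many more files than expected.

```python
# Illustrative expansion of RELATED(*YES) through join logical files.

def related_physical_files(start_pf, joins):
    """joins maps each join logical file to the physical files it attaches."""
    related, frontier = {start_pf}, [start_pf]
    while frontier:
        pf = frontier.pop()
        for attached in joins.values():
            if pf in attached:
                for other in attached:
                    if other not in related:
                        related.add(other)
                        frontier.append(other)
    return related
```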
A physical file being synchronized cannot be name mapped if it is not in the same
library as the logical file associated with it. Logical files may be mapped by using
object entries.
About synchronizing tracking entries
Performing the initial synchronization
Ensuring that data is synchronized before you begin replication is crucial to
successful replication. How you perform the initial synchronization can be influenced
by the available communications bandwidth, the complexity of describing the data,
the size of the data, and the time available.
Note: If you have configured or migrated a MIMIX configuration to use integrated
support for IBM WebSphere MQ, you must use the procedure ‘Initial
synchronization for replicated queue managers’ in the MIMIX for IBM
WebSphere MQ book. Large IBM WebSphere MQ environments should plan
to perform this during off-peak hours.
Using SYNCDG to perform the initial synchronization
more flexibility in object selection and also provide the ability to synchronize object
authorities. By specifying a data group on any of these commands, you can
synchronize the data defined by its data group entries.
You can also use the Synchronize Data Group File Entry (SYNCDGFE) to
synchronize database files and members. This command provides the ability to
choose between MIMIX copy active file processing and save/restore processing
and provides choices for handling trigger programs during synchronization.
If you have configured or migrated to integrated advanced journaling, follow the
SYNCIFS procedures for IFS objects, SYNCOBJ procedures for data areas and
data queues, and SYNCDGFE procedures for files containing LOB data. You can
also use options to synchronize objects associated with tracking entries from the
Work with DG IFS Trk. Entries display and the Work with DG Obj. Trk. Entries
display.
• SNDNET commands: The Send Network commands (SNDNETIFS,
SNDNETDLO, SNDNETOBJ) support fewer options for selecting and specifying
multiple objects and do not provide a way to specify by data group. These
commands may require multiple invocations per path, folder, or library,
respectively.
This chapter (“Synchronizing data between systems” on page 472) includes
additional information about the MIMIX SYNC and SNDNET commands.
• Apply any IBM PTFs (or their supersedes) associated with IBM i releases as
they pertain to your environment. Log in to Support Central and access the
Technical Documents page for a list of required and recommended IBM PTFs.
• Journaling is started on the source system for everything defined to the data
group.
• All replication processes are active.
• The user ID submitting the SYNCDG has *MGT authority in product level
security if it is enabled for the installation.
• No other audits (comparisons or recoveries) are in progress when the
SYNCDG is requested.
• Collector services has been started.
While the synchronization is in progress, other audits for the data group are prevented
from running. MIMIX Availability Manager displays initialization mode on the Audit
Summary and Compliance interfaces while running this command if the data group
definition (DGDFN) specifies *ALL.
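The prerequisites listed above can be summarized as a pre-flight check. This is a hypothetical sketch: the field names on the status dictionary are illustrative, and MIMIX itself enforces these conditions when the SYNCDG command runs.

```python
# Hypothetical pre-flight check mirroring the documented SYNCDG requirements.

def syncdg_ready(status):
    """Return (ok, problems) for the documented SYNCDG prerequisites."""
    problems = []
    if not status.get("journaling_started"):
        problems.append("journaling is not started for the data group")
    if not status.get("replication_active"):
        problems.append("replication processes are not active")
    if status.get("product_security_enabled") and not status.get("user_has_mgt"):
        problems.append("submitting user lacks *MGT authority")
    if status.get("other_audits_running"):
        problems.append("another audit or recovery is in progress")
    if not status.get("collector_services_started"):
        problems.append("collector services have not been started")
    return (not problems, problems)
```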
Verifying the initial synchronization
This procedure uses MIMIX AutoGuard™ to ensure your environment is ready to start
replication. Shipped policy settings for MIMIX allow audits to automatically attempt
recovery actions for any problems they detect. You should not use this procedure if
you have already synchronized your systems using the Synchronize Data Group
(SYNCDG) command or the automatic synchronization method in MIMIX IntelliStart.
The audits used in this procedure will:
• Verify that journaling is started on the source and target systems for the items you
identified in the deployed replication patterns. Without journaling, replication will
not occur.
• Verify that data is synchronized between systems. Audits will detect potential
problems with synchronization and attempt to automatically recover differences
found.
Do the following:
1. Check whether all necessary journaling is started for each data group. Enter the
following command:
(installation-library-name)/DSPDGSTS DGDFN(data-group-name)
VIEW(*DBFETE)
On the File and Tracking Entry Status display, the File Entries column identifies
how many file entries were configured from your replication patterns and indicates
whether any file entries are not journaled on the source and target systems. If you
are configured for advanced journaling, the Tracking Entries columns provide
similar information.
2. Use MIMIX AutoGuard to audit your environment. To access the audits, enter the
following command:
(installation-library-name)/WRKAUD
3. Each audit listed on the Work with Audits display is a unique combination of data
group and MIMIX rule. When verifying an initial configuration, you need to
perform a subset of the available audits for each data group in a specific order,
shown in Table 67. Do the following:
a. To change the number of active audits at any one time, enter the following
command:
CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY)
MAXACT(*NOMAX)
b. Use F18 (Subset) to subset the audits by the name of the rule you want to run.
c. Type a 9 (Run rule) next to the audit for each data group and press Enter.
Repeat Step 3b and Step 3c for each rule in Table 67 until you have started all the
listed audits for all data groups.
Table 67. Rules for initial validation, listed in the order to be performed.
Rule Name
1. #DGFE
2. #OBJATR
3. #FILATR
4. #IFSATR
5. #FILATRMBR
6. #DLOATR
d. Reset the number of active audit jobs to values consistent with regular
auditing:
CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY)
MAXACT(5)
4. Wait for all audits to complete. Some audits may take time to complete. Then
check the results and resolve any problems. You may need to change subsetting
values again so you can view all rule and data group combinations at once. On
the Work with Audits display, check the Audit Status column for the following
value:
*NOTRCVD - The comparison performed by the rule detected differences. Some
of the differences were not automatically recovered. Action is required. View
notifications for more information and resolve the problem.
Note: See the MIMIX AutoGuard document for more information about viewing
audit results.
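The ordered audit runs in Steps 3 and 4 can be sketched as a small driver: run each rule in the Table 67 order for every data group, and flag *NOTRCVD results for manual follow-up. The `run_audit` callable is a hypothetical stand-in for submitting option 9 (Run rule) on the Work with Audits display.

```python
# Illustrative driver for the audit sequence in Table 67.

AUDIT_ORDER = ["#DGFE", "#OBJATR", "#FILATR", "#IFSATR", "#FILATRMBR", "#DLOATR"]

def run_initial_audits(data_groups, run_audit):
    """Return (data group, rule) pairs whose differences were not recovered."""
    needs_action = []
    for rule in AUDIT_ORDER:              # order matters for initial validation
        for dg in data_groups:
            if run_audit(dg, rule) == "*NOTRCVD":
                needs_action.append((dg, rule))
    return needs_action
```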
Synchronizing database files
The procedures in this topic use the Synchronize DG File Entry (SYNCDGFE)
command to synchronize selected database files associated with a data group,
between two systems. If you use this command when performing the initial
synchronization of a data group, use the procedure from the source system to send
database files to the target system.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 474
• “About synchronizing file entries (SYNCDGFE command)” on page 480.
To synchronize a database file between two systems using the SYNCDGFE
command defaults, do the following or use the alternative process described below:
1. From the Work with DG Definitions display, type 17 (File entries) next to the data
group to which the file you want to synchronize is defined and press Enter.
2. The Work with DG File Entries display appears. Type 16 (Sync DG file entry) next
to the file entry for the file you want to synchronize and press Enter.
Note: If you are synchronizing file entries as part of your initial configuration, you
can type 16 next to the first file entry and then press F13 (Repeat). When
you press Enter, all file entries will be synchronized.
Alternative Process:
You will need to identify the data group and data group file entry in this procedure. In
Step 8 and Step 9, you will need to make choices about the sending mode and trigger
support. For additional information, see “About synchronizing file entries
(SYNCDGFE command)” on page 480.
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 41
(Synchronize DG File Entry) and press Enter.
3. The Synchronize DG File Entry (SYNCDGFE) display appears. At the Data group
definition prompts, specify the name of the data group to which the file is
associated.
4. At the System 1 file and Library prompts, specify the name of the database file
you want to synchronize and the library in which it is located on system 1.
5. If you want to synchronize only one member of a file, specify its name at the
Member prompt.
6. At the Data source prompt, ensure that the value matches the system that you
want to use as the source for the synchronization.
7. The default value *YES for the Release wait prompt indicates that MIMIX will hold
the file entry in a release-wait state until a synchronization point is reached. Then
it will change the status to active. If you want to hold the file entry for your
intervention, specify *NO.
8. At the Sending mode prompt, specify the value for the type of data to be
synchronized.
9. At the Disable triggers on file prompt, specify whether the database apply process
should disable triggers when processing the file. Accept *DGFE to use the value
specified in the data group file entry or specify another value. Skip to Step 14.
10. At the Save active prompt, accept *NO so that objects in use are not saved, or,
specify another value.
11. At the Save active wait time prompt, specify the number of seconds to wait for a
commit boundary or a lock on the object before continuing the save.
12. At the Allow object differences prompt, accept the default or specify *YES to
indicate whether certain differences encountered during the restore of the object
on the target system should be allowed.
13. At the Include logical files prompt, accept the default or specify *NO to indicate
whether you want to include attached logical files when sending the file.
14. To change any of the additional parameters, press F10 (Additional parameters).
Verify that the values shown for Include related files, Maximum sending file size
(MB) and Submit to batch are what you want.
15. To synchronize the file, press Enter.
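The defaults walked through in this procedure can be summarized in one place. Representing the prompts as a dictionary is purely illustrative; the prompt names follow the procedure text.

```python
# Illustrative summary of the SYNCDGFE defaults described in the procedure.

SYNCDGFE_DEFAULTS = {
    "Release wait": "*YES",             # hold in release-wait until sync point
    "Sending mode": "*DATA",
    "Disable triggers on file": "*DGFE",
    "Save active": "*NO",
    "Include related files": "*NO",
    "Submit to batch": "*YES",
}

def syncdgfe_request(overrides=None):
    """Merge caller overrides onto the documented defaults."""
    overrides = overrides or {}
    unknown = set(overrides) - set(SYNCDGFE_DEFAULTS)
    if unknown:
        raise ValueError(f"unknown prompts: {sorted(unknown)}")
    return {**SYNCDGFE_DEFAULTS, **overrides}
```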
Synchronizing objects
The procedures in this topic use the Synchronize Object (SYNCOBJ) command to
synchronize library-based objects between two systems. The objects to be
synchronized can be defined to a data group or can be independent of a data group.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 474
• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on
page 478
c. At the Object attribute prompt, accept *ALL to synchronize the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
e. At the System 2 object and System 2 library prompts, if the object and library
names on system 2 are equal to the system 1 names, accept the defaults.
Otherwise, specify the name of the object and library on system 2 to which you
want to synchronize the objects.
f. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system to
which to synchronize the objects.
6. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
Note: When you specify *ONLY and a data group name is not specified, if any
files that are processed by this command are cooperatively processed and
the data group that contains these files is active, the command could fail if
the database apply job has a lock on these files.
7. At the Save active prompt, accept *NO to specify that objects in use are not saved
or specify another value.
8. At the Save active wait time prompt, specify the number of seconds to wait for a commit
boundary or a lock on the object before continuing the save.
9. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
10. At the System 1 ASP group or device prompt, specify the name of the auxiliary
storage pool (ASP) group or device where objects configured for replication may
reside on system 1. Otherwise, accept the default to use the current job’s ASP
group name.
11. At the System 2 ASP device number prompt, specify the number of the auxiliary
storage pool (ASP) where objects configured for replication may reside on system
2. Otherwise, accept the default to use the same ASP number from which the
object was saved (*SAVASP). Only the libraries in the system ASP and any basic
user ASPs from system 2 will be in the library name space.
12. At the System 2 ASP device name prompt, specify the name of the auxiliary
storage pool (ASP) device where objects configured for replication may reside on
system 2. Otherwise, accept the default to use the value specified for the system
1 ASP group or device (*ASPGRP1).
13. Determine how the synchronize request will be processed. Choose one of the
following:
• To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter.
• To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. The request to synchronize will be started.
14. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
15. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
16. To start the synchronization, press Enter.
Synchronizing IFS objects
The procedures in this topic use the Synchronize IFS Object (SYNCIFS) command to
synchronize IFS objects between two systems. The IFS objects to be synchronized
can be defined to a data group or can be independent of a data group.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 474
• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on
page 478
e. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
Note: The System 2 object path name and System 2 name pattern values are
ignored when a data group is specified.
f. Press Enter.
5. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
6. At the Save active prompt, accept *NO to specify that objects in use are not saved
or specify another value.
7. If you chose values in Step 6 to save active objects, you can optionally specify
additional options at the Save active option prompt. Press F1 (Help) for additional
information.
8. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
9. Determine how the synchronize request will be processed. Choose one of the
following:
• To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
• To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. Continue with Step 12.
10. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
11. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
12. To optionally specify a file identifier (FID) for the object on either system, do the
following:
a. At the System 1 file identifier prompt, specify the file identifier (FID) of the IFS
object on system 1. Values for System 1 file identifier prompt can be used
alone or in combination with the IFS object path name.
b. At the System 2 file identifier prompt, specify the file identifier (FID) of the IFS
object on system 2. Values for System 2 file identifier prompt can be used
alone or in combination with the IFS object path name.
Note: For more information, see “Using file identifiers (FIDs) for IFS objects” on
page 312.
13. To start the synchronization, press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 43
(Synchronize IFS object) and press Enter. The Synchronize IFS Object
(SYNCIFS) command appears.
3. At the Data group definition prompts, specify *NONE.
4. At the IFS objects prompts, specify elements for one or more object selectors that
identify IFS objects to synchronize. You can specify as many as 300 object
selectors by using the + for more prompt for each selector. For more information,
see the topic on object selection in the MIMIX Reference book.
For each selector, do the following:
a. At the Object path name prompt, you can optionally accept *ALL or specify the
name or generic value you want.
Note: The IFS object path name can be used alone or in combination with FID
values. See Step 13.
b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the
scope of IFS objects to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the IFS object path name.
d. At the Object type prompt, accept *ALL or specify a specific IFS object type to
synchronize.
e. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
f. At the System 2 object path name and System 2 name pattern prompts, if the
IFS object path name and name pattern on system 2 are equal to the system 1
names, accept the defaults. Otherwise, specify the path name and pattern on
system 2 to which you want to synchronize the IFS objects.
g. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system on
which to synchronize the IFS objects.
6. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
7. At the Save active prompt, accept *NO to specify that objects in use are not saved
or specify another value.
8. If you chose values in Step 7 to save active objects, you can optionally specify
additional options at the Save active option prompt. Press F1 (Help) for additional
information.
9. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
10. Determine how the synchronize request will be processed. Choose one of the
following:
• To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
• To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. Continue with Step 13.
11. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
12. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
13. To optionally specify a file identifier (FID) for the object on either system, do the
following:
a. At the System 1 file identifier prompt, specify the file identifier (FID) of the IFS
object on system 1. Values for System 1 file identifier prompt can be used
alone or in combination with the IFS object path name.
b. At the System 2 file identifier prompt, specify the file identifier (FID) of the IFS
object on system 2. Values for System 2 file identifier prompt can be used
alone or in combination with the IFS object path name.
Note: For more information, see “Using file identifiers (FIDs) for IFS objects” on
page 312.
14. To start the synchronization, press Enter.
498
Synchronizing DLOs
The procedures in this topic use the Synchronize DLO (SYNCDLO) command to
synchronize document library objects (DLOs) between two systems. The DLOs to be
synchronized can be defined to a data group or can be independent of a data group.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 474
• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on
page 478
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the DLO path name.
d. At the DLO type prompt, accept *ALL or specify a specific DLO type to
synchronize.
e. At the Owner prompt, accept *ALL or specify the owner of the DLO.
f. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
g. At the System 2 DLO path name and System 2 DLO name pattern prompts, if
the DLO path name and name pattern on system 2 are equal to the system 1
names, accept the defaults. Otherwise, specify the path name and pattern on
system 2 to which you want to synchronize the DLOs.
h. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system on
which to synchronize the DLOs.
6. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
7. At the Save active prompt, accept *NO to specify that objects in use are not saved
or specify another value.
8. At the Save active wait time prompt, specify the number of seconds to wait for a lock on
the object before continuing the save.
9. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
10. Determine how the synchronize request will be processed. Choose one of the
following:
• To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
• To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. The request to synchronize will be started.
11. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
12. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
13. To start the synchronization, press Enter.
Synchronizing data group activity entries
The procedures in this topic use the Synchronize DG Activity Entry (SYNCDGACTE)
command to synchronize an object that is identified by a data group activity entry with
any status value—*ACTIVE, *DELAYED, *FAILED, or *COMPLETED.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 474
• “About synchronizing data group activity entries (SYNCDGACTE)” on page 479
To synchronize an object identified by a data group activity entry, do the following:
1. From the Work with Data Group Activity Entry display, type 16 (Synchronize) next
to the activity entry that identifies the object you want to synchronize and press
Enter.
2. The Confirm Synchronize of Object display appears. Press Enter to confirm the
synchronization.
Alternative Process:
You will need to identify the data group and data group activity entry in this procedure.
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 45
(Synchronize DG Activity Entry) and press Enter.
3. At the Data group definition prompts, specify the data group name.
4. At the Object type prompt, specify a specific object type to synchronize or press
F4 to see a valid list.
5. Additional parameters appear based on the object type selected. Do one of the
following:
• For files, you will see the Object, Library, and Member prompts. Specify the
object, library and member that you want to synchronize.
• For objects, you will see the Object and Library prompts. Specify the object
and library of the object you want to synchronize.
• For IFS objects, you will see the IFS object prompt. Specify the IFS object that
you want to synchronize.
• For DLOs, you will see the Document library object and Folder prompts.
Specify the folder path and DLO name of the DLO you want to synchronize.
6. Determine how the synchronize request will be processed. Choose one of the
following:
• To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
• To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. The request to synchronize will be started.
7. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
8. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
9. To start the synchronization, press Enter.
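The batch form of this request can also be entered directly on a command line. The following sketch assumes the underlying command is SYNCDGACTE; the command name and the parameter keywords (DGDFN, TYPE, OBJ, LIB, MBR, BATCH, JOBD, JOB) are assumptions inferred from the prompt text above, and MYDGDFN, MYLIB, MYFILE, and MIMIXQGPL/MIMIX are placeholders. Prompt the command and press F4 to confirm the actual parameters.

```
/* Synchronize a file identified by a data group activity     */
/* entry, submitting the request to batch. Command and        */
/* parameter keywords are assumptions; verify them with F4.   */
SYNCDGACTE DGDFN(MYDGDFN) TYPE(*FILE) OBJ(MYFILE)          +
           LIB(MYLIB) MBR(*ALL) BATCH(*YES)                +
           JOBD(MIMIXQGPL/MIMIX) JOB(*CMD)
```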
Synchronizing tracking entries
Tracking entries are MIMIX constructs that identify IFS objects, data areas, or data
queues configured for replication with MIMIX advanced journaling. You can use a
tracking entry to synchronize the contents, attributes, and authorities of the item it
represents.
You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 474
• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on
page 478
• “About synchronizing tracking entries” on page 482
Sending library-based objects
b. If the library on the remote system has a different name, specify its name at the
Remote library prompt.
c. The remaining prompts on the display are used for objects synchronized via a
save and restore operation. Verify that the values shown are what you want.
To see a description of each prompt and its available values, place the cursor
on the prompt and press F1 (Help).
9. By default, objects are restored to the same ASP device or number from which
they were saved. To change the location where objects are restored, press F10
(Additional parameters), then specify a value for either the Restore to ASP device
prompt or the Restore to ASP number prompt.
Note: Object types *JRN, *JRNRCV, *LIB, and *SAVF can be restored to any
ASP. IBM restricts which object types are allowed in user ASPs; some
object types may not be restored to user ASPs. Specifying a value of 1
restores objects to the system ASP. Specifying 2 through 32 restores
objects to the specified basic user ASP. If the specified ASP number does
not exist on the target system or if it has overflowed, the objects are placed
in the system ASP on the target system.
10. By default, authority to the object on the remote system is determined by that
system. To have the authorities on the remote system determined by the settings
of the local system, press F10 (Additional parameters), then specify *SRC at the
Target authority prompt.
11. To start sending the specified objects, press Enter.
Sending IFS objects
Sending DLO objects
This procedure uses i5/OS save and restore functions to send one or more document
library objects (DLOs) between two systems using the Send Network DLO
(SNDNETDLO) command. When you are configuring for system journal replication,
use this procedure from the source system to send DLOs to the target system for
replication.
Use the appropriate command: In general, you should use the SYNCDLO
command to synchronize DLOs between systems. For more information about
differences between commands, see “Performing the initial synchronization” on
page 483.
You should be familiar with the information in “Considerations for synchronizing using
MIMIX commands” on page 474.
To send DLO objects between systems, do the following:
1. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and
press Enter.
2. The MIMIX Utilities Menu appears. Select option 12 (Send DLO object) and press
Enter.
3. The Send Network DLO (SNDNETDLO) display appears. At the Document library
object prompt, specify either *ALL or the name of the DLO.
Note: You can specify multiple DLOs. To expand this prompt for multiple entries,
type a plus sign (+) at the prompt and press Enter.
4. Specify the name of the folder that contains the DLOs at the Folder prompt.
5. Specify the name of the system to which you are sending DLOs at the Remote
system prompt.
6. Press F10 (Additional parameters).
7. Additional parameters appear on the display. MIMIX uses the Remote folder,
Save active, Save active wait time, and Allow object differences prompts in the
save and restore operations. Verify that the values shown are what you want. To
see a description of each prompt and its available values, place the cursor on the
prompt and press F1 (Help).
8. By default, authority to the object on the remote system is determined by that
system. To have the authorities on the remote system determined by the settings
of the local system, specify *SRC at the Target authority prompt.
9. To start sending the specified DLOs, press Enter.
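Entered directly on a command line, the procedure above corresponds to an invocation like the following sketch. The DLO, folder, and system names are placeholders, and the parameter keywords (DLO, FLR, RMTSYS, RMTFLR, TGTAUT) are assumptions inferred from the prompt names; use F4 to confirm them.

```
/* Send one DLO from folder MYFLR to system SYSTEM2,          */
/* letting the local system's settings determine target       */
/* authority. Keyword names are assumptions; verify with F4.  */
SNDNETDLO DLO(MYDOC) FLR(MYFLR) RMTSYS(SYSTEM2)            +
          RMTFLR(*FLR) TGTAUT(*SRC)
```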
Chapter 21
Introduction to programming
MIMIX includes a variety of functions that you can use to extend MIMIX capabilities
through automation and customization.
The topics in this chapter include:
• “Support for customizing” on page 511 describes several functions you can use to
customize your replication environment.
• “Completion and escape messages for comparison commands” on page 514 lists
completion, diagnostic, and escape messages generated by comparison
commands.
• The MIMIX message log provides a common location to see messages from all
MIMIX products. “Adding messages to the MIMIX message log” on page 521
describes how you can include your own messaging from automation programs in
the MIMIX message log.
• MIMIX supports batch output jobs on numerous commands and provides several
forms of output, including outfiles. For more information, see “Output and batch
guidelines” on page 523.
• “Displaying a list of commands in a library” on page 528 describes how to display
the superset of all Lakeview commands known to License Manager or to subset
the list by a particular library.
• “Running commands on a remote system” on page 529 describes how to run a
single command or multiple commands on a remote system.
• “Procedures for running commands RUNCMD, RUNCMDS” on page 530
provides procedures for using run commands with a specific protocol or by
specifying a protocol through existing MIMIX configuration elements.
• “Using lists of retrieve commands” on page 536 identifies how to use MIMIX list
commands to include retrieve commands in automation.
• Commands are typically set with default values that reflect the recommendation of
Lakeview Technology. “Changing command defaults” on page 537 provides a
method for customizing default values should your business needs require it.
Support for customizing
MIMIX includes several functions that you can use to customize processing within
your replication environment.
Collision resolution
In the context of high availability, a collision is a clash of data that occurs when a
target object and a source object are both updated at the same time. When the
change to the source object is replicated to the target object, the data does not match
and the collision is detected.
With MIMIX user journal replication, the definition of a collision is expanded to include
any condition where the status of a file or a record is not what MIMIX determines it
should be when MIMIX applies a journal transaction. Examples of these detected
conditions include the following:
• Updating a record that does not exist
• Deleting a record that does not exist
• Writing to a record that already exists
• Updating a record for which the current record information does not match the
before image
The database apply process contains 12 collision points at which MIMIX can attempt
to resolve a collision.
When a collision is detected, by default the file is placed on hold due to an error
(*HLDERR) and user action is needed to synchronize the files. MIMIX provides
additional ways to automatically resolve detected collisions without user intervention.
This process is called collision resolution. With collision resolution, you can specify
different resolution methods to handle these different types of collisions. If a collision
does occur, MIMIX attempts the specified collision resolution methods until either the
collision is resolved or the file is placed on hold.
You can specify collision resolution methods for a data group or for individual data
group file entries. If you specify *AUTOSYNC for the collision resolution element of
the file entry options, MIMIX attempts to fix any problems it detects by synchronizing
the file.
You can also specify a named collision resolution class. A collision resolution class
allows you to define what type of resolution to use at each of the collision points.
Collision resolution classes allow you to specify several methods of resolution to try
for each collision point and support the use of an exit program. These additional
choices for resolving collisions allow customized solutions for resolving collisions
without requiring user action. For more information, see “Collision resolution” on
page 381.
Completion and escape messages for comparison commands
CMPFILA messages
The following are the messages for CMPFILA, with a comparison level specification of
*FILE:
• Completion LVI3E01 – This message indicates that all files were compared
successfully.
• Diagnostic LVE3E0D – This message indicates that a particular attribute
compared differently.
• Diagnostic LVE3385 – This message indicates that differences were detected for
an active file.
• Diagnostic LVE3E12 – This message indicates that a file was not compared. The
reason the file was not compared is included in the message.
• Escape LVE3E05 – This message indicates that files were compared with
differences detected. If the cumulative differences include files that were different
but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
• Escape LVE3381 – This message indicates that compared files were different but
active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter.
• Escape LVE3E09 – This message indicates that the CMPFILA command ended
abnormally.
• Escape LVE3E17 – This message indicates that no object matched the specified
selection criteria.
• Informational LVI3E06 – This message indicates that no object was selected to be
processed.
The following are the messages for CMPFILA, with a comparison level specification of
*MBR:
• Completion LVI3E05 – This message indicates that all members compared
successfully.
• Diagnostic LVE3388 – This message indicates that differences were detected for
an active member.
• Escape LVE3E16 – This message indicates that members were compared with
differences detected. If the cumulative differences include members that were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
CMPOBJA messages
The following are the messages for CMPOBJA:
• Completion LVI3E02 – This message indicates that objects were compared but no
differences were detected.
• Diagnostic LVE3384 – This message indicates that differences were detected for
an active object.
• Escape LVE3E06 – This message indicates that objects were compared and
differences were detected. If the cumulative differences include objects that were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
• Escape LVE3380 – This message indicates that compared objects were different
but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter.
• Escape LVE3E17 – This message indicates that no object matched the specified
selection criteria.
• Informational LVI3E06 – This message indicates that no object was selected to be
processed.
The LVI3E02 message includes message data containing the number of objects
compared, the system 1 name, and the system 2 name. The LVE3E06 message
includes the same message data as LVI3E02, plus the number of differences detected.
CMPIFSA messages
The following are the messages for CMPIFSA:
• Completion LVI3E03 – This message indicates that all IFS objects were
compared successfully.
• Diagnostic LVE3E0F – This message indicates that a particular attribute
compared differently.
• Diagnostic LVE3386 – This message indicates that differences were detected for
an active IFS object.
• Diagnostic LVE3E14 – This message indicates that an IFS object was not
compared. The reason the IFS object was not compared is included in the
message.
• Escape LVE3E07 – This message indicates that IFS objects were compared with
differences detected. If the cumulative differences include IFS objects that were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
• Escape LVE3382 – This message indicates that compared IFS objects were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter.
• Escape LVE3E17 – This message indicates that no object matched the specified
selection criteria.
• Escape LVE3E0B – This message indicates that the CMPIFSA command ended
abnormally.
• Informational LVI3E06 – This message indicates that no object was selected to be
processed.
CMPDLOA messages
The following are the messages for CMPDLOA:
• Completion LVI3E04 – This message indicates that all DLOs were compared
successfully.
• Diagnostic LVE3E11 – This message indicates that a particular attribute
compared differently.
• Diagnostic LVE3387 – This message indicates that differences were detected for
an active DLO.
• Diagnostic LVE3E15 – This message indicates that a DLO was not compared.
The reason the DLO was not compared is included in the message.
• Escape LVE3E08 – This message indicates that DLOs were compared and
differences were detected. If the cumulative differences include DLOs that were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
• Escape LVE3383 – This message indicates that compared objects were different
but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter.
• Escape LVE3E17 – This message indicates that no object matched the specified
selection criteria.
• Escape LVE3E0C – This message indicates that the CMPDLOA command ended
abnormally.
• Informational LVI3E06 – This message indicates that no object was selected to be
processed.
CMPRCDCNT messages
The following are the messages for CMPRCDCNT:
• Escape LVE3D4D – This message indicates that ACTIVE(*YES) outfile
processing failed and identifies the reason code.
• Escape LVE3D5A – This message indicates that system journal replication is not
active.
• Escape LVE3D5F – This message indicates that an apply session exceeded the
unprocessed entry threshold.
• Escape LVE3D6D – This message indicates that user journal replication is not
active.
• Escape LVE3D6F – This message identifies the number of members compared
and how many compared members had differences.
• Escape LVE3D72 – This message identifies a child process that ended
unexpectedly.
• Escape LVE3E17 – This message indicates that no object was found for the
specified selection criteria.
• Informational LVI306B – This message identifies a child process that started
successfully.
• Informational LVI306D – This message identifies a child process that completed
successfully.
• Informational LVI3D45 – This message indicates that active processing
completed.
• Informational LVI3D50 – This message indicates that work files are not deleted.
• Informational LVI3D5A – This message indicates that system journal replication is
not active.
• Informational LVI3D5F – This message identifies an apply session that has
exceeded the unprocessed entry threshold.
• Informational LVI3D6D – This message indicates that user journal replication is
not active.
• Informational LVI3E05 – This message identifies the number of members
compared. No differences were detected.
• Informational LVI3E06 – This message indicates that no object was selected for
processing.
CMPFILDTA messages
The following are the messages for CMPFILDTA:
• Completion LVI3D59 – This message indicates that all members compared were
identical or that one or more members differed but were then completely repaired.
• Diagnostic LVE3031 – This message indicates that the name of the local system
was entered on the System 2 (SYS2) prompt. Using the name of the local system
on the SYS2 prompt is not valid.
• Diagnostic LVE3D40 – This message indicates that a record in one of the
members cannot be processed. In this case, another job is holding an update lock
on the record and the wait time has expired.
• Diagnostic LVE3D42 – This message indicates that a selected member cannot be
processed and provides a reason code.
• Diagnostic LVE3D46 – This message indicates that a file member contains one or
more field types that are not supported for comparison. These fields are excluded
from the data compared.
• Diagnostic LVE3D50 – This message indicates that a file member contains one or
more large object (LOB) fields and a value other than *NONE was specified on the
Repair on system (REPAIR) prompt. Files containing LOB fields cannot be
repaired. In this case, the request to process the file member is ignored. Specify
REPAIR(*NONE) to process the file member.
• Diagnostic LVE3D64 – This message indicates that the compare detected minor
differences in a file member. In this case, one member has more records
allocated. Excess allocated records are deleted. This difference does not affect
replication processing, however.
• Diagnostic LVE3D65 – This message indicates that processing failed for the
selected member. The member cannot be compared. Error message LVE0101 is
returned.
• Escape LVE3358 – This message indicates that the compare ended abnormally.
It is sent only when the conditions of messages LVI3D59, LVE3D5D, and
LVE3D59 do not apply.
• Escape LVE3D5D – This message indicates that insignificant differences were
found or remain after repair. The message provides a statistical summary of the
differences found. Insignificant differences may occur when a member has
deleted records while the corresponding member has no records yet allocated at
the corresponding positions. It is also possible that one or more selected
members contains excluded fields, such as large objects (LOBs).
• Escape LVE3D5E – This message indicates that the compare request ended
because the data group was not fully active. The request included active
processing (ACTIVE), which requires a fully active data group. Output may not be
complete or accurate.
• Escape LVE3D5F – This message indicates that the apply session exceeded the
specified threshold for unprocessed entries. The DB apply threshold
(DBAPYTHLD) parameter determines what action should be taken when the
threshold is exceeded. In this case, the value *END was specified for
DBAPYTHLD, thereby ending the requested compare and repair action.
• Escape LVE3D59 – This message indicates that significant differences were
found or remain after repair, or that one or more selected members could not be
compared. The message provides a statistical summary of the differences found.
• Escape LVE3D56 – This message indicates that no member was selected by the
object selection criteria.
• Escape LVE3D60 – This message indicates that the status of the data group
could not be determined. The WRKDG (MXDGSTS) outfile returned a value of
*UNKNOWN for one or more fields used in determining the overall status of the
data group.
• Escape LVE3D62 – This message indicates the number of mismatches that will
not be fully processed for a file due to the large number of mismatches found for
this request. The compare will stop processing the affected file and will continue to
process any other files specified on the same request.
• Escape LVE3D67 – This message indicates that the value specified for the File
entry status (STATUS) parameter is not valid. To process members in *HLDERR
status, a data group must be specified on the command and *YES must be
specified for the Process while active parameter.
• Escape LVE3D68 – This message indicates that a switch cannot be performed
due to members undergoing compare and repair processing.
• Escape LVE3D69 – This message indicates that the data group is not configured
for database. Data groups used with the CMPFILDTA command must be
configured for database, and all processes for that data group must be active.
• Escape LVE3D6C – This message indicates that the CMPFILDTA command
ended before it could complete the requested action. The processing step in
progress when the end was received is indicated. The message provides a
statistical summary of the differences found.
• Escape LVE3E41 – This message indicates that a database apply job cannot
process a journal entry with the indicated code, type, and sequence number
because a supporting function failed. The journal information and the apply
session for the data group are indicated. See the database apply job log for
details of the failed function.
• Informational LVI3727 – This message indicates that the database apply process
(DBAPY) is currently processing a repair request for a specific member. The
member was previously being held due to error (*HLDERR) and is now in
*CMPRLS state.
• Informational LVI3728 – This message indicates that the database apply process
(DBAPY) is currently processing a repair request for a specific member. The
member was previously being held due to error (*HLDERR) and has been
changed from *CMPRLS to *CMPACT state.
• Informational LVI3729 – This message indicates that the repair request for a
specific member was not successful. As a result, the CMPFILDTA command has
changed the data group file entry for the member back to *HLDERR status.
• Informational LVI372C – The CMPFILDTA command is ending in a controlled
manner because of a user request. The command did not complete the requested
compare or repair. Its output may be incomplete or incorrect.
• Informational LVI372D – The CMPFILDTA command exceeded the maximum rule
recovery time policy and is ending. The command did not complete the requested
compare or repair. Its output may be incomplete or incorrect.
• Informational LVI372E – The CMPFILDTA command is ending unexpectedly. It
received an unexpected request from the remote CMPFILDTA job to shut down.
The command did not complete the requested compare or repair. Its output may
be incomplete or incorrect.
• Informational LVI3D4B – This message indicates that work files are not
automatically deleted because the time specified on the Wait time (seconds)
(ACTWAIT) prompt expired or an internal error occurred.
• Informational LVI3D59 – This message indicates that the CMPFILDTA command
Adding messages to the MIMIX message log
The Add Message Log Entry (ADDMSGLOGE) command allows you to add an entry
to the MIMIX message log. This is helpful when you want to include messages from
your automation programs into the MIMIX message log for easier tracking. To see the
parameters for this command, type the command and press F4 (Prompt). Help text for
the parameters describe the options available.
The message is written to the message log file. The message is also sent to the
primary and secondary messages queues if the message meets the filter criteria for
those queues. The message can also be sent to a program message queue.
Messages generated on a network system are automatically sent to the
management system. However, messages generated on a management system may
not be sent to any network system. The system manager on the management
system does not send messages to network systems when it cannot determine which
system should receive the message.
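For example, an automation CL program might log its own completion status so that it appears alongside MIMIX messages. This is a sketch only; the parameter keywords (MSGID, MSGF, MSGDTA) and the message file MYLIB/AUTOMSGF are assumptions, not documented values; prompt ADDMSGLOGE with F4 to see its actual parameters.

```
/* Log a message from an automation program to the MIMIX      */
/* message log. Parameter keywords and the message file name  */
/* are assumptions; verify them by prompting with F4.         */
ADDMSGLOGE MSGID(AUT0001) MSGF(MYLIB/AUTOMSGF)             +
           MSGDTA('Nightly sync completed')
```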
Output and batch guidelines
This topic provides guidelines for display, print, and file output. In addition, the user
interface, the mechanics of selecting and producing output, and content issues such
as formatting are described.
Batch job submission guidelines are also provided. These guidelines address the
user interface as well as the mechanics of submitting batch jobs that are not part of
the mainline replication process.
Output parameter
Some commands can produce output of more than one type—display, print, or output
file. In these cases, the selection is made on the Output parameter. Table 68 lists the
values supported by the Output parameter.
Note: Not all values are supported for all commands. For some commands, a
combination of values is supported.
Value        Description
*            Display only
*PRINT       Print (spooled) output
*OUTFILE     Output file
*NONE        No output
Commands that support OUTPUT(*) and that can also run in batch are required to
support the other forms of output as well.
Commands called from a program or submitted to batch with a specification of
OUTPUT(*) default to OUTPUT(*PRINT). Displaying a panel during batch processing
or when called from another program would otherwise fail.
With the exception of messages generated as a result of running a command,
commands that support OUTPUT(*NONE) will generate no other forms of output.
Commands that support combinations of output values do not support OUTPUT(*) in
combination with other output values.
Display output
Commands that support OUTPUT(*) provide the ability to display information
interactively. Display (DSP) and Work (WRK) commands commonly use display
support. Display commands typically display detailed information for a specific entity,
such as a data group definition. Work commands display a list of entries and provide
a summary view of those entries. Display support is required to work interactively with
the MIMIX product.
Work commands often provide subsetting capabilities that allow you to select a
subset of information. Rather than viewing all configuration entries for all data groups,
for example, subsetting allows you to view the configuration entries for a specific data
group. This ability allows you to easily view data that is important or relevant to you at
a given time.
Print output
Spooled output is generated by specifying OUTPUT(*PRINT) and is intended to
provide a readable form of output for print or distribution purposes. The output takes
the form of spooled files that can easily be printed or distributed. Most Display (DSP)
and Work (WRK) commands support this form of output. Other commands, such as
Compare (CMP) and Verify (VFY), also support spooled output in most cases.
The Work (WRK) and Display (DSP) commands support different categories of
reports. The following are standard categories of reports available from these
commands:
• The detail report contains information for one item, such as an object, definition,
or entry. A detail report is usually obtained by using option 6 (Print) on a Work
(WRK) display, or by specifying *PRINT on the Output parameter on a Display
(DSP) command.
• The list summary report contains summary information for multiple objects,
definitions, or entries. A list summary is usually obtained by pressing F21 (Print)
on a Work (WRK) display. You can also get this report by specifying *BASIC on
the Detail parameter on a Work (WRK) command.
• The list detail report contains detailed information for multiple objects,
definitions, or entries. A list detail report is usually obtained by specifying *PRINT
on the Output parameter of a Work (WRK) command.
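For example, the list summary and list detail reports can be requested from a Work command such as WRKDGDFN, using only the OUTPUT and DETAIL parameters described in this topic:

```
/* List summary report: one line per data group definition    */
WRKDGDFN OUTPUT(*PRINT) DETAIL(*BASIC)

/* List detail report: full attributes of each definition     */
WRKDGDFN OUTPUT(*PRINT) DETAIL(*FULL)
```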
Certain parameters, which vary from command to command, can affect the contents
of spooled output. The following list represents a common set of parameters that
directly impact spooled output:
• EXPAND(*YES or *NO) - The expand parameter is available on the Work with
Data Group Object Entries (WRKDGOBJE), the Work with Data Group IFS
Entries (WRKDGIFSE), and the Work with Data Group DLO Entries
(WRKDGDLOE) commands. Configuration for objects, IFS objects, and DLOs can
be accomplished using generic entries, which represent one or more actual
objects on the system. The object entry ABC*, for example, can represent many
entries on a system. Expand support provides a means to determine which actual
objects on a system are represented by a MIMIX configuration. Specifying *NO on
the EXPAND parameter prints the configured data group entries.
• DETAIL(*FULL or *BASIC) - Available on the Work (WRK) commands, the detail
option determines the level of detail in the generated spool file. Specifying
DETAIL(*BASIC) prints a summary list of entries. For example, this specification
on the Work with Data Group Definitions (WRKDGDFN) command will print a
summary list of data group definitions. Specifying DETAIL(*FULL) prints each
data group definition in detail, including all attributes of the data group definition.
Note: This parameter is ignored when OUTPUT(*) or OUTPUT(*OUTFILE) is
specified.
• RPTTYPE(*DIF, *ALL, *SUMMARY or *RRN, depending on command) - The
Report Type (RPTTYPE) parameter controls the amount of information in the
spooled file. The values available for this parameter vary, depending on the
command.
The values *DIF, *ALL, and *SUMMARY are available on the Compare File
Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS
Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA) commands.
Specifying *DIF reports only detected differences. A value of *SUMMARY reports
a summary of objects compared, including an indication of differences detected.
*ALL provides a comprehensive listing of objects compared as well as difference
detail.
The Compare File Data (CMPFILDTA) command supports *DIF and *ALL values,
as well as the value *RRN. Specifying *RRN allows you to output the relative
record number of the first 1,000 objects that failed to compare. Using the *RRN
value can help resolve situations where a discrepancy is known to exist, but you
are unsure which system contains the correct data. In this case, *RRN provides
the information that enables you to display the specific records on the two
systems and to determine the system on which the file should be repaired.
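A sketch of a CMPFILDTA request that prints the relative record numbers of mismatched records follows. The data group and file names are placeholders, and the object selection keywords (DGDFN, OBJ, LIB) are assumptions; OUTPUT and RPTTYPE are as described above. Verify the actual parameters with F4.

```
/* Compare file data and report the relative record numbers   */
/* of records that failed to compare. Object selection        */
/* keywords are assumptions; verify with F4.                  */
CMPFILDTA DGDFN(MYDGDFN) OBJ(MYFILE) LIB(MYLIB)            +
          OUTPUT(*PRINT) RPTTYPE(*RRN)
```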
File output
Output files can be generated by specifying OUTPUT(*OUTFILE). Having full outfile
support across the MIMIX product is important for a number of reasons. Outfile
support is a key enabler for advanced automation purposes. The support also allows
MIMIX customers and qualified MIMIX consultants to develop and deliver solutions
tailored to the individual needs of the user.
As with the other forms of output, output files are commonly supported across certain
classes of commands. The Work (WRK) commands commonly support output files. In
addition, many audit-based reports, such as Comparison (CMP) commands, also
provide output file support. Output file support for Work (WRK) commands provides
access to the majority of MIMIX configuration and status-related data. The Compare
(CMP) commands also provide output files as a key enabler for automatic error
detection and correction capabilities.
When you specify OUTPUT(*OUTFILE), you must also specify the OUTFILE and
OUTMBR parameters. The OUTFILE parameter requires a qualified file and library
name. When the command runs, the specified output file is used. If the file does not
exist, it is created automatically.
Note: If a new file is created for CMPFILA, for example, the record format used is
from the Lakeview-supplied model database file MXCMPFILA, found in the
installation library. The text description of the created file is “Output file for
CMPFILA.” The file cannot reside in the product library.
The Outmember (OUTMBR) parameter allows you to specify which member to use in
the output file. If no member exists, the default value of *FIRST creates a member
with the same name as the file. A second element on the Outmember
parameter indicates the way in which information is stored for an existing member. A
value of *REPLACE will clear the current contents of the member and add the new
records. A value of *ADD will append the new records to the existing data.
Expand support: The Expand support was developed specifically as a feature for
data group configuration entries that support generic specifications. Data group object
entries, IFS entries, and DLO entries can all be configured using generic name
values. If you specify an object entry with an object name of ABC* in library XYZ and
accept the default values for all other fields, for example, all objects in library XYZ are
replicated. Specifying EXPAND(*NO) will write the specific configuration entries to the
output files. Using EXPAND(*YES) will list all objects from the local system that match
the configuration specified. Thus, if object name ABC* for library XYZ represented
1000 actual objects on the system, EXPAND(*YES) would add 1000 rows to the
output file. EXPAND(*NO) would add a single generic entry.
Note: EXPAND(*YES) support locates all objects on the local system.
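For example, assuming a data group named MYDG containing the ABC* object entry described above, the two EXPAND values would produce very different outfile contents. The command string below is a sketch for illustration; verify the parameters with command prompting (F4):

```cl
/* One row per configured (generic) entry                     */
WRKDGOBJE  DGDFN(MYDG SYS1 SYS2) OUTPUT(*OUTFILE) +
             OUTFILE(MYLIB/OBJENT) EXPAND(*NO)

/* One row per actual object on the local system that matches */
WRKDGOBJE  DGDFN(MYDG SYS1 SYS2) OUTPUT(*OUTFILE) +
             OUTFILE(MYLIB/OBJEXP) EXPAND(*YES)
```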
General batch considerations
MIMIX functions that are identified as long-running processes typically allow you to
submit the requests to batch and avoid the unnecessary use of interactive resources.
Parameters typically associated with the Batch (BATCH) parameter include Job
description (JOBD) and Job name (JOB).
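For example, a long-running compare might be submitted to batch as follows. The job description shown is an assumption; substitute one appropriate for your environment:

```cl
/* Run the compare in batch under a tailored job description  */
CMPFILDTA  DGDFN(MYDG SYS1 SYS2) BATCH(*YES) +
             JOBD(MIMIXQGPL/MIMIXCMP) JOB(CMPFILDTA)
```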
Displaying a list of commands in a library
Running commands on a remote system
The Run Command (RUNCMD) and Run Commands (RUNCMDS) commands
provide a convenient way to run a single command or multiple commands on a
remote system. The RUNCMD and RUNCMDS commands replace and extend the
capabilities available in the IBM commands, Submit Remote Command
(SBTRMTCMD) and Run Remote Command (RUNRMTCMD).
The MIMIX commands provide a protocol-independent way of running commands
using MIMIX constructs such as system definitions, data group definitions, and
transfer definitions. The MIMIX commands enable you to run commands and receive
messages from the remote system.
In addition, the RUNCMD and RUNCMDS commands use the current data group
direction to determine where the command is to be run. This capability simplifies
automation by eliminating the need to manually enter source and target information at
the time a command is run.
Note: Do not change the authority of the RUNCMD or RUNCMDS commands to
PUBLIC(*EXCLUDE) without granting the MIMIXOWN user profile proper authority.
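A sketch of typical usage, assuming a data group named MYDG. The parameter names shown are assumptions based on the constructs described above and can vary by MIMIX version, so verify them with command prompting (F4):

```cl
/* Run a single command on the system determined by the       */
/* current data group direction (values are illustrative).    */
RUNCMD     CMD(DSPLIB LIB(XYZ)) PROTOCOL(*DGDFN) +
             DGDFN(MYDG SYS1 SYS2)
```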
Procedures for running commands RUNCMD, RUNCMDS
Table 69. Specific protocols and specifications used for RUNCMD and RUNCMDS
11. At the User prompt, specify the user profile to use when the command is run on
the remote system.
12. To run the commands or monitor for messages, press Enter.
Table 70. MIMIX configuration protocols and specifications
Table 71. Options for processing journal entries with MIMIX *DGJRN protocol
To run when the database apply job for the specified file receives the journal entry,
do the following:
1. At the Protocol prompt, specify *DGJRN.
2. At the When to run prompt, specify *RCV.
Using lists of retrieve commands
Changing command defaults
Nearly all MIMIX processes are based on commands that are shipped with default
values reflecting best-practice recommendations, which makes each command
straightforward to use correctly. MIMIX also implements named configuration
definitions through which you can customize your configuration by using options on
commands rather than by changing command defaults.
If you wish to customize command defaults to fit a specific business need, use the
IBM Change Command Default (CHGCMDDFT) command. Be aware that changing a
command default may affect the operation of other MIMIX processes. Also, each
update of the MIMIX software will cause any such changes to be lost.
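For example, the IBM CHGCMDDFT command changes a default in place. The MIMIX command and keyword shown here are placeholders used only to illustrate the syntax:

```cl
/* Make *OUTFILE the default output for a hypothetical        */
/* MIMIX command in library MIMIX.                            */
CHGCMDDFT  CMD(MIMIX/WRKDG) NEWDFT('OUTPUT(*OUTFILE)')
```

Because such a change is lost at each MIMIX software update, record any CHGCMDDFT commands you run so that they can be reapplied.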
Chapter 22 Customizing with exit point programs
The MIMIX family of products provides a variety of exit points that enable you to
extend and customize your operations.
The topics in this chapter include:
• “Summary of exit points” on page 538 provides tables that summarize the exit
points available for use.
• “Working with journal receiver management user exit points” on page 541
describes how to use user exit points safely.
MIMIX also supports a generic interface to existing database and object replication
process exit points that provides enhanced filtering capability on the source system.
This generic user exit capability is only available through a Certified MIMIX
Consultant.
Customizing with exit point programs
The Using MIMIX Monitor book documents the user exit points, the API, and MIMIX
Model Switch Framework.
Event program exit point: After condition check (pre-defined and user-defined)
Summary of exit points
Working with journal receiver management user exit
points
User exit points in critical processing areas enable you to incorporate specialized
processing with MIMIX to extend function to meet additional needs for your
environment. Access to user exit processing is provided through the use of an exit
program that can be written in any language supported by i5/OS.
Since user exit programming allows for user code to be run within MIMIX processes,
great care must be exercised to prevent the user code from interfering with the proper
operation of MIMIX. For example, a user exit program that inadvertently causes an
entry to be discarded that is needed by MIMIX could result in a file not being available
in case of a switch. Use caution in designing a configuration for use with user exit
programming. You can safely use user exit processing with proper design,
programming, and testing. Lakeview services are also available to help customers
implement specialized solutions.
the name of the first entry in the currently attached journal receiver.)
Restrictions for Change Management Exit Points: The following restrictions apply
when the exit program is called from either of the change management exit points:
• Do not include the Change Data Group Receiver (CHGDGRCV) command in your
exit program.
• Do not submit batch jobs for journal receiver change or delete management from
the exit program. Submitting a batch job would allow the in-line exit point
processing to continue and potentially return to normal MIMIX journal
management processing, thereby conflicting with journal manager operations. By
not submitting journal receiver change management to a batch job, you prevent a
potential problem where the journal receiver is locked when it is accessed by a
batch program.
program fails and signals an exception to MIMIX, MIMIX processing continues as if
the exit program was not specified.
Return Code
OUTPUT; CHAR (1)
This value indicates how to continue processing the journal receiver when the exit
program returns control to the MIMIX process. This parameter must be set. When the
exit program is called from Function C2, the value of the return code is ignored.
Possible values are:
0 Do not continue with MIMIX journal management processing for this journal
receiver.
1 Continue with MIMIX journal management processing.
Function
INPUT; CHAR (2)
The exit point from which this exit program is called. Possible values are:
Note: Restrictions for exit programs called from the C1 and C2 exit points are
described within topic “Change management exit points” on page 541.
Journal Definition
INPUT; CHAR (10)
The name that identifies the journal definition.
System
INPUT; CHAR (8)
The name of the system defined to MIMIX on which the journal is defined.
Reserved1
INPUT; CHAR (10)
This field is reserved and contains blank characters.
Journal Name
INPUT; CHAR (10)
The name of the journal that MIMIX is processing.
Journal Library
INPUT; CHAR (10)
The name of the library in which the journal is located.
Receiver Name
INPUT; CHAR (10)
The name of the journal receiver associated with the specified journal. This is the
journal receiver on which journal management functions will operate. For receiver
change management functions, this always refers to the currently attached journal
receiver. For receiver delete management functions, this always refers to the same
journal receiver.
Receiver Library
INPUT; CHAR (10)
The library in which the journal receiver is located.
Sequence Option
INPUT; CHAR (6)
The value of the Sequence option (SEQOPT) parameter on the CHGJRN command
that MIMIX processing would have used to change the journal receiver. Lakeview
Technology recommends that you specify this parameter to prevent synchronization
problems if you change the journal receiver. This parameter is only used when the
exit program is called at the C1 (pre-change) exit point. Possible values are:
*CONT The journal sequence number of the next journal entry created is 1 greater than
the sequence number of the last journal entry in the currently attached journal
receiver.
*RESET The journal sequence number of the first journal entry in the newly attached
journal receiver is reset to 1. The exit program should either reset the sequence
number or set the return code to 0 to allow MIMIX to change the journal receiver
and reset the sequence number.
Threshold Value
INPUT; DECIMAL(15, 5)
The value to use for the THRESHOLD parameter on the CRTJRNRCV command.
This parameter is only used when the exit program is called at the C1 (pre-change)
exit point. Possible values are:
0 Do not change the threshold value. The exit program must not change the
threshold size for the journal receiver.
value The exit program must create a journal receiver with this threshold value, specified
in kilobytes. The exit program must also change the journal to use that receiver, or
send a return code value of 0 so that MIMIX processing can change the journal
receiver.
Reserved2
INPUT; CHAR (1)
This field is reserved and contains blank characters.
Reserved3
INPUT; CHAR (1)
This field is reserved and contains blank characters.
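Taken together, the parameters above could be declared in a CL exit program as follows. This is a reconstruction for illustration; the variable names are arbitrary, but the types and lengths match the parameter descriptions:

```cl
/* Parameter list reconstructed from the descriptions above.  */
PGM        PARM(&RETURN &FUNCTION &JRNDFN &SYSTEM &RSV1 +
             &JRNNAME &JRNLIB &RCVNAME &RCVLIB &SEQOPT +
             &THRESHOLD &RSV2 &RSV3)
DCL        VAR(&RETURN)    TYPE(*CHAR) LEN(1)    /* Return code  */
DCL        VAR(&FUNCTION)  TYPE(*CHAR) LEN(2)    /* Exit point   */
DCL        VAR(&JRNDFN)    TYPE(*CHAR) LEN(10)   /* Journal dfn  */
DCL        VAR(&SYSTEM)    TYPE(*CHAR) LEN(8)    /* System name  */
DCL        VAR(&RSV1)      TYPE(*CHAR) LEN(10)   /* Reserved1    */
DCL        VAR(&JRNNAME)   TYPE(*CHAR) LEN(10)   /* Journal      */
DCL        VAR(&JRNLIB)    TYPE(*CHAR) LEN(10)   /* Journal lib  */
DCL        VAR(&RCVNAME)   TYPE(*CHAR) LEN(10)   /* Receiver     */
DCL        VAR(&RCVLIB)    TYPE(*CHAR) LEN(10)   /* Receiver lib */
DCL        VAR(&SEQOPT)    TYPE(*CHAR) LEN(6)    /* CHGJRN seq   */
DCL        VAR(&THRESHOLD) TYPE(*DEC)  LEN(15 5) /* Size in KB   */
DCL        VAR(&RSV2)      TYPE(*CHAR) LEN(1)    /* Reserved2    */
DCL        VAR(&RSV3)      TYPE(*CHAR) LEN(1)    /* Reserved3    */
```

The sample program that follows uses these same variable names.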
/*--------------------------------------------------------------*/
/* Program....: DMJREXIT */
/* Description: Example user exit program using CL */
/*--------------------------------------------------------------*/
/*--------------------------------------------------------------*/
/* Constants and misc. variables */
/*--------------------------------------------------------------*/
DCL VAR(&STOP) TYPE(*CHAR) LEN(1) VALUE('0')
DCL VAR(&CONTINUE) TYPE(*CHAR) LEN(1) VALUE('1')
DCL VAR(&PRECHG) TYPE(*CHAR) LEN(2) VALUE('C1')
DCL VAR(&POSTCHG) TYPE(*CHAR) LEN(2) VALUE('C2')
DCL VAR(&PRECHK) TYPE(*CHAR) LEN(2) VALUE('D0')
DCL VAR(&PREDLT) TYPE(*CHAR) LEN(2) VALUE('D1')
DCL VAR(&POSTDLT) TYPE(*CHAR) LEN(2) VALUE('D2')
DCL VAR(&RTNJRNE) TYPE(*CHAR) LEN(165)
DCL VAR(&PRVRCV) TYPE(*CHAR) LEN(10)
DCL VAR(&PRVRLIB) TYPE(*CHAR) LEN(10)
/*--------------------------------------------------------------*/
/* MAIN */
/*--------------------------------------------------------------*/
CHGVAR &RETURN &CONTINUE /* Continue processing receiver*/
/*--------------------------------------------------------------*/
/* At the pre-change exit point, if the journal library is      */
/* MYLIB, change the journal receiver within the exit program.  */
/*--------------------------------------------------------------*/
IF (&FUNCTION *EQ &PRECHG) THEN(DO)
IF (&JRNLIB *EQ 'MYLIB') THEN(DO)
IF (&THRESHOLD *GT 0) THEN(DO)
CRTJRNRCV JRNRCV(&RCVLIB/NEWRCV0000) +
THRESHOLD(&THRESHOLD)
CHGJRN JRN(&JRNLIB/&JRNNAME) +
JRNRCV(&RCVLIB/NEWRCV0000) SEQOPT(&SEQOPT)
ENDDO /* There has been a threshold change */
ELSE (CHGJRN JRN(&JRNLIB/&JRNNAME) JRNRCV(*GEN) +
SEQOPT(&SEQOPT)) /* No threshold change */
CHGVAR &RETURN &STOP /* Stop processing entry */
ENDDO /* &JRNLIB is MYLIB */
ENDDO /* &FUNCTION *EQ &PRECHG */
/*--------------------------------------------------------------*/
/* At the post-change user exit point if the journal library is */
/* ABCLIB, save the just detached journal receiver. */
/*--------------------------------------------------------------*/
ELSE IF (&FUNCTION *EQ &POSTCHG) THEN(DO)
IF COND(&JRNLIB *EQ 'ABCLIB') THEN(DO)
RTVJRNE JRN(&JRNLIB/&JRNNAME) +
RCVRNG(&RCVLIB/&RCVNAME) FROMENT(*FIRST) +
RTNJRNE(&RTNJRNE)
Table 75. Sample journal receiver management exit program
/*----------------------------------------------------------*/
/* Retrieve the journal entry, extract the previous receiver*/
/* name and library to do the save with. */
/*----------------------------------------------------------*/
CHGVAR &PRVRCV (%SUBSTRING(&RTNJRNE 126 10))
CHGVAR &PRVRLIB (%SUBSTRING(&RTNJRNE 136 10))
SAVOBJ OBJ(&PRVRCV) LIB(&PRVRLIB) DEV(TAP02) +
OBJTYPE(*JRNRCV) /* Save detached receiver */
ENDDO /* &JRNLIB is ABCLIB */
ENDDO /* &FUNCTION is &POSTCHG */
/*--------------------------------------------------------------*/
/* Handle processing for the pre-check exit point. */
/*--------------------------------------------------------------*/
ELSE IF (&FUNCTION *EQ &PRECHK) THEN(DO)
IF (&JRNLIB *EQ 'TEAMLIB') THEN( +
SAVOBJ OBJ(&RCVNAME) LIB(&RCVLIB) DEV(TAP01) +
OBJTYPE(*JRNRCV))
ENDDO /* &FUNCTION is &PRECHK */
ENDPGM
Appendix A
Supported object types for system journal
replication
This list identifies IBM i object types and indicates whether MIMIX can replicate these
through the system journal.
Note: Not all object types exist in all releases of IBM i.
Object Type   Description                                   Replicated
*JOBSCD       Job schedule                                  Yes
*JRN          Journal                                       No (7)
*JRNRCV       Journal receiver                              No (7)
*LIB          Library                                       Yes (4)
*LIND         Line description                              Yes (1)
*LOCALE       Locale space                                  Yes
*M36          AS/400 Advanced 36 machine                    No (8)
*M36CFG       AS/400 Advanced 36 machine configuration      No (8)
*MEDDFN       Media definition                              Yes
*MENU         Menu                                          Yes
*MGTCOL       Management collection                         Yes
*MODD         Mode description                              Yes
*MODULE       Module                                        Yes
*MSGF         Message file                                  Yes
*MSGQ         Message queue                                 Yes (4)
*NODGRP       Node group                                    No (9)
*NODL         Node list                                     Yes
*NTBD         NetBIOS description                           Yes
*NWID         Network interface description                 Yes (1)
*NWSD         Network server description                    Yes
*OOPOOL       Persistent pool (for OO objects)              No
*OUTQ         Output queue                                  Yes (4, 5)
*OVL          Overlay                                       Yes
*PAGDFN       Page definition                               Yes
*PAGSEG       Page segment                                  Yes
*PDG          Print descriptor group                        Yes
*PGM          Program                                       Yes (12)
*PNLGRP       Panel group                                   Yes
*PRDAVL       Product availability                          No (6)
*PRDDFN       Product definition                            No (6)
*PRDLOD       Product load                                  No (6)
*PSFCFG       Print Services Facility (PSF) configuration   Yes
*QMFORM       Query management form                         Yes
*QMQRY        Query management query                        Yes
*QRYDFN       Query definition                              Yes
*RCT          Reference code translate table                No (9)
*S36          System/36 machine description                 No (9)
*SBSD         Subsystem description                         Yes
*SCHIDX       Search index                                  Yes
*SOCKET       Local socket                                  No
*SOMOBJ       System Object Model (SOM) object              No
*SPADCT       Spelling aid dictionary                       Yes
*SPLF         Spool file                                    Yes
*SQLPKG       Structured query language package             Yes
*SQLUDT       User-defined SQL type                         Yes
*SRVPGM       Service program                               Yes
*SSND         Session description                           Yes
*STMF         Bytestream file                               Yes (2)
*SVRSTG       Server storage space                          No (8)
*SYMLNK       Symbolic link                                 Yes (2)
*TBL          Table                                         Yes
Appendix B
Copying configurations
This section provides information about how you can copy configuration data between
systems.
• “Supported scenarios” on page 552 identifies the scenarios supported in version 5
of MIMIX.
• “Checklist: copy configuration” on page 553 directs you through the correct order
of steps for copying a configuration and completing the configuration.
• “Copying configuration procedure” on page 558 documents how to use the Copy
Configuration Data (CPYCFGDTA) command.
Supported scenarios
The Copy Configuration Data (CPYCFGDTA) command supports copying
configuration data from one library to another library on the same system. After MIMIX
is installed, you can use the CPYCFGDTA command.
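For example, to copy configuration data from an existing installation library into the current one, run the command against the old library. The library name is illustrative, and the parameter name shown is an assumption; prompt the command (F4) to verify:

```cl
/* Copy configuration data from the MIMIXOLD installation     */
CPYCFGDTA  FROMLIB(MIMIXOLD)
```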
The supported scenarios are as follows:
From To
Checklist: copy configuration
Use this checklist when you have installed MIMIX in a new library and you want to
copy an existing configuration into the new library.
To configure MIMIX with configuration information copied from one or more existing
product libraries, do the following:
1. Review “Supported scenarios” on page 552.
2. Use the procedure “Copying configuration procedure” on page 558 to copy the
configuration information from one or more existing libraries.
3. Verify that the system definitions created by the CPYCFGDTA command have the
correct message queues, output queues, and job descriptions. Be sure to check
system definitions for the management system and all of the network systems.
4. Verify that the transfer definitions created have the correct three-part name and
that the values specified for each transfer protocol are correct. For *TCP, verify
the port number. For *SNA, verify that the SNA mode matches what is defined in
your SNA configuration.
Note: One of the transfer definitions should be named PRIMARY if you intend to
create additional data group definitions or system definitions that will use
the default value PRIMARY for the Primary transfer definition PRITFRDFN
parameter.
5. Verify that the journal definitions created have the information you want for the
journal receiver prefix name, auxiliary storage pool, and journal receiver change
management and delete management. The default journal receiver prefix for the
user journal is generated; for the system journal, the default journal receiver prefix
is AUDRCV. If you want to use a prefix other than these defaults, you will need to
modify the journal definition using topic “Changing a journal definition” on
page 217.
6. If you change the names of any of the system, transfer, or journal definitions
created by the copy configuration command, ensure that you also update that
name in other locations within the configuration.
If you change this name Also change the name in this location
7. Verify the data group definitions created have the correct job descriptions. Verify
that the values of parameters for job descriptions are what you want to use.
MIMIX provides default job descriptions that are tailored for their specific tasks.
Note: You may have multiple data groups created that you no longer need.
Consider whether or not you can combine information from multiple data
groups into one data group. For example, it may be simpler to have both
database files and objects for an application be controlled by one data
group.
8. Verify that the options which control data group file entries are set appropriately.
a. For data group definitions, ensure that the values for file entry options
(FEOPT) are what you want as defaults for the data group.
b. Check the file entry options specified in each data group file entry. Any file
entry options (FEOPT) specified in a data group file entry will override the
default FEOPT values specified in the data group definition. You may need to
modify individual data group file entries.
9. Check the data group entries for each data group. Ensure that all of the files and
objects that you need to replicate are represented by entries for the data group.
Be certain that you have checked the data group entries for your critical files and
objects. Use the procedures in the Using MIMIX book to verify your configuration.
10. Check how the apply sessions are mapped for data group file entries. You may
need to adjust the apply sessions.
11. Use Table 78 to create entries for any additional database files or objects that you
need to add to the data group.
Table 78. How to configure data group entries for the preferred configuration.

Library-based objects:
1. Create object entries using “Creating data group object entries” on page 267.
   See also “Identifying library-based objects for replication” on page 100.
2. After creating object entries, load file entries for LF and PF (source and data)
   *FILE objects using “Loading file entries from a data group’s object entries” on
   page 273. See also “Identifying logical and physical files for replication” on
   page 105.
   Note: If you cannot use MIMIX Dynamic Apply for logical files or PF data files,
   you should still create file entries for PF source files to ensure that legacy
   cooperative processing can be used.
3. After creating object entries, load object tracking entries for *DTAARA and
   *DTAQ objects that are journaled to a user journal. Use “Loading object tracking
   entries” on page 285. See also “Identifying data areas and data queues for
   replication” on page 112.

IFS objects:
1. Create IFS entries using “Creating data group IFS entries” on page 282. See
   also “Identifying IFS objects for replication” on page 118.
2. After creating IFS entries, load IFS tracking entries for IFS objects that are
   journaled to a user journal. Use “Loading IFS tracking entries” on page 284.

DLOs:
Create DLO entries using “Creating data group DLO entries” on page 287. See also
“Identifying DLOs for replication” on page 124.
12. Use the #DGFE audit to confirm and automatically correct any problems found in
file entries associated with data group object entries. Do the following:
a. Type WRKAUD RULE(#DGFE) and press Enter.
b. Next to the data group you want to confirm, type 9 (Run rule) and press Enter.
c. The results are placed in an outfile. For additional information, see “Interpreting
results for configuration data - #DGFE audit” on page 580.
13. If you anticipate a delay between configuring and starting the data group and the
data group contains object information, you should set object auditing to ensure
that any transactions that occur during the delay will be replicated. Use the
procedure “Setting data group auditing values manually” on page 297.
14. Verify that system-level communications are configured correctly.
a. If you are using SNA as a transfer protocol, verify that the MIMIX mode exists
and that the communications entries are added to the MIMIXSBS subsystem.
b. If you are using TCP as a transfer protocol, verify that the MIMIX TCP server is
started on each system (on each "side" of the transfer definition). You can use
the WRKACTJOB command for this. Look for a job under the MIMIXSBS
subsystem with a function of LV-SERVER.
c. Use the Verify Communications Link (VFYCMNLNK) command to ensure that
a MIMIX installation on one system can communicate with a MIMIX installation
on another system. Refer to topic “Verifying the communications link for a data
group” on page 195.
15. Ensure that there are no users on the system that will be the source for replication
for the rest of this procedure. Do not allow users onto the source system until you
have successfully completed the last step of this procedure.
16. Start journaling using the following procedures as needed for your configuration.
• For user journal replication, use “Journaling for physical files” on page 326 to
start journaling on both source and target systems
• For IFS objects configured for advanced journaling, use “Journaling for IFS
objects” on page 330
• For data areas or data queues configured for advanced journaling, use
“Journaling for data areas and data queues” on page 334
17. Synchronize the database files and objects on the systems between which
replication occurs. Topic “Performing the initial synchronization” on page 483
includes instructions for how to establish a synchronization point and identifies the
options available for synchronizing.
18. Start the system managers using topic “Starting the system and journal
managers” on page 296.
19. Clear pending entries when you start the data groups. Use topic “Starting
Selected Data Group Processes” in the Using MIMIX book.
Copying configuration procedure
Appendix C
Configuring Intra communications
The MIMIX set of products supports a unique configuration called Intra. Intra is a
special configuration that allows the MIMIX products to function fully within a single-
system environment. Intra support replicates database and object changes to other
libraries on the same system by using system facilities that allow for communications
to be routed back to the same system. This provides an excellent way to have a test
environment on a single machine that is similar to a multiple-system configuration.
The Intra environment can also be used to perform backups while the system remains
active.
In an Intra configuration, the product is installed into two libraries on the same system
and configured in a special way. An Intra configuration uses these libraries to
replicate data to additional disk storage on the same system. The second library in
effect becomes a "backup" library.
By using an Intra configuration you can reduce or eliminate your downtime for routine
operations such as performing daily and weekly backups. When replicating changes
to another library, you can suspend the application of the replicated changes. This
enables you to concurrently back up the copied library to tape while your application
remains active. When the backup completes, you can resume operations that apply
replicated changes to the "backup" library.
An Intra configuration enables you to have a "live" copy of data or objects that can be
used to offload queries and report generations. You can also use an Intra
configuration as a test environment prior to installing MIMIX on another system or
connecting your applications to another System i5.
Because both libraries exist on the same system, an Intra configuration does not
provide protection from disaster.
Database replication within an Intra configuration requires that the source and target
files either have different names or reside in different libraries. Similarly, objects
cannot be replicated to the same named object in the same named library, folders, or
directory.
Note: Newly created data groups use remote journaling as the default configuration.
Remote journaling is not compatible with intra communications, so you must
use source send configuration when configuring for intra communications.
This section includes the following procedures:
• “Manually configuring Intra using SNA” on page 559
• “Manually configuring Intra using TCP” on page 561
Manually configuring Intra using SNA
When you configure the communications necessary for Intra, consider the default
product library (MIMIX) to be the local system and the second product library (in this
example, MIMIXI) to be the remote system.
If you need to manually configure SNA communications for an Intra environment, do
the following:
1. Create the system definitions for the product libraries used for Intra as follows:
a. For the MIMIX library (local system), use the local location name in the
following command:
CRTSYSDFN SYSDFN(local-location-name) TYPE(*MGT)
TEXT('Manual creation')
b. For the MIMIXI library (remote system), use the following command:
CRTSYSDFN SYSDFN(INTRA) TYPE(*NET) TEXT('Manual creation')
2. Create the transfer definition between the two product libraries with the following
command:
CRTTFRDFN TFRDFN(PRIMARY INTRA local-location-name)
PROTOCOL(*SNA) LOCNAME1(INTRA1) LOCNAME2(INTRA2)
NETID1(*LOC) TEXT('Manual creation')
3. Create the MIMIX mode description using the following command:
CRTMODD MODD(MIMIX) MAXSSN(100) MAXCNV(100) LCLCTLSSN(12)
TEXT('MIMIX INTRA MODE DESCRIPTION - Manual creation.')
4. Create a controller description for MIMIX Intra using the following command:
CRTCTLAPPC CTLD(MIMIXINTRA) LINKTYPE(*LOCAL) TEXT('MIMIX
INTRA - Manual creation.')
5. Create a local device description for MIMIX using the following command:
CRTDEVAPPC DEVD(MIMIX) RMTLOCNAME(INTRA1) LCLLOCNAME(INTRA2)
CTL(MIMIXINTRA) MODE(MIMIX) APPN(*NO) SECURELOC(*YES)
TEXT('MIMIX INTRA - Manual creation.')
6. Create a remote device description for MIMIX using the following command:
CRTDEVAPPC DEVD(MIMIXI) RMTLOCNAME(INTRA2)
LCLLOCNAME(INTRA1) CTL(MIMIXINTRA) MODE(MIMIX) APPN(*NO)
SECURELOC(*YES) TEXT('MIMIX REMOTE INTRA SUPPORT.')
7. Add a communication entry to the MIMIXSBS subsystem for the local location
using the following command:
ADDCMNE SBSD(MIMIXQGPL/MIMIXSBS) RMTLOCNAME(INTRA2)
JOBD(MIMIXQGPL/MIMIXCMN) DFTUSR(MIMIXOWN) MODE(MIMIX)
8. Add a communication entry to the MIMIXSBS subsystem for the remote location
using the following command:
ADDCMNE SBSD(MIMIXQGPL/MIMIXSBS) RMTLOCNAME(INTRA1)
JOBD(MIMIXQGPL/MIMIXCMN) DFTUSR(MIMIXOWN) MODE(MIMIX)
9. Vary on the controller, local device, and remote device using the following
commands:
VRYCFG CFGOBJ(MIMIXINTRA) CFGTYPE(*CTL) STATUS(*ON)
VRYCFG CFGOBJ(MIMIX) CFGTYPE(*DEV) STATUS(*ON)
VRYCFG CFGOBJ(MIMIXI) CFGTYPE(*DEV) STATUS(*ON)
10. Start the MIMIX system manager in both product libraries using the following
commands:
MIMIX/STRMMXMGR SYSDFN(*INTRA) MGR(*ALL)
MIMIX/STRMMXMGR SYSDFN(*LOCAL) MGR(*JRN)
Note: You still need to configure journal definitions and data group definitions.
Manually configuring Intra using TCP
2. Create the transfer definition between the two product libraries with the following
command. Note that the values for PORT1 and PORT2 must be unique.
MIMIX/CRTTFRDFN TFRDFN(PRIMARY SOURCE INTRA) HOST1(SOURCE)
HOST2(INTRA) PORT1(55501) PORT2(55502)
3. Create auto-start jobs in the MIMIX subsystem for the port associated with each
library so that MIMIX TCP server is started automatically when the subsystem is
started.
a. Within the MIMIX library use the commands:
CRTDUPOBJ OBJ(MIMIXCMN) FROMLIB(MIMIXQGPL) OBJTYPE(*JOBD)
TOLIB(MIMIX) NEWOBJ(PORT55501)
CHGJOBD JOBD(MIMIX/PORT55501) RQSDTA('MIMIX/STRSVR
HOST(SOURCE) PORT(55501) JOBD(MIMIX/PORT55501)')
ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(PORT55501)
JOBD(MIMIX/PORT55501)
b. Within the MIMIXI library use the commands:
CRTDUPOBJ OBJ(MIMIXCMN) FROMLIB(MIMIXQGPL) OBJTYPE(*JOBD)
TOLIB(MIMIXI) NEWOBJ(PORT55502)
CHGJOBD JOBD(MIMIXI/PORT55502) RQSDTA('MIMIXI/STRSVR
HOST(INTRA) PORT(55502) JOBD(MIMIXI/PORT55502)')
ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(PORT55502)
JOBD(MIMIXI/PORT55502)
4. Start the server for the management system (source) by entering the following
command:
MIMIX/STRSVR HOST(SOURCE) PORT(55501) JOBD(MIMIX/PORT55501)
5. Start the server for the network system (Intra) by entering the following command:
MIMIXI/STRSVR HOST(INTRA) PORT(55502) JOBD(MIMIXI/PORT55502)
6. Start the system managers from the management system by entering the
following command:
MIMIX/STRMMXMGR SYSDFN(INTRA) MGR(*ALL) RESET(*YES)
Start the remaining managers normally.
Note: You will still need to configure journal definitions and data group definitions on
the management system.
You may want to add service table entries for ports 55501 and 55502 to ensure that
other applications will not try to use these ports.
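For example, the IBM ADDSRVTBLE command can reserve the ports; the service names below are arbitrary labels chosen for this sketch:

```cl
/* Reserve the MIMIX server ports in the service table        */
ADDSRVTBLE SERVICE('mimix_mimix') PORT(55501) PROTOCOL('tcp') +
             TEXT('MIMIX STRSVR port for library MIMIX')
ADDSRVTBLE SERVICE('mimix_mimixi') PORT(55502) PROTOCOL('tcp') +
             TEXT('MIMIX STRSVR port for library MIMIXI')
```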
MIMIX support for independent ASPs
MIMIX has always supported replication of library-based objects and IFS objects to
and from the system auxiliary storage pool (ASP 1) and basic storage pools (ASPs 2-
32). Now, MIMIX also supports replication of library-based objects and IFS objects,
including journaled IFS objects, data areas, and data queues, located in independent
ASPs (33-255).
The system ASP and basic ASPs are collectively known as SYSBAS. Figure 32
shows that MIMIX supports replication to and from SYSBAS and to and from
independent ASPs. Figure 33 shows that MIMIX also supports replication from
SYSBAS to an independent ASP and from an independent ASP to SYSBAS.
Figure 32. MIMIX supports replication to and from an independent ASP as well as standard
replication to and from SYSBAS (the system ASP and basic ASPs).
Figure 33. MIMIX also supports replication between SYSBAS and an independent ASP.
1. An independent ASP is an iSeries construct introduced by IBM in V5R1 and extended in V5R2 of
i5/OS.
Benefits of independent ASPs
Restrictions: There are several permanent and temporary restrictions that pertain to
replication when an independent ASP is included in the MIMIX configuration. See
“Requirements for replicating from independent ASPs” on page 567 and “Limitations
and restrictions for independent ASP support” on page 567.
User ASPs are additional ASPs defined by the user. A user ASP can either be a
basic ASP or an independent ASP.
One type of user ASP is the basic ASP. Data that resides in a basic ASP is always
accessible whenever the server is running. Basic ASPs are identified as ASPs 2
through 32. Attributes, such as those for spooled files, authorization, and ownership
of an object, stored in a basic ASP reside in the system ASP. When storage for a
basic ASP is filled, the data overflows into the system ASP.
Collectively, the system ASP and the basic ASPs are called SYSBAS.
Another type of user ASP is the independent ASP. Identified by device name and
numbered 33 through 255, an independent ASP can be made available or
unavailable to the server without restarting the system. Unlike basic ASPs, data in an
independent ASP cannot overflow into the system ASP. Independent ASPs are
configured using iSeries Navigator.
1. MIMIX does not support UDFS independent ASPs. UDFS independent ASPs contain only user-defined
file systems and cannot be a member of an ASP group unless they are converted to a primary or
secondary independent ASP.
Auxiliary storage pool concepts at a glance
Before an independent ASP is made available (varied on), all primary and secondary
independent ASPs in the ASP group undergo a process similar to a server restart.
While this processing occurs, the ASP group is in an active state and recovery steps
are performed. The primary independent ASP is synchronized with any secondary
independent ASPs in the ASP group, and journaled objects are synchronized with
their associated journal.
While being varied on, several server jobs are started in the QSYSWRK subsystem to
support the independent ASP. To ensure that their names remain unique on the
server, server jobs that service the independent ASP are given their own job name
when the independent ASP is made available.
Once the independent ASP is made available, it is ready to use. Completion message
CPC2605 (vary on completed for device name) is sent to the history log.
Requirements for replicating from independent ASPs
The following requirements must be met before MIMIX can support your independent
ASP environment:
• License Program 5722-SS1 option 12 (Host Server) must be installed in order for
MIMIX to properly replicate objects in an independent ASP on the source and
target systems.
• Any PTFs for i5/OS that are identified as being required need to be installed on
both the source and target systems. Log in to Support Central and check the
Technical Documents page for a list of i5/OS PTFs that may be required.
• MIMIX product libraries, the LAKEVIEW library, and the MIMIXQGPL library must
be installed into *SYSBAS.
Configuration planning tips for independent ASPs
• MIMIX product libraries, the LAKEVIEW library, and the MIMIXQGPL library must
be installed into SYSBAS. These libraries cannot exist in an independent ASP.
• Any *MSGQ libraries, *JOBD libraries, and *OUTFILE libraries specified on MIMIX
commands must reside in SYSBAS.
• For successful replication, ASP devices in ASP groups that are configured in data
group definitions must be made available (varied on). Objects in independent
ASPs attached to the source system cannot be journaled if the device is not
available. Objects cannot be applied to an independent ASP on the target system
if the device is not available.
• Planned switchovers of data groups that include an ASP group must take place
while the ASP devices on both the source and target systems are available. If the
ASP device for the data group on either the source or target system is unavailable
at the time the planned switchover is attempted, the switchover will not complete.
• To support an unplanned switch (failover), the independent ASP device on the
backup system (which will become the temporary production system) must be
available in order for the failover to complete successfully.
• You must run the Set ASP Group (SETASPGRP) command on the local system
before running the Send Network Object (SNDNETOBJ) command if the object
you are attempting to send to a remote system is located in an independent ASP.
Also be aware of the following temporary restrictions:
• MIMIX does not perform validity checking to determine if the ASP group specified
in the data group definition actually exists on the systems. This may cause error
conditions when running commands.
• Any monitors configured for use with MIMIX must specify the ASP group.
Monitors of type *JRN or *MSGQ that watch for events in an independent ASP
must specify the name of the ASP group where the journal or message queue
exists. This is done with the ASPGRP parameter of the CRTMONOBJ command.
• Information regarding independent ASPs is not provided on the following displays:
Display Data Group File Entry (DSPDGFE), Display Data Group Data Area Entry
(DSPDGDAE), Display Data Group Object Entry (DSPDGOBJE), and Display
Data Group Activity Entry (DSPDGACTE). To determine the independent ASP in
which the object referenced in these displays resides, see the data group
definition.
For object replication of library-based objects through the system journal, you should
configure related objects in SYSBAS and an ASP group to be replicated by the same
data group. Objects in SYSBAS and an ASP group that are not related should be
separated into different data groups. This precaution ensures that the data group will
start and that objects residing in SYSBAS will be replicated when the independent
ASP is not available.
Note: To avoid replicating an object by more than one data group, carefully plan
what generic library names you use when configuring data group object
entries in an environment that includes independent ASPs. Make every
attempt to avoid replicating both SYSBAS data and independent ASP data for
objects within the same data group. See the example in “Configuring library-based
objects when using independent ASPs” on page 569.
For example, data group APP1 defines replication between ASP groups named
WILLOW on each system. Similarly, group APP2 defines replication between ASP
groups named OAK on each system. Both data groups have a generic data group
object entry that includes object XYZ from library names beginning with LIB*. If object
LIBASP/XYZ exists in both independent ASPs and matches the generic data group
object entry defined in each data group, both data groups replicate the corresponding
object. This is considered normal behavior for replication between independent
ASPs, as shown in Figure 35.
However, in this example, if SYSBAS contains an object that matches the generic
data group object entry defined for each data group, the same object is replicated by
both data groups. Figure 35 shows that object LIBBAS/XYZ meets the criteria for
replication by both data groups, which is not desirable.
Figure 35. Object XYZ in library LIBBAS is replicated by both data groups APP1 and APP2
because the data groups contain the same generic data group object entry. As a result, this
presents a problem if you need to perform a switch.
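The overlap in this example can be reproduced with ordinary generic-name matching. In the sketch below, Python's fnmatch stands in for MIMIX generic matching (an assumption for illustration only); the data group names and the LIB*/XYZ entry come from the example above:

```python
from fnmatch import fnmatch

# Both data groups define the same generic object entry: object XYZ
# in any library whose name begins with LIB.
data_groups = {"APP1": ("LIB*", "XYZ"), "APP2": ("LIB*", "XYZ")}

def replicated_by(library: str, obj: str) -> list:
    """Return the data groups whose generic entry matches library/object."""
    return [dg for dg, (lib_pat, obj_name) in data_groups.items()
            if fnmatch(library, lib_pat) and obj == obj_name]

# LIBASP/XYZ exists once in each independent ASP, so one match per data
# group is normal.  LIBBAS/XYZ in SYSBAS, however, matches both entries:
print(replicated_by("LIBBAS", "XYZ"))   # ['APP1', 'APP2']
```

The sketch shows why a SYSBAS object that satisfies both generic entries is picked up twice, which is the situation Figure 35 warns against.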
Running the Set ASP Group (SETASPGRP) command changes the library list. This
can affect the system and user portions of the library list as well as the current library.
When a MIMIX command runs the SETASPGRP command during processing, MIMIX
resets the user portion of the library list and the current library in the library list to their
initial values. The system portion of the library list is not restored to its initial value.
Figure 36, Figure 37, and Figure 38 show how the system portion of the library list is
affected on the Display Library List (DSPLIBL) display when the SETASPGRP
command is run.
Figure 36. Before a MIMIX command runs. The library list contains three independent ASP
libraries, including a library in independent ASP WILLOW in the system portion of the library
list.
Figure 37. During the running of a MIMIX command. The independent ASP libraries are
removed from the library list.
Figure 38. After the MIMIX command runs. The library in independent ASP WILLOW in the
system portion of the library list is removed. The libraries in independent ASP OAK in the user
portion of the library list and the current library are restored.
Detecting independent ASP overflow conditions
Interpreting audit results
Audits use commands that compare and synchronize data. The results of the audits
are placed in output files associated with the commands. The following topics provide
supporting information for interpreting data returned in the output files.
• “Interpreting audit results - MIMIX Availability Manager” on page 575 describes
how to check the status of an audit and resolve any problems that occur from
within MIMIX Availability Manager.
• “Interpreting audit results - 5250 emulator” on page 576 describes how to check
the status of an audit and resolve any problems that occur from a 5250 emulator.
• “Checking the job log of an audit” on page 578 describes how to use an audit’s job
log to determine why an audit failed.
• “Interpreting results for configuration data - #DGFE audit” on page 580 describes
the #DGFE audit which verifies the configuration data defined to your
configuration using the Check Data Group File Entries (CHKDGFE) command.
• “Interpreting results of audits for record counts and file data” on page 582
describes the audits and commands that compare file data or record counts.
• “Interpreting results of audits that compare attributes” on page 586 describes the
Compare Attributes commands and their results.
Interpreting audit results - MIMIX Availability Manager
When viewing results of audits, the starting point is the Audit Summary window. You
may also need to view the output file or the job log, which are only available from the
system where the audits ran. In most cases, this is the management system.
Do the following:
1. Ensure that you have selected the management system for the installation you
want from the navigation bar. If you are not certain which system is the
management system, you can select Services to check.
2. From the management system, select Audit Summary from the navigation bar.
3. In the Audit Summary window, check the State and Results columns for the
values shown in Table 79. Audits with potential problems are at the top of the list.
4. For each audit, flyover text for the status icon identifies the appropriate action to
take. Table 79 provides additional information.
State: Rule Failed    Results: (blank)
Check the job log or run the rule for the audit again.
To run the audit, select Run from the action list and click the icon.
To see the job log, refer to “Checking the job log of an audit” on page 578 for more
information.
State: Rule Failed    Results: User journal replication is not active
Confirm that data group processes are active and run the rule for the audit again.
1. Check the data group status. Select Data Groups from the navigation bar. Then
select the data group from the list.
2. In the Summary area, confirm that replication processes are active. If necessary,
select the Start action and click the icon.
3. When processes are active, select Summary from the navigation area.
4. Locate the audit in question. Select the Run action and click the icon.
State: Completed Successfully    Results: Differences detected, recovery disabled
The detected differences must be manually resolved. Do the following:
1. Select Output File from the action list and click the icon.
2. The detected differences are displayed. Look for items with a Difference Indicator
value of *NC or *NE. You can display details about the error or attempt the
possible recovery action available.
3. Select the action you want and click the icon.
To have MIMIX recover differences on subsequent audits, change the value of the
automatic audit recovery policy.
For more information about the values displayed in the audit results, see “Interpreting
results for configuration data - #DGFE audit” on page 580, “Interpreting results of
audits for record counts and file data” on page 582, and “Interpreting results of audits
that compare attributes” on page 586.
Interpreting audit results - 5250 emulator
Audits with potential problems are at the top of the list. Take the action indicated in
Table 80.
Compliance Status    Action
*DIFNORCY The comparison performed by the audit detected differences. No recovery actions were
attempted because automatic audit recovery is disabled.
1. Use option 7 to view notifications for the audit.
2. A subsetted list of the notifications for the audit appears. Use option 8 to view the
results in the output file.
3. Check the Difference Indicator column for values of *NC and *NE. You will need to
manually resolve these differences.
To have MIMIX recover differences on subsequent audits, change the value of the
automatic audit recovery policy.
*NOTRCVD The comparison performed by the audit detected differences. Some of the differences were
not automatically recovered. The remaining detected differences must be manually
resolved.
Note: For audits using the #MBRRCDCNT rule, automatic recovery is not possible. Other audits,
such as #FILDTA, may correct the detected differences.
Do the following:
1. Use option 7 to view notifications for the audit.
2. A subsetted list of the notifications for the audit appears. Use option 8 to view the results
in the output file.
3. Check the Difference Indicator column for values of *NC, *NE, and *RCYFAILED. If
automatic audit recovery is disabled, you may see other values as well. For the
#MBRRCDCNT results, also look for values of: *HLD, *LCK, *NF1, *NF2, *SJ, *UE, and
*UN. For any of these differences, you will need to manually resolve these issues.
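Steps 1 through 3 above amount to scanning the audit output file for problem values in the Difference Indicator column. The following sketch shows the idea; the row layout is a simplification for illustration, not the actual output file record format:

```python
# Difference Indicator values that call for manual resolution.
PROBLEM_VALUES = {"*NC", "*NE", "*RCYFAILED"}
# Additional values that can appear in #MBRRCDCNT results.
MBRRCDCNT_VALUES = {"*HLD", "*LCK", "*NF1", "*NF2", "*SJ", "*UE", "*UN"}

def rows_needing_attention(rows, audit_rule=""):
    """Return rows whose DIFIND indicates a difference to resolve manually."""
    flagged = PROBLEM_VALUES | (MBRRCDCNT_VALUES
                                if audit_rule == "#MBRRCDCNT" else set())
    return [r for r in rows if r["DIFIND"] in flagged]

rows = [{"OBJECT": "LIBA/FILE1", "DIFIND": "*EQ"},
        {"OBJECT": "LIBA/FILE2", "DIFIND": "*NC"},
        {"OBJECT": "LIBA/FILE3", "DIFIND": "*HLD"}]
print(rows_needing_attention(rows))                           # FILE2 only
print(rows_needing_attention(rows, audit_rule="#MBRRCDCNT"))  # FILE2 and FILE3
```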
Checking the job log of an audit
Interpreting results for configuration data - #DGFE audit
Table 82. CHKDGFE - possible error resolution actions
*NOFILE Delete the DGFE, re-create the missing file, or restore the missing file.
*NOMBR Delete the DGFE for the member or add the member to the file.
Interpreting results of audits for record counts and file data
Table 83. Possible values for Compare File Data (CMPFILDTA) output file field Difference
Indicator (DIFIND)
Values Description
*FF The file feature is not supported for comparison. Examples of file
features include materialized query tables.
*REP The file member is being processed for repair by another job
running the Compare File Data (CMPFILDTA) command.
*SJ The source file is not journaled, or is journaled to the wrong journal.
Table 84. Possible values for Compare Record Count (CMPRCDCNT) output file field Dif-
ference Indicator (DIFIND)
Values Description
*FF The file feature is not supported for comparison. Examples of file
features include materialized query tables.
*SJ The source file is not journaled, or is journaled to the wrong journal.
Interpreting results of audits that compare attributes
1. The Compare Attribute commands are: Compare File Attributes (CMPFILA), Compare Object
Attributes (CMPOBJA), Compare IFS Attributes (CMPIFSA), and Compare DLO Attributes
(CMPDLOA).
What attribute differences were detected
The Difference Indicator (DIFIND) field identifies the result of the comparison. Table
85 identifies values that can appear in this field. Not all values may be valid for every
Compare command.
Within MIMIX Availability Manager, the value shown in the Summary List window is a
prioritized summary of the status of all attributes checked for the object. This
summary value is also presented along with other object-identifying information at the
top of the Details window. For each attribute displayed on the Details window, the
results of its comparison are shown.
When the output file is viewed from a 5250 emulator, the summary row is the first
record for each compared object and is indicated by an asterisk (*) in the Compared
Attribute (CMPATR) field. The summary row’s Difference Indicator value is the
prioritized summary of the status of all attributes checked for the object. When
included, detail rows appear below the summary row for the object compared and
show the actual result for the attributes compared.
The Priority column in Table 85 indicates the order of precedence MIMIX uses when
determining the prioritized summary value for the compared object.
Table 85. Possible values for output file field Difference Indicator (DIFIND)
*CMT   An open commit cycle on the source system prevents active processing from
       comparing one or more records in the selected member. (Priority: N/A)
*EC    The values are based on the MIMIX configuration settings. The actual values
       may or may not be equal. (Priority: 5)
*HLD   Indicates that a member is held or an inactive state was detected. (Priority: N/A)
*NA    The values are not compared. The actual values may or may not be equal.
       (Priority: 5)
*NC    The values are not equal based on the MIMIX configuration settings. The actual
       values may or may not be equal. (Priority: 3)
*NS    Indicates that the attribute is not supported on one of the systems. Will not
       cause a global not equal condition. (Priority: 5)
*SYNC  Unable to process selected member. The file is being processed by the
       Synchronize DG File Entry (SYNCDGFE) command. (Priority: N/A)
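The prioritized summary described above can be thought of as taking the highest-precedence Difference Indicator across all compared attributes. In the sketch below, the priorities for *NC (3) and for *EC, *NA, and *NS (5) come from Table 85; the remaining priorities are placeholders assumed for illustration only:

```python
# Illustrative priority map (lower number = higher precedence).  *NC = 3 and
# *EC/*NA/*NS = 5 come from Table 85; the other entries are assumptions.
PRIORITY = {"*NE": 1, "*UN": 2, "*NC": 3, "*UA": 4,
            "*EC": 5, "*NA": 5, "*NS": 5, "*EQ": 9}

def summary_value(attribute_results):
    """Prioritized summary: the highest-precedence DIFIND among the attributes."""
    return min(attribute_results, key=lambda v: PRIORITY.get(v, 99))

# One compared object whose attributes yielded mixed results:
print(summary_value(["*EQ", "*EC", "*NC", "*EQ"]))   # *NC outranks *EC and *EQ
```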
For most attributes, when a detailed row contains blanks in either of the System 1
Indicator or System 2 Indicator fields, MIMIX determines the value of the Difference
Indicator field according to Table 86. For example, if the System 1 Indicator is
*NOTFOUND and the System 2 Indicator is blank (Object found), the resultant
Difference Indicator is *NE.
Table 86. Difference Indicator values that are derived from System Indicator values.

                                            System 1 Indicator
  System 2 Indicator     Object found     *NOTCMPD   *NOTFOUND  *NOTSPT    *RTVFAILED *DAMAGED
                         (blank value)
  Object found           *EQ / *EQ (LOB)  *NA        *NE        *NS        *UN        *NE
  (blank value)          / *NE / *UA /
                         *EC / *NC
  *NOTCMPD               *NA              *NA        *NE        *NS        *UN        *NE
  *NOTFOUND              *NE / *UA        *NE / *UA  *EQ        *NE / *UA  *NE / *UA  *NE
  *NOTSPT                *NS              *NS        *NE        *NS        *UN        *NE
  *RTVFAILED             *UN              *UN        *NE        *UN        *UN        *NE
  *DAMAGED               *NE              *NE        *NE        *NE        *NE        *NE
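Table 86 is effectively a two-dimensional lookup keyed by the two system indicators. The sketch below encodes it; blank indicator cells are written as FOUND, the found-on-both-systems cell is reduced to a COMPARE placeholder because its result depends on the attribute comparison itself, and cells that can also yield *UA are reduced to *NE for simplicity:

```python
INDICATORS = ["FOUND", "*NOTCMPD", "*NOTFOUND", "*NOTSPT",
              "*RTVFAILED", "*DAMAGED"]

# Rows: System 2 Indicator; columns: System 1 Indicator (order as above).
# "COMPARE" marks the found/found cell, where the attribute comparison
# decides among *EQ, *NE, *UA, *EC, *NC, and so on.  Cells shown as *NE in
# the *NOTFOUND row and column can also be *UA for some attributes.
TABLE_86 = {
    "FOUND":      ["COMPARE", "*NA", "*NE", "*NS", "*UN", "*NE"],
    "*NOTCMPD":   ["*NA",     "*NA", "*NE", "*NS", "*UN", "*NE"],
    "*NOTFOUND":  ["*NE",     "*NE", "*EQ", "*NE", "*NE", "*NE"],
    "*NOTSPT":    ["*NS",     "*NS", "*NE", "*NS", "*UN", "*NE"],
    "*RTVFAILED": ["*UN",     "*UN", "*NE", "*UN", "*UN", "*NE"],
    "*DAMAGED":   ["*NE",     "*NE", "*NE", "*NE", "*NE", "*NE"],
}

def difference_indicator(sys1: str, sys2: str) -> str:
    return TABLE_86[sys2][INDICATORS.index(sys1)]

# The example from the text: System 1 is *NOTFOUND, System 2 is blank (found).
print(difference_indicator("*NOTFOUND", "FOUND"))   # *NE
```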
For a small number of specific attributes, the comparison is more complex. The
results returned vary according to parameters specified on the compare request and
MIMIX configuration values. For more information see the following topics:
• “Comparison results for journal status and other journal attributes” on page 608
• “Comparison results for auxiliary storage pool ID (*ASP)” on page 612
• “Comparison results for user profile status (*USRPRFSTS)” on page 615
• “Comparison results for user profile password (*PRFPWDIND)” on page 619
Table 87. Possible values for output file fields SYS1IND and SYS2IND
*NOTCMPD    Attribute not compared. Due to MIMIX configuration settings, this
            attribute cannot be compared. (Priority: N/A 2)
*NOTSPT     Attribute not supported. Not all attributes are supported on all IBM i
            releases. This is the value that is used to indicate an unsupported
            attribute has been specified. (Priority: N/A 2)
*RTVFAILED  Unable to retrieve the attributes of the object. Reason for failure
            may be a lock condition. (Priority: 4)
1. The priority indicates the order of precedence MIMIX uses when setting the system indicator fields in the
summary record.
2. This value is not used in determining the priority of summary level records.
For comparisons which include a data group, the Data Source (DTASRC) field
identifies which system is configured as the source for replication. In MIMIX
Availability Manager Details windows, the direction of the arrow shown in the data
group field identifies the flow of replication.
Attributes compared and expected results - #FILATR, #FILATRMBR
audits
The Compare File Attribute (CMPFILA) command supports comparisons at the file
and member level. Most of the attributes supported are for file-level comparisons. The
#FILATR audit and the #FILATRMBR audit each invoke the CMPFILA command for
the comparison phase of the audit.
Some attributes are common file attributes such as owner, authority, and creation
date. Most of the attributes, however, are file-specific attributes. Examples of file-
specific attributes include triggers, constraints, database relationships, and journaling
information.
The Difference Indicator (DIFIND) returned after comparing file attributes may depend
on whether the file is defined by file entries or object entries. For instance, an attribute
could be equal (*EC) to the database configuration but not equal (*NC) to the object
configuration. See “What attribute differences were detected” on page 587.
Table 88 lists the attributes that can be compared and the value shown in the
Compared Attribute (CMPATR) field in the output file. The Returned Values column
lists the values you can expect in the System1 Value (SYS1VAL) and System 2 Value
(SYS2VAL) columns as a result of running the comparison.
*ALWOPS Allow operations Group which checks attributes *ALWDLT, *ALWRD, *ALWUPD,
*ALWWRT
Table 88. Compare File Attributes (CMPFILA) attributes
*AUT File authorities Group which checks attributes *AUTL, *PGP, *PRVAUTIND,
*PUBAUTIND
*EXPDATE 1  Expiration date for member  Blank for *NONE, or date in CYYMMDD format,
            where C equals the century: value 0 is 19nn and 1 is 20nn.
*EXTENDED  Pre-determined, extended set  Valid only for Comparison level of *FILE,
           this group compares the basic set of attributes (*BASIC) plus an extended
           set of attributes. The following attributes are compared: *ACCPTH, *AUT
           (group), *CCSID, *CST (group), *CURRCDS, *DBR (group), *MAXKEYL,
           *MAXMBRS, *MAXRCDL, *NBRMBR, *OBJATR, *OWNER, *PFSIZE (group),
           *RCDFMT, *REUSEDLT, *SELOMT, *SQLTYP, *TEXT, and *TRIGGER (group).
*JOURNAL Journal attributes Group which checks *JOURNALED, *JRN, *JRNLIB, *JRNIMG,
*JRNOMIT. Results are described in “Comparison results for
journal status and other journal attributes” on page 608.
*PFSIZE File size attributes Group which checks *CURRCDS, *INCRCDS, *MAXINC,
*NBRDLTRCD, *NBRRCDS
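The CYYMMDD century convention described for the *EXPDATE attribute in Table 88 can be decoded mechanically, as in this sketch:

```python
from datetime import date

def from_cyymmdd(value):
    """Decode an IBM i CYYMMDD date: century digit 0 = 19nn, 1 = 20nn.

    A blank value means *NONE (no expiration date) and returns None.
    """
    if not value.strip():
        return None
    century = int(value[0])
    yy, mm, dd = int(value[1:3]), int(value[3:5]), int(value[5:7])
    return date(1900 + 100 * century + yy, mm, dd)

print(from_cyymmdd("0990615"))   # 1999-06-15
print(from_cyymmdd("1240101"))   # 2024-01-01
```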
Attributes compared and expected results - #OBJATR audit
The #OBJATR audit calls the Compare Object Attributes (CMPOBJA) command and
places the results in an output file. Table 89 lists the attributes that can be compared
by the CMPOBJA command and the value shown in the Compared Attribute
(CMPATR) field in the output file. The command supports attributes that are common
among most library-based objects as well as extended attributes which are unique to
specific object types, such as subsystem descriptions, user profiles, and data areas.
The Returned Values column lists the values you can expect in the System1 Value
(SYS1VAL) and System 2 Value (SYS2VAL) columns as a result of running the
compare.
*ATTNPGM 2  Attention key handling program. Valid for user profiles only.
            *SYSVAL, *NONE, *ASSIST, attention program name
Table 89. Compare Object Attributes (CMPOBJA) attributes
*CRTAUT 2     Authority given to users who do not have specific authority to the
              object. Valid for libraries only.
              *SYSVAL, *CHANGE, *ALL, *USE, *EXCLUDE
*CRTOBJAUD 2  Auditing value for objects created in this library. Valid for
              libraries only.
              *SYSVAL, *NONE, *USRPRF, *CHANGE, *ALL
*DTAARAEXT Data area extended Group which checks *DECPOS, *LENGTH, *TYPE, *VALUE
attributes
*EXTENDED  Pre-determined, extended set  Group which compares the basic set of
           attributes (*BASIC) plus an extended set of attributes. The following
           attributes are compared: *AUT, *CRTTSP, *DOMAIN, *INFSTS, *OBJATR,
           *TEXT, and *USRATR.
*INFSTS Information status *OK (No errors occurred), *RTVFAILED (No information
returned - insufficient authority or object is locked),
*DAMAGED (Object is damaged or partially damaged).
*JOBDEXT  Job description extended attributes  Group which checks *DDMCNV, *JOBQ,
          *JOBQLIB, *JOBQPRI, *LIBLIND, *LOGOUTPUT, *OUTQ, *OUTQLIB, *OUTQPRI,
          *PRTDEV
*JOBQEXT  Job queue extended attributes  Group which checks *AUTCHK, *JOBQSBS,
          *JOBQSTS, *OPRCTL
*PRFPWDIND  User profile password indicator  See “Comparison results for user
            profile password (*PRFPWDIND)” on page 619 for details.
*USRPRFEXT  User profile extended attributes  Group which checks *ATTNPGM, *CCSID,
            *CNTRYID, *CRTOBJOWN, *CURLIB, *GID, *GRPAUT, *GRPAUTTYP, *GRPPRF,
            *INLMNU, *INLPGM, *LANGID, *LMTCPB, *MSGQ, *PRFOUTQ, *PWDEXPITV,
            *PWDIND, *SPCAUTIND, *SUPGRPIND, *USRCLS
Attributes compared and expected results - #IFSATR audit
The #IFSATR audit calls the Compare IFS Attributes (CMPIFSA) command and
places the results in an output file. Table 90 lists the attributes that can be compared
by the CMPIFSA command and the value shown in the Compared Attribute
(CMPATR) field in the output file. The Returned Values column lists the values you
can expect in the System1 Value (SYS1VAL) and System 2 Value (SYS2VAL)
columns as a result of running the compare.
Table 90. Compare IFS Attributes (CMPIFSA) attributes
*BASIC  Pre-determined set of basic attributes  Group which checks a pre-determined
        set of attributes. The following set of attributes are compared: *CCSID,
        *DATASIZE, *OBJTYPE, and the group *PCATTR.
Attributes compared and expected results - #DLOATR audit
The #DLOATR audit calls the Compare DLO Attributes (CMPDLOA) command and
places the results in an output file. Table 91 lists the attributes that can be compared
by the CMPDLOA command and the value shown in the Compared Attribute
(CMPATR) field in the output file. The Returned Values column lists the values you
can expect in the System1 Value (SYS1VAL) and System 2 Value (SYS2VAL)
columns as a result of running the compare.
Table 91. Compare DLO Attributes (CMPDLOA) attributes
Comparison results for journal status and other journal attributes
The Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA),
and Compare IFS Attributes (CMPIFSA) commands support comparing the journaling
attributes listed in Table 92 for objects replicated from the user journal. These
commands function similarly when comparing journaling attributes.
When a compare is requested, MIMIX determines the result displayed in the
Differences Indicator field by considering whether the file is journaled, whether the
request includes a data group, and the data group’s configured settings for journaling.
Regardless of which journaling attribute is specified on the command, MIMIX always
checks the journaling status first (*JOURNALED attribute). If the file or object is
journaled on both systems, MIMIX then considers whether the command specified a
data group definition before comparing any other requested attribute.
When specified on the CMPOBJA command, these values apply only to files, data areas,
or data queues. When specified on the CMPFILA command, these values apply only to
PF-DTA and PF38-DTA files.
*JRN 1 Journal. Indicates the name of the current or last journal. If blank, the
object has never been journaled.
*JRNIMG 1 2 Journal Image. Indicates the kinds of images that are written to the
journal receiver for changes to objects.
*JRNLIB 1 Journal Library. Identifies the library that contains the journal. If
blank, the object has never been journaled.
*JRNOMIT 1 Journal Omit. Indicates whether file open and close journal entries
are omitted.
1. When these values are specified on a Compare command, the journal status (*JOURNALED)
attribute is always evaluated first. The result of the journal status comparison determines whether
the command will compare the specified attribute.
2. Although *JRNIMG can be specified on the CMPIFSA command, it is not compared even when the
journal status is as expected. The journal image status is reflected as not supported (*NS) because
IBM i only supports after (*AFTER) images.
Compares that do not specify a data group - When no data group is specified on
the compare request, MIMIX compares the journaled status (*JOURNALED attribute).
Table 93 shows the result displayed in the Differences Indicator field. If the file or
object is not journaled on both systems, the compare ends. If both source and target
systems are journaled, MIMIX then compares any other specified journaling attribute.
Table 93. Difference indicator values for *JOURNALED attribute when no data group is
specified

                           Target journal status 1
    Source journal status  Yes     No      *NOTFOUND
    Yes                    *EQ     *NE     *NE
    No                     *NE     *EQ     *NE
    *NOTFOUND              *NE     *NE     *UN

1. The returned values for journal status found on the Source and Target systems are shown in the
SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the
value of the DTASRC field.
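Table 93 reads as a small matrix keyed by the source and target journal status. A sketch:

```python
STATES = ["Yes", "No", "*NOTFOUND"]

# Rows: source journal status; columns: target journal status (Table 93).
TABLE_93 = {
    "Yes":       ["*EQ", "*NE", "*NE"],
    "No":        ["*NE", "*EQ", "*NE"],
    "*NOTFOUND": ["*NE", "*NE", "*UN"],
}

def journaled_difference(source: str, target: str) -> str:
    """Difference Indicator for *JOURNALED when no data group is specified."""
    return TABLE_93[source][STATES.index(target)]

print(journaled_difference("Yes", "No"))               # *NE - journaled on one side only
print(journaled_difference("*NOTFOUND", "*NOTFOUND"))  # *UN - found on neither system
```

Only when this lookup yields *EQ for both systems journaled does MIMIX go on to compare any other requested journaling attribute.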
Compares that specify a data group - When a data group is specified on the
compare request, MIMIX compares the journaled status (*JOURNALED attribute) to
the configuration values. If both source and target systems are journaled according to
the expected configuration settings, then MIMIX compares any other specified
journaling attribute against the configuration settings.
The Compare commands vary slightly in which configuration settings are checked.
• For CMPFILA requests, if the journaled status is as configured, any other
specified journal attributes are compared. Possible results from comparing the
*JOURNALED attribute are shown in Table 94.
• For CMPOBJA and CMPIFSA requests, if the journaled status is as configured
and the configuration specifies *YES for Cooperate with database (COOPDB),
then any other specified journal attributes are compared. Possible results from
comparing the *JOURNALED attribute are shown in Table 94 and Table 95. If the
configuration specifies COOPDB(*NO), only the journaled status is compared;
possible results are shown in Table 96.
Table 94, Table 95, and Table 96 show results for the *JOURNALED attribute that
can appear in the Difference Indicator field when the compare request specified a
data group and considered the configuration settings.
Table 94 shows results when the configured settings for Journal on target and
Cooperate with database are both *YES.
Table 94. Difference indicator values for *JOURNALED attribute when a data group is
specified and the configuration specifies *YES for JRNTGT and COOPDB

                           Target journal status 1
    Source journal status  Yes     No      *NOTFOUND
    Yes                    *EC     *EC     *NE
    No                     *NC     *NC     *NE
    *NOTFOUND              *NE     *NE     *UN

1. The returned values for journal status found on the Source and Target systems are shown in the
SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the
value of the DTASRC field.
Table 95 shows results when the configured settings are *NO for Journal on target
and *YES for Cooperate with database.
Table 95. Difference indicator values for *JOURNALED attribute when a data group is
specified and the configuration specifies *NO for JRNTGT and *YES for COOPDB

                           Target journal status 1
    Source journal status  Yes     No      *NOTFOUND
    Yes                    *NC     *EC     *NE
    No                     *NC     *NC     *NE
    *NOTFOUND              *NE     *NE     *UN

1. The returned values for journal status found on the Source and Target systems are shown in the
SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the
value of the DTASRC field.
Table 96 shows results when the configured setting for Cooperate with database is *NO. In this scenario, you may want to investigate further: even though the Difference Indicator shows values marked as configured (*EC), the object may not be journaled on one or both systems. The actual journal status values are returned in the System 1 Value (SYS1VAL) and System 2 Value (SYS2VAL) fields.
Table 96. Difference Indicator values for *JOURNALED attribute when a data group is specified and the configuration specifies *NO for COOPDB

                                 Target journal status (1)
  Source journal status (1)      Yes      No       *NOTFOUND
  Yes                            *EC      *EC      *NE
  No                             *EC      *EC      *NE
  *NOTFOUND                      *NE      *NE      *UN

  1. The returned values for journal status found on the Source and Target
     systems are shown in the SYS1VAL and SYS2VAL fields. Which system is
     source and which is target is determined by the value of the DTASRC field.
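The decision logic behind Tables 94 through 96 can be summarized programmatically. The following is a minimal Python sketch (illustrative only, not MIMIX code); the function name and status tokens are assumptions made for this example, and the lookup values are transcribed directly from the printed tables.

```python
# Illustrative sketch only (not MIMIX code): Tables 94-96 for the *JOURNALED
# attribute, encoded as {(source status, target status): indicator} lookups.
STATUSES = ("YES", "NO", "*NOTFOUND")

def _table(rows):
    """Build a lookup dict from three rows of three indicator values."""
    return {(s, t): rows[i][j]
            for i, s in enumerate(STATUSES)
            for j, t in enumerate(STATUSES)}

T94 = _table([("*EC", "*EC", "*NE"),   # JRNTGT(*YES), COOPDB(*YES)
              ("*NC", "*NC", "*NE"),
              ("*NE", "*NE", "*UN")])
T95 = _table([("*NC", "*EC", "*NE"),   # JRNTGT(*NO), COOPDB(*YES)
              ("*NC", "*NC", "*NE"),
              ("*NE", "*NE", "*UN")])
T96 = _table([("*EC", "*EC", "*NE"),   # COOPDB(*NO): only journaled status compared
              ("*EC", "*EC", "*NE"),
              ("*NE", "*NE", "*UN")])

def journaled_indicator(jrntgt, coopdb, source, target):
    """Expected Difference Indicator for the *JOURNALED attribute."""
    if coopdb == "*NO":
        return T96[(source, target)]
    return (T94 if jrntgt == "*YES" else T95)[(source, target)]
```

For example, with JRNTGT(*NO) and COOPDB(*YES), an object journaled on both systems yields *NC, matching the first cell of Table 95.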
Comparison results for auxiliary storage pool ID (*ASP)
The Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA),
Compare IFS Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA)
commands support comparing the auxiliary storage pool (*ASP) attribute for objects
replicated from the user journal. These commands function similarly.
When a compare is requested, MIMIX determines the result displayed in the
Differences Indicator field by considering whether a data group was specified on the
compare request.
Compares that do not specify a data group - When no data group is specified on the compare request, MIMIX compares the *ASP attribute for all files or objects that match the selection criteria specified in the request. Table 97 shows the possible results in the Difference Indicator field.
Table 97. Difference Indicator values for the *ASP attribute when no data group is specified

                            Target ASP value (1)
  Source ASP value (1)      ASP1     ASP2     *NOTFOUND
  ASP1                      *EQ      *NE      *NE
  ASP2                      *NE      *EQ      *NE
  *NOTFOUND                 *NE      *NE      *EQ

  1. The returned values for the *ASP attribute on the Source and Target
     systems are shown in the SYS1VAL and SYS2VAL fields. Which system is
     source and which is target is determined by the value of the DTASRC field.
Compares that specify a data group - When a data group is specified on a CMPFILA, CMPDLOA, or CMPIFSA compare request, MIMIX does not compare the *ASP attribute. Likewise, when a data group is specified on a CMPOBJA request for any object type other than libraries (*LIB), MIMIX does not compare the *ASP attribute. Table 98 shows the possible results in the Difference Indicator field.
Table 98. Difference Indicator values for non-library objects when the request specified a data group

                            Target ASP value (1)
  Source ASP value (1)      ASP1       ASP2       *NOTFOUND
  ASP1                      *NOTCMPD   *NOTCMPD   *NE
  ASP2                      *NOTCMPD   *NOTCMPD   *NE
  *NOTFOUND                 *NE        *NE        *EQ

  1. The returned values for the *ASP attribute on the Source and Target
     systems are shown in the SYS1VAL and SYS2VAL fields. Which system is
     source and which is target is determined by the value of the DTASRC field.
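For non-library objects, Tables 97 and 98 reduce to a small rule: existence differences are always reported, the ASP value is compared only when no data group is specified, and a data group request returns *NOTCMPD. A minimal Python sketch (illustrative only, not MIMIX code; names are assumptions for this example):

```python
# Illustrative sketch only (not MIMIX code): *ASP Difference Indicator logic
# for non-library objects, per Table 97 (no data group) and Table 98 (data group).
NOTFOUND = "*NOTFOUND"

def asp_indicator(source_asp, target_asp, data_group=False):
    """Expected Difference Indicator for the *ASP attribute (non-library objects)."""
    if NOTFOUND in (source_asp, target_asp):
        # Existence differences are reported the same way in both tables.
        return "*EQ" if source_asp == target_asp else "*NE"
    if data_group:
        return "*NOTCMPD"   # Table 98: the ASP value itself is not compared
    return "*EQ" if source_asp == target_asp else "*NE"   # Table 97
```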
For CMPOBJA requests which specify a data group and an object type of *LIB, MIMIX considers configuration settings for the library. Values for the System 1 library ASP number (LIB1ASP), System 1 library ASP device (LIB1ASPD), System 2 library ASP number (LIB2ASP), and System 2 library ASP device (LIB2ASPD) are retrieved from the data group object entry and used in the comparison. Table 99, Table 100, and Table 101 show the possible results in the Difference Indicator field.
Note: For Table 99, Table 100, and Table 101, the results are the same even if the system roles are switched.
Table 99 shows the expected values for the ASP attribute when the request specifies a data group, the configuration specifies *SRCLIB for the System 1 library ASP number, and the data source is system 2.
Table 99. Difference Indicator values for libraries when a data group is specified and configured values are LIB1ASP(*SRCLIB) and DTASRC(*SYS2)

                            Target ASP value (1)
  Source ASP value (1)      ASP1     ASP2     *NOTFOUND
  ASP1                      *EC      *NC      *NE
  ASP2                      *NC      *EC      *NE
  *NOTFOUND                 *NE      *NE      *EQ

  1. The returned values for the *ASP attribute on the Source and Target
     systems are shown in the SYS1VAL and SYS2VAL fields. Which system is
     source and which is target is determined by the value of the DTASRC field.
Table 100 shows the expected values for the ASP attribute when the request specifies a data group, the configuration specifies 1 for the System 1 library ASP number, and the data source is system 2.
Table 100. Difference Indicator values for libraries when a data group is specified and configured values are LIB1ASP(1) and DTASRC(*SYS2)

                            Target ASP value (1)
  Source ASP value (1)      1        2        *NOTFOUND
  1                         *EC      *NC      *NE
  2                         *EC      *NC      *NE
  *NOTFOUND                 *NE      *NE      *EQ

  1. The returned values for the *ASP attribute on the Source and Target
     systems are shown in the SYS1VAL and SYS2VAL fields. Which system is
     source and which is target is determined by the value of the DTASRC field.
Table 101 shows the expected values for the ASP attribute when the request specifies a data group, the configuration specifies *ASPDEV for the System 1 library ASP number, DEVNAME is specified for the System 1 library ASP device, and the data source is system 2.
Table 101. Difference Indicator values for libraries when a data group is specified and configured values are LIB1ASP(*ASPDEV), LIB1ASPD(DEVNAME), and DTASRC(*SYS2)

                            Target ASP value (1)
  Source ASP value (1)      DEVNAME  2        *NOTFOUND
  1                         *EC      *NC      *NE
  2                         *EC      *NC      *NE
  *NOTFOUND                 *NE      *NE      *EQ

  1. The returned values for the *ASP attribute on the Source and Target
     systems are shown in the SYS1VAL and SYS2VAL fields. Which system is
     source and which is target is determined by the value of the DTASRC field.
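Tables 99 through 101 share one underlying check: the target library's ASP is compared against the source library's ASP when the configured value is *SRCLIB, or against the configured value itself (an ASP number or an ASP device name) otherwise. A minimal Python sketch (illustrative only, not MIMIX code; names are assumptions for this example):

```python
# Illustrative sketch only (not MIMIX code): library (*LIB) *ASP results per
# Tables 99-101. 'configured' is "*SRCLIB" or the configured target-library ASP
# value (a number such as "1", or an ASP device name such as "DEVNAME").
NOTFOUND = "*NOTFOUND"

def lib_asp_indicator(configured, source_asp, target_asp):
    """Expected Difference Indicator for a library's *ASP attribute."""
    if NOTFOUND in (source_asp, target_asp):
        return "*EQ" if source_asp == target_asp else "*NE"
    if configured == "*SRCLIB":
        # Table 99: the target library ASP must match the source library ASP.
        return "*EC" if target_asp == source_asp else "*NC"
    # Tables 100 and 101: the target library ASP must match the configured value.
    return "*EC" if target_asp == configured else "*NC"
```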
Comparison results for user profile status (*USRPRFSTS)
When comparing the attribute *USRPRFSTS (user profile status) with the Compare
Object Attributes (CMPOBJA) command, MIMIX determines the result displayed in
the Differences Indicator field by considering the following:
• The status values of the object on both the source and target systems
• Configured values for replicating user profile status, at the data group and object
entry levels
• The value of the Data group definition (DGDFN) parameter specified on the
CMPOBJA command.
Compares that do not specify a data group - When the CMPOBJA command does
not specify a data group, MIMIX compares the status values between source and
target systems. The result is displayed in the Differences Indicator field, according to
Table 85 in “Interpreting results of audits that compare attributes” on page 586.
Compares that specify a data group - When the CMPOBJA command specifies a
data group, MIMIX checks the configuration settings and the values on one or both
systems. (For additional information, see “How configured user profile status is
determined” on page 616.)
When the configured value is *SRC, the CMPOBJA command compares the values
on both systems. The user profile status on the target system must be the same as
the status on the source system, otherwise an error condition is reported. Table 102
shows the possible values.
Table 102. Difference Indicator values when configured user profile status is *SRC

                                   Target user profile status
  Source user profile status      *ENABLED   *DISABLED   *NOTFOUND
  *ENABLED                        *EC        *NC         *NE
  *DISABLED                       *NC        *EC         *NE
  *NOTFOUND                       *NE        *NE         *UN
Table 103 and Table 104 show the possible values when configured values are *ENABLED or *DISABLED, respectively.
Table 103. Difference Indicator values when configured user profile status is *ENABLED

                                   Target user profile status
  Source user profile status      *ENABLED   *DISABLED   *NOTFOUND
  *ENABLED                        *EC        *NC         *NE
  *DISABLED                       *EC        *NC         *NE
  *NOTFOUND                       *NE        *NE         *UN

Table 104. Difference Indicator values when configured user profile status is *DISABLED

                                   Target user profile status
  Source user profile status      *ENABLED   *DISABLED   *NOTFOUND
  *ENABLED                        *NC        *EC         *NE
  *DISABLED                       *NC        *EC         *NE
  *NOTFOUND                       *NE        *NE         *UN
When the configured value is *TGT, the CMPOBJA command does not compare the
values because the result is indeterminate. Any differences in user profile status
between systems are not reported. Table 105 shows possible values.
Table 105. Difference Indicator values when configured user profile status is *TGT

                                   Target user profile status
  Source user profile status      *ENABLED   *DISABLED   *NOTFOUND
  *ENABLED                        *NA        *NA         *NE
  *DISABLED                       *NA        *NA         *NE
  *NOTFOUND                       *NE        *NE         *UN
If the configured user profile status is not specified in an object entry, the default is to use the value *SRC from the data group definition. Table 106 shows the possible values at both the data group and object entry levels.
Table 106. Configured user profile status values

  *DGDFT       Only available for data group object entries, this value
               indicates that the value specified in the data group definition
               is used for the user profile status. This is the default value
               for object entries.
  *DISABLE (1) The status of the user profile is set to *DISABLED when the
               user profile is created or changed on the target system.
  *ENABLE (1)  The status of the user profile is set to *ENABLED when the
               user profile is created or changed on the target system.
  *SRC         This is the default value in the data group definition. The
               status of the user profile on the source system is always used
               when the user profile is created or changed on the target
               system.
Comparison results for user profile password (*PRFPWDIND)
When comparing the attribute *PRFPWDIND (user profile password indicator) with
the Compare Object Attributes (CMPOBJA) command, MIMIX assumes that the user
profile names are the same on both systems. User profile passwords are only
compared if the user profile name is the same on both systems and the user profile of
the local system is enabled and has a defined password.
If the local or remote user profile has a password of *NONE, or if the local user profile
is disabled or expired, the user profile password is not compared. The System
Indicator fields will indicate that the attribute was not compared (*NOTCMPD). The
Difference Indicator field will also return a value of not compared (*NA).
The CMPOBJA command does not support name mapping while comparing the
*PRFPWDIND attribute. If the user profile names are different, or if you attempt name
mapping, the System Indicator fields will indicate that comparing the attribute is not
supported (*NOTSPT). The Difference Indicator field will also return a value of not
supported (*NS).
The following tables identify the expected results when user profile password is
compared. Note that the local system is the system on which the command is being
run, and the remote system is defined as System 2.
Table 107 shows the possible Difference Indicator values when the user profile
passwords are the same on the local and remote systems and are not defined as
*NONE.
Table 107. Difference Indicator values when user profile passwords are the same, but not *NONE

                                  Remote system user profile
  Local system user profile      *ENABLED   *DISABLED   Expired   Not Found
  *ENABLED                       *EQ        *EQ         *EQ       *NE
  *DISABLED                      *NA        *NA         *NA       *NE
  Expired                        *NA        *NA         *NA       *NE
  Not Found                      *NE        *NE         *NE       *EQ
Table 108 shows the possible Difference Indicator values when the user profile
passwords are different on the local and remote systems and are not defined as
*NONE.
Table 108. Difference Indicator values when user profile passwords are different, but not *NONE

                                  Remote system user profile
  Local system user profile      *ENABLED   *DISABLED   Expired   Not Found
  *ENABLED                       *NE        *NE         *NE       *NE
  *DISABLED                      *NA        *NA         *NA       *NE
  Expired                        *NA        *NA         *NA       *NE
  Not Found                      *NE        *NE         *NE       *EQ
Table 109 shows the possible Difference Indicator values when the user profile
passwords are defined as *NONE on the local and remote systems.
Table 109. Difference Indicator values when user profile passwords are *NONE

                                  Remote system user profile
  Local system user profile      *ENABLED   *DISABLED   Expired   Not Found
  *ENABLED                       *NA        *NA         *NA       *NE
  *DISABLED                      *NA        *NA         *NA       *NE
  Expired                        *NA        *NA         *NA       *NE
  Not Found                      *NE        *NE         *NE       *EQ
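The preconditions above and Tables 107 through 109 can be summarized as: existence differences always report *NE (or *EQ when neither profile exists); the password is compared only when the local profile is enabled, not expired, and neither password is *NONE; otherwise the result is *NA. A minimal Python sketch (illustrative only, not MIMIX code; the state tokens and parameter names are assumptions for this example):

```python
# Illustrative sketch only (not MIMIX code): *PRFPWDIND results per Tables
# 107-109. Local/remote states: "*ENABLED", "*DISABLED", "EXPIRED", "*NOTFOUND"
# (tokens chosen for this example).
NOTFOUND = "*NOTFOUND"

def prfpwdind_indicator(local_state, remote_state,
                        local_pwd_none=False, remote_pwd_none=False,
                        passwords_match=True):
    """Expected Difference Indicator for the user profile password indicator."""
    if NOTFOUND in (local_state, remote_state):
        return "*EQ" if local_state == remote_state else "*NE"
    # Passwords are compared only when the local profile is enabled (not
    # disabled or expired) and neither password is *NONE.
    if local_state != "*ENABLED" or local_pwd_none or remote_pwd_none:
        return "*NA"
    return "*EQ" if passwords_match else "*NE"
```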
Appendix F
Outfile formats
This section contains the output file (outfile) formats for those MIMIX commands that provide outfile support.
Lakeview Technology provides model database files that define the record format for each outfile. These database files can be found in the product installation library.
Public authority to the created outfile is the same as the create authority of the library
in which the file is created. Use the Display Library Description (DSPLIBD) command
to see the create authority of the library.
You can use the Run Query (RUNQRY) command to display outfiles with column
headings and data type formatting if you have the licensed program 5722QU1, Query,
installed.
Otherwise, you can use the Display File Field Description (DSPFFD) command to see
detailed outfile information, such as the field length, type, starting position, and
number of bytes.
Work panels with outfile support
MCAG outfile (WRKAG command)
MCDTACRGE outfile (WRKDTARGE command)
MCNODE outfile (WRKNODE command)
MXCDGFE outfile (CHKDGFE command)
MXCMPDLOA outfile (CMPDLOA command)
MXCMPFILA outfile (CMPFILA command)
MXCMPFILD outfile (CMPFILDTA command)
MXCMPFILR outfile (CMPFILDTA command, RRN report)
Table 118. Compare File Data (CMPFILDTA) relative record number (RRN) output file (MXCMPFILR)

  Field      Description              Type, length   Valid values                                                      Column headings
  SYSTEM 1   System 1                 CHAR(8)        User-defined system name; *local system name if no DG specified  SYSTEM 1
  SYSTEM 2   System 2                 CHAR(8)        User-defined system name; *local system name if no DG specified  SYSTEM 2
  SYS1OBJ    System 1 object name     CHAR(10)       User-defined name                                                 SYSTEM 1 OBJECT
  SYS1LIB    System 1 library name    CHAR(10)       User-defined name                                                 SYSTEM 1 LIBRARY
  MBR        Member name              CHAR(10)       User-defined name                                                 MEMBER
  SYS2OBJ    System 2 object name     CHAR(10)       User-defined name                                                 SYSTEM 2 OBJECT
  SYS2LIB    System 2 library name    CHAR(10)       User-defined name                                                 SYSTEM 2 LIBRARY
  RRN        Relative record number   DECIMAL(10)    Number                                                            RRN
  ASPDEV1    System 1 ASP device      CHAR(10)       *NONE, user-defined name                                          SYSTEM 1 ASP DEVICE
  ASPDEV2    System 2 ASP device      CHAR(10)       *NONE, user-defined name                                          SYSTEM 2 ASP DEVICE
MXCMPRCDC outfile (CMPRCDCNT command)
MXCMPIFSA outfile (CMPIFSA command)
MXCMPOBJA outfile (CMPOBJA command)
MXDGACT outfile (WRKDGACT command)
MXDGACTE outfile (WRKDGACTE command)
  Field        Description                           Type, length              Valid values                          Column headings
  TGTOBJLIB    Target system object library name     CHAR(10)                  User-defined name, BLANK              TARGET OBJECT LIBRARY
  TGTOBJ       Target system object name             CHAR(10)                  User-defined name, BLANK              TARGET OBJECT
  TGTOBJMBR    Target system object member name      CHAR(10)                  User-defined name, BLANK              TARGET MEMBER
  TGTDLO       Target system DLO name                CHAR(12)                  User-defined name, BLANK              TARGET DLO
  TGTFLR       Target system object folder name      CHAR(63)                  User-defined name, BLANK              TARGET FOLDER
  TGTSPLFJOB   Target system spooled file job name   CHAR(26)                  Three-part spooled file name, BLANK   TARGET SPLF JOB
  TGTSPLF      Target system spooled file name       CHAR(10)                  User-defined name, BLANK              TARGET SPLF
  TGTSPLFNBR   Target system spooled file job number PACKED(7 0)               1-999999, BLANK                       TARGET SPLF NUMBER
  TGTOUTQ      Target system output queue            CHAR(10)                  User-defined name, BLANK              TARGET OUTQ
  TGTOUTQLIB   Target system output queue library    CHAR(10)                  User-defined name, BLANK              TARGET OUTQ LIBRARY
  TGTIFS       Target system IFS name                CHAR(1024) VARLEN(100)    User-defined name, BLANK              TARGET IFS OBJECT
MXDGDAE outfile (WRKDGDAE command)
MXDGDFN outfile (WRKDGDFN command)
MXDGDLOE outfile (WRKDGDLOE command)
MXDGFE outfile (WRKDGFE command)
MXDGIFSE outfile (WRKDGIFSE command)
MXDGSTS outfile (WRKDG command)
MXDGOBJE outfile (WRKDGOBJE command)
MXDGTSP outfile (WRKDGTSP command)
MXJRNDFN outfile (WRKJRNDFN command)
MXRJLNK outfile (WRKRJLNK command)
MXSYSDFN outfile (WRKSYSDFN command)
MXTFRDFN outfile (WRKTFRDFN command)
MZPRCDFN outfile (WRKPRCDFN command)
MZPRCE outfile (WRKPRCE command)
MXDGIFSTE outfile (WRKDGIFSTE command)
MXDGOBJTE outfile (WRKDGOBJTE command)
Notices
© Copyright 1999, 2008, Lakeview Technology Inc., All rights reserved. This document may not be copied,
reproduced, translated, or transmitted in whole or part, except under license of Lakeview Technology Inc.
® MIMIX is a registered trademark of Lakeview Technology Inc.
™ MIMIX AutoGuard, MIMIX AutoNotify, MIMIX Availability Manager, MIMIX ha1, MIMIX ha Lite, MIMIX DB2
Replicator, MIMIX Object Replicator, MIMIX Monitor, MIMIX Promoter, IntelliStart, RJ Link, and MIMIX Switch
Assistant are trademarks of Lakeview Technology Inc.
AS/400, DB2, eServer, i5/OS, IBM, iSeries, OS/400, Power, System i, and WebSphere are trademarks of
International Business Machines Corporation.
All other trademarks are the property of their respective owners.
Lakeview Technology Inc. is an IBM Business Partner.
If you are an entity of the U.S. government, you agree that this documentation and the program(s) referred to in
this document are Commercial Computer Software, as defined in the Federal Acquisition Regulations (FAR),
and the DoD FAR Supplement, and are delivered with only those rights set forth within the license agreement
for such documentation and program(s). Use, duplication or disclosure by the Government is subject to
restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software
clause at DFAR 252.227-7013 (48 CFR) or subparagraphs (c)(1) & (2) of the Commercial Computer Software
- Restricted Rights clause at FAR 52.227-19.
The information in this document is subject to change without notice. Lakeview Technology Inc. makes no
warranty of any kind regarding this material and assumes no responsibility for any errors that may appear in
this document. The program(s) referred to in this document are not specifically developed, or licensed, for use
in any nuclear, aviation, mass transit, or medical application or in any other inherently dangerous applications,
and any such use shall remove Lakeview Technology Inc. from liability. Lakeview Technology Inc. shall not be
liable for any claims or damages arising from such use of the Program(s) for any such applications.
Examples and Example Programs:
This book contains examples of reports and data used in daily operation. To illustrate them as completely as
possible the examples may include names of individuals, companies, brands, and products. All of these names
are fictitious. Any similarity to the names and addresses used by an actual business enterprise is entirely
coincidental.
This book contains small programs that are furnished by Lakeview Technology Inc. as simple examples to
provide an illustration. These examples have not been thoroughly tested under all conditions. Lakeview
Technology, therefore, cannot guarantee or imply reliability, serviceability, or function of these example
programs. All programs contained herein are provided to you “AS IS.” THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE EXPRESSLY DISCLAIMED.
interpreting, file data comparisons 582 changing
timestamp difference 129 RJ link 227
troubleshoot 578 startup programs, remote journaling 305
auditing and reporting, compare commands changing from RJ to MIMIX processing
DLO attributes 434 permanently 229
file and member attributes 425 temporarily 228
file data using active processing 464 checklist
file data using subsetting options 467 convert *DTAARA, *DTAQ to user journaling
file data with repair capability 458 154
file data without active processing 455 convert IFS objects to user journaling 154
files on hold 461 converting to remote journaling 147
IFS object attributes 431 copying configuration data 553
object attributes 428 legacy cooperative processing 157
auditing value, i5/OS object manual configuration (source-send) 143
set by MIMIX 58 MIMIX Dynamic Apply 150
auditing, i5/OS object 25 new preferred configuration 139
performed by MIMIX 297 pre-configuration 81
audits 487 collision points 511
job log 578 collision resolution 511
authorities, private 104 default value 240
automation 510 requirements 382
autostart job entry 190 working with 381
changing 191 commands
configuring 190 changing defaults 537
identifying 191 displaying a list of 528
commands, by mnemonic
B ADDDGDAE 290
backlog ADDMSGLOGE 521
comparing file data restriction 442 ADDRJLNK 225
backup system 23 CHGDGDAE 290
restricting access to files 240 CHGJRNDFN 217
basic ASP 565 CHGRJLNK 227
batch output 527 CHGSYSDFN 171
benefits CHGTFRDFN 186
independent ASPs 564 CHKDGFE 303, 580
LOB replication 107 CLOMMXLST 536
bi-directional data flow 361 CMPDLOA 420
broadcast configuration 68 CMPFILA 420
CMPFILDTA 440, 455
C CMPIFSA 420
candidate objects CMPOBJA 420
defined 400 CMPRCDCNT 437
cascade configuration 68 CPYCFGDTA 552
cascading distributions, configuring 365 CPYDGDAE 291
catchup mode 63 CPYDGFE 291
change management CPYDGIFSE 291
journal receivers 202 CRTCRCLS 383
overview 37 CRTDGDFN 247, 251
remote journal environment 37 CRTJRNDFN 215
CRTSYSDFN 170
CRTTFRDFN 184 WRKDGDLOE 291
DLTCRCLS 384 WRKDGFE 291
DLTDGDFN 256 WRKDGIFSE 291
DLTJRNDFN 256 WRKDGOBJE 291
DLTSYSDFN 256 WRKJRNDFN 255
DLTTFRDFN 256 WRKRJLNK 310
DSPDGDAE 293 WRKSYSDFN 255
DSPDGFE 293 WRKTFRDFN 255
DSPDGIFSE 293 commands, by name
ENDJRNFE 327 Add Data Group Data Area Entry 290
ENDJRNIFSE 331 Add Message Log Entry 521
ENDJRNOBJE 335 Add Remote Journal Link 225
ENDJRNPF 327 Change Data Group Data Area Entry 290
LODDGDAE 289 Change Journal Definition 217
LODDGFE 272 Change RJ Link 227
LODDGOBJE 268 Change System Definition 171
MIMIX 91 Change Transfer Definition 186
OPNMMXLST 536 Check Data Group File Entries 303, 580
RMVDGDAE 292 Close MIMIX List 536
RMVDGFE 292 Compare DLO Attributes 420
RMVDGFEALS 292 Compare File Attributes 420
RMVDGIFSE 292 Compare File Data 440, 455
RMVRJCNN 231 Compare IFS Attributes 420
RUNCMD 529 Compare Object Attributes 420
RUNCMDS 529 Compare Record Counts 437
SETDGAUD 297 Copy Configuration Data 552
SETIDCOLA 373 Copy Data Group Data Area Entry 291
SNDNETDLO 509 Copy Data Group File Entry 291
SNDNETIFS 508 Copy Data Group IFS Entry 291
SNDNETOBJ 475, 506 Create Collision Resolution Class 383
STRJRNFE 326 Create Data Group Definition 247, 251
STRJRNIFSE 330 Create Journal Definition 215
STRJRNOBJE 334 Create System Definition 170
STRMMXMGR 296 Create Transfer Definition 184
STRSVR 189 Delete Collision Resolution Class 384
SWTDG 25 Delete Data Group Definition 256
SYNCDFE 473 Delete Journal Definition 256
SYNCDGACTE 473, 479 Delete System Definition 256
SYNCDGFE 480, 489 Delete Transfer Definition 256
SYNCDLO 472, 478, 499 Display Data Group Data Area Entry 293
SYNCIFS 472, 478, 495, 505 Display Data Group File Entry 293
SYNCOBJ 472, 478, 491, 505 Display Data Group IFS Entry 293
VFYCMNLNK 194, 195 End Journal Physical File 327
VFYJRNFE 328 End Journaling File Entry 327
VFYJRNIFSE 332 End Journaling IFS Entries 331
VFYJRNOBJE 336 End Journaling Obj Entries 335
VFYKEYATR 359 Load Data Group Data Area Entries 289
WRKCRCLS 383 Load Data Group File Entries 272
WRKDGDAE 289, 291 Load Data Group Object Entries 268
WRKDGDFN 255 MIMIX 91
Open MIMIX List 536 #MBRRCDCNT audit performance 351
Remove Data Group Data Area Entry 292 journal standby state, journal cache 341, 344
Remove Data Group File Entry 292 journaled IFS objects 73
Remove Data Group IFS Entry 292 communications
Remove Remote Journal Connection 231 APPC/SNA 163
Run Command 529 configuring system level 159
Run Commands 529 job names 48
Send Network DLO 509 native TCP/IP 159
Send Network IFS 508 OptiConnect 163
Send Network Object 506 protocols 159
Send Network Objects 475 starting TCP sever 189
Set Data Group Auditing 297 compare commands
Set Identity Column Attribute 373 completion and escape messages 514
Start Journaling File Entry 326 outfile formats 419
Start Journaling IFS Entries 330 report types and outfiles 418
Start Journaling Obj Entries 334 spooled files 418
Start Lakeview TCP Server 189 comparing
Start MIMIX Managers 296 DLO attributes 434
Switch Data Group 25 file and member attributes 425
Synchronize Data Group Activity Entry 479 IFS object attributes 431
Synchronize Data Group File Entry 480, 489 object attributes 428
Synchronize DG Activity Entry 473 when file content omitted 389
Synchronize DG File Entry 473 comparing attributes
Synchronize DLO 472, 478, 499 attributes to compare 422
Synchronize IFS 478 overview 420
Synchronize IFS Object 472, 495, 505 supported object attributes 421, 445
Synchronize Object 472, 478, 491, 505 comparing file data 440
Verify Communications Link 194, 195 active server technology 440
Verify Journaling File Entry 328 advanced subsetting 451
Verify Journaling IFS Entries 332 allocated and not allocated records 442
Verify Journaling Obj Entries 336 comparing a random sample 451
Verify Key Attributes 359 comparing a range of records 448
Work with Collision Resolution Classes 383 comparing recently inserted data 448
Work with Data Group Data Area Entries 289, comparing records over time 451
291 data correction 440
Work with Data Group Definition 255 first and last subset 453
Work with Data Group DLO Entries 291 interleave factor 451
Work with Data Group File Entries 291 keys, triggers, and constraints 443
Work with Data Group IFS Entries 291 multi-threaded jobs 441
Work with Data Group Object Entries 291 number of subsets 451
Work with Journal Definition 255 parallel processing 441
Work with RJ Links 310 processing with DBAPY 441, 461
Work with System Definition 255 referential integrity considerations 444
Work with Transfer Definition 255 repairing files in *HLDERR 441
commands, run on remote system 529 restrictions 441
commit cycles security considerations 442
effect on audit comparison 582, 583 thread groups 450
effect on audit results 587 transfer definition 450
policy effect on compare record count 351 transitional states 441
commitment control 107 using active processing 464
using subsetting options 467 support 370
wait time 450 when journal is in standby state 341
with repair capability 458 constraints, physical files with
with repair capability when files are on hold apply session ignored 111
461 configuring 107
without active processing 455 legacy cooperative processing 111
comparing file record counts 437 constraints, referential 111
configuration contacting Lakeview Technology 19
additional supporting tasks 294 container send process 56
auditing 580 defaults 243
copying existing data 558 description 54
configuring threshold 243
advanced replication techniques 353 contextual transfer definitions
bi-directional data flow 361 considerations 183
cascading distributions 365 RJ considerations 182
choosing the correct checklist 137 continuous mode 63
classes, collision resolution 383 conventions
data areas and data queues 112 product 14
DLO documents and folders 124 publications 14
file routing, file combining 363 convert data group
for improved performance 337 to advanced journaling 154
IFS objects 118 COOPDB (Cooperate with database) 113, 120
independent ASP 568 cooperative journal (COOPJRN)
Intra communications 560, 561 behavior 106
job restart time 313 cooperative processing
keyed replication 356 and omitting content 389
library-based objects 100 configuring files 105
message queue objects for user profiles 104 file, preferred method for 50
omitting T-ZC journal entry content 388 introduction 50
spooled file replication 102 journaled objects 51
to replicate SQL stored procedures 393 legacy 51
unique key replication 356 legacy limitations 111
configuring, collision resolution 382 MIMIX Dynamic Apply limitations 110
confirmed journal entries 64 cooperative processing, legacy
considerations limitations 111
journal for independent ASP 569 requirements and limitations 111
what to not replicate 83 COOPJRN 106
constraints COOPJRN (Cooperative journal) 236
*CST attribute for CMPFILA 591 COOPTYPE (Cooperating object types) 113
apply session for dependent files 371 copying
auditing with CMPFILA 420 data group entries 291
CMPFILA file-specific attribute 591 definitions 255
comparing file data 443 create operation, how replicated 129
omit content and legacy cooperative process- customer support 19
ing 389 customizing 510
referential integrity considerations 444 replication environment 511
requirements 370
requirements when synchronizing 481 D
restrictions with high availability journal perfor- data area
mance enhancements 344
retrictions of journaled 113 data library 34, 168
data areas data management techniques 361
journaling 72 data queue
polling interval 238 restrictions of journaled 113
polling process 77 data queues
synchronizing an object tracking entry 505 journaling 72
data distribution techniques 361 synchronizing journaled objects 505
data group 24 data source 234
convert to remote journaling 147 database apply
database only 110 serialization 85
determining if RJ link used 310 with compare file data (CMPFILDTA) 441,
ending 40, 67 461
RJ link differences 67 database apply process 76
sharing an RJ link 66 description 66
short name 234 threshold warning 241
starting 40 database reader process 66
switching 24 description 66
switching, RJ link considerations 70 threshold 241
timestamps, automatic 237 database receive process 76
type 235 database send process 76
data group data area entry 289 description 76
adding individual 290 filtering 236
loading from a library 289 threshold 241
data group definition 35, 233 DDM
creating 247 password validation 306
parameter tips 234 server in startup programs 305
data group DLO entry 287 server, starting 308
adding individual 288 defaults, command 537
loading from a folder 287 definitions
data group entry 401 data group 35
defined 93 journal 35
description 24 named 34
object 267 remote journal link 35
procedures for configuring 265 renaming 258
data group file entry 272 RJ link 35
adding individual 278 system 35
changing 279 transfer 35
loading from a journal definition 276 delay times 167
loading from a library 275, 276 delay/retry processing
loading from FEs from another data group 277
first and second 238
third 239
loading from object entries 273 delete management
sources for loading 272 journal receivers 203
data group IFS entry 282 overview 37
with independent ASPs 569 remote journal environment 38
data group object entry delete operations
adding individual 268 journaled *DTAARA, *DTAQ, IFS objects 134
custom loading 267 legacy cooperative processing 134
independent ASP 569 deleting
with independent ASP 569 data group entries 292
definitions 256 port alias, complex 161
delivery mode port alias, simple 160
asynchronous 65 querying content of an output file 696
synchronous 63 SETIDCOLA command increment values 377
detail report 525 WRKDG SELECT statements 696
detected differences exit points 511
viewing and resolving 575, 576 journal receiver management 538, 541
directory entries MIMIX Monitor 538
managing 178 MIMIX Promoter 539
RDB 178 exit programs
display output 524 journal receiver management 204, 542
displaying requesting customized programs 540
data group entries 293 expand support 526
definitions 257 extended attribute cache 345
distribution request, data-retrieval 55 configuring 345
DLOs
example, entry matching 125 F
generic name support 124 failed request resolution 43
keeping same name 242 FEOPT (file and tracking entry options) 239
object processing 124 file id (FID) 75
duplicate identity column values 373 files
dynamic updates combining 363
adding data group entries 278 omitting content 387
removing data group entries 292 output 526
routing 364
E sharing 361
end journaling synchronizing 480
data areas and data queues 335 filtering
files 327 database replication 76
IFS objects 331 messages 45
IFS tracking entry 331 on database send 236
object tracking entry 335 on source side 237
ending CMPFILDTA jobs 454 remote journal environment 66
examples firewall, using CMPFILDTA with 442
convert to advanced journaling 86 folder path names 124
DLO entry matching 125
IFS object selection, subtree 415 G
job restart time 316 generic name support 402
journal definitions for multimanagement environment 209
journal definitions for switchable data group 207
DLOs 124
generic user exit 538
H
journal receiver exit program 545 help, accessing 14
load file entries for MIMIX Dynamic Apply 273 history retention 168
object entry matching 102 hot backup 21
object retrieval delay 391
object selection process 407
I
object selection, order precedence in 408
IBM i5/OS option 42 341
object selection, subtree 410
IBM OS/400 objects
to not replicate 83 system definition procedure 319
IFS directory, created during installation 29 jobs, restarted automatically 313
IFS file systems 118 journal 25
unsupported 118 improving performance of 337
IFS object selection maximum number of objects in 26
examples, subtree 415 security audit 53
subtree 405 system 53
IFS objects 118 journal analysis 43
file id (FID) use with journaling 75 journal at create 127, 238
journaled entry types, commitment control and 73
requirements 323
requirements and restrictions 324
journaling 72 journal caching 202, 342
not supported 118 journal definition 35
path names 119 configuring 197
supported object types 118 created by other processes 200
IFS objects, journaled creating 215
restrictions 121 fields on data group definition 235
supported operations 130 parameter tips 201
synchronizing 482, 505
independent ASP 565
remote journal environment considerations 205
limitations 567 remote journal naming convention 206
primary 565
replication 563
remote journal naming convention, multimanagement 208
requirements 567 remote journaling example 207
restrictions 567 journal entries 25
secondary 565 confirmed 64
synchronizing data within an 477 filtering on database send 236
information and additional resources 17 minimized data 339
installations, multiple MIMIX 23 OM journal entry 130
interleave factor 451 receive journal entry (RCVJRNE) 346
Intra configuration 559 unconfirmed 64, 70
IPL, journal receiver change 37 journal entry codes
for data area and data queues 114
supported by MIMIX user journal processing 122
J
job classes 30
job description parameter 527 journal image 239, 355
job descriptions 30, 168 journal manager 33
in data group definition 243 journal receiver 25
in product library 30 change management 37, 202
list of MIMIX 30 delete management 37, 38, 203
job log prefix 202
for audit 578 RJ processing earlier receivers 38
job name parameter 527 size for advanced journaling 213
job names 47 starting point 26
job restart time 313 stranded on target 39
data group definition procedure 319 journal receiver management
examples 315 interaction with other products 38
overview 313 recommendations 37
parameter 168, 244
journal sequence number, change during IPL 37
journal standby state 341 user exit program 108
journaled data areas, data queues large objects (LOBs)
planning for 85 minimized journal entry data 339
journaled IFS objects legacy cooperative processing
planning for 85 configuring 108
journaled object types limitations 111
user exit program considerations 87 requirements 111
journaling 25 libraries
cannot end 327 to not replicate 83
data areas and data queues 72 library list
ending for data areas and data queues 335 adding QSOC to 164
ending for IFS objects 331 library list, effect of independent ASP 570
ending for physical files 327 library-based objects, configuring 100
IFS objects 72 limitations
IFS objects and commitment control 73 database only data group 110
implicitly started 323 list detail report 525
requirements for starting 323 list summary report 525
starting for data areas and data queues 334 load leveling 57
starting for IFS objects 330 loading
starting for physical files 326 tracking entries 284
starting, ending, and verifying 322 LOB replication 107
verifying 487 local-remote journal pair 63
verifying for data areas and data queues 336 log space 26
verifying for IFS objects 332 logical files 105, 106
verifying for physical files 328 long IFS path names 119
journaling environment
automatically creating 236 M
building 219 manage directory entries 178
removing 231 management system 24
source for values (JRNVAL) 219 maximum size transmitted 177
journaling on target, RJ environment considerations 39
MAXOPT2 value 213
menu
journaling status MIMIX Configuration 295
data areas and data queues 334 MIMIX Main 91
files 326 message handling 167
IFS objects 330 message log 521
journaling, starting message queues
files 326 associated with user profiles 104
journal-related threshold 204
K messages 44
keyed replication 355 CMPDLOA 516
comparing file data restriction 442 CMPFILA 514
file entry option defaults 239 CMPFILDTA 517
preventing before-image filtering 237 CMPIFSA 515
restrictions 356 CMPOBJA 515
verifying file attributes 359 CMPRCDCNT 516
comparison completion and escape 514
L MIMIX AutoGuard 487
large object (LOB) support MIMIX Dynamic Apply
configuring 105, 108 notification of objects not in configuration 127
recommended for files 105 notification retention 168
requirements and limitations 110
MIMIX environment 29 O
MIMIX installation 23 object apply process
MIMIX jobs, restart time for 313 defaults 243
MIMIX Model Switch Framework 538 description 54
MIMIX performance, improving 337 threshold 243
MIMIX Retry Monitor 43 object attributes, comparing 422
MIMIXOWN user profile 31, 306 object auditing 323
MIMIXQGPL library 34 object auditing level, i5/OS
MIMIXSBS subsystem 34, 90 manually set for a data group 297
minimized journal entry data 339 set by MIMIX 58, 297
LOBs 107 object auditing value
MMNFYNEWE monitor 127 data areas, data queues 112
monitor DLOs 124
new objects not configured to MIMIX 127 IFS objects 120
move/rename operations library-based objects 98
system journal replication 130 omit T-ZC entry considerations 388
user journal replication 131 object entry, data group
multimanagement creating 267
journal definition naming 208 object locking retry interval 238
multi-threaded jobs 441 object processing
data areas, data queues 112
N defaults 241
name pattern 405 DLOs 124
name space 53 high volume objects 350
names, displaying long 119 IFS objects 118
naming conventions retry interval 238
data group definitions 234 spooled files 102
journal definitions 201, 206, 208 object retrieval delay
multi-part 27 considerations 391
transfer definitions 176 examples 391
transfer definitions, contextual (*ANY) 183 selecting 391
transfer definitions, multiple network systems 172
object retrieve process 56
defaults 243
network systems 24 description 53
multiple 172 threshold 243
new objects with high volume objects 350
automatically journal 238 object selection 399
automatically replicate 127 commands which use 399
files 127 examples, order precedence 408
files processed by legacy cooperative processing 128
examples, process 407
examples, subtree 410
files processed with MIMIX Dynamic Apply name pattern 405
127 order precedence 401
IFS object journal at create requirements 323 parameter 401
IFS objects, data areas, data queues 128 process 399
journal at create selection criteria 324 subtree 404
object selector elements 401 considerations 523
by function 402 display 524
object selectors 401 expand support 526
object send process 54 file 526
description 53 parameter 523
threshold 242 print 524
object types supported 96, 549 output file
Omit content (OMTDTA) parameter 388 querying content, examples of 696
and comparison commands 389 output file fields
and cooperative processing 389 Difference Indicator 582, 587
open commit cycles System 1 Indicator field 589
audit results 582, 583, 587 System 2 Indicator field 589
OptiConnect, configuring 163 output queues 168
outfiles 621 overview
MCAG 623 MIMIX operations 40
MCDTACRGE 626 remote journal support 61
MCNODE 628 starting and ending replication 40
MXCDGFE 630 support for resolving problems 42
MXCMPDLOA 632 support for switching 24, 44
MXCMPFILA 634 working with messages 44
MXCMPFILD 636
MXCMPFILR 639 P
MXCMPIFSA 644 parallel processing 441
MXCMPOBJA 647 path names, IFS 119
MXCMPRCDC 640 policy, CMPRCDCNT commit threshold 351
MXDGACT 649 polling interval 238
MXDGACTE 651 port alias 160
MXDGDAE 659 complex example 161
MXDGDFN 660 creating 162
MXDGDLOE 668 simple example 160
MXDGFE 670 print output 524
MXDGIFSE 674, 726, 728 printing
MXDGIFSTE 726 controlling characteristics of 168
MXDGOBJE 703 data group entries 293
MXDGOBJTE 728 definitions 257
MXDGSTS 676 private authorities, *MSGQ replication of 104
MXDGTSP 706 problems, journaling
MXJRNDFN 709 data areas and data queues 334
MXSYSDFN 716 files 326
MXTFRDFN 720 IFS objects 330
MZPRCDFN 722 process
MZPRCE 723 container send and receive 56
user profile password 619 database apply 76
user profile status 615 database reader 66
WRKRJLNK 713 database receive 76
outfiles, supporting information database send 76
record format 621 names 47
work with panels 622 object apply 56
output object retrieve 56
batch 527
object send 54 MIMIX support 61
process, object selection 399 relational database 178
processing defaults remote journal environment
container send 243 changing 222
database apply 241 contextual transfer definitions 182
file entry options 239 receiver change management 37
object apply 243 receiver delete management 38
object retrieve 243 restrictions 62
user journal entry 236 RJ link 66
production system 23 security implications 306
publications switch processing changes 44
conventions 14 remote journal link 35, 66
formatting used in 15 remote journal link, See also RJ link
IBM 17 remote journaling
data group definition 236
Q repairing
QAUDCTL system value 53 file data 458
QAUDLVL system value 53, 103 files in *HLDERR 441
QDFTJRN data area 238 files on hold 461
restrictions 324 replicating
role in processing new objects 324 user profiles 476
QSOC what to not replicate 83
library 164 replication
subsystem 305 advanced topic parameters 237
by object type 96
R configuring advanced techniques 353
RCVJRNE (Receive Journal Entry) 346 constraint-induced modifications 371
configuring values 347 data area 77
determining whether to change the value of 347
ending MIMIX 40
independent ASP 563
understanding its values 346 ending data group 40
RDB 178 ending MIMIX 40
directory entries 178 independent ASP 563
RDB directory entry 188 maximum size threshold 177
reader wait time 235 positional vs. keyed 355
receiver library, changing for RJ target journal 222
process, remote journaling environment 66
retrieving extended attributes 345
receivers spooled files 102
change management 202 SQL stored procedures 393
delete management 203 starting data group 40
recommendation starting MIMIX 40
multimanagement journal definitions 208 system journal process 53
relational database (RDB) 178 unit of work for 24
entries 178, 186 user-defined functions 393
remote journal what to not replicate 83
benefits 61 replication path 46
i5/OS function 25, 61 reports
i5/OS function, asynchronous delivery 65 detail 525
i5/OS function, synchronous delivery 63 list detail 525
list summary 525
types for compare commands 418 S
requirement save-while-active 396
objects and journal in same ASP 26 considerations 396
requirements examples 397
independent ASP 567 options 397
journal at create 323 wait time 396
keyed replication 355 search process, *ANY transfer definitions 181
legacy cooperative processing 111 security
MIMIX Dynamic Apply 110 considerations, CMPFILDTA command 442
standby journaling 343 general information 80
user journal replication of data areas and data queues 112
remote journaling implications 306
security audit journal 53
restarted 313 sending
restore operations, journaled *DTAARA, DLOs 509
*DTAQ, IFS objects 134 IFS objects 508
restrictions library-based objects 506
comparing file data 441 serialization
data areas and data queues 113 database files and journaled objects 85
independent ASP 567 object changes with database 72
journal at create 324 servers
journal receiver management 38 starting DDM 308
journaled *DTAARA, *DTAQ objects 113 starting TCP 189
journaled IFS objects 121 short transfer definition name 176
keyed replication (unique key) 356 source physical files 105, 106
legacy cooperative processing 111 source system 23
LOBs 108 spooled files 102
MIMIX Dynamic Apply 110 compare commands 418
number of objects in journal 26 keeping deleted 103
QDFTJRN data area 324 options 103
remote journaling 62 retaining on target system 242
standby journaling 343 SQL stored procedures 393
retrying, data group activity entries 43 replication requirements 393
RJ link 35 SQL table identity columns 373
adding 225 alternatives to SETIDCOLA 375
changing 227 check for replication of 378
data group definition parameter 236 problem 373
description 66 SETIDCOLA command details 376
end options 67 SETIDCOLA command examples 377
identifying data groups that use 310 SETIDCOLA command limitations 374
sharing among data groups 66 SETIDCOLA command usage notes 377
switching considerations 70 setting attribute 378
threshold 237 when to use SETIDCOLA 374
RJ link monitors standby journaling
description 68 IBM i5/OS option 42 341
displaying status of 68 journal caching 342
ending 68 journal standby state 341
not installed, status when 68 MIMIX processing with 342
operation 68 overview 341
requirements 343
restrictions 343 IFS objects 495
start journaling IFS objects by path name only 496
data areas and data queues 334 IFS objects in a data group 495
file entry 326 IFS objects without a data group 496
files 326 IFS tracking entries 505
IFS objects 330 including logical files 481
IFS tracking entry 330 independent ASP, data in an 477
object tracking entry 334 initial 484
starting initial configuration 483
system and journal managers 296 initial configuration MQ environment 483
TCP server 189 limit maximum size 474
TCP server automatically 190 LOB data 476
startup programs object tracking entries 505
changes for remote journaling 305 object, IFS, DLO overview 478
MIMIX subsystem 90 objects 491
QSOC subsystem 305 objects in a data group 491
status, values affecting updates to 238 objects without a data group 492
storage, data libraries 168 related file 481
stranded journal on target, journal entries 39 resources for 483
subsystem status changes caused by 476
MIMIXSBS, starting 90 tracking entries 482
QSOC 305 user profiles 474, 476
subtree 404 synchronous delivery 63
IFS objects 405 unconfirmed entries 64
switching SYSBAS 563, 565
allowing 234 system ASP 564
data group 24 system definition 35, 166
enabling journaling on target system 235 changing 171
example RJ journal definitions for 207 creating 170
independent ASP restriction 568 parameter tips 167
MIMIX Model Switch Framework with RJ link 70
system journal 53
system journal replication
preventing identity column problems 373 advanced techniques 353
remote journaling changes to 44 omitting content 387
removing stranded journal receivers 39 system library list 163, 570
RJ link considerations 70 system manager 32
synchronization check, automatic 237 system user profiles
synchronizing 472 to not replicate 83
activity entries overview 479 system value
commands for 474 QAUDCTL 53
considerations 474 QAUDLVL 53, 103
data group activity entries 503 QSYSLIBL 164
database files 489 system, roles 23
database files overview 480
DLOs 499 T
DLOs in a data group 499 target journal state 202
DLOs without a data group 500 target system 23
establish a start point 483 TCP/IP
file entry overview 480 adding to startup program 305
files with triggers 480
configuring native 159 U
creating port aliases for 160 unconfirmed journal entries 64, 70
temporary files unique key
to not replicate 83 comparing file data restriction 442
thread groups 450 file entry options for replicating 239
threshold, backlog replication of 355
adjusting 251 user ASP 565
container send 243 user exit points 541
database apply 241 user exit program
database reader/send 241 data areas and data queues 87
object apply 243 IFS objects 87
object retrieve 243 large objects (LOBs) 108
object send 242 user exit, generic 538
remote journal link 237 user journal replication
threshold, CMPRCDCNT commit 351 advanced techniques 353
timestamps, automatic 237
tracking entries
loading 284
loading for data areas, data queues 285
requirements for data areas and data queues 112
supported journal entries for data areas, data queues 114
loading for IFS objects 284 tracking entry 74
purpose 74 user profile
tracking entry MIMIXOWN 306
file identifiers (FIDs) 312 password 619
transfer definition 35, 174, 450 status 615
changing 186 user profiles
contextual system support (*ANY) 28, 181 default 168
fields in data group definition 235 MIMIX 31
fields in system definition 167 replication of 104
multiple network system environment 172 specifying status 242
other uses 174 synchronizing 474
parameter tips 176 system distribution directory entries 476
short name 176 to not replicate 83
transfer protocols user-defined functions 393
OptiConnect parameters 177
SNA parameters 177 V
TCP parameters 176 verifying
trigger programs communications link 194, 195
defined 368 initial synchronization 487
synchronizing files 369 journaling, IFS tracking entries 332
triggers journaling, object tracking entries 336
avoiding problems 444 journaling, physical files 328
comparing file data 443 key attributes 359
disabling during synchronization 480
read 443
send and receive processes automatically 238
update, insert, and delete 443
T-ZC journal entries W
access types 387 wait time
configuring to omit 388 comparing file data 450
omitting 387 reader 235
WRKDG SELECT statement 696