Oracle Fusion Middleware
Using Oracle GoldenGate for Heterogeneous Databases, 12c (12.3.0.1)
E88786-09
November 2018
Copyright © 2011, 2018, Oracle and/or its affiliates. All rights reserved.
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify,
license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means.
Reverse engineering, disassembly, or decompilation of this software, unless required by law for
interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on
behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software,
any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are
"commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-
specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the
programs, including any operating system, integrated software, any programs installed on the hardware,
and/or documentation, shall be subject to license terms and license restrictions applicable to the programs.
No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of
their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron,
the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro
Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise
set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be
responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
Contents
Preface
Audience xii
Documentation Accessibility xii
Related Information xii
Conventions xiii
2.3 Setting the Session Character Set 2-5
2.4 Preparing for Initial Extraction 2-5
2.5 Specifying the DB2 LUW Database in Parameter Files 2-6
5.3.1 Assigning Row Identifiers 5-4
5.3.1.1 How Oracle GoldenGate Determines the Kind of Row Identifier to Use 5-4
5.3.1.2 Using KEYCOLS to Specify a Custom Key 5-4
5.3.2 Preventing Key Changes 5-4
5.3.3 Disabling Constraints on the Target 5-5
5.3.4 Enabling Change Capture 5-5
5.3.4.1 Specifying a Default Journal 5-6
5.3.4.2 Removing a Default Journal Specification 5-6
5.3.5 Maintaining Materialized Query Tables 5-6
5.3.6 Specifying the Oracle GoldenGate Library 5-6
5.4 Adjusting the System Clock 5-7
5.5 Configuring the ODBC Driver 5-7
5.5.1 Configuring ODBC on Linux 5-7
5.5.2 Configuring ODBC on Windows 5-11
7.3.2 Add Collision Handling 7-2
7.3.3 Prepare the Target Tables 7-3
7.4 Making the Instantiation Procedure More Efficient 7-3
7.4.1 Share Parameters Between Process Groups 7-3
7.4.2 Use Parallel Processes 7-3
7.5 Configuring the Initial Load 7-4
7.5.1 Configuring an Initial Load from File to Replicat 7-4
7.5.2 Configuring an Initial Load with a Database Utility 7-7
7.6 Adding Change-Capture and Change-Delivery Processes 7-8
7.6.1 Add the Primary Extract 7-8
7.6.1.1 Understanding the Primary Extract Start Point 7-8
7.6.1.2 Establishing the Required and Optional Extract Start Points 7-9
7.6.2 Add the Local Trail 7-10
7.6.3 Add the Data Pump Extract Group 7-10
7.6.4 Add the Remote Trail 7-11
7.6.5 Add the Replicat Group 7-11
7.7 Performing the Target Instantiation 7-11
7.7.1 To Perform Instantiation from File to Replicat 7-12
7.7.2 To Perform Instantiation with a Database Utility 7-13
7.8 Monitoring Processing after the Instantiation 7-14
7.9 Backing up Your Oracle GoldenGate Environment 7-15
7.10 Positioning Extract After Startup 7-15
10 Preparing the DB2 for z/OS Database for Oracle GoldenGate
10.1 Preparing Tables for Processing 10-1
10.1.1 Disabling Triggers and Cascade Constraints 10-1
10.1.2 Assigning Row Identifiers 10-2
10.1.2.1 How Oracle GoldenGate Determines the Kind of Row Identifier to Use 10-2
10.1.2.2 Using KEYCOLS to Specify a Custom Key 10-2
10.1.3 Handling ROWID Columns 10-3
10.2 Configuring a Database Connection 10-3
10.2.1 Setting Initialization Parameters 10-3
10.2.2 Specifying the Path to the Initialization File 10-4
10.2.3 Ensuring ODBC Connection Compatibility 10-4
10.2.4 Specifying the Number of Connection Threads 10-5
10.3 Accessing Load Modules 10-5
10.4 Specifying Job Names and Owners 10-5
10.5 Assigning WLM Velocity Goals 10-6
10.6 Monitoring Processes 10-7
10.6.1 Viewing Oracle GoldenGate Messages 10-7
10.6.2 Identifying Oracle GoldenGate Processes 10-7
10.6.3 Interpreting Statistics for Update Operations 10-8
10.7 Supporting Globalization Functions 10-8
10.7.1 Replicating From a Source that Contains Both ASCII and EBCDIC 10-8
10.7.2 Specifying Multi-Byte Characters in Object Names 10-9
12.2.1 Limitations and Clarifications 12-2
12.3 Supported Objects and Operations for MySQL 12-3
12.4 Non-Supported MySQL Data Types 12-4
15.4 Non-Supported Objects and Operations for SQL Server 15-4
19.4 Supplemental Logging 19-2
19.5 Operational Requirements and Considerations 19-2
24 Preparing the System for Oracle GoldenGate
24.1 Preparing Tables for Processing 24-1
24.1.1 Disabling Triggers and Cascade Constraints 24-1
24.1.2 Assigning Row Identifiers 24-1
24.1.2.1 How Oracle GoldenGate Determines the Kind of Row Identifier to Use 24-2
24.1.2.2 Using KEYCOLS to Specify a Custom Key 24-2
Preface
This guide helps you get started with using Oracle GoldenGate on heterogeneous
database systems supported with this release.
Topics:
• Audience
• Documentation Accessibility
• Related Information
• Conventions
Audience
Using Oracle GoldenGate for Heterogeneous Databases is intended for DBAs and
system administrators who are responsible for implementing Oracle GoldenGate and
managing the databases for an organization.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle
Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Related Information
The Oracle GoldenGate Product Documentation Libraries are found at
https://docs.oracle.com/en/middleware/goldengate/index.html
Additional Oracle GoldenGate information, including best practices, articles, and
solutions, is found at:
Oracle GoldenGate A-Team Chronicles
Conventions
The following text conventions are used in this document:
boldface
Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.

italic
Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.

monospace
Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
Part I
What is Oracle GoldenGate for Heterogeneous Databases?
Oracle GoldenGate is a comprehensive software package for real-time data capture
and replication in heterogeneous IT environments.
The product set enables high availability solutions, real-time data integration,
transactional change data capture, data replication, transformations, and verification
between operational and analytical enterprise systems. Oracle GoldenGate 12c brings
extreme performance with simplified configuration and management, support for cloud
environments, expanded heterogeneity, and enhanced security.
You can use the following supported heterogeneous databases with Oracle
GoldenGate.
• DB2 LUW
• DB2 for i
• DB2 for z/OS
• MySQL
• SQL Server
• Teradata
Each database that Oracle GoldenGate supports has its own requirements and
configuration. This book is divided into parts so that you can easily find information
that is relevant to your environment. See Installing Oracle GoldenGate for system
requirements and installation details for each of these databases.
Part II
Using Oracle GoldenGate with DB2 LUW
With Oracle GoldenGate for DB2 LUW, you can replicate data to and
from the supported DB2 LUW versions, or between a DB2 LUW database and a
database of another type. Oracle GoldenGate for DB2 LUW supports data filtering,
mapping, and transformation unless noted otherwise in this documentation.
This part describes tasks for configuring and running Oracle GoldenGate on a DB2
LUW database.
• Understanding What's Supported for DB2 LUW
This chapter contains support information for Oracle GoldenGate on DB2 LUW
databases.
• Preparing the System for Oracle GoldenGate
• Configuring Oracle GoldenGate for DB2 LUW
1
Understanding What's Supported for DB2 LUW
This chapter contains support information for Oracle GoldenGate on DB2 LUW
databases.
Topics:
• Supported DB2 LUW Data Types
• Non-Supported DB2 LUW Data Types
• Supported Objects and Operations for DB2 LUW
• Non-Supported Objects and Operations for DB2 LUW
• Supported Object Names
Limitations of Support
Oracle GoldenGate has the following limitations for supporting DB2 LUW data types:
• Oracle GoldenGate supports multi-byte character data types and multi-byte data
stored in character columns. Multi-byte data is only supported in a like-to-like
configuration. Transformation, filtering, and other types of manipulation are not
supported for multi-byte character data.
• BLOB and CLOB columns must have a LOGGED clause in their definitions.
• GRAPHIC and VARGRAPHIC columns must be in a database where the character set is
UTF16. Any other character set causes Oracle GoldenGate to abend.
• The support of range and precision for floating-point numbers depends on the host
machine. In general, the precision is accurate to 16 significant digits, but you
should review the database documentation to determine the expected
approximations. Oracle GoldenGate rounds or truncates values that exceed the
supported precision.
• Extract fully supports the capture and apply of TIMESTAMP(0) through TIMESTAMP(9).
Extract also captures TIMESTAMP(10) through TIMESTAMP(12), but it truncates the
data to nanoseconds (maximum of nine digits of fractional time) and issues a
warning to the error log. Replicat truncates timestamp data from other sources to
nanoseconds when applying it to TIMESTAMP(10) through TIMESTAMP(12) in a DB2
LUW target.
• Oracle GoldenGate supports timestamp data from 0001/01/03:00:00:00 to
9999/12/31:23:59:59. If a timestamp is converted from GMT to local time, these
limits also apply to the resulting timestamp. Depending on the timezone,
conversion may add or subtract hours, which can cause the timestamp to exceed
the lower or upper supported limit.
• Oracle GoldenGate does not support the filtering, column mapping, or
manipulation of large objects that are larger than 4K. You can use the full Oracle
GoldenGate functionality for objects that are 4K or smaller.
• Replication of XML columns between source and target databases with the same
character set is supported. If the source and target database character sets are
different, then XML replication may fail with a database error because some
characters may not be recognized (or valid) in the target database character set.
• DECFLOAT
• User-defined types
• Negative dates
• Multi Dimensional Clustered Tables (MDC) for DB2 LUW 9.5 and later.
• Materialized Query Tables. Oracle GoldenGate does not replicate the MQT itself,
but only the base tables. The target database automatically maintains the content
of the MQT based on the changes that are applied to the base tables by Replicat.
• Tables with ROW COMPRESSION. In DB2 LUW version 10.1 and later, the
options COMPRESS YES STATIC and COMPRESS YES ADAPTIVE are supported. To support
the use of COMPRESS YES in DB2 LUW versions 9.7 and earlier, you must use the
TRANLOGOPTIONS parameter with the ALLOWTABLECOMPRESSION option, and the
compressed table must not contain LOBs.
• The extended row size feature is enabled by default and is supported with a
workaround that uses the FETCHCOLS option. For any VARCHAR or VARGRAPHIC
columns whose values are stored out of row in the database, you must fetch
the extended rows by specifying these columns with the FETCHCOLS option of the
TABLE parameter in the Extract parameter file. With this option set, Oracle
GoldenGate fetches the column value when it is stored out of row. If a value is
out of row and FETCHCOLS is not specified, the Extract process abends to prevent
any data loss. If you do not want to use this feature, set the
extended_row_size parameter to DISABLE.
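For illustration, out-of-row VARCHAR values can be fetched by naming the affected columns with FETCHCOLS in the Extract parameter file. This is a sketch; the schema, table, and column names are hypothetical:

```
TABLE hr.accounts, FETCHCOLS (notes, description);
```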
• Temporal tables are supported with DB2 LUW 10.1 FixPack 2 and greater. This is
the default for the Replicat process.
• Limitations on Automatic Heartbeat Table support are as follows:
For example, a FREQUENCY of 150 seconds is converted to the
closest minute value of 2 minutes, so the heartbeat table is updated every 120
seconds instead of every 150 seconds. Setting PURGE_FREQUENCY to 20 means
that the history table is purged at midnight on every 20th day.
– The following steps are necessary for the heartbeat scheduled tasks to
run:
1. Set the DB2_ATS_ENABLE registry variable: db2set DB2_ATS_ENABLE=YES
2. Create the SYSTOOLSPACE tablespace if it does not already exist:
CREATE TABLESPACE SYSTOOLSPACE IN IBMCATGROUP MANAGED BY AUTOMATIC STORAGE EXTENTSIZE 4
3. Ensure that the instance owner has database administration authority (DBADM):
GRANT DBADM ON DATABASE TO instance_owner_name
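Once these prerequisites are met, a heartbeat table with the frequency settings discussed above can be added in GGSCI. This is a sketch; the database name and credential alias are hypothetical:

```
DBLOGIN SOURCEDB mydb USERIDALIAS ogg_alias
ADD HEARTBEATTABLE, FREQUENCY 150, PURGE_FREQUENCY 20
```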
2
Preparing the System for Oracle GoldenGate
This chapter describes how to prepare the environment to run Oracle GoldenGate on
DB2 LUW.
Topics:
• Configuring the Transaction Logs for Oracle GoldenGate
• Preparing Tables for Processing
• Setting the Session Character Set
• Preparing for Initial Extraction
• Specifying the DB2 LUW Database in Parameter Files
To set LOGARCHMETH:
db2 update db cfg for database using LOGARCHMETH1 LOGRETAIN
db2 update db cfg for database using LOGARCHMETH2 OFF
2. Make a full backup of the database by issuing the following command.
db2 backup db database to device
3. Place the backup in a directory to which DB2 LUW has access rights. If you get
the following message, contact your systems administrator:
SQL2061N An attempt to access media "device" is denied.
Exclude the node itself from the path. For example, if the full path to the archive log
directory is /sdb2logarch/oltpods1/archive/OLTPODS1/NODE0000, then the OVERFLOWLOGPATH
value should be specified as /sdb2logarch/oltpods1/archive/OLTPODS1.
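As a sketch of this setting, the overflow log path can be supplied as a DB2 database configuration value; the database name mydb is hypothetical, and the path is the example above with the node directory excluded:

```shell
db2 update db cfg for mydb using OVERFLOWLOGPATH /sdb2logarch/oltpods1/archive/OLTPODS1
```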
2.2.2.1 How Oracle GoldenGate Determines the Kind of Row Identifier to Use
Unless a KEYCOLS clause is used in the TABLE or MAP statement, Oracle GoldenGate
selects a row identifier to use in the following order of priority:
1. Primary key
2. First unique key alphanumerically that does not contain a timestamp or non-
materialized computed column.
3. If none of the preceding key types exist (even though there might be other types of
keys defined on the table), Oracle GoldenGate constructs a pseudo key of all
columns that the database allows to be used in a unique key, excluding those that
are not supported by Oracle GoldenGate in a key or those that are excluded from
the Oracle GoldenGate configuration.
Note:
If there are other, non-usable keys on a table or if there are no keys at all on
the table, Oracle GoldenGate logs an appropriate message to the report file.
Constructing a key from all of the columns impedes the performance of
Oracle GoldenGate on the source system. On the target, this key causes
Replicat to use a larger, less efficient WHERE clause.
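When none of these identifiers is suitable, a KEYCOLS clause can designate a custom key in the TABLE or MAP statement. The following sketch uses hypothetical table and column names:

```
TABLE hr.accounts, KEYCOLS (account_id, branch_id);
```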
ADD TRANDATA issues the following command, which includes logging the before
image of LONGVAR columns:
ALTER TABLE name DATA CAPTURE CHANGES INCLUDE LONGVAR COLUMNS;
Example 2-1 To Exclude LONGVAR Logging:
To omit the INCLUDE LONGVAR COLUMNS clause from the ALTER TABLE command, use ADD
TRANDATA with the EXCLUDELONG option.
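As a sketch in GGSCI (the database name, credential alias, and table name are hypothetical), change capture with LONGVAR logging excluded would be enabled as follows:

```
DBLOGIN SOURCEDB mydb USERIDALIAS ogg_alias
ADD TRANDATA hr.accounts, EXCLUDELONG
```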
Note:
If LONGVAR columns are excluded from logging, the Oracle GoldenGate
features that require before images, such as the GETUPDATEBEFORES,
NOCOMPRESSUPDATES, and NOCOMPRESSDELETES parameters, might return errors if
tables contain those columns. For a workaround, see the
REQUIRELONGDATACAPTURECHANGES | NOREQUIRELONGDATACAPTURECHANGES options of
the TRANLOGOPTIONS parameter.
When the Extract process starts for the first time, it captures all the transaction data
that it encounters after the specified start point, but none of the data that occurred
before that point. This can cause partial transactions to be captured if open
transactions span the start point.
Note:
After the Extract is past the initialization, subsequent restarts of the Extract
do not extract partial transactions, because the process uses recovery
checkpoints to mark its last read position.
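To illustrate the start point, a primary Extract whose capture begins at the time it is added can be created in GGSCI as follows; the group name extdb2 is hypothetical:

```
ADD EXTRACT extdb2, TRANLOG, BEGIN NOW
```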
3
Configuring Oracle GoldenGate for DB2 LUW
This chapter provides an overview of the basic steps required to configure Oracle
GoldenGate for a DB2 LUW source and target database.
Topics:
• What to Expect from these Instructions
• Where to Get More Information
• Configuring the Primary Extract
• Configuring the Data Pump Extract
• Configuring Replicat
• Next Steps in the Deployment
• When to Start Replicating Transactional Changes
• Testing Your Configuration
• Security options
• Data-integration options (filtering, mapping, conversion)
• Instructions for configuring complex topologies
• Steps to perform initial instantiation of the replication environment
• Administrative topics
EXTRACT group
group is the name of the Extract group. For more information, see Reference for Oracle GoldenGate.

SOURCEDB database, USERIDALIAS alias
Specifies the real name of the source DB2 LUW database (not an alias), plus the alias of the database login credential of the user that is assigned to Extract. This credential must exist in the Oracle GoldenGate credential store. For more information, see Database User for Oracle GoldenGate Processes.

ENCRYPTTRAIL algorithm
Encrypts the local trail. For more information about Oracle GoldenGate trail encryption options, see Administering Oracle GoldenGate.

EXTTRAIL pathname
Specifies the path name of the local trail to which the primary Extract writes captured data for temporary storage.
TABLE schema.object;
Specifies the database object for which to capture data.
• TABLE is a required keyword.
• schema is the schema name or a wildcarded set of schemas.
• object is the table name or a wildcarded set of tables.
See Administering Oracle GoldenGate for information about how to specify object names with and without wildcards. Note that only the asterisk (*) wildcard is supported for DB2 LUW; the question mark (?) wildcard is not supported for this database. Terminate the parameter statement with a semicolon.
To exclude tables from a wildcard specification, use the TABLEEXCLUDE parameter. See Reference for Oracle GoldenGate for more information about usage and syntax.
For more information and for additional TABLE options that control data filtering, mapping, and manipulation, see Reference for Oracle GoldenGate.
3. Enter any optional Extract parameters that are recommended for your
configuration. You can edit this file at any point before starting processing by using
the EDIT PARAMS command in GGSCI. For a list of parameters and links to their
detailed reference, see Reference for Oracle GoldenGate.
4. Save and close the file.
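Putting these parameters together, a minimal primary Extract parameter file might look like the following sketch. The group, database, alias, trail, and table names are hypothetical:

```
EXTRACT extdb2
SOURCEDB mydb, USERIDALIAS ogg_ext
ENCRYPTTRAIL AES192
EXTTRAIL ./dirdat/lt
TABLE hr.*;
```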
EXTRACT group
group is the name of the data pump Extract. For more information, see Reference for Oracle GoldenGate.
SOURCEDB database, USERIDALIAS alias
Specifies the real name of the source DB2 LUW database (not an alias), plus the alias of the database login credential of the user that is assigned to Extract. This credential must exist in the Oracle GoldenGate credential store. For more information, see Database User for Oracle GoldenGate Processes.

RMTHOST hostname, MGRPORT portnumber [, ENCRYPT algorithm KEYNAME keyname]
• RMTHOST specifies the name or IP address of the target system.
• MGRPORT specifies the port number where Manager is running on the target.
• ENCRYPT specifies optional encryption of data across TCP/IP.
For additional options and encryption details, see Reference for Oracle GoldenGate.

RMTTRAIL pathname
Specifies the path name of the remote trail. For more information, see Reference for Oracle GoldenGate.

TABLE schema.object;
Specifies a table or sequence, or multiple objects specified with a wildcard. In most cases, this listing will be the same as that in the primary Extract parameter file.
• TABLE is a required keyword.
• schema is the schema name or a wildcarded set of schemas.
• object is the name of a table or a wildcarded set of tables.
See Administering Oracle GoldenGate for information about how to specify object names with and without wildcards. Note that only the asterisk (*) wildcard is supported for DB2 LUW; the question mark (?) wildcard is not supported for this database. Terminate the parameter statement with a semicolon.
To exclude tables from a wildcard specification, use the TABLEEXCLUDE parameter. See Reference for Oracle GoldenGate for more information about usage and syntax.
For more information and for additional TABLE options that control data filtering, mapping, and manipulation, see Reference for Oracle GoldenGate.
3. Enter any optional Extract parameters that are recommended for your
configuration. You can edit this file at any point before starting processing by using
the EDIT PARAMS command in GGSCI. For a list of parameters and links to their
detailed reference, see Reference for Oracle GoldenGate.
4. Save and close the file.
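A minimal data pump parameter file assembled from these parameters might look like the following sketch; all names, the host, and the port are hypothetical:

```
EXTRACT pmpdb2
SOURCEDB mydb, USERIDALIAS ogg_ext
RMTHOST tgthost, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE hr.*;
```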
REPLICAT financer
TARGETDB FINANCIAL USERID ogg, PASSWORD AACAAAAAAAAAAA, BLOWFISH ENCRYPTKEY mykey
ASSUMETARGETDEFS
-- Instead of ASSUMETARGETDEFS, use SOURCEDEFS if replicating from
-- DB2 LUW to a different database type, or from a DB2 LUW source
-- that is not identical in definitions to a target DB2 LUW database.
-- SOURCEDEFS /users/ogg/dirdef/defsfile
DISCARDFILE /users/ogg/disc
MAP hr.*, TARGET hr2.*;
SOURCEDEFS pathname | ASSUMETARGETDEFS
Specifies how to interpret data definitions. Use SOURCEDEFS if the source and target tables have different definitions, such as when replicating data between dissimilar DB2 LUW databases or from a DB2 LUW database to a database of another type. For pathname, specify the source data-definitions file that you created with the DEFGEN utility in "Creating a Data Definitions File". Use ASSUMETARGETDEFS if the source and target tables are all DB2 LUW and have the same definitions.

MAP owner.table, TARGET owner.table;
Specifies a relationship between a source and target table or tables. The MAP clause specifies the source objects, and the TARGET clause specifies the target objects to which the source objects are mapped.
• owner is the schema name.
• table is the name of a table or a wildcard definition for multiple tables.
For supported object names, see Administering Oracle GoldenGate. Terminate the MAP statement with a semicolon.
To exclude tables from a wildcard specification, use the MAPEXCLUDE parameter.
For more information and for additional options that control data filtering, mapping, and manipulation, see MAP in Reference for Oracle GoldenGate.
3. Enter any optional Replicat parameters that are recommended elsewhere in this
manual and any others shown in Summary of Replicat Parameters.
4. Save and close the file.
• Creating a Temporal Table
• Creating a Checkpoint Table
• Configuring the Replicat Parameter File
• You can replicate a temporal table, with its associated history table, to a target
temporal table and history table respectively. To do this, you must specify the
Replicat parameter DBOPTIONS SUPPRESSTEMPORALUPDATES. You must specify both the
temporal table and the history table to be captured in the Extract parameter file.
Oracle GoldenGate replicates the SYSTEM_TIME period and transaction ID column
values. You must ensure that the database instance has the execute permission to
run the stored procedure on the apply side.
Oracle GoldenGate cannot detect and resolve conflicts with default replication
because the SYSTEM_TIME period and transaction start ID columns remain
automatically generated; these columns cannot be specified in SET and WHERE
clauses. If you use the SUPPRESSTEMPORALUPDATES parameter, then Oracle
GoldenGate supports CDR.
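A Replicat parameter file that suppresses temporal updates might look like the following sketch; the group name, credential alias, and object names are hypothetical:

```
REPLICAT reptemp
TARGETDB mydb, USERIDALIAS ogg_rep
DBOPTIONS SUPPRESSTEMPORALUPDATES
MAP hr.policy_info, TARGET hr.policy_info;
MAP hr.hist_policy_info, TARGET hr.hist_policy_info;
```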
3.5.1.3 Converting
You can convert an existing table into a temporal table, which changes the
structure of the table. This section describes how the structure of the table changes.
The following sample existing table is converted into all three temporal table types in
the examples in this section:
CREATE TABLE policy_info
(
policy_id CHAR(4) NOT NULL PRIMARY KEY,
coverage INT NOT NULL
);
The table contains the following initial rows:
POLICY_ID COVERAGE
------------- -----------
ABC 12000
DEF 13000
ERT 14000
Then you create a history table for the new temporal table using one of the following
two methods:
• CREATE TABLE hist_policy_info
(
policy_id CHAR(4) NOT NULL,
coverage INT NOT NULL,
sys_start TIMESTAMP(12) NOT NULL ,
sys_end TIMESTAMP(12) NOT NULL,
ts_id TIMESTAMP(12) NOT NULL
);
ALTER TABLE hist_policy_info ADD RESTRICT ON DROP;
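The second of the two methods creates the history table directly from the base table:

```sql
CREATE TABLE hist_policy_info LIKE policy_info WITH RESTRICT ON DROP;
```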
The RESTRICT ON DROP clause prevents the history table from being dropped when
the system-period temporal table is dropped; otherwise, the history table is
implicitly dropped when its associated temporal table is dropped. You can create a
history table without RESTRICT ON DROP. A history table cannot be explicitly dropped.
You should not use the GENERATED ALWAYS clause when creating a history table. The
primary key of the system-period temporal table also does not apply here, because
there can be many updates for a particular row in the base table, which trigger
many inserts into the history table for the same set of primary keys. Apart from
these differences, the structure of a history table must be exactly the same as that
of its associated system-period temporal table: the history table must have the same
number and order of columns, and its columns cannot explicitly be added, dropped,
or changed. You must associate a system-period temporal table with its history
table with the following statement:
ALTER TABLE policy_info ADD VERSIONING USE HISTORY TABLE hist_policy_info;
The GENERATED ALWAYS columns of the table are the ones that are always populated
by the database manager, so you do not have any control over these columns.
The database manager populates them based on the system time.
The added SYSTEM_PERIOD and transaction ID columns have default values for
already existing rows, as in the following:
POLICY_ID COVERAGE SYS_START                        SYS_END                          TS_ID
--------- -------- -------------------------------- -------------------------------- --------------------------------
ABC       12000    0001-01-01-00.00.00.000000000000 9999-12-30-00.00.00.000000000000 0001-01-01-00.00.00.000000000000
DEF       13000    0001-01-01-00.00.00.000000000000 9999-12-30-00.00.00.000000000000 0001-01-01-00.00.00.000000000000
ERT       14000    0001-01-01-00.00.00.000000000000 9999-12-30-00.00.00.000000000000 0001-01-01-00.00.00.000000000000
The associated history table is populated with the before images once you start
updating the temporal table.
While adding the time columns, you must make sure that the business validity time
values entered for existing rows give the bus_start column a value that is always
less than bus_end, because these columns specify the business validity of the
rows.
The new application-period temporal table will look similar to:
POLICY_ID COVERAGE BUS_START BUS_END
--------- ----------- ---------- -------------------------------
ALTER TABLE policy_info ADD COLUMN bus_start DATE NOT NULL DEFAULT '10/10/2001';
ALTER TABLE policy_info ADD COLUMN bus_end DATE NOT NULL DEFAULT '10/10/2002';
ALTER TABLE policy_info ADD PERIOD BUSINESS_TIME(bus_start, bus_end);
Then you create a history table for the new temporal table using one of the following
two methods:
• CREATE TABLE hist_policy_info
(
policy_id CHAR(4) NOT NULL,
coverage INT NOT NULL,
sys_start TIMESTAMP(12) NOT NULL ,
sys_end TIMESTAMP(12) NOT NULL,
ts_id TIMESTAMP(12) NOT NULL
);
ALTER TABLE hist_policy_info ADD RESTRICT ON DROP;
CREATE TABLE hist_policy_info LIKE policy_info WITH RESTRICT ON DROP;
The RESTRICT ON DROP clause prevents the history table from being dropped when
the system-period temporal table is dropped; otherwise, the history table is
implicitly dropped when its associated temporal table is dropped. You can create a
history table without RESTRICT ON DROP. A history table cannot be explicitly dropped.
You should not use the GENERATED ALWAYS clause when creating a history table. The
primary key of the system-period temporal table also does not apply here, because
there can be many updates for a particular row in the base table, which trigger
many inserts into the history table for the same set of primary keys. Apart from
these differences, the structure of a history table must be exactly the same as that
of its associated system-period temporal table: the history table must have the same
number and order of columns, and its columns cannot explicitly be added, dropped,
or changed. You must associate a system-period temporal table with its history
table with the following statement:
ALTER TABLE policy_info ADD VERSIONING USE HISTORY TABLE hist_policy_info;
The GENERATED ALWAYS columns of a table are always populated by the database
manager, based on the system time, so you have no control over their values.
The added SYSTEM_PERIOD and transaction ID columns will have default values for
already existing rows, as in the following:
POLICY_ID COVERAGE SYS_START                        SYS_END                          TS_ID
--------- -------- -------------------------------- -------------------------------- --------------------------------
ABC       12000    0001-01-01-00.00.00.000000000000 9999-12-30-00.00.00.000000000000 0001-01-01-00.00.00.000000000000
DEF       13000    0001-01-01-00.00.00.000000000000 9999-12-30-00.00.00.000000000000 0001-01-01-00.00.00.000000000000
ERT       14000    0001-01-01-00.00.00.000000000000 9999-12-30-00.00.00.000000000000 0001-01-01-00.00.00.000000000000
The associated history table is populated with the before images once you start
updating the temporal table.
The added SYSTEM_TIME period, transaction ID, and business time columns will have
default values for already existing rows, as in the following:
POLICY_ID COVERAGE SYS_START                        SYS_END                          TS_ID                            BUS_START  BUS_END
--------- -------- -------------------------------- -------------------------------- -------------------------------- ---------- ----------
ABC       12000    0001-01-01-00.00.00.000000000000 9999-12-30-00.00.00.000000000000 0001-01-01-00.00.00.000000000000 10/10/2001 10/10/2002
DEF       13000    0001-01-01-00.00.00.000000000000 9999-12-30-00.00.00.000000000000 0001-01-01-00.00.00.000000000000 10/10/2001 10/10/2002
ERT       14000    0001-01-01-00.00.00.000000000000 9999-12-30-00.00.00.000000000000 0001-01-01-00.00.00.000000000000 10/10/2001 10/10/2002
The history table is populated with the before images once you start updating the
temporal table.
POLICY_ID COVERAGE SYS_START                        SYS_END                          TS_ID
--------- -------- -------------------------------- -------------------------------- --------------------------------
ABC       12000    0001-01-01-00.00.00.000000000000 9999-12-30-00.00.00.000000000000 0001-01-01-00.00.00.000000000000
To replicate the row into MySQL, you would use the COLMAP clause:
map source_schema.policy_info, target target_schema.policy_info colmap
(policy_id=policy_id, coverage=coverage,
sys_start= @IF( ( @NUMSTR( @STREXT(sys_start,1,4))) > 1000, sys_start, '1000-01-01 00.00.00.000000'),
sys_end=sys_end,
ts_id= @IF( ( @NUMSTR( @STREXT(ts_id,1,4))) > 1000, ts_id, '1000-01-01 00.00.00.000000'));
Parameter Description
TARGETDB database, Specifies the real name of the target DB2 LUW database (not an alias), plus the
USERIDALIAS alias alias of the database login credential of the user that is assigned to Replicat.
This credential must exist in the Oracle GoldenGate credential store. For more
information, see Database User for Oracle GoldenGate Processes.
3. Enter any optional Replicat parameters that are recommended for your
configuration. You can edit this file at any point before starting processing by using
the EDIT PARAMS command in GGSCI. For a list of parameters and links to their
detailed reference, see Oracle Fusion Middleware Reference for Oracle
GoldenGate for Windows and UNIX.
4. Save and close the file.
Part III
Using Oracle GoldenGate with IBM DB2 for i
This part describes tasks for configuring and running Oracle GoldenGate on a DB2 for i
database.
With Oracle GoldenGate for DB2 for i, you can:
• Map, filter, and transform transactional data changes between similar or dissimilar
supported DB2 for i versions, and between supported DB2 for i versions and other
supported types of databases.
• Perform initial loads from DB2 for i to target tables in DB2 for i or other databases
to instantiate a synchronized replication environment.
Topics:
• Understanding What's Supported for IBM DB2 for i
With Oracle GoldenGate on DB2 for i, you can replicate data to and from similar or
dissimilar supported DB2 for i versions, or you can replicate data between a DB2
for i database and a database of another type.
• Preparing the System for Oracle GoldenGate
• Configuring Oracle GoldenGate for DB2 for i
• Instantiating and Starting Oracle GoldenGate Replication
• Using Remote Journal
4
Understanding What's Supported for IBM DB2 for i
With Oracle GoldenGate on DB2 for i, you can replicate data to and from similar or
dissimilar supported DB2 for i versions, or you can replicate data between a DB2 for i
database and a database of another type.
Oracle GoldenGate on DB2 for i supports the filtering, mapping, and transformation of
data unless otherwise noted in this documentation.
Oracle GoldenGate for DB2 for i runs directly on a DB2 for i source system to capture
data from the transaction journals for replication to a target system. To apply data to a
target DB2 for i database, Oracle GoldenGate can run directly on the DB2 for i target
system or on a remote Windows or Linux system. If installed on a remote system,
Replicat delivers the data by means of an ODBC connection, and no Oracle
GoldenGate software is installed on the DB2 for i target.
Note:
The DB2 for i platform uses one or more journals to keep a record of
transaction change data. For consistency of terminology in the supporting
administrative and reference Oracle GoldenGate documentation, the terms
"log" or "transaction log" may be used interchangeably with the term "journal"
where the use of the term "journal" is not explicitly required.
Topics:
• Supported DB2 for i Data Types
• Non-Supported DB2 for i Data Types
• Supported Objects and Operations for DB2 for i
• Non-Supported Objects and Operations for DB2 for i
• Oracle GoldenGate Parameters Not Supported for DB2 for i
• Supported Object Naming Conventions
• Supported Character Sets
Limitations of support
The Extract process fully supports the capture and apply of TIMESTAMP(0) through
TIMESTAMP(6). Extract also captures TIMESTAMP(7) through TIMESTAMP(12), but it
truncates the data to microseconds (maximum of six digits of fractional time) and
issues a warning to the error log. Replicat truncates timestamp data from other
sources to microseconds when applying it to TIMESTAMP(7) through TIMESTAMP(12) in a
DB2 for i target.
Oracle GoldenGate supports timestamp data from 0001/01/03:00:00:00.000000 to
9999/12/31:23:59:59.999999. If a timestamp is converted from GMT to local time,
these limits also apply to the resulting timestamp. Depending on the time zone,
conversion may add or subtract hours, which can cause the timestamp to exceed the
lower or upper supported limit.
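The truncation behavior described above can be sketched in Python (the helper name and the assumption that a full DB2-style timestamp literal has the form yyyy-mm-dd-hh.mi.ss.fraction are illustrative; this is not Oracle GoldenGate code):

```python
def truncate_to_microseconds(ts: str) -> str:
    """Truncate a DB2-style timestamp literal (yyyy-mm-dd-hh.mi.ss.ffffff...)
    to microsecond precision, mirroring how Extract handles TIMESTAMP(7)
    through TIMESTAMP(12). Illustrative sketch only."""
    parts = ts.split(".")
    # parts = [yyyy-mm-dd-hh, mi, ss, fraction]; trim the fraction if present
    if len(parts) == 4 and len(parts[3]) > 6:
        parts[3] = parts[3][:6]
    return ".".join(parts)

print(truncate_to_microseconds("2018-11-05-10.30.00.123456789012"))
# 2018-11-05-10.30.00.123456
```

A TIMESTAMP(12) value thus keeps only its first six fractional digits, which is why Extract issues a warning to the error log rather than failing.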
• DATALINK
• DECFLOAT
• User-defined types
– An extra process named ogghb starts running from the time the ADD
HEARTBEATTABLE command is given and runs until you disable the heartbeat with
For native (system) names, Oracle GoldenGate supports the normal DB2 for i naming
rules for wildcarding, which allows *ALL or a partial name with a trailing asterisk (*)
wildcard. For example:
• library/*all(*all)
• library/a*(a*)
• library/abcde*
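The native-name wildcard rules above can be sketched as a small matcher (a hypothetical helper written for illustration, not part of Oracle GoldenGate):

```python
def native_name_matches(pattern: str, name: str) -> bool:
    """Match a DB2 for i native object name against a wildcard pattern:
    *ALL matches every name, and a trailing '*' matches any suffix.
    Sketch of the documented rule; names are compared case-insensitively."""
    pattern, name = pattern.upper(), name.upper()
    if pattern == "*ALL":
        return True
    if pattern.endswith("*"):
        return name.startswith(pattern[:-1])
    return name == pattern

print(native_name_matches("A*", "ABCDE"))   # True
```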
The member name is optional and may be left off. In that case, data for all of the
members will be extracted, but only the library and file names will be captured and
included in the records that are written to the trail. The result is that the data will
appear to have come from only one member on the source, and you should be aware
that this could cause integrity conflicts on the target if there are duplicate keys across
members. To include the member name in the trail records, include the member
explicitly or through a wildcarded member specification.
For SQL names, only the first member in the underlying native file is extracted in
accordance with the normal operation of SQL on a DB2 for i system. For SQL names,
Oracle GoldenGate supports the wildcarding of table names, but not schema names.
For instructions on wildcarding SQL names, see Specifying Object Names in Oracle
GoldenGate Input in Administering Oracle GoldenGate.
Where COL1, COL2, COL3 are CHAR or VARCHAR fields with a valid CCSID.
For a CCSID 65535 character field (equivalent to a BINARY or CHAR FOR BIT DATA field),
you can specify a column character set override with the valid character set of the
column. The field is then no longer seen as a binary field by the Oracle GoldenGate
processes, but as a normal character field. The data is converted to UTF-8, and
you cannot use the PASSTHRU capability if a COLCHARSET is specified to provide a
specific character encoding for a binary field. The conversion to UTF-8 makes it
possible for this type of override to be compatible with HP NonStop or any other
platform that requires essentially ASCII-compatible data from the source. The default
behavior for a CCSID 65535 CHAR or VARCHAR field is not to convert the field and to
bind it as binary. For example:
TABLE GGSCHEMA.TABLE3, COLCHARSET(IBM037, COL1, COL2);
Where COL1 and COL2 are CHAR or VARCHAR fields with CCSID 65535.
You can set a column character set override for any CHAR, CLOB, or VARCHAR column with
a valid character set (not CCSID 65535) when the data contained in the column has a
different character set than what you intended. For example, if there is a CHAR column
with CCSID set as 1047, and the data contained in it is actually in CCSID 37, you can
set a column override CCSID for the column in the Extract or DEFGEN parameter file
so that when it is processed, the processes recognize that the data is actually in
CCSID 37 and not in CCSID 1047. The data is treated as CCSID 37, but converted to UTF-8.
You cannot override the CHARSET and use PASSTHRU together. For example:
TABLE GGSCHEMA.TABLE4, COLCHARSET(IBM037, COL1);
Where COL1 is a CHAR or VARCHAR field defined with CCSID 1047 and the data is
actually in CCSID 37.
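The effect of reading the bytes with the overridden CCSID and converting to UTF-8 can be illustrated with Python's EBCDIC codecs (CCSID 37 corresponds to Python's cp037 codec; this is an illustration of the conversion, not Oracle GoldenGate code):

```python
# Bytes stored in an EBCDIC column whose real encoding is CCSID 37 (cp037).
raw = "POLICY DATA".encode("cp037")

# With a COLCHARSET(IBM037, ...) override, the processes read the bytes
# as CCSID 37 instead of the declared column CCSID ...
text = raw.decode("cp037")

# ... and the character data is then converted to UTF-8.
utf8_bytes = text.encode("utf-8")
print(utf8_bytes)   # b'POLICY DATA'
```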
If the column character set of a source column is not a valid character set supported
by Oracle GoldenGate ICU and you specify COLCHARSET PASSTHRU in the Extract or
DEFGEN parameter file, then the PASSTHRU behavior is ignored, and the column data is
converted to Unicode. This ensures that the data is convertible on the Replicat. Since
the data has no ICU representation, there is no way to indicate what character set the
data is really in.
5
Preparing the System for Oracle
GoldenGate
This chapter contains guidelines for preparing the DB2 for i system to support Oracle
GoldenGate.
Topics:
• Preparing the Journals for Data Capture by Extract
• Specifying Object Names
• Preparing Tables for Processing
• Adjusting the System Clock
• Configuring the ODBC Driver
Note:
To ensure transaction integrity, all journals that correspond to any given
transaction must be read by the same Extract group. For more information
about using multiple Extract processes, see Tuning the Performance of
Oracle GoldenGate in Administering Oracle GoldenGate.
Note:
To check the attributes of a journal, use the command WRKJRNA JRN(LIB1/JRN1)
DETAIL(*CURATR).
When the journaling is set to the recommended parameter settings, you are assured
that the entries in the journals contain all of the information necessary for Oracle
GoldenGate processing to occur. These settings also ensure that the system does not
delete the journal receivers automatically, but retains them in case Extract needs to
process older data.
Where:
library and journal_receiver are the actual names of the library and journal
receiver to be deleted. See the DB2 for i Information Center for more information
about this command.
5.3.1.1 How Oracle GoldenGate Determines the Kind of Row Identifier to Use
Unless a KEYCOLS clause is used in the TABLE or MAP statement, Oracle GoldenGate
selects a row identifier to use in the following order of priority:
1. Primary key
2. First unique key alphanumerically that does not contain a timestamp or non-
materialized computed column.
3. If none of the preceding key types exist (even though there might be other types of
keys defined on the table), Oracle GoldenGate constructs a pseudo key of all
columns that the database allows to be used in a unique key, excluding those that
are not supported by Oracle GoldenGate in a key or those that are excluded from
the Oracle GoldenGate configuration.
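The selection order above can be sketched as a small function (the table description is a hypothetical dict invented for illustration, not an Oracle GoldenGate structure):

```python
def choose_row_identifier(table: dict):
    """Pick a row identifier using the documented priority:
    1) primary key; 2) first unique key (alphanumerically by name) with no
    timestamp or non-materialized computed columns; 3) a pseudo key of all
    eligible columns. Sketch only."""
    if table.get("primary_key"):
        return "primary key", table["primary_key"]
    bad = table.get("timestamp_or_computed", set())
    for name in sorted(table.get("unique_keys", {})):
        cols = table["unique_keys"][name]
        if not any(c in bad for c in cols):
            return "unique key", cols
    # Fall back to a pseudo key of every column eligible for a unique key.
    pseudo = [c for c in table["columns"] if c not in table.get("unsupported", set())]
    return "pseudo key", pseudo

table = {
    "columns": ["id", "code", "updated_at"],
    "unique_keys": {"uk_a": ["updated_at"], "uk_b": ["code"]},
    "timestamp_or_computed": {"updated_at"},
}
print(choose_row_identifier(table))   # ('unique key', ['code'])
```

The pseudo-key fallback is what the Note below warns about: matching on every column is slower on both source and target.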
Note:
If there are other, non-usable keys on a table or if there are no keys at all on
the table, Oracle GoldenGate logs an appropriate message to the report file.
Constructing a key from all of the columns impedes the performance of
Oracle GoldenGate on the source system. On the target, this key causes
Replicat to use a larger, less efficient WHERE clause.
2. Issue the following command until it returns EOF, indicating that it has processed all
of the existing journal data.
INFO EXTRACT group
3. Make the change to the key.
4. Start Extract.
START EXTRACT group
Where: SOURCEDB specifies the default DB2 for i database, USERID specifies the
Extract user profile, and PASSWORD specifies that profile's password.
Note:
Only BLOWFISH encryption is supported for DB2 for i systems.
Any ADD TRANDATA command used without a journal assumes the journal from
DEFAULTJOURNAL.
To display the current setting of DEFAULTJOURNAL, you can issue the command with no
arguments.
1. Download and install the 32-bit or 64-bit iSeries Access ODBC driver on the
remote Linux system according to the vendor documentation. The iSeries ODBC
driver is supplied as a free component of iSeries Access.
2. Issue one of the following commands, depending on the driver that you want to
use.
32-bit driver:
rpm -ivh iSeriesAccess-7.1.0-1.0.i386.rpm
64-bit driver:
rpm -ivh iSeriesAccess-7.1.0-1.0.x86_64.rpm
3. You can create a user DSN (a connection that is available only to the user that
created it) or a system DSN (a connection that is available to all users on the
system). To create a user DSN, log on to the system as the user that you will be
using for the Replicat process.
4. Run the ODBC configuration utility.
5. On the initial page of the ODBC configuration tool, select the User DSN tab to
create a user DSN or the System DSN tab to create a system DSN. (These steps
create a user DSN; creating a system DSN is similar.)
Figure 5-3 Manually Editing Driver Properties When the Driver is Not Found
9. You are returned to the ODBC Data Source Administrator dialog. Click OK to exit
the ODBC configuration utility.
10. To support GRAPHIC, VARGRAPHIC and DBCLOB types, edit the .odbc.ini file and add the
following line.
GRAPHIC = 1
Note:
If you created a user Data Source Name, this file is located in the home
directory of the user that created it. If you created a system DSN, this file
is in /etc/odbc.ini or /usr/local/etc/odbc.ini.
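The resulting .odbc.ini entry might look like the following sketch, checked here with Python's configparser (the DSN name OGGTARGET, system name, and driver name are assumptions for illustration):

```python
import configparser

# A minimal .odbc.ini fragment with the GRAPHIC = 1 line added in step 10.
odbc_ini = """
[OGGTARGET]
Description = DB2 for i target for Replicat
Driver = iSeries Access ODBC Driver
System = MYSYSTEM
GRAPHIC = 1
"""

config = configparser.ConfigParser()
config.read_string(odbc_ini)
print(config["OGGTARGET"]["GRAPHIC"])   # 1
```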
11. From the Oracle GoldenGate directory on the target, run GGSCI and issue the
DBLOGIN command to log into the target database. See Reference for Oracle
GoldenGate for detailed syntax.
DBLOGIN SOURCEDB database, USERID db_user [, PASSWORD pw [encryption options]]
Where:
• SOURCEDB database specifies the new Data Source Name.
• USERID db_user, PASSWORD pw are the Replicat database user profile and
password.
• encryption options is optional password encryption.
Note:
Only BLOWFISH encryption is supported for DB2 for i systems.
7. On the General tab of the DB2 for i Access for Windows ODBC Setup dialog,
provide a name (without any spaces) in the Data Source Name field, add an
optional description in the Description field, and then select the system name
from the System selection list.
8. On the Server tab, set Naming Convention to SQL Naming Convention (*SQL).
Leave the other fields set to their defaults.
9. On the Data Types tab, select the Report as Supported check box under Double
Byte Character Set (DBCS) graphic data types.
10. On the Conversions tab, clear the Convert binary data (CCSID 65535) to text
check box.
11. Click Apply, then OK. You are returned to the ODBC Data Source Administrator
dialog.
12. Confirm that the new Data Source Name appears under User Data Sources.
14. From the Oracle GoldenGate directory on the target, run GGSCI and issue the
DBLOGIN command to log into the target database. See Reference for Oracle
GoldenGate for detailed syntax.
DBLOGIN SOURCEDB database, USERID db_user [, PASSWORD pw [encryption_options]]
Where:
• USERID db_user, PASSWORD pw are the Replicat database user profile and
password.
• encryption_options is optional password encryption.
Note:
Only BLOWFISH encryption is supported for DB2 for i systems.
6
Configuring Oracle GoldenGate for DB2 for i
This chapter contains instructions for configuring Oracle GoldenGate to capture source
DB2 for i data and apply it to a supported target database.
Topics:
• What to Expect from this Procedure
• Getting Started with Oracle GoldenGate
• Creating the Oracle GoldenGate Instance
• Creating a GLOBALS File
• Creating a Data Definitions File
• Encrypting the Extract and Replicat Passwords
• Configuring Extract for Change Capture from DB2 for i
• Configuring Replicat for Change Delivery to DB2 for i
• Next Steps in the Deployment
• When to Start Replicating Transactional Changes
• Testing Your Configuration
Note:
The Oracle GoldenGate credential store is not supported by the iSeries
platform.
Parameter Description
EXTRACT group group is the name of the Extract group. For more information, see Reference for
Oracle GoldenGate.
SOURCEDB database, Specifies the real name of the source DB2 LUW database (not an alias), plus the
USERIDALIAS alias alias of the database login credential of the user that is assigned to Extract. This
credential must exist in the Oracle GoldenGate credential store. For more
information, see Database User for Oracle GoldenGate Processes.
ENCRYPTTRAIL algorithm Encrypts the local trail. For more information about Oracle GoldenGate trail
encryption options, see Administering Oracle GoldenGate.
EXTTRAIL pathname Specifies the path name of the local trail to which the primary Extract writes
captured data for temporary storage.
TABLE schema.object; Specifies the database object for which to capture data.
• TABLE is a required keyword.
• schema is the schema name or a wildcarded set of schemas.
• object is the table name, or a wildcarded set of tables.
See Administering Oracle GoldenGate for information about how to specify object
names with and without wildcards. Note that only the asterisk (*) wildcard is
supported for DB2 LUW. The question mark (?) wildcard is not supported for this
database.
Terminate the parameter statement with a semi-colon.
To exclude tables from a wildcard specification, use the TABLEEXCLUDE parameter.
See Reference for Oracle GoldenGate for more information about usage and
syntax.
For more information and for additional TABLE options that control data filtering,
mapping, and manipulation, see Reference for Oracle GoldenGate.
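The asterisk-only wildcard rule can be sketched like this (a hypothetical helper with invented table names, not Oracle GoldenGate code):

```python
import fnmatch

def tables_matching(spec: str, tables: list) -> list:
    """Expand a TABLE schema.object specification against a list of known
    tables. Only the '*' wildcard is supported; '?' is rejected, mirroring
    the restriction described above. Sketch only."""
    if "?" in spec:
        raise ValueError("the '?' wildcard is not supported for this database")
    return [t for t in tables if fnmatch.fnmatchcase(t.upper(), spec.upper())]

print(tables_matching("HR.EMP*", ["HR.EMP", "HR.EMPLOYEES", "FIN.EMP"]))
# ['HR.EMP', 'HR.EMPLOYEES']
```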
3. Enter any optional Extract parameters that are recommended for your
configuration. You can edit this file at any point before starting processing by using
the EDIT PARAMS command in GGSCI. For a list of parameters and links to their
detailed reference, see Reference for Oracle GoldenGate.
4. Save and close the file.
3. Enter any optional Replicat parameters that are recommended elsewhere in this
manual and any others shown in Summary of Replicat Commands.
4. Save and close the file.
Note:
A database does not have to exist on the Windows or Linux machine to support
connection through ODBC by Replicat.
Where: library_name is the name of the library and journal_name is the name of the
default journal.
2. Add the checkpoint table.
ADD CHECKPOINTTABLE library_name.chkptab
Parameter Description
SOURCEDEFS pathname | Specifies how to interpret data definitions. Use SOURCEDEFS if the source and target
ASSUMETARGETDEFS tables have different definitions, such as when replicating data between dissimilar DB2
for i databases or from a DB2 for i database to an Oracle database. For pathname,
specify the source data-definitions file that you created with the DEFGEN utility in
"Creating a Data Definitions File". Use ASSUMETARGETDEFS if the source and target
tables are all DB2 for i and have the same definitions.
MAP owner.table, Specifies a relationship between a source and target table or tables. The MAP clause
TARGET owner.table; specifies the source objects, and the TARGET clause specifies the target objects to
which the source objects are mapped.
• owner is the schema or library name.
• table is the name of a table or a wildcard definition for multiple tables.
For supported object names, see Administering Oracle GoldenGate.
Terminate the MAP statement with a semi-colon.
To exclude tables from a wildcard specification, use the MAPEXCLUDE parameter.
For more information and for additional options that control data filtering, mapping,
and manipulation, see MAP in Reference for Oracle GoldenGate.
3. Enter any optional Replicat parameters that are recommended elsewhere in this
manual and any others shown in Summary of Replicat Parameters.
4. Save and close the file.
Note:
Because the journals can have a transaction split among them, if a given
journal is independently repositioned far into the past, the resulting latency
from reprocessing the entries may cause the already-read journals to stall
until the reading of the latent journal catches up.
7
Instantiating and Starting Oracle GoldenGate Replication
This chapter contains instructions for configuring an initial load of target data,
adding the required processes to instantiate replication, and performing the
instantiation. The expected outcome of these steps is that source and target data are
made consistent (known as the initial synchronization), and that Oracle GoldenGate
captures and delivers ongoing transactional changes so that consistency is maintained
going forward.
Topics:
• About the Instantiation Process
• Overview of Basic Oracle GoldenGate Instantiation Steps
• Satisfying Prerequisites for Instantiation
• Making the Instantiation Procedure More Efficient
• Configuring the Initial Load
• Adding Change-Capture and Change-Delivery processes
• Performing the Target Instantiation
• Monitoring Processing after the Instantiation
• Backing up Your Oracle GoldenGate Environment
• Positioning Extract After Startup
You can use the ALCOBJ command to lock the objects or libraries, or you can force all
of the current transactions on those tables to stop at a certain point.
After initialization is complete, remember to unlock any objects that you locked. To do
so, log off of the session that locked the objects or use the DLCOBJ command from the
OS/400 command line.
• UPDATE and DELETE operations for which the row does not exist.
For more information about this parameter, see Reference for Oracle GoldenGate for
Windows and UNIX.
of processes. You can isolate large tables from smaller ones by using different sets of
processes, or simply apportion the load across any number of process sets. To
configure parallel processes correctly, see Administering Oracle GoldenGate for
Windows and UNIX.
To use Replicat to establish the target data, you use an initial-load Extract to extract
source records from the source tables and write them to an extract file in canonical
format. From the file, an initial-load Replicat loads the data using the database
interface. During the load, the change-synchronization groups extract and replicate
incremental changes, which are then reconciled with the results of the load.
During the load, the records are applied to the target database one record at a time, so
this method may be considerably slower than using a native DB2 for i load utility. This
method permits data transformation to be done on either the source or target system.
3. Enter the parameters listed in the following table in the order shown, starting a
new line for each parameter statement.
Parameter Description
SOURCEDB database USERID user id, PASSWORD password, BLOWFISH ENCRYPTKEY keyname
Specifies database connection information.
• SOURCEDB specifies the name of the source database.
• USERID specifies the Extract database user profile.
• PASSWORD specifies the user's password that was encrypted with the ENCRYPT PASSWORD
command (see "Encrypting the Extract and Replicat Passwords"). Enter or paste the
encrypted password after the PASSWORD keyword.
• BLOWFISH ENCRYPTKEY keyname specifies the name of the lookup key in the local
ENCKEYS file.
RMTHOST hostname, MGRPORT portnumber, [encryption options]
• RMTHOST specifies the name or IP address of the target system.
• MGRPORT specifies the port number where Manager is running on the target.
• encryption options specifies optional encryption of data across TCP/IP.
For additional options and encryption details, see Reference for Oracle GoldenGate
for Windows and UNIX.
ENCRYPTTRAIL BLOWFISH KEYNAME keyname
Encrypts the remote file with Blowfish encryption. For more information about
security, see Administering Oracle GoldenGate for Windows and UNIX.
RMTFILE path name, [MEGABYTES n]
Specifies the remote file to which the load data will be written. Oracle GoldenGate
creates this file during the load.
• path name is the relative or fully qualified name of the file.
• MEGABYTES designates the size of each file.
Note: The size of an extract file cannot exceed 2GB.
4. Enter any appropriate optional Extract parameters listed in Reference for Oracle
GoldenGate for Windows and UNIX.
5. Save and close the parameter file.
6. On the target system, issue the following command to create an initial-load
Replicat parameter file. This Replicat should have a different name from the
Replicat group that applies the transactional data.
EDIT PARAMS initial-load Replicat name
7. Enter the parameters listed in Table 7-1 in the order shown, starting a new line for
each parameter statement.
Table 7-1 Initial Load Replicat Parameters for Loading Data from File to Replicat
Parameter Description
SPECIALRUN Implements the initial-load Replicat as a one-time run that does not
use checkpoints.
END RUNTIME Directs the initial-load Replicat to terminate when the load is
finished.
EXTFILE path name | Specifies the input extract file that was specified with the
EXTTRAIL path name Extract parameter RMTFILE.
MAP owner.table, Specifies a relationship between a source and target table or tables.
TARGET owner.table;
8. Enter any appropriate optional Replicat parameters listed in the Reference for
Oracle GoldenGate for Windows and UNIX.
9. Save and close the file.
This graphic shows the parallel flows of the initial load and the ongoing capture and
replication of transactional changes during the load period. The copy utility writes the
data to a file, which is loaded to the target. Meanwhile, an Extract process captures
change data and sends it to a trail on the target for Replicat to read and apply to the
target.
For an initial load between two DB2 for i source and target systems, you can use the
DB2 for i system utilities to establish the target data. To do this, you save the file(s)
that you want to load to the target by using the SAVOBJ or SAVLIB commands, and then
you restore them on the target using the RSTOBJ or RSTLIB commands.
Another alternative is to use the DB2 for i commands CPYTOIMPF (Copy to Import File)
and CPYFRMIMPF (Copy from Import File) to create files that can be used with the bulk
load utilities of other databases. See the DB2 for i Information Center documentation
for more details on "Copying between different systems."
In both cases, no special configuration of any Oracle GoldenGate initial-load
processes is needed. You use the change-synchronization process groups that you
configured in Configuring Oracle GoldenGate for DB2 for i . You start a change-
synchronization Extract group to extract ongoing data changes while you are making
the copy and loading it. When the copy is finished, you start the change-
synchronization Replicat group to re-synchronize rows that were changed while the
copy was being applied. From that point forward, both Extract and Replicat continue
running to maintain data synchronization. See "Adding Change-Capture and Change-
Delivery processes".
Note:
Perform these steps at or close to the time that you are ready to start the
initial load and change capture.
These steps establish the Oracle GoldenGate Extract, data pump, and Replicat
processes that you configured in Configuring Oracle GoldenGate for DB2 for i .
Collectively known as the "change-synchronization" processes, these are the
processes that:
• capture and apply ongoing source changes while the load is being performed on
the target
• reconcile any collisions that occur
Note:
Perform these steps as close as possible to the time that you plan to start the
initial load processes. You will start these processes during the initial load
steps.
Where:
• group name is the name of the primary Extract group that captures the
transactional changes.
• TRANLOG specifies the journals as the data source.
• BEGIN specifies to begin capturing data as of a specific time. Select one of two
options: NOW starts at the first record that is timestamped at the same time that
BEGIN is issued. yyyy-mm-dd[hh:mi:[ss[.cccccc]]] starts at an explicit
timestamp. Logs from this timestamp must be available.
• SEQNO seqno specifies to begin capturing data at, or just after, a system
sequence number, which is a decimal number up to 20 digits in length.
3. (Optional) Issue the following command to alter any ADD EXTRACT start position to
set the start position for a specific journal in the same Extract configuration. A
specific journal position set with ALTER EXTRACT does not affect any global position
that was previously set with ADD EXTRACT or ALTER EXTRACT; however a global
position set with ALTER EXTRACT overrides any specific journal positions that were
previously set in the same Extract configuration.
ALTER EXTRACT group name,
{BEGIN {NOW | yyyy-mm-dd [hh:mi[:ss[.cccccc]]]} [JOURNAL journal_library/journal_name [JRNRCV receiver_library/receiver_name]] |
EOF [JOURNAL journal_library/journal_name [JRNRCV receiver_library/receiver_name]] |
SEQNO seqno [JOURNAL journal_library/journal_name [JRNRCV receiver_library/receiver_name]]}
Note:
SEQNO, when used with a journal in ALTER EXTRACT, is the journal sequence
number that is relative to that specific journal, not the system sequence
number that is global across journals.
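As an illustration of the per-journal form, the following sketch repositions one journal by its journal-relative sequence number while leaving the global position untouched (the group name, library, journal name, and sequence number are assumptions made for the example):

```
ALTER EXTRACT extora, SEQNO 4501 JOURNAL MYLIB/MYJRN
```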
Where:
• EXTTRAIL specifies that the trail is to be created on the local system.
• pathname is the relative or fully qualified name of the trail, including the two-
character name.
• EXTRACT group name is the name of the primary Extract group.
Example 7-6
ADD EXTTRAIL /ggs/dirdat/lt, EXTRACT finance
Where:
• group name is the name of the data-pump Extract group.
• EXTTRAILSOURCE trail name is the relative or fully qualified name of the local trail.
Example 7-7
ADD EXTRACT financep, EXTTRAILSOURCE c:\ggs\dirdat\lt
Where:
• RMTTRAIL specifies that the trail is to be created on the target system, and pathname
is the relative or fully qualified name of the trail, including the two-character name.
• EXTRACT group name is the name of the data-pump Extract group.
Example 7-8
ADD RMTTRAIL /ggs/dirdat/rt, EXTRACT financep
Where:
• group name is the name of the Replicat group.
• EXTTRAIL pathname is the relative or fully qualified name of the remote trail,
including the two-character name.
Example 7-9
ADD REPLICAT financer, EXTTRAIL c:\ggs\dirdat\rt
Performing the Target Instantiation
3. On the source system, start the primary and data pump Extract groups to start
change extraction.
START EXTRACT primary Extract group name
START EXTRACT data pump Extract group name
4. From the directory where Oracle GoldenGate is installed on the source system,
start the initial-load Extract as follows:
$ /GGS directory/extract paramfile dirprm/initial-load Extract name.prm
reportfile path name
Where: initial-load Extract name is the name of the initial-load Extract that you
used when creating the parameter file, and path name is the relative or fully
qualified name of the Extract report file (by default the dirrpt sub-directory of the
Oracle GoldenGate installation directory).
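Substituting concrete names into the syntax above gives a sketch like the following (the installation path /ogg and the group name extload are illustrative assumptions, not values from this procedure):

```
$ /ogg/extract paramfile dirprm/extload.prm reportfile dirrpt/extload.rpt
```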
5. Verify the progress and results of the initial extraction by viewing the Extract report
file using the operating system's standard method for viewing files.
6. Wait until the initial extraction is finished.
7. On the target system, start the initial-load Replicat.
$ /GGS directory/replicat paramfile dirprm/initial-load Replicat name.prm
reportfile path name
Where: initial-load Replicat name is the name of the initial-load Replicat that you
used when creating the parameter file, and path name is the relative or fully
qualified name of the Replicat report file (by default the dirrpt sub-directory of the
Oracle GoldenGate installation directory).
8. When the initial-load Replicat is finished running, verify the results by viewing the
Replicat report file using the operating system's standard method for viewing files.
9. On the target system, start change replication.
START REPLICAT Replicat group name
10. On the target system, issue the following command to verify the status of change
replication.
INFO REPLICAT Replicat group name
11. Continue to issue the INFO REPLICAT command until you have verified that Replicat
posted all of the change data that was generated during the initial load. For
example, if the initial-load Extract stopped at 12:05, make sure Replicat posted
data up to that point.
12. On the target system, issue the following command to turn off the HANDLECOLLISIONS
parameter and disable the initial-load error handling.
SEND REPLICAT Replicat group name, NOHANDLECOLLISIONS
13. On the target system, edit the Replicat parameter file to remove the
HANDLECOLLISIONS parameter. This prevents HANDLECOLLISIONS from being enabled
again the next time Replicat starts.
Caution:
Do not use the VIEW PARAMS or EDIT PARAMS command to view or edit an
existing parameter file that is in a character set other than that of the
local operating system (such as one where the CHARSET option was used
to specify a different character set). View the parameter file from outside
GGSCI if this is the case; otherwise, the contents may become
corrupted.
Monitoring Processing after the Instantiation
Backing up Your Oracle GoldenGate Environment
Note:
Because a transaction can span multiple journals, Extract synchronizes all of the journals in its configuration by system sequence number. If a single journal is independently repositioned far into the past, the latency that results from reprocessing its entries causes the already-read journals to stall until the reading of the repositioned journal catches up.
8
Using Remote Journal
This chapter contains instructions for preparing for and adding a remote journal. Remote journal support in the IBM DB2 for i operating system provides the ability for a system to replicate, in its entirety, a sequence of journal entries from one DB2 for i system to another. Once set up, this replication is handled automatically and transparently by the operating system. The replicated entries are placed in a journal on the target system that is available to be read by an application in the same way as on the source system.
You must understand how to set up and use remote journaling on a DB2 for i system to use this feature with Oracle GoldenGate. There are no special software requirements for either Oracle GoldenGate or the DB2 for i systems to use remote journaling.
Topics:
• Preparing to Use Remote Journals
• Adding a Remote Journal
Adding a Remote Journal
7. If one does not already exist, create the appropriate relational database (RDB)
directory entry that will be used to define the communications protocol for the
remote journal environment. When TCP communications are being used to
connect to an independent disk pool, the RDB entry to the independent disk pool
must have the Relational database value set to the target system's local RDB
entry and the relational database alias value set to the independent disk pool's
name.
8. Now you should be able to see the remote database connection by issuing the
WRKRDBDIRE command.
Position to  . . . . . .
                                    Remote
Option   Entry      Location   Text
         SYS1       system1
         SYS2       system2
         MYSYSTEM   *LOCAL     Entry added by system
                                                                  Bottom
F3=Exit   F5=Refresh   F6=Print list   F12=Cancel   F22=Display entire field
(C) COPYRIGHT IBM CORP. 1980, 2007.
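If an entry for the remote system is missing, one can be added with the ADDRDBDIRE CL command. The following is a hedged sketch in which the entry name SYS2 and the remote location system2 are illustrative values, not names from this procedure:

```
ADDRDBDIRE RDB(SYS2) RMTLOCNAME('system2' *IP)
```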
Part IV
Using Oracle GoldenGate with DB2 for
z/OS
Oracle GoldenGate for DB2 for z/OS runs remotely on Linux, zLinux, or AIX. With
Oracle GoldenGate, you can move data between similar or dissimilar supported DB2
for z/OS versions, or you can move data between a DB2 for z/OS database and a
database of another type, such as Oracle or DB2 LUW. The Oracle GoldenGate for DB2
for z/OS platform supports the filtering, mapping, and transformation of data.
This part describes tasks for configuring and running Oracle GoldenGate on a DB2 for
z/OS database.
Topics:
• Understanding What's Supported for DB2 for z/OS
This chapter contains support information for Oracle GoldenGate on DB2 for z/OS
databases.
• Preparing the DB2 for z/OS Database for Oracle GoldenGate
• Preparing the DB2 for z/OS Transaction Logs for Oracle GoldenGate
9
Understanding What's Supported for DB2
for z/OS
This chapter contains support information for Oracle GoldenGate on DB2 for z/OS
databases.
Topics:
• Supported DB2 for z/OS Data Types
• Non-Supported DB2 for z/OS Data Types
• Supported Objects and Operations for DB2 for z/OS
• Non-Supported Objects and Operations for DB2 for z/OS
Limitations of Support
• The support of range and precision for floating-point numbers depends on the host
machine. In general, the precision is accurate to 16 significant digits, but you
should review the database documentation to determine the expected
approximations. Oracle GoldenGate rounds or truncates values that exceed the
supported precision.
• Oracle GoldenGate does not support the filtering, column mapping, or
manipulation of large objects greater than 4K in size. You can use the full Oracle
GoldenGate functionality for objects that are 4K or smaller.
• XML
• User-defined types
• Negative dates
• TRUNCATES are always captured from a DB2 for z/OS source, but can be ignored by
Replicat if the IGNORETRUNCATES parameter is used in the Replicat parameter file.
• UNICODE columns in EBCDIC tables are supported.
Non-Supported Objects and Operations for DB2 for z/OS
• Replicating with BATCHSQL is not fully functional for DB2 for z/OS. Because non-insert operations are not supported in this mode, any update or delete operation causes Replicat to drop temporarily out of BATCHSQL mode; the affected transactions stop and errors occur.
10
Preparing the DB2 for z/OS Database for
Oracle GoldenGate
Learn how to prepare your database and environment to support Oracle GoldenGate.
Topics:
• Preparing Tables for Processing
• Configuring a Database Connection
• Accessing Load Modules
• Specifying Job Names and Owners
• Assigning WLM Velocity Goals
• Monitoring Processes
• Supporting Globalization Functions
Preparing Tables for Processing
10.1.2.1 How Oracle GoldenGate Determines the Kind of Row Identifier to Use
Unless a KEYCOLS clause is used in the TABLE or MAP statement, Oracle GoldenGate
selects a row identifier to use in the following order of priority:
1. Primary key
2. First unique key alphanumerically that does not contain a timestamp or non-
materialized computed column.
3. If none of the preceding key types exist (even though there might be other types of
keys defined on the table) Oracle GoldenGate constructs a pseudo key of all
columns that the database allows to be used in a unique key, excluding those that
are not supported by Oracle GoldenGate in a key or those that are excluded from
the Oracle GoldenGate configuration.
Note:
If there are other, non-usable keys on a table or if there are no keys at all on
the table, Oracle GoldenGate logs an appropriate message to the report file.
Constructing a key from all of the columns impedes the performance of
Oracle GoldenGate on the source system. On the target, this key causes
Replicat to use a larger, less efficient WHERE clause.
Note:
If you want to use the RRN of the records as the key for a table, you may
access the GGHEADER Oracle GoldenGate Environment Variable AUDITRBA
which will contain the RRN for each record processed.
ODBC error: SQLSTATE 428C9 native database error -798. {DB2 FOR OS/390}{ODBC DRIVER}
{DSN08015} DSNT408I SQLCODE = -798, ERROR: YOU CANNOT INSERT A VALUE INTO A COLUMN
THAT IS DEFINED WITH THE OPTION GENERATED ALWAYS. COLUMN NAME ROWIDCOL.
You can do one of the following to prepare tables with ROWID columns to be processed
by Oracle GoldenGate:
• Ensure that any ROWID columns in target tables are defined as GENERATED BY
DEFAULT.
• If it is not possible to change the table definition, you can work around it with the following procedure.
1. For the source table, use the COLSEXCEPT option of the TABLE parameter to exclude the ROWID column. The COLSEXCEPT clause excludes the ROWID column from being captured and replicated to the target table.
2. For the target table, ensure that Replicat does not attempt to use the ROWID column
as the key. This can be done in one of the following ways:
• Specify a primary key in the target table definition.
• If a key cannot be created, create a Replicat MAP parameter for the table, and
use a KEYCOLS clause in that statement that contains any unique columns
except for the ROWID column. Replicat will use those columns as a key. For
example:
MAP tab1, TARGET tab1, KEYCOLS (num, ckey);
For more information about KEYCOLS, see Assigning Row Identifiers.
Configuring a Database Connection
• LOCATION: set to the DB2 location name as stored in the DB2 Boot Strap Dataset.
• PLANNAME: set to the DB2 plan. The default plan name is DSNACLI.
Note:
When using the CAF attachment type, you must use the Oracle GoldenGate
DBOPTIONS parameter with the NOCATALOGCONNECT option in the parameter file of
any Extract or Replicat process that connects to DB2. This parameter
disables the usual attempt by Oracle GoldenGate to obtain a second thread
for the DB2 catalog. Otherwise, you will receive error messages, such as:
ODBC operation failed: Couldn't connect to data source for catalog
queries.
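Put together, the beginning of a parameter file for a CAF connection might look like this sketch, in which the group name, data source, and credentials are placeholders rather than values from this guide:

```
EXTRACT extzos
SOURCEDB DSN1, USERID ogguser, PASSWORD oggpw
DBOPTIONS NOCATALOGCONNECT
```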
mv ODBC-1047.ini ODBC.ini
• Change your terminal emulator or terminal configuration to use CCSID IBM-1047
when you create or alter the file.
9. Manager process (required only for startup of Oracle GoldenGate processes and
trail cleanup).
10. GGSCI and other user UNIX and TSO/E terminal work.
11. Initial-load Extract and any DB2 stored procedures that it calls.
12. Initial-load Replicat and any DB2 stored procedures that it calls.
If the DB2 accounting trace is also active to the SMF destination, DB2 will create an
SMF accounting record for each of the following Oracle GoldenGate processes:
• Extract
• Replicat
• Manager, if performing maintenance on Oracle GoldenGate tables. Examples of
Oracle GoldenGate tables are the marker table and the Replicat checkpoint table.
• GGSCI sessions that issue the Oracle GoldenGate DBLOGIN command to log into
the database.
11
Preparing the DB2 for z/OS Transaction
Logs for Oracle GoldenGate
Learn how to configure the DB2 transaction logging to support data capture by Oracle
GoldenGate Extract.
Topics:
• Making Transaction Data Available
Making Transaction Data Available
Note:
The primary authorization ID, or one of the secondary authorization IDs, of
the ODBC plan executor also must have the MONITOR2 privilege.
Note:
The IBM documentation makes recommendations for improving the
performance of log reads. In particular, you can use large log output buffers,
large active logs, and make archives to disk.
Part V
Using Oracle GoldenGate with MySQL
Oracle GoldenGate for MySQL supports replication from a MySQL source database to
a MySQL target database or to a supported database of another type to perform an
initial load or change data replication.
This part describes tasks for configuring and running Oracle GoldenGate on a MySQL
database.
Topics:
• Understanding What's Supported for MySQL
This chapter contains support information for Oracle GoldenGate on MySQL
databases.
• Preparing and Configuring Your System for Oracle GoldenGate
• Using DDL Replication
12
Understanding What's Supported for
MySQL
This chapter contains support information for Oracle GoldenGate on MySQL
databases.
Topics:
• Character Sets in MySQL
• Supported MySQL Data Types
• Supported Objects and Operations for MySQL
• Non-Supported MySQL Data Types
The character set can be specified at each of the following levels:
• Database: create database test charset utf8;
• Table: create table test (id int, name char(100)) charset utf8;
• Column: create table test (id int, name1 char(100) charset gbk, name2 char(100) charset utf8);
Limitations of Support
• When you specify the character set of your database as utf8mb4/utf8, the default collation is utf8mb4_unicode_ci/utf8_general_ci. If you specify collation_server=utf8mb4_bin, the database interprets the data as binary. For example, if you specify a CHAR column length of four, the byte length returned is 16 (for utf8mb4), though when you try to insert more than four bytes of data, the target database warns that the data is too long. This is a limitation of the database, so Oracle GoldenGate does not support binary collation. To overcome this issue, specify collation_server=utf8mb4_bin when the character set is utf8mb4, and collation_server=utf8_bin for utf8.
• The following character sets are not supported:
armscii8
keybcs2
utf16le
geostd8
Supported MySQL Data Types
• VARCHAR
• INT
• TINYINT
• SMALLINT
• MEDIUMINT
• BIGINT
• DECIMAL
• FLOAT
• DOUBLE
• DATE
• TIME
• YEAR
• DATETIME
• TIMESTAMP
• BINARY
• VARBINARY
• TEXT
• TINYTEXT
• MEDIUMTEXT
• LONGTEXT
• BLOB
• TINYBLOB
• MEDIUMBLOB
• LONGBLOB
• ENUM
• BIT(M)
• Oracle GoldenGate supports UTF8 and UCS2 character sets. UTF8 data is
converted to UTF16 by Oracle GoldenGate before writing it to the trail.
• UTF32 is not supported by Oracle GoldenGate.
• Oracle GoldenGate supports a TIME type range from 00:00:00 to 23:59:59.
• Oracle GoldenGate supports timestamp data from 0001/01/03:00:00:00 to
9999/12/31:23:59:59. If a timestamp is converted from GMT to local time, these
limits also apply to the resulting timestamp. Depending on the time zone,
conversion may add or subtract hours, which can cause the timestamp to exceed
the lower or upper supported limit.
• Oracle GoldenGate does not support negative dates.
• The support of range and precision for floating-point numbers depends on the host
machine. In general, the precision is accurate to 16 significant digits, but you
should review the database documentation to determine the expected
approximations. Oracle GoldenGate rounds or truncates values that exceed the
supported precision.
• When you use the ENUM type in non-strict sql_mode, the non-strict sql_mode does not prevent you from entering an invalid ENUM value, and no error is returned. To avoid this situation, do one of the following:
– Use sql_mode as STRICT and restart Extract. This prevents users from entering invalid values for any of the data types; users can only enter valid values
for those data types.
– Continue using non-strict sql_mode, but do not use ENUM data types.
– Continue using non-strict sql_mode and use ENUM data types with valid values in
the database. If you specify invalid values, the database will silently accept
them and Extract will abend.
• To preserve transaction boundaries for a MySQL target, create or alter the target
tables to the InnoDB transactional database engine instead of the MyISAM engine.
MyISAM will cause Replicat records to be applied as they are received, which
does not guarantee transaction integrity even with auto-commit turned off. You
cannot roll back a transaction with MyISAM.
• Extraction and replication from and to views is not supported.
• Transactions applied by the slave are logged into the relay logs and not into the slave's binlog. If you want a slave to write the transactions that it receives from the master into its binlog, you need to start the replication slave with the log_slave_updates option set to 1 in my.cnf, in addition to the other binary logging parameters. After the master's transactions are in the slave's binlog, you can then set up a regular capture on the slave to capture and process the slave's binlog.
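A minimal my.cnf fragment for such a slave might look like the following sketch; the log path is an illustrative assumption, and the ROW format reflects the logging requirement described in Setting Logging Parameters:

```
[mysqld]
log-bin           = /var/lib/mysql/slave-bin
log-slave-updates = 1
binlog_format     = ROW
```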
• Oracle GoldenGate supports transactional tables up to the full row size and
maximum number of columns that are supported by MySQL and the database
storage engine that is being used. InnoDB supports up to 1017 columns.
• Oracle GoldenGate supports the AUTO_INCREMENT column attribute. The increment
value is captured from the binary log by Extract and applied to the target table in a
Replicat insert operation.
• Oracle GoldenGate supports the following DML operations on source and target
database transactional tables:
– Insert operation
– Update operation (compressed included)
– Delete operation (compressed included); cascade delete queries result in the
deletion of the child of the parent operation
– Truncate operation
• Oracle GoldenGate can operate concurrently with MySQL native replication.
• Oracle GoldenGate supports the DYNSQL feature for MySQL.
• Limitations on Automatic Heartbeat Table support are as follows:
– Ensure that the database in which the heartbeat table is to be created already
exists to avoid errors when adding the heartbeat table.
– In the heartbeat history lag view, the information in fields like
heartbeat_received_ts, incoming_heartbeat_age, and outgoing_heartbeat_age
are shown with respect to the system time. You should ensure that the
operating system time is setup with the correct and current time zone
information.
– Heartbeat Table is not supported on MySQL 5.5.
13
Preparing and Configuring Your System for
Oracle GoldenGate
Learn about how to prepare your system for running Oracle GoldenGate and how to
configure it with your MySQL database.
Topics:
• Ensuring Data Availability
• Setting Logging Parameters
• Adding Host Names
• Setting the Session Character Set
• Preparing Tables for Processing
• Changing the Log-Bin Location
• Configuring Bi-Directional Replication
• Capturing using a MySQL Replication Slave
• Establishing a Secure Database Connection to AWS Aurora MySQL
Oracle GoldenGate uses SSH tunneling to connect to the AWS Aurora instance
using the .pem file and then runs the Oracle GoldenGate MySQL delivery from on-
premise.
• Other Oracle GoldenGate Parameters for MySQL
• Positioning Extract to a Specific Start Point
Setting Logging Parameters
Note:
Extract expects that all of the table columns are in the binary log. As a result,
only binlog_row_image set as full is supported and this is the default. Other
values of binlog_row_image are not supported.
Extract checks the following parameter settings to get this index file path:
1. Extract TRANLOGOPTIONS parameter with the ALTLOGDEST option: If this parameter
specifies a location for the log index file, Extract accepts this location over any
default that is specified in the MySQL Server configuration file. When ALTLOGDEST is
used, the binary log index file must also be stored in the specified directory. This
parameter should be used if the MySQL configuration file does not specify the full
index file path name, specifies an incorrect location, or if there are multiple
installations of MySQL on the same machine.
To specify the index file path with TRANLOGOPTIONS with ALTLOGDEST, use the
following command format on Windows:
TRANLOGOPTIONS ALTLOGDEST "C:\\Program Files\\MySQL\\logs\\binlog.index"
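On Linux or UNIX, the same parameter might be specified as in this sketch, where the index file path is an assumption for illustration:

```
TRANLOGOPTIONS ALTLOGDEST "/var/lib/mysql/mysql-bin.index"
```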
• binlog_format: This parameter sets the format of the logs. It must be set to the
value of ROW, which directs the database to log DML statements in binary
format. Any other log format (MIXED or STATEMENT) causes Extract to abend.
Note:
MySQL binary logging does not allow logging to be enabled or
disabled for specific tables. It applies globally to all tables in the
database.
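These server settings are typically made in the MySQL configuration file. A hedged my.cnf sketch that satisfies the requirements above (the log path is illustrative) is:

```
[mysqld]
log-bin          = /var/lib/mysql/mysql-bin
binlog_format    = ROW
binlog_row_image = full
```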
SOURCEDB database_name@host_name
Where: database_name is the name of the MySQL instance, and host_name is the name
or IP address of the local host. If using an unqualified host name, that name must be
properly configured in the DNS database. Otherwise, use the fully qualified host name,
for example myhost.company.com.
Preparing Tables for Processing
13.5.1.1 How Oracle GoldenGate Determines the Kind of Row Identifier to Use
Unless a KEYCOLS clause is used in the TABLE or MAP statement, Oracle GoldenGate
selects a row identifier to use in the following order of priority:
1. Primary key
2. First unique key alphanumerically that does not contain a timestamp or non-
materialized computed column.
3. If none of the preceding key types exist (even though there might be other types of
keys defined on the table) Oracle GoldenGate constructs a pseudo key of all
columns that the database allows to be used in a unique key, excluding those that
are not supported by Oracle GoldenGate in a key or those that are excluded from
the Oracle GoldenGate configuration.
Note:
If there are other, non-usable keys on a table or if there are no keys at all
on the table, Oracle GoldenGate logs an appropriate message to the
report file. Constructing a key from all of the columns impedes the
performance of Oracle GoldenGate on the source system. On the target,
this key causes Replicat to use a larger, less efficient WHERE clause.
Target:
mysql> create unique index uq1 on ggvam.emp(last);
mysql> create unique index uq2 on ggvam.emp(first);
mysql> create unique index uq3 on ggvam.emp(middle);
The result of this sequence is that MySQL promotes the index on the source "first"
column to primary key, and it promotes the index on the target "last" column to primary
key. Oracle GoldenGate will select the primary keys as identifiers when it builds its
metadata record, and the metadata will not match. To avoid this error, decide which
column you want to promote to primary key, and create that index first on the source
and target.
13.5.1.3 How to Specify Your Own Key for Oracle GoldenGate to Use
If a table does not have one of the preceding types of row identifiers, or if you prefer
those identifiers not to be used, you can define a substitute key if the table has
columns that always contain unique values. You define this substitute key by including
a KEYCOLS clause within the Extract TABLE parameter and the Replicat MAP parameter.
The specified key will override any existing primary or unique key that Oracle
GoldenGate finds.
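For example, a matching pair of statements might look like the following sketch, in which the table name and the unique column emp_id are invented for illustration:

```
TABLE ggvam.emp, KEYCOLS (emp_id);
MAP ggvam.emp, TARGET ggvam.emp, KEYCOLS (emp_id);
```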
2. Let the extract finish processing all of the existing binary logs. You can verify this
by noting when the checkpoint position reaches the offset of the last log.
3. After Extract finishes processing the data, stop the Extract group and, if
necessary, back up the binary logs.
4. Stop the MySQL database.
5. Modify the log-bin path for the new location.
6. Start the MySQL database.
7. To clean the old log name entries from index file, use flush master or reset master
(based on your MySQL version).
8. Start Extract.
Note:
Although optional for other supported databases as a means of
enhancing recovery, the use of a checkpoint table is required for
MySQL when using bi-directional replication (and likewise, will
enhance recovery).
3. Edit the MySQL server configuration file to set the auto_increment_increment and
auto_increment_offset parameters to avoid discrepancies that could be caused by
the bi-directional operations. The following illustrates these parameters, assuming
two servers: ServerA and ServerB.
ServerA:
auto-increment-increment = 2
auto-increment-offset = 1
ServerB:
auto-increment-increment = 2
auto-increment-offset = 2
In the following example, note that the local port number is set to 3308 because port 3306 may already be in use by a MySQL server that is installed locally.
For example:
bash-4.1$ ssh -i "test.pem" -v -N -f -L 3308:test-cluster.cluster-copyxiqzdjjl.us-west-2.rds.amazonaws.com:3306 ec2-user@ec2-52-11-244-17.us-west-2.compute.amazonaws.com -o "ProxyCommand=nc -X connect -x www-proxy.us.oracle.com:80 %h %p" > mysql_remote_forward.log 2>&1
Also, to prevent a connection timeout, add the following variable to the ~/.ssh/config file, because AWS automatically disconnects the connection after 60 seconds of inactivity:
bash-4.1$ cat ~/.ssh/config
ServerAliveInterval 50
Table 13-1 Other Parameters for Oracle GoldenGate for MySQL

Parameter: DBOPTIONS with CONNECTIONPORT port_number
Description: Required to specify to the VAM the TCP/IP connection port number of the MySQL instance to which an Oracle GoldenGate process must connect if MySQL is not running on the default of 3306. For example:
DBOPTIONS CONNECTIONPORT 3307

Parameter: DBOPTIONS with HOST host_id
Description: Specifies the DNS name or IP address of the system hosting MySQL to which Replicat must connect.

Parameter: DBOPTIONS with ALLOWLOBDATATRUNCATE
Description: Prevents Replicat from abending when replicated LOB data is too large for a target MySQL CHAR, VARCHAR, BINARY, or VARBINARY column.
Table 13-1 (Cont.) Other Parameters for Oracle GoldenGate for MySQL
Parameter Description
SOURCEDB with USERID and Specifies database connection information consisting of the
PASSWORD database, user name and password to use by an Oracle
GoldenGate process that connects to a MySQL database. If
MySQL is not running on the default port of 3306, you must
specify a complete connection string that includes the port
number: SOURCEDB dbname@hostname:port, USERID user,
PASSWORD password. Example:
SOURCEDB mydb@mymachine:3307, USERID myuser, PASSWORD
mypassword
If you are not running the MySQL database on port 3306, you
must also specify the connection port of the MySQL database in
the DBLOGIN command when issuing commands that affect the
database through GGSCI:
DBLOGIN SOURCEDB dbname@hostname:port, USERID user,
PASSWORD password
For example:
GGSCI> DBLOGIN SOURCEDB mydb@mymachine:3307, USERID
myuser, PASSWORD mypassword
• group is the name of the Oracle GoldenGate Extract group for which the start
position is required.
• log_num is the log file number. For example, if the required log file name is test.000034, this value is 34. Extract will search for this log file.
• log_pos is an event offset value within the log file that identifies a specific transaction record. Event offset values are stored in the header section of a log record. To position at the beginning of a binlog file, set log_pos to 4; the values 0 and 1 are not valid offsets at which to start reading and processing.
In MySQL logs, an event offset value can be unique only within a given binary file. The
combination of the position value and a log number will uniquely identify a transaction
record and cannot exceed a length of 37. Transactional records available after this
position within the specified log will be captured by Extract. In addition, you can
position an Extract using a timestamp.
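Assuming the position is set through ALTER EXTRACT, a sketch of positioning at the start of log file test.000034 might look like the following; the group name and values are illustrative, and the exact option names should be confirmed against the reference documentation for your release:

```
ALTER EXTRACT extmysql, LOGNUM 34, LOGPOS 4
```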
14
Using DDL Replication
Learn how to install, use, configure, and remove DDL replication.
Data Definition Language (DDL) statements (operations) are used to define MySQL
database structures or schema. You can use these DDL statements for data
replication between MySQL source and target databases. MySQL DDL specifics are
found in the MySQL documentation at https://dev.mysql.com/doc/.
Topics:
• DDL Configuration Prerequisites and Considerations
• Installing DDL Replication
• Using the Metadata Server
• Using DDL Filtering for Replication
• Troubleshooting DDL Replication
• Uninstalling DDL Replication
Installing DDL Replication
The installation script options are install, uninstall, start, stop, and restart.
The command to install DDL replication uses the install option, user id, password, and
port number respectively:
bash-3.2$ ./ddl_install.sh install-option user-id password port-number
For example:
bash-3.2$ ./ddl_install.sh install root welcome 3306
Using the Metadata Server
DDL INCLUDE OPTYPE CREATE OBJTYPE TABLE;
    Include create table.
DDL INCLUDE OBJNAME ggvam.*
    Include tables under the ggvam database.
DDL EXCLUDE OBJNAME ggvam.emp*;
    Exclude all tables under the ggvam database with names matching the emp*
    wildcard.
DDL INCLUDE INSTR 'XYZ'
    Include DDL that contains this string.
DDL EXCLUDE INSTR 'WHY'
    Exclude DDL that contains this string.
DDL INCLUDE MAPPED
    MySQL DDL uses this option and it should be used as the default for Oracle
    GoldenGate MySQL DDL replication. DDL INCLUDE ALL and DDL are not supported.
DDL EXCLUDE ALL
    Default option.
For a full list of options, see DDL in Reference for Oracle GoldenGate.
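As an illustration only, several of these filters can be combined in a single DDL parameter in the Extract parameter file (the database and wildcard names below are taken from the examples above; verify the exact clause syntax in Reference for Oracle GoldenGate):
DDL INCLUDE MAPPED, EXCLUDE OBJNAME ggvam.emp*, EXCLUDE INSTR 'WHY'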
Using DDL Filtering for Replication
In the preceding example, exttrail a gets creates and drops for all objects that
belong to eric, except objects whose names start with tab. exttrail a also gets
all alter index statements, unless the index name begins with tab (the rule is
global even though it is included in exttrail b). exttrail b gets the same objects
as a, and it also gets all creates for objects that belong to joe when the string
abc or xyz is present in the DDL text. The ddlops.c module stores all DDL
operation parameters and executes related rules.
Additionally, you can use the DDLOPTIONS parameter to configure aspects of DDL
processing other than filtering and string substitution. You can use multiple
DDLOPTIONS statements, but Oracle recommends using only one. If you do use multiple
DDLOPTIONS statements, make each of them unique so that one does not override the other.
Multiple DDLOPTIONS statements are executed in the order listed in the parameter file.
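For example, a single DDLOPTIONS statement can carry the options you need; REPORT, which adds expanded DDL processing information to the report file, is one such option (shown here only as an illustration):
DDLOPTIONS REPORT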
Troubleshooting DDL Replication
For example:
bash-3.2$ mysqldump -uroot -pwelcome oggddl history > outfile
The metadata plugins and server logs are located in the MySQL and Oracle
GoldenGate installation directories respectively.
If you find an error in the log files, you need to ensure that the metadata server is
running.
Part VI
Using Oracle GoldenGate with SQL Server
With Oracle GoldenGate for SQL Server, you can capture transactional data from user
tables of supported SQL Server versions and replicate the data to a SQL Server
database or other supported Oracle GoldenGate targets, such as an Oracle Database
or Big Data target.
Oracle GoldenGate for SQL Server supports data filtering, mapping, and
transformations unless noted otherwise in this documentation. Beginning with
Oracle GoldenGate 12.3, there are two separate data capture methods. The first,
referred to as Classic Capture, is the transaction log based capture method. The
second method, newly introduced with Oracle GoldenGate 12.3, is the CDC
Capture method. The Classic Extract binary is available at My Oracle Support, under
Patches and Updates, and requires a Service Request in order to receive a password
to download the binary. The CDC Extract binary is available on the Oracle Software
Delivery Cloud.
This part describes tasks for configuring and running Oracle GoldenGate on a SQL
Server database.
• Understanding What's Supported for SQL Server
This chapter contains the requirements for the system and database resources
that support Oracle GoldenGate.
• Preparing the System for Oracle GoldenGate
• Preparing the Database for Oracle GoldenGate — Classic Capture
• Preparing the Database for Oracle GoldenGate — CDC Capture
Process to configure database settings and supplemental logging to support CDC
Capture.
• Requirements Summary for Classic Extract in Archived Log Only (ALO) Mode
• Requirements Summary for Capture and Delivery of Databases in an AlwaysOn
Availability Group
Oracle GoldenGate for SQL Server features Capture support of the Primary and
read-only, synchronous mode Secondary databases of an AlwaysOn Availability
Group, and Delivery to the Primary database.
• Oracle GoldenGate Classic Extract for SQL Server Standard Edition Capture
• CDC Capture Method Operational Considerations
This section provides information about the SQL Server CDC Capture options,
features, and recommended settings.
15
Understanding What's Supported for SQL
Server
This chapter contains the requirements for the system and database resources that
support Oracle GoldenGate.
Topics:
• Supported SQL Server Data Types
• Non-Supported SQL Server Data Types and Features
• Supported Objects and Operations for SQL Server
• Non-Supported Objects and Operations for SQL Server
• LOBs (image, ntext, text)
• Starting with this release of Oracle GoldenGate 12.3, Oracle GoldenGate for SQL
Server (CDC Extract only) can replicate column data that contains SPARSE settings.
Note:
Previous releases of Oracle GoldenGate 12.3 did not support column data
with SPARSE settings.
• Starting with this release of Oracle GoldenGate 12.3, the FILESTREAM feature is
supported for CDC Extract. This feature is already available for Classic Extract.
Note:
For details on SPARSE and FILESTREAM support, see New Features - March
2018 in Release Notes for Oracle GoldenGate.
Limitations:
• Oracle GoldenGate does not support filtering, column mapping, or manipulating
large objects larger than 4KB. Full Oracle GoldenGate functionality can be used
for objects of up to 4KB.
• Oracle GoldenGate treats XML data as a large object (LOB), as does SQL Server
when the XML does not fit into a row. SQL Server extended XML enhancements
(such as lax validation, DATETIME , union functionality) are not supported.
• A system-assigned TIMESTAMP column or a non-materialized computed column
cannot be part of a key. A table containing a TIMESTAMP column must have a key,
which can be a primary key or unique constraint, or a substitute key specified with
a KEYCOLS clause in the TABLE or MAP statements. For more information see
Assigning Row Identifiers.
• Oracle GoldenGate supports multi byte character data types and multi byte data
stored in character columns. Multi byte data is supported only in a like-to-like, SQL
Server configuration. Transformation, filtering, and other types of manipulation are
not supported for multi byte character data.
• If captured data from TEXT, NTEXT, IMAGE, VARCHAR(MAX), NVARCHAR(MAX), or
VARBINARY(MAX) columns will exceed the SQL Server default size set for the max
text repl size option, extend the size. Use sp_configure to view the current value
of max text repl size and adjust the option as needed.
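A T-SQL sketch of this check and adjustment follows; the value -1 removes the limit, so choose a value appropriate for your environment:
EXEC sp_configure 'max text repl size'
GO
EXEC sp_configure 'max text repl size', -1
RECONFIGURE
GO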
• Oracle GoldenGate supports UDT and UDA data of up to 2 GB in size. All UDTs
except SQL_Variant are supported.
• Common Language Runtime (CLR) data types, including SQL Server built-in CLR
data types (such as geometry, geography, and hierarchyid), are supported. CLR
data types are supported only in a like-to-like SQL Server configuration.
Transformation, filtering, and other types of manipulation are not supported for
CLR data.
• From this release of Oracle GoldenGate 12.3, a VARBINARY(MAX) column with the
FILESTREAM attribute is supported up to a size of 4 GB. Extract uses standard
Win32 file functions to read the FILESTREAM file. It is supported for both Classic
and CDC Extracts.
Note:
Previous versions of Oracle GoldenGate 12.3 did not support this
feature for CDC Extracts.
• The range and precision of floating-point numbers depends on the host machine.
In general, precision is accurate to 16 significant digits, but you should review the
database documentation to determine the expected approximations. Oracle
GoldenGate rounds or truncates values that exceed the supported precision.
• Oracle GoldenGate supports time stamp data from 0001/01/03:00:00:00 to
9999/12/31:23:59:59. If a time stamp is converted from GMT to local time, these
limits also apply to the resulting time stamp. Depending on the time zone,
conversion may add or subtract hours, which can cause the time stamp to exceed
the lower or upper supported limit.
Non-Supported SQL Server Data Types and Features
• If a unique key or index contains a non-persisted computed column and is the only
unique identifier in a table, Oracle GoldenGate must use all of the columns as an
identifier to find target rows. Because a non-persisted computed column cannot be
used in this identifier, Replicat may apply operations containing this identifier to the
wrong target rows.
• Tables that contain unsupported data types may cause Extract to abend. As a
workaround, you must remove TRANDATA from those tables and remove them from
the Extract’s TABLE statement, or use the Extract’s TABLEEXCLUDE parameter for the
table.
Non-Supported Objects and Operations for SQL Server
2. (CDC Extract) Ensure that the CDC Capture job processes all remaining
transactions.
3. Ensure that Extract processes all transactions prior to making any DDL
changes. An Event Marker table may help to ensure full completion.
4. Stop Extract.
5. At the source, execute DELETE TRANDATA for the specific tables on which the ALTER
TABLE (DDL change) statement is to be performed.
6. Execute the ALTER TABLE statement to add or drop the column in or from the table.
7. (CDC Extract) If TRANDATA was removed from all tables previously enabled with
TRANDATA, this also disables CDC on the database, and it is necessary to
reposition the Extract with BEGIN NOW prior to restarting it.
8. Re-enable TRANDATA for the same table(s) at the source.
9. Start Extract.
10. Restart your application.
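The GGSCI portion of these steps might look like the following sketch (the group, DSN, and table names are hypothetical):
STOP EXTRACT myext
DBLOGIN SOURCEDB mydsn
DELETE TRANDATA dbo.orders
-- perform the ALTER TABLE statement in SQL Server, then:
ADD TRANDATA dbo.orders
START EXTRACT myext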
• Capture from views. The underlying tables can be extracted and replicated.
• Operations by the TextCopy utility and WRITETEXT and UPDATETEXT statements. These
features perform operations that either are not logged by the database or are only
partially logged, so they cannot be supported by the Extract process.
• Partitioned tables that have more than one physical layout across partitions.
• Partition switching.
• (Classic Extract) Oracle GoldenGate does not support non-native SQL Server
transaction log backups, such as those offered by third-party vendors. However, if
using the TRANLOGOPTIONS parameter with the ACTIVESECONDARYTRUNCATIONPOINT
option, Extract does not need to read from any transaction log backups, so any log
backup utility may be used. For more information, see Preparing the Database for
Oracle GoldenGate — Classic Capture.
• (CDC Extract) Due to a limitation with SQL Server's Change Data Capture,
column-level collations that differ from the database collation may cause incorrect
character data to be written to the CDC tables, and Extract will capture the data
as it is written to the CDC tables. It is recommended that you use the NVARCHAR,
NCHAR, or NTEXT data types for columns containing non-ASCII data, or use the same
collation for table columns as the database. For more information, see
About Change Data Capture (SQL Server).
• (CDC Extract) Due to a limitation with SQL Server's Change Data Capture,
no-op updates are not captured by the SQL Server CDC agent, so there are no
records for Extract to capture for no-op update operations.
16
Preparing the System for Oracle
GoldenGate
This chapter contains steps to take so that the database with which Oracle
GoldenGate interacts is correctly configured to support Oracle GoldenGate capture
and delivery. Some steps apply only to a source system, some only to a target, and
some to both.
Topics:
• Configuring a Database Connection
• Preparing Tables for Processing
• Globalization Support
Configuring a Database Connection
Note:
Because Replicat always uses ODBC to query for metadata, you must
configure a target ODBC connection.
Before you select a method to use, review the following guidelines and procedures to
evaluate the advantages and disadvantages of each.
• Using ODBC or Default OLE DB
• Using OLE DB with USEREPLICATIONUSER
Note:
OLE DB derives its connection information, including which driver to use,
from the ODBC connection settings.
Note:
Normal IDENTITY, trigger, and constraint functionality remains in effect for
any users other than the Replicat replication user.
1. In SQL Server Management Studio (or other interface) set the NOT FOR REPLICATION
flag on the following objects. For active-passive configurations, set it only on the
passive database. For active-active configurations, set it on both databases.
• Foreign key constraints
• Check constraints
• IDENTITY columns
• Triggers (requires textual changes to the definition; see the SQL Server
documentation for more information).
2. Partition IDENTITY values for bidirectional configurations.
3. In the Replicat MAP statements, map the source tables to appropriate targets, and
map the child tables that the source tables reference with triggers or foreign-key
cascade constraints. Triggered and cascaded child operations are replicated by
Oracle GoldenGate, so the referenced tables must be mapped to appropriate
targets to preserve data integrity. Include the same parent and child source tables
in the Extract TABLE parameters.
Note:
If referenced tables are omitted from the MAP statements, no errors alert
you to integrity violations, such as if a row gets inserted into a table that
contains a foreign key to a non-replicated table.
4. In the Replicat parameter file, include the DBOPTIONS parameter with the
USEREPLICATIONUSER option.
Note:
Even when using OLE DB as the apply connection method, Replicat always
uses ODBC to query the target database for metadata. Therefore Replicat
always requires a DSN.
Preparing Tables for Processing
In the following scenario, disable the triggers and constraints on the target:
• Uni-directional replication where all tables on the source are replicated.
In the following scenarios, enable the triggers and constraints on the target:
• Uni-directional replication where tables affected by a trigger or cascade operation
are not replicated, and the only application that loads these tables is using a
trigger or cascade operation.
• Uni-directional or bi-directional replication where all tables on the source are
replicated. In this scenario, set the target table cascade constraints and triggers
to NOT FOR REPLICATION, and use the DBOPTIONS USEREPLICATIONUSER parameter
in Replicat.
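An illustrative Replicat parameter sketch for this scenario (the group name, DSN, credentials, and mapping are hypothetical):
REPLICAT myrep
TARGETDB mydsn, USERID ogguser, PASSWORD oggpw
DBOPTIONS USEREPLICATIONUSER
MAP dbo.*, TARGET dbo.*;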
16.2.2.1 How Oracle GoldenGate Determines the Kind of Row Identifier to Use
Unless a KEYCOLS clause is used in the TABLE or MAP statement, Oracle GoldenGate
selects a row identifier to use in the following order of priority:
1. Primary key (required for tables of a Standard Edition instance).
2. First unique key alphanumerically that does not contain a timestamp or non-
materialized computed column.
3. If neither of these key types exists, Oracle GoldenGate constructs a pseudokey of
all columns that the database allows to be used in a unique key, excluding those
that are not supported by Oracle GoldenGate in a key or those that are excluded
from the Oracle GoldenGate configuration. For SQL Server, Oracle GoldenGate
requires the row data in target tables that do not have a primary key to be less
than 8000 bytes.
Note:
If there are other types of keys on a table, or no keys at all, Oracle
GoldenGate logs a message to the report file. Constructing a key from all
of the columns impedes the performance of Oracle GoldenGate on the
source system. On the target, this key causes Replicat to use a larger,
less efficient WHERE clause.
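For example, a substitute key can be specified with a KEYCOLS clause in both the Extract TABLE and Replicat MAP statements (the table and column names here are hypothetical):
TABLE dbo.orders, KEYCOLS (order_id);
MAP dbo.orders, TARGET dbo.orders, KEYCOLS (order_id);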
17
Preparing the Database for Oracle
GoldenGate — Classic Capture
This section contains information that helps you configure database settings and
supplemental logging to support Classic Capture of source transaction data by Oracle
GoldenGate.
Topics:
• Setting the Database to Full Recovery Model
• Backing Up the Transaction Log
• Enabling Supplemental Logging
• Managing the Secondary Truncation Point
• Retaining the Log Backups and Backup History
Enabling Supplemental Logging
In either mode, Oracle GoldenGate Capture for SQL Server requires that the log
backup files meet the following conditions:
• The log backup is a native SQL Server log backup made by issuing the BACKUP LOG
command (or the corresponding GUI command). Third-party log backups are not
supported.
• The log backup can be compressed using native SQL Server compression
features.
• The log backup is made to a DISK device. Valid examples include:
BACKUP LOG dbname TO DISK = 'c:\folder\logbackup.trn'
BACKUP LOG dbname TO DISK = '\\server\share\logbackup.trn'
Additional recommendations:
• Do not overwrite existing log backups.
• Striped log backups are not supported.
• Appending log backups to the same file is not recommended.
• Mixing compressed and uncompressed log backups to the same device or file is
not supported.
• Creates a Change Data Capture table for each base table enabled with
supplemental logging by running EXECUTE sys.sp_cdc_enable_table.
• Oracle GoldenGate does not use CDC tables except as necessary to enable
supplemental logging.
• When SQL Server enables CDC, SQL Server creates two jobs per database:
– cdc.dbname_capture
– cdc.dbname_cleanup
In this command:
• SOURCEDB DSN is the name of the SQL Server data source.
• USERID user and PASSWORD password are the database login credentials, required
if the data source connects through SQL Server authentication. If the credentials
are stored in a credential store, USERIDALIAS alias is the alias for the
credentials. If you are using DBLOGIN with a DSN that uses Integrated Windows
authentication, the connection to the database for the GGSCI session is that of
the user running GGSCI. In order to issue ADD TRANDATA or DELETE TRANDATA, this
user must be a member of the SQL Server sysadmin server role.
3. In GGSCI, issue the following command for each table that is, or will be, in the
Extract configuration. You can use a wildcard to specify multiple table names.
ADD TRANDATA owner.table
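For example (the DSN, credentials, and table name are illustrative):
DBLOGIN SOURCEDB mydsn, USERID ogguser, PASSWORD oggpw
ADD TRANDATA dbo.orders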
Managing the Secondary Truncation Point
Note:
Using TRANLOGOPTIONS ACTIVESECONDARYTRUNCATIONPOINT or
MANAGESECONDARYTRUNCATIONPOINT for Extract when SQL Server
transactional replication or CDC configured by applications other than
Oracle GoldenGate is running at the same time causes the SQL Server
Log Reader Agent or CDC capture job to fail.
18
Preparing the Database for Oracle
GoldenGate — CDC Capture
Process to configure database settings and supplemental logging to support CDC
Capture.
This section contains information that helps you configure database settings and
supplemental logging to support CDC Capture of source transaction data by Oracle
GoldenGate.
You can learn more about CDC Capture with this Oracle By Example:
Using the Oracle GoldenGate for SQL Server CDC Capture Replication:
http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/goldengate/12c/sql_cdcrep/sql_cdcrep.html
Topics:
• Enabling CDC Supplemental Logging
• Retaining the CDC Table History Data
• Enabling Bi-Directional Loop Detection
• Creates a Change Data Capture table for each base table enabled with
supplemental logging by running EXECUTE sys.sp_cdc_enable_table, and creates a
trigger for each CDC table. The CDC table exists as part of the system tables
within the database and has a naming convention like
cdc.OracleGG_basetableobjectid_CT.
when the trigger for a CDC table is fired. The table is owned by the schema
listed in the GLOBALS file's GGSCHEMA parameter.
• Creates a unique fetch stored procedure for each CDC table, as well as several
other stored procedures that are required for Extract to function. These stored
procedures will be owned by the schema listed in the GLOBALS file's GGSCHEMA
parameter.
• Also, as part of enabling CDC for tables, SQL Server creates two jobs per
database:
cdc.dbname_capture
cdc.dbname_cleanup
• The CDC Capture job reads the SQL Server transaction log and populates the
data into the CDC tables, and it is from those CDC tables that Extract captures
the transactions. It is therefore critical that the CDC Capture job runs at all
times, which in turn requires that SQL Server Agent is set to run at all times
and enabled to start automatically when SQL Server starts.
• Important tuning information for the CDC Capture job can be found in CDC Capture
Method Operational Considerations.
• The CDC Cleanup job that is created by Microsoft does not have any
dependencies on whether the Oracle GoldenGate Extract has captured data in the
CDC tables or not. Therefore, extra steps need to be followed in order to disable
or delete the CDC cleanup job immediately after TRANDATA is enabled, and to
enable Oracle GoldenGate's own CDC cleanup job. See Retaining the CDC Table
History Data for more information.
The following steps require a database user who is a member of the SQL Server
System Administrators (sysadmin) role.
1. In the source Oracle GoldenGate installation, ensure that a GLOBALS (all CAPS and
no extension) file has been created with the parameter GGSCHEMA <schemaname>.
Ensure that the schema name used has been created (CREATE SCHEMA schemaname)
in the source database. This schema will be used by all subsequent Oracle
GoldenGate components created in the database, therefore it is recommended to
create a unique schema that is solely used by Oracle GoldenGate, such as 'ogg'.
2. On the source system, run GGSCI
3. Issue the following command to log into the database:
DBLOGIN SOURCEDB DSN [,{USERID user, PASSWORD password | USERIDALIAS alias}]
Where:
• SOURCEDB DSN is the name of the SQL Server data source.
• USERID user is the database login and PASSWORD password is the password that
is required if the data source connects via SQL Server authentication.
Alternatively, USERIDALIAS alias is the alias for the credentials if they are
stored in a credentials store. If using DBLOGIN with a DSN that is using
Integrated Windows authentication, the connection to the database for the
GGSCI session will be that of the user running GGSCI. In order to issue ADD
TRANDATA or DELETE TRANDATA, this user must be a member of the SQL
Server sysadmin server role.
4. In GGSCI, issue the following command for each table that is, or will be, in the
Extract configuration. You can use a wildcard to specify multiple table names.
ADD TRANDATA owner.table
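Putting these steps together, a minimal sketch (the schema, DSN, and table names are illustrative):
GLOBALS file contents:
GGSCHEMA ogg
In the source database, once:
CREATE SCHEMA ogg
Then in GGSCI:
DBLOGIN SOURCEDB mydsn
ADD TRANDATA dbo.orders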
Retaining the CDC Table History Data
In the preceding command, userid and password are a valid SQL Server login and
password for a user with sysadmin rights. The source database name and instance
name are specified as databasename servername\instancename; if only the server
name is listed, the default instance is connected to. The schema is the schema
name listed in the GLOBALS file with the GGSCHEMA parameter. This schema should
be the same for all
When created, the Oracle GoldenGate CDC Cleanup job is scheduled to run every ten
minutes, with a default retention period of seventy-two hours. However, the job
does not purge data for an Extract's recovery checkpoint, regardless of the
retention period. Additional information about the Oracle GoldenGate CDC Cleanup
job can be found in CDC Capture Method Operational Considerations.
In the preceding example, SOURCEDB DSN is the name of the SQL Server data
source. USERID user and PASSWORD password are the database login credentials,
required if the data source connects through SQL Server authentication.
Alternatively, USERIDALIAS alias is the alias for the credentials if they are
stored in a credentials store. If using DBLOGIN with a DSN that uses Integrated
Windows authentication, the connection to the database for the GGSCI session is
that of the user running GGSCI. In order to issue ADD TRANDATA or DELETE
TRANDATA, this user must be a member of the SQL Server sysadmin server role.
3. Create the Oracle GoldenGate checkpoint table that is used by the Replicat to
deliver data to the source database.
Example: ADD CHECKPOINTTABLE ogg.ggchkpt
It is recommended that you use the same schema name as used in the GGSCHEMA
parameter of the GLOBALS file.
4. Enable supplemental logging for the newly created checkpoint table.
Example: ADD TRANDATA ogg.ggchkpt
Enabling Bi-Directional Loop Detection
6. Configure the Extract with the IGNOREREPLICATES (on by default) and FILTERTABLE
parameters, using the Replicat’s checkpoint table for the filtering table.
TRANLOGOPTIONS IGNOREREPLICATES
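For example, assuming TRANLOGOPTIONS accepts the FILTERTABLE option as described, and using the checkpoint table name from the earlier example:
TRANLOGOPTIONS FILTERTABLE ogg.ggchkpt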
19
Requirements Summary for Classic Extract
in Archived Log Only (ALO) Mode
Oracle GoldenGate for SQL Server includes a feature to capture DML solely from
SQL Server transaction log backups. In an ALO configuration, Oracle GoldenGate
can run on the database server or, optionally, be installed and run on a middle
tier Windows server. Note that when using an ALO mode configuration, replication
has an induced lag based on the log backup interval, the time it takes to finish
writing each log backup during that interval, and the time it takes Extract to
fully process the log backup file.
Topics:
• Windows OS Requirements
• Transaction Log Backups
• ODBC Connection
• Supplemental Logging
• Operational Requirements and Considerations
– Set the middle tier server's date, time, and time zone to the same as the
primary source database server.
– Create a network share of the folder that contains the source database
transaction log backups. For example, if SQL Server writes log backups to
D:\SQLBackups, then create a share on this folder that can be accessed by the
Extract running on the middle tier Windows server.
• Oracle GoldenGate Manager must run as an account with READ permissions to
the log backup folder, the log backups, and the network share if configuring for
remote ALO mode capture.
– The default of Local System Account will work if 'Everybody' has share and
folder access (not very secure).
– Oracle recommends that you use a Windows account to run the Manager
service and control share and folder access to that account.
Note:
Tables to be captured only from the ALO mode are still required to have
Supplemental Logging enabled.
20
Requirements Summary for Capture and
Delivery of Databases in an AlwaysOn
Availability Group
Oracle GoldenGate for SQL Server features Capture support of the Primary and read-
only, synchronous mode Secondary databases of an AlwaysOn Availability Group, and
Delivery to the Primary database.
Topics:
• ODBC Connection
• Supplemental Logging
• Operational Requirements and Considerations
– If you modified the job's parameters from their default values using EXEC
sys.sp_cdc_change_job, then after adding the job to the new Primary database,
you must also re-run EXEC sys.sp_cdc_change_job against the capture job on the
new Primary database.
Note:
Consult the Microsoft documentation on how to enable the CDC Capture job
for AlwaysOn Secondary Replicas for more information.
21
Oracle GoldenGate Classic Extract for SQL
Server Standard Edition Capture
Classic Extract for Oracle GoldenGate for SQL Server is designed to capture DML
from both SQL Server Standard Edition and SQL Server Enterprise Edition.
Topics:
• Overview
• SQL Server Instance Requirements
• Table Requirements
• Supplemental Logging
• Operational Requirements and Considerations
21.1 Overview
Oracle GoldenGate for SQL Server includes Classic Capture support for SQL Server
Standard Edition. Oracle GoldenGate utilizes certain SQL Server Replication
components in order to enable supplemental logging. These SQL Server Replication
components are required to be installed and configured in order to enable
supplemental logging, and the instructions and limitations are outlined in the following
sections.
Table Requirements
The Distributor can be a local or a remote Distributor, and can be one that has already
been configured for an existing SQL Server Replication implementation. Oracle
GoldenGate does not require the distribution database to store change data, but it
must be configured in order to enable supplemental logging.
Operational Requirements and Considerations
The Article properties of the tables configured with supplemental logging do not
log data changes to the distribution database, but creating the Publication with
Articles is required to enable supplemental logging.
22
CDC Capture Method Operational
Considerations
This section provides information about the SQL Server CDC Capture options,
features, and recommended settings.
EXEC [sys].[sp_cdc_change_job]
@job_type = N'capture',
@pollinginterval = 1
GO
EXEC [sys].[sp_cdc_stop_job]
@job_type = N'capture'
GO
EXEC [sys].[sp_cdc_start_job]
@job_type = N'capture'
GO
Valid and Invalid Parameters for SQL Server Change Data Capture
TRANLOGOPTIONS LOB_CHUNK_SIZE
The Extract parameter LOB_CHUNK_SIZE is added for the CDC Capture method to
support large objects. If you have very large LOB data, you can adjust
LOB_CHUNK_SIZE from the default of 4000 bytes to a higher value, up to 65535 bytes,
so that the fetch size is increased, reducing the trips needed to fetch the entire LOB.
Example: TRANLOGOPTIONS LOB_CHUNK_SIZE 8000
TRANLOGOPTIONS MANAGECDCCLEANUP/NOMANAGECDCCLEANUP
TRANLOGOPTIONS EXCLUDEUSER/EXCLUDETRANS
The SQL Server CDC Capture job does not capture user information or transaction
names associated with a transaction, and as this information is not logged in the CDC
staging tables, Extract has no method of excluding DML from a specific user or DML of
a specific transaction name. The EXCLUDEUSER and EXCLUDETRANS parameters are
therefore not valid for the CDC Capture process.
TRANLOGOPTIONS MANAGESECONDARYTRUNCATIONPOINT/NOMANAGESECONDARYTRUNCATIONPOINT/
ACTIVESECONDARYTRUNCATIONPOINT
The SQL Server Change Data Capture job is the only process that captures data from
the transaction log when using the Oracle GoldenGate CDC Capture method.
Therefore secondary truncation point management is not handled by the Extract, and
for the Change Data Capture Extract, these parameters are not valid.
Details of the Oracle GoldenGate CDC Cleanup Process
2. On the source system, open a command prompt and change to the Oracle
GoldenGate installation folder.
3. Run the ogg_cdc_cleanup_setup.bat file, providing the following variable values:
4. When prompted, enter the name of the Extract that is to be removed, then
press the Enter/Return key to continue.
parameters to the cleanup stored procedure. You can modify the value for
@retention_minutes to adjust the data retention policy as needed, or modify the
@threshold value to increase or decrease the purge batch size. In highly
transactional environments, it may be necessary to increase the @threshold value
to a number such as 10000. Monitoring how long the job takes to run within each
cycle helps determine effective @threshold values.
Changing from Classic Extract to a CDC Extract
4. Using the Oracle GoldenGate CDC Extract installation binaries, follow the steps
listed in Preparing the Database for Oracle GoldenGate — CDC Capture to re-enable
supplemental logging and other necessary components, and re-add the heartbeat
table.
Part VII
Using Oracle GoldenGate with Teradata
Only Oracle GoldenGate release 12c (12.3.0.1) and later for Teradata support the
delivery of data from other types of databases to a Teradata database.
This part describes the tasks for configuring and running Oracle GoldenGate on a
Teradata database.
• Overview of Oracle GoldenGate for Teradata
Oracle GoldenGate for Teradata supports the filtering, mapping, and
transformation of data unless noted otherwise in this documentation.
• Understanding What's Supported for Teradata
This chapter contains support information for Oracle GoldenGate on Teradata
databases.
• Preparing the System for Oracle GoldenGate
• Configuring Oracle GoldenGate
• Common Maintenance Tasks
Table 23-1 Supported Data Types by Oracle GoldenGate, Per Teradata Version
Supported Objects and Operations for Teradata
• Enable the UseNativeLOBSupport flag in the ODBC configuration file. See the
Teradata ODBC documentation.
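A minimal sketch of the corresponding DSN entry in the ODBC configuration file; the DSN name, driver path, and host shown here are illustrative, and only the UseNativeLOBSupport line comes from the text above:

```ini
[TeradataDSN]
; Driver path and host are example values for illustration only.
Driver=/opt/teradata/client/odbc_64/lib/tdataodbc_sb64.so
DBCName=tdhost.example.com
UseNativeLOBSupport=Yes
```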
Non-Supported Operations for Teradata
• DDL
24
Preparing the System for Oracle
GoldenGate
This chapter contains guidelines for preparing the database and the system to support
Oracle GoldenGate.
Topics:
• Preparing Tables for Processing
Preparing Tables for Processing
24.1.2.1 How Oracle GoldenGate Determines the Kind of Row Identifier to Use
Unless a KEYCOLS clause is used in the TABLE or MAP statement, Oracle GoldenGate
selects a row identifier to use in the following order of priority:
1. Primary key (required for tables of a Standard Edition instance).
2. First unique key alphanumerically that does not contain a timestamp or non-
materialized computed column.
3. If neither of these key types exists, Oracle GoldenGate constructs a pseudokey of
all columns that the database allows to be used in a unique key, excluding those
that are not supported by Oracle GoldenGate in a key or those that are excluded
from the Oracle GoldenGate configuration. For SQL Server, Oracle GoldenGate
requires the row data in target tables that do not have a primary key to be less
than 8000 bytes.
Note:
If a table has no usable keys, or no keys at all, Oracle GoldenGate logs a
message to the report file. Constructing a key from all of the columns
impedes the performance of Oracle GoldenGate on the source system. On
the target, this key causes Replicat to use a larger, less efficient WHERE
clause.
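Where a reliable logical key exists in the data but is not declared in the database, a KEYCOLS clause avoids the all-column pseudokey. A sketch, with illustrative table and column names:

```
-- Extract parameter file (source): name the logical key explicitly.
TABLE dbo.orders, KEYCOLS (order_id);
-- Replicat parameter file (target): use the same columns as the row identifier.
MAP dbo.orders, TARGET dbo.orders, KEYCOLS (order_id);
```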
25
Configuring Oracle GoldenGate
This chapter describes how to configure Oracle GoldenGate Replicat.
Topics:
• Configuring Oracle GoldenGate Replicat
• Additional Oracle GoldenGate Configuration Guidelines
Use the EXTTRAIL argument to link the Replicat group to the remote trail that you
specified for the data pump on the source server.
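In GGSCI, this link is made when the group is added; a sketch with an illustrative group name and trail prefix:

```
-- GGSCI: create the Replicat group and bind it to the remote trail
-- that the data pump writes (./dirdat/rt is an example trail prefix).
ADD REPLICAT rep, EXTTRAIL ./dirdat/rt
```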
5. Use the EDIT PARAMS command to create a parameter file for the Replicat group.
Include the parameters shown in Example 25-1 plus any others that apply to your
database environment.
Example 25-1 Parameters for the Replicat Group
-- Identify the Replicat group:
REPLICAT rep
-- State whether or not source and target definitions are identical:
SOURCEDEFS {full_pathname | ASSUMETARGETDEFS}
-- Specify database login information as needed for the database:
[TARGETDB dsn2,] [USERID user_id[, PASSWORD pw]]
-- Specify error handling rules (See the NOTE following parameter file):
REPERROR (error, response)
-- Specify tables for delivery:
MAP owner.table, TARGET owner.table[, DEF template_name];
Additional Oracle GoldenGate Configuration Guidelines
Note:
In a recovery situation, it is possible that Replicat could attempt to apply
some updates twice. If a multiset table is affected, this could result in
duplicate rows being created. Use the REPERROR parameter in the Replicat
parameter file so that Replicat ignores duplicate rows.
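A sketch of such a rule; the error number shown is Teradata's duplicate-row error (2802) and should be verified against your database release before use:

```
-- Ignore duplicate-row errors so a recovery replay does not abend Replicat
-- on multiset tables (2802 is Teradata's duplicate-row error; verify it).
REPERROR (2802, IGNORE)
```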
26
Common Maintenance Tasks
This chapter contains instructions for performing some common maintenance tasks
when using the Oracle GoldenGate replication solution.
Topics:
• Modifying Columns of a Table