
Configuring Oracle GoldenGate (OGG) 11gR2 downstream integrated capture

This article walks through an example configuration of an OGG downstream
integrated capture extract in a RAC environment (RAC databases are used for
the source, the downstream and the target). It briefly reviews some new OGG
11gR2 concepts, outlines the prerequisites and steps to configure a simple
OGG replication (one extract, one remote trail and one replicat) using
downstream integrated capture, and shows how I resolved the problems that I
encountered along the way.

Introduced in OGG 11gR2, the integrated extract uses the Oracle Streams API to
integrate with an Oracle logmining server and capture data changes as logical
change records (LCRs). The transaction log capture used in OGG versions prior
to 11gR2 was renamed classic extract. Downstream integrated capture requires a
dedicated downstream database, which offloads the source database, and relies
on a redo shipping mechanism, conceptually similar to redo transport in Oracle
Data Guard, to transfer redo from the source database to the downstream
database. The downstream database runs the logmining server that mines the
redo received from the source database. Consequently, the source and
downstream mining databases must be on the same platform. For example, if the
source database runs on Linux 64-bit, the downstream database must also run
on Linux 64-bit.

Data flows as follows:

1. The extract captures data changes against the source database and writes
them to the trail file
2. The replicat applies the changes from the trail file

Zooming into the integration of the source database, the downstream database
and the OGG extract, we have the additional links:

 The source database is configured to ship redo to the downstream
database.
 The downstream database receives and archives the redo from the source
database.
 The primary extract connects to the source database for table
definition lookups and/or fetches.
 The primary extract connects to the downstream database to receive the
data changes presented as LCRs by the logmining server.

The following summarizes the environment used. A setting that is not listed
for a database is not relevant to it.

Source database:
- DB name: RACD
- DB instances: RACD1/RACD2
- Archive log mode: yes
- Local archive destination on all instances:
  LOG_ARCHIVE_DEST_1='LOCATION=+DATADG/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=RACD'
- Remote archive destination to downstream (LOG_ARCHIVE_DEST_n on all
  instances): LOG_ARCHIVE_DEST_4='SERVICE=RACDB ASYNC NOREGISTER
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) REOPEN=10 DB_UNIQUE_NAME=RACDB'
- LOG_ARCHIVE_CONFIG: 'DG_CONFIG=(RACD,RACDSTB,RACDB)'
- Extract user: OGG_EXTRACT
- Replicated table: test4.seat_booking
- OGG extract: extdown

Downstream database:
- DB name: RACDB
- DB instances: RACDB1/RACDB2
- Archive log mode: yes
- Local archive destination on all instances:
  LOG_ARCHIVE_DEST_1='LOCATION=+DATADG/ VALID_FOR=(ONLINE_LOGFILE,ALL_ROLES)'
- Archive log destination for foreign logs received from the source db:
  LOG_ARCHIVE_DEST_3='LOCATION=+DATA/ VALID_FOR=(STANDBY_LOGFILE,ALL_ROLES)'
- LOG_ARCHIVE_CONFIG: 'DG_CONFIG=(RACD,RACDSTB,RACDB)'
- Standby redo logs: yes
- Logmining user: OGG_EXTRACT

Target database:
- DB name: RACDB
- DB instances: RACDB1/RACDB2
- Archive log mode: yes
- Replicat user: OGG_REPLICAT
- Replicated table: test4.seat_booking
- OGG replicat: repdown
- OGG remote trail: ./dirdat/3x

The downstream database must be 11.2.0.3 with the patch listed in MOS Doc ID
1411356.1. The source database can be any Oracle database supported by OGG
11gR2, but its version cannot be higher than that of the downstream database.
Look at the certification matrix for detailed information on certified
combinations of OS and database, here or in MOS.

I will deploy a real-time downstream integrated extract.

The configuration of a downstream integrated capture involves steps performed
on the source database and steps performed on the downstream database.

On the source database site perform the following configurations:

1. Make sure that a tnsnames.ora alias exists

RACDB =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RACDB)
)
)

2. Set redo shipment on the source database to the downstream database for
all instances

ALTER SYSTEM SET LOG_ARCHIVE_DEST_4='SERVICE=RACDB ASYNC NOREGISTER
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) REOPEN=10
DB_UNIQUE_NAME=RACDB' scope=both sid='*';

3. Enable the destinations on the source database.

ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_4=ENABLE scope=both sid='*';

4. Include the names of the downstream database and source database in a
Data Guard configuration for redo shipment. Note that RACD is the
source db and RACDB is the downstream database. RACDSTB is an existing
standby database.

ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(RACD,RACDSTB,RACDB)'
scope=both sid='*';
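Before moving on, it is worth confirming that the remote destination actually works. The check below is my suggestion rather than part of the original steps; it forces a log switch and then inspects V$ARCHIVE_DEST_STATUS on the source:

```sql
-- Suggested verification (not in the original steps): after a log switch,
-- destination 4 should show STATUS = 'VALID' and an empty error column.
ALTER SYSTEM SWITCH LOGFILE;

SELECT dest_id, status, error
FROM   v$archive_dest_status
WHERE  dest_id = 4;
```

If the status is not VALID, the error column usually points at the cause (see the troubleshooting section for the 1031 and 16047 cases I hit).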

This article assumes that OGG 11gR2 is installed and configured and that
dedicated database users exist on the source database for the extract and on
the target database for the replicat. For an example of how to install OGG in
an Oracle RAC 11.2.0.3 cluster environment look here.

On the downstream database site perform the following configurations:

1. Make sure that the downstream database has local archiving enabled.
Explicitly specify the location if the FRA is used.

ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=+DATADG/
VALID_FOR=(ONLINE_LOGFILE,ALL_ROLES)' scope=both sid='*';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE scope=both sid='*';
2. Configure the downstream database to receive the redo from the source
database. Note that log files from all source databases must be kept
separate from the local mining database log files and from each other;
the FRA should not be used for any source log files.

ALTER SYSTEM SET LOG_ARCHIVE_DEST_3='LOCATION=+DATA/
VALID_FOR=(STANDBY_LOGFILE,ALL_ROLES)' scope=both sid='*';

3. Enable the log destination

ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3=ENABLE scope=both sid='*';

4. Add standby redo logs on the downstream database. There are two things
to consider: the standby redo logs should be at least as large as the
redo logs on the source database(s), and you should add one more
standby log group per thread (n+1) than exist on the source (n).

There are two redo log groups per thread on the source:

SQL> select * from v$log;

GROUP# THREAD# SEQUENCE#    BYTES BLOCKSIZE MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME
     1       1       352 52428800       512       2 NO  CURRENT        5374181 28-OCT-12   2.8147E+14
     2       1       351 52428800       512       2 YES INACTIVE       5374017 28-OCT-12      5374181 28-OCT-12
     3       2       112 52428800       512       2 NO  CURRENT        5374474 28-OCT-12   2.8147E+14
     4       2       111 52428800       512       2 YES ACTIVE         5347443 28-OCT-12      5374474 28-OCT-12

SQL>

I created three standby redo log groups per thread on the downstream
database, with the same size as the source redo logs.

alter database add standby logfile thread 1 group 5 size 52428800;
alter database add standby logfile thread 1 group 6 size 52428800;
alter database add standby logfile thread 1 group 7 size 52428800;
alter database add standby logfile thread 2 group 8 size 52428800;
alter database add standby logfile thread 2 group 9 size 52428800;
alter database add standby logfile thread 2 group 10 size 52428800;

SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;

GROUP# THREAD# SEQUENCE# ARC STATUS
---------- ---------- ---------- --- ----------
5 1 0 YES UNASSIGNED
6 1 0 YES UNASSIGNED
7 1 0 YES UNASSIGNED
8 2 0 YES UNASSIGNED
9 2 0 YES UNASSIGNED
10 2 0 YES UNASSIGNED

6 rows selected.
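The n+1 rule above can be sketched numerically. This is a hypothetical helper of my own, not part of the configuration; it only reproduces the arithmetic behind the six standby groups created above:

```python
# Standby redo log planning rule described above: one more standby log
# group per thread than the source has, each at least as large as the
# source redo logs.

def standby_log_plan(source_groups_per_thread: int, threads: int,
                     source_log_bytes: int) -> tuple:
    """Return (total standby groups to create, minimum group size in bytes)."""
    return (source_groups_per_thread + 1) * threads, source_log_bytes

# Source above: 2 groups per thread, 2 threads, 52428800-byte logs.
groups, size = standby_log_plan(2, 2, 52428800)
print(groups, size)  # 6 52428800 - matching the six ADD STANDBY LOGFILE commands
```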

5. Define a streams pool on the downstream database. The logmining server
uses memory from the streams pool for each integrated extract; by
default an integrated extract uses 1 GB of streams pool memory. When
running multiple integrated extracts, size the streams pool to account
for the memory requirements of each extract and the number of extracts
running. Sizing information can be found in the Oracle documents listed
in the reference section, in the Oracle Installation and Setup Guide and
in MOS Note 1485620.1.

alter system set streams_pool_size=1G scope=spfile sid='*';
alter system set memory_max_target=2G scope=spfile sid='*';
alter system set memory_target=2G scope=spfile sid='*';
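As a rough aid for multi-extract deployments, the sizing rule can be sketched as below. The per-extract MAX_SGA_SIZE consumption and the 25% headroom factor are my reading of the MOS sizing guidance, so treat the numbers as a starting point, not a formula from this article:

```python
# Rough streams_pool_size sizing for integrated extracts (assumption:
# each extract consumes its MAX_SGA_SIZE, 1 GB by default, from the
# streams pool, plus ~25% headroom overall).

def streams_pool_gb(num_extracts: int,
                    max_sga_size_gb: float = 1.0,
                    headroom: float = 0.25) -> float:
    """Suggested streams_pool_size in GB."""
    return num_extracts * max_sga_size_gb * (1.0 + headroom)

print(streams_pool_gb(1))       # single extract with default MAX_SGA_SIZE
print(streams_pool_gb(3, 0.7))  # e.g. three extracts at max_sga_size 700M
```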

6. Grant privileges to the mining user in the downstream database

SQL> exec DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE
(grantee=>'ogg_extract',
privilege_type=>'capture', grant_select_privileges=>true,
do_grants=>TRUE);

PL/SQL procedure successfully completed.

SQL>

7. Make sure that the downstream database is 11.2.0.3 with the patch
specified in MOS Doc ID 1411356.1 and COMPATIBLE='11.2.0.3'.
8. On all nodes running either the source or the downstream database,
create a password file for authentication in $ORACLE_HOME/dbs. Make
sure that you use the ignorecase=y option.

orapwd file=orapwRACDB1 password=sys1 ignorecase=y
orapwd file=orapwRACDB2 password=sys1 ignorecase=y
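The file names follow the orapw<instance> convention. The tiny helper below is purely illustrative (my own, not from the article) and just emits the commands above for each instance:

```python
# Illustrative only: emit the orapwd command for each RAC instance,
# using the orapw<SID> file-name convention and the ignorecase=y option
# needed for the downstream capture authentication described above.

def orapwd_command(sid: str, password: str = "sys1") -> str:
    return f"orapwd file=orapw{sid} password={password} ignorecase=y"

for sid in ("RACDB1", "RACDB2"):
    print(orapwd_command(sid))
```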

9. Done.

On the source OGG instance perform the following configurations.

1. Create a test table for the replication on the source and target
database.

create user test4 identified by test4;
grant connect, resource, dba to test4;

create table test4.seat_booking (
id number(10) primary key,
passenger_name varchar2(50),
timestamp date,
flight_no number(10),
seat_no varchar2(19),
flight_time date);

2. Add supplemental logging for the table


GGSCI (raclinux1.gj.com) 3> dblogin userid ogg_extract, password ogg_extract
Successfully logged into database.

GGSCI (raclinux1.gj.com) 4> add trandata test4.seat_booking

Logging of supplemental redo log data is already enabled for table
TEST4.SEAT_BOOKING.

GGSCI (raclinux1.gj.com) 5>

3. Register the extract with the downstream database

GGSCI (raclinux1.gj.com) 5> dblogin userid ogg_extract, password ogg_extract
Successfully logged into database.

GGSCI (raclinux1.gj.com) 6> miningdblogin userid ogg_extract@racdb, password ogg_extract
Successfully logged into mining database.

GGSCI (raclinux1.gj.com) 7> register extract extdown database

2012-10-28 16:46:33 INFO OGG-02003 Extract EXTDOWN
successfully registered with database at SCN 5442542.

GGSCI (raclinux1.gj.com) 8>
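Registering the extract creates a capture named OGG$CAP_EXTDOWN in the downstream database (the name shows up later in the Appendix alert log). If you want to verify the registration on the downstream side, a query such as the following should show it; this check is my addition, not part of the original steps:

```sql
-- Suggested check on the downstream (mining) database after
-- REGISTER EXTRACT: the OGG$CAP_EXTDOWN capture should be listed.
SELECT capture_name, capture_user, status
FROM   dba_capture;
```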

4. Create an extract parameter file for extract extdown

GGSCI (raclinux1.gj.com) 83> view params extdown

extract extdown
SETENV (ORACLE_SID = "RACD1")
--TRANLOGOPTIONS INTEGRATEDPARAMS (max_sga_size 700, parallelism 3)
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
userid ogg_extract, password ogg_extract
TRANLOGOPTIONS MININGUSER ogg_extract@racdb MININGPASSWORD ogg_extract
rmthost raclinux1, mgrport 7809
rmttrail ./dirdat/3x
table test4.seat_booking;

GGSCI (raclinux1.gj.com) 84>

5. Add an integrated extract

GGSCI (raclinux1.gj.com) 11> add extract extdown, integrated tranlog, begin now
EXTRACT added.

GGSCI (raclinux1.gj.com) 12>

6. Add a remote trail

GGSCI (raclinux1.gj.com) 12> add rmttrail ./dirdat/3x, extract extdown megabytes 20
RMTTRAIL added.

GGSCI (raclinux1.gj.com) 13>

7. Create a replicat parameter file

GGSCI (raclinux1.gj.com) 2> view params repdown

replicat repdown
--reperror(default,ignore)
SETENV (ORACLE_SID = "RACDB1")
userid ogg_replicat, password ogg_replicat
assumetargetdefs
discardfile ./dirrpt/repdown.dsc, purge
map test4.seat_booking, target test4.seat_booking;

GGSCI (raclinux1.gj.com) 3>

8. Add a Replicat

GGSCI (raclinux1.gj.com) 3> add replicat repdown, exttrail ./dirdat/3x
REPLICAT added.

GGSCI (raclinux1.gj.com) 4>

9. Start the extract and replicat

GGSCI (raclinux1.gj.com) 18> start extract extdown

Sending START request to MANAGER ...
EXTRACT EXTDOWN starting

GGSCI (raclinux1.gj.com) 19> info extract extdown

EXTRACT EXTDOWN Initialized 2012-10-28 16:50 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:05:53 ago)
Log Read Checkpoint Oracle Integrated Redo Logs
2012-10-28 16:50:29
SCN 0.0 (0)

GGSCI (raclinux1.gj.com) 20>


GGSCI (raclinux1.gj.com) 4> start replicat repdown

Sending START request to MANAGER ...
REPLICAT REPDOWN starting

GGSCI (raclinux1.gj.com) 5> info replicat repdown

REPLICAT REPDOWN Last Started 2012-10-28 16:58 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:01 ago)
Log Read Checkpoint File ./dirdat/3x000000
First Record RBA 0

GGSCI (raclinux1.gj.com) 6>

10. Testing the OGG configuration

Insert the following records:

On RACD1

insert into test4.seat_booking values(1,'',sysdate,200,'B1',sysdate);
insert into test4.seat_booking values(2,'',sysdate,200,'B2',sysdate);
insert into test4.seat_booking values(3,'',sysdate,200,'B3',sysdate);
commit;

On RACD2

insert into test4.seat_booking values(4,'',sysdate,200,'B4',sysdate);
insert into test4.seat_booking values(5,'',sysdate,200,'B5',sysdate);
insert into test4.seat_booking values(6,'',sysdate,200,'B6',sysdate);
commit;

Verify the result on the target

SQL> select * from test4.seat_booking;

ID PASSENGER_NAME TIMESTAMP
---------- -------------------------------------------------- ---------
FLIGHT_NO SEAT_NO FLIGHT_TI
---------- ------------------- ---------
1 28-OCT-12
200 B1 28-OCT-12

2 28-OCT-12
200 B2 28-OCT-12

3 28-OCT-12
200 B3 28-OCT-12

ID PASSENGER_NAME TIMESTAMP
---------- -------------------------------------------------- ---------
FLIGHT_NO SEAT_NO FLIGHT_TI
---------- ------------------- ---------
4 28-OCT-12
200 B4 28-OCT-12

5 28-OCT-12
200 B5 28-OCT-12

6 28-OCT-12
200 B6 28-OCT-12

6 rows selected.

SQL>

Issue updates on the source.

update test4.seat_booking set passenger_name='John Smith' where seat_no='B4';
update test4.seat_booking set passenger_name='Alejandro García' where seat_no='B5';
update test4.seat_booking set passenger_name='Frank Meyer' where seat_no='B6';
update test4.seat_booking set passenger_name='Scott Tiger' where seat_no='B3';
commit;

Verify the result on the target


SQL> select * from test4.seat_booking;

ID PASSENGER_NAME TIMESTAMP
---------- -------------------------------------------------- ---------
FLIGHT_NO SEAT_NO FLIGHT_TI
---------- ------------------- ---------
5 Alejandro Garc??a 28-OCT-12
200 B5 28-OCT-12

6 Frank Meyer 28-OCT-12

1 28-OCT-12
200 B1 28-OCT-12

ID PASSENGER_NAME TIMESTAMP
---------- -------------------------------------------------- ---------
FLIGHT_NO SEAT_NO FLIGHT_TI
---------- ------------------- ---------
2 28-OCT-12
200 B2 28-OCT-12

3 Scott Tiger 28-OCT-12

4 John Smith 28-OCT-12

6 rows selected.

Issue a delete on the source.

SQL> delete from test4.seat_booking;

6 rows deleted.

SQL> commit;

Commit complete.

SQL>

Verify the result on the target

SQL> select * from test4.seat_booking;

no rows selected

SQL>

The test confirms that OGG is configured successfully. Look at the
Appendix for a snippet of the downstream database alert log reflecting
OGG mining.

11. Troubleshooting tips and tricks

Make sure that redo is successfully shipped to the downstream database
and archived. Pay attention to the locations, the service definition and
the password file authentication. A useful way to troubleshoot redo
transport is adrci> show alert -tail -f

I encountered the following issues:

1. Issue 1

2012-10-28 17:18:51 ERROR OGG-02028 Failed to attach to
logmining server OGG$EXTDOWN error 26926 - ORA-26926: Unable
to attach because capture "OGG$CAP_EXTDOWN" is missing
dictionary redo logs.

2012-10-28 17:18:51 ERROR OGG-01668 PROCESS ABENDING.

Further investigation revealed redo shipment errors in the source
database alert log file:

Error 1031 received logging on to the standby
PING[ARC2]: Heartbeat failed to connect to standby 'RACDB'. Error is 1031.
2012-10-28 17:38:54.477000 +02:00
Error 1031 received logging on to the standby
PING[ARC2]: Heartbeat failed to connect to standby 'RACDB'. Error is 1031.
2012-10-28 17:39:55.407000 +02:00
Error 1031 received logging on to the standby
PING[ARC2]: Heartbeat failed to connect to standby 'RACDB'. Error is 1031.

Check the password file authentication. Make sure that the ignorecase=y
option is used when creating the password files.

2. Issue 2

2012-10-28 16:57:43 ERROR OGG-02050 Not enough database memory
to honor requested MAX_SGA_SIZE of 300.

2012-10-28 16:57:43 ERROR OGG-01668 PROCESS ABENDING

Somehow, omitting the explicit specification of max_sga_size and
parallelism resolved the issue; I commented out the following parameter
in the extract parameter file:

--TRANLOGOPTIONS INTEGRATEDPARAMS (max_sga_size 700, parallelism 3)

3. Issue 3

LOGMINER: End mining logfile for session 2 thread 1 sequence 369,
+DATA/racd/onlinelog/group_2.1143.787360855
2012-10-28 17:45:41.009000 +02:00
FAL[server, ARCt]: Error 16047 creating remote archivelog file 'RACDB'
FAL[server, ARCt]: FAL archive failed, see trace file.
ARCH: FAL archive failed. Archiver continuing
ORACLE Instance RACD1 - Archival Error. Archiver continuing.
LOGMINER: Begin mining logfile for session 2 thread 1 sequence 370,
+DATA/racd/onlinelog/group_1.1142.787360841
2012-10-28 17:46:00.723000 +02:00
PING[ARC2]: Heartbeat failed to connect to standby 'RACDB'. Error is 16047.

Set the following on the downstream database:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(RACD,RACDSTB,RACDB)'
scope=both sid='*';

System altered.

SQL>

4. Issue 4

2012-10-28 19:00:01 ERROR OGG-02032 Failed to set
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine, Y)
Do not split TRANLOGOPTIONS INTEGRATEDPARAMS across two lines; keep it
on one line. I also commented out the parameter for explicit sizing and
ran with the default size.

--TRANLOGOPTIONS INTEGRATEDPARAMS (max_sga_size 700, parallelism 3)
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)

5. Issue 5

BEGIN DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE
(grantee=>'ogg_extract',
privilege_type=>'capture',grant_select_privileges=>true,
do_grants=>TRUE); END;

*
ERROR at line 1:
ORA-04031: unable to allocate 45664 bytes of shared memory
("shared
pool","GGS_DDL_TRIGGER_BEFORE","KKTSR^81131d18","trigger source")
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 2268
ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 24
ORA-06512: at line 1

SQL>

Make sure that you have a proper memory allocation at the database level
and at the OS level (/dev/shm) if OEL 5.x is used.
6. Issue 6
Errors in file /u01/app/oracle/diag/rdbms/racd/RACD1/trace/RACD1_m000_10405.trc:
ORA-04031: unable to allocate 56 bytes of shared memory ("shared pool","unknown
object","KKSSP^98","kglseshtSegs")
Process m000 died, see its trace file
2012-10-28 14:32:16.520000 +02:00
Dumping diagnostic data in directory=[cdmp_20121028143222], requested by
(instance=2, osid=20338 (M000)), summary=[incident=88665].
2012-10-28 14:32:42.376000 +02:00
Errors in file /u01/app/oracle/diag/rdbms/racd/RACD1/trace/RACD1_mmon_9935.trc:
ORA-04031: unable to allocate 1072 bytes of shared memory ("shared pool","select
o.owner#,o.name,o.nam...","sga heap(1,1)","KQR L PO")
2012-10-28 14:32:49.260000 +02:00
Errors in file /u01/app/oracle/diag/rdbms/racd/RACD1/trace/RACD1_mmon_9935.trc:
ORA-04031: unable to allocate 1072 bytes of shared memory ("shared pool","select
i.obj#,i.ts#,i.file#,...","sga heap(1,1)","KQR L PO")
2012-10-28 14:33:08.989000 +02:00
Errors in file /u01/app/oracle/diag/rdbms/racd/RACD1/trace/RACD1_m001_10479.trc:
ORA-04031: unable to allocate 760 bytes of shared memory ("shared pool","unknown
object","KKSSP^29","kglss")
Process m001 died, see its trace file
2012-10-28 14:33:23.739000 +02:00
Starting background process CJQ0
CJQ0 started with pid=29, OS id=10494
2012-10-28 14:33:30.072000 +02:00
Restarting dead background process CJQ0
CJQ0 started with pid=29, OS id=10518
2012-10-28 14:33:33.473000 +02:00
Starting ORACLE instance (normal)
2012-10-28 14:33:37.766000 +02:00
Errors in file /u01/app/oracle/diag/rdbms/racd/RACD1/trace/RACD1_smon_9927.trc:
ORA-00604: error occurred at recursive SQL level 2
ORA-04031: unable to allocate 1072 bytes of shared memory ("shared pool","select
i.obj#,i.ts#,i.file#,...","sga heap(1,1)","KQR L PO")

Make sure that you have a proper memory allocation at the database level
and at the OS level (/dev/shm) if OEL 5.x is used.

Summary

You looked at an example of configuring an OGG downstream integrated
extract and at how to troubleshoot some issues that can be encountered
during the configuration.

Appendix
GoldenGate downstream capture: turning on downstream_real_time_mine

GoldenGate process OGG$CAP_EXTDOWN altered (6) on instance 2

2012-10-28 19:35:08.447000 +02:00

GoldenGate CAPTURE CP01 for OGG$CAP_EXTDOWN started with pid=35, OS id=10691


2012-10-28 19:35:10.243000 +02:00

GoldenGate CAPTURE CP01 for OGG$CAP_EXTDOWN with pid=35, OS id=10691 is in combined capture and
apply mode.

Capture OGG$CAP_EXTDOWN is handling 1 applies.

-- capture is running in apply-state checkpoint mode.

GoldenGate downstream capture OGG$CAP_EXTDOWN uses downstream_real_time_mine: TRUE

Starting persistent Logminer Session with sid = 1 for GoldenGate Capture OGG$CAP_EXTDOWN

2012-10-28 19:35:12.407000 +02:00

LOGMINER: Parameters summary for session# = 1

LOGMINER: Number of processes = 4, Transaction Chunk Size = 1

LOGMINER: Memory Size = 512M, Checkpoint interval = 1000M

LOGMINER: SpillScn 0, ResetLogScn 1184647

LOGMINER: summary for session# = 1

LOGMINER: StartScn: 5511601 (0x0000.005419b1)

LOGMINER: EndScn: 0

LOGMINER: HighConsumedScn: 5511601 (0x0000.005419b1)

LOGMINER: session_flag: 0xf1

LOGMINER: DDL CKPT is on.

LOGMINER: Read buffers: 64

LOGMINER: Memory LWM limit: 10M, 97%

LOGMINER: Memory Release Limit: 15000

LOGMINER: LowCkptScn: 5510405 (0x0000.00541505)

LOGMINER: HighCkptScn: 0 (0x0000.00000000)

LOGMINER: SkipScn: 5510405 (0x0000.00541505)

2012-10-28 19:35:14.805000 +02:00

LOGMINER: session#=1 (OGG$CAP_EXTDOWN), reader MS00 pid=37 OS id=10696 sid=34 started

LOGMINER: session#=1 (OGG$CAP_EXTDOWN), builder MS01 pid=67 OS id=10698 sid=96 started

LOGMINER: session#=1 (OGG$CAP_EXTDOWN), preparer MS02 pid=68 OS id=10700 sid=94 started

LOGMINER: session#=1 (OGG$CAP_EXTDOWN), preparer MS03 pid=69 OS id=10702 sid=88 started

2012-10-28 19:35:15.988000 +02:00

LOGMINER: Begin mining logfile for session 1 thread 1 sequence 383,
+DATA/racdb/archivelog/2012_10_28/thread_1_seq_383.1939.797887839
LOGMINER: Begin mining logfile for session 1 thread 2 sequence 139,
+DATA/racdb/archivelog/2012_10_28/thread_2_seq_139.1938.797887875
LOGMINER: End mining logfile for session 1 thread 1 sequence 383,
+DATA/racdb/archivelog/2012_10_28/thread_1_seq_383.1939.797887839

2012-10-28 19:35:17.352000 +02:00

LOGMINER: Begin mining logfile for session 1 thread 1 sequence 384,
+DATA/racdb/archivelog/2012_10_28/thread_1_seq_384.1943.797887949
LOGMINER: End mining logfile for session 1 thread 2 sequence 139,
+DATA/racdb/archivelog/2012_10_28/thread_2_seq_139.1938.797887875
LOGMINER: Begin mining logfile for session 1 thread 2 sequence 140,
+DATA/racdb/archivelog/2012_10_28/thread_2_seq_140.1942.797887985
LOGMINER: End mining logfile for session 1 thread 2 sequence 140,
+DATA/racdb/archivelog/2012_10_28/thread_2_seq_140.1942.797887985
LOGMINER: Begin mining logfile for session 1 thread 2 sequence 141,
+DATA/racdb/archivelog/2012_10_28/thread_2_seq_141.1944.797888041
LOGMINER: End mining logfile for session 1 thread 1 sequence 384,
+DATA/racdb/archivelog/2012_10_28/thread_1_seq_384.1943.797887949
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 385,
+DATA/racdb/archivelog/2012_10_28/thread_1_seq_385.1947.797888017
LOGMINER: End mining logfile for session 1 thread 2 sequence 141,
+DATA/racdb/archivelog/2012_10_28/thread_2_seq_141.1944.797888041

2012-10-28 19:35:20.490000 +02:00

LOGMINER: session#=1 (OGG$CAP_EXTDOWN), builder MS01 pid=67 OS id=10698 sid=96 stopped

LOGMINER: session#=1 (OGG$CAP_EXTDOWN), preparer MS02 pid=68 OS id=10700 sid=94 stopped

LOGMINER: session#=1 (OGG$CAP_EXTDOWN), preparer MS03 pid=69 OS id=10702 sid=88 stopped

LOGMINER: session#=1 (OGG$CAP_EXTDOWN), reader MS00 pid=37 OS id=10696 sid=34 stopped

GoldenGate CAPTURE CP01 for OGG$CAP_EXTDOWN with pid=35, OS id=10691 stopped

2012-10-28 19:36:25.480000 +02:00

RFS[7]: Assigned to RFS process 10770

RFS[7]: Selected log 8 for thread 2 sequence 143 dbid 1643830466 branch 787454697

2012-10-28 19:36:38.259000 +02:00

RFS[8]: Assigned to RFS process 10790

RFS[8]: Selected log 9 for thread 2 sequence 142 dbid 1643830466 branch 787454697

2012-10-28 19:36:41.927000 +02:00

RFS LogMiner: Registered logfile
[+DATA/racdb/archivelog/2012_10_28/thread_2_seq_142.1950.797888201] to LogMiner session id [1]

Reference

1. Best Practice - Oracle GoldenGate 11gR2 integrated extract and Oracle
Data Guard - Switchover/Fail-over Operations, Version 1.1, Document ID
1436913.1
2. Oracle GoldenGate Best Practices: Configuring Downstream Integrated
Extract, Document ID 1485620.1
3. Oracle GoldenGate Oracle Installation and Setup Guide, Release 11.2.1,
E35957-01
4. OGG Administrator's Guide
5. OGG Reference Guide
6. Oracle 11.2 Data Guard Concepts and Administration - here
