

How do you start your day as a DBA?


Explain the Oracle Architecture
How do you switch environments between two databases, i.e. if you want to log in from one
instance to another?
How do you start the database in restricted session?
How do you check the SYSAUX tablespace occupants?
Differentiate the startup options in Oracle DB.
Differentiate the shutdown options in Oracle DB.
How do you create a tablespace?
What are the default tablespaces in Oracle 11g/12c?
What is AWR in Oracle?
What is ADDM?
What are the 2 types of RMAN backups?
What are the general steps in tuning database performance?
What are the advantages of using ASM over a regular file system?
What is the worst scenario you have experienced as an Oracle DBA?

Top 50 Exadata Interview Questions
Getting ready for an Exadata interview?
Make sure to refresh your knowledge by reviewing this list of Exadata Interview Questions.
What environment is a good fit for Exadata?

Exadata was originally designed for the warehouse environment. Later it was enhanced for use in
the OLTP databases as well.
What are the advantages of Exadata?

The Exadata cluster provides consistent performance while allowing for increased throughput.
As load increases on the cluster, performance remains consistent by utilizing inter-instance
and intra-instance parallelism.
It should not be expected that just moving to Exadata will improve performance, although in
most cases it will, especially if the current database host is overloaded.
What is the secret behind Exadata's higher throughput?

Exadata ships less data through the pipes between the storage and the database nodes and other
nodes in the RAC cluster.
Its ability to achieve massive parallelism, by running parallel processes across all the nodes in
the cluster, also gives it a much higher level of throughput.

It also has much bigger pipes in the cluster, using the InfiniBand interconnect for inter-instance
data block transfers at as much as 5x the bandwidth of Fibre Channel networks.
What are the different Exadata configurations?

The Exadata appliance configuration comes as a Full Rack, Half Rack, Quarter Rack or 1/8th
Rack.
The Full Rack X2-2 has a total of 8 database server nodes, each with two six-core Intel Xeon
X5670 processors and 96GB of memory. A total of 14 storage server cells communicate with
the storage and push the requested data from the storage to the compute nodes.
The Half Rack has exactly half the capacity: a total of 4 database server nodes with the same
six-core Intel Xeon X5670 processors and 96GB of memory per database server node, and a
total of 7 storage server cells.
Exadata is also available in the 1/8th Rack configuration.
What are the key Hardware components?

DB Server

Storage Server Cells

High Speed Infiniband Switch

Cisco Switch

What are the Key Software Features?

Smart Scan

Smart Flash Cache

Storage Index

Exadata Hybrid Columnar Compression (EHCC)

IORM (I/O Resource Manager)

What is a Cell and Grid Disk?

Cell and Grid Disks are logical components of the physical Exadata storage. A cell, or Exadata
Storage Server cell, is a combination of disk drives put together to store user data. Each cell
disk corresponds to a LUN (Logical Unit) which has been formatted by the Exadata Storage
Server software. Typically, each cell has 12 disk drives mapped to it.

Grid disks are created on top of cell disks and are presented to Oracle ASM as ASM disks.
Space is allocated in chunks from the outer tracks of the cell disk moving inward. One can
have multiple grid disks per cell disk.
What is IORM?

IORM stands for I/O Resource Manager.


It manages I/O demand based on the configuration and the amount of resources available. It
ensures that none of the I/O cells become oversubscribed with I/O requests. This is achieved
by managing the incoming requests at the consumer-group level.
Using IORM, you can divide the I/O bandwidth between multiple databases.
To implement IORM, resource groups, consumers and plans need to be created first.
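A hedged illustration of an inter-database IORM plan set at the cell level (the database names
and allocation percentages below are hypothetical):
CellCLI> ALTER IORMPLAN dbplan=((name=prod, level=1, allocation=75), (name=dev, level=2, allocation=25))
CellCLI> LIST IORMPLAN DETAIL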
What is hybrid columnar compression?

Hybrid Columnar Compression, also called HCC, is a feature of Exadata which is used for
compressing data at the column level for a table.
It creates compression units which consist of logical groupings of column values, typically
spanning several data blocks. Each data block holds data from the columns for multiple rows.
This algorithm has the potential to reduce the storage used by the data and to reduce disk I/O,
enhancing performance for queries.
The different types of HCC compression include:
Query Low
Query High
Archive Low
Archive High
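For illustration, HCC is enabled with the COMPRESS FOR clause at table creation or during a
reorganization (the table names here are hypothetical):
SQL> CREATE TABLE sales_hist COMPRESS FOR QUERY HIGH AS SELECT * FROM sales;
SQL> ALTER TABLE sales_hist MOVE COMPRESS FOR ARCHIVE LOW;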
What is Flash cache?

Four 96GB PCIe flash memory cards are present on each Exadata Storage Server cell, providing
very fast access to the data stored on them.
The flash cache reduces data access latency by retrieving data from flash memory rather than
having to access it from disk. A total flash storage of 384GB per cell is available on the
Exadata appliance.

What is Smart Scan?

It is a feature of the Exadata Software which enhances the database performance many times
over. It processes queries in an intelligent way, retrieving specific rows rather than the complete
blocks.
It applies filtering criteria at the storage level based on the selection criteria specified in the
query.
It also performs column projection which is a process of sending only required columns for the
query back to the database host/instance.
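One way to gauge whether Smart Scan is kicking in (a sketch; these are standard statistic names
on Exadata-enabled databases) is to compare the bytes eligible for offload with the bytes
actually returned by smart scan:
SQL> SELECT name, value FROM v$sysstat
     WHERE name IN ('cell physical IO bytes eligible for predicate offload',
                    'cell physical IO interconnect bytes returned by smart scan');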
What are the parallelism instance parameters used in Exadata?

The parameter PARALLEL_FORCE_LOCAL can be specified at the session level for a
particular job.
How do you test the performance of Exadata?

You can use the CALIBRATE command at the CellCLI command line.
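For example, assuming you are logged on to a storage cell, the FORCE option lets the
calibration run while CELLSRV is up (expect additional I/O load on the cell while it runs):
CellCLI> CALIBRATE FORCE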
What are the ways to migrate onto Exadata?

Depending on the downtime allowed there are several options:

Oracle DataGuard

Traditional Export/Import

Tablespace transportation

Goldengate Replication after a data restore onto Exadata.

What types of operations does Exadata offload?

Some of the operations that are offloaded from the database host to the cell servers are:

Predicate filtering

Column projection filtering

Join processing

Backups

What is cellcli?

This is the command line utility used to manage the cell storage.
How do you obtain info on the cell disks?

At the cellcli command line you can issue the list celldisk command.
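For example (the cell disk name in the second command is hypothetical):
CellCLI> LIST CELLDISK
CellCLI> LIST CELLDISK CD_03_cell01 DETAIL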

How would you create a grid disk?

At the cellcli command line you would need to issue the create griddisk all ... command.
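A hedged example, with the prefix and size chosen arbitrarily:
CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=DATA, size=300G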
What are the cellinit.ora and the cellip.ora files used for?

cellinit.ora contains the interconnect IP address of the local database server, while cellip.ora
lists the IP addresses of the storage cells. They are used to run commands on remote database
and cell server nodes from a local host.
Example:
cat /etc/oracle/cell/network-config/cellinit.ora
ipaddress1=192.168.47.21/24
$ cat /etc/oracle/cell/network-config/cellip.ora
cell=192.168.47.21:5042
cell=192.168.47.22:5042
cell=192.168.47.23:5042
What operating systems does Exadata support?

Exadata has traditionally run Oracle Linux OS. Recently, Solaris has also been made available on
this engineered system.

Top 30 RAC Interview Questions That Helped Me. Are You
Prepared?
Getting ready for a RAC interview? Make sure to refresh your knowledge by reviewing this list
of RAC Interview Questions.
What is cache fusion?

In a RAC environment, it is the combining of data blocks, which are shipped across the
interconnect from remote database caches (SGA) to the local node, in order to fulfill the
requirements for a transaction (DML, Query of Data Dictionary).
What is split brain?

When database nodes in a cluster are unable to communicate with each other, they may continue
to process and modify the data blocks independently. If the
same block is modified by more than one instance, synchronization/locking of the data blocks
does not take place and blocks may be overwritten by others in the cluster. This state is called
split brain.

What is the difference between Crash recovery and Instance recovery?

When an instance in a single-node database crashes, crash recovery takes place on the next
startup. In a RAC environment the same recovery for a failed instance is performed by the
surviving nodes; this is called instance recovery.
What is the interconnect used for?

It is a private network which is used to ship data blocks from one instance to another for cache
fusion. The physical data blocks as well as data dictionary blocks are shared across this
interconnect.
How do you determine what protocol is being used for Interconnect traffic?

One of the ways is to look at the database alert log for the time period when the database was
started up.
What methods are available to keep the time synchronized on all nodes in the
cluster?

Either the Network Time Protocol (NTP) can be configured or, in 11gR2, the Cluster Time
Synchronization Service (CTSS) can be used.
What file components in RAC must reside on shared storage?

Spfiles, ControlFiles, Datafiles and Redolog files should be created on shared storage.
Where does the Clusterware write when there is a network or Storage missed
heartbeat?

Network heartbeat (ping) failures are written to the Clusterware logs under $CRS_HOME/log.


How do you find out what OCR backups are available?

The ocrconfig -showbackup command can be run to find out the automatic and manually run backups.
If your OCR is corrupted what options do have to resolve this?

You can use either the logical or the physical OCR backup copy to restore the Repository.
How do you find out which object has its blocks shipped across instances the most?

You can use the DBA_HIST_SEG_STAT view.
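A sketch of such a query (joining to DBA_HIST_SEG_STAT_OBJ to resolve object names;
column names as found in the AWR segment-statistics views):
SQL> SELECT o.object_name, SUM(s.gc_cr_blocks_received_delta) cr_blocks_shipped
     FROM dba_hist_seg_stat s, dba_hist_seg_stat_obj o
     WHERE s.obj# = o.obj#
     GROUP BY o.object_name
     ORDER BY cr_blocks_shipped DESC;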


What is a VIP in RAC used for?

The VIP is an alternate Virtual IP address assigned to each node in a cluster. During a node
failure the VIP of the failed node moves to the surviving node and relays to the application that
the node has gone down. Without VIP, the application will wait for TCP timeout and then find
out that the session is no longer live due to the failure.
How do we know which database instances are part of a RAC cluster?

You can query the V$ACTIVE_INSTANCES view to determine the member instances of the
RAC cluster.
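For example:
SQL> SELECT inst_number, inst_name FROM v$active_instances;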

What is OCLUMON used for in a cluster environment?

The Cluster Health Monitor (CHM) stores operating system metrics in the CHM repository for
all nodes in a RAC cluster. It stores information on CPU, memory, process, network and other
OS data. This information can later be retrieved and used to troubleshoot and identify any
cluster-related issues. It is a default component of the 11gR2 grid install. The data is stored in
the master repository and replicated to a standby repository on a different node.
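For example, to dump the node views collected over the last 10 minutes for all nodes:
$ oclumon dumpnodeview -allnodes -last "00:10:00"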
What would be the possible performance impact in a cluster if a less powerful node
(e.g. slower CPUs) is added to the cluster?

All processing will slow down to the speed of the slowest server.
What is the purpose of OLR?

The Oracle Local Registry (OLR) contains information that allows the cluster processes to be
started up with the OCR being in the ASM storage system. Since the ASM file system is
unavailable until the Grid processes are started up, a local copy of the contents of the OCR is
required; this is stored in the OLR.
What is the default memory allocation for ASM?

In 10g the default SGA size is 1G; in 11g it is set to 256M; and in 12c ASM it is set back to 1G.
How do you backup ASM Metadata?

You can use md_backup to back up the ASM diskgroup metadata, and md_restore to re-create
the diskgroup configuration in case of ASM diskgroup storage loss.
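A minimal sketch, assuming a diskgroup named DATA and an arbitrary backup file location:
ASMCMD> md_backup /tmp/data_dg_metadata -G DATA
ASMCMD> md_restore /tmp/data_dg_metadata --full -G DATA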
What files can be stored in the ASM diskgroup?

In 11g the following files can be stored in ASM diskgroups.

Datafiles

Redo logfiles

Spfiles

In 12c the files below can also now be stored in the ASM diskgroup:

Password file

What is the ASM POWER_LIMIT?

This is the parameter which controls the number of allocation units the ASM instance will try to
rebalance at any given time. The default value is 1. In ASM versions before 11.2.0.2 the
maximum value is 11; in later versions it can be set as high as 1024.
What is a rolling upgrade?

A patch is considered rolling if it can be applied to the cluster binaries without having to shut
down the database in a RAC environment. All nodes in the cluster are patched in a rolling
manner, one by one, with only the node which is being patched unavailable while all other
instances remain open.
What are some of the RAC specific parameters?

Some of the RAC parameters are:

CLUSTER_DATABASE

CLUSTER_DATABASE_INSTANCES

INSTANCE_TYPE (RDBMS or ASM)

ACTIVE_INSTANCE_COUNT

UNDO_MANAGEMENT

What is the future of the Oracle Grid?

The Grid software is becoming more and more capable of supporting HA not just for Oracle
Databases but also for other applications, including Oracle's applications. With 12c there are
more features and functionality built in, and it is easier to deploy these pre-built solutions,
available for common Oracle applications.
What components of the Grid should I back up?

The backups should include OLR, OCR and ASM Metadata.


Is there an easy way to verify the inventory for all remote nodes?

You can run the opatch lsinventory -all_nodes command from a single node to look at the
inventory details for all nodes in the cluster.

Golden Gate:
What types of topology does Oracle GoldenGate support?
GoldenGate supports the following topologies:

Unidirectional

Bidirectional

Peer-to-peer

Broadcast

Consolidation

Cascading

What are the main components of the Oracle Goldengate replication?


The replication configuration consists of the following processes.

Manager

Extract

Pump

Replicat

What databases does Oracle GoldenGate support for replication?

Oracle Database

TimesTen

MySQL

IBM DB2

Microsoft SQL Server

Informix

Teradata

Sybase

Enscribe

SQL/MX


What transaction types does Oracle Goldengate support for Replication?

Goldengate supports both DML and DDL Replication from the source to target.

What are the supplemental logging pre-requisites?


The following supplemental logging is required.

Database supplemental logging

Object level logging
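For illustration, database-level supplemental logging plus object-level logging for one table
(the schema, table, and credentials here are hypothetical):
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
GGSCI> DBLOGIN USERID ggadmin, PASSWORD oracle
GGSCI> ADD TRANDATA scott.emp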

Why is Supplemental logging required for Replication?


When a transaction is committed on the source database, only the new data is written to the redo
log. However, for Oracle to apply these transactions on the destination database, the before-image
key values are required to identify the affected rows. This data is also placed in the trail file and
used to identify the rows on the destination; using the key values, the transactions are executed
against them.

List important considerations for bi-directional replication?


The customer should consider the following points in an active-active replication environment.

Primary Key: helps to identify conflicts and resolve them.

Sequences: are not supported. The workaround is to use odd/even, range, or concatenated
sequences.

Triggers: these should be disabled or suppressed to avoid uniqueness issues.

Data Looping: this can be easily avoided using OGG itself.

LAG: this should be minimized. If a customer says that there will not be any lag due to
network or heavy load, then we don't need to deploy CDR. But this is not always the case,
as there would be some lag, and it can cause conflicts.

CDR (Conflict Detection & Resolution): OGG has built-in CDR for all kinds of DML
that can be used to detect and resolve conflicts.

Packaged Applications: these are not supported, as they may contain data types which are
not supported by OGG, or the vendor may not allow the application modifications needed
to work with OGG.

Are OGG binaries supported on ASM Cluster File System (ACFS)?


Yes, you can install and configure OGG on ACFS.

Are OGG binaries supported on the Database File System (DBFS)? What files
can be stored in DBFS?
No, OGG binaries are not supported on DBFS. You can however store parameter files, data files
(trail files), and checkpoint files on DBFS.

What is the default location of the GLOBALS file?


A GLOBALS file is located under the Oracle GoldenGate installation directory (OGG_HOME).

Where can filtering of data for a column be configured?


Filtering of the columns of a table can be set at the Extract, Pump or Replicat level.

Is it a requirement to configure a PUMP extract process in OGG replication?


A PUMP extract is optional, but it is highly recommended to use one to safeguard against
network failures. Normally it is configured when you are setting up OGG replication across the
network.

What are the differences between the Classic and integrated Capture?
Classic Capture:

The Classic Capture mode is the traditional Extract process that accesses the database
redo logs (optionally archive logs) to capture the DML changes occurring on the objects
specified in the parameter files.

At the OS level, the GoldenGate user must be a part of the same database group which
owns the database redo logs.

This capture mode is available for other RDBMS as well.

There are some data types that are not supported in Classic Capture mode.

Classic capture can't read data from compressed tables/tablespaces.

Integrated Capture (IC):

In the Integrated Capture mode, GoldenGate works directly with the database log mining
server to receive the data changes in the form of logical change records (LCRs).

IC mode does not require any special setup for the databases using ASM, transparent data
encryption, or Oracle RAC.

This feature is only available for Oracle databases version 11.2.0.3 or higher.

It also supports various object types which were previously not supported by Classic
Capture.

This Capture mode supports extracting data from source databases using compression.

Integrated Capture can be configured in an online or downstream mode.

List the minimum parameters that can be used to create the extract process?
The following are the minimum required parameters which must be defined in the extract
parameter file.

EXTRACT NAME

USERID

EXTTRAIL

TABLE
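A minimal extract parameter file sketch using those parameters (all names are hypothetical):
EXTRACT extfin
USERID ggadmin, PASSWORD oracle
EXTTRAIL ./dirdat/et
TABLE scott.*;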

What are macros?


A macro is an easier way to build your parameter file. Once a macro is written, it can be called
from different parameter files. Common parameters like username/password and other
parameters can be included in these macros. A macro can live in another parameter file or in a
library.

Where can macros be invoked?


The macros can be called from the following parameter files.

Manager

Extract

Replicat

GLOBALS

How is a macro defined?


A macro statement consists of the following.

Name of the Macro

Parameter list

Macro body

Sample:
MACRO #macro_name
PARAMS (#param1, #param2)
BEGIN
<macro_body>
END;
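For example, a macro defined as above is invoked by name in a parameter file, passing actual
values (the values here are hypothetical):
#macro_name (value1, value2)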

I want to configure multiple extracts to write to the same exttrail file? Is this
possible?
Only one Extract process can write to a given exttrail at a time, so you can't configure multiple
extracts to write to the same exttrail.

What type of Encryption is supported in Oracle Goldengate?


Oracle GoldenGate provides 3 types of encryption.

Data encryption using Blowfish.

Password Encryption.

Network Encryption.

What are the different password encryption options available with OGG?
You can encrypt a password in OGG using:

the Blowfish algorithm and

the Advanced Encryption Standard (AES) algorithm

What are the different encryption levels in AES?


You can encrypt the password/data using AES with three different key lengths:
a) 128-bit
b) 192-bit
c) 256-bit

Is there a way to check the syntax of the commands in the parameter file without
actually running the GoldenGate process
Yes, you can place the SHOWSYNTAX parameter in the parameter file and start the process. If
there is any error you will see it.

How can you increase the maximum size of the read operation into the buffer
that holds the results of the reads from the transaction log?
If you are using the Classic Extract you may use the TRANLOGOPTIONS ASMBUFSIZE
parameter to control the read size for ASM databases.

What information can you expect when there is data in the discard file?
When data is discarded, the discard file can contain:
1. Discard row details
2. Database Errors
3. Trail file number

What command can be used to switch writing the trail data to a new trail file?
You can use the following command to write the trail data to a new trail file.
SEND EXTRACT ext_name, ROLLOVER

How can you determine if the parameters for a process were recently changed?
Whenever a process is started, the parameters in the .prm file for the process are written to the
process REPORT. You can look at the older process reports to view the parameters which were
used to start up the process. By comparing the older and the current reports you can identify the
changes in the parameters.

List a few parameters which may help improve the Oracle GoldenGate Replicat performance?

The parameters below can be used to improve the Replicat performance:

BATCHSQL

GROUPTRANSOPS

INSERTAPPEND

What are the areas to monitor in Oracle Goldengate Replication?


The lag and checkpoint latency of the Extract, pump and Replicat processes are normally
monitored.
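Typical commands for this kind of monitoring (process names are hypothetical):
GGSCI> INFO ALL
GGSCI> LAG EXTRACT extfin
GGSCI> SEND REPLICAT repfin, STATUS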

What is the PASSTHRU mode used for?


In pass-through mode, the Extract process does not look up the table definitions, either from the
database or from a data definitions file. This increases the throughput of the data pump, as the
object definition look-up is bypassed.
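A minimal data pump parameter file sketch using PASSTHRU (host and process names are
hypothetical):
EXTRACT pmpfin
PASSTHRU
RMTHOST tgthost, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE scott.*;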

What are the most common reasons for an Extract process slowing down?
Some of the possible reasons are:

Long-running batch transactions on a table.

Insufficient memory on the Extract side. Uncommitted long-running transactions can
cause a transaction to be written to a temporary area (dirtmp) on disk. Once the transaction
is committed, it is read from the temporary location on the file system and converted to
trail files.

A slow or overburdened network.

What are the most common reasons for the Replicat process slowing down?
Some of the possible reasons are:

A large volume of transactions on a particular table.

Blocking sessions on the destination database, where non-GoldenGate transactions are
taking place on the same table as the Replicat processing.

If using DBFS, writing and reading of trail files may be slow if SGA parameters are not
tuned.

For slow Replicats, latency may be due to missing indexes on the target.

The Replicat having to process updates and deletes of rows in very large tables.

My extract was running fine for a long time. All of a sudden it went down. I
started the extract processes after 1 hour. What will happen to my committed
transactions that occurred in the database during last 1 hour?
The OGG checkpoint mechanism provides fault tolerance and makes sure that each transaction
marked as committed is captured, and captured only once. Even if the extract went down
abnormally, when you start the process again it reads the checkpoint file to provide read
consistency and transaction recovery.

I have configured Oracle GoldenGate integrated capture process using the


default values. As the data load increases I see that extract starts lagging behind
by an hour (or more) and database performance degrades. How you will resolve
this performance issue?
When operating in integrated capture mode, you must make sure that you have assigned
sufficient memory to STREAMS_POOL_SIZE. An undersized STREAMS_POOL_SIZE, or
limiting the streams pool to a specific amount of memory, can cause trouble.
The best practice is to allocate STREAMS_POOL_SIZE at the instance level and allocate the
maximum SGA at the GG process level, as below:
SQL> alter system set STREAMS_POOL_SIZE=3G;
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 2048, PARALLELISM 4)

Why would you segregate the tables in a replication configuration? How would
you do it?
In OGG you can configure replication at the schema level or at the table level using the
TABLE parameter of the extract and the MAP parameter of the replicat.
For replicating the entire database you can list all the schemas in the database in the
extract/replicat parameter files.
Depending on the amount of redo generation you can split the tables of a schema into multiple
extracts and replicats to improve the performance of data replication. Alternatively, you can also
group a set of tables in the configuration by application functionality.
You may also need to move tables which have long-running transactions into a separate
extract process to eliminate lag on the other tables.
Let's say that you have a schema named SCOTT and it has 100 tables.
Out of these hundred tables, 50 tables are heavily utilized by the application.

To improve the overall replication performance you create 3 extracts and 3 replicats as follows:
Ext_1/Rep_1 -> 25 tables
Ext_2/Rep_2 -> 25 tables
Ext_3/Rep_3 -> 50 tables
Ext_1/Rep_1 and Ext_2/Rep_2 contain 25 tables each which are heavily utilized or generate
more redo.
Ext_3/Rep_3 contains all the other 50 tables which are least used.

Troubleshoot:
How can we report on long running transactions?

The WARNLONGTRANS parameter can be specified with a threshold time that a transaction
can be open before Extract writes a warning message to the ggs error log.
Example: WARNLONGTRANS 1h, CHECKINTERVAL 10m
What command can be used to view the checkpoint information for the extract
process?

Use the following command to view the Extract checkpoint information.


GGSCI> info extract <extract_name>, showch
GGSCI> info extract ext_fin, showch
How is the RESTARTCOLLISION parameter different from HANDLECOLLISIONS?

The RESTARTCOLLISION parameter is used to skip ONE transaction only in a situation when
the GoldenGate process crashed and performed an operation (INSERT, UPDATE & DELETE) in
the database but could not checkpoint the process information to the checkpoint file/table. On
recovery it will skip the transaction and AUTOMATICALLY continue to the next operation in
the trail file.
When using HANDLECOLLISIONS, GoldenGate will continue to overwrite and process
transactions until the parameter is removed from the parameter files and the processes restarted.
How do you view the data which has been extracted from the redo logs?

The logdump utility is used to open the trail files and look at the actual records that have been
extracted from the redo or the archive log files.
What does the RMAN-08147 warning signify when your environment has a
GoldenGate Capture Processes configured?

This occurs when the V$ARCHIVED_LOG.NEXT_CHANGE# is greater than the SCN required
by the GoldenGate Capture process and RMAN is trying to delete the archived logs. The
RMAN-08147 error is raised when RMAN tries to delete these files.
When the database is open it uses the DBA_CAPTURE values to determine the log files required
for mining. However, if the database is in the mount state, the
V$ARCHIVED_LOG.NEXT_CHANGE# value is used.
See MetaLink note: 1581365.1
How would you look at a trail file using logdump, if the trail file is Encrypted?

You must use the DECRYPT option before viewing data in the Trail data.
List few useful Logdump commands to view and search data stored in OGG trail
files.

Below are a few logdump commands used on a daily basis for displaying or analyzing data
stored in a trail file.
$ ./logdump (connect to the logdump prompt)
logdump> open /u01/app/oracle/dirdat/et000001 (open a trail file in logdump)
logdump> fileheader on (view the trail file header)
logdump> ghdr on (view the record header with the data)
logdump> detail on (view column information)
logdump> detail data (display HEX and ASCII data values for the column list)
logdump> reclen 200 (control how much record data is displayed)
logdump> pos 0 (go to the first record)
logdump> next (or simply n) (move from one record to the next in sequence)
logdump> count (count the records in the trail)

MISC:
Why should I upgrade my GoldenGate Extract processes to Integrated Extract?
Oracle is able to provide faster integration of new database features by moving the
GoldenGate extraction processes into the database. Due to this, the GoldenGate Integrated
Extract has a number of features, like support for compressed tables, which are not available in
the traditional Extract. Going forward, preference should be given to creating new Extracts as
Integrated Extracts and to upgrading existing traditional Extracts.

What is the minimum Database version which supports Integrated Delivery?


Oracle 11.2.0.4 is the minimum required database version which supports both Integrated
Extract and Integrated Replicat.

What databases support GoldenGate Integrated Delivery?


Integrated Delivery is only available for Oracle databases.

With Integrated Delivery, where can we look for the performance stats?
With 12c, performance statistics are collected in the AWR repository and the data is available
via the normal AWR reports.

What are the steps required to add a new table to an existing replication setup?
The steps to be executed would be the following:

Include the new table in the Extract & pump process parameter files.

Obtain the starting database SCN and copy the source table data to the target database.

Start the Replicat on the target at the source database SCN point.
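A hedged sketch of those steps (all names and the SCN are hypothetical):
GGSCI> DBLOGIN USERID ggadmin, PASSWORD oracle
GGSCI> ADD TRANDATA scott.newtab
-- add TABLE scott.newtab; to the Extract and pump parameter files, then restart them
SQL> SELECT current_scn FROM v$database;
-- copy the table to the target as of that SCN (e.g. expdp with FLASHBACK_SCN)
GGSCI> START REPLICAT repfin, ATCSN 4351823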

What does the GoldenGate CSN equate to, in the Oracle Database?
It is the equivalent of the Oracle database SCN transaction number.

How do you create a CSV file from the Extracted data?


You will have to use the CSV Flat File Adaptor to create CSV files. The source would be the
extract trail files which use the configuration of the adaptor settings to generate CSV files.

What is the purpose of the DEFGEN utility?


When the source and the target schema objects are not the same (different DDLs) the Replicat
process needs to know the source definition of the objects. The output from the DEFGEN utility
is used in conjunction with the trail data to determine which column value in the trail belongs to
which column.
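A usage sketch (file, schema and table names are hypothetical): a defgen parameter file such as
DEFSFILE ./dirdef/source.def
USERID ggadmin, PASSWORD oracle
TABLE scott.emp;
is run as $ ./defgen paramfile dirprm/defgen.prm, and the generated definitions file is then
referenced on the target with SOURCEDEFS ./dirdef/source.def in the Replicat parameter file.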

We want to set up one-way data replication for my online transaction processing
application. However, there are compressed tables in the environment. Please
suggest how I can achieve it.
You must use OGG 11.2 and configure the GoldenGate Integrated Capture process to extract
data from compressed tables.
Note: OGG versions prior to 11.2 don't support extracting data from compressed tables.

We want to set up one-way data replication using Integrated Extract for my
Oracle database running 10.2.0.4. Please suggest the best architecture to achieve it.
Oracle GoldenGate Integrated Capture supports Oracle databases 10.2 and higher. But if
you are running Oracle database 10.2 and want to use the Oracle GoldenGate Integrated Capture
process, then you must configure the downstream topology.

I am migrating my Oracle database from non-Exadata to Exadata, but my source
OGG software and target OGG software versions are different (say the source is
running OGG 11.1 and the target is running OGG 11.2). How can I configure my
OGG processes to work in such a configuration?
It is recommended that all instances of Oracle GoldenGate be the same version to take advantage
of the new functionality, but this is not possible all the time and is not required. In this scenario,
OGG provides a parameter called FORMAT RELEASE which allows customers to use
different versions of Oracle GoldenGate Extract, trail files, and Replicat together.
Example: RMTTRAIL /u01/app/oracle/dirdat, FORMAT RELEASE 11.1
Note: The input and output trails of a data pump must have the same trail file version.

What are the different OGG Initial load methods available?


OGG has 2 functions: one is online data replication and the second is initial loading.
If you are replicating data between 2 homogeneous databases then the best method is to use a
database-specific method (Exp/Imp, RMAN, transportable tablespaces, physical standby and so
on). Database-specific methods are usually faster than the other methods.
If you are replicating data between 2 heterogeneous databases, or your replication involves
complex transformations, then a database-specific method can't be used. In those cases you can
always use Oracle GoldenGate to perform the initial load.
Within Oracle GoldenGate you have 4 different ways to perform the initial load:

Direct Load: faster, but doesn't support LOB data types (12c includes support for LOBs)

Direct Bulk Load: uses the SQL*Loader API for Oracle and SSIS for MS SQL Server

File to Replicat: fast, but the rmtfile limit is 2GB. If the table can't fit in 1 rmtfile you
can use MAXFILES, but the Replicat needs to be registered on the target OGG home to
read the rmtfiles from the source.

File to database utility: depending on the target database, uses SQL*Loader for Oracle,
SSIS for MS SQL Server, and so on.

Oracle GoldenGate initial loading reads data directly from the source database tables without
locking them. So you don't need downtime, but it will use database resources and can cause
performance issues. Take extra precaution to perform the initial load during non-peak times so
that you don't run into resource contention.

I have a table called TEST on source and target with same name, structure and
data type but in a different column order. How can you setup replication for this
table?
OGG by default assumes that the source and target tables are identical. A table is said to be
identical if and only if the table structure, data types and column order are the same on both the
source and the target.
If the tables are not identical you must use the SOURCEDEFS parameter pointing to the source
table definitions and the COLMAP parameter to map the columns from source to target.

What is the best practice to delete the extract files in OGG?


Use the Manager process to delete the extract trail files after they are consumed by the
extract/replicat process:
PURGEOLDEXTRACTS /u01/app/oracle/dirdat/et*, USECHECKPOINTS, MINKEEPHOURS 2

What parameters can be used to configure Oracle GoldenGate to extract data
from archived redo log files only?
Use the TRANLOGOPTIONS ARCHIVEDLOGONLY option in the parameter file.

I have a one-way replication setup. The system administration team wants to
apply an OS patch to both the OGG source host and the target servers. Provide
the sequence of steps that you will carry out before and after applying this patch.
Procedure:

Check to make sure that the Extract has processed all the records in the data source
(online redo/archive logs):
GGSCI> send extract <extract_name>, logend
(The above command should print YES.)

Verify that the extract, pump and replicat have zero lag:

GGSCI> send extract <extract_name>, getlag
GGSCI> send extract <pump_name>, getlag
GGSCI> send replicat <replicat_name>, getlag
(The above commands should print "At EOF, no more records to process.")

Stop all application and database activity.

Make sure that the primary extract is reading the end of the redolog and that there is no
LAG at all for the processes.

Now proceed with stopping the processes:

Source:
1. Stop the primary extract
2. Stop the pump extract
3. Stop the manager process
4. Make sure all the processes are down.
Target:
1. Stop replicat process
2. Stop mgr
3. Make sure that all the processes are down.
4. Proceed with the maintenance
5. After the maintenance, proceed with starting up the processes:
Source:
1. Start the manager process
2. Start the primary extract

3. Start the pump extract


(Or simply start all the extract processes with GGSCI> start extract *)
4. Make sure that all the processes are up.
Target:
1. Start the manager process
2. Start the replicat process.
3. Make sure that all the processes are up.

What are the basic resources required to configure Oracle GoldenGate high
availability solution with Oracle Clusterware?
There are 3 basic resources required:

Virtual IP

Shared storage

Action script

How would you comment out a line in the parameter file?


You can use two hyphens (--) at the start of a line, or the COMMENT keyword, to comment out
a line.

12c:
What are some of the key features of GoldenGate 12c?
The following are some of the more interesting features of Oracle GoldenGate 12c:

Support for Multitenant Database

Coordinated Replicat

Integrated Replicat Mode

Use of Credential store

Use of Wallet and master key

Trigger-less DDL replication

Automatic thread adjustment when a RAC node fails or starts

Supports RAC PDML Distributed transaction

RMAN Support for mined archive logs

What are the installation options available in OGG 12c?


You can install Oracle GoldenGate 12c in 2 ways:
1) Interactive installation with the OUI graphical interface
2) Silent installation with the OUI command-line interface

What is a Credential Store in OGG 12c?


The OGG Credential Store manages encrypted passwords and USERIDs that are used to interact
with the local database, and associates them with an alias. Instead of specifying the actual
USERID and password in a command or a parameter file, you can use the alias. The Credential
Store is implemented as an auto-login wallet within the Oracle Credential Store Framework
(CSF).

How do you configure the Credential Store in OGG 12c?


Steps to configure the Oracle Credential Store are as follows:
1) By default the Credential Store is located under the dircrd directory.
If you want to specify a different location you can specify the CREDENTIALSTORELOCATION
parameter in the GLOBALS file.
Example: CREDENTIALSTORELOCATION /u01/app/oracle/OGG_PASSWD
2) Go to the OGG home and connect to GGSCI:
cd $OGG_HOME
./ggsci
GGSCI>

What command is used to create the credential store?


ADD CREDENTIALSTORE

How do you add credentials to the credential store?

ALTER CREDENTIALSTORE ADD USER userid
[PASSWORD password]
[ALIAS alias]
[DOMAIN domain]
Example: GGSCI> ALTER CREDENTIALSTORE ADD USER GGS@orcl, PASSWORD
oracle ALIAS extorcl DOMAIN OracleGoldenGate

How do you retrieve information from the Oracle Credential Store?


GGSCI> INFO CREDENTIALSTORE
OR
GGSCI> INFO CREDENTIALSTORE DOMAIN OracleGoldenGate

What are the different data encryption methods available in OGG 12c?
In OGG 12c you can encrypt data with the following 2 methods:
1) Encrypt data with a master key and wallet
2) Encrypt data with ENCKEYS

How do you enable Oracle GoldenGate for Oracle database 11.2.0.4?


The database services required to support Oracle GoldenGate capture and apply must be enabled
explicitly for an Oracle 11.2.0.4 database. This is required for all modes of Extract and Replicat.
To enable Oracle GoldenGate, set the following database initialization parameter. All instances in
Oracle RAC must have the same setting.
ENABLE_GOLDENGATE_REPLICATION=true
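For example:
SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;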

How does the Replicat work in Coordinated Mode?


In Coordinated Mode the Replicat operates as follows:
1. Reads the Oracle GoldenGate trail.
2. Performs data filtering, mapping, and conversion.
3. Constructs SQL statements that represent source database DML or DDL transactions (in
committed order).
4. Applies the SQL to the target through the SQL interface that is supported for the given
target database, such as ODBC or the native database interface.

What is the difference between Classic and Coordinated Replicat?

The difference between classic mode and coordinated mode is that Replicat is multi-threaded in
coordinated mode. Within a single Replicat instance, multiple threads read the trail
independently and apply transactions in parallel. Each thread handles all of the filtering,
mapping, conversion, SQL construction, and error handling for its assigned workload. A
coordinator thread coordinates the transactions across threads to account for dependencies
among the threads.

How do you create a COORDINATED REPLICATE in OGG 12c?


You can create the COORDINATED REPLICATE with the following OGG Command:
ADD REPLICAT rfin, COORDINATED MAXTHREADS 50, EXTTRAIL dirdat/et

If you have created a Replicat process in OGG 12c and forgot to specify the
DISCARDFILE parameter, what will happen?
Starting with OGG 12c, if you don't specify a DISCARDFILE, the OGG process generates a
discard file with default values whenever the process is started with the START command
through GGSCI.

Is it possible to start OGG EXTRACT at a specific CSN?


Yes, Starting with OGG 12c you can now start Extract at a specific CSN in the transaction log or
trail.
Example:
START EXTRACT fin ATCSN 12345
START EXTRACT finance AFTERCSN 67890

Can you talk about parallelism for replicats in Oracle 12c?


Yes. The database parallel processes are leveraged on the target database for automatic
dependency-aware parallel apply.
This key enhancement makes it very easy to maintain throughput and reduce latency.
Previously the Replicat process had to be manually broken down into multiple threads.

What are the changes at the pump level in 12c when using integrated delivery?
The trail generated by the extract process is read by Integrated Delivery and Logical Change
Records (LCRs) are created. These LCRs are then shipped over the network to the destination
database.

What is the difference between Integrated delivery and Coordinated delivery?

Integrated delivery is the new 12c mechanism for sending the extract trail to the destination in
an Oracle environment. Coordinated delivery is the new mechanism for sending data between
non-Oracle databases.

RMAN:
Interview Questions & Answer on RMAN
What is RMAN?
Recovery Manager (RMAN) is a utility that can manage your entire Oracle backup and recovery
activities.
What is the difference between using recovery catalog and control file?
When a new incarnation happens, the old backup information in the control file is lost; it is
preserved in the recovery catalog.
In the recovery catalog we can store scripts.
The recovery catalog is central and can hold information about many databases.
Can we use the same target database as the catalog?
No. The recovery catalog should not reside in the target database (the database to be backed up),
because the catalog must survive the loss of the target database; otherwise the database can't be
recovered.
How do you know how much of an RMAN task has been completed?
By querying V$RMAN_STATUS or V$SESSION_LONGOPS.
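A commonly used sketch against V$SESSION_LONGOPS:
SQL> SELECT sid, serial#, opname, ROUND(sofar/totalwork*100, 2) pct_done
     FROM v$session_longops
     WHERE opname LIKE 'RMAN%' AND totalwork <> 0 AND sofar <> totalwork;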
From where do the LIST and REPORT commands get their input?
Both commands query the V$ views and the recovery catalog views: V$BACKUP_FILES, or
many of the recovery catalog views such as RC_DATAFILE_COPY or RC_ARCHIVED_LOG.
Command to delete archive logs older than 7 days?
RMAN> delete archivelog all completed before sysdate-7;

How many times does oracle ask before dropping a catalog?


The default is two times: once for the actual command and once for confirmation.
How do you view the current defaults for the database?
RMAN> show all;
What is the use of crosscheck command in RMAN?
Crosscheck is useful to check whether the catalog information matches the OS-level
information. This command only updates repository records with the status of the backups.
e.g. If user removes archived logs from disk with an operating system command, the repository
still indicates that the logs are on disk, when in fact they are not.

What are the differences between crosscheck and validate commands?


Validate command is to examine a backup set and report whether it can be restored. RMAN
scans all of the backup pieces in the specified backup sets and looks at the checksum to verify
that the contents are intact so that backup can be successfully restored if necessary.
Crosscheck command is to verify the status of backups and copies recorded in the RMAN
repository against media such as disk or tape. The crosscheck command only processes files
created on the same device type as the channel running crosscheck.
Which one is better, a differential (incremental) backup or a cumulative (incremental) backup?
A differential backup, which backs up all blocks changed after the most recent incremental
backup at level 1 or 0
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
A cumulative backup, which backs up all blocks changed after the most recent incremental
backup at level 0
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
Cumulative backups are preferable to differential backups when recovery time is more important
than disk space, because during recovery each differential backup must be applied in succession.
Use cumulative incremental backups instead of differential, if enough disk space is available to
store cumulative incremental backups.

This is the command for taking a Level 0 backup.


RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;
What is the difference between backup set and backup piece?
Backup set is logical and backup piece is physical.
What is the RMAN command for creating a standby database?
RMAN> duplicate target database for standby;
You lose one datafile and the DB is running in ARCHIVELOG mode. You have a
full database backup that is 1 week/day old and don't have a backup of this (newly created)
datafile. How do you restore/recover the file?
Create the data file and recover it:
SQL> alter database create datafile '/u01/app/oracle/oradata/xyz.dbf' size 2G;
RMAN> recover datafile <file_id>;
What is obsolete backup & expired backup?
A status of expired means that the backup piece or backup set is not found in the backup
destination.
A status of obsolete means the backup piece is still available, but it is no longer needed. The
backup piece is no longer needed since RMAN has been configured to no longer need this piece
after so many days have elapsed, or so many backups have been performed.
What is the difference between hot backup & RMAN backup?
For a hot backup, we have to put the database in begin backup mode, then take the backup.
RMAN won't put the database in backup mode.

How do you catalog a manual/user-managed backup in RMAN (recovery catalog)?


By using the CATALOG command.
RMAN> CATALOG START WITH '/tmp/backup.ctl';

What are the Architectural components of RMAN?


1. RMAN Executables
2. Server process
3. Channels
4. Target database
5. Recovery catalog database (optional)
6. Media management Layer (optional)
7. Backups, backup sets and backup pieces

What are channels?


A channel is an RMAN server process started when there is a need to communicate with an I/O
device, such as a disk or a tape. A channel is what reads and writes RMAN backup files. It is
through the allocation of channels that you govern I/O characteristics:

The type of I/O device being read or written to, either a disk or an sbt_tape

The number of processes simultaneously accessing an I/O device

The maximum size of files created on I/O devices

The maximum rate at which database files are read

The maximum number of files open at a time
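For illustration, channels can be allocated manually inside a RUN block (the paths here are
hypothetical):
RMAN> RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK FORMAT '/u01/backup/%U';
  ALLOCATE CHANNEL c2 DEVICE TYPE DISK FORMAT '/u02/backup/%U';
  BACKUP DATABASE;
}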

Why is the catalog optional?


Because RMAN manages backup and recovery operations, it requires a place to store necessary
information about the database. RMAN always stores this information in the target database
control file. You can also store RMAN metadata in a recovery catalog schema contained in a
separate database. The recovery catalog schema must be stored in a database other than the target
database.
What is a Backup set?
A logical grouping of backup files (the backup pieces) that are created when you issue an
RMAN backup command. A backup set is RMAN's name for a collection of files associated
with a backup. A backup set is composed of one or more backup pieces.

What are the benefits of using RMAN?


1. Incremental backups that only copy data blocks that have changed since the
last backup.
2. Tablespaces are not put in backup mode, thus there is no extra redo log
generation during online backups.
3. Detection of corrupt blocks during backups.
4. Parallelization of I/O operations.
5. Automatic logging of all backup and recovery operations.
6. Built-in reporting and listing commands.

What are the various reports available with RMAN?


RMAN> list backup;
RMAN> list archivelog all;
In the catalog database, if some of the blocks are corrupted due to a system crash, how will you
recover?
Using the RMAN BLOCKRECOVER command.
How do you enable the autobackup for the controlfile using RMAN?
Issue this command at the RMAN prompt:
RMAN> configure controlfile autobackup on;
We can also configure the controlfile backup format:
RMAN> configure controlfile autobackup format for device type disk to
'$HOME/BACKUP/RMAN/%F.bkp';
How do you identify all the target databases that are being backed up with RMAN?
You don't have any view to identify whether a database is backed up or not. The only option is
to connect to the target database and run "list backup"; this will give you the backup information
with dates and times.

How do you identify block corruption in an RMAN database? How do you fix it?
Using the V$DATABASE_BLOCK_CORRUPTION view you can find which blocks are
corrupted:
SQL> select file#, block# from v$database_block_corruption;
file# block#
2 507
The above block is corrupted.
To recover the block, connect to RMAN and use this command:
RMAN> blockrecover datafile 2 block 507;
The above command recovers block 507.
Now verify it:
RMAN> blockrecover corruption list;
How do you clone a database using RMAN? When do you use the crosscheck command?
Crosscheck is used to check whether backup pieces, proxy copies or disk copies still exist.
Two commands are available in RMAN to clone a database:
1) Duplicate
2) Restore
List some of the RMAN catalog view names which contain the catalog information?
RC_DATABASE_INCARNATION
RC_BACKUP_COPY_DETAILS
RC_BACKUP_CORRUPTION
RC_BACKUP_DATAFILE_SUMMARY
How do you install the RMAN recovery catalog?
Steps to be followed:
1) Create a connection string (TNS entry) for the catalog database.
2) In the catalog database, create a new user (or use an existing user) and grant that user the
RECOVERY_CATALOG_OWNER privilege.
3) Log in to RMAN with the connection string:
a) export ORACLE_SID
b) rman target / catalog <catalog_user>@<connection_string>
4) RMAN> create catalog;
5) RMAN> register database;
What is the difference between physical and logical backups?
In Oracle, a logical backup is taken using either traditional Export/Import or the newer
Data Pump, whereas a physical backup is a backup of the physical OS database-related
files.
What is RAID? What is RAID0? What is RAID1? What is RAID 10?
RAID: a redundant array of independent disks
RAID0: concatenation and striping
RAID1: mirroring
RAID10: striping plus mirroring (a stripe of mirrors)
How to enable Fast Incremental Backup to backup only those data blocks that have
changed?
SQL> ALTER DATABASE enable BLOCK CHANGE TRACKING;
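You can verify that change tracking is enabled with:
SQL> SELECT status, filename FROM v$block_change_tracking;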
How do you set the flash recovery area?

SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 100G;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u10/oradata/school';
What is auxiliary channel in RMAN? When do you need this?
An auxiliary channel is a link to auxiliary instance. If you do not have automatic channels
configured, then before issuing the DUPLICATE command, manually allocate at least one
auxiliary channel within the same RUN command.
How do you use the V$RECOVERY_FILE_DEST view to display information regarding
the flash recovery area?
SQL> SELECT name, space_limit, space_used, space_reclaimable, number_of_files
FROM v$recovery_file_dest;
How can you display warning messages?
SQL> SELECT object_type, message_type, message_level, reason, suggested_action
FROM dba_outstanding_alerts;
How do you backup the entire database?
RMAN> BACKUP DATABASE;
How do you backup an individual tablespace?
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;
RMAN> BACKUP TABLESPACE system;
How do you backup datafiles and control files?
RMAN> BACKUP DATAFILE 3;
RMAN> BACKUP CURRENT CONTROLFILE;
How do you use a fast recovery without restoring all backups from their backup location to the
location specified in the controlfile?
RMAN> SWITCH DATABASE TO COPY;
