
Page 1 of 134

DBA Interview Questions with Answers Part1


What are four common errors found in an alert.log?
If we face any issue with the database while performing an activity, we should check the alert
log file in the dump destination. Four common errors found in the alert.log are:
deadlock errors (ORA-00060), Oracle internal errors (ORA-00600), backup and recovery errors, and snapshot too old
errors (ORA-01555).
What is PCT Free/PCT Used/PCT increase parameter in segment? What is growth factor?
PCTFREE is a block storage parameter that specifies how much space should be left free in each database block
for future updates (for example, a name previously stored as Smith may later be updated to
Smith Taylor). If PCTFREE is 10, Oracle adds new rows to a block until it is 90% full,
keeping 10% free for future updates.
PCTUSED specifies, as a percentage, when a block can again accept inserts: a block returns to the free
list (the list of blocks in a segment ready for insert operations) only when the used space in the block falls below
PCTUSED.
Suppose PCTUSED is 40 and PCTFREE is 20. Rows can be inserted until the block is 80% full.
Now suppose the used space is 60% and someone performs a delete on a row in the same
block, bringing the used space down to 50%. No new rows can be inserted into that block until
the used space drops below 40%, i.e. below PCTUSED.
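These settings can be sketched in a table definition like the following (table and tablespace names are invented for illustration):

```sql
-- Reserve 20% of each block for future row growth; return a block to
-- the free list only when deletes drop its used space below 40%.
CREATE TABLE emp_history (
  empno NUMBER,
  ename VARCHAR2(50)
)
PCTFREE 20
PCTUSED 40
TABLESPACE users;
```

Note that PCTUSED is ignored in tablespaces that use automatic segment space management (ASSM); it only applies with manual segment space management.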
What is dump destination? What are bdump, cdump and udump?
The dump destination is the location where the trace files for the Oracle processes are written.
bdump --> background process trace files plus the alert_SID.log file
cdump --> core dump files, udump --> user process trace files, adump --> audit files
These destinations contain useful information related to process failures.
UDUMP (user_dump_dest) specifies the directory where all user error logs (trace files) will be placed.
BDUMP (background_dump_dest) specifies the directory where all background process error logs (trace files) will be
placed.
CDUMP (core_dump_dest) specifies the directory where all OS-level core dump files will be placed.
The default location is $ORACLE_BASE/admin/<SID>.
SQL>show parameters dump_dest;
It will show where each dump directory is currently located. You can change these
parameters in init.ora, or persistently with ALTER SYSTEM if the instance uses an spfile (which can be created from the pfile).
What will you do if in any condition you do not know how to troubleshoot the error at all and
there are no seniors or your co-workers around?
We need to find where the error is occurring. Divide the code and check each part for
correctness, part by part, until you find the part that is wrong -- this is debugging.
Also search forums and the documentation for the error code or similar symptoms, form a plan, and submit it to your supervising
DBA if you are not authorized to carry it out yourself.
I am getting error "No Communication channel" after changing the domain name? What is the
solution?
The question does not say where the Oracle database resides. If the database
resides on your local machine, then the domain name must be updated in the tnsnames.ora file
(the one under the network/admin directory). If you are accessing a remote database, no
change to tnsnames.ora is required; just verify connectivity with tnsping against the database service
name. Also change the domain name in the NAMES.DEFAULT_DOMAIN parameter of the sqlnet.ora file.
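The sqlnet.ora parameter in question looks like this (the domain value here is purely illustrative):

```
# sqlnet.ora -- appended to unqualified service names at connect time
NAMES.DEFAULT_DOMAIN = newdomain.com
```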



You have taken an import of a table into a database and got an integrity constraint violation
error. How are you going to resolve it?
Import the table with constraints=n; once the table has been imported, create the
constraints on it manually.
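As a sketch, the classic imp utility accepts constraints=n on the command line (user, password and file name below are placeholders):

```
$ imp scott/tiger file=emp.dmp tables=emp constraints=n
```

Afterwards, recreate the constraints with ALTER TABLE ... ADD CONSTRAINT statements.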
What is the most important action a DBA must perform after changing the database from
NOARCHIVELOG TO ARCHIVELOG?
First switch the database into ARCHIVELOG mode (this must be done with the database mounted
but not open), then immediately take an offline backup of the whole database (datafiles, controlfile and redo log
files) -- a backup taken while in NOARCHIVELOG mode can no longer be used for media recovery.
Also make sure archiving is actually working and the archive destination has free space;
otherwise the database halts when it is unable to archive a redo log before reuse.
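The usual command sequence for the switch is along these lines (a sketch; verify the mode afterwards):

```sql
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST;  -- confirm "Archive Mode" and the archive destination
```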
Show one instance when you encountered an error in the alert log and how you overcame it.
What actions did you take to overcome that error?
Oracle writes errors to the alert log file. Depending upon the error, corrective action needs to be taken:
1) Deadlock error: take the trace file from the user dump destination and analyse it for the error.
2) ORA-01555 snapshot error: check the query, try to fine-tune it, and check the undo size.
3) Unable to extend segment: check the tablespace size and, if required, add space to the tablespace
with 'alter database datafile ... resize' or 'alter tablespace ... add datafile'.
What is Ora-1555 Snapshot too Old error? Explain in detail?
Oracle rollback segments (more recently, undo) hold a copy of data before it was modified, and they
work in a round-robin fashion, writing and then eventually overwriting the entries once the
changes are committed.
They are needed to provide read consistency (a consistent set of data at a point in time), to allow a
process to abandon or roll back its changes, and for database recovery.
Here's a typical scenario:
User A opens a query to fetch every row from a billion-row table. If User B updates and commits the
last row of that table, a rollback entry is created so User A can still see the data as it
was before the update.
Other users are busily updating rows in the database, and this in turn generates rollback which may
eventually cause the entry needed by User A to be overwritten (after all, User B did commit the
change, so it's OK to overwrite the rollback segment). Perhaps 15 minutes later the query is still
running, and when User A finally fetches the last row of the billion-row table the rollback entry is gone.
He gets ORA-01555: snapshot too old (rollback segment too small).
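With automatic undo management, the usual mitigation is to raise the retention target and make sure the undo tablespace can honour it (the values and datafile path below are illustrative, not a recommendation):

```sql
-- Ask Oracle to keep committed undo for at least 2 hours (7200 seconds)
ALTER SYSTEM SET undo_retention = 7200;

-- Give the undo tablespace enough room to honour that retention
ALTER DATABASE DATAFILE '/u01/oradata/undotbs01.dbf' RESIZE 2G;
```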
I have applied the following commands. Now what will happen -- will the database give an
error, or will it work?
Shutdown abort;
Startup;
The database will start without error, but all uncommitted data will be lost: SHUTDOWN ABORT kills all
sessions and transactions and does not flush the buffers, because it
shuts down the instance immediately without committing.
There are four ways to shut down the database:
1) Shutdown immediate, 2) Shutdown normal, 3) Shutdown transactional, 4) Shutdown abort
When the database is shut down by the first three methods a checkpoint takes place, but with the
abort option no checkpoint is enforced; the instance simply shuts down without waiting for any users to
disconnect.
What is a mutating trigger? In single-user mode we got a mutating error; as a DBA how will you
resolve it?
A mutating-table error (ORA-04091) occurs when a row-level trigger reads or modifies the same table
the trigger is defined on while that table is being changed by the triggering statement. A common
resolution is to move the logic from a row-level BEFORE trigger into an AFTER statement-level trigger.
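A minimal sketch of the fix (table and trigger names are invented): a row-level trigger on emp may not query emp itself, but an AFTER statement-level trigger fires once the statement is complete and may do so legally.

```sql
-- A row-level trigger doing "SELECT COUNT(*) FROM emp" would raise
-- ORA-04091 (table EMP is mutating). At statement level it is allowed:
CREATE OR REPLACE TRIGGER emp_check
AFTER UPDATE ON emp
DECLARE
  v_cnt NUMBER;
BEGIN
  SELECT COUNT(*) INTO v_cnt FROM emp;  -- legal: the statement has completed
END;
/
```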


Explain the dual table. Is any data internally stored in the dual table? Lots of users are accessing
select sysdate from dual and getting some millisecond differences. If we execute SELECT
SYSDATE FROM EMP; what error will we get, and why?
Dual is a SYS-owned table created during database creation. The dual table consists of a single column
(DUMMY) and a single row with the value 'X'. We will not get any error if we execute select sysdate from scott.emp;
instead sysdate will be treated as a pseudo column and is returned for every row retrieved.
For example, if there are 12 rows in the emp table, the query will return the date 12 times.
As an Oracle DBA, which UNIX files and commands should you be familiar with?
To check the processes: ps -ef | grep pmon, or plain ps -ef
To watch the alert log file: tail -f alert_<SID>.log
To check the CPU usage: top, or vmstat 2 5
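The process check above can be sketched as a one-liner; the bracketed pattern is a common trick to stop grep matching its own command line (on a box without Oracle running this simply prints 0):

```shell
# Count running Oracle pmon processes (one per instance).
# The [p] bracket stops grep from matching the grep process itself.
ps -ef | grep '[p]mon' | wc -l
```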
What is a Database instance?
A database instance, also known as a server, is a set of memory structures and background processes
that access a set of database files. It is possible for a single database to be accessed by multiple
instances (this is the Oracle Parallel Server option).
What are the Requirements of simple Database?
A simple database consists of:
One or more data files, One or more control files, Two or more redo log files, Multiple users/schemas,
One or more rollback segments, One or more Tablespaces, Data dictionary tables, User objects
(table, indexes, views etc.)
The server (instance) that accesses the database consists of:
SGA (database buffer cache, dictionary cache, redo log buffer, shared SQL pool), SMON
(System Monitor), PMON (Process Monitor), LGWR (Log Writer), DBWR (Database Writer), ARCH
(Archiver), CKPT (Checkpoint), RECO (Recoverer), dispatchers, and user processes with their associated PGA
Which process writes data from data files to database buffer cache?
No background process writes data from the datafiles into the buffer cache: blocks are read into the cache by the server (shadow) process serving the user. DBWR works in the opposite direction, writing dirty blocks from the buffer cache to the datafiles.
How to DROP an Oracle Database?
You can do it at the OS level by deleting all the files of the database. The files to be deleted can be
found using:
1) select * from dba_data_files; 2) select * from v$logfile; 3) select * from v$controlfile; 4) archive log
list
5) initSID.ora 6) clean the UDUMP, BDUMP, scripts etc, 7) Cleanup the listener.ora and the
tnsnames.ora. Make sure that the oratab entry is also removed.
Otherwise, go to DBCA and click on delete database.
In Oracle 10g there is a command to drop an entire database:
STARTUP MOUNT EXCLUSIVE RESTRICT;
DROP DATABASE;
In practice a DBA should never drop a database via OS-level commands; use the DBCA GUI utility
(or DROP DATABASE) instead.
How can you determine the total size of the log files?
SQL> SELECT SUM(bytes)/1024/1024 AS size_in_mb FROM v$log;
What is difference between Logical Standby Database and Physical Standby database?
A physical or logical standby database is a database replica created from a backup of a primary
database. A physical standby database is physically identical to the primary database on a block-for-
block basis. It's maintained in managed recovery mode to remain current and can be set to read only;
archive logs are copied and applied.
A logical standby database is logically identical to the primary database. It is updated using SQL
statements




How do you find whether the instance was started with pfile or spfile
1) SELECT name, value FROM v$parameter WHERE name = 'spfile';
This query will return NULL if you are using PFILE
2) SHOW PARAMETER spfile
This command returns NULL in the value column if you are using a pfile and not an spfile
3) SELECT COUNT(*) FROM v$spparameter WHERE value IS NOT NULL;
If the count is non-zero then the instance is using a spfile, and if the count is zero then it is using a
pfile:
SQL> SELECT DECODE(value, NULL, 'PFILE', 'SPFILE') "Init File Type"
FROM sys.v_$parameter WHERE name = 'spfile';
What is full backup?
A full backup is an operating-system backup of all the datafiles, online redo log
files and control files that constitute the Oracle database, plus the parameter file. If you are using RMAN for
backup, then a full backup corresponds to a level 0 incremental backup.
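In RMAN terms, a sketch of the two common forms (PLUS ARCHIVELOG is available from 9i onwards):

```sql
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;   -- level 0: the base of an incremental strategy
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;       -- full backup including the archived logs
```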
While taking a hot backup (begin/end backup), what happens at the back end?
When we take a hot backup (begin backup - end backup), the datafile headers of the
datafiles in the corresponding tablespace are frozen, so Oracle stops updating the datafile headers
but continues to write data into the datafiles. During a hot backup Oracle generates more redo, because
it writes out complete changed blocks (rather than just the changes) to the redo log files.
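A per-tablespace sketch of the procedure (the tablespace name and copy step are illustrative):

```sql
ALTER TABLESPACE users BEGIN BACKUP;
-- copy the tablespace's datafiles at OS level (e.g. with cp) while in backup mode
ALTER TABLESPACE users END BACKUP;
ALTER SYSTEM ARCHIVE LOG CURRENT;  -- archive the redo covering the backup window
```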
Which is the best option used to move database from one server to another serve on same
network and Why?
Import Export, Backup-Restore, Detach-Attach
Import/Export works well if you are dealing with very small databases, and it has the advantage
of being selective and platform-independent. For anything larger, backup and restore is the better
option: if a table has a few million rows, export/import takes minutes to copy what backup and
restore handles in seconds.
What is Different Type of RMAN Backup?
Full backup: during a full backup (level 0), all of the blocks ever used in a datafile are backed up. The
only difference between a level 0 incremental backup and a full backup is that a full backup is never
included in an incremental strategy.
Cumulative backup: during a cumulative incremental backup (level 1 CUMULATIVE), all the blocks changed since the last
level 0 backup are backed up.
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE; # blocks changed since
level 0
Differential backup: during a differential incremental backup, only those blocks that have changed since the most recent
level 1 or level 0 backup are backed up. Incremental backups are differential by
default.
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
Give one method for transferring a table from one schema to another:
There are several possible methods: Export-Import, CREATE TABLE... AS SELECT or COPY.
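The CREATE TABLE ... AS SELECT route is the quickest to sketch (schema names are illustrative; the executing user needs the appropriate grants):

```sql
-- Requires SELECT on scott.emp and the right to create a table in hr's schema
CREATE TABLE hr.emp AS SELECT * FROM scott.emp;
```

Note that CTAS does not carry over indexes, constraints or triggers; recreate those separately if they are needed.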
What is the purpose of the IMPORT option IGNORE? What is its default setting?
The IMPORT IGNORE option tells Import to ignore "object already exists" errors. If it is not specified, the
tables that already exist will be skipped. If it is specified, the error is ignored and the table's data will
be inserted into the existing table. The default value is N.
What happens when the DEFAULT and TEMP tablespace clauses are left out from CREATE
USER statements?
The user is assigned the SYSTEM tablespace as a default and temporary tablespace. This is bad
because it causes user objects and temporary segments to be placed into the SYSTEM tablespace
resulting in fragmentation and improper table placement (only data dictionary objects and the system
rollback segment should be in SYSTEM).
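To avoid this trap, always name both clauses explicitly when creating a user (names here are illustrative):

```sql
CREATE USER app_user IDENTIFIED BY app_pass
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp
  QUOTA UNLIMITED ON users;
```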


What happens if the constraint name is left out of a constraint clause?
The Oracle system will use a default name of the form SYS_Cxxxx, where xxxx is a system-generated
number. This is bad because it makes it harder to track which table the constraint belongs to or what the
constraint does.
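Naming the constraint up front avoids the SYS_Cxxxx problem entirely (a sketch; table and column names are illustrative):

```sql
ALTER TABLE emp
  ADD CONSTRAINT emp_deptno_fk
  FOREIGN KEY (deptno) REFERENCES dept (deptno);
```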
What happens if a Tablespace clause is left off of a primary key constraint clause?
This results in the automatically generated index being placed in the user's default
tablespace. Since this will usually be the same tablespace the table is being created in, it can
cause serious performance problems.
What happens if a primary key constraint is disabled and then enabled without fully specifying
the index clause?
The index is created in the user's default tablespace and all sizing information is lost. Oracle doesn't
store this information as part of the constraint definition, only as part of the index definition;
when the constraint was disabled the index was dropped, and the information is gone.
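When re-enabling the constraint, specify the index clause so the placement and sizing are stated explicitly (tablespace name and storage values below are illustrative):

```sql
ALTER TABLE emp ENABLE PRIMARY KEY
  USING INDEX
  TABLESPACE indx
  STORAGE (INITIAL 1M NEXT 1M);
```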
Using hot backup without being in archive log mode, can you recover in the event of a failure?
Why or why not?
No, you cannot recover. A hot backup is only usable in ARCHIVELOG mode: the datafiles keep
changing while they are being copied, and the archived redo generated during the backup is what
makes them consistent again at recovery time. Without archiving, that redo is overwritten as the
online logs are reused, so the copied files cannot be recovered.
What causes the "snapshot too old" error? How can this be prevented or mitigated?
This is caused by large or long running transactions that have either wrapped onto their own rollback
space or have had another transaction write on part of their rollback space. This can be prevented or
mitigated by breaking the transaction into a set of smaller transactions or increasing the size of the
rollback segments and their extents.
How can you tell if a database object is invalid?
SELECT object_name, object_type, status FROM user_objects WHERE status = 'INVALID';

DBA Interview Questions with Answers Part2
A user is getting an ORA-00942 error yet you know you have granted them permission on the
table, what else should you check?
You need to check that the user has specified the full name of the object (SELECT empid FROM
scott.emp; instead of SELECT empid FROM emp;) or has a synonym that points to that object
(CREATE SYNONYM emp FOR scott.emp;)
A developer is trying to create a view and the database won't let him. He has the
"DEVELOPER" role, which has the "CREATE VIEW" system privilege, and SELECT grants on
the tables he is using. What is the problem?
You need to verify the developer has direct grants on all tables used in the view. You can't create a
stored object with grants given through a role.
If you have an example table, what is the best way to get sizing data for the production table
implementation?
The best way is to analyze the table and then use the data provided in the DBA_TABLES view to get
the average row length and other pertinent data for the calculation. The quick and dirty way is to look
at the number of blocks the table is actually using and ratio the number of rows in the table to its
number of blocks against the number of expected rows.
How can you find out how many users are currently logged into the database? How can you
find their operating system id?
Look at the v$session or v$process views, and check the "logons current" statistic in the v$sysstat
view. If you are on UNIX you can also do a ps -ef | grep oracle | wc -l command, but this only works against a
single-instance installation.

How can you determine if an index needs to be dropped and rebuilt?
Run the ANALYZE INDEX ... VALIDATE STRUCTURE command on the index and then calculate the ratio
LF_BLK_LEN/(LF_BLK_LEN+BR_BLK_LEN): if it isn't near 1.0 (say it has fallen to around 0.7 or below), or
if the ratio BR_BLK_LEN/(LF_BLK_LEN+BR_BLK_LEN) is nearing 0.3, the index should be rebuilt. It
is not an easy call, so I personally suggest consulting an expert before going ahead with a rebuild.
What is tkprof and how is it used?
The tkprof tool is a tuning tool used to determine CPU and execution times for SQL statements. You
use it by first setting timed_statistics to true in the initialization file and then turning on tracing for
either the entire database via the sql_trace parameter or for the session using the ALTER SESSION
command. Once the trace file is generated you run the tkprof tool against the trace file and then look
at the output from the tkprof tool. This can also be used to generate explain plan output.
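A typical invocation sketch (trace and output file names are placeholders; sort=exeela orders statements by elapsed execution time):

```
$ tkprof ora_12345.trc report.txt sort=exeela explain=scott/tiger sys=no
```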
What is Explain plan and how is it used?
The EXPLAIN PLAN command is a tool to tune SQL statements. To use it you must have an
explain_table generated in the user you are running the explain plan for. This is created using the
utlxplan.sql script. Once the explain plan table exists you run the explain plan command giving as its
argument the SQL statement to be explained. The explain plan table is then queried to see the
execution plan of the statement. Explain plans can also be run using tkprof.
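A sketch of the workflow (DBMS_XPLAN.DISPLAY is available from 9i onwards; in older versions query PLAN_TABLE directly):

```sql
EXPLAIN PLAN FOR
  SELECT ename FROM emp WHERE deptno = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```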
How do you prevent output from coming to the screen?
The SET option TERMOUT controls output to the screen. Setting TERMOUT OFF turns off screen
output. This option can be shortened to TERM.
How do you prevent Oracle from giving you informational messages during and after a SQL
statement execution?
The SET options FEEDBACK and VERIFY can be set to OFF.
How do you generate file output from SQL?
By use of the SPOOL command
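For example (the output file name is arbitrary):

```sql
SPOOL /tmp/emp_report.lst
SELECT ename, sal FROM emp;
SPOOL OFF
```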
A tablespace has a table with 30 extents in it. Is this bad? Why or why not.
Multiple extents in and of themselves aren't bad. However, if you also have chained rows this can hurt
performance.
How do you set up tablespaces during an Oracle installation?
You should always attempt to use the Optimal Flexible Architecture (OFA) standard or another partitioning
scheme to ensure proper separation of SYSTEM, ROLLBACK, REDO LOG, DATA, TEMPORARY
and INDEX segments.
You see multiple fragments in the SYSTEM tablespace, what should you check first?
Ensure that users don't have the SYSTEM tablespace as their TEMPORARY or DEFAULT
tablespace assignment by checking the DBA_USERS view.
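A quick sketch of the check:

```sql
SELECT username, default_tablespace, temporary_tablespace
FROM   dba_users
WHERE  default_tablespace = 'SYSTEM'
   OR  temporary_tablespace = 'SYSTEM';
```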
What are some indications that you need to increase the SHARED_POOL_SIZE parameter?
Poor data dictionary or library cache hit ratios, getting error ORA-04031. Another indication is steadily
decreasing performance with all other tuning parameters the same.
Guideline for sizing db_block_size and db_multi_block_read for an application that does many
full table scans?
Oracle almost always reads in 64 KB chunks, so the product of db_block_size and
db_file_multiblock_read_count should equal 64 KB or a multiple of it.
When looking at v$sysstat you see that sorts (disk) is high. Is this bad or good? If bad -How do
you correct it?
If you get excessive disk sorts this is bad. This indicates you need to tune the sort area parameters in
the initialization files. The major sort parameter is the SORT_AREA_SIZE parameter.
When should you increase copy latches? What parameters control copy latches?
When you get excessive contention for the copy latches as shown by the "redo copy" latch hit ratio.
You can increase copy latches via the initialization parameter LOG_SIMULTANEOUS_COPIES to
twice the number of CPUs on your system.

Where can you get a list of all initialization parameters for your instance? How about an
indication if they are default settings or have been changed?
You can look in the init.ora file for an indication of manually set parameters. For all parameters, their
value and whether or not the current value is the default value, look in the v$parameter view.
Describe hit ratio as it pertains to the database buffers. What is the difference between
instantaneous and cumulative hit ratio and which should be used for tuning?
The hit ratio is a measure of how many times the database was able to read a value from the buffers
versus how many times it had to re-read a data value from disk. A value greater than 80-90% is
good; less could indicate problems. If you simply take the ratio of the existing statistics, this is a
cumulative value since the database started. If you compare pairs of readings taken over
some arbitrary time span, this is the instantaneous ratio for that span. Generally speaking, an
instantaneous reading gives more valuable data, since it tells you what your instance is doing during
the period measured.
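The cumulative ratio can be computed from v$sysstat with a query along these lines (the statistic names are the standard ones in that view):

```sql
SELECT ROUND(
         (1 - phy.value / (cur.value + con.value)) * 100, 2
       ) AS "Buffer Hit Ratio %"
FROM   v$sysstat phy, v$sysstat cur, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    cur.name = 'db block gets'
AND    con.name = 'consistent gets';
```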
Discuss row chaining, how does it happen? How can you reduce it? How do you correct it?
Row chaining occurs when a VARCHAR2 value is updated and the length of the new value is longer
than the old value and will not fit in the remaining block space. This results in the row chaining to
another block. It can be reduced by setting the storage parameters on the table (notably PCTFREE) to appropriate values.
It can be corrected by export and import of the affected table.
You are getting busy buffer waits. Is this bad? How can you find what is causing it?
Buffer busy waits could indicate contention in redo, rollback or data blocks. You need to check the
v$waitstat view to see which areas are causing the problem: the "class" column tells you where the
waits are occurring and the "count" column how often. The UNDO classes are rollback segments; DATA is
the database buffer blocks.
If you see contention for library caches how you can fix it?
Increase the size of the shared pool.
If you see statistics that deal with "undo" what are they really talking about?
Rollback segments and associated structures.
If a tablespace has a default pctincrease of zero what will this cause (in relationship to the
SMON process)?
The SMON process would not automatically coalesce its free space fragments.
If a tablespace shows excessive fragmentation what are some methods to defragment the
tablespace? (7.1,7.2 and 7.3 only)
In Oracle 7.0 to 7.2, the command alter session set events 'immediate trace name coalesce level
ts#'; is the easiest way to defragment contiguous free space fragmentation (the ts#
parameter corresponds to the ts# value found in the SYS.ts$ table). In version 7.3, alter tablespace ...
coalesce; is best. If the free space is not contiguous, then export, drop and import of the tablespace
contents may be the only way to reclaim non-contiguous free space.
How can you tell if a tablespace has excessive fragmentation?
If a select against the dba_free_space view shows that the count of a tablespace's extents is greater
than the count of its datafiles, then it is fragmented.
You see the following on a status report: redo log space requests 23 redo log space wait time
0 Is this something to worry about? What if redo log space wait time is high? How can you fix
this?
Since the wait time is zero, no problem. If the wait time was high it might indicate a need for more or
larger redo logs.
If you see a pin hit ratio of less than 0.8 in the estat library cache report is this a problem? If
so, how do you fix it?
This indicates that the shared pool may be too small. Increase the shared pool size.



If you see the value for reloads is high in the estat library cache report is this a matter for
concern?
Yes, you should strive for zero reloads if possible. If you see excessive reloads then increase the size
of the shared pool.
You look at the dba_rollback_segs view and see that there is a large number of shrinks and
they are of relatively small size, is this a problem? How can it be fixed if it is a problem?
A large number of small shrinks indicates a need to increase the size of the rollback segment extents.
Ideally you should have no shrinks or a small number of large shrinks. To fix this just increase the size
of the extents and adjust optimal accordingly.
You look at the dba_rollback_segs view and see that you have a large number of wraps is this
a problem?
A large number of wraps indicates that your extent size for your rollback segments are probably too
small. Increase the size of your extents to reduce the number of wraps. You can look at the average
transaction size in the same view to get the information on transaction size.
You see multiple extents in the Temporary Tablespace. Is this a problem?
As long as they are all the same size this is not a problem. In fact, it can even improve performance
since Oracle would not have to create a new extent when a user needs one.
How do you set up your tablespaces on installation? (Level: Low)
The answer here should show an understanding of separation of redo and rollback, data and indexes
and isolation of SYSTEM tables from other tables. An example would be to specify that at least 7
disks should be used for an Oracle installation.
Disk Configuration:
SYSTEM tablespace on 1, Redo logs on 2 (mirrored redo logs), TEMPORARY tablespace on 3,
ROLLBACK tablespace on 4, DATA and INDEXES 5,6
They should also indicate how they will handle archive logs and exports; as long as they have a
logical plan for combining or further separating them, more or fewer disks can be specified.
You have installed Oracle and you are now setting up the actual instance. You have been
waiting an hour for the initialization script to finish, what should you check first to determine if
there is a problem?
Check to make sure that the archiver is not stuck. If archive logging is turned on during install a large
number of logs will be created. This can fill up your archive log destination causing Oracle to stop to
wait for more space.
When configuring SQLNET on the server what files must be set up?
LISTENER.ORA file, TNSNAMES.ORA file, SQLNET.ORA file
When configuring SQLNET on the client what files need to be set up?
SQLNET.ORA, TNSNAMES.ORA
You have just started a new instance with a large SGA on a busy existing server. Performance
is terrible, what should you check for?
The first thing to check with a large SGA is that it is not being swapped out.
What OS user should be used for the first part of an Oracle installation (on UNIX)?
You must use root first.
When should the default values for Oracle initialization parameters be used as is?
Never
How many control files should you have? Where should they be located?
At least 2 on separate disk spindles (Mirrored by Oracle).
How many redo logs should you have and how should they be configured for maximum
recoverability?
You should have at least 3 groups of two redo logs with the two logs each on a separate disk spindle
(mirrored by Oracle). The redo logs should not be on raw devices on UNIX if it can be avoided.



Why are recursive relationships bad? How do you resolve them?
A recursive relationship is one where a table relates to itself. It is considered bad when it
is a hard relationship (i.e. neither side is a "may"; both are "must"), as this can make it
impossible to put in a top or a bottom of the table. For example, in the EMPLOYEE table you
could not put in the PRESIDENT of the company because he has no boss, or the junior janitor
because he has no subordinates. These types of relationships are usually resolved by adding a small
intersection entity.
What does a hard one-to-one relationship mean (one where the relationship on both ends is
"must")?
This means the two entities should probably be made into one entity.
How should a many-to-many relationship be handled?
By adding an intersection entity table
What is an artificial (derived) primary key? When should an artificial (or derived) primary key
be used?
A derived key comes from a sequence. Usually it is used when a concatenated key becomes too
cumbersome to use as a foreign key.
When should you consider de-normalization?
Whenever performance analysis indicates it would be beneficial to do so without compromising data
integrity.
-UNIX-
How can you determine the space left in a file system?
There are several commands to do this: du, df, or bdf
How can you determine the number of SQLNET users logged in to the UNIX system?
SQLNET users will show up with a process whose unique name begins with oracle; if you do a
ps -ef | grep oracle | wc -l you can get a count of the number of users.
What command is used to type files to the screen?
cat, more, pg
Can you remove an open file under UNIX?
Yes
What is the purpose of the grep command?
grep is a string search command that parses the specified string from the specified file or files
The system has a program that always includes the word nocomp in its name, how can you
determine the number of processes that are using this program?
ps -ef | grep nocomp | wc -l (use grep '[n]ocomp' to avoid counting the grep process itself)
The system administrator tells you that the system has not been rebooted in 6 months, should
he be proud of this?
Most UNIX systems should have a scheduled periodic reboot so file systems can be checked and
cleaned and dead or zombie processes cleared out. Some UNIX systems do not clean up
well after themselves: inode problems and dead user processes can accumulate, causing possible
performance and corruption problems.
How can you find dead processes?
ps -ef|grep zombie -- or -- who -d depending on the system.
How can you find all the processes on your system?
Use the ps command
How can you find your id on a system?
Use the "who am i" command.
What is the finger command?
The finger command uses data in the passwd file to give information on system users.
What is the easiest method to create a file on UNIX?
Use the touch command


What does >> do?
The ">>" redirection symbol appends the output from the command specified to the file specified.
The file is created if it does not already exist.
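A quick sketch demonstrating both behaviours (the file name is arbitrary and generated with mktemp):

```shell
# Demonstrate the ">>" append redirection.
tmpfile=$(mktemp)
rm -f "$tmpfile"             # start with no file at all
echo "first"  >> "$tmpfile"  # ">>" creates the file if it does not exist
echo "second" >> "$tmpfile"  # and appends on subsequent writes
cat "$tmpfile"
rm -f "$tmpfile"
```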
If you are not sure what command does a particular UNIX function what is the best way to
determine the command?
The UNIX man -k command will search the man pages for the value specified. Review the results
from the command to find the command of interest.
How can you determine if an Oracle instance is up from the operating system level?
There are several base Oracle processes that will be running on multi-user operating systems, these
will be smon, pmon, dbwr and lgwr. Any answer that has them using their operating system process
showing feature to check for these is acceptable. For example, on UNIX ps -ef|grep pmon will show
what instances are up.
Users from the PC clients are getting messages indicating : ORA-06114: NETTCP: SID lookup
failure. What could the problem be?
The instance name is probably incorrect in their connection string.
Users from the PC clients are getting the following error stack:
ERROR: ORA-01034: ORACLE not available ORA-07318: smsget: open error when opening
sgadef.dbf file. HP-UX Error: 2: No such file or directory What is the probable cause?
The Oracle instance that they are trying to access is shut down; restart the instance.
How can you determine if the SQLNET process is running for SQLNET V1? How about V2?
For SQLNET V1 check for the existence of the orasrv process. You can use the command "tcpctl
status" to get a full status of the V1 TCPIP server, other protocols have similar command formats. For
SQLNET V2 check for the presence of the LISTENER process(s) or you can issue the command
"lsnrctl status".
What file will give you Oracle instance status information? Where is it located?
The alert log (alert_&lt;SID&gt;.log). It is located in the directory specified by the
background_dump_dest parameter, which can be seen in the v$parameter view.
Users are not being allowed on the system. The following message is received: ORA-00257
archiver is stuck. Connect internal only, until freed. What is the problem?
The archive destination is probably full; back up the archived logs and remove them, and the archiver
will restart.
Where would you look to find out if a redo log was corrupted assuming you are using Oracle
mirrored redo logs?
There is no message that comes to the SQLDBA or SRVMGR programs during startup in this
situation; you must check the alert log file for this information.
You attempt to add a datafile and get: ORA-01118: cannot add anymore datafiles: limit of 40
exceeded. What is the problem and how can you fix it?
When the database was created, the db_files parameter in the initialization file was set to 40. You can
shut down and reset this to a higher value, up to the value of MAXDATAFILES as specified at
database creation. If MAXDATAFILES is set too low, you will have to rebuild the control file to
increase it before proceeding.

You look at your fragmentation report and see that smon has not coalesced any of your
tablespaces, even though you know several have large chunks of contiguous free extents.
What is the problem?
Check the dba_tablespaces view for the value of pct_increase for the tablespaces. If pct_increase is
zero, smon will not coalesce their free space.
Your users get the following error: ORA-00055 maximum number of DML locks exceeded?
What is the problem and how do you fix it?
The number of DML locks is set by the initialization parameter DML_LOCKS. If this value is set
too low (which it is by default) you will get this error. Increase the value of DML_LOCKS. If you are
sure that this is just a temporary problem, you can have them wait and then try again later and the
error should clear.
You get a call from you backup DBA while you are on vacation. He has corrupted all of the
control files while playing with the ALTER DATABASE BACKUP CONTROLFILE command.
What do you do?
As long as all datafiles are safe and he was successful with the BACKUP CONTROLFILE command you
can do the following:
CONNECT INTERNAL
STARTUP MOUNT
(Take any read-only tablespaces offline before the next step: ALTER DATABASE DATAFILE .... OFFLINE;)
RECOVER DATABASE USING BACKUP CONTROLFILE
ALTER DATABASE OPEN RESETLOGS; (bring read-only tablespaces back online)
Shut down and back up the system, then restart. If they have a recent output file from the ALTER
DATABASE BACKUP CONTROLFILE TO TRACE; command, they can use that to recover as well.
If no backup of the control file is available then the following will be required:
CONNECT INTERNAL
STARTUP NOMOUNT
CREATE CONTROLFILE .....;
However, they will need to know all of the datafiles, logfiles, and settings for MAXLOGFILES,
MAXLOGMEMBERS, MAXLOGHISTORY and MAXDATAFILES for the database to use the command.
You have taken a manual backup of a datafile using OS. How RMAN will know about it?
Whenever we take any backup through RMAN, information about the backup is recorded in the
repository. The RMAN repository can be either the controlfile or a recovery catalog. However, if you
take a backup through an OS command, RMAN is not aware of it and hence the records are not
reflected in the repository. This is also true whenever we create a new controlfile, or when a backup
taken by RMAN is transferred to another place using an OS command: the controlfile/recovery catalog
does not know about the prior backups of the database.
So in order to restore the database with a newly created controlfile we need to inform RMAN about
the backups taken before, so that it can pick one to restore.
This task is done with the CATALOG command in RMAN, which can:
Add information about backup pieces and image copies on disk to the repository.
Record a datafile copy as a level 0 incremental backup in the RMAN repository.
Record a datafile copy that was taken by the OS.
But the CATALOG command has some restrictions. It cannot do the following:
Catalog a file that belongs to a different database.
Catalog a backup piece that exists on an sbt device.
Example: Catalog Archive log
RMAN>CATALOG ARCHIVELOG '/oracle/oradata/arju/arc001_223.arc',
'/oracle/oradata/arju/arc001_224.arc';
Catalog Datafile
To catalog the datafile copy '/oradata/backup/users01.dbf' as an incremental level 0 backup, the
command is:
RMAN>CATALOG DATAFILECOPY '/oradata/backup/users01.dbf' LEVEL 0;
Note that this datafile copy was taken either using the RMAN BACKUP AS COPY command
or by using operating system utilities in conjunction with ALTER TABLESPACE BEGIN/END
BACKUP.
Catalog multiple copies in a directory:
RMAN>CATALOG START WITH '/tmp/backups' NOPROMPT;
Catalog files in the flash recovery area:
To catalog all files in the currently enabled flash recovery area without prompting the user for each
one issue
RMAN>CATALOG RECOVERY AREA NOPROMPT;
Catalog backup pieces:
RMAN>CATALOG BACKUPPIECE '/oradata2/o4jccf4';
How to Uncatalog Backup?
In many cases you need the UNCATALOG operation. Suppose you do not want a specific backup or copy
to be eligible to be restored, but also do not want to delete it.
To uncatalog all archived logs issue:
RMAN>CHANGE ARCHIVELOG ALL UNCATALOG;
To uncatalog the backup of tablespace USERS issue:
RMAN>CHANGE BACKUP OF TABLESPACE USERS UNCATALOG;
To uncatalog a backuppiece name /oradata2/oft7qq issue:
RMAN>CHANGE BACKUPPIECE '/oradata2/oft7qq' UNCATALOG;
How would you find total size of database in OS level
The size of the database is the total size of the datafiles that make up the tablespaces of the
database. These details are found in the dba_data_files view.
select sum(bytes)/(1024*1024) from v$datafile;
select sum(bytes)/(1024*1024) from dba_data_files;
select sum(bytes)/(1024*1024) from dba_extents;
Can we take incremental Backup without taking complete Backup?
No, First full backup is needed
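Returning to the earlier question about finding the total database size at the OS level: the same total can be computed directly from the shell by summing datafile sizes. Two dummy .dbf files are created in a scratch directory for the demo; on a real server you would point this at the actual datafile directory:

```shell
# Sum datafile sizes at the OS level with ls/awk (demo files, not real datafiles).
datadir=$(mktemp -d)
dd if=/dev/zero of="$datadir/system01.dbf" bs=1024 count=20 2>/dev/null
dd if=/dev/zero of="$datadir/users01.dbf"  bs=1024 count=10 2>/dev/null
# Field 5 of ls -l is the size in bytes; awk sums it across all .dbf files.
total_bytes=$(ls -l "$datadir"/*.dbf | awk '{sum += $5} END {print sum}')
echo "total: $total_bytes bytes"
rm -rf "$datadir"
```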

DBA Interview Questions with Answers Part3
How to use "ALTER DATABASE BEGIN BACKUP;" command in Oracle 9i.
SQL>alter tablespace <tablespace_name> begin backup;
copy all the datafiles and redo log files using OS commands,
querying v$datafile and v$controlfile to check the file status and paths;
after backing up, take the tablespace out of backup mode:
SQL>alter tablespace <tablespace_name> end backup;
repeat this for all tablespaces
How will you rectify if one of the rollback segments gets corrupted
The only option available is to restore and recover the database, followed by opening the database
with resetlogs. In this case you will lose the entire prior database backup, so you must take a fresh backup.
How many days are we going to retain the data after taking the backup? For example, the data
backed up today will expire in 90 days; that means it is a 90-day retention policy for backup.
You can use the CONFIGURE RETENTION POLICY command to create a persistent and automatic backup
retention policy. When a backup retention policy is in effect, RMAN considers backups of datafiles and
control files obsolete, that is, no longer needed for recovery, according to criteria that you specify in the
CONFIGURE command. You can then use the REPORT OBSOLETE command to view obsolete files
and DELETE OBSOLETE to delete them.
Difference Retention Policy of REDUNDANCY/RECOVERY WINDOW Parameters?
RETENTION POLICY (REDUNDANCY/RECOVERY WINDOW): REDUNDANCY defines a
fixed number of backups to be retained; any backup in excess of this number can be deleted. The
default value of 1 says that as soon as a new backup is created the old one is no longer needed and can be
deleted. The other option, RECOVERY WINDOW, is specified in days and defines the
period of time in which point-in-time recovery must be possible; thus it defines how long backups
should be retained.
What kind of backup you take Physical / Logical? Which one is better and Why?
Logical backup means backing up individual database objects such as tables, views and indexes
using the EXPORT utility provided by Oracle. The objects exported in this way can be
imported into either the same database or any other database. The backed-up copy of information is
stored in a dump file, and this file can be read only by another utility called IMPORT; there is no
other way to use the file. In this backup the Oracle Export utility stores data in a binary file at OS
level.
Physical backups rely on the operating system to make a copy of the physical files (datafiles, redo log
files, control files) that comprise the database; these files are physically copied from one location to
another (disk or tape).
Logical backup is generally not preferred as the primary strategy: it is very slow, and point-in-time
recovery is not possible with it.
What is Partial Backup?
A partial backup is any operating system backup short of a full backup, taken while the database is
open or shut down. It is an operating system backup of part of a database: the backup of an individual
tablespace's datafiles or the backup of a control file are examples of partial backups. Partial backups
are useful only when the database is in ARCHIVELOG mode.
What are the name of the available VIEW in oracle used for monitoring database is in backup
mode (begin backup).
V$backup: the status column of this view shows whether a tablespace is in hot backup mode; the status
'ACTIVE' shows the datafile to be in backup mode.
V$datafile_header: the fuzzy column also helps a DBA to monitor datafiles that are in hot backup
(begin backup) mode.
NOTE: The database does not start up cleanly when a datafile is in backup mode, so put the datafile back
in normal mode before shutting down the database.
What is Tail log backup? Where can we use it?
Tail Log Backup is the log backup taken after data corruption (Disaster). Even though there is file
corruption we can try to take log backup (Tail Log Backup). This will be used during point in time
recovery.
Consider a scenario where we have a full backup at 12:00 noon and one transaction log backup at 1:00
PM, with log backups scheduled every hour. If disaster happens at 1:30 PM then we can try
to take a tail log backup at 1:30 (after the disaster). If we can take the tail log backup, then recovery is:
first restore the 12:00 noon full backup, then the 1:00 PM log backup, and then the last tail log backup
of 1:30 (after the disaster).
How to check the size of SGA?
SQL> show SGA
Total System Global Area 167772160 bytes
Fixed Size 1247900 bytes
Variable Size 58721636 bytes
Database Buffers 104857600 bytes
Redo Buffers 2945024 bytes
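As a sanity check, the component sizes reported by SHOW SGA add up exactly to the total shown above:

```shell
# Sum the SGA components from the SHOW SGA output above.
fixed=1247900
variable=58721636
buffers=104857600
redo=2945024
total=$((fixed + variable + buffers + redo))
echo "$total bytes"                 # 167772160 bytes
echo "$((total / 1024 / 1024)) MB"  # 160 MB
```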
How to define data block size
The primary block size is defined by the Initialization parameter DB_BLOCK_SIZE.
How can we determine the size of the log files.
SQL>Select sum(bytes)/(1024*1024) size_in_mb from v$log;
What do you do when the server cannot start due to a corrupt master database?
If the master database is corrupt then other databases are likely to have problems as well, and
recovery becomes immediately necessary. You can try to rebuild it with the rebuild utility and then
restore it.
What do you do when temp db is full?
You need to clean up the space and add more space in order to prevent this error in future.
SQL>Alter database tempfile 'temp01.dbf' resize 200M;
Use V$TEMP_SPACE_HEADER to check the free space in Tempfile or use the query
SELECT A.tablespace_name tablespace, D.mb_total,
SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM v$sort_segment A,
(
SELECT B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total
FROM v$tablespace B, v$tempfile C
WHERE B.ts#= C.ts#
GROUP BY B.name, C.block_size
) D
WHERE A.tablespace_name = D.name
GROUP by A.tablespace_name, D.mb_total;
The above query displays, for each sort segment in the database, the tablespace the
segment resides in, the size of the tablespace, the amount of space within the sort segment
that is currently in use, and the amount of space available.
What is the frequency of log Updated..?
The redo log buffer is flushed to the online redo logs whenever a commit occurs, at a checkpoint,
when the redo log buffer is one-third full, on timeout (every 3 seconds), or when 1 MB of redo has
accumulated in the buffer.
What are the Possibilities of Logical Backup (Export/Import)
- We can export from one user and import into another within the same database.
- We can export from one database and import into another database (but both source and
destination databases should be Oracle databases).
- When migrating from one platform to another, such as from Windows to Sun Solaris, export is the only
method to transfer the data.
What is stored in Oratab file
"oratab" is a file created by Oracle in the /etc or /var/opt/oracle directory when installing database
software. Originally ORATAB was used for SQL*Net V1, but lately it is being used to list the
databases and software versions installed on a server.
database_sid:oracle_home_dir:Y|N
The Y|N flags indicate if the instance should automatically start at boot time (Y=yes, N=no).
Besides acting as a registry for what databases and software versions are installed on the server,
ORATAB is also used for the following purposes:
Oracle's "dbstart" and "dbshut" scripts use this file to figure out which instances are to be started up
or shut down (using the third field, Y or N).
The "oraenv" utility uses ORATAB to set the correct environment variables.
One can also write Unix shell scripts that cycle through multiple instances using the information in the
oratab file.
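A sketch of such a script: list the SIDs flagged for autostart (third field = Y) from an oratab-style file. A sample file is generated here so the snippet is self-contained; on a real server you would read /etc/oratab or /var/opt/oracle/oratab instead:

```shell
# Parse an oratab-style file and print SIDs with the autostart flag set to Y.
oratab=$(mktemp)
cat > "$oratab" <<'EOF'
# oratab format: sid:oracle_home:autostart
ORCL:/u01/app/oracle/product/10.2.0/db_1:Y
TEST:/u01/app/oracle/product/9.2.0/db_1:N
EOF
# Skip comment lines, split on ':' and keep entries whose third field is Y.
autostart_sids=$(grep -v '^#' "$oratab" | awk -F: '$3 == "Y" {print $1}')
echo "$autostart_sids"
rm -f "$oratab"
```

The same `awk -F:` pattern is what dbstart-style scripts use to cycle through instances.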
In your database some blocks of particular datafile are corrupted. What statement will you
issue to know how many blocks are corrupted?
You can query the V$DATABASE_BLOCK_CORRUPTION view (SELECT * FROM
V$DATABASE_BLOCK_CORRUPTION;) to determine the corrupted blocks.

What is a flash back query? This feature is also available in 9i. What are the difference
between 9i and 10g (related to flash back query).
Oracle 9i provides Flashback Query. Oracle 10g adds the following enhancements:
Flashback Version Query,
Flashback Transaction Query (the FLASHBACK_TRANSACTION_QUERY view),
Flashback Table,
Flashback Database.
Setup required for the new features:
Automatic Undo Management (AUM),
Flash Recovery Area.
Describe the use of %ROWTYPE and %TYPE in PL/SQL
%ROWTYPE allows you to associate a variable with an entire table row. The %TYPE associates a
variable with a single column type.
How can the problem be resolved if a SYSDBA, forgets his password for logging into
enterprise manager?
There are a few ways to do that:
1. Login as SYSTEM and change the SYS password by using ALTER USER.
2. Recreate the password file using orapwd, set remote_login_passwordfile to exclusive, and then
restart the instance.
3. You can also connect / as sysdba and then change the password: ALTER USER sys IDENTIFIED BY
xxx;
How many maximum number of columns can be part of primary key in a table in 9i and 10g.
A primary key in a single table can include up to 32 columns in Oracle 9i and 10g.
What is RAC?
RAC stands for Real Application Clusters. In previous versions it was known as PARALLEL SERVER.
RAC is a mechanism that allows multiple instances (on different hosts/nodes) to access the
same database. The benefits: it provides more memory resources, since more hosts are being used,
and if one host goes down, another host assumes its workload.
What is Data Pumping?
Data Pump is a data movement utility, a replacement for the imp/exp utilities. The earlier
imp/exp utilities are also data movement utilities, but they run as client-side tools. impdp/expdp
(Data Pump) are much faster, run on the server side, and perform data movement from one database to
another on the same or a different host.
What is Data Migration?
Data migration is actually the translation of data from one format to another format or from one
storage device to another storage device. Data migration is necessary when a company upgrades its
database or system software, either from one version to another or from one program to an entirely
different program.
What is difference between spfile and init.ora file
init.ora and spfile both contain database parameter information, and both are supported by Oracle.
Every database instance requires one of them; if both are present, preference is given
to the spfile. init.ora is saved in ASCII format whereas the SPFILE is saved in binary
format. init.ora is read by the Oracle engine only at instance startup, which means any
modification made in it is applicable only at the next startup. But spfile
modifications (through the ALTER SYSTEM ... command) can be applied without restarting the Oracle
database (restarting the instance).
What is SCN? Where the SCN does resides?
SCN (System Change Number) is always being incremented by the Oracle server and is used to
ensure consistency across the database. The SCN is an ever-increasing value that uniquely identifies
a committed version of the database. Every time a user commits a transaction, Oracle records a new
SCN. You can obtain SCNs in a number of ways, for example from the alert log. You can then use the
SCN as an identifier for purposes of recovery; for example, you can perform an incomplete recovery
of a database up to SCN 1030. Oracle uses SCNs in control files, datafile headers and redo records.
Every redo log file has both a log sequence number and low and high SCNs. The SCN is updated in
almost all places of the database: CONTROLFILE, DATAFILE HEADERS, REDOLOG FILES (and
hence ARCHIVE LOG FILES) and DATA BLOCK HEADERS, but not in the ALERT LOG file, as it is
not part of the database.
How to know which query is taking long time?
By testing with tools such as tkprof or EXPLAIN PLAN. tkprof is typically available to DBAs only,
whereas EXPLAIN PLAN can be run by programmers as well as DBAs. tkprof generates its analysis
only after successful execution, whereas EXPLAIN PLAN shows Oracle's internal plan and other
details beforehand. They are not alternatives to one another; they are two different tools, useful in
different situations. You can also use STATSPACK to take snapshots while running those queries and
get a report detailing the SQL taking the most time to respond. Otherwise, you can search for the top
ten SQL statements with the following views:
SQL>SELECT * FROM V$SQL;
SQL>SELECT * FROM V$SQLAREA;
SQL>SELECT * FROM (SELECT rownum, Substr(a.sql_text, 1, 200) sql_text,
Trunc(a.disk_reads/Decode(a.executions, 0, 1, a.executions)) reads_per_execution, a.buffer_gets,
a.disk_reads, a.executions, a.sorts, a.address FROM v$sqlarea a ORDER BY 3 DESC) WHERE
rownum < 10;
How can you check which user has which role?
Sql>Select * from DBA_ROLE_PRIVS order by grantee;
What are clusters
Groups of tables physically stored together because they share common columns and are often used
together are called clusters.
Name (init.ora) parameters which affects system performance.
These are the Parameters for init.ora which affect system performance
DB_BLOCK_BUFFERS; SHARED_POOL_SIZE; SORT_AREA_SIZE; DBWR_IO_SLAVES;
ROLLBACK_SEGMENTS; SORT_AREA_RETAINED_SIZE;
DB_BLOCK_LRU_EXTENDED_STATISTICS;
SHARED_POOL_RESERVED_SIZE
How do you rename a database?
Prior to the introduction of the DBNEWID (NID) utility alteration of the internal DBID of an instance
was impossible and alteration of the DBNAME required the creation of a new controlfile. The
DBNEWID utility allows the DBID to be altered for the first time and makes changing the DBNAME
simpler.
Steps: Change DBNAME only
1. Mount the database after clean shutdown.
2. Invoke the DBNEWID utility (NID) from the command line using sys user.
nid TARGET=sys/password@TSH2 DBNAME=TSH3 SETNAME=YES
Assuming the validation is successful the utility prompts for confirmation before performing the
actions.
Note: The SETNAME parameter tells the DBNEWID utility to only alter the database name.
3. clean shutdown the database
SQL>SHUTDOWN IMMEDIATE
Set the DB_NAME initialization parameter in the initialization parameter file (PFILE) to the new
database name.
Note:The DBNEWID utility does not change the server parameter file (SPFILE). Therefore, if you use
SPFILE to start your Oracle database, you must re-create the initialization parameter file from the
server parameter file, remove the server parameter file, change the DB_NAME in the initialization
parameter file, and then re-create the server parameter file. Because you have changed only
the database name, and not the database ID, it is not necessary to use the RESETLOGS option
when you open the database. This means that all previous backups are still usable.
4. Create a new password file.
orapwd file=c:\oracle\920\database\pwdTSH2.ora password=password entries=10
5. Open the database without Reset logs option
SQL>Startup;
Steps: Change DBID only
Repeat the same procedure as above, but invoke the utility without the SETNAME parameter:
nid TARGET=sys/password@TSH3
Then shut down and open the database with the RESETLOGS option.
What is the view name where we can get the space for tables or views?
DBA_Segments;
SELECT SEGMENT_NAME, SUM(BYTES) FROM DBA_SEGMENTS
WHERE SEGMENT_NAME='TABLE_NAME' AND OWNER='OWNER_OF_THE_TABLE' GROUP BY
SEGMENT_NAME;
We cannot get the space of a view because a view does not have its own space; it depends on its base table.
What background process refreshes materialized views?
Job Queue processes
What view would you use to determine free space in a tablespace?
It is dba_free_space
SQL>SELECT TABLESPACE_NAME , BYTES FROM sm$ts_free;
SQL>SELECT TABLESPACE_NAME,SUM(BYTES/1024/1024) FROM
DBA_FREE_SPACE GROUP BY TABLESPACE_NAME;
If CPU is very slow, what can u do to speed?
Use vmstat to check the CPU run queue, or use the top and sar commands to check CPU load.
What would you use to improve performance on an insert statement that places millions of rows into a
table?
Drop the indexes and recreate after insert.
DML triggers can be DISABLED and then ENABLED once the insert completes.
A clustered index can be DISABLED and then re-ENABLED once the insert completes.
If Monday we take a full backup, Tuesday a cumulative backup and Wednesday an incremental backup,
and on Thursday some disaster happens, what type of recovery is needed and how is it done?
Restore the Monday full backup, then apply the Tuesday cumulative backup and the Wednesday
incremental backup, followed by any archived logs generated after the Wednesday backup.
What is the difference between local managed Tablespace & dictionary managed Tablespace ?
The basic difference between a locally managed tablespace and a dictionary managed tablespace is that
in a dictionary managed tablespace, every time an extent is allocated or deallocated the data dictionary
is updated, which increases the load on the data dictionary; in a locally managed tablespace the
space information is kept inside the datafile in the form of bitmaps, and every time an extent is allocated
or deallocated only the bitmap is updated, which removes the burden from the data dictionary.
Tablespaces that record extent allocation/deallocation in the dictionary are called dictionary managed
tablespaces, and tablespaces that record extent allocation in the tablespace header are called locally
managed tablespaces.

While installing the Oracle 9i ( 9.2) version, automatically system takes the space of
approximately 4 GB. That is fine.... Now, if my database is growing up and it is reaching the
4GB of my database space...Now, I would like to extend my Database space to 20 GB or 25
GB... what are the things i have to do?
Following steps can be performed:
1. First check for available space on the server.
2. You can increase the size of the datafiles if you have space available on the server and also you
can make auto extend on. So that in future you don't need to manually increase the size.
A better alternative is to turn autoextend off and add more datafiles to the tablespace; making a single
datafile very large is risky. With autoextend off you can monitor the growth of the tablespace:
schedule a growth-monitoring script with a threshold of 85% full.

DBA Interview Questions with Answers Part4
How to handle data corruption for ASM type files?
The storage array should contain one or more spare disks (often called hot spares). When a physical
disk starts to report errors to the monitoring infrastructure, or fails suddenly, the firmware should
immediately restore fault tolerance by mirroring the contents of the failed disk onto a spare disk.
When a user comes to you and asks that a particular SQL query is taking more time. How will
you solve this?
If you find the SQL query causing the problem, then take a SQL trace with an explain plan; it will show
how the SQL query is executed by Oracle, and depending on the report you can tune the database.
For example: a table has 10,000 records but you want to fetch only 5 rows, yet for that query Oracle
does a full table scan. A full table scan for only 5 rows is not a good thing, so create an index on the
relevant column; in this way you tune the database.
By default Maximum Enabled Role in a database.
The MAX_ENABLED_ROLES init.ora parameter limits the number of roles any user can have
enabled simultaneously. The default is 30 in both Oracle 8i and 9i. When you create a role it is
enabled by default. If you create many roles, then you may exceed the MAX_ENABLED_ROLES
setting even if you are not the user of those roles.


User Profiles:
User profiles are used to limit the amount of system and database resources available to a user
and to manage password restrictions. If no profiles are created in a database then the default profile,
which specifies unlimited resources for all users, will be used.
How to convert local management Tablespace to dictionary managed Tablespace?
>execute dbms_space_admin.tablespace_migrate_from_local('tablespace_name');
To go the other way (dictionary managed to locally managed):
>execute dbms_space_admin.tablespace_migrate_to_local('tablespace_name');
What is a cluster Key ?
The related columns of the tables in a cluster are called the cluster key. The cluster key is indexed
using a cluster index, and its value is stored only once for the multiple tables in the cluster.
What are four performance bottlenecks that can occur in a database server and how are they
detected and prevented?
CPU bottlenecks
Undersized memory structures
Inefficient or high-load SQL statements
Database configuration issues
Four major steps to detect these issues:
Analyzing Optimizer Statistics
Analyzing an Execution Plan
Using Hints to Improve Data Warehouse Performance
Using Advisors to Verify SQL Performance
Analyzing Optimizer Statistics
Optimizer statistics are a collection of data that describes more details about the database and the
objects in the database. The optimizer statistics are stored in the data dictionary. They can be viewed
using data dictionary views similar to the following:
SELECT * FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'GATHER_STATS_JOB';
Because the objects in a database can constantly change, statistics must be regularly updated so that
they accurately describe these database objects. Statistics are maintained automatically by Oracle
Database, or you can maintain the optimizer statistics manually using the DBMS_STATS package.
Analyzing an Execution Plan
General guidelines for using the EXPLAIN PLAN statement:
Use the SQL script UTLXPLAN.SQL to create a sample output table called PLAN_TABLE in your
schema.
Include the EXPLAIN PLAN FOR clause prior to the SQL statement.
After issuing the EXPLAIN PLAN statement, use one of the scripts or packages provided by Oracle
Database to display the most recent plan table output.
The execution order in EXPLAIN PLAN output begins with the line that is indented farthest to the
right. If two lines are indented equally, then the top line is normally executed first.
To analyze EXPLAIN PLAN output:
EXPLAIN PLAN FOR (YOUR QUERY);
EXPLAIN PLAN FOR SELECT p.prod_name, c.channel_desc, SUM(s.amount_sold) revenue
FROM products p, channels c, sales s
WHERE s.prod_id = p.prod_id
AND s.channel_id = c.channel_id
AND s.time_id BETWEEN '01-12-2001' AND '31-12-2001' GROUP BY p.prod_name, c.channel_desc;
SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY);
Using Advisors how to Verify SQL Performance?
Using the SQL Tuning Advisor and SQL Access Advisor, you can invoke the query optimizer in
advisory mode to examine a given SQL statement or set of SQL statements and provide
recommendations to improve their efficiency. The SQL Tuning Advisor and SQL Access Advisor can
make various types of recommendations, such as creating SQL profiles, restructuring SQL statements,
creating additional indexes or materialized views, and refreshing optimizer statistics.
Additionally, Oracle Enterprise Manager enables you to accept and implement many of these
recommendations in very few steps.
Difference between Rman Recovery Catalog or nocatalog Option?
The recovery catalog is an optional feature of RMAN; though Oracle recommends that you use it, it
isn't required. One major benefit of the recovery catalog is that it stores metadata about backups in
a database that can be reported on or queried. Catalog means you have a recovery catalog database;
nocatalog means that you are using the controlfile as the RMAN repository. Of course, the catalog
option can only be used when a recovery catalog is present (which is not mandatory). From a functional
point of view there is no difference between taking a backup in catalog or nocatalog mode.
What is Oracle Net?
Oracle Net is responsible for handling client-to-server and server to- server communications in an
Oracle environment. It manages the flow of information in the Oracle network infrastructure. Oracle
Net is used to establish the initial connection to the Oracle server and then it acts as the messenger,
which passes requests from the client back to the server or between two Oracle servers.
Difference of Backup Sets and Backup Pieces?
RMAN can store backup data in a logical structure called a backup set, which is the smallest unit of
an RMAN backup. A backup set contains the data from one or more datafiles, archived redo logs, or
control files or server parameter file. Backup sets, which are only created and accessed through
RMAN, are the only form in which RMAN can write backups to media managers such as tape drives
and tape libraries.
A backup set contains one or more binary files in an RMAN-specific format. This file is known as
a backup piece. A backup set can contain multiple datafiles. For example, you can back up ten
datafiles into a single backup set consisting of a single backup piece. In this case, RMAN creates one
backup piece as output. The backup set contains only this backup piece.
What is an UTL_FILE? What are different procedures and functions associated with it?
The UTL_FILE package lets your PL/SQL programs read and write operating system (OS) text files. It
provides a restricted version of standard OS stream file input/output (I/O).
Subprogram -Description
FOPEN function-Opens a file for input or output with the default line size.
IS_OPEN function -Determines if a file handle refers to an open file.
FCLOSE procedure -Closes a file.
FCLOSE_ALL procedure -Closes all open file handles.
GET_LINE procedure -Reads a line of text from an open file.
PUT procedure-Writes a line to a file. This does not append a line terminator.
NEW_LINE procedure-Writes one or more OS-specific line terminators to a file.
PUT_LINE procedure -Writes a line to a file. This appends an OS-specific line terminator.
PUTF procedure -A PUT procedure with formatting.
FFLUSH procedure-Physically writes all pending output to a file.
FOPEN function -Opens a file with the maximum line size specified.
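A minimal sketch of these subprograms in use; the directory object MY_DIR and the file name are assumptions for illustration (the directory object must already exist and be granted to the user):

```sql
DECLARE
  fh UTL_FILE.FILE_TYPE;
BEGIN
  fh := UTL_FILE.FOPEN('MY_DIR', 'demo.txt', 'w');  -- open for writing
  UTL_FILE.PUT_LINE(fh, 'Hello from UTL_FILE');     -- write a line plus OS line terminator
  UTL_FILE.FFLUSH(fh);                              -- physically write pending output
  UTL_FILE.FCLOSE(fh);                              -- close the file handle
END;
/
```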
Differentiate between TRUNCATE and DELETE?
The DELETE command logs the data changes (generating undo and redo), whereas TRUNCATE simply
removes the data without doing so. Hence data removed by the DELETE command can be rolled back, but
not the data removed by TRUNCATE. TRUNCATE is a DDL statement whereas DELETE is a DML statement.
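The difference is easy to demonstrate; a short sketch (the table name is illustrative):

```sql
DELETE FROM emp_copy WHERE deptno = 10;  -- DML: generates undo, can be rolled back
ROLLBACK;                                -- the deleted rows come back

TRUNCATE TABLE emp_copy;                 -- DDL: implicit commit, cannot be rolled back
```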
What is an Oracle Instance?
Instance is a combination of memory structure and process structure. Memory structure is SGA
(System or Shared Global Area) and Process structure is background processes.
Components of SGA:
Database Buffer Cache,
Shared Pool (further divided into the Library Cache and the Data Dictionary Cache, also called the Row
Cache),
Large Pool / Streams Pool / Java Pool,

Redo log Buffer,
Background Process:
Mandatory Processes (SMON, PMON, DBWn, LGWR, CKPT)
Optional Processes (ARCn, RECO, MMAN, MMON, MMNL)
When Oracle starts an instance, it reads the initialization parameter file to determine the values of
initialization parameters. Then, it allocates an SGA, which is a shared area of memory used for
database information, and creates background processes. At this point, no database is associated
with these memory structures and processes.
What information is stored in Control File?
The database name, The timestamp of database creation, The names and locations of associated
datafiles and redo log files, Tablespace information, Datafile offline ranges, The log history, Archived
log information, Backup set and backup piece information, Backup datafile and redo log
information, Datafile copy information, The current log sequence number
When you start an Oracle DB which file is accessed first?
To Start an instance, oracle server need a parameter file which contains information about the
instance, oracle server searches file in following sequence:
1) SPFILE (spfileSID.ora) -- if found, the instance is started with it.
2) Default SPFILE (spfile.ora) -- used if spfileSID.ora is not found.
3) PFILE (initSID.ora) -- used if no SPFILE is found.
4) Default PFILE (init.ora) -- used as a last resort to start the instance.
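The search order can be bypassed with an explicit PFILE, and the two file types can be converted into one another; a sketch (the path is an assumption for illustration):

```sql
STARTUP PFILE='/u01/app/oracle/dbs/initMYSID.ora';  -- start with an explicit PFILE
CREATE SPFILE FROM PFILE;                           -- build an SPFILE from the current PFILE
CREATE PFILE FROM SPFILE;                           -- and the reverse, e.g. for editing or backup
```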
What is the Job of SMON, PMON processes?
SMON: The system monitor performs instance recovery at instance startup. In a cluster
environment it can also recover other instances that have failed. It cleans up temporary
segments that are no longer in use and recovers dead transactions skipped during crash
and instance recovery. It also coalesces free extents within dictionary-managed tablespaces,
to make free space contiguous and easier to allocate.
PMON: The process monitor performs recovery when a user process fails. It is responsible
for cleaning up the cache and freeing resources used by the failed process. In the shared
(multi-threaded) server environment it checks on dispatcher and server processes, restarting
them at times of failure.
What is Instance Recovery?
When an Oracle instance fails, Oracle performs an instance recovery when the associated database
is re-started.
Instance recovery occurs in two steps:
Cache recovery: Changes being made to a database are recorded in the database buffer cache.
These changes are also recorded in online redo log files simultaneously. When there are enough data
in the database buffer cache, they are written to data files. If an Oracle instance fails before the data
in the database buffer cache are written to data files, Oracle uses the data recorded in the online redo
log files to recover the lost data when the
associated database is re-started. This process is called cache recovery.
Transaction recovery: When a transaction modifies data in a database, the before image of the
modified data is stored in an undo segment. The data stored in the undo segment is used to restore
the original values in case a transaction is rolled back. At the time of an instance failure, the database
may have uncommitted transactions. It is possible that changes made by these uncommitted
transactions have gotten saved in data files. To maintain read consistency, Oracle rolls back all
uncommitted transactions when the associated database is re-started. Oracle uses the undo data
stored in undo segments to accomplish this. This process is called transaction recovery.
1. Rolling forward the committed transactions
2. Rolling backward the uncommitted transactions




What is written in Redo Log Files?
Log writer (LGWR) writes redo log buffer contents Into Redo Log Files. Log writer does this every
three seconds, when the redo log buffer is 1/3 full and immediately before the Database Writer
(DBWn) writes its changed buffers into the data file.
How do you control number of Datafiles one can have in an Oracle database?
When starting an Oracle instance, the database's parameter file indicates the amount of SGA space
to reserve for datafile information; the maximum number of datafiles is controlled by the DB_FILES
parameter. This limit applies only for the life of the instance.
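A sketch of checking and raising the limit; the value 400 is illustrative, and since DB_FILES is not dynamically modifiable the change takes effect only after a restart:

```sql
SHOW PARAMETER db_files;

ALTER SYSTEM SET db_files = 400 SCOPE = SPFILE;  -- effective at next instance startup
```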
How many Maximum Datafiles can there be in an Oracle Database?
The default maximum number of datafiles is 255, which is defined in the control file at the time of
database creation.
It can be increased by setting the initialization parameter to a higher value at database creation
time. Setting this value too high can cause DBWR issues.
Before 9i Maximum number of datafile in database was 1022.After 9i the limit is applicable to the
number of datafile in the Tablespace.
What is a Tablespace?
A tablespace is a logical storage unit within the database. It is logical because a tablespace is not
visible in the file system of the machine on which the database resides. A tablespace in turn consists of at
least one datafile, which, in turn, is physically located in the file system of the server. The tablespace
builds the bridge between the Oracle database and the file system in which the table or index data
is stored.
There are three types of tablespaces in Oracle:
Permanent tablespaces, Undo tablespaces, Temporary tablespaces
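One creation statement per tablespace type, as a sketch (file names and sizes are illustrative):

```sql
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/app_data01.dbf' SIZE 100M;          -- permanent

CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u01/oradata/undotbs2_01.dbf' SIZE 200M;         -- undo

CREATE TEMPORARY TABLESPACE temp2
  TEMPFILE '/u01/oradata/temp2_01.tmp' SIZE 100M;            -- temporary
```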
What is the purpose of Redo Log files?
The purpose of the redo log files is to record all changes made to the data, so that those changes can be
reapplied during recovery of the database. It is always advisable to have two or more redo log files,
multiplexed on separate disks, so you can recover the data after a system crash.
Which default Database roles are created when you create a Database?
CONNECT, RESOURCE and DBA are three default roles.
What is a Checkpoint?
A checkpoint performs the following three operations:
1. Every dirty (modified) block in the buffer cache is written to the data files. That is, it synchronizes the
data blocks in the buffer cache with the datafiles on disk. It is the DBWR that writes all modified
database blocks back to the datafiles.
2. The latest SCN is written (updated) into the datafile header.
3. The latest SCN is also written to the controlfiles.
The update of the datafile headers and the control files is done by the CKPT process. As
of version 8.0, CKPT is enabled by default. The date and time of the last checkpoint can be retrieved
through checkpoint_time in v$datafile_header. The SCN of the last checkpoint can be found
in v$database as checkpoint_change#.
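The views mentioned above can be queried directly, and a checkpoint can also be forced manually:

```sql
SELECT checkpoint_change# FROM v$database;        -- SCN of the last checkpoint
SELECT checkpoint_time    FROM v$datafile_header; -- checkpoint time, per datafile

ALTER SYSTEM CHECKPOINT;                          -- force a checkpoint now
```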
Which Process reads data from Datafiles?
The Server process reads the blocks from datafiles to buffer cache
Which Process writes data in Datafiles?
DBWn Process is writing the dirty buffers from db cache to data files.
Can you make a Datafile auto extendible. If yes, then how?
You must be logged on as a DBA user, then issue
For Data File:
SQL>Alter database datafile 'c:\oradata\mysid\XYZ.dbf' autoextend on next 10m maxsize 40G;
SQL>Alter database datafile 'c:\oradata\mysid\XYZ.dbf' autoextend on next 10m maxsize unlimited;
For Temp File:
SQL>Alter database tempfile 'c:\oradata\mysid\XYZ.dbf' autoextend on next 10m maxsize unlimited;

This would turn on autoextend, grab new disk space of 10m when needed and have no upper limit on
the size of the datafile.
Note: UNLIMITED would be bad on a 32-bit machine, where the maximum file size is typically 4 GB.
What is a Shared Pool?
It is the area in SGA that allows sharing of parsed SQL statements among concurrent users. It is to
store the SQL statements so that the identical SQL statements do not have to be parsed each time
they're executed.
The shared pool is the part of the SGA where (among others) the following things are stored:
Optimized query plans, Security checks, Parsed SQL statements, Packages, Object information
What is kept in the Database Buffer Cache?
Database Buffer cache is one of the most important components of System Global Area (SGA).
Database Buffer Cache is the place where data blocks are copied from datafiles to perform SQL
operations. Buffer Cache is shared memory structure and it is concurrently accessed by all server
processes. Oracle allows different block size for different tablespaces. A standard block size is
defined in DB_BLOCK_SIZE initialization parameter. System tablespace uses standard block
size. DB_CACHE_SIZE parameter is used to define size for Database buffer cache. For example to
create a cache of 800 MB, set parameter as below
DB_CACHE_SIZE=800M
If you have created a tablespace with a block size different from the standard block size (for example, your
standard block size is 4K and you have created a tablespace with an 8K block size), then you must also
create an 8K buffer cache:
DB_8K_CACHE_SIZE=256M
How many maximum Redo Logfiles one can have in a Database?
The maximum number of redo log files a database can accommodate depends on the parameter
MAXLOGFILES specified during database creation (MAXLOGMEMBERS limits the number of members
per group). A database can have up to 255 redo log files; the actual limit is what you specified for
MAXLOGFILES during manual database creation, or for "Maximum no. of redo log files" in DBCA.
What is PGA_AGGREGRATE_TARGET parameter?
PGA_AGGREGATE_TARGET is an Oracle server parameter that specifies the target aggregate PGA
memory available to all server processes attached to the instance. Some of the properties of the
PGA_AGGREGATE_TARGET parameter are given below:
Parameter type: Big integer
Syntax: PGA_AGGREGATE_TARGET = integer [K | M | G]
Default value: 20% of SGA size or 10 MB, whichever is greater
Modifiable: ALTER SYSTEM
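For example (the 200M value is purely illustrative):

```sql
SHOW PARAMETER pga_aggregate_target;

ALTER SYSTEM SET pga_aggregate_target = 200M;  -- dynamically modifiable, no restart needed
```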
Large Pool is used for what?
Large Pool is an optional memory structure used for the following purposes: -
(1) Session information for shared server
(2) I/O server processes
(3) Parallel queries
(4) Backup and recovery if using through RMAN.
The role of Large Pool is important because otherwise memory would be allocated from the Shared
pool. Hence Large pool also reduces overhead of Shared pool.
What is PCT Increase setting?
PCTINCREASE refers to the percentage by which each next extent (beginning with the third extent)
will grow. The size of each subsequent extent is equal to the size of the previous extent plus this
percentage increase.
What is PCTFREE and PCTUSED Setting?
PCTFREE is a block storage parameter used to specify how much space should be left in a database
block for future updates. For example, for PCTFREE=10, Oracle will keep on adding new rows to a
block until it is 90% full. This leaves 10% for future updates (row expansion).
When using Oracle Advanced Compression, Oracle will trigger block compression when

the PCTFREE is reached. This eliminates holes created by row deletions and maximizes contiguous
free space in blocks.
PCTUSED is a block storage parameter used to specify when Oracle should consider a database
block empty enough to be added to the freelist. Oracle will only insert new rows into blocks that are
enqueued on the freelist. For example, if PCTUSED=40, Oracle will not add new rows to the block
until enough rows are deleted that the used space in the block falls below 40%.
SQL> SELECT pct_free FROM user_tables WHERE table_name = 'EMP';

DBA Interview Questions with Answers Part5
What is Row Migration and Row Chaining?
There are two circumstances in which the data for a row in a table may be too large to fit into a single
data block: row chaining and row migration.
Chaining: Occurs when the row is too large to fit into one data block when it is first inserted. In this
case, Oracle stores the data for the row in a chain of data blocks (one or more) reserved for that
segment. Row chaining most often occurs with large rows, such as rows that contain a column of data
type LONG, LONG RAW, LOB, etc. Row chaining in these cases is unavoidable.
Migration: Occurs when a row that originally fit into one data block is updated so that the overall
row length increases, and the block's free space is already completely filled. In this case, Oracle
migrates the data for the entire row to a new data block, assuming the entire row can fit in a new
block. Oracle preserves the original row piece of a migrated row to point to the new block containing
the migrated row: the rowid of a migrated row does not change. When a row is chained or migrated,
performance associated with this row decreases because Oracle must scan more than one data
block to retrieve the information for that row.
1. INSERT and UPDATE statements that cause migration and chaining perform poorly,
because they perform additional processing.
2. SELECTs that use an index to select migrated or chained rows must perform
additional I/Os.
Detection: Migrated and chained rows in a table or cluster can be identified by using the ANALYZE
command with the LIST CHAINED ROWS option. This command collects information about each
migrated or chained row and places this information into a specified output table. To create the table
that holds the chained rows,
execute script UTLCHAIN.SQL.
SQL> ANALYZE TABLE scott.emp LIST CHAINED ROWS;
SQL> SELECT * FROM chained_rows;
You can also detect migrated and chained rows by checking the table fetch continued row statistic in
the v$sysstat view.
SQL> SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';
Although migration and chaining are two different things, internally they are represented by Oracle as
one. When detecting migration and chaining of rows you should analyze carefully what you are
dealing with.
What is Ora-01555 - Snapshot Too Old error and how do you avoid it?
1. Increase the size of the rollback segment (or, with automatic undo management, the undo
tablespace and UNDO_RETENTION).
2. Process a range of data rather than the whole table.
3. Add a big rollback segment and assign your transaction to it.
4. Avoid setting OPTIMAL, which can cause the rollback segment to shrink during the life of the query.
5. Avoid frequent commits within the loop of a long-running query.
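With automatic undo management, the usual remedies can be sketched as follows (the retention value, file name and size are illustrative):

```sql
ALTER SYSTEM SET undo_retention = 3600;  -- ask Oracle to keep undo for at least one hour

ALTER DATABASE DATAFILE '/u01/oradata/undotbs01.dbf' RESIZE 2G;  -- give undo more room
```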



What is a locally Managed Tablespace?
A Locally Managed Tablespace is a tablespace that manages its own extents maintaining a bitmap in
each data file to keep track of the free or used status of blocks in that data file. Each bit in the bitmap
corresponds to a block or a group of blocks. When the extents are allocated or freed for reuse, Oracle
changes the bitmap values to show the new status of the blocks. These changes do not generate
rollback information because they do not update tables in the data dictionary (except for tablespace
quota information), unlike the default method of Dictionary - Managed Tablespaces.
Following are the major advantages of locally managed tablespaces
Reduced contention on data dictionary tables
No rollback generated
No coalescing required
Reduced recursive space management.
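A locally managed tablespace is created with the EXTENT MANAGEMENT LOCAL clause; a sketch with illustrative names and sizes:

```sql
CREATE TABLESPACE lmt_data
  DATAFILE '/u01/oradata/lmt_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;  -- or AUTOALLOCATE for system-chosen extent sizes
```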
Can you audit SELECT statements?
Yes, we can audit the select statements. Check out the below example:
SQL> show parameter audit
NAME TYPE VALUE

audit_file_dest string E:\ORACLE\PRODUCT\10.2.0\DB_2\
ADMIN\SRK\ADUMP
audit_sys_operations boolean FALSE
audit_trail string NONE
SQL> begin
       dbms_fga.add_policy (object_schema   => 'SCOTT',
                            object_name     => 'EMP2',
                            policy_name     => 'EMP_AUDIT',
                            statement_types => 'SELECT');
     end;
     /
PL/SQL procedure successfully completed.
SQL>select * from dba_fga_audit_trail;
no rows selected
In HR schema:
SQL> create table bankim(
name varchar2 (10),
roll number (20));
Table created.
SQL> insert into bankim values ('bankim', 10);
1 row created.
SQL> insert into bankim values ('bankim2', 20);
1 row created.
SQL> select * from bankim;
NAME ROLL
- -
bankim 10
bankim2 20
SQL> select name from bankim;
NAME
-
bankim
bankim2
In sys schema:

SQL>set head off
SQL> select sql_text from dba_fga_audit_trail;
select count(*) from emp2
select * from emp2
select * from emp3
select count(*) from bankim
select * from bankim
select name from bankim
What does DBMS_FGA package do?
The dbms_fga package is the central mechanism for Fine-Grained Auditing (FGA); all the FGA APIs are
defined in this package. Typically, a user other than SYS is given the responsibility
of maintaining these policies. With the convention followed earlier, we will go with the user
SECUSER, who is entrusted with much of the security features. The following statement grants the
user SECUSER enough authority to create and maintain the auditing facility.
Grant execute on dbms_fga to secuser;
The biggest problem with this package is that the polices are not like regular objects with owners.
While a user with execute permission on this package can create policies, he or she can drop policies
created by another user, too. This makes it extremely important to secure this package and limit the
use to only a few users who are called to define the policies, such as SECUSER, a special user used
in examples.
What is Cost Based Optimization?
The CBO is used to design an execution plan for SQL statement. The CBO takes an SQL statement
and tries to weigh different ways (plan) to execute it. It assigns a cost to each plan and chooses the
plan with smallest cost.
The cost is calculated roughly as: physical I/O + logical I/O / 1000 + network I/O.
How often you should collect statistics for a table?
The CBO needs statistics in order to assess the cost of the different access plans. These statistics
include:
Size of tables, Size of indexes, number of rows in the tables, number of distinct keys in an index,
number of levels in a B* index, average number of blocks for a value, average number of leaf blocks
in an index
These statistics can be gathered with dbms_stats and the monitoring feature.
How do you collect statistics for a table, schema and Database?
Statistics are gathered using the DBMS_STATS package. The DBMS_STATS package can gather
statistics on table and indexes, and well as individual columns and partitions of tables. When you
generate statistics for a table, column, or index, if the data dictionary already contains statistics for the
object, then Oracle updates the existing statistics. The older statistics are saved and can be restored
later if necessary. When statistics are updated for a database object, Oracle invalidates any currently
parsed SQL statements that access the object. The next time such a statement executes, the
statement is re-parsed and the optimizer automatically chooses a new execution plan based on the
new statistics.
Collect Statistics on Table Level
sqlplus scott/tiger
exec dbms_stats.gather_table_stats ( -
ownname => 'SCOTT', -
tabname => 'EMP', -
estimate_percent => dbms_stats.auto_sample_size, -
method_opt => 'for all columns size auto', -
cascade => true, -
degree => 5 - )
/

Collect Statistics on Schema Level
sqlplus scott/tiger
exec dbms_stats.gather_schema_stats ( -
ownname => 'SCOTT', -
options => 'GATHER', -
estimate_percent => dbms_stats.auto_sample_size, -
method_opt => 'for all columns size auto', -
cascade => true, -
degree => 5 - )


Collect Statistics on Other Levels
DBMS_STATS can collect optimizer statistics on the following levels, see Oracle Manual
GATHER_DATABASE_STATS
GATHER_DICTIONARY_STATS
GATHER_FIXED_OBJECTS_STATS
GATHER_INDEX_STATS
GATHER_SCHEMA_STATS
GATHER_SYSTEM_STATS
GATHER_TABLE_STATS
Can you make collection of Statistics for tables automatic?
Yes, you can schedule statistics collection, but in some situations automatic statistics gathering may not
be adequate, particularly for databases whose objects are modified very frequently. Because the
automatic statistics gathering runs during an overnight batch window, the statistics on tables which are
significantly modified during the day may become stale.
There may be two scenarios in this case:
Volatile tables that are being deleted or truncated and rebuilt during the course of the day.
Objects which are the target of large bulk loads which add 10% or more to the object's total size.
So you may wish to manually gather statistics on those objects in order for the optimizer to choose the
best execution plan. There are two ways to gather statistics:
1. Using DBMS_STATS package.
2. Using ANALYZE command
How can you use ANALYZE statement to collect statistics?
ANALYZE TABLE emp ESTIMATE STATISTICS FOR ALL COLUMNS;
ANALYZE INDEX inv_product_ix VALIDATE STRUCTURE;
ANALYZE TABLE customers VALIDATE REF UPDATE;
ANALYZE TABLE orders LIST CHAINED ROWS INTO chained_rows;
ANALYZE TABLE customers VALIDATE STRUCTURE ONLINE;
To delete statistics:
ANALYZE TABLE orders DELETE STATISTICS;
To get the analyze details:
SELECT owner_name, table_name, head_rowid, analyze_timestamp FROM chained_rows;
On which columns you should create Indexes?
The following list gives guidelines in choosing columns to index:
You should create indexes on columns that are used frequently in WHERE clauses.
You should create indexes on columns that are used frequently to join tables.
You should create indexes on columns that are used frequently in ORDER BY clauses.
You should create indexes on columns that have few duplicate values (high selectivity) or unique
values in the table.

You should not create indexes on small tables (tables that use only a few blocks) because a
full table scan may be faster than an indexed query.
If possible, choose a primary key that orders the rows in the most appropriate order.
If only one column of the concatenated index is used frequently in WHERE clauses, place
that column first in the CREATE INDEX statement.
If more than one column in a concatenated index is used frequently in WHERE clauses, place
the most selective column first in the CREATE INDEX statement.
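Applying the last two guidelines, a concatenated index might be sketched as follows (the table and column names are assumptions for illustration):

```sql
-- dept_id is assumed to be the more selective, more frequently filtered column,
-- so it is placed first in the concatenated index
CREATE INDEX emp_dept_hire_ix ON employees (dept_id, hire_date);
```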
What type of Indexes is available in Oracle?
B-tree indexes: the default and the most common.
B-tree cluster indexes: defined specifically for cluster.
Hash cluster indexes: defined specifically for a hash cluster.
Global and local indexes: relate to partitioned tables and indexes.
Reverse key indexes: most useful for Oracle Real Application Clusters.
Bitmap indexes: compact; work best for columns with a small set of values
Function-based indexes: contain the pre-computed value of a function/expression.
Domain indexes: specific to an application or cartridge.
What is B-Tree Index?
B-Tree is an indexing technique most commonly used in databases and file systems, where pointers
to data are placed in a balanced tree structure so that any data can be accessed in an equal time
frame. It is also a tree data structure that keeps data sorted, so that searching, inserting and deleting
can be done in logarithmic amortized time.
A table has few rows. Should you create indexes on this table?
You should not create indexes on small tables (tables that use only a few blocks) because a full table
scan may be faster than an indexed query.
A column has many repeated values. Which type of index should you create on this
column?
A B-tree index is suitable if the column being indexed has high cardinality (many distinct values). For a
column with many repeated values (low cardinality) a bitmap index is very useful, but bitmap indexes
are expensive to maintain under heavy DML.
When should you rebuild indexes?
There is no rule of thumb for when you should rebuild an index. According to experts, it depends on
your database situation:
When the data in the index is sparse (lots of holes due to deletes or updates) and your queries are
usually range based, or if BLEVEL > 3 (see DBA_INDEXES), then consider rebuilding the index.
Keep in mind that while an index is being rebuilt, database performance can degrade.
In fact, a B-tree index can never become unbalanced. B-tree performance is good for both small and
large tables and does not degrade with the growth of the table.
Can you build indexes online?
Yes, we can build index online. It allows performing DML operation on the base table during index
creation. You can use the statements:
CREATE INDEX ONLINE and DROP INDEX ONLINE.
ALTER INDEX REBUILD ONLINE is used to rebuild the index online.
A table lock is required on the index's base table at the start of the CREATE or REBUILD process to
guarantee data dictionary consistency, and another lock at the end of the process is required to merge
changes into the final index structure.
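For example (the index and column names are illustrative):

```sql
CREATE INDEX emp_name_ix ON emp (ename) ONLINE;  -- DML on emp is allowed during the build

ALTER INDEX emp_name_ix REBUILD ONLINE;          -- rebuild without blocking DML
```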
A table is created with the following setting
storage (initial 200k
next 200k
minextents 2
maxextents 100
pctincrease 40)

What will be size of 4th extent?
Percent Increase allows the segment to grow at an increasing rate.
The first two extents will be of a size determined by the Initial and Next parameter (200k)
The third extent will be 1 + PCTINCREASE/100 times the second extent (1.4*200=280k).
AND the 4th extent will be 1 + PCTINCREASE/100 times the third extent (1.4*280=392k!!!) and so
on...
Can you Redefine a table Online?
Yes. We can perform online table redefinition with the Enterprise Manager Reorganize Objects wizard
or with the DBMS_REDEFINITION package.
It provides a mechanism for making table structure modifications without significantly affecting the
availability of the table. When a table is redefined online, it is accessible to both queries and DML
during the redefinition process.
Purpose for Table Redefinition
Add, remove, or rename columns from a table
Converting a non-partitioned table to a partitioned table and vice versa
Switching a heap table to an index organized and vice versa
Modifying storage parameters
Adding or removing parallel support
Reorganize (defragmenting) a table
Transform data in a table
Restrictions for Table Redefinition:
One cannot redefine Materialized Views (MViews) and tables with MViews or MView Logs defined on
them.
One cannot redefine Temporary and Clustered Tables
One cannot redefine tables with BFILE, LONG or LONG RAW columns
One cannot redefine tables belonging to SYS or SYSTEM
One cannot redefine Object tables
Table redefinition cannot be done in NOLOGGING mode (watch out for heavy archiving)
Cannot be used to add or remove rows from a table
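The DBMS_REDEFINITION flow can be sketched as follows; the schema and table names are illustrative, and the interim table EMP_INTERIM (with the desired new structure) is assumed to exist. Dependent objects such as indexes and triggers would also normally be copied with COPY_TABLE_DEPENDENTS before finishing:

```sql
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'EMP');                   -- check eligibility
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INTERIM');  -- begin, load interim table
EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'EMP', 'EMP_INTERIM'); -- apply interim DML changes
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INTERIM'); -- swap the definitions
```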
Can you assign Priority to users?
Yes, we can do this through the Resource Manager. The Database Resource Manager gives database
administrators more control over resource management decisions, so that resource allocation can be
aligned with an enterprise's business objectives.
With Oracle database Resource Manager an administrator can:
Guarantee certain users a minimum amount of processing resources regardless of the load
on the system and the number of users
Distribute available processing resources by allocating percentages of CPU time to different
users and applications.
Limit the degree of parallelism of any operation performed by members of a group of users
Create an active session pool. This pool consists of a specified maximum number of user
sessions allowed to be concurrently active within a group of users. Additional sessions beyond the
maximum are queued for execution, but you can specify a timeout period, after which queued jobs
terminate.
Allow automatic switching of users from one group to another group based on
administrator-defined criteria. If a member of a particular group of users creates a session that runs for
longer than a specified amount of time, that session can be automatically switched to another group of
users with different resource requirements.
Prevent the execution of operations that are estimated to run for a longer time than a
predefined limit
Create an undo pool. This pool consists of the amount of undo space that can be consumed
by a group of users.

Configure an instance to use a particular method of allocating resources. You can
dynamically change the method, for example, from a daytime setup to a nighttime setup, without
having to shut down and restart the instance.

DBA Interview Questions with Answers Part6
Can one switch to another database user without a password?
Users normally use the "CONNECT" statement to connect from one database user to another.
However, DBAs can switch from one user to another without a password. Of course it is not advisable
to bypass Oracle's security, but look at this example:
SQL> CONNECT / as sysdba
SQL> SELECT password FROM dba_users WHERE username='SCOTT';
F894844C34402B67
SQL> ALTER USER scott IDENTIFIED BY anything;
SQL> CONNECT scott/anything
OK, we're in. Let's quickly change the password back before anybody notices.
SQL> ALTER USER scott IDENTIFIED BY VALUES 'F894844C34402B67';
User altered.
How do you delete duplicate rows in a table?
There is a several method to delete duplicate row from the table:
Method1:
DELETE FROM SHAAN A WHERE ROWID >
(SELECT min(rowid) FROM SHAAN B
WHERE A.EMPLOYEE_ID = B.EMPLOYEE_ID);
Method2:
delete from SHAAN t1
where exists (select 'x' from SHAAN t2
where t2.EMPLOYEE_ID = t1.EMPLOYEE_ID
and t2.rowid > t1.rowid);
Method3:
DELETE SHAAN
WHERE rowid IN

( SELECT LEAD(rowid) OVER
(PARTITION BY EMPLOYEE_ID ORDER BY NULL)
FROM SHAAN );
Method4:
delete from SHAAN where rowid not in
( select min(rowid)
from SHAAN group by EMPLOYEE_ID);
Method5:
SQL> create table table_name2 as select distinct * from table_name1;
SQL> drop table table_name1;
SQL> rename table_name2 to table_name1;
What is Automatic Management of Segment Space setting?
Automatic Segment Space Management (ASSM) introduced in Oracle9i is an easier way of managing
space in a segment using bitmaps. It frees the DBA from having to set the parameters PCTUSED,
FREELISTS, and FREELIST GROUPS.
ASSM can be specified only with the locally managed tablespaces (LMT). The CREATE
TABLESPACE statement has a new clause SEGMENT SPACE MANAGEMENT. Oracle uses
bitmaps to manage the free space. A bitmap, in this case, is a map that describes the status of each
data block within a segment with respect to the amount of space in the block available for inserting
rows. As more or less space becomes available in a data block, its new state is reflected in the
bitmap.
CREATE TABLESPACE myts DATAFILE '/oradata/mysid/myts01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 2M
SEGMENT SPACE MANAGEMENT AUTO;
What is COMPRESS and CONSISTENT setting in EXPORT utility?
If COMPRESS=Y, the INITIAL storage parameter is set to the total size of all extents allocated for the
object. The change takes effect only when the object is imported.
Setting CONSISTENT=Y exports all tables and references in a consistent state. This slows the
export, as rollback space is used. If CONSISTENT=N and a record is modified during the export, the
data will become inconsistent.


What is the difference between Direct Path and Convention Path loading?
By default, SQL*Loader uses the conventional path to load data. This method competes equally with
all other Oracle processes for buffer resources, which can slow the load. A direct path load eliminates
much of the Oracle database overhead by formatting Oracle data blocks and writing the data blocks
directly to the database files. If load speed is most important to you, you should use a direct path load
because it is faster.
What is an Index Organized Table?
An index-organized table (IOT) is a type of table that stores data in a B*Tree index structure. Normal
relational tables, called heap-organized tables, store rows in any order (unsorted).
CREATE TABLE my_iot (id INTEGER PRIMARY KEY, value VARCHAR2 (50)) ORGANIZATION
INDEX;
What are a Global Index and Local Index?
When you create a partitioned table, you should create an index on the table. The index may be
partitioned according to the same range values that were used to partition the table. The LOCAL keyword
in the index partition clause tells Oracle to create a separate index for each partition of the table.
The GLOBAL clause in the CREATE INDEX command allows you to create a non-partitioned index or to specify
ranges for the index values that are different from the ranges for the table partitions. Local indexes
may be easier to manage than global indexes; however, global indexes may perform uniqueness
checks faster than local (partitioned) indexes do.
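A hedged sketch, assuming a range-partitioned table named sales (all names are hypothetical):

```sql
-- local index: Oracle creates one index partition per table partition
CREATE INDEX sales_local_idx ON sales (sale_date) LOCAL;

-- global index: partitioned by its own ranges, independent of the table partitions
CREATE INDEX sales_global_idx ON sales (customer_id)
GLOBAL PARTITION BY RANGE (customer_id)
  (PARTITION p1   VALUES LESS THAN (100000),
   PARTITION pmax VALUES LESS THAN (MAXVALUE));
```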
What is difference between Multithreaded/Shared Server and Dedicated Server?
Oracle Database creates server processes to handle the requests of user processes connected to an
instance.
A dedicated server process, which services only one user process
A shared server process, which can service multiple user processes
Your database is always enabled to allow dedicated server processes, but you must specifically
configure and enable shared server by setting one or more initialization parameters.
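For example, shared server could be enabled with parameters along these lines (the values are illustrative):

```sql
ALTER SYSTEM SET SHARED_SERVERS = 5;
ALTER SYSTEM SET DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=2)';
```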
Can you import objects from Oracle ver. 7.3 to 9i?
Yes. An export file created with a lower-version Export utility (such as 7.3) can generally be
imported into a higher-version database (such as 9i) using the higher version's Import utility. The
reverse direction, importing into a lower version, requires taking the export with the lower
version's Export utility.
How do you move tables from one tablespace to another tablespace?
Method 1:
Export the table, drop the table, create the table definition in the new tablespace, and then import the
data (imp ignore=y).
Method 2:
Create a new table in the new tablespace with the "CREATE TABLE x AS SELECT * from y"
command:
CREATE TABLE temp_name TABLESPACE new_tablespace AS SELECT * FROM real_table;
Then drop the original table and rename the temporary table as the original:
DROP TABLE real_table;
RENAME temp_name TO real_table;
Note: After step #1 or #2 is done, be sure to recompile any procedures that may have been
invalidated by dropping the table. Prefer method #1, but #2 is easier if there are no indexes,
constraints, or triggers. If there are, you must manually recreate them.
Method 3:
If you are using Oracle 8i or above then simply use:
SQL>Alter table table_name move tablespace tablespace_name;
How do you see how much space is used and free in a tablespace?
SELECT * FROM SM$TS_FREE;
SELECT TABLESPACE_NAME, SUM(BYTES) FROM DBA_FREE_SPACE GROUP BY TABLESPACE_NAME;
Can a view be based on another view?
Yes, a view can be created from another view by directing its select query at the other view's data.
What happens if you do not specify the Dictionary option with the start option in LogMiner?
It is recommended that you specify a dictionary option. If you do not, LogMiner cannot translate
internal object identifiers and datatypes to object names and external data formats. Therefore, it
would return internal object IDs and present data as hex bytes. Additionally,
the MINE_VALUE and COLUMN_PRESENT functions cannot be used without a dictionary.
What is the Benefit and draw back of Continuous Mining?
The continuous mining option is useful if you are mining in the same instance that is generating the
redo logs. When you plan to use the continuous mining option, you only need to specify one archived
redo log before starting LogMiner. Then, when you start LogMiner, specify
the DBMS_LOGMNR.CONTINUOUS_MINE option, which directs LogMiner to automatically add and mine
subsequent archived redo logs and also the online catalog.
Continuous mining is not available in Real Application Clusters (RAC).
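A typical LogMiner session using the online catalog and continuous mining might look like this (the archived log name and the table name EMP are hypothetical):

```sql
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/arch/arch_100.arc', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);
SELECT sql_redo, sql_undo FROM V$LOGMNR_CONTENTS WHERE seg_name = 'EMP';
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```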
What is LogMiner and its Benefit?
LogMiner is a log analysis and recovery utility. You can use it to recover data from the Oracle redo
logs and archived log files. The Oracle LogMiner utility enables you to query redo logs through a SQL
interface. Redo logs contain information about the history of activity on a database.
Benefits of LogMiner:
1. Pinpointing when a logical corruption of the database occurred; for example, when a row is
accidentally deleted, LogMiner helps to recover the data with exact time-based or change-based recovery.
2. Performing table-specific undo operations to return a table to its original state. LogMiner
reconstructs the SQL statements in reverse order from that in which they were executed.
3. It helps in performance tuning and capacity planning. You can determine which tables get the
most updates and inserts. That information provides a historical perspective on disk access statistics,
which can be used for tuning purposes.
4. Performing post-auditing; LogMiner can be used to track any DML and DDL performed on the database in
the order they were executed.
What is Oracle DataGuard?
Oracle Data Guard is a tool that provides data protection and ensures disaster recovery for enterprise
data. It provides a comprehensive set of services that create, maintain, manage, and monitor one or
more standby databases to enable production Oracle databases to survive disasters and data
corruption. Data Guard maintains these standby databases as transactionally consistent copies of the
production database. Then, if the production database fails, Data Guard can switch any
standby database to the production role, minimizing the downtime associated with the outage. Data
Guard can be used with traditional backup, restoration, and cluster techniques to provide a high level
of data protection and data availability.
What is a Standby Database?
A standby database is a transactionally consistent copy of the primary database. Using a backup copy
of the primary database, you can create up to nine standby databases and incorporate them in a Data
Guard configuration. Once created, Data Guard automatically maintains each standby database by
transmitting redo data from the primary database and then applying the redo to the standby database.
Similar to a primary database, a standby database can be either a single-instance Oracle database or
an Oracle Real Application Clusters database. A standby database can be either a physical
standby database or a logical standby database:
Difference between Physical standby and Logical standby databases
A physical standby database provides a physically identical copy of the primary database on a
block-for-block basis. The database schema, including indexes, is the same. A physical standby database
is kept synchronized with the primary database through Redo Apply, which recovers the redo data
received from the primary database and applies the redo to the physical standby database.
A logical standby database contains the same logical information as the production database, although
the physical organization and structure of the data can be different. The logical standby database is
kept synchronized with the primary database through SQL Apply, which transforms the data in the
redo received from the primary database into SQL statements and then executes the SQL
statements on the standby database.
If you are going to setup standby database what will be your Choice Logical or Physical?
We need to keep a physical standby database in recovery mode in order to apply the archive logs
received from the primary database. We can open a physical standby database read-only and
make it available to application users (only SELECT is allowed during this period). While the
database is open in read-only mode, we cannot apply redo logs received from the primary
database.
We do not see such issues with a logical standby database. We can open the database in normal
mode and make it available to the users, and at the same time apply archived logs received from the
primary database.
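On a physical standby, the switch between the two modes is done roughly as follows (syntax per 9i/10g):

```sql
-- stop redo apply and open read-only:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;

-- return to applying redo (after restarting in the mount state):
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```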
If the primary database needs to support a pretty large user community for the OLTP system and a
pretty large reporting group, then it is better to use a logical standby rather than a physical
standby for the reporting workload.
What are the requirements needed before preparing standby database?
The OS architecture of the primary and standby servers must be the same.
The Oracle version of the standby database must be the same as that of the primary database.
The primary database must run in ARCHIVELOG mode.
The same hardware architecture is required on the primary and all standby sites.
The same OS version and release are not required on the primary and standby sites.
Each primary and standby database must have its own control file.
What are Failover and Switchover in case of dataguard?
Failover is the operation of bringing one of the standby databases online as the new primary database
when a failure occurs on the primary database and there is no possibility of recovering the primary
database in a timely manner. A switchover is used to handle planned maintenance on the primary
database. The main difference between a switchover operation and a failover operation is that a
switchover is performed while the primary database is still available, and it does not require a
flashback or re-installation of the original primary database. This allows the original primary
database to assume the role of standby database almost immediately. As a result, scheduled
maintenance can be performed more easily and frequently.
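In a physical standby configuration, the operations are typically initiated with statements along these lines (exact syntax varies by release):

```sql
-- switchover: first on the primary, then on the standby
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

-- failover: on the standby, finish applying available redo, then assume the primary role
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
```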
When you use WHERE clause and when you use HAVING clause?
The HAVING clause is used when you want to specify a condition on a group function, and it is written
after the GROUP BY clause. The WHERE clause is used when you want to specify a condition on
columns or single-row functions (but not group functions), and it is written before the GROUP BY
clause, if one is used.
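For example, using the familiar SCOTT.EMP table:

```sql
SELECT deptno, AVG(sal)
FROM   emp
WHERE  job <> 'PRESIDENT'     -- WHERE filters rows before grouping
GROUP  BY deptno
HAVING AVG(sal) > 2000;       -- HAVING filters groups after grouping
```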
What is a cursor and difference between an implicit & an explicit cursor?
A cursor is a handle used in a PL/SQL block to fetch one or more rows. PL/SQL declares a
cursor implicitly for all SQL data manipulation statements, including queries that return only one row.
However, for queries that return more than one row you must declare an explicit cursor or use a cursor
FOR loop.
An explicit cursor is a cursor in which the cursor name is explicitly assigned to a SELECT statement via
the CURSOR...IS statement, and the program controls its Declare, Open, Fetch and Close. Explicit
cursors are used to process multi-row SELECT statements; an implicit cursor is used
to process INSERT, UPDATE, DELETE and single-row SELECT...INTO statements.
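A minimal sketch of both kinds of cursor, again assuming SCOTT.EMP:

```sql
DECLARE
   CURSOR c_emp IS SELECT ename FROM emp;   -- explicit cursor
   v_name emp.ename%TYPE;
BEGIN
   OPEN c_emp;
   LOOP
      FETCH c_emp INTO v_name;
      EXIT WHEN c_emp%NOTFOUND;
      DBMS_OUTPUT.PUT_LINE(v_name);
   END LOOP;
   CLOSE c_emp;

   UPDATE emp SET sal = sal;                -- implicit cursor
   DBMS_OUTPUT.PUT_LINE(SQL%ROWCOUNT || ' rows touched');
END;
/
```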
Explain the difference between a data block, an extent and a segment.
A data block is the smallest unit of logical storage for a database object. As objects grow they take
chunks of additional storage that are composed of contiguous data blocks. These groupings of
contiguous data blocks are called extents. All the extents that an object takes when grouped together
are considered the segment of the database object.

You have just had to restore from backup and do not have any control files. How would you go
about bringing up this database?
I would create a text-based backup control file, stipulating where on disk all the data files were, and
then issue the RECOVER command with the USING BACKUP CONTROLFILE clause.
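A sketch of the sequence (the CREATE CONTROLFILE statement must list every log file and datafile; those clauses are omitted here):

```sql
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "MYDB" RESETLOGS ARCHIVELOG
   -- LOGFILE and DATAFILE clauses naming each file on disk go here
;
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
ALTER DATABASE OPEN RESETLOGS;
```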
A table is classified as a parent table and you want to drop and re-create it. How would you do
this without affecting the children tables?
Disable the foreign key constraint to the parent, drop the table, re-create the table, and enable the
foreign key constraint.
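A sketch, assuming a child table with a foreign key constraint named fk_child_parent (all names are hypothetical):

```sql
ALTER TABLE child DISABLE CONSTRAINT fk_child_parent;
DROP TABLE parent;
CREATE TABLE parent (id NUMBER PRIMARY KEY);   -- recreate the definition, then reload data
ALTER TABLE child ENABLE CONSTRAINT fk_child_parent;
```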
How to unregister a database from the RMAN catalog
First we start up RMAN with a connection to the catalog and the target, making a note of the DBID in
the banner:
C:\>rman catalog=rman/rman@shaan target=HRMS/password@orcl3
connected to target database: W2K1 (DBID=691421794)
connected to recovery catalog database
Note the DBID from here. Next we list and delete any backupset recorded in the repository:
RMAN> LIST BACKUP SUMMARY;
RMAN> DELETE BACKUP DEVICE TYPE SBT;
RMAN> DELETE BACKUP DEVICE TYPE DISK;
Next we connect to the RMAN catalog owner using SQL*Plus and issue the following statement:
SQL> CONNECT rman/rman@shaan
SQL> SELECT db_key, db_id FROM db
WHERE db_id = 691421794;
DB_KEY DB_ID
---------- ----------
1 691421794
The resulting key and id can then be used to unregister the database:
SQL> EXECUTE dbms_rcvcat.unregisterdatabase(1, 691421794);
PL/SQL procedure successfully completed.

DBA Interview Questions with Answers Part7
My database was terminated while in BACKUP MODE, do I need to recover?
If a database was terminated while one of its tablespaces was in BACKUP MODE (ALTER
TABLESPACE xyz BEGIN BACKUP;), it will tell you that media recovery is required when
you try to restart the database. The DBA is then required to recover the database and
apply all archived logs to the database. However, from Oracle 7.2, one can simply take
the individual datafiles out of backup mode and restart the database.
SQL> ALTER DATABASE DATAFILE 'C:\PATH\FILENAME' END BACKUP;
One can select from V$BACKUP to see which datafiles are in backup mode. From Oracle9i
onwards, the following command can be used to take all of the datafiles out of
hot backup mode:
SQL>ALTER DATABASE END BACKUP;
Note: This command must be issued when the database is mounted, but not yet opened.
Does Oracle write to data files in begin/hot backup mode?
When a tablespace is in backup mode, Oracle stops updating its file headers, but
continues to write to the data files. When in backup mode, Oracle writes complete
changed blocks to the redo log files. Because of this, you see increased log activity and
archiving during on-line backups. To reduce this overhead, switch to RMAN backups.
Difference Consistent and Inconsistent Backup
A backup taken in the shutdown state, with all files as of the same point in time, is referred
to as consistent. Unlike an inconsistent backup, a consistent whole database backup does not
require recovery after it is restored; all headers of datafiles belonging to writable
tablespaces have the same SCN. These datafiles do not have any changes past this checkpoint
SCN, and the SCN in each datafile header matches the controlfile checkpoint exactly.
An inconsistent backup is a backup of one or more database files that you make while
the database is open or after the database has shut down abnormally. This means that
the files in the backup contain data taken from different points in time. This can occur
because the datafiles are being modified as backups are being taken. None of the
above-mentioned properties hold here. Recovery (applying the archived and online
redo logs) is needed in order to make the backup consistent.
Difference between restoring and recovering?
Restoring involves copying backup files from secondary storage (backup media) to disk.
This can be done to replace damaged files or to copy/move a database to a new location.
Recovery is the process of applying redo logs to the database to roll it forward. One can
roll-forward until a specific point-in-time (before the disaster occurred), or roll-forward
until the last transaction recorded in the log files.
Difference between Complete and Incomplete Recovery?
Complete recovery involves using redo data or incremental backups combined with a
backup of a database, tablespace, or datafile to update it to the most current point in
time. It is called complete because Oracle applies all of the redo changes contained in
the archived and online logs to the backup. Typically, you perform complete media
recovery after a media failure damages datafiles or the control file.
In incomplete recovery, or point-in-time recovery, we do not apply all of the redo records
generated after the most recent backup, for example when an archived redo log is missing.
Because you are not completely recovering the database to the most current time, you
must tell Oracle when to terminate recovery. You can perform the following types of
media recovery.
Time based Recovery, Cancel based Recovery, Change based Recovery, Log sequence
Recovery
What happens when we open the database with Resetlogs option after
incomplete recovery?
The RESETLOGS operation creates a new incarnation of the database, in other words a
database with a new stream of log sequence numbers starting with log sequence 1.
Before using the OPEN RESETLOGS command to open the database in read/write mode
after an incomplete recovery, it is a good idea to first open the database in read-only
mode and inspect the data to make sure that the database was recovered to the correct
point. If the recovery was done to the wrong point, then it is easier to re-run the
recovery if no OPEN RESETLOGS has been done.
Difference between online and offline backups?
A hot (or on-line) backup is a backup performed while the database is open and available
for use (read and write activity). Except for Oracle exports, one can only do on-line
backups when the database is in ARCHIVELOG mode. A cold (or off-line) backup is a
backup performed while the database is off-line and unavailable to its users. Cold
backups can be taken regardless of whether the database is in ARCHIVELOG or NOARCHIVELOG
mode.
It is easier to restore from off-line backups, as no recovery (from archived logs) would be
required to make the database consistent. Nevertheless, on-line backups are less
disruptive and don't require database downtime.
Point-in-time recovery (regardless if you do on-line or off-line backups) is only available
when the database is in ARCHIVELOG mode.
What is the difference between Views and Materialized Views in Oracle?
Views evaluate the data in the tables underlying the view definition at the time the view
is queried. It is a logical view of your tables, with no data stored anywhere else. The
upside of a view is that it will always return the latest data to you. The downside of a
view is that its performance depends on how good a select statement the view is based
on. If the select statement used by the view joins many tables, or uses joins based on
non-indexed columns, the view could perform poorly.
Materialized views are similar to regular views, in that they are a logical view of your
data (based on a select statement), however, the underlying query result set has been
saved to a table. The upside of this is that when you query a materialized view, you are
querying a table, which may also be indexed. Materialized views have several other
advantages over simple views.
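For example (table and refresh options illustrative; a FAST refresh would additionally require a materialized view log):

```sql
CREATE MATERIALIZED VIEW mv_dept_sal
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS SELECT deptno, SUM(sal) AS total_sal FROM emp GROUP BY deptno;
```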
What happens when you set CONTROL_FILE_RECORD_KEEP_TIME to 0
Never set CONTROL_FILE_RECORD_KEEP_TIME to 0. If you do, then backup records
may be overwritten in the control file before RMAN is able to add them to the catalog.
The CONTROL_FILE_RECORD_KEEP_TIME initialization parameter determines the minimum
number of days that records are retained in the control file before they are candidates
for being overwritten.
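The current value can be checked and changed as follows (the value 14 is only an example):

```sql
SHOW PARAMETER control_file_record_keep_time
ALTER SYSTEM SET CONTROL_FILE_RECORD_KEEP_TIME = 14;
```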
How to find the last refresh of your database (when the recovery with resetlogs
performed)?
If the cloned database has been opened with the RESETLOGS option, check
V$DATABASE.RESETLOGS_TIME. If V$DATABASE.CREATED is not equal to
V$DATABASE.RESETLOGS_TIME, there is a possibility that the database was opened with
the RESETLOGS option. This is not guaranteed, but it is worth a shot.
Command to find files created a day before
find . -type f -mtime 1 -exec ls -lth {} \;
Initially Flashback Database was enabled but noticed Flashback was disabled
automatically long time ago. What is the Issue?
Reason:
It could be because the flashback (flash recovery) area became 100% full. Once the flashback
area is 100% full, Oracle logs in the alert log that Flashback will be disabled, and it
automatically turns Flashback off without user intervention.
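Usage of the flash recovery area can be checked with the 10g views:

```sql
SELECT * FROM V$RECOVERY_FILE_DEST;
SELECT * FROM V$FLASH_RECOVERY_AREA_USAGE;
```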
How can I check if there is anything rolling back?
It depends on how you killed the process. If you issued ALTER SYSTEM KILL SESSION, you
should be able to look at the USED_UBLK column in V$TRANSACTION to get an estimate of the
rollback being done. If you killed the server process in the OS and PMON is recovering the
transaction, you can look at the V$FAST_START_TRANSACTIONS view to get the estimate.
How do you see how many instances are running?
In Linux, Unix the command: ps -ef|grep pmon
In Windows: services.msc
Which is more efficient Incremental Backups using RMAN or Incremental
Export?
RMAN incremental backups. They back up only changed blocks and can be restored and
recovered with standard media recovery, which an incremental export cannot do.
The current logfile gets damaged. What you can do now?
Once the current redo log file is damaged, the instance aborts and the database needs
recovery up to the undamaged part; only the undamaged part can be recovered. Here the
DBA must perform incomplete recovery, either to a point in time or to a specified SCN.
Where should the tuning effort be directed?
Consider the following areas for tuning in order to increase the performance of the DB.
Application Tuning:
Experience shows that approximately 80% of all Oracle system performance problems
are resolved by coding optimal SQL. Also consider proper scheduling of batch tasks after
peak working hours.
Memory Tuning:
Properly size your database buffers (shared pool, buffer cache, log buffer, etc) by
looking at your buffer hit ratios. Pin large objects into memory to prevent frequent
reloads.
Disk I/O Tuning:
Database files need to be properly sized and placed to provide maximum disk
subsystem throughput. Also look for frequent disk sorts, full table scans, missing
indexes, row chaining, data fragmentation, etc.
Eliminate Database Contention:
Study database locks, latches and wait events carefully and eliminate where possible.
Tune the Operating System:
Monitor and tune operating system CPU, I/O and memory utilization. For more
information, read the related Oracle FAQ dealing with your specific operating system.
What are the common Import/ Export problems?
ORA-00001: Unique constraint (...) violated - You are importing duplicate rows. Use
IGNORE=NO to skip tables that already exist (imp will give an error if the object is re-
created).
ORA-01555: Snapshot too old - Ask your users to STOP working while you are
exporting or use parameter CONSISTENT=NO
ORA-01562: Failed to extend rollback segment - Create bigger rollback segments or set
parameter COMMIT=Y while importing
IMP-00015: Statement failed ... object already exists... - Use the IGNORE=Y import
parameter to ignore these errors, but be careful as you might end up with duplicate
rows.
If by mistake a user drops or truncates a table, what is the best method to
recover it?
There are several possible methods through RMAN, such as:
Restore and recover the primary database to a point in time before the drop. This is an
extreme measure for one table as the entire database goes back in time.
Restore and recover the tablespace to a point in time before the drop. This is a better
option, but again, it takes the entire tablespace back in time.
Restore and recover a subset of the database as a DUMMY database to export the table
data and import it into the primary database. This is the best option as only the dropped
table goes back in time to before the drop.
How to find running jobs in oracle database
select sid, job,instance from dba_jobs_running;
select sid, serial#, machine, status, osuser, username from v$session where
username is not null; --all active users
select owner, job_name from DBA_SCHEDULER_RUNNING_JOBS; --for oracle 10g
How to find long running jobs in oracle database
select username, to_char(start_time, 'hh24:mi:ss dd/mm/yy') started, time_remaining
remaining, message from v$session_longops
where time_remaining > 0 order by time_remaining desc;
Login without password knowledge
This is not a genuine approach; consider it only as a practice exercise.
SQL> CONNECT / as sysdba
Connected.
SQL> SELECT password FROM dba_users WHERE username='SCOTT';
PASSWORD
--------------- ---------------
F894844C34402B67
SQL> ALTER USER scott IDENTIFIED BY anything;
User altered.
SQL> CONNECT scott/anything
Connected.
OK, we're in. Let's quickly change the password back before anybody notices.
SQL> ALTER USER scott IDENTIFIED BY VALUES 'F894844C34402B67';
User altered.
While applying a CPU patch, why do we need to update the Oracle inventory?
Because when you apply the CPU it updates the Oracle binaries.

DBA Interview Questions with Answers Part8
Difference between locks and latches
Locks are used to protect data or resources from simultaneous use by multiple sessions,
which might set them in an inconsistent state. Locks are an external mechanism: a user
can also set locks on objects by using various Oracle statements.
Latches serve the same purpose but work at an internal level. Latches are used to protect
and control access to internal data structures such as various SGA buffers. They are
handled and maintained by Oracle and we cannot access or set them.
Setting the audit_trail parameter in the database to db, it generates lot of
records in sys.aud$ table. Can you suggest any method to overcome this issue?
1. When you enable auditing it audits every single activity on the database, so it may
lead to performance problems.
You may have to disable each unneeded audit option (with NOAUDIT) after you set the
parameter, and then enable options one by one based on the requirement.
2. You should monitor the growth of SYS.AUD$ and archive it properly or maintain the
space.
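For example (the retention period is illustrative; the timestamp column name in SYS.AUD$ varies by release):

```sql
NOAUDIT ALL;   -- switch off the current statement auditing options
-- after archiving the rows you need, purge old records:
DELETE FROM sys.aud$ WHERE ntimestamp# < SYSDATE - 90;
```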
How to change the topnsql of AWR Snapshot in 10g
SQL> SELECT * FROM DBA_HIST_WR_CONTROL;
1898043910 +00 01:00:00.000000 +01 00:00:00.000000 DEFAULT
SQL> exec DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(topnsql => 30);
SQL> SELECT * FROM DBA_HIST_WR_CONTROL;
1898043910 +00 01:00:00.000000 +01 00:00:00.000000 30
How to detect whos causing excessive redo generation
SELECT S1.SID, S1.SERIAL#, S1.USERNAME, S1.PROGRAM, T1.USED_UBLK, T1.USED_UREC
FROM V$SESSION S1, V$TRANSACTION T1 WHERE S1.TADDR = T1.ADDR
ORDER BY 5 DESC, 6 DESC, 1, 2, 3, 4;
Tracking undo generation by all session
SELECT S1.SID, S1.USERNAME, R1.NAME, T1.START_TIME, T1.USED_UBLK, T1.USED_UREC
FROM V$SESSION S1, V$TRANSACTION T1, V$ROLLNAME R1
WHERE T1.ADDR = S1.TADDR AND R1.USN = T1.XIDUSN;
Or you can collect Statistics from V$SESSTAT to AWR
How do you remove an SPFILE parameter (not change the value of, but actually
purge it outright)?
Use "ALTER SYSTEM RESET ..." (For database versions 9i and up)
Syntax:
ALTER SYSTEM RESET parameter_name SCOPE=SPFILE SID='sid|*';
For example:
ALTER SYSTEM RESET "_TRACE_FILES_PUBLIC" SCOPE=SPFILE SID='*';
NOTE: The SID='sid|*' argument is REQUIRED!
Can you use RMAN to recover RMAN?
Yes, you can!
Which situation Exist condition is better than IN
If the result of the subquery is small, then IN is typically more appropriate, whereas if the
result of the subquery is large, then EXISTS is more appropriate. EXISTS drives a scan of the
outer table and probes the subquery for each row, so it can make use of an index on the
subquery's table, while IN builds the subquery result first.
Is Oracle really quicker on Windows than Solaris?
I found in my experience that yes, Windows performed better on comparable hardware than just
about any UNIX box. I was working on Windows, but I once installed Solaris to run a test.
I found the Windows installations always outperformed the Solaris ones, both on initially
loading the pool cache and on subsequent runs. The test package was rather large (5000+
lines) and was used in a form to display customer details. On Solaris I was typically
getting an initial return time of 5 seconds; on Windows, typically 1 second. Even on
subsequent (i.e. cached) runs Windows outperformed Solaris. The parameter sizes
for the SGA were approximately the same, and the file systems used the conventional method. In
both cases the disk configuration was local.
What is Difference between DBname and instance_name?
A database is a set of files (data, redo, control and so on), whereas an instance is a set of
processes (SMON, PMON, DBWR, etc.) and a shared memory segment (SGA).
A database may be mounted and opened by many instances concurrently (Parallel Server/RAC).
An instance may mount and open any database; however, it may only
open a single database at any time. Therefore you need a unique database name for the set
of files, and a unique instance name for each instance.
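Both names can be seen from a running instance:

```sql
SHOW PARAMETER db_name
SHOW PARAMETER instance_name
SELECT name FROM v$database;
SELECT instance_name FROM v$instance;
```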

Does DBCA create instance while creating database?
DBCA does not create the instance; it creates the database (the set of files). The instance is
transient: do a shutdown and the instance is gone. On Windows, DBCA registers the necessary
services that can be used to start an instance when you want.
Is there any way to create database without DBCA?
Yes, you can use oradim directly (on Windows), or create the database manually with a
CREATE DATABASE script.
What's the difference between connections, sessions and processes?
A connection is a physical circuit between you and the database. A connection might be
one of many types, the most popular being DEDICATED server and SHARED server. Zero,
one or more sessions may be established over a given connection to the database, as
shown below with SQL*Plus. A process will be used by a session to execute statements.
Sometimes there is a one-to-one relationship between CONNECTION -> SESSION ->
PROCESS (eg: a normal dedicated server connection). Sometimes there is one connection
to many sessions (eg: with autotrace: one connection, two sessions, one
process). A process does not have to be dedicated to a specific connection or session,
however; for example, when using shared server (MTS), your SESSION will grab a
process from a pool of processes in order to execute a statement. When the call is over,
that process is released back to the pool of processes.
SQL>select username from v$session where username is not null;
you can see one session, me
SQL>select username, program from v$process;
you can see all of the backgrounds and my dedicated server...
Autotrace for statistics uses ANOTHER session so it can query up the stats for your
CURRENT session without impacting the STATS for that session!
SQL>select username from v$session where username is not null;
now you can see two session but...
SQL>select username, program from v$process;
Same 14 processes...
What about Fragmentation situation (LMT) in oracle 8i and up?
Fragmentation means that you have many small holes (regions of contiguous free space)
that are too small to be the next extent of any object. These holes of free space result
from dropping (or truncating) some objects, and the resulting free space cannot be
used by any other object in that tablespace. This is a direct result of using a pctincrease
that is not zero and having many weird-sized extents (every extent is a unique size and
shape). In Oracle 8i and above we are all using locally managed tablespaces. These
use either uniform sizing or the automatic allocation scheme. In either case it is almost
impossible to get into a situation where you have unusable free space.
To see if you suffer from fragmentation you can query DBA_FREE_SPACE (best to
do an ALTER TABLESPACE ... COALESCE first to ensure all contiguous free space is made
into one big free region). You would look for any free space that is smaller than the
smallest next extent size of any object in that tablespace. Check with the query below:
Select * from dba_free_space
where tablespace_name = 'T' and bytes <= ( select min(next_extent)
from dba_segments where tablespace_name = 'T') order by block_id;
Is there a way we can flush out a known data set from the database buffer
cache?
No, you don't; in real life the cache would never be empty. It is true that 10g introduced
ALTER SYSTEM FLUSH BUFFER_CACHE, but it is not really worthwhile: benchmarking
against an empty buffer cache is artificial and no more representative than what you are
currently doing.

What would be the best approach to benchmark the response time for a
particular query?
Run query q1 over and over (with many different inputs).
Run query q2 over and over (with many different inputs).
Discard the first couple of observations and the last couple.
Use the observations in the middle.
What is difference between Char and Varchar2 and which is better approach?
A CHAR datatype and a VARCHAR2 datatype are stored identically (eg: the word 'WORD'
stored in a CHAR(4) and a VARCHAR2(4) consumes exactly the same amount of space on
disk; both have leading byte counts).
The difference between a CHAR and a VARCHAR2 is that a CHAR(n) will ALWAYS be N
bytes long; it will be blank-padded upon insert to ensure this. A VARCHAR2(n), on the
other hand, will be 1 to N bytes long and will NOT be blank-padded. Using a CHAR on a
varying-width field can be a pain due to the comparison semantics of CHAR.
Consider the following examples:
SQL> create table t ( x char(10) );
Table created.
SQL> insert into t values ( 'Hello' );
1 row created.
SQL> select * from t where x = 'Hello';
X
----------
Hello
SQL> variable y varchar2(25)
SQL> exec :y := 'Hello'
PL/SQL procedure successfully completed.
SQL> select * from t where x = :y;
no rows selected
SQL> select * from t where x = rpad(:y,10);
X
----------
Hello
Notice how, when doing the search with a VARCHAR2 variable (almost every tool in the
world uses this type), we have to rpad() it to get a hit. If the field is in fact ALWAYS 10
bytes long, using a CHAR will not hurt -- HOWEVER, it will not help either.
Rman always shows date in DD-MON-YY format. How to set date format to
MM/DD/YYYY HH24:MI:SS in rman?
You can just set the NLS_DATE_FORMAT in the environment before going into RMAN:
export NLS_DATE_FORMAT='mm/dd/yyyy hh24:mi:ss'
In Rman list backup how do i get time column that shows me date and time
including seconds as generally it is showing only date.
Before connecting to the rman target, set the date format at the command prompt:
export NLS_DATE_FORMAT='dd-mon-yyyy hh24:mi:ss'    (Linux)
set NLS_DATE_FORMAT=dd-mon-yyyy hh24:mi:ss         (Windows)
then connect to the rman target:
rman target sys/oracle@orcl3 catalog rman/rman@shaan
RMAN> list backupset 10453;
Why not use O/S backups instead of RMAN?
There is nothing wrong with doing just OS backups. OS backups are just as valid as
RMAN backups. RMAN is a great tool but it is not the only way to do it. Many people still
prefer using a scripting tool of their choice, such as perl or ksh, to do this.

RMAN is good if you have lots of databases. The catalog it uses remembers lots of
details for you. You don't have as much to think about.
RMAN is good if you do not have good "paper work" skills in place. Using OS backups, it
is more or less up to you to remember where they are, what they are called and so on;
you have to do all of the bookkeeping RMAN would do.
RMAN provides incremental backups, something you cannot get without RMAN.
RMAN provides tablespace point in time recovery. You can do this without RMAN but you
have to do it by yourself and it can be rather convoluted.
RMAN is more integrated with OEM. If you do OS backups, you'll have to do everything
yourself. With RMAN you may have less scripting to develop, test and maintain.
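For comparison with an OS backup script, a minimal RMAN disk backup might look like the sketch below (hedged; the format string and channel name are illustrative):

```sql
RMAN> RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK FORMAT '/u01/backup/%U';
  BACKUP DATABASE PLUS ARCHIVELOG;
  RELEASE CHANNEL c1;
}
```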
What if the RMAN catalog/controlfile is damaged? What is the next step?
If you lose the rman catalog, you rebuild it from the controlfiles of the backed-up
databases; but with proper backup techniques applied to the catalog database itself,
you should not lose the rman catalog in the first place.
How to switch between Noarchivelog and archivelog in oracle 10g
connect "/ as sysdba"
alter system set log_archive_dest='......' scope=spfile;
shutdown immediate;
startup mount
alter database archivelog;
alter database open;
connect /
(Note: log_archive_start is obsolete in 10g; once the database is in archivelog mode,
archiving starts automatically.)
-and-
connect "/ as sysdba"
shutdown immediate
startup mount
alter database noarchivelog;
alter database open;
connect /
How to Update millions or records in a table?
If we had to update millions of records I would probably opt to NOT update.
I would more likely do:
CREATE TABLE new_table as select <do the update "here"> from old_table;
index new_table
grant on new table
add constraints on new_table
etc on new_table
drop table old_table
rename new_table to old_table;
You can do that using parallel query, with nologging on most operations generating very
little redo and no undo at all in a fraction of the time it would take to update the
data.
SQL>create table new_emp as select empno, LOWER(ename) ename, JOB,
MGR, HIREDATE, SAL, COMM, DEPTNO from emp;
SQL>drop table emp;
SQL>rename new_emp to emp;
How to convert database server sysdate to GMT date?
select sysdate,
       sysdate + (substr(tz_offset(dbtimezone),1,1)||'1')
               * to_dsinterval('0 '||substr(tz_offset(dbtimezone),2,5)||':00')
  from dual;
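A simpler alternative on 9i and later, assuming you only need the current GMT/UTC time rather than an arithmetic conversion of SYSDATE:

```sql
-- SYS_EXTRACT_UTC converts a TIMESTAMP WITH TIME ZONE value to UTC
SELECT SYS_EXTRACT_UTC(SYSTIMESTAMP) AS utc_now FROM dual;
```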


Interview question with Answer Part 9
What is the difference between to back up the current control file and to
backup up control file copy?
If you back up the current control file, you back up the control file currently open by the
instance, whereas if you back up a controlfile copy, you back up a copy of the control file
created either with the SVRMGRL/SQL command "alter database backup controlfile to ..."
or with the RMAN command "copy current controlfile to ...". In other words, the control
file copy is not the current controlfile. "backup current controlfile" creates a BACKUPSET
containing the controlfile, and you don't have to give the FILENAME, whereas "backup
controlfile copy <filename>" creates a BACKUPSET from a copy of the controlfile, and
you have to give the FILENAME.
How much of overhead in running BACKUP VALIDATE DATABASE and RESTORE
VALIDATE DATABASE commands to check for block corruptions using RMAN?
Can I run these commands anytime?
BACKUP VALIDATE reads the datafiles to check for corruption but writes no backup, and
RESTORE VALIDATE reads the backup pieces without restoring anything. Neither modifies
the live database (they are read-only operations), so you can run them at any time,
keeping in mind the extra I/O they generate.
Is there a way to force rman to use these obsolete backups or once it is marked
obsolete?
As per my understanding it is just a report; the backups are still there until you delete
them, so they can still be used.
Can I use the same snapshot controlfile to backup multiple databases(one after
another) running on the same server?
This file is only used temporarily, like a scratch file. Only one rman session can access
the snapshot controlfile at any time, so this would tend to serialize your backups if you
do that.
Why does not oracle keep RMAN info after recreating the controlfile?
When you recreate the controlfile from scratch, how do you expect CREATE CONTROLFILE
to "make up" the missing data? That would be like saying "I have dropped and recreated
my table and now it is empty": recreating from scratch means the contents will naturally
be gone. Use an rman recovery catalog to deal with this situation. It is just a suggestion.
What is the advantage of using PIPE in rman backups? In what circumstances
one would use PIPE to backup and restore?
It lets 3rd parties (anyone, really) build an alternative interface to RMAN, as it permits
anyone who can connect to an Oracle instance to control RMAN programmatically.
How To turn Debug Feature on in rman?
run {
allocate channel c1 type disk;
debug on;
}
RMAN> list backup of database;
Now you will see verbose debug output.
You can turn debug off at any time by issuing:
RMAN> debug off;





Assuming I have a "FULL" backup of users01.dbf containing employees table
that contains 1000 blocks of data. If I truncated employees table and then an
incremental level 1 backup of users tablespace is taken, will RMAN include
1000 blocks that once contained data in the incremental backup?
The blocks themselves were not written to; the only changes made by the truncate were
to the data dictionary (and file header). So no, RMAN won't see them as changed blocks,
since they were not changed.
Where should the catalog be created?
The recovery catalog to be used by Rman should be created in a separate database
other than the target database. The reason is that the target database will be shutdown
while datafiles are restored.
How many times does oracle ask before dropping a catalog?
The default is two times: once for the actual command and once more for confirmation.
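For example (the exact confirmation prompt text varies by version, so it is omitted here):

```sql
RMAN> DROP CATALOG;   -- first time: RMAN asks for confirmation
RMAN> DROP CATALOG;   -- second time: the catalog is actually dropped
```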
What are the various reports available with RMAN?
RMAN> list backup;
RMAN> list archivelog all;
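Beyond LIST, the REPORT command also produces useful reports; a few common ones are sketched below:

```sql
RMAN> LIST BACKUP SUMMARY;    -- one-line summary per backup set
RMAN> REPORT SCHEMA;          -- datafiles/tempfiles of the target database
RMAN> REPORT OBSOLETE;        -- backups obsolete per the retention policy
RMAN> REPORT NEED BACKUP;     -- files needing backup per the retention policy
```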
What is the use of snapshot controlfile in terms of RMAN backup?
Rman uses the snapshot controlfile as a way to get a read-consistent copy of the
controlfile; it uses this to do things like RESYNC the catalog (otherwise the controlfile is
a moving target, constantly changing, and Rman would get blocked and block the
database).
Can RMAN write to disk and tape Parallel? Is it possible?
Rman currently won't write to tape directly; you need a media manager for that.
Regarding disk and tape in parallel: not as far as I know, you would run two backups
separately. Maintaining duplexed copies may achieve the desired result.
What is the difference between DELETE INPUT and DELETE ALL command in
backup?
Generally speaking, LOG_ARCHIVE_DEST_n can point to two (or more) disk locations
where we archive the files. When a command is issued through rman to back up
archivelogs, it reads one of the locations. If we specify DELETE INPUT, only the copy
that was backed up is deleted; if we specify DELETE ALL INPUT, the log is deleted from
all log_archive_dest_n locations.
The DELETE clauses apply only to archived logs, e.g.:
delete expired archivelog all;
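The two clauses can be compared directly:

```sql
-- deletes only the copy of each log that was read for this backup
RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;

-- deletes every copy of each backed-up log from all log_archive_dest_n locations
RMAN> BACKUP ARCHIVELOG ALL DELETE ALL INPUT;
```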
Is it possible to restore a backupset (actually backup pieces) from a different
location to where RMAN has recorded them to be.
With 9.2 and earlier it is not possible to restore a backupset (actually backup pieces)
from a different location to where RMAN has recorded them to be. As a workaround you
would have to create a link using the location where the backup was originally located.
Then, when restoring, RMAN will think everything is the same as it was.
Starting in 10.1 it is possible to catalog the backup pieces in their new location into the
controlfile and recovery catalog. This means they are available for restoration by RMAN
without creating the link.
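On 10.1 and later the recataloging step looks like this (the path is a placeholder):

```sql
-- register backup pieces found under the new location in the controlfile/catalog
RMAN> CATALOG START WITH '/new/backup/location/';
```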
What is the difference between REPORT OBSOLETE and REPORT OBSOLETE ORPHAN?
REPORT OBSOLETE reports backups that are unusable according to the user's retention
policy, whereas REPORT OBSOLETE ORPHAN reports backups that are unusable because
they belong to incarnations of the database that are not direct ancestors of the current
incarnation.
How to Increase Size of Redo Log
1. Add new log file groups with the new size:
ALTER DATABASE ADD LOGFILE GROUP ...
2. Switch with ALTER SYSTEM SWITCH LOGFILE until one of the new log file groups is in
state CURRENT
3. Now you can drop the old log file groups:
ALTER DATABASE DROP LOGFILE GROUP ...
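A worked example of the three steps, with assumed paths and sizes:

```sql
-- 1. add new, larger groups
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/orcl/redo04.log') SIZE 200M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/orcl/redo05.log') SIZE 200M;

-- 2. switch until a new group is CURRENT and the old ones are INACTIVE
ALTER SYSTEM SWITCH LOGFILE;
SELECT group#, status FROM v$log;

-- 3. drop an old group only once it shows INACTIVE
ALTER DATABASE DROP LOGFILE GROUP 1;
```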
What is the difference between alter database recover and sql*plus recover
command?
ALTER DATABASE RECOVER is useful when you, as a user, want to control the recovery,
whereas the SQL*Plus RECOVER command is useful when you prefer automated recovery.
Difference of two view V$Backup_Set and Rc_Backup_Set in respect of Rman
V$BACKUP_SET is used to check backup details when we are not using an Rman catalog,
that is, when the backup information is stored in the controlfile, whereas RC_BACKUP_SET
is used when we are using a catalog as a central repository of backup information.
Can I cancel a script from inside the script? How do I cancel a SELECT on a
Windows client?
Use Ctrl-C.
How to Find the Number of Oracle Instances Running on Windows Machine
C:\>net start | find "OracleService"
How to create an init.ora from the spfile when the database is down?
CREATE PFILE FROM SPFILE works even when the database is down, since it reads the
spfile directly:
SQL> connect sys/oracle as sysdba
SQL> shutdown;
SQL> create pfile from spfile;
SQL> create spfile from pfile;
When you shutdown the database, how does oracle maintain the user session,
i.e. of sysdba?
You still have your dedicated server
!ps -auxww | grep ora920
sys@ORA920> !ps -auxww | grep ora920
sys@ORA920> shutdown
sys@ORA920> !ps -auxww | grep ora920
You can see you still have your dedicated server. When you connect as sysdba, you fire
up dedicated server that is where it is.
What is the ORA-00204 error? What will you do in that case?
A disk I/O failure was detected on reading the control file. Basically you have to check
whether the control file is available, the permissions are right on the control file, and the
spfile/init.ora points to the right location. If all checks are done and you still get the
error, then overlay the corrupted control file with a multiplexed copy.
Let us say you have three control files control01.ctl, control02.ctl and control03.ctl and
now you are getting errors on control03.ctl then just copy control01.ctl over to
control03.ctl and you should be all set.
In order to issue ALTER DATABASE BACKUP CONTROLFILE TO TRACE; database should
be mounted and in our case it is not mounted then the only other option available is to
restore control file from backup or copy the multiplexed control file over to the bad one.
Why do we need SCOPE=BOTH clause?
BOTH indicates that the change is made in memory and in the server parameter file. The
new setting takes effect immediately and persists after the database is shut down and
started up again. If a server parameter file was used to start up the database, then
BOTH is the default. If a parameter file was used to start up the database, then MEMORY
is the default, as well as the only scope you can specify.
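For example (the parameter names and values here are illustrative):

```sql
-- dynamic parameter: change takes effect now and persists across restarts
ALTER SYSTEM SET open_cursors = 500 SCOPE=BOTH;

-- static parameter: can only be written to the spfile, takes effect after restart
ALTER SYSTEM SET processes = 300 SCOPE=SPFILE;
```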
How to know Number of CPUs on Oracle
Login as SYSDBA

SQL> show parameter cpu_count

NAME         TYPE     VALUE
------------ -------- -----
cpu_count    integer  2
Could you please tell me what are the possible reason for Spfile corruption and
Recovery?
It should not be corrupt under normal circumstances, if it were, it would be a bug or
failure of some component in your system. It could be a file system error or could be a
bug.
You can recover easily, however:
a) Your alert log has the non-default parameters in it from your last restart.
b) The spfile should be in your backups.
c) strings spfile.ora > init$ORACLE_SID.ora -- and then editing the resulting file to
clean it up is another option.
How you will check flashback is enabled or not?
Select flashback_on from v$database;
In case Revoke CREATE TABLE Privilege from an USER giving ORA-01952. What
is the issue? How to do in that case?
SQL> revoke create table from Pay_payment_master;
ORA-01952: system privileges not granted to PAY_PAYMENT_MASTER
This is because the privilege was not assigned to this user directly; rather it was
assigned through the role CONNECT. If you remove the CONNECT role from the user,
the user will no longer be able to create a session (connect) to the database. So
basically we have to revoke the CONNECT role and grant back the privileges it carried
other than CREATE TABLE.
What kind of information is stored in UNDO segments?
Only before image of data is stored in the UNDO segments. If transaction is rolled back
information from UNDO is applied to restore original datafile. UNDO is never multiplexed.
How to Remove Oracle Service in windows environment?
We can add or remove an Oracle service using oradim, which is available in
ORACLE_HOME\bin:
C:\> oradim -delete -sid <SID>
or
C:\> oradim -delete -srvc <service_name>
Why ORA-28000: the account is locked? What you will do in that case?
The Oracle 10g default is to lock an account after 10 failed password attempts, giving
ORA-28000: the account is locked. In that case, one solution is to raise the default limit
on login attempts:
SQL> Alter profile default limit FAILED_LOGIN_ATTEMPTS unlimited;
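The other (and usually quicker) solution is simply to unlock the account:

```sql
ALTER USER scott ACCOUNT UNLOCK;   -- scott is a placeholder username
```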
How to Reduce the Physical Reads on Statistics?
You need to increase the buffer cache.
Consider this situation: the buffer cache of the database is 300MB and one SQL
statement shows 100 physical reads. After increasing the cache to 400MB, the same
SQL shows 0 physical reads.
How many redo groups are required for an Oracle DB?
At least 2 redo log groups are required for an Oracle database to work normally.
My spfile is corrupt and now I cannot start my database running on my laptop. Is there a
way to build spfile again?
If you are on unix then:
$ cd $ORACLE_HOME/dbs
$ strings spfile$ORACLE_SID.ora > temp_pfile.ora
Edit temp_pfile.ora, clean it up if there is anything "wrong" with it, and then:
SQL> startup pfile=temp_pfile.ora

SQL> create spfile from pfile;
SQL> shutdown
SQL> startup
On Windows, you can try editing the spfile directly [do not try this on the prod db; first
check on a test db, as it can be dangerous], create a pfile from it, save it, and do the
same; or, if you hit a problem, start the db from the command line using sqlplus:
create a pfile, then do a manual startup (start the Oracle service, then use sqlplus to
start the database).
What is a fractured block? What happens when you restore a file containing
fractured block?
A fractured block is one in which the header and footer are not consistent at a given
SCN. In a user-managed backup, an operating system utility can back up a datafile at
the same time that DBWR is updating the file. It is possible for the operating system
utility to read a block in a half-updated state, so that the block copied to the backup
media is updated in its first half while the second half contains older data. In this case,
the block is fractured.
For non-RMAN backups, the ALTER TABLESPACE ... BEGIN BACKUP or ALTER DATABASE
BEGIN BACKUP command is the solution for the fractured block problem. When a
tablespace is in backup mode, and a change is made to a data block, the database logs a
copy of the entire block image before the change so that the database can reconstruct
this block if media recovery finds that this block was fractured.
The block that the operating system reads can be split, that is, the top of the block is
written at one point in time while the bottom of the block is written at another point in
time. If you restore a file containing a fractured block and Oracle reads the block, then
the block is considered corrupt.
You recreated the control file by using backup control file to trace and using
alter database backup controlfile to location command what have you lost in
that case?
You lose all of the backup information when using backup controlfile to trace, whereas
ALTER DATABASE BACKUP CONTROLFILE TO 'D:\Backup\control01.ctl' takes a binary
control file backup, in which all backup information is retained.
If a backup is issued after shutdown abort command what kind of backup
is that?
It is an inconsistent backup. If you are in noarchivelog mode, ensure that you issue a
SHUTDOWN IMMEDIATE before backing up. After a shutdown abort, another option is
STARTUP FORCE followed by SHUTDOWN IMMEDIATE to bring the database to a
consistent state first.

Interview Question with Answer Part 10
How can I check if there is anything rolling back?
It depends on how you killed the process. If you did an ALTER SYSTEM KILL SESSION,
you should be able to look at USED_UBLK in v$transaction to get an estimate of the
rollback being done. If you killed the server process at the OS level and PMON is
recovering the transaction, you can look at the V$FAST_START_TRANSACTIONS view to
get the estimate.
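A sketch of the first case, joining the killed session to its transaction (the sid value is a placeholder):

```sql
-- USED_UBLK shrinking towards 0 over repeated queries indicates rollback progress
SELECT s.sid, t.used_ublk, t.used_urec
  FROM v$transaction t, v$session s
 WHERE s.taddr = t.addr
   AND s.sid = 123;
```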
How to find out how much rollback a session has to do
select time_remaining from v$session_longops
where sid =<sid of the session doing the rollback>;
How to Drop a column of a Table?
Consider the below Example
Create table x (a date, b date, c date);

Now to drop column B:
Alter table x set unused column b -- it will mark column as UNUSED
Select * from sys.dba_unused_col_tabs;
Alter table x drop unused columns;
Alternative method to drop column:
Alter table x drop column c cascade constraints;
How can we see the oldest flashback available?
You can use the following query to see the flashback data available.
SELECT to_char(sysdate, 'YYYY-MM-DD HH24:MI') current_time,
       to_char(f.oldest_flashback_time, 'YYYY-MM-DD HH24:MI') oldest_flashback_time,
       (sysdate - f.oldest_flashback_time)*24*60 hist_min
  FROM v$database d, v$flashback_database_log f;
How to get current session id, process id, client process id?
select b.sid, b.serial#, a.spid processid, b.process clientpid
  from v$process a, v$session b
 where a.addr = b.paddr
   and b.audsid = userenv('sessionid');
V$SESSION.SID and V$SESSION.SERIAL# are database process id
V$PROCESS.SPID Shadow process id on the database server
V$SESSION.PROCESS Client process id; on Windows it is ':'-separated, where the first
number is the process id on the client and the second one is the thread id.
What is MRC ? What you do as application DBA for MRC?
MRC, also called Multiple Reporting Currency in Oracle Applications. By default you have
currency in US Dollars, but if your organization's operating books are in another currency
then you, as application DBA, need to enable MRC in Applications.
How will you find Invalid Objects in database?
select count(*) from dba_objects where status like 'INVALID';
select * from dba_objects where status like 'INVALID';
Can you use both ADPATCH and OPATCH in application?
Yes, you have to use both in Applications: for application patches you will use the
ADPATCH utility, and for applying database patches in Applications you will use the
OPATCH utility.
Do you have idea how to trace a running process on Linux?
Using strace you can trace the system calls being executed by a running process
$ strace -p 1435
Process 1435 attached - interrupt to quit
(press Ctrl-C to stop the strace)
$ strace -cfo smon_strace.log -p 1435
Process 1435 attached - interrupt to quit
Process 1435 detached
What are database links? Differentiate the use of each of them.
A database link is a named object that describes a "path" from one database to another.
There are different types of database link such as: Private database link, public database
link & network database link.
Private database link is created on behalf of a specific user. A private database link can
be used only when the owner of the link specifies a global object name in a SQL
statement or in the definition of the owner's views or procedures.
Public database link is created for the special user group PUBLIC. A public database link
can be used when any user in the associated database specifies a global object name in
a SQL statement or object definition.

Network database link is created and managed by a network domain service. A network
database link can be used when any user of any database in the network specifies a
global object name in a SQL statement or object definition.

How to know which version of database you are working?
select * from v$version;
In Reference to Rman point in time Recovery which scenario is better for you
(Until time or until sequence)?
I am practicing various scenarios for backup and recovery using RMAN. I find UNTIL SCN
better than UNTIL TIME, with UNTIL SEQUENCE (log_seq) in the middle. UNTIL TIME is
ultimately still going to use an SCN to recover, so if you know the SCN it is preferred;
if not, then time is fine.
If you have forgotten the root password on CentOS then what you will do?
If you are on CentOS then follow these steps:
- At the splash screen during boot, press any key, which will take you to an interactive
menu.
- Then select the Linux version you wish to boot and press 'a' to append options to the
line; this will bring you to a line with the boot command.
- Next, at the end of that line, type 'single' as an option/parameter and then press
Enter to boot. This starts the OS in single-user mode, which allows you to reset the root
password by typing passwd; you can then set a new password for root.
How to determine whether the datafiles are synchronized or not?
select status, checkpoint_change#,
       to_char(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,
       count(*)
  from v$datafile_header
 group by status, checkpoint_change#, checkpoint_time
 order by status, checkpoint_change#, checkpoint_time;
Check the results of the above query: if it returns one and only one row for the online
datafiles, they are already synchronized in terms of their SCN; otherwise the datafiles
are not yet synchronized.
You have just restored from backup and do not have any control files. How
would you go about bringing up this database?
If you do not have a control file, you can create one from scratch in SQL*Plus as follows:
1. sqlplus /nolog
2. connect / as sysdba
3. Startup nomount;
4. Then either create the controlfile or restore it from backup (if you have one)
5. alter database mount;
6. recover database using backup controlfile;
7. alter database open resetlogs;
From more details follow my blog post "Disaster Recovery from the
scratch": http://shahiddba.blogspot.com/2012/05/rman-disaster-recovery-from-
scratch.html
Is there any way to find the last record from the table?
select * from employees where rowid in (select max(rowid) from employees);
select * from employees minus select * from employees where rownum < (select
count(*) from employees);
(Note that rows in a table have no guaranteed order, so MAX(ROWID) is not guaranteed
to be the most recently inserted row; these queries are approximations at best.)




How you will find Oracle timestamp from current SCN?
select dbms_flashback.get_system_change_number scn from dual; -- Oracle Ver.
9i
SCN
------------
8843525
SQL> Select to_char(CURRENT_SCN) from v$database; -- oracle Ver. 10g or above
SQL>
select current_scn, dbms_flashback.get_system_change_number from v$database;
--standby case
SQL> select scn_to_timestamp(8843525) from dual;
How to suspend/resume a process using oradebug?
SQL> oradebug setorapid 14
Unix process pid: 14962, image: oracle@localhost.localdomain (TNS V1-V3)
SQL> oradebug suspend
Statement processed.
SQL> oradebug resume
Statement processed.
How to find the last time a session performed any activity?
In v$session, the column LAST_CALL_ET tells us how many seconds ago the session
last performed any activity within the database.
select username, floor(last_call_et / 60) "Minutes", status
from v$session
where username is not null order by last_call_et;
How to find parameters that will take into effect for new sessions?
Using the following query one can find the list of parameters that will take effect for
new sessions if the value of the parameter is changed.
SQL> SELECT name FROM v$parameter WHERE issys_modifiable = 'DEFERRED';
You can change the parameter using the deferred option:
SQL> alter system set sort_area_size=65538 deferred;
System altered
How to free (flush) the buffer cache?
Note: you may only want to do this on a Dev or Test environment, as it would affect
performance on production. As I already wrote in my earlier
post http://shahiddba.blogspot.com/2012/05/dba-interview-questions-with-
answers_14.html, in real life the cache would never be empty.
-- displays the status and number of pings for every buffer in the SGA
SQL> select distinct status from v$bh;
STATUS
-------
cr
free
xcur
-- flush buffer cache for 10g and upwards
SQL> alter system flush buffer_cache;
System altered.
-- flush buffer cache for 9i and upwards
SQL> alter session set events immediate trace name flush_cache;
Session altered.
-- Shows buffer cache was freed after flushing buffer cache

SQL> select distinct status from v$bh;
STATUS
-------
free



How to suspend all jobs from executing in dba_jobs?
By setting the value of 0 to the parameter job_queue_processes you can suspend all
jobs from executing in DBA_JOBS. The value of this parameter can be changed without
instance restart.
SQL> show parameter job_queue_processes;
NAME                  TYPE     VALUE
--------------------- -------- -----
job_queue_processes   integer  400
Now set the value of the parameter in memory, which will suspend jobs from starting
SQL> alter system set job_queue_processes=0 scope=memory;
System altered.
How to see the jobs currently being executed?
By querying dba_jobs_running you can see all the jobs currently executing:
SQL> select djr.sid, djr.job, djr.failures, djr.this_date, djr.this_sec, dj.what from
dba_jobs_running djr, dba_jobs dj where djr.job = dj.job;
What is GSM in Oracle application E-Business Suite?
GSM stands for Generic Service Management framework. Oracle E-Business Suite
consists of various components like Forms, Reports, Web Server, Workflow and
Concurrent Manager. Earlier, each service used to start on its own, but managing these
services is hard given that they can be on various machines distributed across the
network. So Generic Service Management is an extension of Concurrent Processing
which manages all your services and provides fault tolerance (if some service is down,
ICM through FNDSM and other processes will try to start it, even on a remote server).
With GSM, all services are centrally managed via this framework.
How can you license a product after installation?
You can use the ad utility adlicmgr to license products in Oracle Applications.
In a situation when you want to know which was the last query fired by the
user. How to check?
select s.username||'('||s.sid||')-'||s.osuser uname,
       s.sid||'/'||s.serial# sid, s.status "Status", p.spid, sql_text sqltext
  from v$sqltext_with_newlines t, v$session s, v$process p
 where t.address = s.sql_address
   and p.addr = s.paddr(+)
   and t.hash_value = s.sql_hash_value
 order by s.sid, t.piece;
Can one copy Oracle software from one machine to another?
Yes, one can copy or FTP the Oracle software between similar machines. Look at the
following example:
# use tar to copy files and directories with permissions and ownership to a remote host
(cd $ORACLE_HOME; tar cf - .) | rsh <newhost> "cd $ORACLE_HOME; tar xf -"
To copy the Oracle software to a different directory on the same server:
cd /new/oracle/dir/
(cd $ORACLE_HOME; tar cf - .) | tar xvf -
NOTE: Remember to relink the Intelligent Agent on the new machine to prevent
messages like "Encryption key supplied is not the one used to encrypt file":

cd /new/oracle/dir/
cd network/lib
make -f ins_agent.mk install


A single transaction can have multiple deletes and a single SCN number
identifying all of these deletes. What if I want to flash back only a single
individual delete?
You would flash back to the SYSTEM SCN (not your transaction's SCN) at that point in
time. The SYSTEM has an SCN and your transaction has an SCN; with flashback you
care about the SYSTEM SCN, not your transaction's SCN.
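For example, to flash a single table back to a system SCN from before the unwanted delete (10g syntax; the table name and SCN are placeholders):

```sql
ALTER TABLE emp ENABLE ROW MOVEMENT;   -- required before FLASHBACK TABLE
FLASHBACK TABLE emp TO SCN 8843520;
```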
Are flash back queries useful for the developer or the DBA both? How can I as a
developer and DBA get to know the SCN number of a transaction?
Oracle Flashback is a tool useful for both the DBA and the developer. If you deleted
data accidentally, then either the DBA or the developer can flashback, recover and fix
the problem. As a developer you can use dbms_flashback.get_system_change_number
to return the current system SCN, and as a DBA you can use the LogMiner utility to look
back in time at various events to find SCNs as well.
After performing a DML operation you can use flashback query to return your
committed data. Can you use the flashback concept after truncating any data?
In version 9i, Flashback is limited to Data Manipulation Language (DML) commands such
as SELECT, INSERT, UPDATE, and DELETE. TRUNCATE does not generate any undo for
the table: truncate just cuts the data loose, whereas delete puts the deleted data into
undo. Flashback query works on undo, so it cannot bring back truncated data.

Interview Question with Answer part 11
What is SID and what is it used for? Where can I find out the SID of my
database?
The SID is a site identifier. It plus the Oracle_home are hashed together in Unix
to create a unique key name for attaching an SGA. If your Oracle_sid or Oracle_home is
not set correctly, you will get "oracle not available". You can get the instance name with
the following commands:
select instance from v$thread;
select instance_name from v$instance;
If you are buying a new server that will be a mirror image of the current
Production Server what would be the step for that?
Set up the server with the same environment and directory structure, install oracle, use
oradim to set up the registry (register the instance) and restore from backup.
I am cloning database A as database B, both exactly identical, running in
NOARCHIVELOG mode. Database A will be shutdown before copying files. I am
using the CREATE CONTROLFILE statement to clone.
a) Do I need to copy redo log files from A to B if I need to open B with
RESETLOGS option?
b) Do I need to copy control files from A to B since I will be creating controlfile
for B?
a) You do not need to, but you would avoid having to open resetlogs if that makes you
feel better.
b) Not if you are doing the create controlfile trick. You could just copy EVERYTHING,
startup mount, and issue a series of alter database rename file 'old name' to 'new
name'; and then alter database open (assuming logs are in the same place, else you'll
drop and create them).
Note: My understanding is that if you use RESETLOGS option in CREATE CONTROLFILE,
the redo log files will be created by Oracle as per the specifications given in the create
controlfile statement.
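A minimal CREATE CONTROLFILE for the clone might look like the sketch below (the paths, sizes and character set are assumptions; list every datafile of the source database):

```sql
CREATE CONTROLFILE SET DATABASE "B" RESETLOGS NOARCHIVELOG
    MAXLOGFILES 16
    MAXDATAFILES 100
LOGFILE
  GROUP 1 '/u01/oradata/B/redo01.log' SIZE 100M,
  GROUP 2 '/u01/oradata/B/redo02.log' SIZE 100M
DATAFILE
  '/u01/oradata/B/system01.dbf',
  '/u01/oradata/B/sysaux01.dbf',
  '/u01/oradata/B/users01.dbf'
CHARACTER SET WE8ISO8859P1;
```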
I have a new server. What is the best way I can have the same oracle setup
that is there on a prodn db? Either we need to restore the file systems and
relink oracle without doing any installation?
My suggestion is to install the same software on another server, then apply the restore
and recover procedure in the same environment and directory structure.
No idea about "relink oracle without doing any installation"; see the admin guide for
your OS for details on things like this.
Is there any difference between Oracle TCL and DCL commands?
DCL stands for Data Control Language. These commands are used to configure and
control access to database objects, such as GRANT and REVOKE, whereas TCL stands for Transaction
Control Language. It is used to manage the changes made by DML statements. It allows
statements to be grouped together into logical transactions, such as:
COMMIT - save work done
SAVEPOINT - identify a point in a transaction to which you can later roll back
ROLLBACK - restore database to original since the last COMMIT
SET TRANSACTION - Change transaction options like isolation level and what rollback
segment to use
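The commands above can be sketched in a short session (table and values are illustrative):

```sql
-- first update, then a savepoint we can return to
UPDATE emp SET sal = sal * 1.1 WHERE deptno = 10;
SAVEPOINT before_bonus;

UPDATE emp SET comm = 500 WHERE deptno = 10;
ROLLBACK TO SAVEPOINT before_bonus;  -- undoes only the second update

COMMIT;                              -- makes the first update permanent
```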
What happens when the lock is disabled on a table?
When you disable the table lock, you are no longer able to perform DDL operations on
that table, but you can still perform DML operations easily.
For Example:
Create Table s1 (Eno number(2), ename varchar2(15), salary number(5,2));
insert into s1 values (1, 'shahid', 400);
insert into s1 values (1, 'javed', 200);
insert into s1 values (2, 'karim', 100);
--disable lock on table
Alter table s1 disable table lock;
-- cannot drop/truncate table as table lock is disable
drop table s1;
truncate table s1;
-- you cannot add/modify/drop columns
Alter table s1 add comm number(5,2);
Alter table s1 modify salary number(10,4);
Alter table s1 drop column salary;
-- But still you are able to perform DML
update s1 set salary= 800 where eno=2;
select * from s1;
delete from s1 where eno=2;
insert into s1 values (2, 'mohan', 250);
What is the importance of clock time in case of database cloning?
In my personal experience, just cloning a database is sometimes not enough; if you are moving it to
another machine you also have to ensure:
1. The environment on the new machine is setup, to match the cloned system this would
include memory & disc allocation space.
2. The "new" machine time is the same or greater than the machine you were cloning
from
How much space does it take to clone a database?
The clone needs the same space.
In which case (% before or after the search word) does LIKE operator performance increase?
Using % after the search word (LIKE 'ss%') works the fastest, because Oracle can use an index
to search on the column, provided an index exists on that column.
Do you have idea about Fuzz testing or fuzzing?
Fuzz testing or fuzzing is a software testing technique that provides random data
("fuzz") to the inputs of a program. If the program fails (for example, by crashing, or by
failing built-in code assertions), the defects can be noted. The great advantage of fuzz
testing is that the test design is extremely simple, and free of preconceptions about
system behavior.
Using expdp/impdp (Data Pump in 10g), is it possible to export and import data from
one schema/database to another?
Yes; you can use the REMAP_SCHEMA parameter, or a database link via the NETWORK_LINK parameter.
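A hedged sketch of the database-link variant (user, link and table names are examples; the link must already exist in the target database):

```
impdp user2/pwd NETWORK_LINK=src_link REMAP_SCHEMA=user1:user2 TABLES=user1.test
```

With NETWORK_LINK the data is pulled straight over the link, so no intermediate dump file is needed.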
What is DataMapper?
DataMAPPER is a high-performance data migration tool designed for large-scale data
movement projects. Its distinct client/server design allows users to work in a graphical
environment, without sacrificing the performance.
How to start Enterprise Manager from the command line?
C:\> cd %ORACLE_HOME%\bin
C:\> emctl start dbconsole
Then open http://localhost.localdomain:5500/em/ in the browser (the port may differ; emctl prints the exact URL).
How will you find current and max utilization of session and number of
processes?
SQL> select resource_name, current_utilization, max_utilization from v$resource_limit
where resource_name in ('processes','sessions');
RESOURCE_NAME CURRENT_UTILIZATION MAX_UTILIZATION
--------------------- ------------------- ---------------
processes 14 18
sessions 12 17
As the table is being modified, can ROWID of a row change?
A rowid is assigned to a row upon insert and is immutable (never changing) unless the row
is deleted and re-inserted (meaning it is another row, not the same row!).
What happens when I update a narrow row, setting a character column to a wider value?
In this case the row may migrate, but the rowid of the row stays the same even when the
row migrates.
Session 1: retrieves a row with rowid X
Session 2: deletes the row with rowid X, commits
/* rowid X is now free for re-use */
Session 3: inserts a new row with rowid X, commits
Session 1: update .... where rowid = X
Session 1's update is not updating the same row that it had earlier retrieved.
Consider the above scenario what should be the solution
Use the Primary Key with the table. If you combine rowid with the primary key then it
will be perfectly safe to use rowid id in all cases.
If you have a single delete statement that deletes many records using rowids.
Would there ever be a time when the rowid within this table change during the
execution of this delete statement?
In order for a rowid to change you have to enable row movement first, so if row
movement is not enabled then the answer is NO. If it is enabled, then flashback table could change a
rowid in the case of a DDL statement, and that would not happen concurrently with a delete (so it
would not affect it).
For Example:
Alter table s1 shrink space compact, that moves rows and would change rowids.
Update of a partition key that causes a row to move, that moves rows and would change
rowids.
If I fire two inserts into a table, will the rowid of the 2nd record be greater than the
rowid of the 1st record?
The answer is NO; see the example below:
if you insert A
then insert B
later insert C
delete A
insert D
It is quite possible in the above example that D will be "first" in the table, as it took over A's
place. If rowids always "grew", then space would never be reused (that would be an
implication of ever-growing rowids); we would never be able to reuse old space, because the
rowid is just a physical address: file.block.slot-on-block.
Difference between Stored Procedure and Macro?
Stored Procedure:
It does not return rows to the user.
It has to use cursors to fetch multiple rows
It uses IN/OUT parameters to exchange values with the user
It is stored in DATABASE or USER PERM
A stored procedure also provides output/Input capabilities
Macros:
It returns set of rows to the user.
It is stored in DBC PERM space
A macro allows only input values
If port 1521 is the default port for the TNS listener and I have a database server
on port 1527, how can I make the clients connect on this port? Or can I have one
listener service listen for 2 servers?
If you are using "Host naming" convention (this is a method that does not require the
client to have a tnsnames.ora file at all. You must be using TCP or you must only have
one default database per host. The client only needs to know the hostname of the
server to connect) then yes, 1521 is the default and only port.
If you are using tnsnames.ora, the Oracle nameserver, or any other method to connect
then no, 1521 is not a default port. In this case, 1521 is simply the port used by
"convention". The clients would, typically in their tnsnames.ora, connect to the listener
on some specified port number. 1521 is the convention used by many people; it is
neither mandatory nor necessary.
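For illustration, a client-side tnsnames.ora entry pointing at port 1527 might look like this (host and service names are hypothetical):

```
# hypothetical tnsnames.ora entry; the listener on the server must
# also be configured with PORT = 1527 in its listener.ora
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1527))
    (CONNECT_DATA = (SERVICE_NAME = mydb.example.com))
  )
```

A single listener can serve several databases: each instance simply registers its service with that listener, so one listener.ora entry on one port is enough for both servers on the same host.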
What is an IPC protocol and where and how it is used? I have experience only
in TCP/IP protocol. Is there any advantage in using IPC over TCP?
IPC is inter-process communication: you have messages, pipes, socket pairs and so on. It is
a lot like just using sockets with TCP/IP, but IPC is generally limited to "a machine", not used over
a network. IPC used to be a tad faster than TCP, but recent tests have shown this to be
less and less true.


In your absence somebody has made an alteration; how would you notice it? Or:
how to know the last DDL fired on a particular schema and a particular table?
To find the last DDL performed, check the last_ddl_time column in the all_objects, dba_objects,
or user_objects views; each time an object changes, its last_ddl_time is updated
in these views.
Select created, timestamp, last_ddl_time from all_objects
WHERE owner='HRMS' AND object_type='TABLE' AND object_name='PAYROLL_MAIN_FILE';
In the above query HRMS is the schema name and payroll_main_file is the table name.
How to find tables that have a specific column name?
SELECT owner, table_name, column_name
FROM dba_tab_columns
WHERE column_name LIKE '%AMOUNT%'
ORDER by table_name;
Differentiate Row level and statement level Trigger?
A row-level trigger is fired once for each row affected by an Insert, Update or Delete
command; if the statement does not affect any row, the trigger does not fire. A
statement-level trigger fires once for each triggering SQL statement and performs its
action irrespective of the number of rows affected by that statement.
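The difference can be sketched with two triggers on an illustrative EMP table (names and bodies are examples only):

```sql
-- Row-level: fires once per affected row
CREATE OR REPLACE TRIGGER emp_row_trg
AFTER UPDATE OF sal ON emp
FOR EACH ROW
BEGIN
  DBMS_OUTPUT.PUT_LINE('row: ' || :OLD.sal || ' -> ' || :NEW.sal);
END;
/
-- Statement-level: fires once per statement, even if zero rows change
CREATE OR REPLACE TRIGGER emp_stmt_trg
AFTER UPDATE ON emp
BEGIN
  DBMS_OUTPUT.PUT_LINE('statement completed');
END;
/
```

An UPDATE touching 100 rows fires emp_row_trg 100 times but emp_stmt_trg only once.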

DBA Interview Question with Answer part 12
I exported one table with a name of user, how to import that table with another name of user?
EXPDP user1/pwd TABLES=test DUMPFILE=test.DMP DIRECTORY=abc;
IMPDP user2/pwd REMAP_SCHEMA=user1:user2 DUMPFILE=test.DMP DIRECTORY=abc ;
-or-
IMPDP user2/pwd directory=directory_name tables=table_name
dumpfile=dump_name.dmp;
SQL>Grant read, write on directory directory_name to public;
SQL>Grant read, write on directory <dir_name> to <user>;
Just be careful about granting to PUBLIC if it is a production environment.
I have two server of same configuration having single database of 10GB and 20 GB size
respectively, I want to merge into single server what are the prerequisites and steps to follow
in this case.
In my view Export/Import is the best solution to merge the database. You can export the schemas
from one database and import it into other database.
Can one monitor how fast a table is imported?
If you need to monitor how fast rows are imported from a running import job, try one of the following
methods:
Method 1:
select substr(sql_text,instr(sql_text,'INTO "'),30) table_name,
rows_processed,
round((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60,1)
minutes,
trunc(rows_processed/((sysdate-to_date(first_load_time,'yyyy-mm-dd
hh24:mi:ss'))*24*60)) rows_per_min
from sys.v_$sqlarea
where sql_text like 'INSERT %INTO "%'
and command_type = 2
and open_versions > 0;
If the import has more than one table, this statement will only show information about the current table
being imported.
Method 2:
Use the FEEDBACK=n import parameter. This command will tell IMP to display a dot for every N rows
imported.
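For example (user names and file name are illustrative), this shows a dot for every 10000 rows imported:

```
imp system/pwd file=test.dmp fromuser=user1 touser=user2 feedback=10000
```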
How we will increase performance on particular table? Here I am inserting 2GB data in table,
its takes more time to insert in a table. Is there any way to increase performance on a
particular table?
An index on a huge table is not the only factor in insert performance. Get the table
partitioned: that will make insertion faster and also make the archived data easier to manage.
Alternatively, first disable the constraints as well as the indexes, perform the insertion, and then
enable them again.
You can use high-speed solid-state disk (RAM-SAN) to make Oracle inserts run up to 300x faster
than platter disk.
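The disable-then-load approach above can be sketched as follows (table, constraint and index names are hypothetical):

```sql
-- disable integrity checking and index maintenance before the bulk load
ALTER TABLE big_tab DISABLE CONSTRAINT big_tab_fk;
ALTER INDEX big_tab_ix UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- direct-path insert bypasses the buffer cache for the data blocks
INSERT /*+ APPEND */ INTO big_tab SELECT * FROM staging_tab;
COMMIT;

-- re-enable afterwards
ALTER INDEX big_tab_ix REBUILD;
ALTER TABLE big_tab ENABLE CONSTRAINT big_tab_fk;
```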
How to reduce alert log Size?
If you move or delete your alert log file, it is recreated automatically the next time Oracle
writes to it. Alternatively, you can put a script at OS level to archive the old logs and start a new
one. So the best way to reduce the size of the log is simply to move alert.log to some other place;
Oracle will recreate it.
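A minimal demonstration of the move-and-archive idea, using a temporary directory to stand in for the background dump destination (the SID and date suffix are example values):

```shell
# temp dir standing in for BACKGROUND_DUMP_DEST; alert_PROD.log is a dummy
d=$(mktemp -d)
echo "ORA-00060: deadlock detected" > "$d/alert_PROD.log"

# archive the current alert log under a dated name and compress it;
# Oracle recreates alert_PROD.log the next time it writes a message
mv "$d/alert_PROD.log" "$d/alert_PROD.log.20240101"
gzip "$d/alert_PROD.log.20240101"
```

In practice you would run this from cron against the real dump destination.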
How you will know the instance is Primary or Standby?
By querying v$database one can tell if the host is primary or standby
On the primary database:
SQL> select database_role from v$database;
DATABASE_ROLE
------------------
PRIMARY
Or check the value of controlfile_type in v$database: it is CURRENT for a primary and STANDBY
for a standby database.
SQL> SELECT controlfile_type FROM V$database;
CONTROL
-------------
CURRENT
On the Standby database:
SQL> select database_role from v$database;
DATABASE_ROLE
-------------------
PHYSICAL STANDBY
SQL> SELECT controlfile_type FROM V$database;
CONTROL
-------------
STANDBY
Note: You may need to connect as SYS if the instance is in the mount state
How would you determine what sessions are connected and what resources they are waiting
for?
Use of V$SESSION and V$SESSION_WAIT
Give two methods you could use to determine what DDL changes have been made.
You could use Logminer or Streams
How would you determine who has added a row to a table?
Turn on fine grain auditing for the table.

Explain the differences between PFILE and SPFILE
A PFILE is a static text file that initializes database parameters at the moment the instance is started. If
you want to modify parameters in a PFILE, you have to restart the database for the change to take effect.
A SPFILE is a dynamic, binary file that allows you to overwrite parameters while the database is
already started (with some exceptions).
Name some clients that can connect with Oracle?
There are several such as SQL Developer, SQL-Plus, TOAD, dbvisualizer, PL/SQL Developer.
In which view can you find information about every view and table of oracle dictionary?
DICT or DICTIONARY view. You can query as:
SQL> SELECT * FROM DICT;
How can we change which databases are started during a reboot in Linux Env.?
Edit the /etc/oratab
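A typical oratab entry looks like this (SIDs and paths are example values); the third field controls whether dbstart brings the database up at boot:

```
# Format: SID:ORACLE_HOME:Y|N
PROD:/u01/app/oracle/product/10.2.0/db_1:Y
TEST:/u01/app/oracle/product/10.2.0/db_1:N
```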
How can we reduce the space of TEMP datafile?
Prior to Oracle 11g, you have to re-create the datafile. In Oracle 11g a new feature was introduced
and you can shrink the TEMP tablespace.
How can you view all the current users connected in your database in this moment?
SELECT COUNT(*),USERNAME FROM V$SESSION GROUP BY USERNAME;
What is the difference between a view and a materialized view?
A view is a stored query that is executed each time a user accesses it. A materialized view
physically stores the result of the query in its own segment, for faster access; it is refreshed on
demand or on a schedule.
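A minimal sketch (schema objects are hypothetical; FAST ON COMMIT also requires a materialized view log on the base table):

```sql
CREATE MATERIALIZED VIEW dept_sal_mv
REFRESH FAST ON COMMIT
AS SELECT deptno, SUM(sal) AS total_sal
   FROM   emp
   GROUP  BY deptno;
```

Queries against dept_sal_mv then read the precomputed totals instead of re-aggregating EMP each time.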
Can we have different database versions in the same RAC Env.?
Yes, but Clusterware version must be greater than the database version.
How can you tell a usual parameter from an undocumented parameter of Oracle?
The undocumented parameters have the prefix _, such as: _allow_resetlogs_corruption
What should be the result of the logical comparison (NULL != NULL)?
Neither TRUE nor FALSE: any comparison with NULL, including NULL = NULL and
NULL != NULL, evaluates to NULL (unknown), so in both cases the condition does not hold.
In case of SELECT * FROM MY_SCHEMA.MY_TABLE why we are getting this error: SP2-0678:
Column or attribute type can not be displayed by SQL*Plus?
Almost certainly the table has a BLOB column, which SQL*Plus cannot display; select the other columns explicitly.
Which are the default passwords of SYSTEM/SYS?
MANAGER / CHANGE_ON_INSTALL
Is it possible to center an object horizontally in a repeating frame that has a variable horizontal
size?
Yes
Can a field be used in a report without it appearing in any data group?
Yes
When a form is invoked with call_form, Does oracle forms issues a save point?
Yes
You have just had to restore from backup and do not have any control files. How would you go
about bringing up this database?
I would create a text-based backup control file, stipulating where on disk all the data files were, and
then issue the recover command with the using backup controlfile clause.
Explain the difference between a data block, an extent and a segment.
A data block is the smallest unit of logical storage for a database object. As objects grow they take
chunks of additional storage that are composed of contiguous data blocks. These groupings of
contiguous data blocks are called extents. All the extents that an object takes when grouped together
are considered the segment of the database object.
A table is classified as a parent table and you want to drop and re-create it. How would you do
this without affecting the children tables?
Disable the foreign key constraint to the parent, drop the table, re-create the table, enable the foreign
key constraint.
What column differentiates the V$ views to the GV$ views and how?
The INST_ID column which indicates the instance in a RAC environment the information came from.
How would you go about increasing the buffer cache hit ratio?
Use the buffer cache advisory over a given workload and then query the v$db_cache_advice table. If
a change was necessary then I would use the alter system set db_cache_size command.
How would you determine the time zone under which a database was operating?
select DBTIMEZONE from dual;


Explain the use of setting GLOBAL_NAMES equal to TRUE.
Setting GLOBAL_NAMES indicates how you might connect to a database. This variable is either
TRUE or FALSE and if it is set to TRUE it enforces database links to have the same name as the
remote database to which they are linking.
What background process refreshes materialized views?
The Job Queue Processes.
When a user process fails, what background process cleans up after it?
PMON
What are the roles and user accounts created automatically with the database?
DBA - role Contains all database system privileges.
SYS user account - The DBA role will be assigned to this account. All of the base tables and views for
the database's dictionary are store in this schema and are manipulated only by ORACLE.
SYSTEM user account - It has all the system privileges for the database and additional tables and
views that display administrative information and internal tables and views used by oracle tools are
created using this username.
What are the minimum parameters should exist in the parameter file (init.ora) ?
DB_NAME - Must be set to a text string of no more than 8 characters; it is stored inside the
datafiles, redo log files and control files at database creation.
DB_DOMAIN - A string that specifies the network domain where the database is created. The
global database name is formed from these two parameters (DB_NAME and DB_DOMAIN).
CONTROL_FILES - List of control file names of the database. If no name is
mentioned then a default name will be used.
DB_BLOCK_BUFFERS - Determines the number of buffers in the buffer cache in the SGA.
PROCESSES - Determines the number of operating system processes that can be connected to
ORACLE concurrently. The value should allow 5 for the background processes plus an additional 1 for each user.
ROLLBACK_SEGMENTS - List of rollback segments an ORACLE instance acquires at database
startup. Also optionally LICENSE_MAX_SESSIONS, LICENSE_SESSION_WARNING and
LICENSE_MAX_USERS.
What is the difference between NAME_IN and COPY ?
COPY is a packaged procedure and writes a value into a field.
NAME_IN is a packaged function and returns the contents of the variable whose name you pass to it.
How do you implement the If statement in the Select Statement
We can implement the IF statement in a SELECT statement by using DECODE, e.g.
select DECODE(EMP_CAT, '1', 'First', '2', 'Second', NULL) from emp;
Here NULL takes the place of the ELSE branch.
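The same logic can also be written with the ANSI CASE expression, which many find more readable (table and column are illustrative):

```sql
SELECT CASE emp_cat
         WHEN '1' THEN 'First'
         WHEN '2' THEN 'Second'
         ELSE NULL
       END AS category
FROM   emp;
```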
How many rows will the following SQL return?
Select * from emp Where rownum = 10;
No rows
Can dual table be deleted, dropped or altered or updated or inserted?
Yes

DBA interview Question with Answer Part 13
Why it is not necessary to take UNDO backup
In fact, when you perform a transaction, redo entries are generated; in just the same way,
whenever a change happens to the UNDO tablespace or UNDO segments, Oracle generates redo
entries for it.
So even though you do not back up UNDO, you have the redo entries through which you can
recover or roll back the transactions.

What happens with the datafile during hot backup process?
The below three actions happen during the hot backup process:
1. The tablespace is checkpointed.
2. The checkpoint SCN in the datafile headers is frozen and stops incrementing with further checkpoints.
3. Full images of changed DB blocks are written to the redo logs.
Why more redologs are generated during hotbackup?
When a hot backup begins, the initial checkpointing causes the datafiles that comprise the
tablespace to log full images of changed DB blocks to the redo logs. Normally Oracle logs an entry
in the redo logs for every change in the database, but it does not log the whole image of the
database block. By logging full images of changed DB blocks to the redo logs during hot backup
mode, Oracle eliminates the possibility of the backup containing fractured blocks and guarantees
that, in the event of a recovery, any fractured blocks that might be in the backup copy of the
datafile will be resolved by replacing them with the full image of the block from the redo logs.
How do you increase the performance of % like operator?
The % placed after the search word (LIKE 'ss%') enables the use of an index, if one is
specified on the column. This performs better than the other two ways of using %: before the
search word (LIKE '%ss') and both before and after it (LIKE '%ss%'), which prevent ordinary index use.
What is cache Fusion Technology?
Cache fusion treats multiple buffer caches as one joint global cache. This solves the issues like data
consistency internally, without any impact on the application code or design. Cache fusion technology
eases the process of a very high number of concurrent users and SQL operations without
compromising data consistency.
Do you have idea about reports server?
Reports server is also a component of the middle tier and is hosted in the same node of the
concurrent processing server. Reports server is used to produce business intelligence reports.
What is importance of replication and their use in oracle?
Replication is the process of copying and maintaining database objects in multiple databases that
make up a distributed database system. Changes applied at one site are captured and stored locally
before being forwarded and applied at each of the remote locations. Replication provides users with fast,
local access to shared data, and protects availability of applications because alternate data access
options exist. Even if one site becomes unavailable, users can continue to query or even update the
remaining locations.
In simple replication, you create a snapshot, a table corresponding to the query's column list. When
the snapshot is refreshed, that underlying table is populated with the results of the query. As data
changes in a table in the master database, the snapshot is refreshed as scheduled and moved to the
replicated database.
Advanced replication allows the simultaneous transfer of data between two or more Master Sites.
There are considerations to keep in mind when using multi-master replication. The important ones are
sequences (which cannot be replicated), triggers (which can turn recursive if you're not careful) and
conflict resolution.


What is the basic difference between Cloning and Standby databases?
The clone database is a copy of the database which can be opened in read write mode. It is treated
as a separate copy of the database that is functionally completely separate. The standby database is
a copy of the production database used for disaster protection. In order to update the standby
database; archived redo logs from the production database can be used. If the primary database is
destroyed or its data becomes corrupted, one can perform a failover to the standby database, in
which case the standby database becomes the new primary database.
Why we are using materialized view instead of a table?
Materialized views are basically used to increase query performance since it contains results of a
query. They should be used for reporting instead of a table for a faster execution.
Which BG process refreshes the materialized view?
Job Queue Process
What is the importance of transportable Tablespace in oracle?
The transportable tablespace feature enables us to transport data objects across different platforms.
Moving data using transportable tablespaces can be much faster than performing either export/import
or unload/load of the same data, because transporting a tablespace only requires copying the
datafiles and integrating the tablespace structure information.
Can we reduce the size of TEMP datafile?
Yes, we can reduce the space of the TEMP datafile. Prior to Oracle 11g you had to recreate the
datafile, but in Oracle 11g you can reduce the space of a TEMP datafile by shrinking the TEMP
tablespace; it is a new feature in 11g. The DBA_TEMP_FILES view can be very useful in
determining which tempfile to shrink.
SELECT tablespace_name, ROUND(bytes/1048576/1024, 2) "IN GB", file_id, file_name
FROM dba_temp_files;
ALTER TABLESPACE temp SHRINK TEMPFILE 'D:\ORACLE\ORADATA\SADHAN\TEMP02.DBF' KEEP 5G;
New data dictionary view to check free space:
Select * from dba_temp_free_space;
How can we move table from one schema to another?
The simplest way is to log in with the target schema (SCOTT here) and use the command below to
copy the EMP table from the HR schema (drop the original afterwards if you really want a move).
You can also use SQL*Plus COPY or export/import for that.
CREATE TABLE EMP
AS SELECT * FROM HR.EMP;
How we can prevent fragmentation in oracle Tablespace.
Tablespace fragmentation can be prevented by using PCTINCREASE command. PCTINCREASE is
the percentage a new subsequent extent will grow. This value should be ideally set to 0 or 100 to
avoid tablespace fragmentation. Other values for PCTINCREASE result in odd extent
sizes. Ideally, every extent of all segments should be the same size.
Do you know the use of iostat, vmstat and netstat?
iostat reports on disk and terminal I/O activity.
Vmstat reports on virtual memory statistics for processes, disk, tape and CPU activity.
Netstat reports on the contents of network data structures.
Name the different types of indexes available in Oracle?
Oracle provides several Indexing schemas
B-tree index - retrieves a small amount of information from a large table.
Global and local indexes - relate to partitioned tables and indexes.
Reverse key index - most useful for Oracle Real Application Clusters applications.
Domain index - an application-specific index type.
Hash cluster index - an index that is defined specifically for a hash cluster.
What is a user process trace file?
It is an optional file which is produced by user session.
It is generated only if the value of SQL_TRACE parameter is set to true for a session.
SQL_TRACE parameter can be set at database, instance, or session level.
If it set at instance level, trace file will be created for all connected sessions.
If it is set at session level, trace file will be generated only for specified session.
The location of user process trace file is specified in the USER_DUMP_DEST parameter.
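Enabling and disabling the trace at session level can be sketched as:

```sql
-- enable SQL trace for the current session
ALTER SESSION SET sql_trace = TRUE;
-- ... run the statements to be traced ...
ALTER SESSION SET sql_trace = FALSE;
-- the trace file then appears in the USER_DUMP_DEST directory
```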
How can you use automatic PGA memory management with oracle 9i or above?
Set the WORKAREA_SIZE_POLICY parameter to AUTO and set PGA_AGGREGATE_TARGET to the total amount of memory Oracle should divide among all sessions' work areas.
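For example (the target size is purely illustrative; size it for your workload):

```sql
ALTER SYSTEM SET workarea_size_policy = AUTO;
ALTER SYSTEM SET pga_aggregate_target = 1G;
```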

When a user comes to you and asks that a particular SQL query is taking more time. How will
you solve this?
If you find that a particular query is taking time to execute, take a SQL trace with explain plan; it
will show how the SQL query is executed by Oracle, and depending on that report you can tune the
query or the database.
Then determine the table size and check what percentage of the table's data the user's query needs.
For example: a table has 10000 records but the user wants to fetch only 5 rows, yet in that query Oracle
does a full table scan. A full table scan for only 5 rows is not good, so create an index on that
particular column.
If the user requirement is more than 80% of the data in the table, then creating an index will
again give poor performance, because Oracle will get contention on the DB buffer cache: the index
blocks need to be picked up in addition to almost all the blocks of the table. Hence it will
increase the I/O, and other users' requests may also get slow performance, since existing data in
the cache is flushed out and reloaded.
Additionally, check system-level performance: is there any problem with DBWn, i.e. is DBWn
writing modified data from the buffer to the datafiles slowly, and is the user server process waiting
for space in the buffer cache?
Check the alert log file too.
Check whether the user query needs a join or sorting.
Check whether there is enough space in the temporary tablespace.
If the user is still facing the issue, drill down to the table's block level: the table may need
defragmenting if the high-water mark has been reached.
What is Difference between sqlnet.ora, listener.ora, tnsname.ora network file?
sqlnet.ora: The normal location for this file is D:\oracle\ora92\network\admin. The sqlnet.ora file is the
profile configuration file, and it resides on the client machines and the database server. The sqlnet.ora
is an optional text file that contains basic configuration details used by SQL*Net, such as the default
domain name, what path to take in resolving the name of an instance, the order of naming methods,
authentication services, etc.
listener.ora: The normal location for this file is D:\oracle\ora92\network\admin on the database
server. It is a server-side file that configures the listener: the protocol addresses it listens on, the
database services it knows about, and listener control parameters.
tnsnames.ora: The normal location for this file is D:\oracle\ora92\network\admin. This file typically
resides on the client (it can also exist on the server, which then acts as a client). It maps net service
names to connect descriptors; the client uses it to obtain the connection details for the desired
database. After configuration changes on either server or client, ensure you can still connect to the
database through the listener running on the server.
What is the address of official oracle support?
Metalink.oracle.com or support.oracle.com
Is the password in oracle case sensitive?
In Oracle 10g and earlier versions NO; since 11g, YES.



What is the difference between the IS NULL and IS NOT NULL operators?
The IS NULL and IS NOT NULL operators are used to find the NULL and not NULL values
respectively. The IS NULL operator returns TRUE, when the value is NULL; and FALSE, when the
value is not NULL. The IS NOT NULL operator returns TRUE, when the value is not NULL; and
FALSE, when the value is NULL.

DBA Interview Questions with Answer Part14
Why drop table is not going into Recycle bin?
If you use the SYS user to drop any table, the object will not go to the recycle bin, as there is
no recycle bin for the SYSTEM tablespace, even when the recyclebin parameter is already SET to TRUE.
Select * from v$parameter where name = 'recyclebin';
Show parameter recyclebin;
How to recover password in oracle 10g?
You can query the table user_history$; the password (hash) history is stored in this table.
How to detect inactive session to kill automatically?
You can use SQLNET.EXPIRE_TIME to detect dead connections (from abnormal disconnections) by
specifying a time interval in minutes at which a probe message verifies that client/server connections
are active. Setting the value greater than 0 ensures that a connection is not left open
indefinitely due to abnormal client termination. If the probe finds a terminated connection, or a
connection that is no longer in use, it returns an error, causing the server process to exit.
SQLNET.EXPIRE_TIME=10
Why we need CASCADE option with DROP USER command whenever dropping a user and
why "DROP USER" commands fails when we don't use it?
If the user owns any objects then YES, in that case you are not able to drop the user without the
CASCADE option. The DROP USER ... CASCADE command drops the user along with all of its
associated objects. Remember it is a DDL command; after its execution a rollback
cannot be performed.
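A short sketch (the user name is an example):

```sql
DROP USER app_user;           -- fails with ORA-01922 if app_user owns objects
DROP USER app_user CASCADE;   -- drops the user and all of its objects
```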
Can you suggest the best steps to refresh a Database?
Refreshing the database is nothing but applying the change on one database (PROD) to another
(Test). You can use import/export and RMAN method for this purpose.
Import/Export Method: If your database is small, or if you need to refresh only a particular schema,
it is always better to use this method.
1. Export the dump file from source DB
2. Drop and recreate Test environment User.
3. Import the dump to destination DB.
RMAN Method: Nowadays RMAN is the most likely tool to be used for backup and recovery. It is a relatively
easier and better method for a full database refresh, and it takes less time compared to the
import/export method. Here you can also refresh to a particular SCN.
#!/usr/bin/ksh
export ORAENV_ASK='NO'
export ORACLE_SID=PRD
/usr/local/bin/oraenv
export NLS_LANG=American_america.us7ascii;
export NLS_DATE_FORMAT="Mon DD YYYY HH24:MI:SS";
$ORACLE_HOME/bin/rman target / nocatalog log=/tmp/duplicate_tape_TEST.log <<EOF
connect auxiliary sys/PASSWORD@TEST;
run
{
allocate auxiliary channel aux1 device type disk;
set until SCN 42612597059;
duplicate target database to "TEST"
pfile='/u01/app/xxxx/product/10.2.0/db_1/dbs/initTEST.ora' NOFILENAMECHECK;
}
EOF
How will we know the IP address of our system in Linux environment?
Either use the ifconfig command or ip addr show.
They will list all IP addresses, and if you have Oracle 9i or later you can also query from the SQL prompt:
SELECT UTL_INADDR.GET_HOST_ADDRESS "Host Address", UTL_INADDR.GET_HOST_NAME
"Host Name" FROM DUAL;
Can we create Bigfile Tablespace for all databases?
In fact the question is whether we can create a bigfile tablespace for every database. Yes, you can, but it is not ideal in every case: if smallfile tablespaces suit your workload there is no reason to create bigfile ones, and the impact of bigfile depends on your requirements and storage.
A bigfile tablespace has a single very large datafile which can contain up to 4G (about 4 billion) blocks, i.e. roughly 8 TB up to 128 TB depending on the block size.
Using a single large datafile reduces the requirement on the SGA and control file and allows operations such as resizing at the tablespace level. It is ideal for ASM and logical devices supporting striping.
Avoid using bigfile tablespaces where there is limited space availability. For more details on the impact, advantages and disadvantages of bigfile tablespaces, see my blog.
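A hypothetical example (the path and sizes are assumptions):

```sql
-- Single large datafile; no need to add more datafiles later
CREATE BIGFILE TABLESPACE big_data
  DATAFILE '/u01/app/oracle/oradata/PRD/big_data01.dbf' SIZE 10G
  AUTOEXTEND ON NEXT 1G;

-- Resize works directly at the tablespace level
ALTER TABLESPACE big_data RESIZE 20G;

-- BIGFILE column shows YES for bigfile tablespaces
SELECT tablespace_name, bigfile FROM dba_tablespaces;
```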
Can you give more explanation on logfile states?
CURRENT state means that redo records are currently being written to that group. It remains current until a log switch occurs; only one redo group can be current at a time.
If a redo group contains redo records belonging to a dirty buffer, that group is said to be in the ACTIVE state. As we know, log files keep changes made to data blocks, and data blocks are modified in the buffer cache (dirty blocks). These dirty blocks must be written to disk (from RAM to permanent media) before the group can be reused.
When a redo log group contains no redo records belonging to a dirty buffer it is in the INACTIVE state. These inactive redo logs can be overwritten.
One more state is UNUSED: initially, when you create a new redo log group, its log file is empty; at that time it is UNUSED. Later it can be in any of the states mentioned above.
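You can observe these states from v$log:

```sql
-- One group should be CURRENT; the others ACTIVE, INACTIVE or UNUSED
SELECT group#, sequence#, archived, status
FROM v$log
ORDER BY group#;

-- Force a log switch and re-query to watch the states change
ALTER SYSTEM SWITCH LOGFILE;
```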
What is difference between oracle SID and Oracle service name?
The Oracle SID is the unique name that identifies your instance/database, whereas the service name is a TNS alias which can be the same as or different from the SID.
How to find session for Remote users?
-- To return session id on remote session:
SELECT distinct sid FROM v$mystat;
-- To return your session id in a remote environment:
Select sid from v$mystat@remote_db where rownum=1;
We have a complete cold Backup taken on Sunday. The database crashed on Wednesday.
None of the database files are available. The only files we have are the taped backup archive
files till Wednesday. Is there a possibility of recovering the database up to the most recent archive which we have on the tape using the cold backup?
Yes, if you have all the archive logs since the cold backup then you can recover up to your last available archived log.
Steps:
1) Restore all backup datafiles, and controlfile. Also restore the password file and init.ora if you lost
those too. Don't restore your redo logs if you backed them up.
2) Make sure that ORACLE_SID is set to the database you want to recover
3) startup mount;
4) Recover database using backup controlfile;
At this point Oracle should start applying all your archive logs, assuming that they're in
log_archive_dest
5) alter database open resetlogs;
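The steps above can be sketched in SQL*Plus (after restoring the files and setting ORACLE_SID in the shell):

```sql
STARTUP MOUNT;

-- Apply the archived logs; answer AUTO at the prompt to apply them all
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;

-- Open with a new incarnation, since the online redo logs are gone
ALTER DATABASE OPEN RESETLOGS;
```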
How to check RMAN version in oracle?
If you want to check RMAN catalog version then use the below query from SQL*plus
SQL> Select * from rcver;
If you want to check simply database version.
SQL> Select * from v$version;
What is the minimum size of Temporary Tablespace?
1041 KB
Difference b/w image copies and backup sets?
An image copy is identical, byte by byte, to the original datafile, control file, or archived redo log file.
RMAN can write blocks from many files into the same backup set but can't do so in the case of an image copy.
An RMAN image copy and a copy you make with an operating system copy command such as dd
(which makes image copies) are identical. Since RMAN image copies are identical to copies made
with operating system copy commands, you may use user-made image copies for an RMAN restore
and recovery operation after first making the copies known to RMAN by using the catalog command.
You can make image copies only on disk, not on a tape device (for example: backup as copy database;). Therefore, you can use the backup as copy option only for disk backups, and the backup as backupset option is the only option you have for making tape backups.
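In RMAN the two forms look like this (the catalog example path is an assumption):

```sql
-- Backup set: the default, and the only choice for tape (SBT) channels
BACKUP AS BACKUPSET DATABASE;

-- Image copy: byte-for-byte copies, disk only
BACKUP AS COPY DATABASE;

-- Make a user-managed copy known to RMAN for use in restore/recovery
CATALOG DATAFILECOPY '/u01/backup/users01.dbf';
```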
How can we see the C:\ drive free space capacity from SQL?
Create an external table to read the data from a file produced by a batch script such as the one below.
Create a BAT file free.bat as:
@setlocal enableextensions enabledelayedexpansion
@echo off
for /f "tokens=3" %%a in ('dir c:\') do (
set bytesfree=%%a
)
set bytesfree=%bytesfree:,=%
echo %bytesfree%
endlocal && set bytesfree=%bytesfree%
You can create a scheduler job to run the above free.bat and write its output to free_space.txt inside the Oracle directory, and then read that file through the external table.
Differentiate between Tuning Advisor and Access Advisor?
The tuning Advisor:
It suggests indexes that might be very useful.
It suggests query rewrites.
It suggests SQL profile
The Access Advisor:
It suggests indexes that may be useful.
It gives suggestions about materialized views.
It gives suggestions about table partitions (in the latest versions of Oracle).
How to give Access of particular table for particular user?
GRANT SELECT (EMPLOYEE_NUMBER), UPDATE (AMOUNT) ON HRMS.PAY_PAYMENT_MASTER
TO SHAHID;
The below query checks the SELECT privilege on the table PAY_PAYMENT_MASTER in the HRMS schema (if the connected user is different from the schema owner):
SELECT PRIVILEGE
FROM ALL_TAB_PRIVS_RECD
WHERE PRIVILEGE = 'SELECT'
AND TABLE_NAME = 'PAY_PAYMENT_MASTER'
AND OWNER = 'HRMS'
UNION ALL
SELECT PRIVILEGE
FROM SESSION_PRIVS
WHERE PRIVILEGE = 'SELECT ANY TABLE';
What are the problem and complexities if we use SQL Tuning Advisor and Access Advisor
together?
I think both the tools are useful for resolving SQL tuning issues. SQL Tuning Advisor seems to be
doing logical optimization mainly by checking your SQL structure and statistics and the SQL Access
Advisor does suggest good data access paths, that is mainly work which can be done better on disk.
Both SQL Tuning Advisor and SQL Access Advisor tools are quite powerful as they can source the
SQL they will tune automatically from multiple different sources, including SQL cache, AWR, SQL
tuning Sets and user defined workloads.
Regarding the complexity and problems of using these tools, or how you can better use them together, check the Oracle documentation.
DBA Interview Questions with Answer Part 15
Can you differentiate Redo vs. Rollback vs. Undo?
I find there is always some confusion when talking about Redo, Rollback and Undo. They all sound
like pretty much the same thing or at least pretty close.
Redo: Every Oracle database has a set of (two or more) redo log files. The redo log records all
changes made to data, including both uncommitted and committed changes. In addition to the online
redo logs Oracle also stores archive redo logs. All redo logs are used in recovery situations.
Rollback: More specifically rollback segments. Rollback segments store the data as it was before
changes were made. This is in contrast to the redo log which is a record of the insert/update/deletes.
Undo: Rollback segments. They are really one and the same. Undo data is stored in the undo tablespace. Undo is helpful in building a read-consistent view of data.
Alert. log showing this error ORA-1109 signalled during: alter database close. What is the
reason behind it?
The ORA-1109 error just indicates that the database is not open for business. You'll have to open it
up before you can proceed.
It may be that while you are shutting down the database, somebody is trying to open it: a failed attempt to open the database while shutdown is in progress. Wait for the shutdown to complete successfully and then open the database again for use. Alternatively, you may have to restart your Oracle services in a Windows environment.
Which factors are to be considered for creating index on Table? How to select column for
index?
Creation of an index on a table depends on the size of the table and the volume of data. If the table is large and we need only a few rows for a select or a report, then we should create an index. There are some basic criteria for selecting a column for indexing, such as cardinality and frequent usage in the WHERE condition of select queries. Business rules also force index creation, such as a primary key, because configuring a primary key or unique key automatically creates a unique index.
It is important to note that creating too many indexes would affect the performance of DML on the table, because a single transaction would need to operate on the various index segments and the table simultaneously.
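For example, using the PAY_LOAN_TRANS table referenced elsewhere in this document (the index and constraint names are illustrative):

```sql
-- High-cardinality column used frequently in WHERE clauses
CREATE INDEX pay_loan_trans_emp_idx
  ON pay_loan_trans (employee_number);

-- Adding a primary key creates its unique index automatically
ALTER TABLE pay_loan_trans
  ADD CONSTRAINT pay_loan_trans_pk PRIMARY KEY (document_number);
```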
What is Secure External password Store (SEPS)?
Through the use of SEPS you can store password credentials for connecting to databases by using a client-side Oracle wallet; this wallet stores sign-in credentials. This feature has been available since Oracle 10g. Thus application code, scheduled jobs and scripts no longer need embedded usernames and passwords. This reduces risk because the passwords are no longer exposed, and password management policies are more easily enforced without changing application code whenever the username or password changes.
Differentiate DB file sequential read wait/DB File Scattered Read?
Sequential read is associated with index reads whereas scattered read has to do with full table scans. A sequential read reads a block into contiguous memory, while a scattered read gets multiple blocks and scatters them into the buffer cache.
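Both events can be compared system-wide from v$system_event:

```sql
SELECT event, total_waits, time_waited
FROM v$system_event
WHERE event IN ('db file sequential read', 'db file scattered read');
```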
I installed Oracle 10g on Windows 7 successfully. I found everything working fine except that Toad is giving a "cannot load oci.dll" error. Is this a compatibility issue?
Read the Toad user guide; you will get important information related to compatibility issues. In fact Toad works with both 32-bit and 64-bit Oracle servers, whereas Toad only works with a 32-bit client. If you need a 64-bit client for other applications, you can install both the 32-bit and 64-bit clients on a single machine and just tell Toad to use the 32-bit client.
What are the differences between Physical/Logical standby databases? How would you decide
which one is best suited for your environment?
Physical standby DB:
As the name, it is physically (datafiles, schema, other physical identity) same copy of the primary
database.
It synchronized with the primary database with Apply Redo to the standby DB.
Logical Standby DB:
As the name suggests, the logical information is the same as the production database, but the physical structure can be different.
It synchronized with primary database though SQL Apply, Redo received from the primary database
into SQL statements and then executing these SQL statements on the standby DB.
We can open a physical standby DB read-only and make it available to application users (only SELECT is allowed during this period). We cannot apply redo logs received from the primary database at this time.
We do not see such issues with logical standby database. We can open the database in normal
mode and make it available to the users. At the same time, we can apply archived logs received from
primary database.
For OLTP large transaction database it is better to choose logical standby database.
How to re-organize schema?
We can use dbms_redefinition package for online re-organization of schema objects. Otherwise using
import/export and data pump utility you can recreate or re-organize your schema.
To configure RMAN Backup for 100GB database? How we would estimate backup size and
backup time?
Check the actual size of your database; the RMAN backup size mostly depends on the actual used size of the database.
SELECT SUM(BYTES)/1024/1024/1024 FROM DBA_SEGMENTS;
Backup time depends on your hardware configuration of your server such as CPU, Memory, and
Storage.
Later you can also minimize the backup time by configuring multiple channels with the backup scripts.
How can you control number of datafiles in oracle database?
The db_files parameter is a "soft limit " parameter that controls the maximum number of physical OS
files that can map to an Oracle instance. The maxdatafiles parameter is a different - "hard limit"
parameter. When issuing a "create database" command, the value specified for maxdatafiles is stored in the Oracle control file; the default value is 32. The maximum number of database files can be set with the init parameter db_files.
Regardless of the setting of this parameter, the maximum per database is 65533 (may be less on some operating systems), and the maximum number of datafiles per tablespace is OS dependent, usually 1022.
The limit is also affected by the database block size and by the DB_FILES initialization parameter for a particular instance. Bigfile tablespaces can contain only one file, but that file can have up to 4G blocks.
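A quick sketch of checking and raising the soft limit:

```sql
-- Current soft limit for this instance
SHOW PARAMETER db_files;

-- db_files is static, so the change needs a restart to take effect
ALTER SYSTEM SET db_files = 400 SCOPE = SPFILE;

-- Number of datafiles currently in use
SELECT COUNT(*) FROM dba_data_files;
```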
What is Latches and why they are used in oracle?
A latch is a serialization mechanism. It is used to gain access to a shared data structure: latching the structure prevents others from modifying it while you are modifying it.
Why it is not necessary to take UNDO backup?
In fact it is not necessary to take UNDO tablespace backup either with COLD or HOT backup scripts
but many of DBA include UNDO tablespace in their backup script.
When you perform transactions, redo entries are generated. Just as for any other tablespace, whenever any change happens to the UNDO tablespace or UNDO segments, Oracle generates redo entries. So even if you have not backed up the UNDO tablespace, you have the redo entries through which you can recover or roll back the transactions.
What should be effect on DB performance if virtual memory used to store SGA parameter?
For optimal performance in most systems, the entire SGA should fit in real memory. If it does not, and
if virtual memory is used to store parts of it, then overall database system performance can decrease
dramatically. The reason for this is that portions of the SGA are paged (written to and read from disk)
by the operating system.
What is the role of lock_sga parameter?
The LOCK_SGA parameter, when set to TRUE, locks the entire SGA into physical memory. This
parameter cannot be used with automatic memory management or automatic shared memory
management.
What is CSSCAN?
CSSCAN (Database Character Set Scanner) is a SCAN tool that allows us to see the impact of a
database character set change or assist us to correct an incorrect database nls_characterset setup.
This helps us to determine the best approach for converting the database characterset.
Differentiate between co-related sub-query and nested query?
A nested query is one in which the inner query is evaluated only once and from that result the outer query is evaluated, whereas a correlated subquery is one in which the inner query references the outer query and is therefore evaluated multiple times, once for each row of the outer query.
Example: a nested subquery used with the IN() clause (the inner query runs once):
SELECT EMPLOYEE_NUMBER, LOAN_CODE, DOCUMENT_NUMBER, LOAN_AMOUNT
FROM PAY_LOAN_TRANS
WHERE EMPLOYEE_NUMBER IN (SELECT EMPLOYEE_NUMBER
FROM PAY_EMPLOYEE_PERSONAL_INFO
WHERE EMPLOYEE_NUMBER BETWEEN 1 AND 100);
Example: Query used with = operator is Nested query
SELECT * FROM PARTIAL_PAYMENT_SEQUENCE
WHERE SEQCOD = (SELECT MAX(SEQCOD) FROM PARTIAL_PAYMENT_SEQUENCE);
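A correlated subquery, by contrast, references the outer query's row (a sketch using the same hypothetical HR tables):

```sql
-- The inner query depends on e.employee_number, so it is re-evaluated
-- for each row of the outer query
SELECT e.employee_number
FROM pay_employee_personal_info e
WHERE EXISTS (SELECT 1
              FROM pay_loan_trans t
              WHERE t.employee_number = e.employee_number);
```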
One afternoon suddenly you get a call from your application user complaining the database is slow. What will be your first step to solve this issue?
High performance is a common expectation for end users. In fact the database itself is never slow or fast; in most cases a session connected to the database slows down when it receives an unexpected hit. Thus to solve this issue you need to find those unexpected hits. To know exactly what a session is doing, join your query with v$session_wait.
SELECT NVL(s.username, '(oracle)') AS username, s.sid, s.serial#,
sw.event, sw.wait_time, sw.seconds_in_wait, sw.state
FROM v$session_wait sw, v$session s
WHERE s.sid = sw.sid and s.username = 'HRMS'
ORDER BY sw.seconds_in_wait DESC;
Check the events that are waiting for something, try to find out the objects locks for that particular
session. Follow the link: Find Locks : Blockers
Locking is not the only cause affecting performance; disk I/O contention is another case. When a
session retrieves data from the database datafiles on disk to the buffer cache, it has to wait until the
disk sends the data. The wait event shows up for the session as db file sequential read (for index
scan) or db file scattered read (for full table scan). Query link: DB File Sequential Read Wait/ DB File
Scattered Read , DB Locks
When you see the event, you know that the session is waiting for I/O from the disk to complete. To
improve session performance, you have to reduce that waiting period. The exact step depends on
specific situation, but the first technique reducing the number of blocks retrieved by a SQL statement
almost always works.
Reduce the number of blocks retrieved by the SQL statement. Examine the SQL statement to see if it
is doing a full-table scan when it should be using an index, if it is using a wrong index, or if it can be
rewritten to reduce the amount of data it retrieves.
Place the tables used in the SQL statement on a faster part of the disk.
Consider increasing the buffer cache to see if the expanded size will accommodate the additional blocks, therefore reducing the I/O and the wait. Tune the I/O subsystem to return data faster.
DBA Interview Questions with Answer Part 16
What is Oracle database firewall?
The database firewall has the ability to analyze SQL statements sent from database clients and
determine whether to pass, block, log, alert or substitute SQL statements, based on a defined policy.
User can set whitelist and blacklist policy to control the firewall. It can detect the injected SQLs and
block them. The database firewall can do the following:
Monitor and block SQL traffic on the network with whitelist, blacklist and exception list policies.
Protect against application bypass, SQL injection and similar threats.
Report on database activity.
Supports other database as well MS-SQL Server, IBM DB2 and Sybase.
However there are some key issues that it does not address. For example, a privileged user can log in to the OS directly and make local connections to the database, which bypasses the database firewall. For these issues you would need other security options such as Audit Vault, VPD etc.
What is Oracle RAC One Node?
Oracle RAC One Node is a single instance running on one node of the cluster while the 2nd node is in cold standby mode. If the instance fails for some reason then RAC One Node detects it and restarts the instance on the same node, or the instance is relocated to the 2nd node in case there is a failure or fault in the 1st node. The benefit of this feature is that it provides a cold failover solution and automates instance relocation without any downtime, and does not need manual intervention. Oracle introduced this feature with the release of 11gR2 (available with Enterprise Edition).
What are invalid objects in database?
Sometimes schema objects reference other objects such as a view contains a query that reference
table or other view and a PL/SQL subprogram invokes other subprograms or may reference another
tables or views. These references are established at compile time and if the compiler cannot resolve
them, the dependent object being compiled is marked invalid.
An invalid dependent object must be recompiled against the new definition of a referenced object
before the dependent object can be used. Recompilation occurs automatically when the invalid
dependent object is referenced.
How can we check DATAPUMP file is corrupted or not?
Sometimes we may be in a situation where we need to check whether a dumpfile exported a long time back is valid, or our application team says that the dumpfile provided by us is corrupted.
Use SQLFILE Parameter with import script to detect corruption. The use of this parameter will read
the entire datapump export dumpfile and will report if corruption is detected.
impdp system/*** directory=dump_dir dumpfile=expdp.dmp
logfile=corruption_check.log sqlfile=corruption_check.sql
This will write all DDL statements (which will be executed if an import is performed) into the file which
we mentioned in the command.
How can we find elapsed time for particular object during Datapump or Export?
We have an undocumented parameter METRICS in Data Pump to check how long it took to export different object types.
expdp system/oracle directory=dump_dir dumpfile=exp_full.dmp logfile=exp_full.log full=y metrics=y
How to check oracle database service is running in server?
DBAs use these commands on a daily basis to find the running Oracle services on a server.
On Linux: ps -ef | grep pmon
On Windows: tasklist /svc | find "oracle"
How can we find different OS block size?
In Oracle the database block size should be a multiple of the OS block size.
On Windows: fsutil fsinfo ntfsinfo c: | find /i "bytes"
On Linux: tune2fs -l <device> | grep -i "block size"
On Solaris: df -g /tmp
How to find location of OCR file when CRS is down?
If you need to find the location of OCR (Oracle Cluster Registry) but your CRS is down.
When the CRS is down:
Look into ocr.loc file, location of this file changes depending on the OS:
On Linux: /etc/oracle/ocr.loc
On Solaris: /var/opt/oracle/ocr.loc
When CRS is UP:
Set ASM environment or CRS environment then run the below command:
ocrcheck
How can you Test your Standby database is working properly or not?
To test your standby database, make a change to particular table on the production server, and
commit the change. Then manually switch a logfile so those changes are archived. Manually ship the
newest archived redolog file, and manually apply it on the standby database. Then open your standby
database in read-only mode, and select from your changed table to verify those changes are
available. Once you have done this, shut down your standby and start it up again in standby mode.
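A rough sketch of the test (test_table is a hypothetical table; a standby running managed recovery would apply the shipped log automatically):

```sql
-- On the primary: make a change and archive it
INSERT INTO test_table VALUES (1);
COMMIT;
ALTER SYSTEM SWITCH LOGFILE;

-- On the standby: apply the shipped log, then verify read-only
RECOVER STANDBY DATABASE;
ALTER DATABASE OPEN READ ONLY;
SELECT COUNT(*) FROM test_table;

-- Return to standby (recovery) mode
SHUTDOWN IMMEDIATE;
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
```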
What is Dataguard & what is the purpose of Data Guard?
Oracle Dataguard is a disaster recovery solution from Oracle Corporation that has been utilized in the
industry extensively at times of Primary site failure, failover, switchover scenarios.
a) Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise
data.
b) Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor
one or more standby databases to enable production Oracle databases to survive disasters and data
corruptions.
c) With Data Guard, administrators can optionally improve production database performance by
offloading resource-intensive backup and reporting operations to standby systems.
What is role of Redo Transport Services in Dataguard?
It controls the automated transfer of redo data from the production database to one or more archival
destinations. The redo transport services perform the following tasks:
a) Transmit redo data from the primary system to the standby systems in the configuration.
b) Manage the process of resolving any gaps in the archived redo log files due to a network failure.
c) Automatically detect missing or corrupted archived redo log files on a standby system and
automatically retrieve replacement archived redo log files from the
primary database or another standby database.
Is OPatch (the utility) also another type of patch?
OPatch is a utility from Oracle Corp. (a Java-based utility) that helps you in applying interim patches to Oracle's software and rolling back interim patches from Oracle's software. OPatch is also able to report already installed interim patches and can detect conflicts when an interim patch has already been applied. This program requires Java to be available on your system and requires installation of OUI. Thus, from the above discussion, it is not correct to say that OPatch is another type of patch.
When applying a single patch, can you use the OPatch utility?
Yes, you can use OPatch in the case of a single patch. The only type of patch that cannot be applied with OPatch is a patchset.
When applying patchsets, can you use OUI?
Yes, a patchset uses OUI. A patch set contains a large number of merged patches, to change the version of the product or introduce new functionality. Patch sets are cumulative bug fixes that fix all bugs and include all patches since the last base release. Patch sets and the Patch Set Assistant are usually applied through OUI-based product-specific installers.
Can you apply OPatch without downtime?
As you know, to apply a patch your database and listener must be down. When you apply OPatch it will update your current ORACLE_HOME. Thus, to the point: it is not possible with zero downtime in the case of a single instance, but in RAC you can apply OPatch without downtime, as there will be separate ORACLE_HOMEs and separate instances (running one instance on each ORACLE_HOME).
You have a collection of patches (nearly 100) or a patchset. How can you apply only one patch from it?
With napply itself (by providing the patch location and the specific patch id) you can apply only one patch from a collection of extracted patches. For more information check opatch util napply -help; it will give you a clear picture.
For Example:
opatch util napply <patch_location> -id 9 -skip_subset -skip_duplicate
This will apply only the patch id 9 from the patch location and will skip duplicate and subset of patch
installed in your ORACLE_HOME.
If both CPU and PSU are available for given version which one, you will prefer to apply?
From the above discussion it is clear that once you apply a PSU, the recommended way is to apply the next PSU only. In fact, there is no need to apply a CPU on top of a PSU, as the PSU contains the CPU (if you apply a CPU over a PSU it will be considered as trying to roll back the PSU and will in fact require more effort). So if you have not yet decided on or applied either of the patches, then I suggest you use PSU patches. For more details refer: Oracle Products [ID 1430923.1], ID 1446582.1
PSU is superset of CPU then why someone choose to apply a CPU rather than a PSU?
CPUs are smaller and more focused than PSUs and mostly deal with security issues. This is theoretically a more conservative approach and can cause less trouble than a PSU, as it has fewer code changes in it. Thus anyone who is concerned only with security fixes and not functionality fixes may find a CPU the better approach.
Will Patch Application affect System Performance?
Sometimes applying certain patch could affect Application performance of SQL statements. Thus it is
recommended to collect a set of performance statistics that can serve as a baseline before we make
any major changes like applying a patch to the system.
What is your day to day activity as an Apps DBA?
As an Apps DBA we monitor the system for different alerts (Enterprise Manager or third-party tools are used for configuring the alerts): tablespace issues, CPU consumption, database blocking sessions etc., plus regular maintenance activities like cloning, patching, custom code migrations (provided by developers), and working with user issues.
How often do you use patch in your organization?
Usually for non-production the patching requests come to around 4-6 per week, and the same patches will be applied to production in the outage or maintenance window.
Production has a weekly maintenance window (e.g. Sat 6PM to 9PM) where all the changes (patches) will be applied.
How often do you use cloning in your organization?
Cloning happens weekly or monthly depending on the organization's requirements, generally when we need to perform major tasks such as the Oracle Financials annual closing etc.
DBA Interview Questions with Answer Part17
What are the common Tasks or Responsibilities for a Core DBA?
DBA responsibilities are varied from organization to organization. It depends on the organization
nature of work. Following are the overall responsibility for a DBA:
1. User Management: Create new user, remove existing user and provide the rights as
per the requirement.
2. Manage database storage (Timely space management of Tablespace or datafile)
3. Administrator users and security.
4. Manage Schema object.
5. Monitor and Manage database performance.
6. Perform backup and recovery.
7. Schedule and automate jobs.
8. Taking database snapshot or health report.
9. Working with user issues for managing overall smooth running of database.
What are your day to day activities as an APPS DBA?
Compared to a Core DBA, the Apps DBA role includes all the responsibilities of a Core DBA plus upgrades, cloning and patching. As an Apps DBA we monitor the system for different alerts (EM or third-party tools are used for configuring the alerts), tablespace issues, CPU consumption, database blocking sessions etc., plus regular maintenance activities like cloning, patching and custom code migration (provided by developers), and working with user issues.
What type of failure occurs when oracle fails due to OS or Hardware failure?
Instance Failure
An Oracle system change number (SCN):
A. is a value that is incremented whenever a dirty read occurs.
B. is incremented whenever a deadlock occurs.
C. is a value that keeps track of explicit locks
D. is a value that is incremented whenever database changes are made?
Answer: D
Which process read/write data from datafiles?
There is no background process which reads data from the datafiles into the database buffer cache. Oracle creates server processes to handle requests from connected user processes. A server process communicates with the user process and interacts with Oracle to carry out requests from the associated user process.
For example: If a user queries some data not already in database buffer of the SGA, then the
associated server process reads the proper data block from the datafiles into the SGA.
The DBWR background process is responsible for writing modified (dirty) blocks from the buffer cache permanently to the datafiles on disk.
Why RMAN incremental backup fails even though full backup exists?
If you have taken an RMAN full backup using the command BACKUP DATABASE: a level 0 backup is physically identical to a full backup; the only difference is that the level 0 backup is recorded as an incremental backup in the RMAN repository, so it can be used as the parent for a level 1 backup. Simply put, a full backup taken without level 0 cannot be considered a parent backup from which you can take a level 1 backup.
How can you change or rename the database name?
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
The above command will create a text control file in the user_dump_dest directory. Change the name of the database in that file and also in the init.ora file.
Now startup your database in nomount phase using the modified pfile and then run the modified
controlfile script.
SQL> STARTUP NOMOUNT;
SQL> @D:\Backup\controlfile.txt
SQL> ALTER DATABASE OPEN RESETLOGS;
You can use DBNEWID utility NID for this purpose. For more information: DBNEWID, Changing
DBNAME
Temp Tablespace is 100% FULL and there is no space available to add datafiles to increase
temp tablespace. What can you do in that case to free up TEMP tablespace?
Try to close some of the idle sessions connected to the database; this will help you free some TEMP space. Otherwise you can also use ALTER TABLESPACE temp DEFAULT STORAGE (PCTINCREASE 1) followed by ALTER TABLESPACE temp DEFAULT STORAGE (PCTINCREASE 0), which on dictionary-managed tablespaces prompts SMON to coalesce the free extents.
What is the use of setting GLOBAL_NAMES equal to TRUE?
GLOBAL_NAMES indicates how you might connect to the database. This parameter is either
TRUE or FALSE. If it is set to TRUE, it enforces that a database link has the same name as the remote
database to which it is linking.
What is the purpose of fact and dimension table? What type of index is used with fact table?
Fact and dimension tables are involved in producing a star schema. A fact table contains
measurements while dimension table will contain data that will help to describe the fact table. A
Bitmap index is used with fact table.
If you get a complaint from your application users that the application is running very slow,
where do you start looking first?
Below are some very important steps to identify the root cause of slowness in the application database.
Run the TOP command in Linux to check CPU usage.
Run the VMSTAT, SAR, and PRSTAT commands to get more information on CPU and memory usage and
possible blocking.
Run a STATSPACK report to identify the top 5 events and resource-intensive SQL statements.
If poorly written statements are found, run EXPLAIN PLAN on these statements and see whether a
new index or the use of a HINT brings the cost of the SQL down.
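As a first pass on the OS side, the checks above can be sketched like this (a Linux-specific sketch assuming procfs is mounted; top, vmstat and sar remain the interactive tools):

```shell
# Snapshot of load, memory and the heaviest CPU consumers
cut -d' ' -f1-3 /proc/loadavg                  # 1/5/15-minute load averages
grep -E 'MemFree|SwapFree' /proc/meminfo       # free RAM and swap
ps -eo pid,pcpu,comm --sort=-pcpu | head -5    # top CPU-consuming processes
```

If the load averages and top processes look normal here, the bottleneck is more likely inside the database, and the STATSPACK step becomes the priority.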
How do you add a second or subsequent BLOCK SIZE to an existing database?
In fact the standard block size of an Oracle database cannot be changed after the database is created. The
reason is that Oracle tracks a lot of information based on block numbers; if you changed the block size,
all the block numbers would change and the database would basically have to be re-created. However,
you can add tablespaces with a second or subsequent block size: configure a matching DB_nK_CACHE_SIZE
buffer cache in the parameter file and then create the new tablespace with the BLOCKSIZE clause. The
block size of existing datafiles still cannot be changed.
You need to restore from backup and do not have any control files. What will be your step to
recover the database?
Create a text-based control file, save it on disk at the same location where all the datafiles are located,
then issue the recover command using the BACKUP CONTROLFILE clause.
Shutdown abort; -- if db still open
Startup nomount;
create controlfile database <name>
noresetlogs|resetlogs
archivelog
maxlogfiles 10
maxlogmembers <your value>
maxdatafiles 254
logfile '<online redo log groups>'
datafile '<names of all data files>';
-- CREATE CONTROLFILE leaves the database mounted
recover database [until cancel] [using backup controlfile];
alter database open [noresetlogs/resetlogs];
Use alter database open if you created the control file with NORESETLOGS and have
performed no recovery or a full recovery (without until cancel).
Use alter database open noresetlogs if you created the control file with NORESETLOGS and
performed a full recovery despite the use of the until cancel option.
Use alter database open resetlogs if you created the control file with RESETLOGS or when you
performed a partial recovery.
In below list which SQL phrase is NOT supported by oracle?
A. ON DELETE CASCADE
B. ON UPDATE CASCADE
C. CREATE SEQUENCE [SequenceName]
D. DROP SEQUENCE [SequenceName]
Answer: B
What is the effect on working with Reports when flex/confine mode are ON?
When flex mode is ON, Reports automatically resizes the parent when the child is resized.
When confine mode is ON, an object cannot be moved outside its parent in the layout.
How will you enforce security using stored procedures?
Don't grant users access directly to the tables within the application. Instead, grant the ability to execute a
procedure that accesses the tables. When the procedure executes, it runs with the privileges of the
procedure's owner (definer's rights). Users cannot access the tables except via the procedure.
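A minimal sketch of the idea (EMP, RAISE_SALARY and APP_USER are hypothetical names):

```sql
-- Definer's-rights procedure owned by the schema that owns EMP.
-- Users get EXECUTE on the procedure, never SELECT/UPDATE on EMP.
CREATE OR REPLACE PROCEDURE raise_salary (p_empno NUMBER, p_amount NUMBER) AS
BEGIN
  UPDATE emp SET sal = sal + p_amount WHERE empno = p_empno;
END;
/
GRANT EXECUTE ON raise_salary TO app_user;  -- no direct grant on EMP
```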
What is RAC? What is the benefit of RAC over single instance database?
In Real Application Clusters environments, all nodes concurrently execute transactions against the
same database. Real Application Clusters coordinates each node's access to the shared data to
provide consistency and integrity.
Benefits:
Improve response time
Improve throughput
High availability
Transparency
Can you configure the primary server and standby server on different OS?
NO. The standby database must be on the same database version and the same version of the OS.
If you want users to change their passwords every 60 days, then how will you enforce
this?
Oracle password security is implemented through Oracle PROFILES, which are assigned to
users. The PASSWORD_LIFE_TIME parameter limits the number of days the same password can be used
for authentication.
You have to first create a database PROFILE and then assign each user to this profile, or if you
already have a PROFILE then you just need to alter the above parameter.
create profile Sadhan_users
limit
PASSWORD_LIFE_TIME 60
PASSWORD_GRACE_TIME 10
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX 0
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_LOCK_TIME UNLIMITED;
Then create the user (or assign an already created user) with this profile:
SQL> Create user HRMS identified by oracle profile sadhan_users;
If you have already assigned the profile then you can directly modify the profile parameter:
SQL> Alter profile sadhan_users limit PASSWORD_LIFE_TIME 60;
What actually happens in case of instance recovery?
When an Oracle instance fails, Oracle performs instance recovery when the associated database is
being re-started. Instance recovery occurs in two steps:
Cache recovery: Changes being made to a database are recorded in the database buffer cache as
well as in the redo log files simultaneously. When there is enough data in the database buffer cache, it
is written to the data files. If an Oracle instance fails before this data is written to the data files, Oracle
uses the online redo log files to recover the lost data when the associated database is re-started. This
process is called cache recovery.
Transaction recovery: When a transaction modifies data in a database, the before image of the
modified data is stored in an undo segment, which is used to restore the original values in case the
transaction is rolled back. At the time of an instance failure, the database may have uncommitted
transactions. It is possible that changes made by these uncommitted transactions have been saved
in data files. To maintain read consistency, Oracle rolls back all uncommitted transactions when the
associated database is re-started. Oracle uses the undo data stored in undo segments to accomplish
this. This process is called transaction recovery.
What is the main purpose of CHECKPOINT in an Oracle database?
A checkpoint is a database event which synchronizes the database blocks in memory with the
datafiles on disk. It has two main purposes: to establish data consistency and to enable faster
database recovery. For more information: Discussion on Checkpoint and SCN
Can you change the character set of a database?
No, you cannot simply change the character set of a database; you will need to re-create the database
with the appropriate character set.
What is Cascading standby database?
A CASCADING STANDBY is a standby database that receives its REDO information from another
standby database (not from primary database).
What is the use of the ANALYZE command?
To collect statistics about objects used by the optimizer and store them in the data dictionary, to delete
statistics about an object, to validate the structure of an object, and to identify migrated and chained rows
of a table or cluster.
How will you check active shared memory segment?
ipcs -a
How will you check paging/swapping in Linux?
vmstat -s
prstat -s
swap -l
sar -p
How do you check the number of CPUs installed on a Linux server?
psrinfo -v
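Note that psrinfo is a Solaris command; on Linux the equivalent checks can be sketched as (assuming coreutils and procfs):

```shell
nproc                                # number of available logical CPUs
grep -c '^processor' /proc/cpuinfo   # same count, read from the kernel
```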
When you move Oracle binary files from one ORACLE_HOME server to another server, then
which Oracle utility will be used to make the new ORACLE_HOME usable?
relink all
In which months oracle release CPU patches?
JAN, APR, JUL, OCT
Oracle version 9.2.0.4.0 - what does each number refer to?
The Oracle version number refers to:
9 - Major database release number
2 - Database maintenance release number
0 - Application server release number
4 - Component-specific release number
0 - Platform-specific release number
What does the database do during the mounting process?
While mounting the database, Oracle reads the data from the controlfile, which is used for verifying the
physical database files during the sanity check. Background processes are started before mounting the
database.
When having multiple oracle homes on a single server or client what is the parameter that
points all Oracle installs at one TNSNAMES.ORA file.
TNS_ADMIN
How to implement the multiple controlfile for existing database?
1. Edit the init.ora file, set the CONTROL_FILES parameter with multiple locations
2. Shutdown immediate
3. Copy the controlfile to the multiple locations & confirm against the init.ora CONTROL_FILES parameter
4. Start the database.
5. Use the below query for changes confirmation
select name from v$controlfile;
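In the init.ora the multiplexed setting looks like the following (the paths are hypothetical):

```
# init.ora: multiplexed control files, ideally on separate disks
control_files = ('/u01/oradata/PROD/control01.ctl',
                 '/u02/oradata/PROD/control02.ctl')
```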
DBA Interview Questions with Answer Part 18
How would you decide your backup strategy and timing for backup?
In fact the backup strategy purely depends upon your organization's business needs. If no downtime is
allowed, then the database must run in archivelog mode and you have to take frequent or daily backups. If
sufficient downtime is available and loss of data would not affect your business, then you can run your
database in noarchivelog mode and backups can be taken infrequently, weekly or monthly.
In most cases in an organization, when no downtime is allowed, frequent inconsistent backups are needed
(daily backup), the online redo log files are multiplexed (multiple copies) in different locations, the
database must run in archivelog mode, and Data Guard can be implemented for an extra bit of protection
(to reduce downtime during recovery).
What is JInitiator and what is its purpose?
It is a Java virtual machine provided for running web-based Oracle Forms applications inside a client
web browser. It is implemented as a plug-in or ActiveX object and allows you to specify the use of an
Oracle-certified JVM instead of relying on the default JVM provided by the browser. It is automatically
downloaded to a client machine from the application server. Its installation and update are performed by
the standard plug-in mechanism provided by the browser.
What is the use of large pool, which case you need to set the large pool?
You need to set the large pool if you are using MTS (Multi-Threaded Server) or RMAN backups. The large
pool prevents RMAN and MTS from competing with other subsystems for the same memory. RMAN
uses the large pool for backup and restore when you set the DBWR_IO_SLAVES or
BACKUP_TAPE_IO_SLAVES parameters to simulate asynchronous I/O. If neither of these
parameters is enabled, then Oracle allocates backup buffers from local process memory rather than
shared memory, and there is no use of the large pool.
How can you audit system operations?
SYS connections can be audited by setting the init.ora parameter AUDIT_SYS_OPERATIONS=TRUE
How can you implement Encryption in database?
Data with database can be encrypted and decrypted using package: DBMS_OBFUSCATION_TOOLKIT
How do you list the files in a folder, including hidden files, in Linux?
ls -ltra
How to execute Linux command in Background?
Use "&" at the end of the command, or use the nohup command
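A small sketch of both forms (the file names are arbitrary):

```shell
cd "$(mktemp -d)"                        # work in a scratch directory
sleep 1 &                                # "&" backgrounds the command
echo "background pid: $!"                # $! holds the last background PID
nohup sh -c 'echo done' >job.log 2>&1 &  # nohup survives the shell's logout
wait                                     # block until background jobs finish
cat job.log                              # prints: done
```

Without the explicit redirection, nohup writes the command's output to nohup.out by default.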
What Linux command will control the default permission when file are created?
Umask
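umask masks permission bits off newly created files (the default create mode for a file is 666), so with umask 022 a new file comes out as 644. A quick check (assumes GNU stat on Linux):

```shell
cd "$(mktemp -d)"
umask 022                 # mask group/other write bits
touch report.txt          # created as 666 & ~022 = 644
stat -c '%a' report.txt   # prints: 644
```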
Give the command to display space usage on the LINUX file system?
df -lk
What is the use of iostat/vmstat/netstat command in Linux?
iostat - reports on terminal, disk and tape I/O activity.
vmstat - reports on virtual memory statistics for processes, disk, tape and CPU activity.
netstat - reports on the contents of network data structures.
What are the steps to install Oracle on a Linux system? List two kernel parameters that affect
Oracle installation.
Initially set up disks and kernel parameters, then create the oracle user and DBA group, and finally run the
installer to start the installation process. The SHMMAX & SHMMNI kernel parameters are required to be set
before the installation process.
__________ Parameter change will decrease Paging/Swapping?
Answer: Decreasing the SHARED_POOL_SIZE
_______ Command is used to see the contents of SQL* Plus buffer
Answer: LIST
Transaction per rollback segment is derived from ________
Answer: Processes
LGWR process writes information into ___________
Answer: Redo log files.
A database over all structure is maintained in a file __________
Answer: Control files
What is the use of NVL function?
The NVL function is used to replace NULL values with another or given value.
For Example: NVL (Value, replace value);
What is WITH CHECK OPTION?
The WITH CHECK OPTION clause specifies the level of checking to be done in DML statements. It is used to
prevent changes through a view that would produce rows which are not included in the view's subquery.
How can you track the password change for a user in oracle?
Oracle only tracks the date that the password will expire based on when it was last changed. Thus,
listing the view DBA_USERS.EXPIRY_DATE and subtracting PASSWORD_LIFE_TIME, you can
determine when the password was last changed. You can also check the last password change time
directly from the PTIME column in the USER$ table (on which the DBA_USERS view is based). But if you
have PASSWORD_REUSE_TIME and/or PASSWORD_REUSE_MAX set in a profile assigned to a user
account, then you can reference the dictionary table USER_HISTORY$ for when the password was
changed for this account.
SELECT user$.NAME, user$.PASSWORD, user$.ptime, user_history$.password_date
FROM SYS.user_history$, SYS.user$
WHERE user_history$.user# = user$.user#;
What is the difference between a data block/extent/segment?
A data block is the smallest unit of logical storage for a database object. As objects grow they take
chunks of additional storage that are composed of contiguous data blocks. These groupings of
contiguous data blocks are called extents. All the extents that an object takes when grouped together
are considered the segment of the database object.
What is the difference between SQL*loader and Import utilities?
Both these utilities are used for loading data into the database. The difference is that the Import
utility relies on the data being produced by another Oracle utility (Export), while SQL*Loader is a high-
speed data loading mechanism that allows data produced by other utilities from different data sources
to be loaded. SQL*Loader is mainly used for reading operating system flat files.
Can you list the Step how to create Standby database?
1. Take a full hot backup of Primary database
2. Create standby control file
3. Transfer full backup, init.ora, standby control file to standby node.
4. Modify init.ora file on standby node.
5. Restore database
6. Recover Standby database
7. (Alternatively, RMAN DUPLICATE TARGET DATABASE FOR STANDBY DORECOVER can be
also used)
8. Setup FAL_CLIENT and FAL_SERVER parameters on both sides
9. Put Standby database in Managed Recover mode
How would you activate Physical Standby database in oracle 9i?
Perform below on primary database if available to transfer all pending archive logs to standby:
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER SYSTEM SWITCH LOGFILE;
Now perform below on STANDBY database:
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
Note: Once you activate the standby DB, the relationship between the primary database and the standby
database is lost, and at this time your standby database becomes the primary database.
How to Switch from Primary to Physical Standby database?
Perform below step on Primary Database:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION
SHUTDOWN;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP NOMOUNT;
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
SQL> RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2='DEFER' SCOPE=SPFILE;
Perform below steps on Secondary Database:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2='ENABLE' SCOPE=SPFILE;
How will you list only the empty lines in a file (using grep)?
grep "^$" filename.txt
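The pattern ^$ matches a line whose start is immediately followed by its end, i.e. a completely empty line (a line of spaces will not match). For example:

```shell
printf 'first\n\nsecond\n\n\nthird\n' > sample.txt
grep -n '^$' sample.txt   # -n prefixes each empty line's line number
grep -c '^$' sample.txt   # prints: 3
```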
How will you shutdown your database if SHUTDOWN IMMEDIATE command is already tried
and failed to shutdown the database?
Kill the SMON process (or use SHUTDOWN ABORT).
What is log switch?
The point at which oracle ends writing to one online redo log file and begins writing to another is
called a log switch. Sometimes you can force the log switch by using the command: ALTER SYSTEM
SWITCH LOGFILE;
How can you pass the HINTS to the SQL processor?
Using a comment with a (+) sign immediately after the comment opener, you can pass HINTS to the SQL
engine. For example: /*+ PARALLEL */
Give Example of available DB administrator utilities with their functionality?
SQL*DBA - It allows the DBA to monitor and control an Oracle database.
SQL*Loader - It loads data from standard OS files or flat files into Oracle database tables.
Export/Import - They allow moving existing data in Oracle format to and from an Oracle database.
Can you build indexes online?
YES. You can create and rebuild indexes online. This enables you to update base tables at the same
time you are building or rebuilding indexes on that table. You can perform DML operations while the
index building is taking place, but DDL operations are not allowed. Parallel execution is not supported
when creating or rebuilding an index online.
CREATE INDEX emp_name ON emp (mgr, emp1, emp2, emp3) ONLINE;
If an oracle database is crashed? How would you recover that transaction which is not in
backup?
If the database is in archivelog we can recover that transaction otherwise we cannot recover that
transaction which is not in backup.
What is the benefit of running the DB in archivelog mode over no archivelog mode?
When a database is in noarchivelog mode, whenever a log switch happens some redo log information
is overwritten and lost. In order to avoid this, the redo logs must be archived; this can be achieved by
configuring the database in archivelog mode.
What is SGA? Define structure of shared pool component of SGA?
The System Global Area is a group of shared memory areas dedicated to an Oracle instance. All
Oracle processes use the SGA to hold information. The SGA is used to store incoming data and
internal control information that is needed by the database. You can control the SGA memory by
setting the parameters db_cache_size, shared_pool_size and log_buffer.
The shared pool portion contains three major areas: the library cache (parsed SQL statements, cursor
information and execution plans), the dictionary cache (user account information, privileges, and
segment and extent information) and the buffers for parallel execution messages and control
structures.
You have more than 3 instances running on the Linux box? How can you determine which
shared memory and semaphores are associated with which instance?
Oradebug is an undocumented utility supplied by Oracle. The oradebug help command lists the
commands available.
SQL> oradebug setmypid
SQL> oradebug ipc
SQL> oradebug tracefile_name
How would you extract DDL of a table without using a GUI tool?
Select dbms_metadata.get_ddl('OBJECT','OBJECT_NAME') from dual;
If you are getting high Buffer Busy Waits, then how can you find the reason behind it?
Buffer busy wait means that the queries are waiting for the blocks to be read into the db cache. There
could be the reason when the block may be busy in the cache and session is waiting for it. It could be
undo/data block or segment header wait.
Run the first query below to find the P1, P2 and P3 of a session causing buffer busy waits,
then run the second query, putting in the P1, P2 and P3 values obtained.
SQL> Select p1 "File #",p2 "Block #",p3 "Reason Code" from v$session_wait
Where event = 'buffer busy waits';
SQL> Select owner, segment_name, segment_type from dba_extents
Where file_id = &P1 and &P2 between block_id and block_id + blocks -1;
Can flashback work on database without UNDO and with rollback segments?
No. Flashback query enables us to query our data as it existed in a previous state; in other words, we
can query our data from a point in time before any other users made permanent changes to it. It relies
on undo data, so it requires automatic undo management rather than rollback segments.
Can we have same listener name for two databases?
No
DBA Interview Questions with Answers Part 19
Why we look for CHUNKS_FREE space while tracking fragmentation details query?
CHUNKS_FREE returns the number of chunks of contiguous free space based on the dba_free_space
table. The motive is to find the largest-size chunks of free space within a tablespace. This is because,
as we know, the Oracle server allocates space for segments in units of one extent. When the existing
extent of a segment is full, the server allocates another extent for the segment.
In order to do so, Oracle searches for free space in the tablespace (a contiguous set of data blocks
sufficient to meet the required extent). If sufficient space is not found, an error is returned by the Oracle
server.
What is the impact of NLS/Characterset in database?
NLS is National Language Support and encompasses how to display currency, whether we use a
comma or a dot to separate numbers, how the name of the day is spelled, etc.
Charactersets are how we store data.
For example: US7ASCII is a 7-bit characterset and WE8ISO8859P1 is an 8-bit characterset. It can store
twice as many characters as the 7-bit characterset. If you try to export from an 8-bit characterset
database and import into a 7-bit database, then there is a chance of losing the data that has the high
bit set. If you try from 7-bit to 8-bit, you would not encounter any issues, since the 7-bit characterset is
a subset of the 8-bit characterset, which can hold more types of characters and can support many
countries.
Can we perform RMAN level 1 backup without level 0?
If no level 0 is available, then the behavior depends upon the compatibility mode setting (Oracle
version). If the compatibility mode is less than 10.0.0, RMAN generates a level 0 backup of the file
contents at the time of the backup. If compatibility is greater than or equal to 10.0.0, RMAN copies all
blocks changed since the file was created, and stores the result as a level 1 backup.
What will happen if ARCHIVE process cannot copy an archive redolog to a mandatory archive
log destination?
Oracle will continue to cycle through the other online redolog groups until it returns to the group that the
ARCH process is trying to copy to the mandatory archivelog destination. If the mandatory archivelog
destination copy has not occurred, database operation will be suspended until the copy is successful or
the DBA has intervened to force a log switch.
Can you differentiate between HOTBACKUP and RMAN backup?
For hotbackup we have to put database in begin backup mode, then take backup where as RMAN
would not put database in begin backup mode. In fact RMAN has a number of advantages over
general backup. For more information please check: Benefit of RMAN Backup

How to put Manual/User managed backup in RMAN?
In case of recovery catalog, you can put by using catalog command:
RMAN> CATALOG START WITH '/oraback/backup.ctl';
When you put any SQL statement how oracle responds them internally?
First Oracle checks the syntax and semantics of the statement and looks it up in the library cache; after
that it creates an execution plan. If the data is already in the buffer cache (in the case of an identical
query), it is returned directly to the client. If not, the server process reads the required blocks from the
datafiles into the database buffer cache and then sends the results to the client.
Can we use Same target database as Catalog?
No, the recovery catalog should not reside in the target database (database to be backed up)
because the database can not be recovered in the mounted state.
Differentiate the use of what are PGA and UGA?
When you are running a dedicated server, process information is stored inside the process global
area (PGA), and when you are using a shared server, the process information is stored inside the user
global area (UGA).
How do you automatically force the oracle to perform a checkpoint?
The following are the parameters that can be used by the DBA to adjust the time or interval of how
frequently checkpoints occur in the database.
LOG_CHECKPOINT_TIMEOUT = 3600; # Every one hour
LOG_CHECKPOINT_INTERVAL = 1000; # number of OS blocks.
What is Cluster table in Oracle database?
A Cluster is a schema object that contains one or more tables that all have one or more common
columns. Rows of one or more tables that share the same value in these common columns are
physically stored together within the database. Generally, you should only cluster tables that are
frequently joined on the cluster key columns in SQL statements. Clustering multiple tables improves
the performance of joins, but it is likely to reduce the performance of full table scans, INSERT and
UPDATE statements that modify cluster key values.
Can you differentiate between complete and incomplete recovery?
An incomplete database recovery is a recovery that does not reach the point of failure. The
recovery can be to a point in time, a particular SCN, or a particular archive log, especially in case of a
missing archive log or redolog failure; whereas a complete recovery recovers to the point of failure,
possible when having all the archive log backups.
What is difference between RMAN and Traditional Backup?
RMAN is faster, can perform incremental (changed blocks only) backups, and does not place tablespaces
in hotbackup mode. Check: Benefit of RMAN Backup
What are bind variables and why are they important?
With bind variables in SQL, Oracle can cache related queries as a single statement in the SQL cache
area. This avoids a hard parse each time, which saves on the various locking and latching resources we
use to check object existence and so on.
How to recover database without backup?
If flashback is enabled, then we can recover the database without having a backup; otherwise we
cannot recover the database without a backup.
How to write explicit cursor to avoid oracle exception: no_data_found and too_many_rows?
In PL/SQL, if you write a SELECT statement with an INTO clause it may raise two
exceptions, NO_DATA_FOUND and TOO_MANY_ROWS; to avoid these exceptions you have to write an
explicit cursor (or handle them in an exception block):
EXCEPTION
  WHEN no_data_found THEN
    -- put your code
  WHEN too_many_rows THEN
    -- put your code
  WHEN OTHERS THEN
    -- put your code
END;
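The explicit-cursor alternative can be sketched as follows (EMP and its columns are hypothetical names); fetching row by row raises neither NO_DATA_FOUND (zero rows just end the loop) nor TOO_MANY_ROWS:

```sql
DECLARE
  CURSOR c_emp IS
    SELECT ename FROM emp WHERE deptno = 10;
  v_name emp.ename%TYPE;
BEGIN
  OPEN c_emp;
  LOOP
    FETCH c_emp INTO v_name;
    EXIT WHEN c_emp%NOTFOUND;      -- zero rows: the loop exits immediately
    DBMS_OUTPUT.PUT_LINE(v_name);  -- many rows: each is processed in turn
  END LOOP;
  CLOSE c_emp;
END;
/
```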
What are differences between Reference cursor and Normal cursor?
Reference cursor gives the address of the location instead of putting item directly. It holds the
different type of structures. Normal cursor holds one structure of table.
A reference cursor is a dynamic cursor whereas a normal cursor is a static cursor. With a dynamic
cursor, a single cursor variable can process multiple SELECT statements dynamically at run time,
whereas with a normal cursor we process only one SELECT statement.
What is Pipeline view?
In the case of normal views, whenever you call the view it gets its data from the base tables, whereas in
the case of a pipelined view, calling the view gets data from another intermediate view.
How would you find the performance issue of SQL queries?
Enable the trace file before running your queries.
Then check the trace file using tkprof to create an output file.
According to the explain plan, check the elapsed time for each query,
then tune them accordingly.
What is difference between Recovery and Restoring of database?
Restoring means copying the database object from the backup media to the destination where
actually it is required where as recovery means to apply the database object copied earlier (roll
forward) in order to bring the database into consistent state.
What are the Jobs of SMON and PMON processes?
SMON (System Monitor) performs recovery after instance failure, monitors temporary segments and
extents, cleans temp segments, and coalesces free space. It is a mandatory process of the DB and starts
by default.
PMON (Process Monitor) cleans up failed process resources. In shared server architecture it monitors and
restarts any failed dispatcher or server process. It is a mandatory process of the DB and starts by default.
When you should rebuild index?
In fact, in 90% of cases, never. Rebuild when the data in the index is sparse (lots of holes in the index
due to deletes and updates) and your queries are usually range-based. Also, the index BLEVEL is one of
the key indicators of performance for SQL queries doing index range scans.
What is key preserved table?
A table is said to be a key-preserved table if every key of the table can also be a key of the result of the
join. It guarantees to return only one copy of each row from the base table.
Which of the following is NOT an oracle supported trigger?
A. Before
B. During
C. After
D. Instead of
Answer: B
Which of the following is NOT true about modifying table column?
A. You can drop a column at any time.
B. You can add a column at any time as long as it is a NULL
column.
C. You can increase the number of characters in character columns
or number of digits in numeric columns.
D. You can not increase or decrease the number of decimal places.
Answer: D
How can you find SQL of the Currently Active Sessions?
Join the V$SQL view by SQL address (and hash value) with the V$SESSION view of currently active sessions.
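One way to write that join (a sketch; column availability varies by version):

```sql
SELECT s.sid, s.serial#, s.username, q.sql_text
  FROM v$session s, v$sql q
 WHERE q.address    = s.sql_address
   AND q.hash_value = s.sql_hash_value
   AND s.status     = 'ACTIVE'
   AND s.username IS NOT NULL;   -- skip background processes
```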
If you have an ASM instance that is used by different production databases and it is immediately
shut down, then what happens to the production systems?
In that case the other databases would need to shutdown abort.
How do you move table from one tablespace to another tablespace?
You can use any of the below method to do this:
1. Export the table, drop the table, create definition of table in new tablespace and then import the data
using (imp ignore=y).
2. Create new table in new tablespace then drop the original table and rename temporary table with
original table name.
CREATE TABLE temp_name TABLESPACE new_tablespace as select * from
source_table;
DROP TABLE real_table;
RENAME temp_name to real_table;
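A third common approach, not listed above, is ALTER TABLE ... MOVE (names are hypothetical); note that the move leaves the table's indexes UNUSABLE, so they must be rebuilt:

```sql
-- Moves the segment and all its rows to the new tablespace
ALTER TABLE real_table MOVE TABLESPACE new_tablespace;
ALTER INDEX real_table_pk REBUILD;   -- rebuild each dependent index
```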
DBA Interview Questions with Answer Part 20
What is Checkpoint SCN and Checkpoint Count? How we can check it?
A checkpoint is an event when the database writer flushes the dirty buffers into the datafiles.
This is an ongoing activity, and as a result the checkpoint number is constantly incremented in the
datafile headers and the controlfile; the background process CKPT takes care of this responsibility.
How can you find length of Username and Password?
You can find the length of a username with the query below. The password is hashed (#), so there is no
way to get its length.
You can use special characters ($, #, _) without single quotes, and any other characters must be
enclosed in single quotation marks.
Select length (username), username
from dba_users;
The minimum length for password is at least 1 character where as maximum depends on database
version. In 10g it is restricted to 17 characters long.
What are the restrictions applicable while creating view?
A view can be created referencing tables and views only in the current database.
A view name must not be the same as any table owned by that user.
You can build a view on another view and on procedures that reference views.
What is difference between Delete/Drop/Truncate?
DELETE is a command that only removes data from the table. It is a DML statement. Deleted data can
be rolled back (when you delete, all the data first gets copied into rollback and is then deleted). We can
use a WHERE condition with DELETE to delete particular rows from the table.
Whereas the DROP command removes the table from the data dictionary. This is a DDL statement. We
could not recover a dropped table before Oracle 10g, but the flashback feature of Oracle 10g provides
the facility to recover a dropped table.
TRUNCATE is a DDL command that deletes the data as well as freeing the storage held by the table.
This free space can be used by this table or some other table again. It is faster because it performs the
delete operation directly (without copying the data into rollback).
Alternatively, you can enable row movement for the table and use the SHRINK command after using
the DELETE command.
SQL> Create table test
(
s1 number, s2 number
);
SQL> Select bytes, blocks from user_segments
where segment_name = 'TEST';
Bytes block
---------- -------
65536 8
SQL> insert into test select level, level*3
from dual connect by level <= 3000;
3000 rows created.
SQL> Select bytes, blocks from user_segments
where segment_name = 'TEST';
Bytes block
---------- -------
131072 16
SQL> Delete from test;
3000 rows deleted.
SQL> select bytes, blocks from user_segments
where segment_name = 'TEST';
Bytes block
---------- -------
131072 16
SQL> Alter table test enable row movement;
SQL> Alter table test shrink space;
Table altered
SQL> Select bytes, blocks from user_segments
where segment_name = 'TEST';
Bytes block
---------- -------
65536 8



What is difference between Varchar and Varchar2?
Varchar2 can store up to 4000 bytes whereas Varchar can only store up to 2000 bytes. Varchar can
occupy space for NULL values whereas Varchar2 will not specify any space for NULL values.
What is difference between Char and Varchar2?
CHAR values have a fixed length. They are padded with space characters to match the specified
length, whereas VARCHAR2 values have a variable length. They are not padded with any characters.
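A quick way to see the padding behavior (table and column names here are only illustrative):
SQL> Create table pad_test (c char(10), v varchar2(10));
SQL> Insert into pad_test values ('abc', 'abc');
SQL> Select length(c), length(v) from pad_test;
LENGTH(C) LENGTH(V)
---------- ----------
        10          3
The CHAR column is padded to its full declared length while the VARCHAR2 column stores only the three characters.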
In which Language oracle has been developed?
Oracle is RDBMS package developed using C language.
What is difference between Translate and Replace?
Translate is used for character-by-character substitution whereas Replace is used to substitute one
string with another string.
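A small illustration of the difference (run against DUAL, so nothing here depends on your schema):
SQL> Select translate('aabb', 'ab', '12') from dual;   -- 1122: a->1, b->2, character by character
SQL> Select replace('aabb', 'ab', '12') from dual;     -- a12b: only the string 'ab' is replaced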
What is the fastest query method to fetch data from table?
Using ROWID is the fastest method to fetch data from table.
What is Oracle database Background processes specific to RAC?
LCK0: Instance Enqueue Process
LMS: Global Cache Service Process
LMD: Global Enqueue Service Daemon
LMON: Global Enqueue Service Monitor
Oracle RAC instances use two processes, the Global Cache Service (GCS) and the Global Enqueue
Service (GES), to ensure that each Oracle RAC database instance obtains the blocks that it needs to
satisfy a query or transaction. The GCS and GES maintain records of the statuses of each datafile
and each cached block using a Global Resource Directory (GRD). The GRD contents are distributed
across all of the active instances.
What is SCAN in respect of oracle RAC?
Single Client Access Name (SCAN) is a new Oracle Real Application Clusters (RAC) 11g Release 2
feature that provides a single name for clients to access an Oracle database running in a cluster. The
benefit is that clients using SCAN do not need to change if you add or remove nodes in the cluster.
Why do we have a virtual IP (VIP) in oracle RAC?
Without VIP, when a node fails the client waits for the timeout before getting an error, whereas with VIP,
when a node fails, the VIP associated with it is automatically failed over to some other node and the new
node re-arps the world indicating a new MAC address for the IP. Subsequent packets sent to the VIP
go to the new node, which will send error RST packets back to the clients. This results in the clients
getting errors immediately.
Why query fails sometimes?
Rollback segments dynamically extend to handle large transaction entry loads. A single transaction
may occupy all available free space in the rollback segment tablespace. This situation prevents other
users from using rollback segments. You can monitor the rollback segment status by querying the
DBA_ROLLBACK_SEGS view.
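For example, a minimal monitoring query against the view named above:
SQL> Select segment_name, tablespace_name, status
from dba_rollback_segs;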
What is ADPATCH and OPATCH utility? Can you use both in Application?
ADPATCH is a utility to apply application patches and OPATCH is a utility to apply database patches.
You have to use both in an Applications environment: for applying a patch to the application you use
ADPATCH and for applying a patch to the database you use OPATCH.
What is Automatic refresh of Materialized view and how you will find last refresh time of
Materialized view?
Since Oracle 10g, a complete refresh of a materialized view is done with delete instead of truncate.
To force the refresh to use truncate instead of delete, the parameter
ATOMIC_REFRESH must be set to FALSE.
When it is FALSE, the refresh will be faster, no UNDO will be generated and the whole data will be
inserted.
When it is TRUE, the refresh will be slower, UNDO will be generated and the whole data will be inserted.
Thus we will have access to the data at all times, even while it is being refreshed.
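For example, a complete refresh with truncate behavior can be forced like this (the materialized view name is illustrative; ATOMIC_REFRESH is a parameter of DBMS_MVIEW.REFRESH):
SQL> exec DBMS_MVIEW.REFRESH('MY_MVIEW', method => 'C', atomic_refresh => FALSE);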
If you want to find when the last refresh has taken place, you can query these views: dba_mviews,
dba_mview_analysis or dba_mview_refresh_times.
SQL> select MVIEW_NAME, to_char(LAST_REFRESH_DATE,'YYYY-MM-DD HH24:MI:SS')
from dba_mviews;
-or-
SQL> select NAME, to_char(LAST_REFRESH,'YYYY-MM-DD HH24:MI:SS') from
dba_mview_refresh_times;
-or-
SQL> select MVIEW_NAME, to_char(LAST_REFRESH_DATE,'YYYY-MM-DD HH24:MI:SS')
from dba_mview_analysis;
Why are more archivelogs generated when the database is in begin backup mode?
During begin backup mode datafile headers get frozen, so row information cannot be retrieved; as a
result the entire block is copied to the redo logs, thus more redo is generated and more log switches
occur, in turn producing more archivelogs. Normally only deltas (change vectors) are logged to the redo logs.
The main reason is to overcome the fractured block. A fractured block is a block in which the header
and footer are not consistent at a given SCN. In a user-managed backup, an operating system utility
can back up a datafile at the same time that DBWR is updating the file. It is possible for the operating
system utility to read a block in a half-updated state, so that the block that is copied to the backup
media is updated in its first half, while the second half contains older data. In this case, the block is
fractured.
For non-RMAN backups, use ALTER TABLESPACE ... BEGIN BACKUP or ALTER DATABASE
BEGIN BACKUP. When a tablespace is in backup mode and a change is made to a data block, the
database logs a copy of the entire block image before the change so that the database can
reconstruct this block if media recovery finds that it was fractured.
The block that the operating system reads can be split, that is, the top of the block is written at one
point in time while the bottom of the block is written at another point in time. If you restore a file
containing a fractured block and Oracle reads the block, then the block is considered corrupt.
Why is UNION ALL faster than UNION?
UNION ALL is faster than UNION because UNION ALL will not eliminate the duplicate rows from the
base tables; instead it accesses all rows from all tables according to your query, whereas the UNION
command selects related distinct information from the base tables, like the JOIN command.
Thus if you know that your query returns unique records, always use UNION ALL instead of UNION.
It will give you faster results.
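You can see the difference with a trivial query against DUAL:
SQL> Select 1 from dual union Select 1 from dual;       -- 1 row (duplicate eliminated)
SQL> Select 1 from dual union all Select 1 from dual;   -- 2 rows (no duplicate elimination)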
How will you find your instance is started with Spfile and Pfile?
You can query with V$spparameter view
SQL> Select isspecified, count(*) from v$spparameter
Group by isspecified;
ISSPEC COUNT(*)
------ ----------
FALSE 221
TRUE 39
As isspecified is TRUE with some count, we can say that the instance is running with an spfile. Now try
to start your database with a pfile and run the previous query again.
SQL> Select isspecified, count(*) from v$spparameter
Group by isspecified;
ISSPEC COUNT(*)
------ ----------
FALSE 258
This time you will not find any parameter with isspecified as TRUE; they all come from the pfile, thus
you can say the instance is started with a pfile. Alternatively you can use the queries below:
SQL> show parameter spfile;
SQL> Select decode(count(*), 1, 'spfile', 'pfile' )
from v$spparameter
where rownum=1 and isspecified='TRUE';
Why we need to enable Maintenance Mode?
To ensure optimal performance and reduce downtime during patching sessions, enabling this feature
shuts down the Workflow Business Events System and sets up function security so that Oracle
Applications functions are unavailable to users. This provides a clear separation between normal
runtime operation and system downtime for patching.

Oracle DBA interview Question with Answer Part 21
How to change SQL Prompt in Oracle?
Go to ORACLE_HOME\sqlplus\admin and copy the glogin.sql to some other place for backup
purpose.
Now Edit glogin.sql and add these lines
SET TIME ON
SET TIMING ON
set sqlprompt "_user '@' _connect_identifier > "
Now try to connect using sqlplus.
What is Oracle Node Eviction?
A node is evicted from the cluster after it kills itself because it is not able to service the applications. It
generally happens during a communication failure between the instances, when an instance is not
able to send its information to the control file.
Oracle Clusterware is designed to perform a node eviction by removing one or more nodes from the
cluster if some critical problem is detected. The node eviction process is reported with the error ORA-
29740 in the alert log and LMON trace files.
How to extend the VMware root disk (C: Drive) after OS installed.
Open VMware Infrastructure client and connect to Virtual Center or the ESX host.
Right-click the virtual machine.
Click Edit Settings.
Select Virtual Disk.
Increase the size of the disk.
Note: You can extend the root disk only when the virtual machine uses a SCSI disk. If this option is
grayed out then either the disk may be running on snapshots, or the disk may be at the maximum
allowed size depending on the block size of the datastore, or the disk type is IDE. In that case first
remove all the snapshots running on the VM or change the disk type from IDE to SCSI.
How to disable the firewall in Linux?
Stop the ipchains service:
# service ipchains stop
Stop the iptables service:
# service iptables stop
Stop the ipchains service from starting when you restart the server:
# chkconfig ipchains off
Stop the iptables service from starting when you restart the server:
# chkconfig iptables off
What are the methods to upgrade the database latest version? How would you decide the best
method?
There are different ways of upgrading to the latest release of Oracle database; Oracle provides
multiple methods to upgrade:
Database Upgrade Assistant (DBUA)
Manual Upgrade
Transportable Tablespace
Datapump or Export/Import
Oracle Streams
Oracle GoldenGate.
Using DBUA to upgrade the existing database is the simplest and quickest method.
For step by step details: Upgrade Oracle Database 11g to 12c
Is it possible to connect oracle database if all of its BG process is killed?
Yes, you can connect to the database and also query the database views and other
application schema views/tables.
You can even update/select any record, but when you try to commit/rollback, the instance gets
terminated with the following error:
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 8917
Session ID: 63 Serial number: 9
And following error message recorded in database alert.log
Wed Jun 23 02:37:14 2013
USER (ospid: 8917): terminating the instance due to error 472
Instance terminated by USER, pid = 8917
The user (client) session was able to retrieve data from the database as the shared memory was still
available and the client session does not need background processes for this task.
What is the difference between Shared SQL and Cursor?
Shared SQL is the SQL residing in the shared pool. A SQL statement can be shared among all the
database sessions. Shared SQL is at the database level, where all sessions can see and use
it.
A cursor is a pointer to some shared SQL residing in the shared pool. You may have more
than one cursor pointing to the same shared SQL. A cursor is at the session level, so many database
sessions may point to the same shared SQL.
How will you find current and maximum utilization of process/session details?
Using the below SQL you can find current number of process and session details
Select resource_name, current_utilization, max_utilization from v$resource_limit
where resource_name in ('processes', 'sessions');
RESOURCE_NAME  CURRENT_UTILIZATION  MAX_UTILIZATION
processes                       45               60
sessions                        47               61
Can you explain the difference between database and instance?
The database and instance are closely related but not the same thing. The database is a set of files
where application data and metadata are stored, whereas an instance is a set of memory structures
that Oracle uses to manipulate the data in the database.
A database can be mounted by more than one instance, whereas an instance can open at most one
database.
What is difference between Translate and Replace?
Translate substitutes character by character whereas Replace is used to substitute one string
with another.
How to schedule task in UNIX platform?
Use DBMS_JOB/DBMS_SCHEDULER, give the job_type EXECUTABLE and provide the proper path
name of the shell script to execute.
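For example, a DBMS_SCHEDULER job of type EXECUTABLE might be created like this (the job name, script path and schedule here are only illustrative):
SQL> begin
DBMS_SCHEDULER.create_job (
job_name        => 'NIGHTLY_CLEANUP',
job_type        => 'EXECUTABLE',
job_action      => '/home/oracle/scripts/cleanup.sh',
repeat_interval => 'FREQ=DAILY;BYHOUR=2',
enabled         => TRUE);
end;
/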
Explain about dual table?
DUAL is a built-in relation in Oracle which serves as a dummy relation to put in the FROM clause. The
built-in function SYSDATE returns a DATE value containing the current date and time on your system.
For example: SELECT 1+2 FROM DUAL returns a single row, whereas SELECT SYSDATE FROM emp
will return SYSDATE in as many rows as emp has.
Which tables are involved in producing a star schema and what type of data do they hold?
Fact and Dimension tables. The FACT table contains measurements while the DIMENSION table
contains data that helps describe the fact table. The fact tables contain the real values that are
going to be used as metrics. The dimension table is the one that classifies and categorizes the facts and
helps us to infer more info on the overall schema related scenarios.
What is the use of setting GLOBAL_NAMES = TRUE?
Setting GLOBAL_NAMES dictates how you might connect to the database. The variable is either
TRUE or FALSE, and if it is TRUE then it enforces that database links have the same name as the
remote database to which they are linking.
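For example, if the remote database's global name is ORCL.EXAMPLE.COM (an illustrative name), the link must be named the same:
SQL> alter system set global_names = TRUE;
SQL> create database link ORCL.EXAMPLE.COM
connect to scott identified by tiger using 'orcl';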
Which background process refreshes materialized view?
The Job queue process.
What is rolling upgrade?
It is one of the ASM features of database 11g. This enables you to patch and upgrade ASM nodes in a
clustered environment without affecting database availability. During a rolling upgrade we can
maintain a functional cluster while one or more of the nodes in the cluster are running different
software versions.
What is difference between startup Upgrade and startup Migrate?
Both have the same effect (they adjust a few database parameters automatically to certain values
in order to run the upgrade scripts); the only difference is the Oracle version. STARTUP MIGRATE was
used to upgrade a database up to Oracle 9i. From 10g onwards we use STARTUP UPGRADE to upgrade
the database.
Which Oracle utility is used to make the new ORACLE_HOME usable when you move from the old
ORACLE_HOME?
The relink all utility.
What do you mean by defining Quota on Tablespace?
Defining a quota on a tablespace means allotting an amount of the tablespace to the objects of a schema
in that particular tablespace.
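For example (the user, size and tablespace names are illustrative):
SQL> Alter user scott quota 100M on users;
SQL> Select tablespace_name, bytes, max_bytes
from dba_ts_quotas where username = 'SCOTT';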
How do you find used and free space in a Temporary Tablespace?
As we know, unlike normal tablespaces, temporary tablespace information is not listed in v$datafile or
dba_data_files; instead you can query v$tempfile or dba_temp_files.
Select * from v$tempfile;
Select * from dba_temp_files
SELECT tablespace_name, SUM (bytes_used)/1024/1024 "Used in MB", SUM
(bytes_free)/1024/1024 "Free in MB"
FROM V$temp_space_header
GROUP BY tablespace_name;
Can we Upgrade database directly from 9i to 11g?
Yes, you can upgrade directly from 9i to 11g if your current database version is 9.2.0.4 onwards. If
that is the case it is better to upgrade directly from 9i to 11g, as Oracle extended support for
10gR2 ends on 31-Jul-2013 and also there are more features available in Oracle 11g.
You can use any of these methods to upgrade your database:
Manual Upgrade
Upgrade using DBUA
Using Export/Import
Using Data Copying.



DBA interview Question and Answer Part 22
I have configured RMAN with a recovery window of 3 days, but on my backup destination
only one day's archivelogs are visible while 3 days of database backups are available there. Why?
I went through the issue by checking the backup details using the LIST commands. I found that 3 days
of database as well as archivelog backups are listed, and the backups are recoverable. Thus it is clear
that for some reason the archivelogs are not being retained in the backup location.
Connect rman target database with catalog
List Backup Summary;
List Archivelog All;
List Backup Recoverable;
When I checked db_recovery_dest_size it was 5 GB and our flash recovery area was almost full;
because of that it automatically deletes archivelogs from the backup location. When I increased
db_recovery_dest_size it worked fine.
If one or all of control file is get corrupted and you are unable to start database then how can
you perform recovery?
If one of your control files is missing or corrupted then you have two options to recover it. Either delete the
corrupted CONTROLFILE manually from its location, copy one of the remaining control files
and rename it as per the deleted one; you can check the alert.log for the exact name and location of
the control files. Or delete the corrupted CONTROLFILE and remove its location from the
pfile/spfile, then start your database.
In another scenario, if all of your control files are corrupted then you need to restore them using
RMAN.
As currently none of the control files are mounted, RMAN does not know about the backup or
any pre-configured RMAN settings. In order to use the backup we need to pass the DBID (SET
DBID=1239150297) to RMAN.
RMAN> Restore controlfile from 'H:\oracle\Backup\C-1239150297-20130418';
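A typical full sequence when all control files are lost might look like this (the DBID and backup piece name follow the example above and are illustrative):
RMAN> startup nomount;
RMAN> set dbid 1239150297;
RMAN> restore controlfile from 'H:\oracle\Backup\C-1239150297-20130418';
RMAN> alter database mount;
RMAN> recover database;
RMAN> alter database open resetlogs;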
You are working as a DBA and usually taking HOTBACKUP every night. But one day around
3.00 PM one table is dropped and that table is very useful then how will you recover that table?
If your database is running on Oracle 10g and you have already enabled the recyclebin
then you can easily recover the dropped table from user_recyclebin or dba_recyclebin by using the
flashback feature of Oracle 10g.
SQL> select object_name, original_name from user_recyclebin;
BIN$T0xRBK9YSomiRRmhwn/xPA==$0 PAY_PAYMENT_MASTER
SQL> flashback table PAY_PAYMENT_MASTER to before drop;
Flashback complete.
If the recyclebin is not enabled on your database then you need to restore your backup
on a TEST database and perform time-based recovery, applying all archives generated before the drop
command was executed. For this instance, apply archives up to 2:55 PM.
It is not recommended to perform such recovery on the production database directly because it is a huge
database and will take time.
Note: If you use the SYS user to drop any table, or the object is in the SYSTEM tablespace, the object
will not go to the recyclebin, even if you have already set the recyclebin parameter to TRUE.
If your database is running on Oracle 9i you require incomplete recovery for the same.
Why is more archivelog sometimes generated?
There are many reasons, such as: more database changes were performed, either by import/export
work, batch jobs, some special task, or taking a hot backup (for more details on why hot
backup generates more archive, check my separate post). You can check it by using the LogMiner
utility.



How can I know my require table is available in export dump file or not?
You can create an index file for the export dump file using import with the INDEXFILE option. A text file
will be generated with all table and index object names and the number of rows. You can confirm your
required table object from this text file.
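For example, with the original imp utility (file names are illustrative):
$ imp system/manager file=expdat.dmp indexfile=index.sql full=y
The generated index.sql contains the CREATE INDEX statements and the CREATE TABLE statements (as comments), which you can scan for your required table. With Data Pump, the rough equivalent is the SQLFILE parameter of impdp.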
What is Cache Fusion Technology?
Cache Fusion provides a service that allows Oracle to keep track of which nodes are writing to which
blocks and ensures that two nodes do not update duplicate copies of the same block. Cache Fusion
technology can provide more resources and increase the concurrency of users internally. Here multiple
caches are able to join and act as one global cache, thus solving issues like data consistency
internally without any impact on the application code or design.
Why we should we need to open database using RESETLOGS after finishing incomplete
recovery?
When we perform incomplete recovery, we bring the database back to a past point in time, that is, a
prior state of the database. The redo log sequence numbers that already exist are ahead of this prior
state; due to this mismatch between the sequence numbers and the restored state of the database,
the database must be opened with a new sequence of redo log and archive log, which is what
RESETLOGS does.
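For example, a time-based incomplete recovery ends like this (the timestamp is illustrative):
SQL> recover database until time '2013-04-18:14:55:00';
SQL> alter database open resetlogs;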
Why export backup is called as logical backup?
An export dump file doesn't back up or contain any physical structure of the database such as datafiles,
redolog files, pfile, password file etc. Instead of the physical structure, the export dump contains the
logical structure of the database, like definitions of tablespaces, segments, schemas etc. For this
reason an export dump is called a logical backup.

What are difference between 9i and 10g OEM?
Oracle 9i OEM has limited capabilities and resources compared to Oracle 10g Grid Control. There are
too many enhancements in 10g OEM over 9i; several tools such as AWR and ADDM have been
incorporated and there is also a SQL Tuning Advisor available.
Can we use same target database as catalog DB?
The recovery catalog should not reside in the target database because the recovery catalog must be
protected in the event of loss of the target database.
What is difference between CROSSCHECK and VALIDATE command?
The VALIDATE command examines a backup set and reports whether it can be restored successfully,
whereas the CROSSCHECK command verifies the status of backups and copies recorded in the RMAN
repository against the media such as disk or tape.
How do you identify or fix block Corruption in RMAN database?
You can use the v$database_block_corruption view to identify which block is corrupted, then use the
BLOCKRECOVER command to recover it.
SQL> select file#, block# from v$database_block_corruption;
FILE# BLOCK#
10 1435
RMAN> blockrecover datafile 10 block 1435;
What is auxiliary channel in RMAN? When it is required?
An auxiliary channel is a link to auxiliary instance. If you do not have automatic channel configured,
then before issuing the DUPLICATE command, manually allocate at least one auxiliary channel within
the same RUN command.
Explain the use of Setting GLOBAL_NAME equal to true?
Setting GLOBAL_NAMES indicates how you might connect to the database. This variable is either
TRUE or FALSE, and if it is set to TRUE it enforces that database links have the same name as
the remote database to which they are linking.



How can you say your data in database is Valid or secure?
If the data of the database is validated we can say that our database is secured. There are different
ways to validate the data:
1. Accept only valid data
2. Reject bad data.
3. Sanitize bad data.
Write a query to display all the odd numbers from a table.
Select * from (select employee_number, rownum rn from
pay_employee_personal_info)
where MOD (rn, 2) <> 0;
-or- you can perform the same things through the below function.
set serveroutput on;
begin
for v_c1 in (select num from tab_no) loop
if mod(v_c1.num,2) = 1 then
dbms_output.put_line(v_c1.num);
end if;
end loop;
end;
What is difference between Trim and Truncate?
Truncate is a DDL command which deletes the contents of a table completely without affecting the
table structure, whereas Trim is a function which changes the column output in a select statement,
removing the blank space from the left and right of a string.
When to use the option clause "PASSWORD FILE" in the RMAN DUPLICATE command?
If you create a duplicate DB, not a standby DB, then RMAN does not copy the password file by
default. You can specify the PASSWORD FILE option to indicate that RMAN should overwrite the
existing password file on the auxiliary instance. If you create a standby DB, then RMAN copies the
password file by default to the standby host, overwriting the existing password file.
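For example, for an active-database duplication the clause can be given explicitly (a minimal sketch; the duplicate database name is illustrative, and PASSWORD FILE is valid only with FROM ACTIVE DATABASE):
RMAN> duplicate target database to dupdb
from active database
password file;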
What is Oracle Golden Gate?
Oracle GoldenGate is Oracle's strategic solution for real-time data integration. Oracle GoldenGate
captures, filters, routes, verifies, transforms, and delivers transactional data in real time, across
Oracle and heterogeneous environments, with very low impact and preserved transaction integrity.
The transaction data management provides read consistency, maintaining referential integrity
between source and target systems.
What is meaning of LGWR SYNC and LGWR ASYNC in log archive destination parameter for
standby configuration.
When LGWR is used with SYNC, it means once network I/O is initiated, LGWR has to wait for the
completion of the network I/O before continuing write processing. LGWR with ASYNC means LGWR
doesn't wait for the network I/O to finish and continues write processing.
What is the truncate command enhancement in Oracle 12c?
In previous releases, there was no direct option available to truncate a master table while a child
table exists and has records.
Now the TRUNCATE TABLE ... CASCADE option in 12c truncates the records in the master as well as all
referenced child tables with an enabled ON DELETE constraint.
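For example (the table name is illustrative; the child's foreign key must be defined with ON DELETE CASCADE):
SQL> truncate table orders_master cascade;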
What if:
1) My DB size is 2 TB and I take an RMAN full backup; what will my backup size be?
2) Is there any RMAN command to estimate the size of the DB before taking the backup?
3) How many channels does RMAN use by default, and what is the maximum possible to configure?
Generally an RMAN full backup (without compression) is close to the actual used size of the datafiles.
You can query dba_segments and sum the segment sizes to estimate the actual backup size.
SQL> select sum(bytes)/1024/1024/1024 from dba_segments;
As per my knowledge there is no official way to find the backup size before taking the actual backup.
But RMAN does not back up NEVER USED blocks, so to estimate, find the total segment size
from dba_segments.
If you are using any compression technique then the backup estimate also depends on the
compression technique you are using.
Related to channel configuration and their operation, please search and visit my other post
"How RMAN behaves with the allocated channels during backup".

How to check why Shutdown Immediate hangs or takes a
longer time?
Ref. Doc ID 164504.1
In order to check reason why shutdown immediate hangs
SQL>connect / as SYSDBA
SQL>Select * from x$ktuxe where ktuxecfl = 'DEAD';
This shows dead transactions that SMON is looking to roll back.
Now Plan to shutdown again and gather some information. Before issuing the shutdown immediate
command set some events as follows:
SQL>alter session set events '10046 trace name context forever, level 12';
SQL>alter session set events '10400 trace name context forever, level 1';
SQL>shutdown immediate;
10046 turns on extended SQL_TRACE for the shutdown process.
10400 dumps a system state every 5 minutes.
The trace files should show where the time is going. Checking the progress of SMON is very
important in this case. You can find it with the query below.
SELECT r.NAME "RB Segment Name", dba_seg.size_mb,
DECODE(TRUNC(SYSDATE - LOGON_TIME), 0, NULL, TRUNC(SYSDATE - LOGON_TIME) || ' Days'
|| ' + ') || TO_CHAR(TO_DATE(TRUNC(MOD(SYSDATE-LOGON_TIME,1) * 86400), 'SSSSS'),
'HH24:MI:SS') LOGON, v$session.SID, v$session.SERIAL#, p.SPID, v$session.process,
v$session.USERNAME, v$session.STATUS, v$session.OSUSER, v$session.MACHINE,
v$session.PROGRAM, v$session.module, action
FROM v$lock l, v$process p, v$rollname r, v$session,
(SELECT segment_name, ROUND(bytes/(1024*1024),2) size_mb FROM dba_segments
WHERE segment_type = 'TYPE2 UNDO' ORDER BY bytes DESC) dba_seg
WHERE l.SID = p.pid(+) AND v$session.SID = l.SID AND
TRUNC (l.id1(+)/65536)=r.usn
-- AND l.TYPE(+) = 'TX' AND
-- l.lmode(+) = 6
AND r.NAME = dba_seg.segment_name
--AND v$session.username = 'SYSTEM'
--AND status = 'INACTIVE'
ORDER BY size_mb DESC;
Reason: Shutdown immediate may hang for various reasons:
Processes still continue to be connected to the database and do not terminate.
SMON is cleaning temp segments or performing delayed block cleanouts.
Uncommitted transactions are being rolled back.
Debugging a hung database in oracle version 11g
Back in Oracle 10g a hung database was a real problem, especially when you could not connect via
SQL*Plus to diagnose the source of the hang. There is a new feature in Oracle 11g SQL*Plus called the prelim
option. This option is very useful for running oradebug and other utilities that do not require a real
connection to the database.
C:\> sqlplus -prelim "/ as sysdba"
-or- in SQL*Plus you can set
SQL> set _prelim on
SQL> connect / as sysdba
Now you are able to run oradebug commands to diagnose a hung database issue:
SQL> oradebug hanganalyze 3
Wait at least 2 minutes to give time to identify process state changes.
SQL>oradebug hanganalyze 3
Open a separate SQL session and immediately generate a system state dump.
SQL>alter session set events 'immediate trace name SYSTEMSTATE level 10';
How to check why shutdown immediate takes a longer time to shut down?
Ref. 1076161.6: Shutdown immediate or shutdown normal hangs. SMON disabling TX recovery.
Ref. Note 375935.1: What to do and not to do when shutdown immediate hangs.
Ref. Note 428688.1: Shutdown immediate very slow to close database.
When shutdown immediate takes a longer time compared to the time it usually takes, you must check
the following before performing the actual shutdown immediate:
1. All active sessions.
2. Temporary tablespace recovery.
3. Long running queries in the database.
4. Large transactions.
5. Progress of the transactions that Oracle is recovering.
6. Parallel transaction recovery.
SQL> Select sid, serial#, username, status, schemaname, logon_time from
v$session where status='ACTIVE' and username is not null;
If an active session exists, then try to find out what this session is doing in the database. Active
sessions make shutdown slower.
SQL> Select f.R "Recovered", u.nr "Need Recovered" from (select
count(block#) R, 1 ch from sys.fet$) f, (select count(block#) NR, 1 ch from
sys.uet$) u where f.ch = u.ch;
Check to see if any long query is running in the database while you are trying to shut down the
database.
SQL> Select * from v$session_longops where time_remaining>0 order by
username;
Check to ensure a large transaction is not going on while you are trying to shut down the database.
SQL>Select sum(used_ublk) from v$transaction;
Check the progress of the transaction that oracle is recovering.
SQL>Select * from v$fast_start_transactions;
Check to ensure that no parallel transaction recovery is going on before performing shutdown
immediate.
SQL>Select * from v$fast_start_servers;
Finally, if you do not understand the reason why the shutdown is hanging or taking a longer time, then
shut down your database with the ABORT option, start it up with the RESTRICT option and
try shutdown with the IMMEDIATE option.
Check the alert.log; if you find any error like 'Thread 1 cannot allocate new log,
sequence', then you need to enable your archiver process. Your archiver is disabled for some
reason.
Process:
1. In the command prompt set the ORACLE_SID first:
C:\> SET ORACLE_SID=your_db_name
2. Now start the SQL*plus:
C:\sqlplus /nolog
SQL>connect sys/***@instance_name
SQL>Select instance_name from v$instance;
3. Try to checkpoint before shutdown abort
SQL>alter system checkpoint;
SQL> shutdown abort;
4. Start the database with restrict option so that no other user is able to connect you in the mean
time.
SQL>startup restrict;
SQL>select logins from v$instance;
RESTRICTED
SQL>shutdown immediate;
5. Mount the database and ensure the archive process is enabled by using the archive log list command.
If it is disabled then enable it.
SQL>startup mount;
SQL> archive log list;   -- if disabled then enable it
SQL>Alter database archivelog;
SQL> Alter system archive log start;
Note: If your archivelog destination and format are already set, there is no need to set them again. After
setting, check with the archive log list command whether archival is enabled or not.
SQL> alter database open;
Now check whether your database is still in restricted mode; if so, remove the restriction.
SQL>select logins from v$instance;
SQL>alter system disable restricted session;
Note: Now try to generate an archivelog with either command:
SQL>alter system archive log current;
SQL>alter system switch logfile;
Now try to perform a normal shutdown and startup of the database.

Question with Answer on Oracle database Patches
Patches are small collections of files copied over an existing installation. They are associated with particular versions of Oracle products.
The discussion will especially help beginners who are preparing for interviews and are inexperienced in applying patches. In this article you will find all those things covered briefly with examples. For more details please study the oracle documentation and try searching the separate topics on this blog.
What are different Types of Patches?
Regular Patchset: To upgrade to a higher version we use a database patchset. Please do not confuse regular patchsets with Patch Set Updates (PSU). Consider the regular patchset a superset of the PSU. A regular patchset contains major bug fixes. In comparison to a regular patchset, a PSU will not change the version of oracle binaries such as sqlplus, import/export etc. The importance of a PSU is automatically minimized once a regular patchset is released for a given version. The quarterly patches are mainly divided into two types:
Security or Critical Patch Update (CPU): A critical patch update is delivered quarterly by oracle to fix security issues.
Patch Set Update (PSU): It includes the CPU plus a bunch of other one-off patches. It is also delivered quarterly by oracle.
Interim (one-off) Patch: It is also known as a patchset exception, one-off patch or interim patch. This is usually a single fix for a single problem or enhancement. It is released only when there is need for an immediate fix or enhancement that cannot wait until the next release of a patchset or bundle patch. It is applied using the OPatch utility and is not cumulative.
Bundle Patches: Bundle patches include both the quarterly security patches as well as recommended fixes (for Windows and Exadata only). When you download this patch you will find a bundle of patches (a set of different files) instead of a single downloaded file (as is usual in the case of a patchset).
Is OPatch (the utility) also another type of patch?
OPatch is a Java-based utility from oracle corp. that helps you apply interim patches to Oracle's software and roll back interim patches from Oracle's software. OPatch is also able to report already installed interim patches and can detect conflicts when an interim patch has already been applied. This program requires Java to be available on your system and requires installation of OUI. Thus, from the above discussion, it is not correct to say OPatch is another type of patch.
When applying a single patch, can you use the OPatch utility?
Yes, you can use OPatch in the case of a single patch. The only type of patch that cannot be applied with OPatch is a patchset.
When applying patchsets, can you use OUI?
Yes, a patchset uses OUI. A patch set contains a large number of merged patches, to change the version of the product or introduce new functionality. Patch sets are cumulative bug fixes that fix all bugs and consume all patches since the last base release. Patch sets and the Patch Set Assistant are usually applied through OUI-based, product-specific installers.
Can you apply a patch with OPatch without downtime?
As you know, to apply a patch your database and listener must be down, because when you apply a patch with OPatch it updates your current ORACLE_HOME. Thus, to answer the question: it is not possible in the case of a single instance, but in RAC you can apply a patch without downtime, as there are separate ORACLE_HOMEs and separate instances (one instance running on each ORACLE_HOME).
You have a collection of patches (nearly 100 patches) or a patchset. How can you apply only one patch from the patchset or patch bundle to the ORACLE_HOME?
With napply itself (by providing the patch location and the specific patch id) you can apply only one patch from a collection of extracted patches. For more information check the opatch util napply help; it will give you a clear picture.
For Example:
opatch util napply <patch_location> -id 9 -skip_subset -skip_duplicate
This will apply only patch id 9 from the patch location and will skip duplicates and subsets of patches already installed in your ORACLE_HOME.
How can you get minimum/detail information from inventory about patches applied and
components installed?
You can try the below commands for minimum and detailed information from the inventory:
C:\ORACLE_HOME\OPatch\opatch lsinventory -invPtrLoc <location of oraInst.loc file>
$ORACLE_HOME/OPatch/opatch lsinventory -detail -invPtrLoc <location of oraInst.loc file>
Differentiate patchset, CPU and PSU patches. What kinds of errors are usually resolved by them?
Critical Patch Updates (CPU) were the original quarterly patches released by oracle to target specific security fixes in various products. The CPU is a subset of the Patch Set Update (PSU). CPUs are built on the base patchset version, whereas PSUs are built on the base of the previous PSU.
Patch Set Updates (PSUs) are also released quarterly along with CPU patches and are a superset of CPU patches, in the sense that a PSU patch will include the CPU patches and some other bug fixes released by oracle. PSUs contain fixes for bugs that cause wrong results, data corruption etc., but they do not contain fixes for bugs that may result in dictionary changes, major algorithm changes, architectural changes, or optimizer plan changes.
Regular patchset: Please do not confuse regular patchsets with patch set updates (PSU). Consider the regular patchset a superset of the PSU. A regular patchset contains major bug fixes. The importance of a PSU is minimized once a regular patchset is released for a given version. In comparison to a regular patchset, a PSU will not change the version of oracle binaries such as sqlplus, import/export etc.
If both CPU and PSU are available for a given version, which one will you prefer to apply?
From the above discussion it is clear that once you apply a PSU, the recommended way is to apply only the next PSU. In fact, there is no need to apply a CPU on top of a PSU, as the PSU contains the CPU (if you apply a CPU over a PSU, it will be considered that you are trying to roll back the PSU, and it will in fact require more effort). So if you have not yet decided on or applied any of the patches, then I will suggest you use the PSU patches. For more details refer to: Oracle Products [ID 1430923.1], ID 1446582.1
If the PSU is a superset of the CPU, then why would someone choose to apply a CPU rather than a PSU?
CPUs are smaller and more focused than PSUs and mostly deal with security issues. The CPU is theoretically the more conservative approach and can cause less trouble than the PSU, as it changes less code. Thus anyone who is concerned only with security fixes and not functionality fixes may find the CPU the better approach.
How can you find the installed PSU version?
The PSU is referenced at the 5th place in the oracle version number, which makes it easier to track (e.g. 10.2.0.3.1). To determine the PSU version installed, use the OPatch utility:
opatch lsinv -bugs_fixed | grep -i PSU
To find from the database:
Select substr(action_time,1,30) action_time, substr(id,1,10) id,
substr(action,1,10) action,substr(version,1,8) version,
substr(BUNDLE_SERIES,1,6) bundle, substr(comments,1,20) comments from
registry$history;
Note: The above query returns these details only if you have already executed catbundle.sql.
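To make the version numbering concrete, here is a tiny sketch (a hypothetical helper, plain string handling, not an Oracle API) that pulls the PSU level out of a version string:

```python
# The 5th field in an Oracle version number, when present, is the PSU level
# (e.g. 10.2.0.3.1 -> PSU 1 on top of the 10.2.0.3 patchset).
def psu_level(version: str) -> int:
    parts = version.split(".")
    # Base releases have only 4 fields; a 5th field records the PSU.
    return int(parts[4]) if len(parts) > 4 else 0

print(psu_level("10.2.0.3.1"))  # 1: PSU 1 applied
print(psu_level("10.2.0.3"))    # 0: base patchset, no PSU recorded
```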
Will patch application affect system performance?
Sometimes applying a certain patch can affect the performance of SQL statements. Thus it is recommended to collect a set of performance statistics that can serve as a baseline before we make any major change, such as applying a patch, to the system.
Can you stop applying a patch after applying it to a few nodes? What are the possible issues?
Yes, it is possible to stop applying a patch after applying it to a few nodes. There is a prompt that
allows you to stop applying the patch. But, Oracle recommends that you do not do this because you
cannot apply another patch until the process is restarted and all the nodes are patched or the partially
applied patch is rolled back.
How do you know the impact of a patch before applying it?
opatch <operation> -report
You can use the above command to know the impact of the patch before actually applying it.
How can you run patching in scripted mode?
opatch <option> -silent
You can use the above command to run the patches in scripted mode.
Can you use OPatch 10.2 to apply 10.1 patches?
No, OPatch 10.2 is not backward compatible. You can use OPatch 10.2 only to apply 10.2 patches.



What will you do if you lose or corrupt your Central Inventory?
In the case where you have lost or corrupted your Central Inventory but your ORACLE_HOME is safe, you just need to execute the installer with the -attachHome flag; OUI automatically sets up the Central Inventory for the attached home.
What will you do if you lose your Oracle home inventory (comps.xml)?
Oracle recommends backing up your ORACLE_HOME before applying any patchset. In that case, either restore your ORACLE_HOME from the backup or perform an identical installation of the ORACLE_HOME.
When I apply a patchset or an interim patch in RAC, the patch is not propagated to some of my
nodes. What do I do in that case?
In a RAC environment, the inventory contains a list of nodes associated with an Oracle home. It is
important that during the application of a patchset or an interim patch, the inventory is correctly
populated with the list of nodes. If the inventory is not correctly populated with values, the patch is
propagated only to some of the nodes in the cluster.
OUI allows you to update the inventory.xml with the nodes available in the cluster using the -
updateNodeList flag in Oracle Universal Installer.
When I apply a patch, getting the following errors:
"Opatch Session cannot load inventory for the given Oracle Home <Home_Location> Possible
causes are: No read or write permission to ORACLE_HOME/.patch_storage; Central Inventory
is locked by another OUI instance; No read permission to Central Inventory; The lock file
exists in ORACLE_HOME/.patch_storage; The Oracle Home does not exist in Central
Inventory". What do I do?
This error may occur because of any one or more of the following reasons:
- The ORACLE_HOME/.patch_storage may not have read/write permissions. Ensure that you give read/write permissions to this folder and apply the patch again.
- There may be another OUI instance running. Stop it and try applying the patch again.
- The Central Inventory may not have read permission. Ensure that you have given read permission to the Central Inventory and apply the patch again.
- The ORACLE_HOME/.patch_storage directory might be locked. If this directory is locked, you will find a file named patch_locked inside this directory. This may be due to a previously failed installation of a patch. To remove the lock, restore the Oracle home and remove the patch_locked file from the ORACLE_HOME/.patch_storage directory.
- The Oracle home may not be present in the Central Inventory. This may be due to a corrupted or lost inventory, or the inventory may not be registered in the Central Inventory.
We should check for the latest security patches on the Oracle metalink
website http://metalink.oracle.com/ and we can find the regular security alert at the
location http://technet.oracle.com/deploy/security/alert.htm
Caution: It is not advisable to apply the patches directly into the production server. The ideal solution
is to apply or test the patches in test server before being moved into the production system.


DBA Daily/Weekly/Monthly or Quarterly Checklist
In response to requests from some fresher DBAs, I am giving a quick checklist for a production DBA. Here I am including references to some of the scripts which I have already posted; as you know, each DBA has their own scripts depending on the database environment too. Please have a look into the daily, weekly and quarterly checklists.
Note: I am not responsible if any of these scripts harms your database, so before using them directly on a production DB, please check them in a test environment first, make sure they work, and then go for it.
Please send your corrections, suggestions, and feedback to me. I may credit your
contribution.
Thank you.
------------------------------------------------------------------------------------------------------
Daily Checks:
- Verify all databases, instances, and listeners are up, every 30 min.
- Verify the status of daily scheduled jobs/daily backups in the morning, very first hour.
- Verify the success of archive log backups, based on the backup interval.
- Check the space usage of the archive log file system for both primary and standby DB.
- Check the space usage and verify all tablespace usage is below the critical level, once a day.
- Verify rollback segments.
- Check the database performance on a periodic basis, usually in the morning very first hour after the night-shift scheduled backup has completed.
- Check the sync between the primary database and standby database, every 20 min.
- Make a habit of checking new alert.log entries hourly, especially if any error is appearing.
- Check the system performance on a periodic basis.
- Check for invalid objects.
- Check the audit files for any suspicious activities.
- Identify bad growth projections.
- Clear the trace files in the udump and bdump directories as per policy.
- Verify all the monitoring agents, including the OEM agent and third-party monitoring agents.
- Make a habit of reading the DBA manual.
Weekly Checks:
- Perform level 0 or cold backup as per the backup policy. Note the backup policy can be changed as per the requirement. Don't forget to check the space on disk or tape before performing a level 0 or cold backup.
- Perform export backups of important tables.
- Check database statistics collection. On some databases this needs to be done every day, depending upon the requirement.
- Approve or plan any scheduled changes for the week.
- Verify the scheduled jobs and clear the output directory. You can also automate it.
- Look for objects that break rules.
- Look for security policy violations.
- Archive the alert logs (if possible) to reference similar errors in the future.
- Visit the home pages of key vendors.
Monthly or Quarterly Checks:
- Verify the accuracy of backups by creating test databases.
- Check for the critical patch updates from oracle and make sure that your systems are in compliance with CPU patches.
- Check the harmful growth rate.
- Review fragmentation.
- Look for I/O contention.
- Perform tuning and database maintenance.
- Verify the accuracy of the DR mechanism by performing a database switch-over test. This can be done once in six months, based on the business requirements.
------------------------------------------------------------------------------------------------------
Below is a brief description of some of the important concepts, including important SQL scripts. You can find more scripts in my other posts by using the blog search option.
Verify all instances are up:
Make sure the database is available. Log into each instance and run daily reports or test scripts. You can also automate this procedure, but it is better to do it manually. Optional implementation: use Oracle Enterprise Manager's 'probe' event.
Verify DBSNMP is running:
Log on to each managed machine to check for the 'dbsnmp' process. For Unix: at the command line, type ps -ef | grep dbsnmp. There should be two dbsnmp processes running. If not, restart DBSNMP.
Verify success of Daily Scheduled Job:
Each morning, one of your prime tasks is to check the backup log and the backup drive where your actual backup is stored, to verify the night's backup.
Verify success of database archiving to tape or disk:
Next, check the location where the daily archives are stored and verify the archive backup on disk or tape.
Verify enough resources for acceptable performance:
For each instance, verify that enough free space exists in each tablespace to handle the day's expected growth. As of <date>, the minimum free space for <repeat for each tablespace>: [ <tablespace> is <amount> ]. When incoming data is stable and average daily growth can be calculated, the minimum free space should be at least <time to order, get, and install more disks> days' data growth. Go to each instance and run a query to check free MB in tablespaces/datafiles. Compare it to the minimum free MB for that tablespace. Note any low-space conditions and correct them.
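A free-space check of this kind can be sketched against DBA_DATA_FILES and DBA_FREE_SPACE; this shows the layout of such a query, not a tuned production script:

```sql
-- Free MB per tablespace vs. allocated MB (compare free_mb to your own threshold)
SELECT d.tablespace_name,
       ROUND(SUM(d.bytes) / 1024 / 1024)         alloc_mb,
       ROUND(NVL(f.free_bytes, 0) / 1024 / 1024) free_mb
FROM   dba_data_files d,
       (SELECT tablespace_name, SUM(bytes) free_bytes
        FROM   dba_free_space
        GROUP  BY tablespace_name) f
WHERE  d.tablespace_name = f.tablespace_name (+)
GROUP  BY d.tablespace_name, f.free_bytes
ORDER  BY free_mb;
```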
Verify rollback segment:
Status should be ONLINE, not OFFLINE or FULL, except in some cases where you may have a special rollback segment for large batch jobs whose normal status is OFFLINE. Optional: each database may have a list of rollback segment names and their expected statuses. For the current status of each ONLINE or FULL rollback segment (by ID, not by name), query V$ROLLSTAT. For storage parameters and names of ALL rollback segments, query DBA_ROLLBACK_SEGS. That view's STATUS field is less accurate than V$ROLLSTAT, however, as it lacks the PENDING OFFLINE and FULL statuses, showing these as OFFLINE and ONLINE respectively.
Look for any new alert log entries:
Connect to each managed system, using 'telnet' or a comparable program. For each managed instance, go to the background dump destination, usually $ORACLE_BASE/<SID>/bdump. Make sure to look under each managed database's SID. At the prompt, use the Unix tail command on alert_<SID>.log, or otherwise examine the most recent entries in the file. If any ORA- errors have appeared since the previous time you looked, note them in the Database Recovery Log and investigate each one. The recovery log is in <file>.
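A minimal sketch of this check in shell (the file contents here are made up for illustration; point it at your real alert_<SID>.log):

```shell
# Print any ORA- errors found in an alert log file.
log_ora_errors() {
  # grep exits nonzero when nothing matches, so swallow that for scripting.
  grep 'ORA-' "$1" || true
}

alert_log=$(mktemp)                 # stand-in for bdump/alert_<SID>.log
printf 'Completed checkpoint\nORA-00600: internal error code\nThread 1 advanced to log sequence 42\n' > "$alert_log"
log_ora_errors "$alert_log"         # prints the ORA-00600 line
rm -f "$alert_log"
```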
Identify bad growth projections:
Look for segments in the database that are running out of resources (e.g. extents) or growing at an excessive rate. The storage parameters of these segments may need to be adjusted. For example, if any object reaches 200 current extents, raise its MAX_EXTENTS to UNLIMITED. To do that, run queries to gather daily sizing information; check current extents, current table sizing information, and current index sizing information, and find growth trends.
Identify space-bound objects:
Space-bound objects' NEXT_EXTENT values are bigger than the largest extent that the tablespace can offer. Space-bound objects can harm database operation. If we find such an object, first investigate the situation. Then we can use ALTER TABLESPACE <tablespace> COALESCE, or add another datafile. Run spacebound.sql; if all is well, zero rows will be returned.
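spacebound.sql is the author's own script; the same idea can be sketched as a dictionary query (dictionary-managed storage model, column names as documented):

```sql
-- Segments whose next extent cannot fit in any single free extent of their tablespace
SELECT s.owner, s.segment_name, s.tablespace_name, s.next_extent
FROM   dba_segments s,
       (SELECT tablespace_name, MAX(bytes) max_free
        FROM   dba_free_space
        GROUP  BY tablespace_name) f
WHERE  s.tablespace_name = f.tablespace_name
AND    s.next_extent > f.max_free;
```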
Processes to review contention for CPU, memory, network or disk resources:
To check CPU utilization, go to the System Metrics => CPU Utilization page. 400% is the maximum CPU utilization because there are 4 CPUs on the phxdev and phxprd machines. We need to investigate if CPU utilization stays above 350% for a while.
Make a habit to Read DBA Manual:
Nothing is more valuable in the long run than for the DBA to be as widely experienced, and as widely read, as possible. Readings should include DBA manuals, trade journals, and possibly newsgroups or mailing lists.
Look for objects that break rules:
For each object-creation policy (naming convention, storage parameters, etc.) have an automated check to verify that the policy is being followed. Every object in a given tablespace should have exactly the same size for NEXT_EXTENT, which should match the tablespace default for NEXT_EXTENT. As of 10/03/2012, the default NEXT_EXTENT for DATAHI is 1 GB (1048576 KB), for DATALO 500 MB (524288 KB), and for INDEXES 256 MB (262144 KB). To check settings for NEXT_EXTENT, run nextext.sql. To check existing extents, run existext.sql.
All tables should have unique primary keys:
To check missing PK, run no_pk.sql. To check disabled PK, run disPK.sql. All primary key
indexes should be unique. Run nonuPK.sql to check. All indexes should use INDEXES
tablespace. Run mkrebuild_idx.sql. Schemas should look identical between
environments, especially test and production. To check data type consistency, run
datatype.sql. To check other object consistency, run obj_coord.sql.
Look for security policy violations:
Look in SQL*Net logs for errors and issues (client-side logs, server-side logs), and archive all alert logs to history.
Visit home pages of key vendors:
For new update information, make a habit of visiting the home pages of key vendors, such as:
- Oracle Corporation: http://www.oracle.com, http://technet.oracle.com, http://www.oracle.com/support, http://www.oramag.com
- Quest Software: http://www.quests.com
- Sun Microsystems: http://www.sun.com
Look for Harmful Growth Rates:
Review changes in segment growth when compared to previous reports to identify
segments with a harmful growth rate.
Review Tuning Opportunities and Perform Tuning Maintenance:
Review common Oracle tuning points such as cache hit ratio, latch contention, and other
points dealing with memory management. Compare with past reports to identify harmful
trends or determine impact of recent tuning adjustments. Make the adjustments
necessary to avoid contention for system resources. This may include scheduled down
time or request for additional resources.
Look for I/O Contention:
Review database file activity. Compare to past output to identify trends that could lead
to possible contention.
Review Fragmentation:
Investigate fragmentation (e.g. row chaining, etc.).
Project Performance into the Future:
Compare reports on CPU, memory, network, and disk utilization from both Oracle and the operating system to identify trends that could lead to contention for any one of these resources in the near future. Compare performance trends to the Service Level Agreement to see when the system will go out of bounds.

How to Change DBTIMEZONE after Database Creation
DBTIMEZONE is a function which returns the current value of Database Time Zone. It can be queried
using the example below:
SELECT DBTIMEZONE FROM DUAL;
DBTIME
------
-07:00
Please note the return type of the function is a time zone offset. The format ([+|-]TZH:TZM) contains the lead (+) or lag (-) sign with hour and minute specifications.
Notes:
1. Database Time zones can be queried from V$TIMEZONE_NAMES dictionary view.
2. A Time zone can be converted into Time Zone offset format using TZ_OFFSET function.
Example:
SELECT TZ_OFFSET('America/Menominee') FROM DUAL;
TZ_OFFS
--------
-06:00
3. The time zone is set during database creation using CREATE DATABASE. It can be altered using the ALTER DATABASE command. The database time zone cannot be altered if a column of type TIMESTAMP WITH LOCAL TIME ZONE exists in the database, because TIMESTAMP WITH LOCAL TIME ZONE columns are stored normalized to the database time zone. The time zone can be set in region name format or [+|-]HH:MM format.
In the case when you have any column of type TIMESTAMP WITH LOCAL TIME ZONE (TSLTZ), you have to follow this sequence: back up that table, drop the table, change the time zone, then restore the table. To check, run the below query and notice the output:
Select u.name || '.' || o.name || '.' || c.name "Col TSLTZ"
from sys.obj$ o, sys.col$ c, sys.user$ u
where c.type# = 231 and o.obj# = c.obj# and u.user# = o.owner#;
Col TSLTZ
--------------
ASSETDVP.TEST.TSTAMP
For example, follow the below steps:
1- Back up the table that contains this column (the ASSETDVP.TEST table).
2- Drop the table, or the column only.
3- Issue ALTER DATABASE again to change the DB time zone.
4- Add the dropped column back and restore the data, or restore the table if it was dropped.
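These steps can be sketched for the hypothetical ASSETDVP.TEST table above. CTAS is shown as one possible backup method; the cast keeps the holding table free of TSLTZ columns so it does not itself block the ALTER DATABASE:

```sql
-- 1. Back up the table, converting the TSLTZ column on the way out
CREATE TABLE assetdvp.test_bak AS
  SELECT CAST(tstamp AS TIMESTAMP WITH TIME ZONE) tstamp FROM assetdvp.test;
-- 2. Drop the original table (or just the TSLTZ column)
DROP TABLE assetdvp.test;
-- 3. Change the database time zone (bounce the database afterwards)
ALTER DATABASE SET TIME_ZONE = '-06:00';
-- 4. Re-create the table and restore the data
CREATE TABLE assetdvp.test (tstamp TIMESTAMP WITH LOCAL TIME ZONE);
INSERT INTO assetdvp.test SELECT tstamp FROM assetdvp.test_bak;
DROP TABLE assetdvp.test_bak;
```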
Example:
SQL> SELECT SESSIONTIMEZONE, DBTIMEZONE FROM DUAL;
SESSIONTIMEZONE DBTIMEZONE
+06:00 -07:00
SQL> ALTER DATABASE SET TIME_ZONE='America/Menominee';
Database altered.
SQL> ALTER DATABASE SET TIME_ZONE='-06:00';
Database altered.
SQL> Shutdown;
SQL> Startup;
SQL> SELECT SESSIONTIMEZONE, DBTIMEZONE FROM DUAL;
SESSIONTIMEZONE DBTIMEZONE
+06:00 +06:00
Note: Once the time zone is set, the database must be bounced for the change to take effect, because ALTER DATABASE doesn't change it for the running instance.
4. Difference between SYSDATE and DBTIMEZONE: SYSDATE shows the date-time details provided by the OS on the server. It has nothing to do with the time zone of the database.
5. DBTIMEZONE and SESSIONTIMEZONE differ in their operational scope. DBTIMEZONE shows the database time zone, while SESSIONTIMEZONE shows it for the session. This implies that if the time zone is altered at session level, only SESSIONTIMEZONE will change, not DBTIMEZONE.
Block Change Tracking
An RMAN incremental backup backs up only the blocks that were changed since the latest base incremental backup. But RMAN had to scan the whole database to find the changed blocks. Hence the incremental backup reads the whole database and writes only the changed blocks. Thus the RMAN incremental backup saves space, but gives no reduction in backup time.
Block change tracking is a feature introduced in oracle 10g which enables RMAN to read only the changed blocks, as well as write only the changed blocks.
During an incremental backup, oracle scans the whole data file and compares the SCN of the blocks in the data file with the SCN in the backup set files (if a block's SCN is greater than the SCN in the base backup, then that block is taken into consideration for the new incremental backup). Usually only a few blocks change between backups, so RMAN has to do unnecessary work reading the whole database. It is a time-consuming task.
Oracle introduced change tracking file to track the physical location of all database changes. During
an incremental backup, RMAN uses the change tracking file to quickly identify only the blocks that
have changed, avoiding the time consuming task of reading the entire data file to determine which
blocks have changed.
To enable/disable block change tracking:
SQL> Alter Database Enable Block Change Tracking;
-or-
SQL> Alter database enable block change tracking using file
'D:/change_tracking/chg01.dbf';
SQL> Alter Database Disable Block Change Tracking;
The new background process that does this logging is the Change Tracking Writer (CTWR). The views V$BLOCK_CHANGE_TRACKING and V$BACKUP_DATAFILE can be useful: they show where the block change tracking file is stored, whether it is enabled, how large it is, and whether BCT was used.
SQL> Select * from V$BLOCK_CHANGE_TRACKING;
SQL> Select Completion_time, datafile_blocks, blocks_read, blocks,
used_change_tracking
From v$backup_datafile
where to_char(completion_time, 'dd/mon/yy') = to_char(sysdate,
'dd/mon/yy');
ALTER SYSTEM SWITCH LOGFILE vs ALTER SYSTEM ARCHIVE LOG CURRENT
Both commands force a log switch, but they do it in different ways. ARCHIVE LOG CURRENT waits for the archiving to complete, which can take several minutes for large redo log files, whereas SWITCH LOGFILE is fast, as it does not wait for the archiver process to finish writing the online redo logs.
The ARCHIVE LOG CURRENT command is safer because it waits for the OS to acknowledge that the redo log has been successfully written. Hence this command is best practice in production RMAN backup scripts.
ARCHIVE LOG CURRENT also allows you to specify the thread to archive (if you do not pass the thread parameter, oracle will archive all full online redo logs), whereas SWITCH LOGFILE switches only the current thread.
Note: In a RAC environment, ARCHIVE LOG CURRENT will switch the logs of all nodes (instances), whereas SWITCH LOGFILE will only switch the log file on the instance where you issue the command.
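For example, the three variants side by side (the THREAD clause of the archive_log_clause is shown with an illustrative thread number):

```sql
-- Archive the current log of every enabled thread (all RAC instances):
ALTER SYSTEM ARCHIVE LOG CURRENT;
-- Archive only the current log of one specific thread:
ALTER SYSTEM ARCHIVE LOG THREAD 1 CURRENT;
-- Switch the log file only on the local instance, without waiting for the archiver:
ALTER SYSTEM SWITCH LOGFILE;
```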
SWITCH LOGFILE is not a correct method after hot backup
In order to be sure that all changes generated during a hot backup are available for a recovery,
Oracle recommends that the current online log be archived after the last tablespace in a hot backup is
taken out of backup mode. A similar recommendation stands for after a recovery manager online
database backup is taken.
Many DBAs and Oracle books suggest using the ALTER SYSTEM SWITCH LOGFILE command to achieve this. Using SWITCH LOGFILE to obtain archives of the current logs is not a perfect method, because the command returns success as soon as the log writer has moved to the next log, but before the previous log has been completely archived. This could allow a backup script to begin copying that archived redo log before it is completely archived, resulting in an incomplete copy of the log in the backup.
Thus the better command to use for archiving the current log is ALTER SYSTEM ARCHIVE LOG CURRENT. This command will not return until the current log is completely archived. For the same reasons outlined above, a backup script should never just back up all the archived redo logs. If a script does not restrict itself to those logs that it knows to be archived, it may improperly try to back up an archived redo log that is not yet completely archived.
To determine which logs are archived, a backup script should query the v$archived_log view to
obtain a file list for copying.
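A sketch of such a query (the column list is illustrative; ARCHIVED and DELETED are documented columns of the view):

```sql
-- Archived logs that are fully archived and not yet deleted: safe for a script to copy
SELECT name, thread#, sequence#, completion_time
FROM   v$archived_log
WHERE  archived = 'YES'
AND    deleted  = 'NO'
ORDER  BY thread#, sequence#;
```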

How to find Oracle DBID
DBID stands for database identifier, a unique identifier for each running oracle database. It is found in the control files as well as in the datafile headers. If the database is open, you can query v$database directly and find the DBID. You can find it without access to the datafiles too.
SQL>startup mount;
SQL>select DBID from v$database;
The DBID is important when you need to recover the spfile and controlfile. When you try to recover the controlfile through RMAN without setting the DBID, it will give you the error:
RMAN-06495: must explicitly specify DBID with SET DBID command
Here, if you are using a recovery catalog, connect to the RMAN recovery catalog and issue the LIST INCARNATION command. You must first start the database in NOMOUNT mode.
C:\> rman target sys/oracle@orcl3 catalog catalog/catalog@rman
RMAN-06193: connected to target database (not started)
RMAN-06008: connected to recovery catalog database
RMAN> startup nomount;
RMAN-06196: Oracle instance started
Total System Global Area 94980124 bytes
Fixed Size 75804 bytes
Variable Size 57585664 bytes
Database Buffers 37240832 bytes
Redo Buffers 77824 bytes
RMAN> list incarnation;
RMAN-03022: compiling command: list
List of Database Incarnations
DB Key Inc Key DB Name DB ID CUR Reset SCN Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1 2 ORCL3 691421794 YES 542853 23-AUG-12

In the case of control file recovery, if your backups are not in the default location or you do not have a recovery catalog, then RMAN does not have the information about which backup piece is appropriate for the restore. Here, you have to set the DBID first and then try the above recovery. Sometimes this becomes a challenging task: as we know, the DBID is stored in the control file, which we have already lost. So it is important for a DBA to document the DBID and server information in case of emergency.
This is where this article stands. Is there a way to find the DBID if the database is down, you are not using a recovery catalog, and you did not maintain any documentation?
To the best of my knowledge, check the following ways in this situation:
1. We can get the DBID from RMAN output (either a backup log or an RMAN session).
2. From an RMAN autobackup (when the control file autobackup parameter is ON).
3. We can configure the alert log file to store the DBID regularly.
4. We can retrieve the DBID from a file dump.
1. DBID from RMAN Output:
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ISSCOHR (DBID=2613999945)
connected to recovery catalog database
2. DBID From Control File autobackup:
If you choose to turn RMAN control file autobackup ON, then mention the format parameter with %F,
so that the generated file names follow the %F pattern; this %F pattern includes the DBID in the filename.
Starting Control File and SPFILE Autobackup at 31-OCT-12
piece handle=E:\RMAN_BACKUP\C-2613999945-20121031-00 comment=NONE
Finished Control File and SPFILE autobackup at 31-OCT-12
Where C indicates that it is a control file backup
2613999945 indicates DBID
20121031 indicates the date backup was created
00 indicates a hex sequence number to make the filename unique on the same day
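Because the DBID sits between the first and second hyphens, it can be pulled out of an autobackup piece name with a one-line shell command. A minimal sketch, using the example file name above:

```shell
# Autobackup piece names follow the pattern C-<DBID>-<YYYYMMDD>-<hex seq>.
fname="C-2613999945-20121031-00"
# The second hyphen-delimited field is the DBID.
dbid=$(echo "$fname" | cut -d- -f2)
echo "$dbid"    # prints 2613999945
```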
3. DBID from the alert log file:
We can make sure that the database DBID is written to the alert log file on a regular basis with the help of a
package named DBMS_SYSTEM. To do that, include the following in your regular backup job or
script:
COL dbid NEW_VALUE hold_dbid
SELECT dbid FROM v$database;
exec dbms_system.ksdwrt(2,'DBID: '||TO_CHAR(&hold_dbid));
You will get an entry in alert log such as:
DBID: 1681257132
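With those entries in place, the latest DBID can be pulled from the alert log with a simple grep. The sketch below fabricates a tiny alert log fragment purely for illustration; in practice, point it at your real alert_SID.log in the bdump destination:

```shell
# Create a stand-in alert log fragment (hypothetical path, illustration only).
log=/tmp/alert_demo.log
printf 'Thread 1 advanced to log sequence 13\nDBID: 1681257132\n' > "$log"
# Grab the most recent DBID entry.
grep 'DBID:' "$log" | tail -1 | awk '{print $2}'    # prints 1681257132
```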
If you did not set autobackup ON, then try to check the DBID from a backup piece or any image
copy that holds either the SYSTEM, SYSAUX or UNDO datafiles.
Note: If you back up your database as a compressed backup, then with this method you will not be
able to see the DBID.
If you have the SYSTEM datafile or UNDO datafile either as an image copy or as a backup piece, you can
use the commands below on a UNIX platform to find the DBID:
strings file_name |grep MAXVALUE
strings undotbs01.dbf |grep MAXVALUE
If you have SYSAUX datafile either as image copy or as backup piece then you can use:
strings file_name |grep DBID

4. DBID FROM A FILE DUMP:
Another way: if any of the physical files (datafiles, logfiles, and even archived log files) are available, we
can extract the DBID to a trace file. For that we do not need the database to be mounted:
SQL> connect /@rman as sysdba
SQL>startup nomount;
SQL>alter system dump datafile 'D:\Isscohr\oradata\SYSTEM.DBF' block min 1
block max 10;
Now search in the trace file generated in user_dump_dest location with the string 'Db ID' and you
will get an entry such as:
Db ID=1681257132=0x6435f2ac, Db Name='RMAN'
You can use the same syntax to make a dump of redo log files/archived log files as :
SQL> alter system dump logfile ' ';

If after all the above four steps you are still not able to get the DBID (because you have lost all database
files, you are not using a recovery catalog and you are not using controlfile autobackup), but you
have an old controlfile available, then mount the database with the old controlfile and query
the V$DATABASE view.



Using the SHRINK command to deal with a fragmented table
When tables are fragmented, queries on those tables automatically get slow because many more
blocks that hold no data must be scanned. To remove fragmentation from a table (10g onwards) you
can use the SHRINK command or the DBMS_REDEFINITION method. Check this post to handle fragmentation
before Oracle 10g: How to Remove Fragmentation.
If the fragmented table is small in size it is better to use the SHRINK command; otherwise use
DBMS_REDEFINITION.
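To spot fragmented tables in the first place, a rough estimate can be computed from optimizer statistics. This is only a sketch: it assumes an 8 KB block size and recently gathered statistics, and the HRMS owner is taken from the example below; adjust both for your environment.
SQL> SELECT table_name,
            ROUND(blocks*8/1024) alloc_mb,
            ROUND(num_rows*avg_row_len/1024/1024) used_mb,
            ROUND(blocks*8/1024 - num_rows*avg_row_len/1024/1024) wasted_mb
     FROM dba_tables
     WHERE owner='HRMS' AND blocks > 0
     ORDER BY 4 DESC;
Tables where wasted_mb is a large fraction of alloc_mb are the candidates for SHRINK or DBMS_REDEFINITION.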
Consider the example table having fragmentation up to 83%
OWNER SEGMENT_NAME SEGMENT_TYPE TBS_NAME MBs WASTED_MB Wasted %
-------- ---------------- ------------ --------- ---- -------- -------
HRMS PAY_PAYMENT_MASTER TABLE HRMS 344 284 83
When we run any query on this table it has to scan all 344 MB of data, which is a time-consuming
task and will increase execution time for queries. As this is not a very big table, we can use the
SHRINK command to remove the fragmentation.
Step 1: Check whether row movement is enabled for the table PAY_PAYMENT_MASTER. If not, we need to
enable it.
SQL> select row_movement from dba_tables where
table_name='PAY_PAYMENT_MASTER' and owner='HRMS';
ROW_MOVE
--------
DISABLED
SQL> alter table HRMS.PAY_PAYMENT_MASTER enable row movement;
Table altered.
SQL> alter table HRMS.PAY_PAYMENT_MASTER shrink space compact;
Table altered.
The above command will re-arrange the used and empty blocks, but the High Water Mark (HWM) of this
table still remains the same. Thus we need to reset the HWM for this table.
Step 2: Reset the high water mark
SQL> alter table HRMS.PAY_PAYMENT_MASTER shrink space;
Table altered.
Step 3: Check and rebuild indexes associated with this table.
SQL>Select index_name,index_type from dba_indexes where
table_name='PAY_PAYMENT_MASTER';
INDEX_NAME INDEX_TYPE
----------------------- ---------------------------
PAY_PAYMENT_MASTER_PK NORMAL
SQL> alter index HRMS.PAY_PAYMENT_MASTER_PK rebuild online;
Index altered.
Now gather statistics on this table and finally check the table details again. You will see the table size
drastically reduced.
SQL> EXEC DBMS_STATS.gather_table_stats('HRMS', 'PAY_PAYMENT_MASTER',
estimate_percent => 55, cascade => TRUE);
PL/SQL procedure successfully completed.
OWNER SEGMENT_NAME SEGMENT_TYPE TBS_NAME MBs WASTED_MB Wasted%
-------- ------------------- ------------ --------- ---- -------- -------
HRMS PAY_PAYMENT_MASTER TABLE HRMS 58 8 14
From the above example you can see the table size reduced from 344 MB to 58 MB (only 14%
wasted). Queries against this table will now definitely be faster than before.
How to recover or re-create temporary tablespace in 10g
In a database you may discover that your temporary tablespace has been deleted at the OS level or has
become corrupted. In order to get it back you might think about recovering it. The recovery process is simply to
restore the temporary file from backup and roll the file forward using archived log files if you are in
archivelog mode.
Another solution is simply to drop the temporary tablespace, re-create a new one, and assign the
new one as the default temporary tablespace for the database users.
SQL> Select File_Name, File_id, Tablespace_name from DBA_Temp_Files;
FILE_NAME FILE_ID TABLESPACE_NAME
----------------------------- ------- ----------------
D:\ORACLE\ORADATA\SADHAN\TEMP02.DBF 1 TEMP
Take the affected temporary files offline, create a new TEMP tablespace, and assign it as the default
temporary tablespace:
SQL> Alter database tempfile 1 offline;
SQL> Create temporary tablespace TEMP1 tempfile
'D:\ORACLE\ORADATA\SADHAN\TEMP02.DBF' size 1500M;
SQL> alter database default temporary tablespace TEMP1;
Check for users who are not pointing to the default temp tablespace, assign them explicitly,
and finally drop the old tablespace.
SQL> Select temporary_tablespace, username from dba_users where
temporary_tablespace<>'TEMP';
TEMPORARY_TABLESPACE USERNAME
-------------------- ---------
TEMP SH1
TEMP SH2

SQL>alter user SH1 temporary tablespace TEMP1;
SQL>alter user SH2 temporary tablespace TEMP1;
SQL>Drop tablespace temp;
How to drop and re-create TEMP Tablespace in Oracle
1. Create Temporary Tablespace Temp
CREATE Temporary Tablespace TEMP2 tempfile 'd:\oracle\oradata\oradata\temp01' SIZE 1500M,
'd:\oracle\oradata\oradata\temp02' SIZE 1500M;
2. Move Default Database temp tablespace
Alter database default TEMPORARY tablespace TEMP2;
3. Make sure no sessions are using your Old Temp tablespace
SQL>Select username, session_num, session_addr from v$sort_usage;
If the result set contains any rows then your next step will be to find the SID from the V$SESSION
view. You can find session id by using SESSION_NUM or SESSION_ADDR from previous result set.
SQL> Select sid, serial#, status from v$session where serial#=session_num;
or
SQL> Select sid, serial#, status from v$session where saddr=session_addr;
Now kill the session with the IMMEDIATE option (or you can do it directly from Toad):
SQL> Alter system kill session 'sid,serial#' immediate;
4. Drop temp tablespace
SQL> drop tablespace temp including contents and datafiles;
5. Recreate Tablespace Temp
SQL> create TEMPORARY tablespace TEMP tempfile 'D:\oracle\oradata\temp\temp01' size 1500m;
6. Move the default temp tablespace back to the new TEMP tablespace
SQL> Alter database default temporary tablespace TEMP;
7. Drop the old temporary tablespace TEMP2
SQL> drop tablespace TEMP2 including contents and datafiles;
In fact there is no need to shut down while doing these operations. If anything happens to the temp
tablespace, the Oracle database will ignore the error, but DML and SELECT queries that need sort space will suffer.
Is it Possible to 'DROP' a Datafile from a Tablespace?
In fact Oracle does not provide an interface for dropping datafiles the way you can drop a schema
object such as a table, a view, or a user. Once you add a datafile to a tablespace, the
datafile cannot simply be removed, but in some cases you need to do it; then you can perform some
work to get close to the desired result.
How to deal with different scenarios where you need to remove a datafile:
Select file_name, tablespace_name from dba_data_files where tablespace_name ='SDH_TIMS_DBF';
FILE_NAME TABLESPACE_NAME
------------------------------------ ---------------
D:\ORACLE\ORADATA\SADHAN\SDH_TIMS01.DBF SDH_TIMS_DBF
D:\ORACLE\ORADATA\SADHAN\SDH_TIMS02.DBF SDH_TIMS_DBF
If the datafile you want to remove is the only datafile in that tablespace, then simply drop the
entire tablespace:
DROP TABLESPACE <tablespace name> INCLUDING CONTENTS;
Note: Before performing certain operations such as taking tablespaces/datafiles offline, and trying to
drop them, ensure you have a full backup.
The DROP TABLESPACE command removes the tablespace, the datafiles, and their contents from
the data dictionary. Oracle will no longer have access to ANY object that was contained in this
tablespace. The physical datafile must then be removed using an operating system command
(Oracle NEVER physically removes any datafiles).
If you have more than one datafile in the tablespace and you want to keep the objects of the first
datafile, then you must export all the objects you want to keep and then drop the tablespace.
Select owner, segment_name, segment_type from dba_segments where
tablespace_name='<TABLESPACE_NAME>';
Note: Make sure you specify the tablespace name in capital letters.
OWNER SEGMENT_NAME SEGMENT_TYPE
TIMS GEN_BUYER_OPEN_BALANCE TABLE
TIMS GEN_BUYER_PROFILE TABLE
TIMS GEN_BUYER_STATEMENT TABLE
TIMS GEN_COMPANY_QUANTITY_TYPE TABLE
TIMS GEN_CONTRACT_PROFILE TABLE
TIMS GEN_CONTRACT_WH_LOCATIONS TABLE
TIMS GEN_DEPOSIT_CU TABLE
TIMS GEN_DEPOSIT_INSTALLMENT TABLE
TIMS STK_ITEM_STATEMENT TABLE PARTITION
TIMS USR_SMAN_SALESMAN_FK_I INDEX
TIMS AG_DTL_PK INDEX
TIMS AG_DTL_AGING_FK_I INDEX
Now Re-create the tablespace with the desired datafiles then import the objects into that tablespace.
If you have just added the datafile and Oracle has not yet allocated any space within it,
then you can resize the datafile to be smaller than 5 Oracle blocks. If the datafile is resized
to smaller than 5 Oracle blocks, it will never be considered for extent allocation. At some
later date, the tablespace can be rebuilt to exclude the incorrect datafile.
ALTER DATABASE DATAFILE '<filename>' RESIZE <new smaller size>;
Here we are not including the OFFLINE DROP command, because it is not meant to allow you to
remove a datafile.
ALTER DATABASE DATAFILE <datafile name> OFFLINE DROP;
ALTER DATABASE DATAFILE <datafile name> OFFLINE; --in archivelog mode
What the above command really means is that you are offlining the datafile with the intention of
dropping the tablespace. Once the datafile is offline, Oracle no longer attempts to access it, but it is
still considered part of that tablespace. This datafile is marked only as offline in the controlfile and
there is no SCN comparison done between the controlfile and the datafile during startup. The entry for
that datafile is not deleted from the controlfile to give us the opportunity to recover that datafile.
Datafile/Tempfile/Undofile Alteration in Oracle
Datafiles are physical files of the operating system that store the data of all logical
structures in the database. The database assigns each datafile two associated file numbers,
an absolute file number and a relative file number, that are used to uniquely identify it.
Note: The relative and absolute file numbers are usually the same; however, when the
number of datafiles in the database exceeds a threshold (1023), the relative file number
differs from the absolute file number.
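You can see both numbers side by side for each datafile; a quick sketch:
SQL> SELECT file# AS absolute_no, rfile# AS relative_no, name FROM v$datafile;
On most databases the two columns will match row for row, since the 1023-file threshold is rarely exceeded.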
Number and size of Datafiles:
The DB_FILES initialization parameter indicates the amount of SGA space to reserve for
datafile information and thus the maximum number of datafiles that can be created for
the instance. You can change the value of DB_FILES (by changing the initialization
parameter setting), but the new value does not take effect until you shut down and
restart the instance.
When determining a value for DB_FILES, take the following into consideration:
If the value of DB_FILES is too low, you cannot add datafiles beyond the DB_FILES limit
without first shutting down the database.
If the value of DB_FILES is too high, memory is unnecessarily consumed.
Note: The number of datafiles contained in a tablespace, and ultimately the database, can have
an impact upon performance. A tablespace must have at least one datafile. Further, you can add
more datafiles to increase the size of a tablespace.
Place Datafiles Separately:
Place the datafiles on separate disk so that users query information disk drives can work
simultaneously retrieving data at the same time.
Store Datafiles Separate from Redo Log Files
Datafiles should not be stored on the same disk drive that stores the redo log files. If the
datafiles and redo log files are stored on the same disk drive and that disk drive fails,
then the files cannot be used in your database recovery procedures. If you multiplex your redo
log files, the chance of losing all of your redo log files is low, and you can then store
datafiles on the same drive as some redo log files.
Creating and Adding Datafiles to a Tablespace: (Using any of the statement)
CREATE TABLESPACE
ALTER TABLESPACE ... ADD DATAFILE
CREATE DATABASE
ALTER DATABASE ... CREATE DATAFILE
Note: If you add new datafiles to a tablespace and do not fully specify the filenames, the
datafiles are created in the default database directory or the current directory. Oracle recommends
you always specify a fully qualified name for a datafile.
Changing Datafile Size:
There are two ways to alter the size of datafiles:
Enabling and Disabling Automatic Extension for a Datafile
Manually Resizing a Datafile
Enabling/Disabling Auto-Extension for a Datafile:
ALTER TABLESPACE users ADD DATAFILE '/u02/oracle/rbdb1/users03.dbf' SIZE
10M
AUTOEXTEND ON NEXT 512K MAXSIZE 250M;
Note: The value of NEXT is the minimum size of the increments added to the file when it extends.
The value of MAXSIZE is the maximum size to which the file can automatically extend.
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/users03.dbf' AUTOEXTEND OFF;
Manually Resizing a Datafile:
For a bigfile tablespace you can use the ALTER TABLESPACE statement to resize its single
datafile; you are not allowed to add a datafile to a bigfile tablespace.
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf' RESIZE 100M;
Note: The size of a file can also be decreased to a specific value.
Taking Datafiles Online or Taking Offline in ARCHIVELOG Mode:
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf' ONLINE;
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf' OFFLINE;
Note: The datafiles of a read-only tablespace can be taken offline or brought online, but bringing a
file online does not affect the read-only status of the tablespace. You cannot write to the datafile
until the tablespace is returned to the read/write state. You can make all datafiles of a tablespace
temporarily unavailable by taking the tablespace itself offline.

Taking Datafiles Offline/Online:
ALTER TABLESPACE <Tablespace_name> DATAFILE {ONLINE|OFFLINE};
ALTER TABLESPACE < Tablespace_name> TEMPFILE {ONLINE|OFFLINE};
Note: The ALTER TABLESPACE statement takes datafiles offline as well as the tablespace but it
cannot be used to alter the status of a temporary tablespace or its tempfile(s).
To take a datafile offline when the database is in NOARCHIVELOG mode, use the ALTER
DATABASE statement with both the DATAFILE and OFFLINE FOR DROP clauses:
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/users03.dbf' OFFLINE FOR DROP;
It does not actually drop the datafile. The datafile remains in the data dictionary, and you must
drop it yourself using an ALTER TABLESPACE ... DROP DATAFILE statement.
Renaming and Relocating Datafiles (Single Tablespace)
1. Take the tablespace that contains the datafiles offline (database must be open).
ALTER TABLESPACE users OFFLINE NORMAL;
2. Rename the datafiles using the operating system command.
3. Use the ALTER TABLESPACE statement with the RENAME DATAFILE clause to change the
filenames within the database.
ALTER TABLESPACE users RENAME DATAFILE '/u02/oracle/rbdb1/user1.dbf',
'/u02/oracle/rbdb1/user02.dbf' TO '/u02/oracle/rbdb1/users01.dbf',
'/u02/oracle/rbdb1/users02.dbf';
4. Back up the database after making any structural changes to it.
Relocating Datafiles in a Single Tablespace
1. Check the file name or size using DBA_data_files view.
SQL> SELECT FILE_NAME, BYTES FROM DBA_DATA_FILES WHERE TABLESPACE_NAME =
'USERS';
FILE_NAME BYTES
------------------------------------------ ----------------
/u02/oracle/rbdb1/users01.dbf 102400000
/u02/oracle/rbdb1/users02.dbf 102400000
2. Take the Tablespace containing the datafiles offline:
ALTER TABLESPACE users OFFLINE NORMAL;
3. Copy or Move the datafiles to their new locations and rename them
ALTER TABLESPACE users RENAME DATAFILE '/u02/oracle/rbdb1/users01.dbf',
'/u02/oracle/rbdb1/users02.dbf' TO '/u03/oracle/rbdb1/users01.dbf',
'/u04/oracle/rbdb1/users02.dbf';
4. Back up the database after making any structural changes to it.
Renaming and Relocating Datafiles in Multiple Tablespaces
For that you must have the ALTER DATABASE system privilege.
1. Ensure that the database is mounted.
2. Copy the datafiles to be renamed to their new locations and new names.
3. Use ALTER DATABASE to rename the file pointers in the database control file.
ALTER DATABASE RENAME FILE '/u02/oracle/rbdb1/sort01.dbf',
'/u02/oracle/rbdb1/user3.dbf' TO '/u02/oracle/rbdb1/temp01.dbf',
'/u02/oracle/rbdb1/users03.dbf;
4. Back up the database after making any structural changes to it.
Note: To rename or relocate datafiles of the SYSTEM tablespace, the default temporary
tablespace, or the active undo tablespace you must use this ALTER DATABASE method because
you cannot take these tablespaces offline.
Dropping Datafiles
Alter Database Datafile 'C:\Oracle1\Oradata\Shaan\Users01.Dbf' Offline
Drop;
Alter Tablespace Users Drop Datafile
'C:\Oracle1\Oradata\Shaan\Users01.Dbf';
Restrictions for Dropping Datafiles
The Database must be open.
If a datafile is not empty, it cannot be dropped.
You cannot drop the first or only datafile in a tablespace.
This means that DROP DATAFILE cannot be used with a bigfile tablespace.
You cannot drop datafiles in a read-only tablespace.
You cannot drop datafiles in the SYSTEM tablespace.
If a datafile in a locally managed tablespace is offline, it cannot be dropped.
Temp Tablespace and Tempfiles:
Temporary tablespaces are used to manage space for database sort operations and for
storing global temporary tables. For example, if you join two large tables, and Oracle
cannot do the sort in memory (see SORT_AREA_SIZE initialization parameter), space will
be allocated in a temporary tablespace for the sort. Operations that
might require disk sorting are: CREATE INDEX, ANALYZE, SELECT DISTINCT, ORDER BY,
GROUP BY, UNION, INTERSECT, MINUS, sort-merge joins, etc. The DBA should assign a
temporary tablespace to each user in the database to prevent them from allocating sort
space in the SYSTEM tablespace. This can be done with one of the following commands:
SQL> CREATE USER SNT DEFAULT TABLESPACE Data TEMPORARY TABLESPACE Temp;
SQL> ALTER USER SNT TEMPORARY TABLESPACE temp;
Note: Temporary tablespace cannot contain permanent objects so doesn't need to be backed up.
Tempfiles:
Tempfiles are similar to ordinary datafiles with the following exceptions:
Tempfiles are always in NOLOGGING mode, and you cannot make a tempfile read-only.
You cannot rename a tempfile, and you cannot create a tempfile with the ALTER
DATABASE command.
When you create a tempfile, it is not always guaranteed allocation of disk space
for the file size specified.
When you create a tempfile, Oracle only writes to the header and the last block of the
file.
Tempfiles are not recorded in the database's control file, which means you can simply
recreate them whenever you restore the database or after an accidental delete.
You cannot remove datafiles from a tablespace until you drop the entire tablespace.
However, you can remove a tempfile from a database.
If you remove all tempfiles from a temporary tablespace, you may encounter the
error ORA-25153: Temporary Tablespace is Empty. In that case, recreate a tempfile:
SQL> ALTER TABLESPACE temp ADD TEMPFILE '/oradata/temp03.dbf' SIZE 100M;
Note: Except for adding a tempfile you cannot use the ALTER TABLESPACE statement for a locally
managed temporary tablespace (operations like rename, set to read only, recover, etc. will fail).
SQL> Create Temporary Tablespace Temp Tempfile '/Oradata/Mytemp_01.Tmp'
Size 20m Extent Management Local Uniform Size 16m;
SQL> CREATE TEMPORARY TABLESPACE temp;
Dropping Tempfiles
SQL>Alter Tablespace Lmtemp Drop Tempfile '/U02/Oracle/Data/Lmtemp02.Dbf';
SQL>Alter Database Tempfile '/U02/Oracle/Data/Lmtemp02.Dbf' Drop Including
Datafiles;


Default Temporary Tablespaces:
In Oracle 9i and above, one can define a Default Temporary Tablespace at database
creation time, or by issuing an "ALTER DATABASE" statement:
SQL>Alter Database Default Temporary Tablespace Temp;
If no default temporary tablespace has been defined, SYSTEM is used. Each database can be assigned one and
only one default temporary tablespace. Using this feature, a temporary tablespace is
automatically assigned to new users.
Restrictions apply to default temporary tablespaces:
The default Temporary Tablespace must be of type temporary
The default Temporary Tablespace cannot be taken off-line
The default Temporary Tablespace cannot be dropped until you create another one.
Monitoring Temporary Tablespace
Unlike datafiles, tempfiles are not listed in V$DATAFILE and DBA_DATA_FILES. Use
V$TEMPFILE and DBA_TEMP_FILES instead.
One can monitor Temporary segments from V$SORT_SEGMENT and V$SORT_USAGE
DBA_FREE_SPACE does not record free space for temporary tablespaces. Use
V$TEMP_SPACE_HEADER instead:
SQL> Select TABLESPACE_NAME, BYTES_USED, BYTES_FREE From
V$TEMP_SPACE_HEADER;
TABLESPACE_NAME BYTES_USED BYTES_FREE
------------------ ---------- ----------
TEMP 52428800 52428800
Monitoring Default Database Properties
SQL> SELECT * FROM DATABASE_PROPERTIES where PROPERTY_NAME =
'DEFAULT_TEMP_TABLESPACE';
Note: All new users that are not explicitly assigned a TEMPORARY Tablespace, will get the Default
Temporary Tablespace as its TEMPORARY Tablespace. Also when you assign a TEMPORARY
tablespace to a user, Oracle will not change this value next time you change the Default
Temporary Tablespace for the database.
What is Undo?
The database creates and manages information that is used to roll back, or undo, changes to
the database. Such information consists of records of the actions of transactions before
they are committed. These records are collectively referred to as undo.
The Undo records are used to:
Roll back transactions when a ROLLBACK statement is issued
Recover the database
Provide read consistency
Analyze data as of an earlier point in time by using Oracle Flashback Query
Recover from logical corruptions using Oracle Flashback features
Using CREATE DATABASE to Create an Undo Tablespace
CREATE DATABASE rbdb1
CONTROLFILE REUSE
.
UNDO TABLESPACE undotbs_01 DATAFILE '/u01/oracle/rbdb1/undo0101.dbf';
Using the CREATE UNDO TABLESPACE Statement:
SQL>Create Undo Tablespace Undotbs_02 Datafile
'/U01/Oracle/Rbdb1/Undo0201.Dbf' Size 2m Reuse Autoextend On;
Adding a datafile to the Undo Tablespace:
Alter Tablespace Undotbs_01 Add Datafile '/U01/Oracle/Rbdb1/Undo0102.Dbf'
Autoextend On Next 1m Maxsize Unlimited;
Dropping an Undo Tablespace:
An undo tablespace can only be dropped if it is not currently in use by any instance. If the
undo tablespace contains any outstanding transactions, the DROP
TABLESPACE statement fails.
SQL>DROP TABLESPACE undotbs_01;
Note: If you drop an undo tablespace, all of its contents are dropped with it. Be careful not
to drop an undo tablespace if its undo information is still needed by existing queries.
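Before switching or dropping, it is worth confirming that no active transactions still reside in the old undo tablespace. A sketch (the tablespace name UNDOTBS_01 is carried over from the example above; substitute your own):
SQL> SELECT r.segment_name, r.tablespace_name, t.status
     FROM dba_rollback_segs r, v$rollname n, v$transaction t
     WHERE r.segment_name = n.name
       AND n.usn = t.xidusn
       AND r.tablespace_name = 'UNDOTBS_01';
If no rows are returned, the tablespace holds no active transactions and the DROP should succeed.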
Switching Undo Tablespaces:
SQL>ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02;
Note: If the parameter value for UNDO_TABLESPACE is set to '' (two single quotes), then
the current undo tablespace is switched out without another undo tablespace being
switched in. Please use this statement with care.
SQL> ALTER SYSTEM SET UNDO_TABLESPACE = '';
Monitoring UNDO Information:
V$UNDOSTAT, V$ROLLSTAT, V$TRANSACTION, DBA_UNDO_EXTENTS,
DBA_HIST_UNDOSTAT


Important ORA- Errors and their Solution
ORA-01034/ ORA-07318: Oracle not available or No such file or directory
This typically occurs when a PC client tries to access the database while the Oracle instance is
shut down; restart the instance.
ORA-01033: Initialization and shutdown in progress
Check whether the target database is indeed in the middle of initialization or shutdown,
or whether Oracle is attempting to start up or shut down and is hanging for some
reason.
Try to go into Administrative Tools->Services and making sure that both your Listener
service and the actual Database service are both set to automatic. Turn everything else for
Oracle (like the Management Server) off (set to manual). Reboot and see what happens.
Well if the services are set to automatic and they actually are showing as "Started" then
perhaps you need to Stop and Restart the services and then try.
ORA-00106 cannot startup/shutdown database when connected to a dispatcher
Cause: An attempt was made to start or shut down an instance while connected to a shared
server via a dispatcher.
Action: Reconnect as user INTERNAL without going through the dispatcher. For most
cases, this can be done by connect to INTERNAL without specifying a network connect
string.
ORA-00107 failed to connect to ORACLE listener process
Cause: Most likely due to the fact that the network listener process has not been started.
Action: Check for the following:
The network configuration file is not set up correctly.
The client side address is not specified correctly.
The listener initialization parameter file is not set up correctly.
ORA-01031: insufficient privileges
If you are seeing this error on a windows machine when doing 'sqlplus / as sysdba', you
might want to try the following:
Make sure the oracle user is a member of the dba group (or ora_dba group)
Make sure that the sqlnet.ora has the following line in it:
SQLNET.AUTHENTICATION_SERVICES= (NTS)
Follow the separate post for detailed description of above related issues:
ORA-01031: insufficient privileges
How to Stop access using "/ as sysdba"
ORA-00257: Archiver is stuck. Connect internal only
The archive destination is probably full. Back up the archive logs and remove them to free
some space on the drive; the archiver will then restart.
Check that the initialization parameter LOG_ARCHIVE_DEST in the initialization file is set
correctly.
It is always helpful to check the trace files for more details.
RMAN-06059: expected archived log not found
RMAN attempted to backup an archivelog file, but couldn't find it.
Cause: This can happen for a variety of reasons; the file has been manually moved or
deleted, the archive log destination has recently been changed, the file has been
compressed etc.
Your options are either to restore the missing file(s), or to perform a crosscheck.
RMAN>change archivelog all crosscheck;
Note: It is advisable to perform a full backup of the database at this point.
When an archive log crosscheck is performed, RMAN checks each archive log in turn to
make sure that it exists on disk (or tape). Those that are missing are marked as unavailable.
If you have got missing logs, this won't bring them back, but it will allow you to get past this error
and back up the database. For a more detailed solution, follow the separate post in this
blog: RMAN-06059: expected archived log not found
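After the crosscheck has marked the missing logs as expired, you can also clear those expired entries from the RMAN repository so that subsequent backups do not stumble over them again. A sketch (DELETE will ask for confirmation before removing the records):
RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;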
ORA-01118: Cannot add any more data file limit exceeded.
When the Database is created the db_file parameter in the initialization file is set to a
limit. You can shutdown the database and reset these up to the MAX_DATAFILE as
specified in database creation. The default for MAXDATAFILES is 30. If the
MAX_DATAFILES is set to low, you will have to rebuild the control file to increase it before
proceeding.
The simplest way to recreate the controlfile and change the hard-coded value of MAXDATAFILES is:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
Then go to the UDUMP destination, pick up the generated trace file, and modify the value of MAXDATAFILES.
SHUTDOWN IMMEDIATE;
STARTUP NOMOUNT;
SQL>@(name of edited file);
Finally mount and open the database.
ORA-01537: cannot add data file
An ORA-01537 is thrown when attempting to re-add a missing tempfile to a temporary
tablespace:
SQL> select name from v$tempfile;
NAME
-----------------------------------------------------------
D:\oracle\oradata\orcl3\temp01.dbf
SQL> alter tablespace TEMP
add tempfile 'D:\oracle\oradata\orcl3\temp01.dbf' reuse;
ERROR at line 1:
ORA-01537: cannot add data file 'D:\oracle\oradata\orcl3\temp01.dbf' -
file already part of database.
This can happen if a step has been missed during a database cloning exercise.
Solution: With a temporary tablespace, either drop the missing tempfile and then add a
new one, or use a different file name and leave the previous entry as it is.
You can only drop a tempfile if it is not in use, but in our case the tempfile doesn't actually
exist, so it can't be in use.
Alter tablespace <TEMP_TS_NAME>
Drop tempfile '<FILE_PATH_AND_NAME>';
Alter tablespace <TEMP_TS_NAME>
add tempfile '<FILE_PATH_AND_NAME>' size <FILE_SIZE>;
For example:
SQL> alter tablespace temp
drop tempfile 'D:\oracle\oradata\orcl3\temp01.dbf';
Tablespace altered.
SQL> alter tablespace TEMP
add tempfile 'D:\oracle\oradata\orcl3\temp01.dbf' size 8192m;
Tablespace altered.
ORA-00055: Maximum Number of DML locks exceeded
The number of DML locks is set by the initialization parameter DML_LOCKS. If this value is
set too low (which it is by default) you will get this error. Increase the value of DML_LOCKS. If
you are sure that this is just a temporary problem, you can have the users wait and then try again
later, and the error should clear. It sometimes occurs at moments of peak database
usage.
Change it in the parameter file (initSID.ora), for example: dml_locks = 200
If you set the DML lock limit to 200, that means:
200 users could each be updating one table at a time;
20 users could each be updating 10 tables at a time;
or 1 user could be doing an account number rename using 100 tables while 10 other
users each update 10 tables.
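To see how close the instance is getting to the limit before raising it, you can query V$RESOURCE_LIMIT; a sketch:
SQL> SELECT resource_name, current_utilization, max_utilization, limit_value
     FROM v$resource_limit
     WHERE resource_name = 'dml_locks';
If max_utilization is regularly near limit_value, increasing DML_LOCKS is justified.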
ORA-00283: ORA-00314: ORA-00312:
While trying to restore controlfile from backup, while recovering got the following error
ORA-00283: recovery session canceled due to errors
ORA-00314: log 2 of thread 1, expected sequence# 2 doesn't match 11
ORA-00312: online log 2 thread 1: '/u01/app/oracle/oradata/jay/redo02.log'
Cause: In any case, the archivelogs must be backed up; otherwise a RESTORE alone cannot complete a
RECOVER. If your database backup did not include the archivelogs, then the backup you
created does not have the redo information that Oracle must apply to the database backup.
That is why you got the "unknown log". Also, if the controlfile backup was taken before the
archivelog backup, the controlfile, even when restored, is not aware of the archivelogs in
the backups created subsequent to it.
RMAN can still do a RECOVER, implicitly using the "BACKUP CONTROLFILE" and doing a
roll-forward, but it needs to restore the archivelogs first -- and the information about
which backupset contains the archivelogs is not available to it. You would need to
CATALOG the archivelog backupset and then restore the archivelogs from there.
(If you use an RMAN Recovery Catalog database, then of course, the Catalog has
information about the ArchiveLogs and the Backupsets containing the Archivelogs so RMAN
queries the Catalog to identify the Backupsets and extracts the necessary Archivelogs from
the Backupsets).
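The CATALOG step described above can be sketched as follows; the backup piece path and
the log sequence numbers are only illustrative:
RMAN> catalog backuppiece '/u01/backup/arch_bkset_01.bkp';
-- or catalog everything under a directory:
RMAN> catalog start with '/u01/backup/';
RMAN> restore archivelog from logseq 2 until logseq 11;
After that, the normal RECOVER can find and apply the restored archivelogs.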
Solution:
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 11
Next log sequence to archive 13
Current log sequence 13
SQL>alter database clear logfile 'D:\oracle\oradata\orcl3\redo02.log';
Database altered.
SQL>alter database open;
SQL>recover database until cancel;
Specify log: {<ret>=suggested | filename | AUTO | CANCEL}
Cancel
Media recovery cancelled.
SQL> alter database open resetlogs;
IMP-00037: Character set marker unknown
Cause: This usually means that the export file is corrupted.
Solution:
If you had previously compressed the dump file, make sure that you unzip it before
importing it.
Check the NLS settings on both sides and make sure that the client has the same
NLS settings as the production server (you can also set them just for the import session).
Check the export and import utility versions. The error can also be caused by an export
taken with one version being imported with a different version. IMP HELP=Y will display the utility
version along with other information, or use import with the SHOW=Y option for the same.
If the export file is not corrupted, report this as an import internal error and submit the
export file to customer support.
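To compare the character sets on both sides, the database side can be checked with the
standard NLS_DATABASE_PARAMETERS view:
SQL> select parameter, value from nls_database_parameters
where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
On the client side the session setting comes from the NLS_LANG environment variable, e.g.
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1 (the value shown is only an example; match it to
what the query above returns on the source database).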
OSD-04008: WriteFile() failure, unable to write to file (OS 33)
Generally this happens on a Windows server when another process or user, something other
than Oracle, accesses the same database file.
Errors in file D:\oracle\admin\orcl3\bdump\orcl3_ckpt_1144.trc:
ORA-00221: error on write to controlfile
ORA-00206: error in writing (block 3, # blocks 1) of controlfile
ORA-00202: controlfile: 'D:\oracle\oradata\orcl3\control01.ctl'
ORA-27072: skgfdisp: I/O error
OSD-04008: WriteFile() failure, unable to write to file
O/S-Error: (OS 33) The process cannot access the file because another
process has locked a portion of the file.
CKPT: terminating instance due to error 221
Errors in file D:\oracle\admin\orcl3\bdump\orcl3_pmon_1132.trc:
ORA-00221: error on write to controlfile
If you are experiencing this problem and it is happening at seemingly random times, check
for the presence of anti-virus software. If you have some installed, configure it not to
scan the database's data files.
If the problem occurs at roughly the same time every day, and that time happens to
fall during the host backup, then the likely culprit is the backup utility locking the
file. Check the scheduled backup job. For more detail about this issue check Metalink
note 130871.1 and doc_id 352819.999.

ORA-19809: limit exceeded for recovery files
The flash recovery area is full:
ORA-19815: WARNING: db_recovery_file_dest_size of 2147483648 bytes is
100.00% used, and has 0 remaining bytes available.
ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim 10150912 bytes disk space from 2147483648 limit
ARC0: Error 19809 Creating archive log file to
'D:\oracle\flash_recovery_area\orcl3\archivelog\2012_04_14\orcl3_135.arc'
ARC0: Failed to archive thread 1 sequence 444 (19809)
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance orcl3 - Archival Error
ORA-16038: log 2 sequence# 444 cannot be archived
ORA-19809: limit exceeded for recovery files
ORA-00312: online log 2 thread 1: 'D:\oradata\orcl3\redo02.log'
ORA-16038: log 2 sequence# 444 cannot be archived
ORA-19809: limit exceeded for recovery files
ORA-00312: online log 2 thread 1: ' D:\oradata\orcl3\redo02.log'
Thread 1 cannot allocate new log, sequence 446
ARC1: Archiving not possible: No primary destinations
ARC1: Failed to archive thread 1 sequence 444 (4)
ARCH: Archival stopped, error occurred. Will continue retrying
ORA-16014: log 2 sequence# 444 not archived, no available destinations
The following query will show the size of the recovery area and how full it is:
select floor(space_limit / 1024 / 1024) "Size MB",
ceil(space_used / 1024 / 1024) "Used MB"
from v$recovery_file_dest
order by name;
To fix the problem, you need to either make the flash recovery area larger or remove some
files from it. If you have the disk space available, make the recovery area larger; with
scope=both the change takes effect immediately, no instance restart is required:
Alter system set db_recovery_file_dest_size=<size> scope=both;
To remove files you must use RMAN. Manually moving or deleting files will have no effect as
oracle will be unaware.
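Before deciding what to remove, it helps to see which file types are filling the recovery
area; the standard V$FLASH_RECOVERY_AREA_USAGE view (10g and later) breaks it down:
SQL> select file_type, percent_space_used, percent_space_reclaimable, number_of_files
from v$flash_recovery_area_usage;
Space marked reclaimable belongs to files RMAN already considers obsolete, and Oracle can
free it automatically under space pressure.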
The obvious choice is to backup and remove some archive log files. However, if you usually
write your RMAN backups to disk, this could prove tricky. RMAN will attempt to write the
backup to the flash recovery area which is full. You could try sending the backup elsewhere
using a command such as this:
rman target sys/oracle@orcl3 catalog catalog/catalog@rman
run {
allocate channel t1 type disk;
backup archivelog all delete input format 'D:\temp_location\arch_%d_%u_%s';
release channel t1;
}
This will backup all archive log files to a location of your choice and then remove them.
For this purpose you can also consider changing the RMAN retention policy and the RMAN
archivelog deletion policy. For example, if your retention policy is 3 you can reduce it to 2;
similarly, you can limit the archivelog deletion policy to weekly instead of monthly in your
scheduled backup. For a more detailed solution, see the separate post for this error: ORA-19815
ORA-32021: parameter value longer than 255 characters
It is in fact possible to set parameter values larger than 255 characters. To do so you need
to split the parameter up into multiple smaller strings, like this:
Alter system set <parameter> = 'string1','string2' scope=both;
ORA-16654: Fast-Start Failover is enabled
I recently received this error after performing a 'flashback database' on a primary database
that was part of a data guarded pair. I needed to open the database with resetlogs, but
because dataguard was configured for fast-start failover, the broker wouldn't allow it.
Normally, I would simply stop the broker momentarily, but when I tried to, this happened:
SQL> alter system set dg_broker_start=false;
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-16654: Fast-Start Failover is enabled
The solution in this case was to disable fast-start failover using dgmgrl, stop the broker, open
the database resetlogs, and then re-enable fast-start failover afterwards:
oracle@bloo$ dgmgrl /
DGMGRL for Linux: Version 10.2.0.2.0 - Production
Copyright (c) 2000, 2005, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> disable fast_start failover
DGMGRL> stop observer
DGMGRL> exit
oracle@bloo$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.2.0 - Production on Sat Apr 21 10:37:59 2007
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> alter system set dg_broker_start=false;
System altered.
SQL> alter database open resetlogs;
Database opened.
SQL> alter system set dg_broker_start=true;
System altered.
SQL> exit
oracle@bloo$ dgmgrl /
Connected.
DGMGRL> enable fast_start failover
DGMGRL> start observer
ORA-00056 DDL lock on object 'string.string' is already held in an incompatible mode
Cause: The attempted lock is incompatible with the DDL lock already held on the object.
This happens if you attempt to drop a table that has parse locks.
Action: Before attempting to drop a table, check that it has no parse locks. Wait a few
minutes before retrying the operation.
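The check for parse locks can be done through the DBA_DDL_LOCKS view (the table name below
is a placeholder):
SQL> select session_id, owner, name, type, mode_held
from dba_ddl_locks where name = '<TABLE_NAME>';
Locks held by other sessions on the object typically come from open cursors that still
reference it; once those cursors are closed or aged out, the drop can proceed.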
ORA-00057 maximum number of temporary table locks exceeded
Cause: The number of temporary tables equals or exceeds the number of temporary table
locks. Temporary tables are often created by large sorts.
Action: Increase the value of the TEMPORARY_TABLE_LOCKS initialization parameter
and restart Oracle.
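Since TEMPORARY_TABLE_LOCKS is a static parameter, the change goes into the parameter file
and takes effect after a restart; the value below is only an example:
# initSID.ora
temporary_table_locks = 222
Then shutdown immediate and startup.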
ORA-00058 DB_BLOCK_SIZE must be string to mount this database (not string)
Cause: The value of the DB_BLOCK_SIZE initialization parameter used to start this
database does not match the value used when that database was created.
Potential reasons for this mismatch are:
mounting the wrong database
using the wrong initialization parameter file
the value of the DB_BLOCK_SIZE parameter was changed
Action: For one of the above causes, either:
mount the correct database
use the correct initialization parameter file
correct the value of the DB_BLOCK_SIZE parameter
ORA-00059 maximum number of DB_FILES exceeded
Cause: The number of database files referenced exceeded the value of the DB_FILES
initialization parameter.
Action: Increase the value of the DB_FILES parameter and restart Oracle, for example:
SQL> show user;
SYS
SQL>select * from v$version;
SQL>show parameter db_files
SQL>alter system set db_files = 256 scope = spfile;
SQL>shutdown immediate;
SQL>startup;
SQL>show parameter db_files;
ORA-00060 deadlock detected while waiting for resource
Cause: Your session and another session are waiting for a resource locked by the other.
This condition is known as a deadlock. To resolve the deadlock, one or more statements
were rolled back for the other session to continue work.
Action: Either:
Enter a ROLLBACK statement and re-execute all statements since the last commit or
Wait until the lock is released, possibly a few minutes, and then re-execute the rolled back
statements.
For more detailed solution description follow the separate post: ORA-00060 (DEADLOCKS)
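A minimal two-session sequence that reproduces the error (table and column names are
illustrative):
Session 1: update emp set sal = 100 where empno = 1;
Session 2: update emp set sal = 200 where empno = 2;
Session 1: update emp set sal = 100 where empno = 2;   -- waits on session 2
Session 2: update emp set sal = 200 where empno = 1;   -- closes the cycle
Within a few seconds Oracle detects the cycle, raises ORA-00060 in one of the sessions and
rolls back that statement; the other session continues once the first commits or rolls back.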
ORA-00104 deadlock detected; all public servers blocked waiting for resources
Cause: All available public servers are servicing requests that require resources locked by a
client which is unable to get a public server to release the resources.
Action: Increase the limit for the system parameter MAX_SHARED_SERVERS as the
system will automatically start new servers to break the deadlock until the number of servers
reaches the value specified in MAX_SHARED_SERVERS.
ORA-00063 maximum number of LOG_FILES exceeded
Cause: The value of the LOG_FILES initialization parameter was exceeded.
Action: Increase the value of the LOG_FILES initialization parameter and restart Oracle.
The value of the parameter needs to be as large as the highest number of log files that
currently exist rather than just the count of logs that exist.
ORA-00092 LARGE_POOL_SIZE must be greater than LARGE_POOL_MIN_ALLOC
Cause: The value of LARGE_POOL_SIZE is less than the value of
LARGE_POOL_MIN_ALLOC.
Action: Increase the value of LARGE_POOL_SIZE past the value of
LARGE_POOL_MIN_ALLOC.
Typically a size of 64MB is sufficient for most large pools, but if this is not enough you
will get errors like:
ORA-04031: unable to allocate 65704 bytes of shared memory ("large
pool","unknown object","large pool","PX msg pool")
ORA-12853: insufficient memory for PX buffers: current 65400K, max needed
1512000K
In that case you must size the large pool more accurately.
SELECT nvl(name, 'large_pool') name, round(SUM(bytes)/1024/1024,2) size_mb
FROM V$SGASTAT WHERE pool='large pool'
GROUP BY ROLLUP(name);
If you run out of free memory in the large pool, then increase it, for example:
Alter system set large_pool_size = 389539635 scope=both;
ORA-00116 SERVICE_NAMES name is too long
Cause: The service name specified in the SERVICE_NAMES initialization parameter is too
long.
Action: Use a shorter name for the SERVICE_NAMES value (less than or equal to 255
characters).
ORA-00132 syntax error or unresolved network name 'string'
Cause: Listener address has syntax error or cannot be resolved.
Action: If a network name is specified, check that it corresponds to an entry
in tnsnames.ora or another address repository as configured for your system. Make sure that
the entry is syntactically correct.
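For reference, a syntactically correct tnsnames.ora entry looks like this (the alias, host,
port and service name are placeholders):
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )
A missing parenthesis, or an alias that cannot be resolved in this file, is exactly the kind
of problem that raises ORA-00132.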
ORA-01652: unable to extend temp segment by string in tablespace string
For more detailed solution check the separate post: ORA-01652: unable to extend temp
segment by string in tablespace string
DIM-00014: Cannot open the Windows NT Service Control Manager. What can be the possible
cause for it?
While creating the Oracle service in a Windows environment using the oradim utility, you get
the above error.
Cause:
1. User Account Control is enabled.
2. The user that ran the command is not the owner of the Oracle software.
Solution:
1. Disable User Account Control:
Click Start, and then click Control Panel.
In Control Panel, click User Accounts.
In the User Accounts window, click User Accounts.
In the User Accounts tasks window, click Turn User Account Control on or off.
2. Run the command prompt logged in as the owner of the software.
3. Start -> Accessories -> right-click on Command Prompt and select "Run as
Administrator".
ORA-01940: cannot drop a user that is currently connected
Solution:
scott> drop user shahid1;
drop user shahid1
*
ERROR at line 1:
ORA-01940: cannot drop a user that is currently connected
scott> select sid, serial# from v$session where username = 'shahid1';
SID SERIAL#
----- --------
17 37
scott> alter system kill session '17,37';
System altered.
scott> drop user shahid1;
User dropped.
scott> select username from dba_users
where username = 'shahid1'
/

no rows selected
ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS'
While trying to import a schema in oracle database 11g this error occurs.
Solution:
Basically, a huge transaction may lead to this error, so it is advisable to break the
transaction into smaller units, for example by using bulk collect and update.
1. Index maintenance is performed while the impdp import is in progress, so disabling the
primary key constraint on the table reduces undo generation and lets the import
complete successfully.
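If breaking up the transaction is not an option, you can instead grow the undo tablespace
itself; the file paths and sizes below are placeholders:
SQL> alter database datafile 'D:\oracle\oradata\orcl\undotbs01.dbf' resize 4g;
or add a datafile:
SQL> alter tablespace undotbs add datafile 'D:\oracle\oradata\orcl\undotbs02.dbf'
size 2g autoextend on maxsize 8g;
Also review UNDO_RETENTION: a very high value keeps expired undo around longer and inflates
the space required.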
