
Manually creating a new database on 11gR2

These are notes about creating a database manually. The DBCA can also be used to do this much
more easily.

Create the directories


First, create the directories you need for the datafiles. On a non-production system, I like to keep all the
files for a database under a single directory so it is easy to delete the database later, e.g.:
mkdir -p /mnt/raid/dborafiles/ora11gr2/datafiles
mkdir -p /mnt/raid/dborafiles/ora11gr2/redo

For a production setup, each of these areas is probably a separate mount point on different disks etc.

Create a minimal init.ora


This file should go into $ORACLE_HOME/dbs and be called init<SID>.ora (initora11gr2.ora here):
control_files = (/mnt/raid/dborafiles/ora11gr2/datafiles/control01.ora,
/mnt/raid/dborafiles/ora11gr2/datafiles/control02.ora,
/mnt/raid/dborafiles/ora11gr2/datafiles/control03.ora)

undo_management = auto
db_name = ora11gr2
db_block_size = 8192

# 11G (oracle will create subdir diag and all the required subdirs)
# This is a non-default location for the diag files. Normally they are created
# under $ORACLE_BASE, but for non production setups I like to keep all the files
# for a database instance under a single folder.
diagnostic_dest = /mnt/raid/dborafiles/ora11gr2

Set the SID for your session


export ORACLE_SID=ora11gr2

Connect to SQLPLUS
$ sqlplus /nolog

SQL11g> connect / as sysdba

Create the SPFILE


SQL11g> create spfile from pfile='/dboracle/product/11.2.0/dbhome_1/dbs/initora11gr2.ora';

Startup the instance


SQL11g> startup nomount

Create the database


create database ora11gr2
logfile group 1 ('/mnt/raid/dborafiles/ora11gr2/redo/redo1.log') size 10M,
group 2 ('/mnt/raid/dborafiles/ora11gr2/redo/redo2.log') size 10M,
group 3 ('/mnt/raid/dborafiles/ora11gr2/redo/redo3.log') size 10M
character set utf8
national character set utf8
datafile '/mnt/raid/dborafiles/ora11gr2/datafiles/system.dbf'
size 50M
autoextend on
next 10M
extent management local
sysaux datafile '/mnt/raid/dborafiles/ora11gr2/datafiles/sysaux.dbf'
size 10M
autoextend on
next 10M
undo tablespace undo
datafile '/mnt/raid/dborafiles/ora11gr2/datafiles/undo.dbf'
size 10M
autoextend on
default temporary tablespace temp
tempfile '/mnt/raid/dborafiles/ora11gr2/datafiles/temp.dbf'
size 10M
autoextend on;

(TODO - unsure about setting maximum file sizes on these files)
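If you do want to cap growth, the autoextend clause accepts a maxsize limit; a minimal sketch (the
2048M cap is an arbitrary illustration, not a recommendation):

datafile '/mnt/raid/dborafiles/ora11gr2/datafiles/system.dbf'
size 50M
autoextend on
next 10M
maxsize 2048M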

Create the catalogue etc:


SQL11G> @$ORACLE_HOME/rdbms/admin/catalog.sql
SQL11G> @$ORACLE_HOME/rdbms/admin/catproc.sql

As SYSTEM (not SYS) run the following:


SQL11G> @$ORACLE_HOME/sqlplus/admin/pupbld.sql

(Not running this does no harm, but a warning is displayed when logging into SQLPLUS if it has not
been run.)
The database is now basically ready to use, but there are no users and no users tablespace. Note it is also
NOT in archive log mode, so it is certainly not production ready, but may be good enough for a
non-backed-up dev instance.

Create the users tablespace, local, auto allocate


SQL>CREATE TABLESPACE users DATAFILE '/mnt/raid/dborafiles/ora11gr2/datafiles/users_01.dbf'
SIZE 50M
autoextend on
maxsize 2048M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

Create a user:
SQL11G> create user sodonnel
identified by sodonnel
default tablespace users
temporary tablespace temp;

SQL11G> alter user sodonnel quota unlimited on users;

SQL11G> grant connect, create procedure, create table, alter session to sodonnel;

Ensure the database comes up at startup time


Add a line to /etc/oratab to tell Oracle about the instance. This is used by the dbstart command, which
will start all the databases specified in this file:

ora11gr2:/dboracle/product/11.2.0/dbhome_1:Y

To start all instances use dbstart and to stop use dbshut.


TODO - control script to autostart databases when the machine boots.
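As a starting point for that TODO, a minimal sketch of a boot script (this assumes the standard
dbstart/dbshut scripts shipped with Oracle; the path and the runlevel wiring are illustrative and would
need testing on your system):

#!/bin/sh
# /etc/init.d/oracle - start/stop the databases listed in /etc/oratab
ORA_HOME=/dboracle/product/11.2.0/dbhome_1
case "$1" in
  start) su - oracle -c "$ORA_HOME/bin/dbstart $ORA_HOME" ;;
  stop)  su - oracle -c "$ORA_HOME/bin/dbshut $ORA_HOME" ;;
esac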

Setup the listener


At this point, only people on the local machine can connect to the database, so the last step is to setup
the listener. All you need to do here is add a file called listener.ora in
$ORACLE_HOME/network/admin, and have it contain something like the following:
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
    )
  )

Creating a tnsnames.ora file at this point would be a good idea too. It also goes into
$ORACLE_HOME/network/admin:
ora11gr2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(Host = localhost)(Port = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = ora11gr2)
    )
  )
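With both files in place, start the listener and test a connection through it (this assumes the sodonnel
user created earlier and the tnsnames.ora entry above):

$ lsnrctl start
$ sqlplus sodonnel/sodonnel@ora11gr2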

Manually create an ASM database


Creating a database manually under ASM is actually a little easier than for a non-ASM database, mainly
because you don't need to worry about paths to datafiles etc.

Create a minimal init.ora:


control_files = (+DATADG, +FRADG)

undo_management = auto
db_name = CLEAN01
db_block_size = 8192

DB_CREATE_FILE_DEST = +DATADG
DB_RECOVERY_FILE_DEST = +FRADG
DB_RECOVERY_FILE_DEST_SIZE = 10G

Set the Oracle SID:


export ORACLE_SID=CLEAN01

Create an spfile from the pfile and start the instance


sqlplus / as sysdba
create spfile from pfile;
startup nomount

Create the database with a minimal create database statement


create database clean01
logfile group 1 ('+DATADG') size 100M,
group 2 ('+DATADG') size 100M,
group 3 ('+DATADG') size 100M
character set utf8
national character set utf8;

Create the catalog etc:


@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql

As System (not SYS) run the following:


@?/sqlplus/admin/pupbld.sql

Create the temporary tablespace


create temporary tablespace temp tempfile '+DATADG' size 500M;
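The minimal create database statement above did not name a default temporary tablespace, so new
users will default to SYSTEM for sorts until this one is made the default:

alter database default temporary tablespace temp;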

Create the Users Tablespace


CREATE TABLESPACE users DATAFILE '+DATADG'
SIZE 50M
autoextend on
maxsize 10G
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

Move the SPFile into ASM


create pfile from spfile;
create spfile='+DATADG/CLEAN01/spfileCLEAN01.ora' from pfile;
shutdown immediate;

Now create a file called initCLEAN01.ora in the ORACLE_HOME/dbs directory, and put the following
contents in it:
spfile='+DATADG/CLEAN01/spfileCLEAN01.ora'

Remember to remove the SPFile from the dbs directory or it will not use the ASM spfile!
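For example (a simple housekeeping step; keeping the old file as a backup rather than deleting it is my
own preference):

$ mv $ORACLE_HOME/dbs/spfileCLEAN01.ora $ORACLE_HOME/dbs/spfileCLEAN01.ora.bak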

Add the Database to /etc/oratab


CLEAN01:/u01/app/oracle/product/11.2.0.4/dbhome_1:Y

Move a non asm database into asm


With the non-ASM database up and running, change the control_files and db_create_file_dest
parameters to point at ASM instead of their current locations:
alter system set control_files='+DATADG', '+DATADG', '+DATADG' scope=spfile;
alter system set db_create_file_dest='+DATADG';

Now shutdown the database and bring it back up in nomount:


shutdown immediate;
startup nomount;

Now jump into RMAN to copy the relevant files into ASM. Note the database must be mounted (using
the freshly restored controlfile) before the datafiles can be backed up as copies:
rman target /
restore controlfile from '/oraworkspace/apex01/control01.ora';
alter database mount;
backup as copy database format '+DATADG';
switch database to copy;
alter database open;
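To confirm the datafiles really have moved, a quick check from SQLPLUS (any rows still showing
file system paths would need investigating):

select name from v$datafile;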

We are nearly done, but there are three more things to do:

Fix the temporary tablespace


select file_name, tablespace_name
from dba_temp_files;

FILE_NAME                      TABLESPACE_NAME
------------------------------ ------------------------------
/oraworkspace/apex01/temp.dbf  TEMP

alter tablespace temp add tempfile size 500M;


alter database tempfile '/oraworkspace/apex01/temp.dbf' drop including datafiles;

Fix the redo logs


The redo logs will all still be outside ASM, so they need to be dropped and recreated inside ASM:
SQL> select group#, status, bytes from v$log;

    GROUP# STATUS               BYTES
---------- ---------------- ----------
         1 INACTIVE           52428800
         2 INACTIVE           52428800
         3 CURRENT            52428800

Groups 1 and 2 are inactive, so they can be dropped and recreated:


alter database drop logfile group 1;
alter database add logfile group 1 size 52428800;
alter database drop logfile group 2;
alter database add logfile group 2 size 52428800;

Switch the logs and recreate the final group (after the switch, group 3 may show as ACTIVE for a while;
if the drop fails, run alter system checkpoint and retry once it shows INACTIVE):


alter system switch logfile;
alter database drop logfile group 3;
alter database add logfile group 3 size 52428800;

Check the logs are all now in ASM:


select member from v$logfile;

MEMBER
--------------------------------------------------
+DATADG/apex01/onlinelog/group_1.278.853143825
+DATADG/apex01/onlinelog/group_2.273.853143825
+DATADG/apex01/onlinelog/group_3.276.853144187

Get the SPFile into ASM


Finally, copy the SPFile into ASM. To do this create a pfile and then create the spfile in ASM:
create pfile from spfile;
create spfile='+DATADG/APEX01/spfileAPEX01.ora' from pfile;
shutdown immediate;

Now create a file called initAPEX01.ora in the ORACLE_HOME/dbs directory, and put the following
contents in it:
spfile='+DATADG/APEX01/spfileAPEX01.ora'

Restart the instance and confirm the spfile being used is in ASM:

SQL> show parameter spfile;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +DATADG/apex01/spfileapex01.or
                                                 a

I think you should be able to get Oracle to detect the spfile in ASM without a bootstrapped init.ora file in
the dbs directory, but I couldn't get it working, so it may not be possible.

SQL Trace and TKPROF


SQL Trace is an incredibly powerful tool, and allows you to get all sorts of information about a single
query, or set of queries. When it is enabled, it is like turning on SQL debug mode. In fact, over many
years of development, the engineers at Oracle have carefully instrumented large parts of the Oracle
code base with debug statements, and when SQL Trace is enabled, this debug information is made
available to you via a trace file.
Each time Oracle does anything related to your query, it is logged in this trace file. Pretty much
everything is logged there, such as:

Reading data from disk

Reading indexes

Waiting to send data to the client

Waiting on locks

And much more

How do you turn it on?


First you need to ensure that Timed Statistics are enabled in your session, or the trace file will be
missing vital information:
SQL11G> ALTER SESSION SET timed_statistics=TRUE;

Next, for a very large SQL statement, there is a chance the default size limit of the trace file will be
exceeded, and vital information about the query will be lost, so set the maximum size to unlimited:
SQL11G> ALTER SESSION SET max_dump_file_size=UNLIMITED;

Next, if you are on a busy system, it can be useful to add a unique identifier to your trace file to help
find it later:
SQL11G> ALTER SESSION SET tracefile_identifier='unique_identifier';

Finally, turn on SQL Trace (level 8 includes wait events in the trace; level 4 captures bind variables
instead, and level 12 captures both):


SQL11G> ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

At this point, tracing is enabled for your session, so any SQL statements that you run will be traced. Try
running any SQL query:
SQL11G> select * from test1
order by object_id;

When running this query, you may notice that there is no change; the query runs just the same as it
always did, and there is no indication that tracing is enabled. However, be assured that the trace
information is being recorded and written to a file - you just need to know where to look.
After tracing some relevant SQL, you should turn tracing off again, either by exiting SQLPLUS (or
otherwise closing your session) or with the command:
SQL11G> ALTER SESSION SET EVENTS '10046 trace name context forever, level 1';

Turning Trace on in another session


Sometimes you want to trace a session that is not your own, or one where you cannot easily add the
code above to enable trace. If you have sysdba access to the database, the easiest way to enable trace in
another session is to use the oradebug command. First you need to identify the SPID of the process
you want to trace:
select p.spid, s.sid, s.serial#, s.username, s.status, p.program
from v$process p, v$session s
where s.paddr = p.addr;

Identifying the session you want may take some work, which is more than I want to talk about here.
Once you have the SPID you want to trace, run the following three commands:
oradebug setospid <SPID found above>
oradebug unlimit
oradebug event 10046 trace name context forever, level 12

To turn tracing off again, simply run:


oradebug event 10046 trace name context forever, level 1
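While attached with oradebug, you can also ask for the full path of the trace file being written, which
saves hunting for it later:

oradebug tracefile_name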

Finding Your Trace File


OK, so we have traced the SQL statement above, but where is the trace file? Well, it goes
into user_dump_dest of course. Obvious, right? Well, not really.
Upon startup, there are literally tons of parameters that control all sorts of settings about how Oracle
works. For a developer, most of these settings are not important, but one such parameter is known
as user_dump_dest, and it is the location trace files are written to. Finding this location is pretty easy (if
you have the correct privileges on the database):
SQL11G> select name, value
from v$parameter
where name = 'user_dump_dest';

The value returned will be the location on the database server where trace files are written. If you get
an error trying to access the v$parameter table, you probably don't have the required privileges, and
will need to have a chat with your DBA.
If you look inside this directory, you should find a file that contains the unique identifier you specified
above, which will be the file you are after.
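On 11g, you can also ask the database directly for your own session's trace file via v$diag_info, which
avoids searching the directory at all:

SQL11G> select value
from v$diag_info
where name = 'Default Trace File';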

Making Sense of the trace file


Believe it or not, the contents of the trace file actually do make some sense, and it is possible to decipher
them. I have written Perl programs that extract information from them, but luckily that is rarely
necessary. Below is an extract from the trace file generated earlier:
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /dboracle/product/11.2.0/dbhome_1
System name:    Linux
Node name:      home.appsintheopen.com
Release:        2.6.18-238.9.1.el5xen
Version:        #1 SMP Tue Apr 12 18:53:56 EDT 2011
Machine:        x86_64
Instance name: ora11gr2
Redo thread mounted by this instance: 1
Oracle process number: 25
Unix process pid: 30727, image: oracle@home.appsintheopen.com

*** 2011-06-23 20:32:54.940


*** SESSION ID:(50.2) 2011-06-23 20:32:54.940
*** CLIENT ID:() 2011-06-23 20:32:54.940
*** SERVICE NAME:(ora11gr2) 2011-06-23 20:32:54.940
*** MODULE NAME:(SQL*Plus) 2011-06-23 20:32:54.940
*** ACTION NAME:() 2011-06-23 20:32:54.940

WAIT #8: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857574939930

*** 2011-06-23 20:33:11.904


WAIT #8: nam='SQL*Net message from client' ela= 16964278 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857591904507
CLOSE #8:c=0,e=19,dep=0,type=1,tim=1308857591904604

*** 2011-06-23 20:33:11.947


=====================
PARSING IN CURSOR #12 len=51 dep=0 uid=32 oct=3 lid=32 tim=1308857591947353
hv=130287142 ad='65e0ca48' sqlid='dysy9ps3w81j6'
select * from test1
order by object_id
END OF STMT
PARSE
#12:c=0,e=42705,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=3586745170,tim=1308857591947350
EXEC
#12:c=0,e=5473,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3586745170,tim=1308857591952914
WAIT #12: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857591952949
FETCH
#12:c=0,e=119,p=0,cr=3,cu=0,mis=0,r=1,dep=0,og=1,plh=3586745170,tim=1308857591953183

WAIT #12: nam='SQL*Net message from client' ela= 3763 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857591956989
WAIT #12: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857591957040
FETCH
#12:c=0,e=42,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857591957069
WAIT #12: nam='SQL*Net message from client' ela= 6744 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857591963836
WAIT #12: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857591963869
FETCH
#12:c=0,e=35,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857591963894
WAIT #12: nam='SQL*Net message from client' ela= 8821 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857591972738
WAIT #12: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857591972775
FETCH
#12:c=0,e=35,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857591972800
WAIT #12: nam='SQL*Net message from client' ela= 14300 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857591987123
WAIT #12: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857591987160
FETCH
#12:c=0,e=36,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857591987186
WAIT #12: nam='SQL*Net message from client' ela= 7150 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857591994359
WAIT #12: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857591994396
FETCH
#12:c=0,e=36,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857591994422
WAIT #12: nam='SQL*Net message from client' ela= 5636 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857592000081
WAIT #12: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857592000113
FETCH
#12:c=0,e=38,p=0,cr=3,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857592000142

WAIT #12: nam='SQL*Net message from client' ela= 5647 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857592005812
WAIT #12: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857592005850
FETCH
#12:c=0,e=35,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857592005875
WAIT #12: nam='SQL*Net message from client' ela= 5508 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857592011423
WAIT #12: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857592011456
FETCH
#12:c=0,e=34,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857592011480
WAIT #12: nam='SQL*Net message from client' ela= 12100 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857592023603
WAIT #12: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857592023640
FETCH
#12:c=0,e=39,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857592023669
WAIT #12: nam='SQL*Net message from client' ela= 6351 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857592030043
WAIT #12: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0
obj#=13050 tim=1308857592030081
FETCH
#12:c=0,e=35,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857592030106

As you can see, it contains a lot of information, but making any sense of that information looks difficult
to say the least.

TKProf
Luckily, Oracle provides a tool, bundled with the database, that can take a trace file and produce a
useful report with a single command.
From the UNIX command line, enter the following command (replacing the path names with the values
relevant for your system):
$ tkprof /path/to/trace/file.trc /path/to/output/file.prf
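tkprof also accepts options to sort the statements in the report and to exclude recursive SYS
statements, for example (sorting by elapsed fetch and execute time):

$ tkprof /path/to/trace/file.trc /path/to/output/file.prf sort=fchela,exeela sys=no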

The tkprof program will read the trace file and produce a nicely formatted report in the output file. If you
have a look at the output, you will find the query you executed, along with some information about it,
e.g.:
select * from test1
order by object_id

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.04          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      513      0.00       0.01          0       1135          0        7679
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      515      0.00       0.06          0       1135          0        7679

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 32

Rows     Row Source Operation
-------  ---------------------------------------------------
   7679  TABLE ACCESS BY INDEX ROWID TEST1 (cr=1135 pr=0 pw=0 time=26157 us cost=117 size=660394 card=7679)
   7679   INDEX FULL SCAN TEST1_UK1 (cr=529 pr=0 pw=0 time=9213 us cost=18 size=0 card=7679)(object id 13050)

Elapsed times include waiting on following events:

  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                     513        0.00          0.00
  SQL*Net message from client                   513       12.41         16.03

********************************************************************************

Often there will be many more SQL statements in the tkprof output which you did not run in your
session. These are known as recursive SQL statements, and are statements which Oracle has to run
behind the scenes to answer your query. Often these are due to parsing and dynamic stats gathering,
which are not topics I want to explore here.
In the tkprof extract above, there are three main sections.

Query Stats
Useful stats on the query are listed first, and are very similar to those obtained through autotrace, with
the useful addition of CPU consumed running the query.
In the tkprof output, the DISK column indicates how many blocks were read from disk, and is
equivalent to PHYSICAL READS in the autotrace output.
The QUERY column is the number of logical I/O operations required to answer the query, which may
have come from the buffer cache or disk. This is equivalent to the CONSISTENT GETS stat in
autotrace.
The CURRENT column indicates the number of blocks gotten in current mode, which are usually
required for updates.

Row Source Operations


The next section looks remarkably like an Explain Plan, and it basically is. The difference is that an
Explain Plan is an educated guess of how Oracle will process a query, while the row source output is
what actually happened. Almost always the two plans will be the same. The notable difference is that
the row source operation contains the actual number of rows obtained at each stage of the query
processing (as the "card" statistic) and the time spent processing each section, which can be useful.

Wait Statistics
The final section details the wait events which Oracle encountered when processing the query. In this
example, there isn't much of interest, only some time spent waiting on "SQL*Net message from client",
but on more complex queries all sorts of events will be logged here, such as time spent waiting on
locks and reading from disk. This is generally the section to look at when attempting to troubleshoot a
long running query, as it will give an indication of what it is spending time doing.
At the bottom of the trace file, some useful summary information is reported, such as cumulative wait
events for all queries and the number of SQL queries included in the trace file.

Wrapping Up
The best way to learn about tracing and tkprof is to experiment with them on your development box. Try
setting the tracing level to 12 instead of 8 to capture bind variables (at the expense of a much bigger
trace file), and figure out the various command line options to tkprof that control how the SQL
statements are sorted.
Once again for reference (and mainly easy cut and paste) the commands to enable tracing are:
ALTER SESSION SET timed_statistics=TRUE;
ALTER SESSION SET max_dump_file_size=UNLIMITED;
ALTER SESSION SET tracefile_identifier='unique_identifier';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

<SQL STATEMENT HERE>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 1';

Edition Based Redefinition - Cheat Sheet


Rules of Thumb

Keep it as simple as possible

Limit the time both applications are running as much as possible

All tables should have an Editioning View created against them that the application must access
instead of the table
The application should have no privileges on the base tables
Despite the protection offered by Editioning Views, 'select *' and inserts without listing columns
should be avoided.
All normal triggers should be moved from the base table to the Editioning Views.

Simple schema changes can only involve adding columns to existing tables or adding
completely new tables.

You can never drop or alter existing columns which are referenced by an Editioning View that is
in use by any live part of the application.

Indexes required for the upgrade should be created as invisible and altered to visible when the
upgrade is complete.
When PLSQL units need to be changed, they should be added to a new Edition.

Complex schema changes are possible using Cross Edition Triggers:

Try to avoid chains of triggers, which can get confusing fast

Ensure that only two Editions are ever in use on the database at any time.

Take care with normal and cross edition DML

Test, test and test some more away from production

Other things to investigate

DBMS_PARALLEL_EXECUTE to avoid locking the entire table during a transform.

The IGNORE_ROW_ON_DUPKEY_INDEX hint

The APPLYING_CROSSEDITION_TRIGGER function.

DBMS_SESSION.SET_EDITION_DEFERRED

Oracle Advanced Developers Guide - Chapter 19.

Useful Commands
Default Database Edition
SQL11G> SELECT property_value
FROM database_properties
WHERE property_name = 'DEFAULT_EDITION';

Add an Edition
SQL11G> create edition upgrade_v2;

Grant an Edition
SQL11G> grant use on edition upgrade_v2 to public;

Revoke an Edition
SQL11G> revoke use on edition upgrade_v2 from public;

Switch default Edition (grants use to public automatically)


SQL11G> alter database default edition = upgrade_v2;

Current Session Edition


SQL11G> select sys_context('Userenv', 'Current_Edition_Name')
from dual;

Switch Edition
SQL11G> alter session set edition = new_edition_name;

What sessions are using an Edition


TODO (doesn't seem to be possible?)

Editioning Views
SQL11G> create or replace editioning view table_name_ev
as
select col from table_name;

Cross Edition Trigger Syntax


SQL11G> create or replace trigger user_comments_fwd_xed_trg
before insert or update or delete on user_comments
for each row
forward crossedition
disable
begin
if inserting or updating then
:new.comment_txt_v2 := :new.comment_txt;
end if;

end;
/
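Note the trigger above is created disabled; once the new edition's columns and code are in place,
enable it with:

SQL11G> alter trigger user_comments_fwd_xed_trg enable;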

Getting a query Explain Plan


As an Oracle newbie, or as an experienced developer, you know the scenario - someone comes along
to your favorite Oracle related forum and posts something like:

Help! I have a query that was running just fine and now it has become slow - what
possible reason could there be? The query is: select * from sometable where ....
Well, obviously any number of things could be causing the query to slow down, but invariably the first
thing that any helpful person will ask for is the Query Execution Plan, also known as the Query Explain
Plan.

What is an Execution Plan?


One of the nice things about SQL is that you can write a query of almost limitless complexity, possibly
involving many tables, columns and where clauses, and the database just produces a result for you. If
you think about it, there are many different ways Oracle can get that result, depending on the tables
involved; some examples are:

Full table scans and hash joins

Index range scans and nested loop joins

Index fast full scans

The order the tables are joined

Oracle contains a very complex piece of software called the Query Optimizer that takes a SQL query,
analyzes it and then, using statistics on the tables, a set of rules and sometimes what seems like a bit
of magic, figures out the most efficient way of accessing the data.
This analysing process is known as Parsing the query, and along with other things, it creates an
Execution Plan, which is basically the set of steps Oracle must use to search the data and produce the
query results.

Obtaining an Execution Plan


Assuming you have a query, getting the execution plan for it is easy. Log into SQLPLUS as usual and
then use the 'explain plan for' command to generate the explain plan. Using the table created in
the AutoTrace section:

SQL11G> explain plan for
select object_id from test1 where rownum = 1;

Explained.

So what happened here? Well, the 'explain plan for' command did in fact cause the query to be
analysed and generated an explain plan for it, even though you cannot see it. The actual execution
plan went into a table called the PLAN_TABLE. Generally, you never need to access the PLAN_TABLE
directly; Oracle has a utility that will get the results out for you, called DBMS_XPLAN:
SQL11G> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 1774051367

------------------------------------------------------------------------------------
| Id  | Operation              | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |           |     1 |     5 |     2   (0)| 00:00:01 |
|*  1 |  COUNT STOPKEY         |           |       |       |            |          |
|   2 |   INDEX FAST FULL SCAN | TEST1_UK1 |     1 |     5 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------

The table of results shown above is the explain plan for the query and provides vital information about
how the query will execute at runtime. So, generating the plan for any query is a two step process:

1. Explain the query using 'explain plan for'
2. Obtain the results of the last explain plan for command using the query: select * from
table(dbms_xplan.display());
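As an aside, once a query has actually been run, you can pull the plan Oracle really used from the
cursor cache instead of explaining it again (the NULL arguments mean "the last statement executed in
this session"; run with serveroutput off, or the last statement will be the dbms_output call):

SQL11G> select * from table(dbms_xplan.display_cursor(null, null));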

Reading The Explain Plan


Reading some Explain Plans is easier than others. For example, the one above only involves a single
index, and it's pretty clear what is going on; however, if the query involves many tables, it can get
complex pretty quickly. As with the other tools described in this section, learning how to read an explain
plan is a job for another day, but at least now you know how to generate the plan and what its purpose
is.

CREATE DATABASE 11G MANUALLY ON LINUX


1. First, export the environment variables. To set them automatically for every
session, add the lines below to the /home/oracle/.bashrc file:
export ORACLE_SID=TEST
export ORACLE_HOME=/home/oracle/oracle/product/10.2.0/db_1
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib

2. Create parameter file and modify it by setting minimum required parameters:


*.db_name=orcl
*.db_block_size=8192
*.sga_target=1677721600
*.undo_management=AUTO
*.control_files = (/home/oracle/oracle/product/10.2.0/control01.ctl)
*.audit_file_dest=/home/oracle/oracle/product/10.2.0/adump
After creating this parameter file, create the folders below
in the /home/oracle/oracle/product/10.2.0/ directory. Some of them are dump folders
(needed for trace files and the alert.log file). We're going to keep the control files
and datafiles in the oradata folder.

- oradata
- adump
- dpdump
- pfile

3. Create Server parameter file (SPFILE) using this parameter file and STARTUP
the instance in NOMOUNT mode.
CREATE SPFILE FROM PFILE='/home/oracle/oracle/product/10.2.0/init.ora';
STARTUP NOMOUNT

Now our instance is started, the SGA is allocated and the background processes are running.

4. To create a new database, use the CREATE DATABASE statement

CREATE DATABASE test
USER SYS IDENTIFIED BY test
USER SYSTEM IDENTIFIED BY test
LOGFILE GROUP 1 ('/home/oracle/oracle/product/10.2.0/oradata/redo01.log') SIZE 50M,
        GROUP 2 ('/home/oracle/oracle/product/10.2.0/oradata/redo02.log') SIZE 50M,
        GROUP 3 ('/home/oracle/oracle/product/10.2.0/oradata/redo03.log') SIZE 50M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 1
MAXDATAFILES 100
MAXINSTANCES 1
CHARACTER SET us7ascii
NATIONAL CHARACTER SET al16utf16
DATAFILE '/home/oracle/oracle/product/10.2.0/oradata/system01.dbf' SIZE 325M REUSE
   EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE '/home/oracle/oracle/product/10.2.0/oradata/sysaux.dbf' SIZE 400M REUSE
DEFAULT TABLESPACE tbs_1 DATAFILE
   '/home/oracle/oracle/product/10.2.0/oradata/users.dbf' SIZE 200M REUSE
   AUTOEXTEND ON MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE tempts1 TEMPFILE
   '/home/oracle/oracle/product/10.2.0/oradata/temp_tbs.dbf' SIZE 20M REUSE
UNDO TABLESPACE undotbs DATAFILE
   '/home/oracle/oracle/product/10.2.0/oradata/undo01.dbf' SIZE 200M REUSE
   AUTOEXTEND ON MAXSIZE UNLIMITED;

5. Run the scripts necessary to build views, synonyms, and PL/SQL packages
CONNECT / AS SYSDBA
SQL>@$ORACLE_HOME/rdbms/admin/catalog.sql
SQL>@$ORACLE_HOME/rdbms/admin/catproc.sql
6. Shut down the instance and start the database back up. Your database is ready for
use!

Create Database Manually In Oracle 11G On Linux


Here we will explore how to create a database manually in Oracle 11G. Although there is a utility called
DBCA with which you can create a database with ease, I recommend creating it manually; doing so, we
will know where we have kept all the configuration files etc.
Below are some highlights of the steps involved in creating a database manually:
1. Create a pfile and password file for the new database
2. Create the necessary directories
3. Create the instance and start the database in nomount mode
4. Use create database to create the new database
5. Run the necessary scripts to create the data dictionary tables
6. Test the newly created database and register it with the listener
First, log in as the ORACLE OS user.

Now go to the $ORACLE_HOME/dbs directory and copy the contents of init.ora into the file name you
want to create (initzahid.ora here).

Now open initzahid.ora and make the necessary changes for your setup.

Now, as per the changes you have made, make sure you have created the necessary folders given in
the paths; if not, create them.

Now go to each path and check that the necessary folders have been created.

Now open the /etc/oratab file and add an entry with the database name and $ORACLE_HOME location.
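The entry takes the form SID:ORACLE_HOME:startup-flag, for example (the home path here is
illustrative and should match your installation):

zahid:/home/oracle/oracle/product/11.2.0/db_1:N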

Now set the environment for the zahid database. You can do this simply by executing the oraenv
command.

Now create a password file for the SYS user.
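A password file is created with the orapwd utility, for example (the file name follows the orapw<SID>
convention; the password shown is a placeholder):

$ orapwd file=$ORACLE_HOME/dbs/orapwzahid password=oracle entries=5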

Once the password file is created, you can log in to the database and check.

Now you can start the listener services if you want; otherwise this can be configured later.

Now log in and start the database using the pfile you have created.

Now you can create an spfile from the pfile if you want. The major advantage of an spfile over a pfile is
that you can make online changes to the configuration, whereas with a pfile you have to restart the
database for any global changes.

Now check that the spfile has been created.

Now all the configuration is complete and you can execute the create database command. I have
created a script to create the database; the command can also be executed directly at the SQL prompt.
Now log in to Oracle and execute the zahid.sql file to create the database.

Now the database has been created.


Now run the various catalog scripts against the database.

Once these catalog scripts have run, you can check that the database is mounted properly and opened
in read-write mode.

Now you can restart the database services once and test.

Now exit the database and check that all the processes are running properly.

Since all the processes are running properly, the database has been created successfully.
