These are notes about creating a database manually. The DBCA can also be used to do this much
more easily.
For a production setup, each of these areas is probably a separate mount point on different disks etc.
undo_management = auto
db_name = ora11gr2
db_block_size = 8192
# 11G (oracle will create subdir diag and all the required subdirs)
# This is a non-default location for the diag files. Normally they are created
# under $ORACLE_BASE, but for non production setups I like to keep all the files
# for a database instance under a single folder.
diagnostic_dest = /mnt/raid/dborafiles/ora11gr2
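Once the instance is up, you can check where Oracle actually placed the diagnostic files; a quick sketch using the 11g v$diag_info view:

```sql
-- ADR locations for this instance; the 'Diag Trace' row is where
-- trace files (e.g. from event 10046) are written
select name, value
from   v$diag_info
where  name in ('ADR Base', 'ADR Home', 'Diag Trace', 'Diag Alert');
```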
Connect to SQLPLUS
$ sqlplus /nolog
The database should be created with the utf8 character set. After creating it, run $ORACLE_HOME/sqlplus/admin/pupbld.sql (not doing this doesn't cause any harm, but a warning is displayed when logging into SQLPLUS if it is not run).
The database is now basically ready to use, but there are no users and no users tablespace. Note it is also NOT in archive log mode, so it is certainly not production ready, but it may be good enough for a non-backed-up dev instance.
Create a user:
SQL11G> create user sodonnel
identified by sodonnel
default tablespace users
temporary tablespace temp;
SQL11G> grant connect, create procedure, create table, alter session to sodonnel;
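A quick sanity check that the new account works; note the grants above do not include a tablespace quota, so as a sketch (the test table name is made up):

```sql
-- As a DBA, give the user room in its default tablespace
alter user sodonnel quota unlimited on users;

-- Then, connected as sodonnel ($ sqlplus sodonnel/sodonnel),
-- exercise the granted privileges
create table grant_check (id number);
insert into grant_check values (1);
drop table grant_check;
```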
At this point the database should also have an entry in /etc/oratab:
ora11gr2:/dboracle/product/11.2.0/dbhome_1:Y
Creating a tnsnames.ora file at this point would be a good idea too. It also goes into
$ORACLE_HOME/network/admin:
ora11gr2 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(Host = localhost)(Port = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = ora11gr2)
)
)
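With the listener running, the new entry can be checked; this is a sketch, assuming the listener is on the default port and the sodonnel account created earlier:

```shell
# Confirm the tnsnames.ora entry resolves and the listener responds
tnsping ora11gr2

# Then connect through the listener rather than locally
sqlplus sodonnel/sodonnel@ora11gr2
```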
undo_management = auto
db_name = CLEAN01
db_block_size = 8192
DB_CREATE_FILE_DEST = +DATADG
DB_RECOVERY_FILE_DEST = +FRADG
DB_RECOVERY_FILE_DEST_SIZE = 10G
The database character set is utf8.
Now create a file called initCLEAN01.ora in the ORACLE_HOME/dbs directory, and put the following contents in it:
spfile='+DATADG/CLEAN01/spfileCLEAN01.ora'
Remember to remove the SPFile from the dbs directory or it will not use the ASM spfile!
Now jump into RMAN to copy the relevant files into ASM:
rman target /
restore controlfile from '/oraworkspace/apex01/control01.ora';
backup as copy database format '+DATADG';
switch database to copy;
alter database open;
Querying dba_data_files (FILE_NAME, TABLESPACE_NAME) and v$log (GROUP#, STATUS, BYTES) confirms the files have moved; each of the three 52428800 byte redo log groups cycles between INACTIVE and CURRENT as normal. The v$logfile MEMBER column shows the online logs are now in ASM:

MEMBER
--------------------------------------------------
+DATADG/apex01/onlinelog/group_1.278.853143825
+DATADG/apex01/onlinelog/group_2.273.853143825
+DATADG/apex01/onlinelog/group_3.276.853144187
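For reference, output like the above can be produced with queries along these lines:

```sql
-- Datafiles: FILE_NAME should now point into the +DATADG disk group
select file_name, tablespace_name from dba_data_files;

-- Redo log groups: group number, status and size
select group#, status, bytes from v$log;

-- Redo log members: the ASM file names
select member from v$logfile;
```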
Now create a file called initAPEX01.ora in the ORACLE_HOME/dbs directory, and put the following
contents in it:
spfile='+DATADG/APEX01/spfileAPEX01.ora'
Restart the instance and confirm the spfile being used is in ASM:

SQL11G> show parameter spfile

NAME    TYPE     VALUE
------- -------- -------------------------------
spfile  string   +DATADG/apex01/spfileapex01.ora
I think you should be able to get Oracle to detect the spfile in ASM without a bootstrapped init.ora file in the dbs directory, but I couldn't get it working, so it may not be possible.
Tracing can show exactly where a query spends its time, for example reading indexes or waiting on locks.
Next, for a very large SQL statement, there is a chance the default size limit of the trace file will be
exceeded, and vital information about the query will be lost, so set the maximum size to unlimited:
SQL11G> ALTER SESSION SET max_dump_file_size=UNLIMITED;
Next, if you are on a busy system, it can be useful to add a unique identifier to your trace file to help
find it later:
SQL11G> ALTER SESSION SET tracefile_identifier='unique_identifier';
Finally, enable the trace itself; level 8 includes wait events in the trace file:
SQL11G> ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
At this point, tracing is enabled for your session, so any SQL statements that you run will be traced. Try running any SQL query:
SQL11G> select * from test1
order by object_id;
When running this query, you may notice no change: the query runs just as it always did, and there is no indication that tracing is enabled. However, be assured that the trace information is being recorded and written to a file; you just need to know where to look.
After tracing some relevant SQL, you should turn tracing off again, either by exiting SQLPLUS (or closing your session) or with the command:
SQL11G> ALTER SESSION SET EVENTS '10046 trace name context off';
Identifying the session you want may take some work, which is more than I want to talk about here.
Once you have the SPID you want to trace, run the following three commands:
oradebug setospid <SPID found above>
oradebug unlimit
oradebug event 10046 trace name context forever, level 12
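Two related oradebug commands are worth knowing at this point (run in the same session that issued the commands above): one reports the exact trace file the attached process writes to, and one turns the trace off when you are finished:

```sql
-- Print the full path of the attached process's trace file
oradebug tracefile_name

-- Switch the 10046 trace off again
oradebug event 10046 trace name context off
```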
To find where trace files are written, query the user_dump_dest parameter:
SQL11G> SELECT value FROM v$parameter WHERE name = 'user_dump_dest';
The value returned will be the location on the database server where trace files are written. If you get an error trying to access the v$parameter table, you probably don't have the required privileges, and will need to have a chat with your DBA.
If you look inside this directory, you should find a file that contains the unique identifier you specified
above, which will be the file you are after.
The trace file header records the platform details, for example:

System name: Linux
Release: 2.6.18-238.9.1.el5xen
Machine: x86_64

The trace entries themselves then look like this:
WAIT #8: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857574939930
WAIT #12: nam='SQL*Net message from client' ela= 3763 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857591956989
WAIT #12: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857591957040
FETCH #12:c=0,e=42,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857591957069
WAIT #12: nam='SQL*Net message from client' ela= 6744 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857591963836
WAIT #12: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857591963869
FETCH #12:c=0,e=35,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857591963894
WAIT #12: nam='SQL*Net message from client' ela= 8821 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857591972738
WAIT #12: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857591972775
FETCH #12:c=0,e=35,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857591972800
WAIT #12: nam='SQL*Net message from client' ela= 14300 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857591987123
WAIT #12: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857591987160
FETCH #12:c=0,e=36,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857591987186
WAIT #12: nam='SQL*Net message from client' ela= 7150 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857591994359
WAIT #12: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857591994396
FETCH #12:c=0,e=36,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857591994422
WAIT #12: nam='SQL*Net message from client' ela= 5636 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857592000081
WAIT #12: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857592000113
FETCH #12:c=0,e=38,p=0,cr=3,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857592000142
WAIT #12: nam='SQL*Net message from client' ela= 5647 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857592005812
WAIT #12: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857592005850
FETCH #12:c=0,e=35,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857592005875
WAIT #12: nam='SQL*Net message from client' ela= 5508 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857592011423
WAIT #12: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857592011456
FETCH #12:c=0,e=34,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857592011480
WAIT #12: nam='SQL*Net message from client' ela= 12100 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857592023603
WAIT #12: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857592023640
FETCH #12:c=0,e=39,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857592023669
WAIT #12: nam='SQL*Net message from client' ela= 6351 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857592030043
WAIT #12: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=13050 tim=1308857592030081
FETCH #12:c=0,e=35,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=1,plh=3586745170,tim=1308857592030106
As you can see, it contains a lot of information, but making any sense of that information looks difficult
to say the least.
TKProf
Luckily, Oracle provides a tool, bundled with the database software, that can take a trace file and produce a useful report with a single command.
From the UNIX command line enter the command (replacing the path names with the values relevant
for your system):
$ tkprof /path/to/trace/file.trc /path/to/output/file.prf
The tkprof program will read the trace file and produce a nicely formatted report in the output file. If you
have a look at the output, you will find the query you executed, along with some information about it,
eg:
select * from test1
order by object_id
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.04          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      513      0.00       0.01          0       1135          0        7679
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      515      0.00       0.06          0       1135          0        7679
Rows     Row Source Operation
-------  ---------------------------------------------------
   7679  TABLE ACCESS BY INDEX ROWID TEST1 (cr=1135 pr=0 pw=0 time=26157 us cost=117 size=660394 card=7679)
   7679   INDEX FULL SCAN TEST1_UK1 (cr=529 pr=0 pw=0 time=9213 us cost=18 size=0 card=7679)(object id 13050)
Elapsed times include waiting on following events:
  Event waited on                     Times Waited  Max. Wait  Total Waited
  ----------------------------------  ------------  ---------  ------------
  SQL*Net message to client                    513       0.00          0.00
  SQL*Net message from client                  513      12.41         16.03
********************************************************************************
Often there will be many more SQL statements in the tkprof output which you did not run in your
session. These are known as recursive SQL statements, and are statements which Oracle has to run
behind the scenes to answer your query. Often these are due to parsing and dynamic stats gathering,
which are not topics I want to explore here.
In the tkprof extract above, there are three main sections: the query stats, the row source operations (the execution plan annotated with actual row counts), and the wait statistics.
Query Stats
Useful stats on the query are listed first, and are very similar to those obtained through autotrace, with
the useful addition of CPU consumed running the query.
In the tkprof output, the DISK column indicates how many blocks were read from disk, and is
equivalent to PHYSICAL READS in the autotrace output.
The QUERY column is the number of logical I/O operations required to answer the query, which may
have come from the buffer cache or disk. This is equivalent to the CONSISTENT GETS stat in
autotrace.
The CURRENT column indicates the number of blocks read in current mode, which is usually required for DML such as updates.
Wait Statistics
The final section details the wait events which Oracle encountered when processing the query. In this
example, there isn't much of interest, only some time spent waiting on "SQL*Net message from client",
but on more complex queries all sorts of events will be logged here, such as time spent waiting on locks and reading from disk. This is generally the section to look at when attempting to troubleshoot a long-running query, as it will give an indication of what the query is spending its time doing.
At the bottom of the trace file, some useful summary information is reported, such as cumulative wait
events for all queries and the number of SQL queries included in the trace file.
Wrapping Up
The best way to learn about tracing and tkprof is to experiment with it on your development box. Try
setting the tracing level to 12 instead of 8 to capture bind variables (at the expense of a much bigger
trace file) and figure out the various command line options to tkprof that control sorting the SQL
statements.
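As a starting point for that experimentation, a sketch of a tkprof invocation using its standard sort and sys options (the trace file name here is hypothetical):

```shell
# Sort statements by elapsed execute then fetch time, and exclude
# recursive SQL run as SYS from the report
$ tkprof ora11gr2_ora_12345_unique_identifier.trc report.prf sort=exeela,fchela sys=no
```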
Once again for reference (and mainly easy cut and paste), the commands to enable and then disable tracing are:
ALTER SESSION SET timed_statistics=TRUE;
ALTER SESSION SET max_dump_file_size=UNLIMITED;
ALTER SESSION SET tracefile_identifier='unique_identifier';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
ALTER SESSION SET EVENTS '10046 trace name context off';
- All tables should have an Editioning View created against them that the application must access instead of the table.
- The application should have no privileges on the base tables.
- Despite the protection offered by Editioning Views, 'select *' and inserts without listing columns should be avoided.
- All normal triggers should be moved from the base table to the Editioning Views.
- Simple schema changes can only involve adding columns to existing tables or adding completely new tables.
- You can never drop or alter existing columns which are referenced by an Editioning View that is in use by any live part of the application.
- Indexes required for the upgrade should be created as invisible and altered to visible when the upgrade is complete.
- When PLSQL units need to be changed, they should be added to a new Edition.
- Ensure that only two Editions are ever in use on the database at any time.
Sessions can be switched to a new Edition without reconnecting using DBMS_SESSION.SET_EDITION_DEFERRED.
Useful Commands
Default Database Edition
SQL11G> SELECT property_value
FROM database_properties
WHERE property_name = 'DEFAULT_EDITION';
Add an Edition
SQL11G> create edition upgrade_v2;
Grant an Edition
SQL11G> grant use on edition upgrade_v2 to public;
Revoke an Edition
SQL11G> revoke use on edition upgrade_v2 from public;
Switch Edition
SQL11G> alter session set edition = new_edition_name;
Editioning Views
SQL11G> create or replace editioning view view_name
as
select col from base_table;
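Putting the commands above together, a minimal upgrade cycle might look like this sketch (all object and column names are hypothetical):

```sql
-- Create the new edition and allow sessions to use it
create edition upgrade_v2;
grant use on edition upgrade_v2 to public;

-- Switch this session into the new edition
alter session set edition = upgrade_v2;

-- Redefine the editioning view in the new edition to expose a newly
-- added column; sessions in the old edition still see the old version
create or replace editioning view emp_ev
as
select empno, ename, new_col from emp;

-- Confirm which edition this session is using
select sys_context('userenv', 'current_edition_name') from dual;
```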
Help! I have a query that was running just fine and now it has become slow - what
possible reason could there be? The query is: select * from sometable where ....
Well, obviously any number of things could be causing the query to slow down, but invariably the first
thing that any helpful person will ask for is the Query Execution Plan, also known as the Query Explain
Plan.
Oracle contains a very complex piece of software called the Query Optimizer that takes a SQL query, analyses it and then, using statistics on the tables, a set of rules and sometimes what seems like a bit of magic, figures out the most efficient way of accessing the data.
This analysing process is known as Parsing the query, and along with other things, it creates an
Execution Plan which is basically the set of steps Oracle must use to search the data and produce the
query results.
Explained.
So what happened here? Well, the 'Explain Plan For' command did in fact force the query to be analysed and generated an explain plan for it, even though you cannot see it. The actual execution plan went into a table called the PLAN_TABLE. Generally, you never need to access the PLAN_TABLE directly; Oracle has a utility that will get the results out for you, called DBMS_XPLAN:
SQL11G> select * from table(dbms_xplan.display());
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------
Plan hash value: 1774051367

---------------------------------------------------------------------------
| Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 |                  |      |     1 |     5 |     2   (0)| 00:00:01 |
|   1 |                  |      |     1 |     5 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------
The table of results shown above is the explain plan for the query and provides vital information about how the query will execute at runtime. So, generating the plan for any query is a two step process:
1. explain plan for <your query>;
2. select * from table(dbms_xplan.display());
Reading and understanding the plan is a job for another day, but at least now you know how to generate the plan and what its purpose is.
The directory structure for the instance: oradata, adump, dpdump, pfile.
3. Create a server parameter file (SPFILE) from this parameter file and STARTUP the instance in NOMOUNT mode:
CREATE SPFILE FROM PFILE='/home/oracle/oracle/product/10.2.0/init.ora';
STARTUP NOMOUNT
The instance is now started, with the SGA allocated and the background processes running.
5. Run the scripts necessary to build views, synonyms, and PL/SQL packages
CONNECT / AS SYSDBA
SQL>@$ORACLE_HOME/rdbms/admin/catalog.sql
SQL>@$ORACLE_HOME/rdbms/admin/catproc.sql
6. Shutdown the instance and startup the database. Your database is ready for
use!
Now go to the $ORACLE_HOME/dbs directory and copy the contents of init.ora into the file you want to create (here, initzahid.ora).
Now open initzahid.ora and make the necessary changes for your setup.
Make sure the directories referenced by those settings exist, and create any that are missing.
Now open the /etc/oratab file and add an entry with the database name and $ORACLE_HOME location.
Now set the environment for the zahid database; the simplest way is to run the oraenv command.
Once the password file has been created, you can log in to the database and check it.
Now you can start the listener services if you want; otherwise they can be configured later.
Now log in to the database and start it using the pfile you created.
Now you can create an spfile from the pfile if you want. The major difference between the two is that an spfile lets you make configuration changes online, whereas with a pfile you must restart the database for any global change.
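For example, with an spfile in place, ALTER SYSTEM can change parameters without editing any file; the parameters below are just illustrations, and SCOPE controls whether the change applies to memory, the spfile, or both:

```sql
-- Applied immediately and persisted across restarts
alter system set open_cursors = 500 scope = both;

-- processes is a static parameter, so it can only be written to the
-- spfile and takes effect at the next restart
alter system set processes = 300 scope = spfile;
```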
With all the configuration complete, you can now run the CREATE DATABASE command. I have created a script for this (zahid.sql), though the command can also be run directly at the SQL prompt.
Now log in to Oracle and execute the zahid.sql file to create the database.
Once the catalog scripts have been run, check that the database is mounted properly and has opened in read-write mode.
Now exit the database and check that all the background processes are running properly.
With all the processes running properly, the database has been created successfully.