Xmanager is used to work with GUI applications of a Linux server from a client node.
Steps to work with Xmanager:
1. On the client desktop [or from the menu] find Xmanager 2.0 →
Xmanager Passive. Double click on it.
2. In the Telnet or PuTTY window, log in to linux and export or set the DISPLAY
variable. Ex: export DISPLAY=192.168.0.11:0.0 [192.168.0.11 is the IP address of the
client machine]
3. Test whether the X window opens by running the 'xclock' command in the telnet/PuTTY window.
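The DISPLAY export in step 2 can be sketched as below (the IP address is the example from the step; xclock is only launched if it is installed):

```shell
# Point the server's X output at the client's display
# (192.168.0.11 is the example client IP from the steps above)
export DISPLAY=192.168.0.11:0.0

# Verify X forwarding: a small clock should appear on the client
command -v xclock >/dev/null && xclock &
```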
For installing oracle, we have to specify the oracle home dir, which can be created using the mkdir
command.
DBA NOTES
[root@linux ~] cd /oraDB/kittu/ohome/root.sh
script: this command will capture all the activities done in the terminal by the user.
Syn:- script filename
Ex:- script abc
All the activities done by the user after running script will be copied into the file called abc.
In a real-time environment, we need to send all the activities performed by us to the client.
What we need to send must be specified by the client in documentation; this is also called
a ticket. These files are called log files.
For sending this report, we need to make a clear record of the activities in a file on the local PC.
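The script session above can be sketched non-interactively; the -c flag (GNU/util-linux script, an assumption about the platform) runs one command and exits, where a plain `script abc` records interactively until `exit`:

```shell
# Capture the output of a single command into the log file "abc"
script -c 'echo "activity captured"' abc
```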
Syn:- vncserver
Then the vnc server is started on the server. When we start the vnc server it asks for a password
the first time; enter whatever password you like. After entering the password it creates a
hidden directory called .vnc under the home directory of the user. This
directory consists of the password file, startup file, log and pid (process id) files.
They are
passwd
xstartup
linux6:1.pid
linux6:1.log
i.e. hostname:display — linux6(host):1(display).pid
On the server, every vnc connection is created with a display number; it starts from '1' and the next
connection is '2'.
We identify this number from the line printed when vnc is started:
New 'linux6:1 (kittu)' desktop is linux6:1
To kill the process:-
For this we must have the process id of the vnc server. This pid is stored in the linux6:1.pid file.
more linux6:1.pid
ps -ef | grep vnc
kill -9 6470
Then the vnc process is stopped or killed.
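The kill-by-pid-file pattern above can be sketched with a background sleep standing in for the vnc server (the pid file path here is hypothetical; vncserver writes its pid to ~/.vnc/linux6:1.pid):

```shell
# Stand-in daemon: record its PID the way vncserver writes linux6:1.pid
sleep 300 &
echo $! > /tmp/demo:1.pid

# Read the PID back from the file and force-kill the process
pid=$(cat /tmp/demo:1.pid)
kill -9 "$pid"
```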
How to start the vnc server on the server and how to access it from the client?
log in as the o/s user
type vncserver and press enter
enter a password if we are starting the vnc server for the first time
identify the display number of the vnc server
linux6:1
open vnc viewer and type the ip address of the server along with the display number
192.168.0.102:1
enter the password and press enter
Then we connect to the x-server through the vnc viewer.
Screening:
Screening is the concept of keeping a session's data available after the session is
closed.
For example, when we are working with a file in the 'vi' editor, we modify 100 lines and
close the session without saving the file. Normally the file will not be modified in this
situation. In this case, if we use screening, we can retrieve the modifications exactly as
we had made them.
This is possible by following the below steps
type the screen command
do the modifications to the file, whatever we want
close the session
the first screen is given the name screen 0
each and every screen is identified by a socket number
we list the screens by using the command
syn: screen -ls
9501.pts-4.linux6 (socket number)
=> the local screen is attached to the session by using the following command
Syn: screen -x socketnumber
This screen retains the session and we can continue the activities we were doing in the previous
session.
Actually vnc is used for gui mode and screen is used for cui mode.
Manual process
DBCA (Database Configuration Assistant)
1. We must set the environment for the database, i.e., we must set values for environment
variables like ORACLE_SID, ORACLE_HOME, PATH.
Follow the below steps to set the environment.
ORACLE_SID will be the database name, so the SID must be the same as the name of the database we
want to create.
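Step 1 can be sketched as the following exports (the SID and home path are examples; in practice these lines go in .bash_profile):

```shell
# Environment for the database to be created (values are examples)
export ORACLE_SID=dbcherry            # must match the database name
export ORACLE_HOME=/oraDB/kittu/ohome # the oracle home created earlier
export PATH=$ORACLE_HOME/bin:$PATH    # so sqlplus etc. are found
```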
After executing the above statement, "database created" is displayed. Then execute the below
post scripts.
SQL> @$ORACLE_HOME/rdbms/admin/catalog.sql
Instead of writing $ORACLE_HOME we may use '?'
This script creates all the dictionary views.
After the completion of the above script, run the below script
SQL> @$ORACLE_HOME/rdbms/admin/catproc.sql
This script creates the objects required for the procedural (PL/SQL) option.
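Since '?' expands to $ORACLE_HOME in SQL*Plus, the two post scripts can equally be run as (a sketch):

```sql
SQL> @?/rdbms/admin/catalog.sql   -- dictionary views
SQL> @?/rdbms/admin/catproc.sql   -- PL/SQL (procedural) objects
```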
The same process is used to create another database; only some modifications are required in the
initialization file and the create database statement.
2) initdbcherry.ora
db_name = dbcherry
db_cache_size=500m or 50000000
shared_pool_size = 50m
log_buffer =10000
undo_tablespace = undotso1
undo_retention=99
control_files= /oraDB/dittu/database/cs.ctl
undo_management = auto
compatible= 10.2.0.1.0
Then execute the below statements, after executing point 3 above.
Welcome screen
1) Operation
Select the operation you want to perform
Create database
Configure database options in a database
Delete database
Manage templates
2) Database templates
Select a template from the following list to create the database
General purpose
Transaction processing
New database
3) Global database name: ramu
It is considered as ramu.appworld.com
sid: ramu
4) Database connection options
Select the mode in which you want your database to operate by default
dedicated server mode (one user)
shared server mode (more users)
5) Initialization parameters
By default it takes some values; if not, we can modify those values
Memory
  o typical
  o custom
    shared pool - 5000000
    buffer cache - 3k
    java pool - 25000
    large pool - 0
    pga - 1500000
Character sets
  o use default
  o use unicode
  o choose from list
DB sizing
  sort area size - 524288
File locations
  1) create server parameter file
  Trace file destinations
    User process        admin/udump
    Background process  admin/bdump
    Core dumps          admin/cdump
6) Database storage
Logfiles
System files
Controlfiles
7) Creation options
create database
save as template
OK
Home directory:- the location where the files related to that particular user are
maintained.
Sys as sysdba:- This is the root login account; through this user we perform major
activities like:
Starting the database
Shutting down the database
Removal of the database
Monitoring the database
Taking backups
There is no password for sys as sysdba. It has the highest privileges of any database user.
System:- Through this user we perform the lower-level activities compared to those mentioned above.
The default password for system is manager.
To run a script from the SQL prompt:
Sql> @scriptname
1) Configuring Instance
2) Configuring Database
1) Configuring Instance:- the instance is nothing but memory. This activity is done by creating
init<ORACLE_SID>.ora
a) SGA
b) BACKGROUND PROCESSES
When we issue startup nomount, it reads initdb.ora and allocates memory to the sga, and the
instance is started.
The sga is space reserved in memory (ram) for the database.
2) Configuring database
When we issue the create database statement, the database is created.
After this, 3 types of files are created. They are
Datafiles (.dbf)
Controlfiles (.ctl)
Redolog files (.redo)
After this we have to perform the post steps.
Sizes for software
Oracle 9i ----- 1.6gb
Oracle 10g ----- 1.26gb
Q) How to change an account's shell from k-shell to bash?
This can be done in the /etc/passwd file.
Change ksh to bash for the user we want to change, then save the file and exit. Before
doing any modification in the passwd file it is better to maintain a copy of that file.
In bash the auto-executed file is .bash_profile; in ksh the auto-executed file is .profile.
When we change the shell, the data and files of the former shell remain available to the new
shell.
Oracle memory
db buffer cache: recently used data is stored in the buffer cache. If the data a user
requests is present in the buffer cache, it is sent to the user directly.
INSTANCE:
• When we do startup, oracle allocates the SGA and starts the background processes
which are mandatory to run oracle. This is called the Instance.
• The instance opens the database files. Each and every thing is performed by the instance.
• All the logical manipulations like creating, reading, writing, etc., are done by the instance.
• All the files are managed by the instance.
• A user cannot have access to the files without the instance.
• The user connects only to the instance, not to the files.
• The user is able to view and manipulate data through the instance only.
• Making things available to the user is done by the instance.
• The instance is identified by ORACLE_SID.
• Each database must be created with a unique SID.
When we shut down the database, the instance is closed and all the memory (SGA) is
deallocated; the database files (datafiles, logfiles, control files) can then be handled at the o/s level.
1. We use this when we are creating or altering the database. This stage is used for
maintenance of the database, i.e., if we want to increase the size of datafiles, change the
locations of files, or if any issues occurred in the database.
2. Startup Open;
Alter database open;
ORACLE PROCESSES
Server Processes
Client Processes
Background Processes
Server Process: When a user establishes a session, a server process is created; it connects to
the oracle instance on behalf of the client. To handle the requests of the client (user)
process connected to the instance, a server process is created on behalf of
each user's application and can perform one or more of the following:
• Parse and execute the sql statements issued through the application (client process)
• Return results in such a way that the application can process the information
Client Process: started at the time a database user requests a connection to the oracle server.
• A client process is a process which is created when the client software is started.
• When we execute sqlplus from the $ prompt, sqlplus becomes the client process.
• The client process is a process that sends messages to a server, requesting the server
to perform a task (service). Client programs usually manage the user-interface
portion of the application, validate the data entered by the user, dispatch requests to
server programs and sometimes execute business logic. The client process is the
front-end application that the user sees and interacts with.
Hand shake: When we start a client process (i.e., when we give sqlplus on the shell prompt),
before this process interacts with the instance, the server process interacts with the client process.
This is called the handshake.
Parent process: For every process there is a parent process. When we execute sqlplus
from the shell, the shell's PID becomes the parent process ID of sqlplus, i.e., the shell spawned a
process……. lsnrctl is an executable in $ORACLE_HOME/bin.
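The parent-process relationship can be sketched with a background sleep standing in for sqlplus started from the $ prompt:

```shell
# The shell's PID is the parent process ID of anything it spawns
echo "shell PID: $$"

sleep 30 &        # stand-in for sqlplus started from this shell
child=$!

# The PPID column for the child equals the shell's PID above
ps -o pid=,ppid=,comm= -p "$child"
```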
Local Process: When the connection is established on the server through the shell prompt,
it is said to be a local process.
Non-Local Process: If the connection is established from outside the server, it is said to be a
non-local process.
Parent process ID: for a non-local client process it is 1 (init). init is helpful to establish the
connection. These are client sessions (or) remote sessions. oracle is the executable
installed in $ORACLE_HOME/bin; it is the oracle engine.
Background processes start when the oracle instance is started; they are used to run the oracle database.
There are 2 types:
Mandatory: SMON, PMON, CKPT, LGWR, DBWR
Optional: ARCH, Pnnn, Jnnn, LCK
The mandatory processes are mandatory to run oracle. These are started automatically
when we start the oracle database. They must be running in the background as long as the
database is up.
Naming convention for these processes:- ora_<process>_<ORACLE_SID>
EX: ora_smon_dbsidnu
ora_pmon_dbsidnu
ora_ckpt_dbsidnu
ora_dbwr_dbsidnu
ora_lgwr_dbsidnu
When we issue a select statement, the shared pool parses the statement into a form sql
can execute, then it is sent to the db_cache. At that point the server process's parsing work is
done and the background processes start working. If the information related to the sql statement
is available in the db_cache, it is sent to the user. If not, it is fetched from the
datafiles and then sent to the user.
Page 16 of 102
DBA NOTES
When we say commit, the data is written to the redolog files from the log buffer by
lgwr. The data in the db_cache is also written to the datafiles by dbwr, but a copy is maintained in
the db_cache for future reference.
OPTIONAL PROCESS
Pnnn: These are parallel server processes used to perform parallel DML activities. They can be used
for parallel execution of sql statements or recovery. The maximum number of parallel processes
that can be invoked is specified by the initialization parameters
Parallel_min_servers=1
Parallel_max_servers=10
Lck: lock
This is available only in RAC instances. Meant for parallel server setups; the
instance locks that are used to share resources between instances are held by the lock process.
SERVER PROCESS
A single database can be accessed by multiple instances. We have multiple servers, one
instance each, but only one database shared by all instances. This is called RAC.
STANDBY DATABASE
DATABASE ARCHITECTURE
There are 2 types of database architectures.
Physical
Logical
Physical architecture is nothing but o/s level architecture. Files that exist at the o/s level
are said to be the physical architecture.
Physical:
1. datafiles, min (1)
2. redolog files, min (2)
3. controlfiles, min (1)
LOGICAL ARCHITECTURE:
Schema Objects
Non-Schema Objects
A schema is nothing but a user.
Seeded schemas:- default schemas
Ex: sys, system
The objects which reside in a schema are said to be schema objects. Ex:
table, view, index, synonym, procedure, package, function, database link,
sequence, etc. The objects which are not associated with a schema are said to be non-
schema objects.
Ex: Tablespace, Roles
To start the database:
sqlplus "/ as sysdba"
DATA DICTIONARY
1. Oracle maintains the entire system data in the data dictionary, or catalog.
2. System data is the data which is required for the functionality of the database.
3. The data dictionary or catalog is a set of tables, views and synonyms.
4. When we create a database, some files and objects are created both physically
& logically.
These base tables are created when we run the create database statement. We cannot
access these tables directly, and it is very difficult to understand the data in them. There
are views to access these tables; they are created when we run catalog.sql.
VIEWS
Object          DBA view        ALL view       USER view
Tables          dba_tables      all_tables     user_tables
Indexes         dba_indexes     all_indexes    user_indexes
Synonyms        dba_synonyms    --             --
Views           dba_views       --             --
Sequences       dba_sequences   --             --
Clusters        dba_clusters    --             --
Database links  dba_db_links    --             --
Datafiles       dba_data_files  --             --
Oracle updates the base tables with whatever ddl activities are done
by us. The database engine takes this responsibility. The whole of oracle works based
on these base tables.
dba_ :- displays everything in the database (all users' info).
DICTIONARY VIEWS:
V$ VIEWS:
The views starting with v$ are said to be dynamic performance views.
SQL*PLUS: These commands work only in oracle. They are used to edit and format output.
i :- inserts a statement into the buffer, i.e., adds a new line to the sql statement
a :- appends new words to the sql statement
c :- changes a string and replaces it with the required string
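A sketch of the i/a/c buffer-editing commands in SQL*Plus (table and column names are examples):

```sql
SQL> select ename from emp;
SQL> a , sal              -- a: append ", sal" to the current buffer line
SQL> c /sal/job/          -- c: change the string "sal" to "job"
SQL> i where deptno = 10  -- i: insert a new line after the current one
SQL> /                    -- re-run the edited statement
```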
save: by using this, we can save a sql statement. We refer to these saved statements as sql scripts.
To run the sql scripts from any location, we have to mention the location of the sql scripts, i.e.,
ORACLE_PATH, in .bash_profile:
export ORACLE_PATH=/tmp:/oraAPP:/oraAPP/kittu
Usually these sql scripts are saved in the location from where we fire sqlplus.
Dropping a user:-
Syn:- drop user xyz;
Constraints:-
Select owner,constraint_name,constraint_type,table_name from dba_constraints;
To see constraint columns:-
Select owner,constraint_name,table_name,column_name from dba_cons_columns;
Tables:-
Select owner,table_name,tablespace_name,status from dba_tables;
Users:-
Select username,default_tablespace from dba_users;
Tablespaces:-
Select tablespace_name,status from dba_tablespaces;
Database:-
Select name,dbid,created,open_mode from v$database;
Version:-
V$version has only one column, banner
Select * from v$version;
Datafiles:-
Select file_name,tablespace_name,bytes/1024/1024,online_status,autoextensible from
dba_data_files;
To see datafiles in mount stage based on ts index:-
Select name,ts#,status,bytes/1024/1024 from v$datafile where ts# = 0;
Procedures:-
Select owner,procedure_name from dba_procedures;
To see source code:-
Select text from dba_source where name='name';
Objects:-
Select owner,object_name,object_type from dba_objects;
Functions:-
Select owner,object_name,object_type from dba_objects where object_type =
'FUNCTION';
Source code:-
Select text from dba_source where name = 'FUNCTIONS';
Packages:-
Select owner,object_name,object_type from dba_objects where object_type =
'PACKAGE';
Source code:-
Select text from dba_source where name = '---';
Triggers:-
Select owner,trigger_name,trigger_type,table_name,column_name from dba_triggers;
Source code:-
Select text from dba_source where name = '---';
Control file:-
Select name,status from v$controlfile;
Log file:-
Select member,group#,status from v$logfile;
Privileges:-
To see table privileges:-
Select grantee,owner,table_name,grantor,privilege from user_tab_privs;
To see user privileges:-
Select username,privilege,admin_option from user_sys_privs;
TABLESPACE MANAGEMENT
Dropping a tablespace:
Before dropping a tablespace, it is better to take the tablespace offline.
Syn:- drop tablespace ts01;
In this case only the tablespace is deleted, but the datafiles are retained at the o/s level.
To delete the contents and the datafiles at the o/s level too (contents means the objects
in the TS, i.e. tables, views, etc.):
Syn:- drop tablespace ts01 including contents and datafiles;
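The drop sequence above can be sketched end to end (ts01 is the example name):

```sql
SQL> alter tablespace ts01 offline;
SQL> drop tablespace ts01 including contents and datafiles;
```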
Maintenance of datafiles:-
- The physical architecture is maintained and managed by the ORACLE ENGINE.
There are two ways of working:
1) Proactive:- the solution before the problem exists.
2) Reactive:- the solution after the problem exists.
- We can increase the datafile size automatically.
- This is possible by turning autoextend on.
Syn:- alter database datafile '----' autoextend on;
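A sketch of the proactive setting (the datafile path is hypothetical; next and maxsize are optional clauses that bound the growth):

```sql
SQL> alter database datafile '/oraDB/kittu/ts01.dbf'
     autoextend on next 10m maxsize 2g;
```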
Renaming a tablespace:-
It is possible only from 10g.
Syn:- alter tablespace chinni rename to babu;
In 9i, it is not possible to rename a tablespace. To achieve this we have to follow the
below steps:
- create a new tablespace
- move all tables from the old TS to the new TS
Syn:- alter table <tname> move tablespace <new TS>;
Ex:- alter table emp move tablespace venki;
--> When we move a table from one TS to another TS, the table is then maintained in the new TS.
BIGFILE TABLESPACE:
It is new in 10g. With this we can create a very big tablespace, terabytes in size.
The maximum size is 4 terabytes.
Syn:- create bigfile tablespace ts01 datafile '---' size 10g;
Renaming a datafile:-
To rename a datafile we have to follow the below steps:
- take the tablespace offline
- rename the datafile at the o/s level
syn:- mv <old filename> <new filename>
- rename the file at the sql level
syn:- alter tablespace venki rename datafile '<oldname>' to
'<newname>';
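The three steps can be sketched end to end (paths are hypothetical; the mv runs at the o/s prompt between the SQL steps):

```sql
SQL> alter tablespace venki offline;
-- at the o/s level: mv /oraDB/venki01.dbf /oraDB/venki02.dbf
SQL> alter tablespace venki rename datafile
     '/oraDB/venki01.dbf' to '/oraDB/venki02.dbf';
SQL> alter tablespace venki online;
```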
STARTUP:
There are different phases in startup:-
1) Instance allocation:-
Memory for the SGA is allocated and the background processes start. Memory is
allocated by reading parameters from init.ora. This is the instance.
Instance started
2) Mount stage:-
The instance opens the control file.
Database mounted
3) Database open stage:-
The database is opened, i.e. the instance opens the datafiles and redo logs through the control
file, because the control file contains the info of the datafiles and redolog files.
II method:-
sql> startup nomount
In this stage, only the instance is allocated.
Sql> alter database mount;
In this stage the database is mounted, i.e. the second phase.
Sql> alter database open;
In this stage the database is opened, i.e. the third phase.
We issue startup nomount to perform 2 things:-
1) creation of a database
2) creation of a control file
III method:-
sql> startup mount
In this stage, phases 1 and 2 are executed.
Sql> alter database open;
Database is opened.
Generally we open the database in the Mount stage to perform maintenance activities like
renaming datafiles, default tablespaces, redologs etc. In this stage we can't access the dba_
views; we can access only the v$ views.
Undo tablespace:-
It is for undo operations. It maintains the old data until we issue commit.
SHUT DOWN:
There are 4 methods:-
1) shutdown (or) shutdown normal
2) shutdown immediate
3) shutdown abort
4) shutdown transactional
Shutdown transactional
This option is used to allow active transactions to complete first, i.e. it lets the
current transactions finish.
It doesn't allow clients to start new transactions.
Attempting to start a new transaction results in disconnection.
After completion of all transactions, any client still connected to the instance is
disconnected.
Now the instance shuts down.
The next startup of the database will not require any instance recovery.
It will disconnect users who are idle.
In real time, we use shutdown immediate (S.I) and shutdown abort (S.A).
Startup Restrict:-
We use this option to allow only oracle users with the restricted session system
privilege to connect to the database, i.e. only the DBA can have access to the DB. We can use
the alter command to disable this restricted session feature.
Syn:- alter system disable restricted session;
Actually we use this when we are in maintenance, so we can't give access to the
database to other users. We can enable the restricted session feature after logging in to the
database as the sys user.
Syn:- alter system enable restricted session;
SYSAUX TABLESPACE
It is new in oracle 10g. It is used to store database components that were stored in the
system tablespace in prior releases of the database. It is installed as an auxiliary TS to
the SYSTEM TS. When we create the database, some database components that formerly
created and used separate tablespaces now occupy the SYSAUX TS.
If the SYSAUX TS becomes unavailable, core database functionality will remain
operational, but the database features that use the SYSAUX TS could fail or function with
limited capacity.
Note:- If we add a control file to the parameter without physically copying it, oracle searches
for that file when we start the database and, since it is not there, cannot read it.
Each oracle database has redo log files. These redo log files contain all the changes made to
the datafiles.
The purpose of the RDOs is recovery: if something happens to one of the datafiles, the
change records maintained in the RDOs can bring the datafile back to the state it had before it
became unavailable, i.e. they are used for recovery of data.
The size of an RDO is static. We determine its size at the creation of the database; we can't
change its size except during maintenance.
The idea is to first store the transactional data in the log buffer to reduce i/o contention.
When a transaction commits (or) a checkpoint occurs, the data in the log buffer must
be flushed to disk for recovery. This is done by LGWR.
The redolog of a database contains one or more redolog files. The database requires a
minimum of two files to guarantee that one is always available for writing while the other is
being archived (if the database is in archivelog mode).
LGWR writes to the redolog files in a circular fashion. When the current redolog file fills,
LGWR begins writing to the next available redolog file. When the last available
redolog file is filled, LGWR returns to the first redolog file and writes to it, starting the
cycle again. In this case, when the first RDO is overwritten its data is lost. This happens
when the database is in NOARCHIVELOG mode.
This switch of writing into another redolog file after filling the former one is said to be the
log switch process.
A log switch is the point at which the database stops writing to one redolog file and begins writing
to another. The oracle DB assigns each redolog file a new log sequence number
every time a log switch occurs and LGWR begins writing to it. When the
database archives redolog files, the archived log retains its LSN. A redolog file that is
cycled back for use is given the next available LSN.
How can we know whether the database is in archivelog (or) noarchivelog mode?
- select log_mode from v$database;
(OR)
- archive log list
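A sketch of both checks; the LOG_MODE value shown is what a noarchivelog database would report:

```sql
SQL> select log_mode from v$database;

LOG_MODE
------------
NOARCHIVELOG

SQL> archive log list
```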
Before dropping a redolog group (or) member we have to perform the below steps:-
We must make the status of the group inactive, because we can't drop the
currently running group.
We get the status of a group by
select group#, archived, status from v$log;
So we force oracle to switch the current log to another group.
This is possible by using the command
alter system switch logfile;
Now the current group changes.
* We can do this activity when the database is completely open.
* We can add (or) remove a group and its members in the mount stage and the open stage also.
Audit files:-
These files contain information recorded when we start sqlplus as 'sys as sysdba'. They are
also updated when we connect from another user to 'sys as sysdba'.
Who logged in and started the database is stored. They contain the o/s user name, database
name, system name, oracle_home, database user, privilege, time etc.
Whenever we connect as the sys user, oracle creates an audit file.
Ex:- ora_3702.aud
TABLESPACES
1) Permanent tablespaces
2) Undo tablespaces
3) Temporary tablespaces
In 10g, we cannot have more than 65536 tablespaces.
Permanent tablespaces:
The tablespaces which are used to store data permanently are said to be
permanent tablespaces.
Ex:- system,
sysaux,
etc.
Undo tablespaces:
Every oracle database must have a method of maintaining the information that is
used to roll back (or) undo changes to the database. Such information consists of records of
the actions of transactions, primarily before they are committed. Such records are collectively
referred to as undo.
The undo tablespace is used to store the undo records of the database, i.e., uncommitted
transactions (pending data). We create the undo tablespace at the time of database creation.
If there is no undo tablespace available, the instance starts but uses the SYSTEM
tablespace as the default undo tablespace. This is not a recommended option, so create the undo
tablespace at the time of database creation (or) afterwards by setting the parameter
value [undo_tablespace].
[Diagram: the old value 1000 is kept in the UNDO TS while the updated value 5000 sits in the user TS until commit or rollback.]
If the table contains the salary 1000 for some employees and we update the salary from
1000 to 5000, the records which contain salary 1000 are stored in the undo
tablespace and salary 5000 is updated in the table. If we commit, the new values remain;
otherwise 1000 comes back to the table.
We can view the tablespace type from dba_tablespaces:
Select tablespace_name,contents from dba_tablespaces;
To know which undo tablespace is assigned to the database:
show parameter undo
(or)
from the dictionary view database_properties
How to set the undo tablespace from the sql prompt?
Alter system set undo_tablespace='UNDOTS01';
If we set this with a pfile, it is available only for that session. If there is an spfile it will be
permanent for the database, because the spfile allows dynamic alteration.
If there is no spfile, we need to specify it in the init.ora file.
Renaming undo tablespaces:
Similar to permanent tablespaces.
Temporary Tablespaces
Temporary tablespaces are used to manage space for database sort operations
and for storing global temporary tables.
Ex:-
If we join 2 large tables and oracle cannot do the sort in memory (see the
SORT_AREA_SIZE initialization parameter), space will be allocated in a temporary tablespace
for doing the sort operation. Other sql operations that might require disk sorting are
create index,
analyze,
select distinct,
order by,
group by
The DBA should assign a temporary tablespace to each user in the database to
prevent them from allocating sort space in the SYSTEM tablespace.
TEMP FILES:
Unlike normal datafiles, tempfiles are not fully initialized. When you create a temp file,
oracle only writes to the header and last block of the file.
This is why it is much quicker to create a temp file than a normal database file.
Temp files are not recorded in the database's control file. This implies that one can just
recreate them whenever we restore the database (or) after deleting them by accident.
One cannot remove datafiles from a tablespace until we drop the entire tablespace.
However, one can remove a tempfile:
View:- dba_temp_files
Syn:- alter database tempfile
'/oraAPP/temp1.dbf' drop including datafiles;
If we remove all temp files from a temporary tablespace, we may encounter
Error ORA-25153: temporary tablespace is empty
Use the below syntax to add a temp file to a temporary tablespace
Syn:- alter tablespace temp
add tempfile '/oraAPP/temp02.dbf' size 100m;
USER MANAGEMENT
Creating a user:
Syn:- create user username identified by password;
Ex:- create user kittu identified by kittu;
Changing a password:
Ex:- alter user kittu identified by ramu;
Password expire:
Syn:- alter user username password expire;
Dropping a user:
Syn:- drop user kittu cascade;
Privilege:
A privilege is a right to execute a particular type of sql statement (or) to access
another user's object.
(or)
A privilege is a right to perform a specific activity. A privilege can be assigned to a user
(or) a role.
System privilege:
A system privilege is the right to perform a particular action (or) to perform an action
on any schema objects of a particular type. For example, the privileges to create a tablespace
and to delete the rows of any table in a database are system privileges. They are used to perform
DDL activities.
Who can grant and revoke system privileges?
• Users who have been granted a specific system privilege with the admin option.
• Users with the grant any privilege system privilege
i.e., the DBA can grant system privileges.
Granting and revoking system privileges:
Syn:- grant create session to kittu;
grant create table to kittu;
Syn:- revoke create session from kittu;
Object privilege:
An object privilege is the permission to perform a particular action on a specific
schema object, i.e. to perform DML activities on another user's objects. Some schema objects,
such as clusters, indexes, triggers, and database links, do not have associated object privileges;
their use is controlled with system privileges.
For example, to alter a cluster, a user must own the cluster (or) have the alter any cluster
system privilege.
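Granting and revoking an object privilege can be sketched as (user and table names are examples):

```sql
SQL> grant select, update on ramu.emp to kittu;
SQL> revoke update on ramu.emp from kittu;
```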
Administrative privileges:
Administrative privileges that are required for an administrator to perform basic
database operations are granted through two special system privileges, sysdba and sysoper.
ROLE:
A role is a set of privileges.
Managing and controlling privileges is made easier by roles, which are named
groups of related privileges that you grant, as a group, to users (or) other roles. Within a
database, a role name must be unique and different from usernames and all other role names.
Unlike schema objects, roles are not contained in any schema.
Who can grant (or) revoke roles?
• Any user with the grant any role system privilege can grant or revoke any role.
• Any user granted a role with the admin option can grant (or) revoke that role to (or) from
other users (or) roles of the database.
There are 18 predefined roles.
Ex:- connect, resource, dba, select_catalog_role etc.
Creating a role:
Syn:- create role rolename;
Ex:- create role abc;
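After creating the role, privileges are granted to it and the role is granted to users (names are examples):

```sql
SQL> grant create session, create table to abc;  -- privileges into the role
SQL> grant abc to kittu;                         -- role to a user
```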
Revoking:
Syn:- revoke create session from abc;
Dictionary views:
dba_roles is used to view all roles information in a database.
dba_role_privs is used to know which roles are assigned to users.
session_roles is used to view the roles of the current session.
role_role_privs is used to view which roles are granted to roles.
role_tab_privs is used to view which roles are granted on tables (or) columns.
QUOTA:-
A quota is some reserved space on a tablespace; it limits how much space a
user can use in a tablespace.
A quota can be assigned to a user at the time of creation (or) after creation:
1) create user abc quota 10m on system;
2) alter user kittu quota 10m on system;
3) deleting a quota: alter user kittu quota 0m on system;
Dictionary views:
dba_ts_quotas is used to know how much quota is reserved for a user in a particular
tablespace.
user_ts_quotas
PROFILES
A profile is a set of limits on database resources. Profiles are used to manage the
resources of the database.
By default, a profile named default is available in the database.
If we assign a profile to a user, that user cannot exceed its limits.
To enable resource limits dynamically we need to set the resource_limit parameter to true:
Alter system set resource_limit=true;
To see this parameter:
Show parameter resource_limit
To view profile information:
Select * from DBA_PROFILES;
(columns: profile, resource_name, resource_type, limit)
Profiles have 2 types of parameters:
1) resource parameters
Can be viewed via the user_resource_limits view
2) password parameters
Can be viewed via user_password_limits
To create a profile, we must have the create profile system privilege.
Syn:- create profile abc limit sessions_per_user 2
Idle_time 30
Connect_time 10
Failed_login_attempts 2;
How to alter a profile
Syn:- alter profile abc limit idle_time 10;
How to drop a profile
Syn:- drop profile abc [cascade];
Unlimited:
When a resource parameter is set to unlimited, a user assigned this
profile can use an unlimited amount of that resource. When a password parameter is set to
unlimited, it indicates that no limit has been set for the parameter.
Sessions_per_user:
It specifies the number of concurrent sessions allowed per user.
Connect_time:
It specifies the allowed connect time per session, in minutes.
Idle_time:
It specifies the allowed continuous idle time, in minutes, before the user is disconnected.
Failed_login_attempts:
The number of failed login attempts allowed before the account is locked.
OMF (ORACLE MANAGED FILES):
When oracle itself creates and names files, those files are said to be oracle managed files.
There is no need to define the locations and names of the CRD (control, redo, data) files.
The only problem with OMF is its naming convention.
Creating database using OMF:-
For this we need to add 2 parameters in init.ora
They are DB_CREATE_FILE_DEST and
DB_CREATE_ONLINE_LOG_DEST_n
In init.ora:-
Db_name=kittu
Shared_pool_size=100m
Db_cache_size=100m
Log_buffer=32768
Compatible=10.2.0.1.0
Undo_management=auto
Db_create_file_dest='/oraAPP/kittu/database1'
Db_create_online_log_dest_1='/oraAPP/kittu/database'
Connect as sys as sysdba, then:
startup nomount
create database kittu;
Then the database is created, along with a directory named after the database. In it, 3
directories are created. They are:
datafile — in this, all the data files are stored
onlinelog — in this, the redo log files are created
controlfile — in this, the control files are created
A SYSTEM tablespace is created with a 200mb datafile, autoextensible.
A SYSAUX tablespace is created with a 100mb datafile, autoextensible.
An UNDO tablespace named SYS_UNDOTS is created with a 120mb datafile,
autoextensible.
2 redo log groups are created, each of size 100mb, and each one contains
only one member.
One control file is created.
*In 10g, we can mention more than one destination parameter for redo logs
DB_CREATE_ONLINE_LOG_DEST_1
DB_CREATE_ONLINE_LOG_DEST_2
In this case we mention in init.ora as
DB_CREATE_FILE_DEST=’/oraApp/db1’
DB_CREATE_ONLINE_LOG_DEST_1=’/oraAPP/db1’
DB_CREATE_ONLINE_LOG_DEST_2=’/oraAPP/db2’
Now the datafile is created in db1.
Two control files are created, one in db1 and one in db2.
Two redo log groups are created, with members in db1 and db2, so
each group contains 2 members.
The control file in the 1st destination location is the primary one.
(A parameter in the pfile with no instance prefix applies to all instances; this matters when
we use RAC instances.)
After creation of the database, we need to specify the control files' location in
init.ora,
i.e., control_files=/oraAPP/db1/kittu/controlfile/a1_ctr.ctl
Only then can the control file be opened.
Drop database:
Sql> startup mount
Sql>alter system enable restricted session;
Sql>drop database;
When we drop a database, all the physical structures of the database (datafiles, control
files and redo log files) are removed.
When we create any tablespace, datafile or logfiles, a directory with the database name as
its name is created in db1/kittu/kittu. In this, again, three directories are created:
1) datafile
2) onlinelog
3) controlfile
All the files we create after the creation of the database will be stored in these locations;
the user-related datafiles and redo logs created after database creation are stored in these
directories.
Note that the OMF default file size is 100mb, and the file size can be overridden at any
time. You can specify the file size only by bypassing OMF and specifying the filename and
location in the datafile clause.
Oracle enhanced the 9i alert log to display messages about tablespace creation
and datafile creation. To see the alert log, go to the background dump destination
directory:
show parameter background_dump
The parameter db_recovery_file_dest defines the location of the flash recovery area,
which is the default file system directory (or ASM disk group) where the database creates
RMAN backups (when no format option is used), archived logs (when no other local
destination is configured) and flashback logs.
Create tablespace:-
Create tablespace ts01;
This alone creates a datafile of size 100mb.
Drop a tablespace:-
Drop tablespace ts01;
PARAMETER MANAGEMENT
When we want to change the database architecture, we use the alter database command.
When we want to change a parameter for a specific session (user), we use the alter session
command.
When we want to change parameters for the entire database, we use the alter system
command.
Dynamic parameters:
The parameters whose values can be modified dynamically at run time.
Static parameters:
The parameters whose values cannot be modified at run time.
Alter command:
Alter system/session set parameter_name=value [scope=spfile/memory/both]
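A minimal sketch of the two commands (the parameter names below are just common examples; the scope clause applies only to alter system when an spfile is in use):

```sql
-- session-level change, lasts until the session ends
alter session set nls_date_format = 'DD-MM-YYYY';
-- instance-wide change, written to both memory and the spfile
alter system set resource_limit = true scope=both;
```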
MANAGING INVENTORY
If we want to see the list of oracle products on a machine, check the file inventory.xml.
Location:- /etc/oraInventory/ContentsXML/
ORACLE NETWORKING
To access the database from a client system, we have to follow the following steps:
Step 1:- server side
We need to configure a listener on the server. The listener is a utility which listens
for incoming database connections.
It is an executable file.
Its configuration file is located at $ORACLE_HOME/network/admin/listener.ora
On one server we may have more than one listener, depending on the load (number of
clients communicating with the server).
Next, open the listener.ora file. It is a readable text file.
$ vi listener.ora
LISTENER_NAME =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC_LISTENER_NAME))
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.207)(PORT = 1521))
  )
SID_LIST_LISTENER_NAME =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /oradb/nani9i)
      (SID_NAME = databasename)
    )
    (SID_DESC =
      (SID_NAME = PLSEXTPROC)
      (ORACLE_HOME = /oradb/nani9i)
      (PROGRAM = extproc)
    )
  )
The listener.ora entry defines:
• listener name
• list of sid's (databases)
• protocol (tcp/ip)
• port number (default 1521; we must have different port numbers for different
listeners)
• host name (ip address)
After configuring the listener in listener.ora, open the database, then start the listener:
$ lsnrctl start listener_name
Step 2:- client side
Tnsnames.ora
Tnsentryname =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = TARGET_SERVER)(PORT = 1521))
    (CONNECT_DATA =
      (SID = databasename)
    )
  )
Step 3:
In a client tool such as Toad, connect with:
Database (tns entry name): chinni
Schema/username: kittu
Password: ram
Stop listener:
$ lsnrctl stop listener_name
Creating a second listener (here named apple, on port 1599):
apple =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROCapple))
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.207)(PORT = 1599)))
SID_LIST_apple =
  (SID_LIST =
    (SID_DESC = (ORACLE_HOME = /home)(SID_NAME = db9i))
    (SID_DESC = (ORACLE_HOME = /tmp)
In tnsnames.ora we can define multiple tns entries with different port numbers and the same
sid.
Everything else stays the same; we only have to create a listener for each different port number.
Tip
Using the lsnrctl command we can:
1) Start
2) Stop
3) Services — to know the services of the server (dedicated or mts, local or
non-local)
4) Debug
5) Status
6) Help
7) Reload — it will restart the listener
The listener can be started regardless of the status of the instance.
If we want to keep the network files (listener.ora, tnsnames.ora) in a non-default location,
we need to define the TNS_ADMIN environment variable in .bash_profile:
export TNS_ADMIN=/home
Then oracle will look for listener.ora in the /home directory.
The tnsnames.ora file can have 'n' number of tns entries.
There is no significance to the tns entry name; we can give it any name.
The listener trace level can be set in listener.ora, e.g. trace_level_<listener_name> = admin.
SESSION MANAGEMENT
To kill a session:
Syntax:- alter system kill session 'sid,serial#';
Ex: alter system kill session '1,20';
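The sid and serial# needed above can be looked up in v$session first; a sketch (the username KITTU is just an example):

```sql
select sid, serial#, username, status
from v$session
where username = 'KITTU';
-- then plug the values into: alter system kill session 'sid,serial#';
```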
STORAGE MANAGEMENT
Segment:
A segment is a set of extents that contains all the data for a specific logical
storage structure within a tablespace.
For example, for each table, oracle database allocates one or more extents to form the
table's data segment.
Extent:
An extent is a specific number of contiguous data blocks that are allocated
for storing a specific type of information.
Oracle Block:
A block is the smallest unit of storage in oracle. The size of a data block is fixed when
the database is created and cannot be changed except by rebuilding the database.
The primary data block sizes are 2k, 4k, 8k, 16k and 32k.
The size of the block is determined by the parameter db_block_size in the init.ora file.
The o/s also stores data in blocks; o/s file block size is typically 512 bytes or 1k.
When we read data, oracle reads in units of db blocks; oracle translates between oracle
blocks and o/s blocks while reading.
In 10g the default block size is 8k.
In 9i, the default block size is 2k.
A data block consists of:
• block header
• table directory
• row directory
• free space
• row data (used space)
Block header: It contains general block info such as the block address and the type of
segment (table or index).
Table directory:- This portion of the data block contains information about the tables having
rows in the block.
Row directory:- This portion of the data block contains info about the actual rows in the block
(including the address of each row piece in the row data area). After space has been
allocated in the row directory of a block's overhead, this space is not reclaimed when a
row is deleted. Therefore, a data block that is currently empty but once held up to 50 rows
continues to have about 100 bytes allocated in the row directory; oracle database reuses this
space only when new rows are inserted in the block.
Overhead: The block header, table directory and row directory are referred to
collectively as overhead. Some block overhead is fixed in size; the total block overhead
size is variable. On average, the fixed and variable portions of data block overhead total 84
to 107 bytes.
Rowdata: This portion of the data block contains table or index data. Rows can span blocks.
FreeSpace: Free space is reserved for insertion of new rows and for updates to rows that
require additional space.
Pctfree: This parameter sets the percentage of a block reserved for updates to existing rows.
After a block is filled to the limit determined by pctfree, oracle database considers the block
unavailable for the insertion of new rows.
Pctused: This parameter sets the minimum percentage of a block that can be used for row
data plus overhead before new rows are added to that block. Until the used space of the
block falls beneath pctused, oracle database uses the free space of the data block only for
updates to rows already contained in the data block.
Initrans: This parameter specifies how many concurrent transactions can access the db block
at any particular point in time.
Tip: once the primary block size is set, you can still create a new tablespace with an alternate
block size and create tables in it using the appropriate parameters.
Extent Management
Storage Parameters:
• INITIAL
• NEXT
• MINEXTENTS
• MAXEXTENTS
• PCTINCREASE
INITIAL: This parameter specifies the size of the first extent.
NEXT: If the data blocks of a segment's initial extent become full and more space is
required to hold new data, oracle database automatically allocates an incremental extent
for that segment. The size of the incremental extent is the same as or greater than the
previously allocated extent; i.e., we specify the size of the extents after the initial extent
through this parameter.
MINEXTENTS: This parameter specifies how many extents are allocated when the
segment is created.
MAXEXTENTS: This parameter specifies up to how many extents a segment can
hold.
PCTINCREASE: This parameter specifies the percentage by which each extent after the
second grows over the previous extent.
Ex:-
Initial 1m
Next 1m
Minextents 2
Maxextents 5
Pctincrease 20
Resulting extent sizes: 1m, 1m, 1.2m, 1.44m, 1.73m (pctincrease applies from the third
extent on, and oracle rounds each size up to a multiple of the block size).
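The growth of the incremental extents can be sketched with simple arithmetic (a rough model only: real allocations are rounded up to multiples of the block size, which this ignores):

```shell
# extents under: initial 1m, next 1m, maxextents 5, pctincrease 20
size=1048576                 # NEXT = 1m in bytes
echo "extent 1: 1048576"     # INITIAL = 1m
for n in 2 3 4 5; do
  echo "extent $n: $size"
  size=$(( size + size * 20 / 100 ))   # pctincrease from the 3rd extent on
done
```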
If we don't specify storage parameters for an extent, oracle itself allocates the default
storage parameters.
A segment is a collection of extents.
The segment name is nothing but the object name: when we create a table or index, oracle
creates a segment for it.
By default each extent contains a maximum of 5 blocks and a minimum of 2 blocks.
Create a tablespace and a segment, and find the storage parameters without specifying
them.
All the parameters — the blocks and their sizes for extents — are allocated as per the
operating-system defaults.
Create a table in the tablespace with some parameters and check the parameters?
Extent Management
Why do we say a tablespace is not visible in the file system? Because a tablespace is a
logical structure; oracle stores data physically in datafiles.
Locally managed tablespace: The extents are managed within the tablespace itself. In
locally managed tablespaces, all the tablespace and extent information is stored in the
datafile header of that tablespace; the data dictionary tables are not used for storing this
information.
The advantage of LMTS is that no DML is generated against the data dictionary (reducing
contention on dictionary tables) and no undo is generated when space allocation or
deallocation occurs.
SYSTEM (or) AUTOALLOCATE: Autoallocate specifies that extent sizes are system
managed. Oracle will choose "optimal" next extent sizes starting with 64kb; as the segment
grows, larger extent sizes are used — 1mb, 8mb and eventually 64mb. This is
recommended for low-maintenance or unmanaged environments.
The default is autoallocate, i.e., it takes the database default storage.
Syntax:- create tablespace tbs
datafile 'star.dbf' size 10m
extent management local autoallocate;
UNIFORM:
It specifies that the tablespace is managed with uniform extents of the given size. The
default size is 1m. The uniform extent size of an LMTS cannot be overridden when a schema
object such as a table or index is created.
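A uniform-extent counterpart of the earlier autoallocate example might look like this (file name and sizes are examples):

```sql
create tablespace tbs2
datafile 'star2.dbf' size 10m
extent management local uniform size 1m;
```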
In a DMTS we can alter all storage parameters except initial and minextents. If we create a
DMTS, the extent info is stored in the dictionary and the real data is stored in the datafiles
of that tablespace. In that case we need more I/O, i.e., oracle has to search for extent info
in the dictionary, which degrades performance.
SEGMENT
Segments are the storage objects within the oracle database. A segment might be a table, an
index, a cluster, etc.
The level of logical database storage above an extent is called segment.
A segment is a set of extents that contains all the data for a specific logical storage
structure within a tablespace.
For example, for each table, oracle database allocates one or more extents to form that
table's data segment, and for each index, oracle database allocates one or more extents to
form its index segment.
• rollback
• deferred rollback
• lobindex
• temporary
• cache
• permanent
Data Segments:
A single data segment in an oracle database holds all of the data for one of the following:
• A table that is not partitioned or clustered
• A partition of a partitioned table
• A cluster of tables
Oracle database creates the data segment when you create the table or cluster with the
create statement.
The storage parameters for a table or cluster determine how its segment's extents are
allocated. You can set these storage parameters directly with the appropriate create or
alter statement; they affect the efficiency of data retrieval and storage for the data segment
associated with the object.
Index Segment:
Oracle database creates the index segment for an index or an index partition when you
issue the create index statement. In this statement we can specify storage parameters for
the index.
The segments of a table and of an index allocated on it do not have to occupy the same
tablespace. Setting the storage parameters directly affects the efficiency of data retrieval
and storage.
Temporary segments: When processing queries, oracle database often requires temporary
workspace for intermediate stages of sql statement parsing and execution. Oracle database
automatically allocates this disk space, called a temporary segment. Typically, oracle
database requires a temporary segment as a work area for sorting.
Undo segments: Oracle database maintains information needed to reverse changes made to
the database. This information consists of records of the actions of transactions,
collectively known as undo. Undo is stored in undo segments in an undo tablespace.
Periodically, oracle database modifies the bitmap of the datafile (for LMTS) or updates the
data dictionary (for DMTS) to reflect the regained extents as available space. Any data in
the blocks of freed extents becomes inaccessible.
Periodically, oracle database deallocates one or more extents of a rollback segment if it has
an optimal size specified.
If the rollback segment is larger than optimal (i.e., it has too many extents), oracle
database automatically deallocates one or more extents from the rollback segment.
Oracle allocates space for segments in extents. When the existing extents of a segment are
full, oracle allocates another extent for that segment. Because extents are allocated as
needed, the extents of a segment may or may not be contiguous on disk, and may or may
not span files.
Manual: This option uses free lists for managing free space within segments.
Auto: This option uses bitmaps for managing free space within segments. This is typically
called automatic segment space management, and it is the default.
Freelists: Freelists are lists of data blocks that have space available for inserting rows.
• Every datafile must consist of one or more o/s blocks. Each o/s block may belong
to one and only one datafile.
• Every tablespace may contain one or more segments. Each segment must exist in
one and only one tablespace.
• Every segment must consist of one or more extents. Each extent must belong to one
and only one segment.
• Every extent must consist of one or more oracle blocks. Each oracle block may
belong to one and only one extent.
• Every extent must be located in one and only one datafile. The space in a datafile
may be allocated as one or more extents.
• Every oracle block must consist of one or more o/s blocks. Every o/s block may be
part of one and only one oracle block.
Create a tablespace without any option of extent management, create a table, and
check.
Obs:
extent_management = local
allocation_type = system
segment_space_management = auto
initial = 65536
min_extents = 1
max_extents = 2147483645
bytes = 65536
to check allocation_type
select tablespace_name, allocation_type from dba_tablespaces;
allocation_type is system
initial takes 2m
But while storing it takes extent sizes as uniform
Allocation type – Manual
Obs:- initial – 40960, next – 40960, min – 1, max – 505, E.M – dictionary, pct – 50,
S.S.M – manual, allocation type – user
- When we create a table without parameters, the same values are applied
- create table a2 (a number);
exec dbms_space_admin.tablespace_migrate_from_local('s01');
Now it will be migrated from local to dictionary,
i.e., we can change LMTS to DMTS when segment space management is manual.
• the allocation type for DM is user
• to know the source code (DDL) of a tablespace:
select dbms_metadata.get_ddl('TABLESPACE','LOC') from dual;
dbms_metadata and dbms_space_admin are packages.
If you notice poor performance in your oracle database, row chaining and migration
may be one of several reasons, and we can prevent some of them by properly designing
and/or diagnosing the database.
Row migration and row chaining are two potential problems that can be prevented by
suitable diagnosis; by preventing them, we can improve database performance.
The main considerations are:
What are row chaining and row migration?
Row Migration:
We will migrate a row when an update to that row would cause it to no longer fit
on the block (with all the data that exists there currently in that row).
A migration means that the entire row will move, and we just leave behind a
forwarding address. So the original (old) block holds the rowid of the new block, and the
entire row is moved. This requires more I/O.
Row Chaining:
A row is too large to fit into a single database block. For example, if you use a 4kb
block size for your database and you need to insert a row of 8kb into it, oracle will use 3
blocks and store the row in pieces. Some conditions that will cause row chaining are:
Detection:
Migrated and chained rows in a table or cluster can be identified by using the analyze
command with the list chained rows option. This command collects information about
each migrated or chained row and places it into a specified output table. To create the
table that holds the chained rows, execute the script utlchain.sql.
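The detection steps above can be sketched as follows (the table name emp is an example; utlchain.sql ships under $ORACLE_HOME/rdbms/admin):

```sql
-- create the CHAINED_ROWS output table
@?/rdbms/admin/utlchain.sql
-- record every migrated/chained row of emp into it
analyze table emp list chained rows into chained_rows;
select count(*) from chained_rows;
```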
Resolving:
In most cases, chaining is unavoidable, especially when it involves tables with
large columns such as LONG, LOBs, etc. When you have a lot of chained rows in different
tables and the average row length of the tables is not that large, you might consider
rebuilding the database with a larger block size.
Ex:- you have a database with a 2k block size, and different tables have multiple large
varchar columns with an average row length of more than 2k. This means that you will
have a lot of chained rows because your block size is too small; rebuilding the db with a
larger block size can give you a significant performance gain.
Migration is caused by PCTFREE being set too low: there is not enough room in the
block for updates. To avoid migration, all tables that are updated should have their
PCTFREE set so that there is enough space within the block for updates. You need to
increase PCTFREE to avoid migrated rows; if you leave more space available in the block
for updates, the row will have more room to grow.
There are 2 built-in scripts provided by oracle which are used to start and shut down
databases.
We use them during emergency maintenance.
DBSTART:
This is a script located in $ORACLE_HOME/bin. It is an executable file; when
we execute it, it starts the oracle databases. It is typically invoked from /etc/rc.local and
should only be executed as part of the system boot procedure.
This script will start all the databases listed in the oratab file whose third
field is 'Y'. This field is also referred to as the monitoring field. There is no need to pass
any arguments. The script ignores oratab entries whose third field is 'N' and commented
lines.
DBSHUT:
This is an executable file located in $ORACLE_HOME/bin. It
shuts down the databases whose third field in oratab is 'Y'.
When we run these scripts for the first time, they create 2 logfiles in $ORACLE_HOME:
-> startup.log
-> shutdown.log
When we start and shut down databases, the startup and shutdown information is
appended to these files.
BACKUPS
Backup and recovery is one of the most important aspects of a DBA's life. If you love your
company's data, you will very well love your job. Hardware and software can always be
replaced, but your data may be irreplaceable.
Physical Backup:- means making the copies of the files related to physical architecture.
Eg: Datafiles, Control files, Redolog files
Logical Backup:- means taking the copies of logical structure of Database.
Eg: Tables, Schemas, Tablespaces, Database
We can integrate the veritas software and hardware with the database; there must be a
separate admin (veritas admin) to maintain this technology.
At a minimum it backs up terabytes of data in just one hour!
In real-time environments we use the tar command to take the backup onto tape:
$ tar cvf filename *
WHOLE BACKUPS:
A whole backup is a backup of all the datafiles, the control file and (if you are using
it) the spfile. Remember that, as all multiplexed copies of the control file are identical, it is
necessary to back up only one of them. You do not back up the online redo logs; online
redo log files are protected by multiplexing and optionally by archiving. Also note that
only datafiles of permanent tablespaces can be backed up. The tempfiles used for your
temporary tablespaces can't be backed up by RMAN, nor can they be put into backup
mode for an OS backup.
PARTIAL BACKUP:
It will include one or more datafiles and the control file. It is a copy of just a part of
the database.
INCREMENTAL BACKUP:
An incremental backup is a backup of just some of the blocks of a datafile. Only the
blocks that have been changed or added since the last full backup will be included. It is
done by RMAN.
ONLINE BACKUP:
Backup which is taken when the database is up and running.
OFFLINE BACKUP:
Backup which is taken when the database is shut down.
PHYSICAL BACKUP
Traditional: cold, hot
RMAN: cold, hot
COLD BACKUP
Backup which is taken when the database is down is said to be cold backup.
1) List out the datafiles, control files and redolog files by using v$datafile,
v$logfile and v$controlfile.
Sql> select name from v$datafile;
Sql> select member from v$logfile;
Sql> select name from v$controlfile;
2) Shut down the database with shut immediate option.
Sql> shut immediate;
3) Now copy the crd files to backup location in OS level.
$cp /oraAPP/app/* /backup/
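The copy step can be wrapped in a small function; a sketch (paths and file extensions are examples):

```shell
# copy datafiles, control files and redo logs to a backup directory
# (the database must be cleanly shut down first)
cold_backup() {   # usage: cold_backup /oraAPP/app /backup
  src=$1; dest=$2
  mkdir -p "$dest"
  for f in "$src"/*.dbf "$src"/*.ctl "$src"/*.log; do
    if [ -e "$f" ]; then
      cp "$f" "$dest"/
    fi
  done
}
```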
How can we check whether the cold backup is working properly or not?
In the market, many scheduling software packages are available.
Ex: Redwood
$ echo "select * from tab;" | sqlplus -s system/manager — it will just display the output.
Oracle assigns every redo logfile a log sequence number to uniquely identify it.
The set of redo files for a database is collectively known as the database's redo log.
Oracle uses the redo log to record all changes made to the database; oracle records every
change in a redo record. An entry in the redo buffer describes what has changed: assume a
user updates a payroll value from 5 to 7 — oracle records the old value in undo and the
new value in the redo record.
Since the redo log stores every change to the db, the redo record for this transaction
contains three parts:
Changes to the transaction table of the undo segment
Changes to the undo data block
Changes to the payroll table data block
If the user commits, then, to make the change to the permanent table durable, oracle
generates another redo record.
• If archiving is disabled, a filled online redo log is available for reuse once the changes
recorded in the log have been saved to the datafiles.
• If archiving is enabled, a filled online redo log is available for reuse once the changes
have been saved to the datafiles and the file has been archived.
Archived log files are redo logs that oracle has filled with redo entries (rendered inactive)
and copied to one or more log archive destinations. Oracle can run in either of 2 modes:
*Archive log:
Oracle archives the filled online redo log files before reusing them in the cycle.
*No archiving:
Oracle does not archive the filled online redo log files before reusing them in the
cycle.
Running the database in archivelog mode has the following benefits:
The database can be completely recovered from both instance and media failure.
The user can perform online backups, i.e., back up a tablespace while the database is open
and available for use.
Archived redo logs can be transmitted to and applied at a standby database.
Oracle supports multiplexed archive logs to avoid any possible single point of failure
on the archive log.
The user has more options, such as the ability to perform tablespace point-in-time
recovery.
In noarchivelog mode, by contrast, the user can only back up the database while it is
completely closed after a clean shutdown, and typically the only media recovery option is
to restore the whole database, which causes the loss of all transactions issued since the
last backup.
The archived logs should be hosted on a separate physical disk.
HOT BACKUP
Backup that is taken while the database is up and running. The database must be running
in archivelog mode:
Sql>archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /stage/vijay/10g/dbs/arch
Oldest online log sequence 46
Next log sequence to archive 48
Current log sequence 48
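If archive log list instead reports "No Archive Mode", archivelog mode can be enabled first; a sketch (run as sysdba):

```sql
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
archive log list;
```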
Get the list of tablespaces and datafiles from dba_data_files:
Sql>select file_name, tablespace_name from dba_data_files;
When we put a tablespace into backup mode, oracle checkpoints the tablespace (i.e., a
checkpoint occurs for the tablespace): all changed blocks for it in the db_cache are
flushed to the datafiles.
Anyone can still update the tables (or indexes) in that datafile; all the updates will still be
written to the datafile, but at this time the scn markers in the header of each datafile in
the tablespace are frozen (blocked) at their current values.
The scn markers (numbers) will not be updated until the tablespace is taken out of backup
mode.
Oracle switches to logging full images of changed database blocks to the redo logs; this is
why the redo logs grow at a much faster rate while hot backups are going on.
i.e., oracle maintains full copies of the changed db blocks in the redo logs (if a log switch
occurs, they are archived). These full block images are what make the datafiles copied
during the backup recoverable; readers of the updated data are still served as normal from
the buffer cache and datafiles.
During a hot backup the performance of the system slows down.
When we put the tablespace into end backup mode, the headers of the datafiles are
unfrozen and the scn numbers are updated using the redo log files.
Tablespace ckpt:
A checkpoint that occurs on only one tablespace; that tablespace then has a different scn
compared to all the other tablespaces. This happens when we perform:
Alter tablespace ts offline;
Alter tablespace ts begin backup;
Database checkpoint:
A checkpoint that occurs for the whole database; all scn's are synchronized (i.e., equal)
at this time.
Sqlplus <<E
sys as sysdba
set pages 0
spool /tmp/backup.sql
(spool the 'alter tablespace ... begin backup', file-copy and 'alter tablespace ... end backup'
statements into backup.sql)
E
# running the spooled sql and taking the backup of the control file:
Sqlplus <<E
sys as sysdba
@backup.sql
E
For a hot backup we can put all tablespaces of the database into begin backup mode in
one shot with:
Sql>alter database begin backup;
Dynamic sql:-
We can generate a bunch of sql statements with a single command:
Select 'drop table ' || tname || ';' from tab;
The scn (system change number) is an ever-increasing number. It can be used to determine
the age (state) of the database.
Oracle uses scns in control files, datafile headers and redo records.
Every redo log contains both a log sequence number and low/high scns: the low scn is the
lowest scn recorded in the logfile, while the high scn is the highest scn in the logfile.
A checkpoint will update the datafile headers and the control file with the latest scn.
Sql> select dbms_flashback.get_system_change_number from dual;
1316516
For example, we can perform an incomplete recovery of a database up to scn 1030.
The scn number is very useful while recovering the database or instance. All the datafile
headers will have the same scn number when the instance shuts down normally.
Smon checks the scn in all datafile headers when the database is started. The database is
opened if the scn of the control file matches the scns of the datafiles and redo logs; if the
scns don't match, the database is in an inconsistent state.
The smon_scn_time table allows us to roughly find out which scn was current at a
specific time in the last five days.
Checkpoint
Checkpoint (ckpt) is a mandatory oracle background process.
At a checkpoint, the latest scn is written to the control file and the datafile headers.
Checkpoints lead to updating of the datafile headers; if the ckpt background process is not
available on our system (or is not started), lgwr will perform the task.
The checkpoint number is the scn at which all the dirty buffers are written to
disk. The checkpoint can be at the object/tablespace/datafile/database level.
scn_wrap and scn_base can be retrieved from the table smon_scn_time.
q) Does oracle do either crash recovery (or) transaction recovery after shutdown abort if the checkpoint was taken right before the instance crash?
Yes, oracle performs a roll forward first if there are any changes beyond that checkpoint, and then rolls back any uncommitted transactions.
SCN numbers are recorded at frequent intervals by SMON in the smon_scn_time table.
q) When the highest SCN is exhausted, what happens? Will oracle restart from the first number?
If the SCN really reaches its maximum allowed value (after exhausting all wraps), the database has to be opened in resetlogs mode and the SCN will start from the beginning all over again.
Q) Do all the redo entries have an SCN attached to them, (or) only the commit entries?
All changes recorded in the redo (including commits and rollbacks) will have an SCN associated with them.
Hot backup
Conditional execution at the database level:
$ ps -ef | grep smon | grep venkat | grep -v grep
It will show whether the venkat database is up (or) down; the final grep -v grep avoids matching the grep statement itself.
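As a hedged sketch, the pipeline above can be wrapped in a small function. The SID venkat is the example from the notes, and matching on smon plus the SID is an assumption about how the process name appears in ps output.

```shell
# Hedged sketch: report up/down for a given SID by looking for its SMON
# process; grep -v grep drops the pipeline's own grep entries from ps.
db_status() {
  sid="$1"
  if ps -ef | grep smon | grep "$sid" | grep -v grep > /dev/null; then
    echo "$sid is up"
  else
    echo "$sid is down"
  fi
}

db_status venkat
```

A script can branch on this output to decide whether it is safe to run the hot-backup steps that follow.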
set pages 0
spool hot.sql
select 'alter tablespace '||tablespace_name||' begin backup;'
from dba_tablespaces where contents not in ('TEMPORARY')
union all
select 'alter tablespace '||tablespace_name||' end backup;'
from dba_tablespaces where contents not in ('TEMPORARY');
spool off
E
sqlplus << E
sys as sysdba
@hot.sql
alter database backup controlfile to '/stage/hot/backup.ctl';
E
$ chmod 700 hot.sh
$ ./hot.sh
Dynamically passing ORACLE_SID:
#!/bin/bash
# set the environment
export ORACLE_SID=${1}
# /etc/oratab fields are colon-separated: sid:oracle_home:autostart
export ORACLE_HOME=$(grep -w "$1" /etc/oratab | awk -F ":" '{print $2}')
export PATH=$PATH:$ORACLE_HOME/bin
# check db is up/down
previous script
sqlplus << e
sys as sysdba
@hot.sql
e
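The grep/awk line that derives ORACLE_HOME can be exercised against a throwaway copy of oratab. This is a hedged sketch: the SIDs and paths below are invented, and the only real assumption is that /etc/oratab entries are colon-delimited (sid:oracle_home:autostart).

```shell
# Hedged sketch: parse an oratab-style file the way the script above does.
# The entries below are illustrative, not a real /etc/oratab.
oratab=$(mktemp)
cat > "$oratab" <<'EOF'
venkat:/oraDB/product/10.2.0:Y
kittu:/oraDB/product/9.2.0:N
EOF

sid=kittu
# grep -w matches the whole SID (colon acts as a word boundary),
# awk pulls out field 2, the oracle home path.
oracle_home=$(grep -w "$sid" "$oratab" | awk -F ":" '{print $2}')
echo "ORACLE_HOME=$oracle_home"
rm -f "$oratab"
```

Using grep -w matters here: a plain grep for a short SID like "db" would also match "db1" or "proddb" lines.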
Each rollback segment can handle only a fixed number of transactions from one instance.
Oracle creates an initial rollback segment called SYSTEM whenever a db is created. This segment is in the SYSTEM ts; we can't drop the SYSTEM rollback segment.
Place rollback segments in separate tablespaces.
To create rollback segments, the user must have the create rollback segment privilege.
#undo_management=auto
* start the database
observation:
* when we comment undo_management, the undo tablespace becomes offline.
* rollback segments also become offline.
this can be viewed from dba_rollback_segs:
Sql> select segment_name,status,tablespace_name from dba_rollback_segs;
in this situation, try to insert data into some table as a non-system user which is assigned to some permanent tablespace.
Sql> conn kittu/kittu
Sql> insert into emp values(1);
Error:
ORA-01552: cannot use system rollback segment for non-system tablespace 'KITTU'
to resolve this situation we have to bring the rollback segments online. this can be done in 2 ways:
1) manually
Sql> alter rollback segment r1 online;
2) mention the parameter in the init file and bounce the db:
rollback_segments=(rs1,rs2)
Sql> select a.name,b.writes,b.extents,b.curext,b.xacts
from v$rollstat b, v$rollname a where a.usn = b.usn;
Altering extents:-
Sql> alter rollback segment rbs storage (maxextents 120);
Shrinking:-
It means defragmentation.
Sql> alter rollback segment rbs shrink to 100k;
UNDO MANAGEMENT
Every oracle database must have a method to maintain information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed.
Till 8i, the undo that was generated was handled by a rollback tablespace, which was manually managed. In that case we have to first create a rollback tablespace, then create rollback segments and assign them to the rollback tablespace.
From oracle 9i, the new concept of the undo tablespace is introduced, which helps in the below ways:
• it is logically managed.
• The undo segments are created by oracle itself.
• The number of undo segments is decided by oracle itself.
• The purpose of undo segments and rollback segments is the same, except for the creation and maintenance part.
It is not possible to use both methods in a single instance. However, we can migrate: for example, create an undo tablespace in a database that is using rollback segments and assign the undo tablespace to the db, or create rollback segments in a database that is using an undo tablespace (or comment it out).
However, in both cases we must shut down and restart the database in order to effect the switch from one method to the other.
Auto:
If we use the undo tablespace method, we are operating in automatic undo management mode.
An undo tablespace must be available into which oracle will store undo records. The default undo tablespace is created at database creation, (or) an undo tablespace can be created explicitly.
The parameter to be specified to create and assign an undo tablespace is undo_tablespace.
When the instance starts up, oracle automatically selects for use the first available undo tablespace. If there is no undo tablespace available, the instance starts but uses the system rollback segment. This is not recommended, and an alert message is written to the alert file.
undo_retention:
Retention is a period of time, specified in units of seconds. It can survive system crashes, ie, undo generated before an instance crash is retained until its retention time has expired, even across restarting the machine.
When the instance is recovered, undo info is retained based on the current setting of the undo_retention parameter.
The default is undo_retention=900.
We can change this value dynamically by using the below statement:
Sql> alter system set undo_retention=200;
It takes effect immediately.
dropping the undo ts:
Sql> drop tablespace undotbs;
Rollback segments are overwritten, ie, when the last extent of a rollback segment gets filled, it enters the later uncommitted data into the first extent of that segment; it overwrites the data in those extents.
We had to create rollback segments manually; this method was used up to 8i.
Undo segments maintain the undo data till the retention period is reached. Even when all the extents are filled, it maintains the data till it reaches the retention period; at that time it throws an error:
ORA-30036: unable to extend segment in undo tablespace
Oracle itself takes care of creating undo segments; this was introduced in oracle 9i.
To build the demo tables (along with the scott user) using sql:
Sql> @?/rdbms/admin/utlsampl.sql
Sql> select undotsn, undoblks from v$undostat;
Temporary tablespaces
Views:
V$sort_usage
V$tempstat
V$tempfile
V_$sort_usage
Dba_temp_files
Database_properties
Package:
Utl_recomp
Sql> select file_name,tablespace_name,bytes,status from dba_temp_files;
Database creation
We can create a db without mentioning the below parameters in the init file:
db_cache_size, shared_pool_size, log_buffer and control_files.
The default values of the above parameters are:
db_cache_size = 48m
shared_pool_size = 32m
log_buffer = 7057408
controlfile = control<sid>.ora
location: $ORACLE_HOME/dbs/
Total sga size = 112m.
solution:
1) we have to get the sid and serial# for that session.
Sql> select sid, serial#, username from v$session where username is not null;
27 1632 SCOTT
2) now execute the below package to enable tracing for that session.
ARCHIVED LOGS
Q) How can we control the number of archiver processes?
Views:
V$database:
Sql> select log_mode from v$database;
LOG_MODE
-------------
ARCHIVELOG
V$archived_log:
It shows all archived logs information.
Sql> select name,dest_id,thread#,sequence#,archived,completion_time
from v$archived_log;
V$archive_dest:
Sql> select dest_name,name_space,archiver,log_sequence
from v$archive_dest;
DEST_NAME NAME_SPACE ARCHIVER LOG_SEQUENCE
--------------------------------------------------------------------------------------------
LOG_ARCHIVE_DEST SYSTEM ARCH 0
V$backup_redolog.
V$log.
V$log_history.
Q) What is the role to grant users to allow select privileges on all data dictionary views?
select_catalog_role.
Q) What is the role to grant users to allow execute privileges for packages and procedures in the data dictionary?
execute_catalog_role.
Q) Role to delete records from the system audit table (aud$)?
delete_catalog_role.
V$sga:
It shows the SGA component sizes (name, value).
V$instance:
Sql> select archiver,logins,shutdown_pending,
database_status,status,blocked,active_state from v$instance;
V$log_history.
V$fixed_table:
Sql> select name from v$fixed_table where name like 'V%';
V$fixed_view_definition:-
Q) How can we import a table to the target if the table already exists on the target?
[app@linux6 ~]$ exp system/manager file=a.dmp tables=kittu.a log=a.log
[kittu@linux6 ~]$ imp system/manager file=a.dmp fromuser=kittu touser=app ignore=y
Copy the dump file from source to target and also create the user in the target.
Import the schema's data by using the below syntax:
[kittu@linux6 ~]$ imp system/manager file=a.dmp fromuser=kittu touser=kittu
Q) How can we export and import a large database whose size is 500GB?
This is possible by using the filesize and file options of export.
[app@linux6 ~]$ exp system/manager filesize=100GB file=a.dmp,b.dmp,c.dmp,d.dmp,e.dmp log=a.log full=y
Copy the dump files to the target.
[app@linux6 ~]$ imp system/manager file=a.dmp,b.dmp,c.dmp,d.dmp,e.dmp log=a.log full=y ignore=y
Transport Tablespace:-
Usually if we are migrating a user whose data is, say, 1GB, it takes more time to export and import. To reduce the time, export/import has the option transport_tablespace. By using this option we export only the metadata of the tablespace.
Process:
quit
EOF
We have 3 positional parameters in the above file, so we need to pass 3 parameters:
the first is ORACLE_SID,
the second is table1,
the third is table2.
[app@linux6 ~]$ ./expscript app scott.emp ram.kk
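A hedged sketch of how the three positional parameters reach the script: expscript's real body is not shown in the notes, so the echo below stands in for the actual export call, and the function name simply reuses the script name from the example.

```shell
# Hedged sketch: the three positional parameters described above.
# $1 = ORACLE_SID, $2 = first table, $3 = second table.
expscript() {
  sid="$1"
  table1="$2"
  table2="$3"
  # a real script would run: exp ... tables=$table1,$table2
  echo "export tables=$table1,$table2 from SID $sid"
}

expscript app scott.emp ram.kk
```

Because the parameters are positional, the order on the command line must match the order the script reads them in.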
Block Size:
The block size for data blocks is set at the time of DB creation. We can also maintain a database with datablocks having multiple block sizes.
If my DB is made with 8k blocks, the DB cache holds 8k blocks only. In order to use 2k, 4k (etc.) blocks we need to add the below parameter in the init.ora file:
db_2k_cache_size=50m (2k blocks)
This statement allots 50m for 2k blocks in the db cache; it adds additional space for 2k blocks in the db cache.
db_4k_cache_size — for 4k blocks
db_8k_cache_size — for 8k blocks
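To see what db_2k_cache_size=50m buys, a quick arithmetic sketch (sizes taken from the notes above): a 50 MB cache of 2K blocks holds 50m/2k buffers.

```shell
# Arithmetic sketch: how many 2K buffers fit in a 50 MB db_2k_cache_size.
cache_bytes=$((50 * 1024 * 1024))   # 50m cache
block_bytes=$((2 * 1024))           # 2k block size
buffers=$((cache_bytes / block_bytes))
echo "$buffers buffers of 2k"       # prints: 25600 buffers of 2k
```

The same division shows the trade-off mentioned below: for a fixed cache size, a bigger block size means fewer buffers, so each flush displaces more data.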
After adding the above parameters we can create a TS with a different block size as below:
Sql> create tablespace ts001 datafile '/oraAPP/app/appdata/ts001.dbf'
size 10m blocksize 2k;
The data in the db cache is flushed using the LRU (least recently used) algorithm.
The advantage of a bigger block size is that it retrieves more data at a time.
The disadvantage of a bigger block size is that more data is flushed from the db cache.
Q) What is the package which validates the username/password when we use export/import?
dbms_plugts.checkuser
Q) What is the file which is used to read the values which are required for the instance in a pfile?
The init (init<SID>.ora) file.
Ignore:-
It ignores create errors (default N).
Fromuser:-
It indicates the list of owner usernames.
Touser:-
It indicates the list of target usernames.
Compile:-
It compiles procedures, packages and functions (default Y).
Datafiles:-
Datafiles to be transported into the database.
Syn:-
exp system/manager filesize=50G file=a.dmp,b.dmp full=y
imp system/manager filesize=50G file=a.dmp,b.dmp full=y
The order of files in exp/imp must be the same.
Usage:-
Imagine a table contains 10 extents. Many of the blocks in those extents are not completely filled. Even if we remove rows from an extent, oracle can't refill that freed space with data, so more space is wasted. And while retrieving data from 10 extents, it takes more time.
Disadvantages:-
Options of Export:-
Userid:- It indicates username/password.
Buffer:- It indicates the size of the data buffer, ie, how much data can be held at a time in the buffer.
File:- It indicates the output file (default expdat.dmp).
Compress:- Default value: Y.
By using this option, all extents will be merged into a single bigger extent while importing.
Advantages:-
• Defragmentation occurs
• All extents will be compressed into an individual bigger extent
Grants:- It will export grants (Y).
Indexes:- Export indexes (Y).
Direct:- It is used for direct path export.
⇒ Now it will prompt us for username, password (of the schema we wish to back up), dump file (default name expdat.dmp), buffer size (4096 [default]) etc.
⇒ It backs up the structure, indexes and constraints of the table also.
⇒ It will export grants, table data and extents by default.
Non-Interactive Mode:-
Instead of answering the interactive prompts, you may use a parameter file where the parameters are stored.
Make all inputs in the file.
If the data is exported on one system and imported on another, imp must be the newer version. If something needs to be exported from 10g into 9i, it must be exported with the 9i exp.
In order to use exp/imp, the catexp.sql script must be run. It is called by catalog.sql.
The utilities used for export and import are exp and imp.
Exp: It will scan and read the information of objects from the database and copy it into a dump file at the o/s level.
Imp: It will scan and read the information of the dump file and copy it into the database.
By using export and import we can take backups at the following levels:
• object level (table level)
• database level
• user level
• tablespace level
LOGICAL BACKUP
A backup which is taken when the database is up and running is said to be a logical backup.
Backing up one or more objects of the database is said to be a logical backup.
By using logical backup also, we can take a full backup of the database.
The files which have been created by the export utility can only be read by import.
It is a prerequisite that oraenv (or coraenv) was executed before you export or import data.
Go to the udump location and convert the trace file from raw format to readable format by using tkprof.