2 ORACLE ARCHITECTURE
  2.1 Physical Structure
    2.1.1 Data Files
    2.1.2 Redo-log Files
    2.1.3 Control Files
  2.2 Logical Structure
    2.2.1 Tablespace
    2.2.2 Schema Objects
3 INSTANCE
  3.1 System Global Area (SGA)
    3.1.1 Automatic Shared Memory Management
    3.1.2 SGA_TARGET Initialization Parameter
    3.1.3 Buffer Cache
    3.1.4 Redo-log Buffers
    3.1.5 Shared Pool
    3.1.6 Library Cache
    3.1.7 Data Dictionary
    3.1.8 Streams Pool
  3.2 PGA - Program Global Area
    3.2.1 Background Processes
4 …
  4.1 …
  4.2 Default Permanent Tablespace
    4.2.1 Pre-requisites for Creating a Database
    4.2.2 Steps to Create a Database
  4.3 Data Dictionary Views You Can Query
5 TABLESPACE MANAGEMENT
  5.1 Additional Tablespaces
  5.2 Advantages of Having Different Tablespaces
  5.3 Types of Tablespaces
  5.4 Bigfile Tablespace
  5.5 Temporary Tablespace Groups
    5.5.1 Examples
  5.6 Data Dictionary Views You Can Query
6 STORAGE PARAMETERS
  6.1 Automatic Segment Space Management
  6.2 Data Dictionary Views
7 MANAGING ROLLBACK SEGMENTS
  7.1 Data Dictionary Views
  7.2 Pending Offline
8 UNDO MANAGEMENT
  8.1 Overview of Undo Management
  8.2 Undo Retention
  8.3 Steps to Create and Maintain Undo Tablespace
  8.4 Retention Guarantee
  8.5 Data Dictionary Views
9 MANAGING REDOLOG FILES
  9.1 Introduction
  9.2 Mirrored Redo Logs
  9.3 Log Switches
  9.4 Data Dictionary Views
10 MANAGING CONTROL FILES
  10.1 Introduction
  10.2 Managing the Size of the Control File
  10.3 Data Dictionary Views
11 ARCHIVED REDO LOG FILES
  11.1 Introduction
  11.2 To Convert the Database into ARCHIVELOG Mode
  11.3 Data Dictionary Views
12 ORACLE MANAGED FILES
  12.1 Introduction
  12.2 Main Benefits of the Oracle Managed Files
    12.2.1 Scenario 1: Add Oracle Managed Files to an Existing Database
    12.2.2 Scenario 2: CREATE DATABASE: Using Oracle Managed Files
13 MANAGING USERS AND RESOURCES
  13.1 Managing Database Users
  13.2 Creating Users
  13.3 Managing Resources with Profiles
    13.3.1 To Enable the Resource Limit
    13.3.2 To Create and Alter a Profile
    13.3.3 To Drop a Profile
  13.4 Session and User Licensing
  13.5 User Authentication
    13.5.1 Password File Authentication
  13.6 Viewing Information About Database Users and Profiles
14 MANAGING USER PRIVILEGES AND ROLES
  14.1 Identifying User Privileges
  14.2 Categories of Privileges
    14.2.1 System Privileges
    14.2.2 Object Privileges
  14.3 Managing User Roles
    14.3.1 System-defined Roles
    14.3.2 Creating, Altering, Dropping, Granting and Revoking Roles
    14.3.3 Creating a Profile
    14.3.4 Listing Privilege and Role Information
15 AUDITING
  15.1 Introduction
  15.2 Initialization Parameters for Auditing
    15.2.1 Creating and Deleting Database Trail Views
  15.3 Types of Auditing
    15.3.1 Auditing Successful and Unsuccessful Statement Execution
    15.3.2 Auditing BY SESSION versus BY ACCESS
  15.4 Privilege Level of Auditing
  15.5 Object Level Auditing
  15.6 Statement Auditing
  15.7 Data Dictionary Views to Query
16 MANAGING ORACLE NETWORKING USING NET MANAGER
  16.1 Introduction
  16.2 Configuring listener.ora and tnsnames.ora
  16.3 Database Links
  16.4 Global Names
  16.5 Materialized Views
17 MANAGING EXPORTS AND IMPORTS
  17.1 Types of Backups
    17.1.1 Export and Import
  17.2 Main Tasks of Export and Import
  17.3 Incremental, Cumulative and Complete Exports
  17.4 Import Parameters
  17.5 Importing Incremental, Cumulative and Complete Export Files
  17.6 Points to Remember Before Doing Export or Import
  17.7 Selecting Detailed Info About INCTYPE Exports
18 ORACLE DATA PUMP
  18.1 Introduction
  18.2 Benefits of Data Pump Export and Import
  18.3 Data Pump Export Modes
  18.4 Data Pump Import Modes
  18.5 Transport of Tablespaces Across Different Platforms
    18.5.1 Initiate a Job During Data Pump Export Operations and Restarting the Jobs
  18.6 Data Dictionary Views to Query
19 TUNING - ENHANCING THE PERFORMANCE OF DATABASE
  19.1 Introduction
  19.2 Goals for Tuning
  19.3 Tuning Process Steps
    19.3.1 Tuning Memory Allocation
    19.3.2 Tuning I/O
    19.3.3 Tuning Contention
    19.3.4 Memory Tuning
    19.3.5 Striping
    19.3.6 Parallel Query Option
    19.3.7 Table Cache
  19.4 Online Segment Shrinking
20 MEMORY MANAGEMENT
  20.1 Introduction
  20.2 Dynamic System Global Area (SGA)
  20.3 Buffer Cache Advisory
21 OPTIMIZATION
  21.1 Introduction
  21.2 Execution Plan Steps
  21.3 Types of Operations
  21.4 The trcsess Utility
    21.4.1 Sample Output of trcsess
  21.5 Active Session History Reports
    21.5.1 Running the ashrpt.sql Report
  21.6 Automatic Workload Repository Reports
    21.6.1 Running the awrrpt.sql Report
    21.6.2 Running the awrrpti.sql Report
22 MANAGING SHARED SERVERS
  22.1 Introduction
    22.1.1 Client Process
    22.1.2 Server Process
  22.2 Using Dedicated Server Processes
  22.3 Using Shared Server Process
  22.4 Simplified Shared Server Configuration
    22.4.1 To Check Whether Shared Server Is Working
23 PARTITIONED TABLES AND INDEXES
  23.1 Managing Partitioned Tables
  23.2 Advantages of Partitioning Tables
  23.3 Types of Partitions
    23.3.1 Range Partitioning
    23.3.2 Hash Partitioning
    23.3.3 List Partitioning
    23.3.4 Composite Range-Hash Partitioning Tables
    23.3.5 Composite Range-List Partitioned Tables
  23.4 Managing Partitioned Indexes
    23.4.1 Types of Indexes
  23.5 Data Dictionary Views
24 MANAGING BACKUPS AND RECOVERY
  24.1 Cold Backup
  24.2 Hot Backup
    24.2.1 Whole Database Backup
    24.2.2 Hot Backup at Tablespace Level
  24.3 Online Backup Mode or Entire Database Backup
  24.4 Recovery
    24.4.1 Block Level
    24.4.2 Thread Recovery
    24.4.3 Media Recovery
25 RECOVERY MANAGER
  25.1 Introduction
    25.1.1 Costs and Benefits When Using a Recovery Catalog
    25.1.2 Setting Up the Recovery Catalog Schema
  25.2 RMAN Commands
    25.2.1 Allocate Channel
    25.2.2 BACKUP
    25.2.3 Create Script
    25.2.4 Delete Script
    25.2.5 RESTORE
    25.2.6 RECOVER
    25.2.7 Release Channel
    25.2.8 Show
    25.2.9 Determining Datafile Backup Status
    25.2.10 Propagating Information from the Control File
  25.3 Improving Incremental Backup Performance: Change Tracking
    25.3.1 Enabling and Disabling Change Tracking
  25.4 Types of Recovery Manager Backups
    25.4.1 Full
    25.4.2 Incremental
    25.4.3 Incremental Backup Sets
  25.5 Compressed Backups
  25.6 Configuring Compressed Backup Sets as Default for Tape or Disk
  25.7 Image Copies
  25.8 Drop Database in RMAN
    25.8.1 Restrictions
  25.9 Incrementally Updated Backups: Rolling Forward Image Copy Backups
  25.10 RMAN Recovery Catalog Views
26 FLASHBACK TECHNOLOGY
  26.1 The Flash Recovery Area
    26.1.1 Files That Can Be Stored in the Flash Recovery Area
    26.1.2 Initialization Parameters for the Flash Recovery Area
  26.2 Oracle Flashback Query
  26.3 Oracle Flashback Table
  26.4 Oracle Flashback Drop - Undo a DROP TABLE Operation
    26.4.1 Recycle Bin
    26.4.2 Performing Flashback Drop on Tables in the Recycle Bin
    26.4.3 Purging Objects from the Recycle Bin
  26.5 Flashback Database
    26.5.1 To Enable Flashback Database
    26.5.2 To Perform the FLASHBACK DATABASE
    26.5.3 To Disable Flashback Database
27 WORKING WITH SQL*LOADER
28 WORKING WITH RAW DEVICES
  28.1 Advantages
  28.2 Disadvantages
  28.3 Creating a Database Using Raw Devices
  28.4 Backup and Recovery
    28.4.1 Backup
    28.4.2 Recovery
29 DBMS PACKAGES
  29.1 DBMS_LOCK Package
    29.1.1 Naming Locks
    29.1.2 Requesting a Lock
    29.1.3 Converting a Lock
  29.2 DBMS_PIPE
  29.3 DBMS_ALERT Package - Creating Alerts
  29.4 DBMS_JOBS Package
30 INSTALLATION OF ORACLE10G ON RED HAT ENTERPRISE LINUX 4
31 INSTALLATION OF ORACLE10G ON SUN SPARC (SOLARIS 9)
32 INDEX
2 ORACLE ARCHITECTURE
2.1 Physical Structure

The physical structure of a database comprises the following files:
- Data files
- Redo-log files
- Control files
- Archive-log files
- Parameter file or server parameter file
- Alert log and trace files

2.1.1 Data Files
An Oracle database has one or more physical data files that hold the actual data of all logical structures such as tables and indexes. A data file can be associated with only one database and only one tablespace.

2.1.2 Redo-log Files
The primary function of the redo logs is to record all changes made to the database before they are written to the data files. These files can be mirrored and are used in performing recovery operations.

2.1.3 Control Files
The control files record control information about all files within the database. They are used to maintain internal consistency and play a vital role in recovery operations. These files can also be mirrored. Oracle modifies the control files automatically; users cannot edit them. A control file is divided into five parts:
- Information about the database: the total number of data files, redo logs, and threads that are enabled (in a parallel server).
- Information about each log group and the current log group that LGWR is writing.
- Redo logs: each member of the log group, its size, path, full name, log sequence number, etc.
- Data files: their size, name, path, status, etc.
- Log history of the database.

2.2 Logical Structure
The logical structure comprises tablespaces and schema objects such as tables, indexes, and views.

2.2.1 Tablespace
A logical area of storage in a database that directly corresponds to one or more physical data files.

2.2.2 Schema Objects
A schema is a logical collection of the database objects of a user. Examples: tables, views, synonyms, sequences, indexes, clusters, database triggers, procedures, functions, packages, database links, etc.
The relationship between a database, tablespace, and data file can be stated as follows:
- Each database is logically divided into one or more tablespaces.
- One or more data files are explicitly created for each tablespace to physically store the data of all logical structures in that tablespace.
- The combined size of a tablespace's data files is the total storage capacity of the tablespace.
- The combined storage capacity of a database's tablespaces is the total storage capacity of the database.
3 INSTANCE
A System Global Area (SGA) and the Oracle background processes constitute an instance.

3.1 System Global Area (SGA)
The SGA is a shared memory region allocated by Oracle that contains data and control information for an Oracle instance. An SGA comprises the buffer cache, the redo-log buffers, and the shared pool area.

3.1.1 Automatic Shared Memory Management
The DBA can simply specify the total amount of SGA memory available to an instance using the SGA_TARGET initialization parameter, and the Oracle Database automatically distributes this memory among the various sub-components to ensure the most effective memory utilization. When automatic SGA memory management is enabled, the sizes of the different SGA components are flexible and can adapt to the needs of a workload without requiring any additional configuration. The database automatically distributes the available memory among the various components as required, allowing the system to maximize the use of all available SGA memory.

3.1.2 SGA_TARGET Initialization Parameter
The SGA_TARGET initialization parameter reflects the total size of the SGA and includes memory for the following components:
1. Fixed SGA
2. The log buffer
3. The buffer cache
4. The shared pool
5. The large pool (optional)
6. The Java pool (optional)
7. The Streams pool (optional)
It is significant that SGA_TARGET includes the entire memory for the SGA, in contrast to earlier releases, in which memory for the internal and fixed SGA was added to the sum of the configured SGA memory parameters. Thus, SGA_TARGET gives you precise control over the size of the shared memory region allocated by the database. If SGA_TARGET is set to a value greater than SGA_MAX_SIZE at startup, then the latter is bumped up to accommodate SGA_TARGET. After startup, SGA_TARGET can be decreased or increased dynamically; however, it cannot exceed the value of SGA_MAX_SIZE that was computed at startup. Set the value of SGA_TARGET either by editing the text initialization parameter file and restarting the database, or by issuing the following statement:
SQL> ALTER SYSTEM SET SGA_TARGET = <value>;
3.1.3 Buffer Cache
The buffer cache stores the most recently used blocks of data. It can also hold modified data that has not yet been permanently written to disk. When a row in a table is updated, the foreground server process reads the data-file information from disk into the buffer cache and modifies the data block in server memory. If other users request data from disk and no free data block is available in the buffer cache, DBWR writes blocks from the buffer cache to the data files, choosing them with the LRU (Least Recently Used) mechanism.

3.1.4 Redo-log Buffers
The redo-log buffer stores redo entries, a log of changes made to the database.
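The LRU replacement described for the buffer cache above can be sketched in a few lines of Python. This is a toy illustration of least-recently-used eviction only, not a model of Oracle's actual cache internals; the class and names are invented for the example.

```python
from collections import OrderedDict

class BufferCacheSketch:
    """Toy LRU buffer cache: when no free buffer remains, the least
    recently used block is flushed to 'disk' to make room."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffers = OrderedDict()   # block_id -> data, oldest first
        self.written_to_disk = []      # blocks the sketch "DBWR" flushed

    def read_block(self, block_id, data):
        if block_id in self.buffers:
            self.buffers.move_to_end(block_id)   # now most recently used
        else:
            if len(self.buffers) >= self.capacity:
                # no free buffer: evict the least recently used block
                victim, _ = self.buffers.popitem(last=False)
                self.written_to_disk.append(victim)
            self.buffers[block_id] = data
        return self.buffers[block_id]

cache = BufferCacheSketch(capacity=2)
cache.read_block("b1", "row data 1")
cache.read_block("b2", "row data 2")
cache.read_block("b1", "row data 1")   # touch b1 again
cache.read_block("b3", "row data 3")   # evicts b2, the least recently used
print(cache.written_to_disk)           # ['b2']
```

Note that touching "b1" a second time is what saves it from eviction: recency, not insertion order, decides the victim.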
3.1.5 Shared Pool
The shared pool comprises the library cache and the dictionary cache. The library cache stores and shares SQL statements and PL/SQL procedures in memory.

3.1.6 Library Cache
When a SQL statement is issued, Oracle parses the statement and determines the most efficient execution plan for it. Oracle then caches the statement in the shared pool; if another user issues the same statement, Oracle shares the statement already in memory rather than repeating the same steps.

3.1.7 Data Dictionary
Oracle continuously requests and updates the information in the data dictionary of the database. To maximize the performance of the system's internal operation, the data dictionary cache holds the dictionary information.

3.1.8 Streams Pool
Oracle Streams enables information sharing. Using Oracle Streams, you can share data and events in a stream. The stream can propagate information within a database or from one database to another, routing specified information to specified destinations. The result is a feature that provides greater functionality and flexibility than traditional solutions for capturing and managing events and sharing them with other databases and applications. Streams provides the capabilities needed to build and operate distributed enterprises and applications, data warehouses, and high-availability solutions. To configure the Streams pool, specify the size of the pool in bytes using the STREAMS_POOL_SIZE initialization parameter.

3.2 PGA - Program Global Area
A memory buffer that contains data and holds control information for a server process.

3.2.1 Background Processes

DBWR - Database Writer
LGWR - Log Writer
CKPT - Checkpoint
SMON - System Monitor
PMON - Process Monitor
RECO - Recoverer
Dnnn - Dispatcher
CJQ  - Coordinator Job Queue Process
Snnn - Shared Server Process
ARCn - Archiver
QMNn - Queue Monitor Process
MMNL - Manageability Monitor Light Process
MMON - Manageability Monitor Process
MMAN - Memory Manager Process
RBAL - Rebalance Activity Process
3.2.1.1 DBWR
DBWR writes blocks from the buffer cache to the appropriate data files. It writes a block in memory back to disk only when a time-out occurs, when a foreground server process does not have enough free buffers in memory, or when Oracle performs a checkpoint.

3.2.1.2 LGWR
LGWR writes redo-log entries generated in the redo-log buffer to an online redo-log file. As a transaction is carried out, Oracle creates small records called redo entries that contain just enough information to regenerate the changes made by the transaction. Oracle temporarily stores a transaction's redo entries in the server's redo-log buffer, a small memory area that caches redo entries for all system transactions. Oracle does not consider a transaction committed until LGWR successfully writes the transaction's redo entries and a commit record to the transaction log.
LGWR writes:
- When the log buffer is full
- When a transaction is committed
- Every three seconds
- When the buffer is 1/3 full

3.2.1.3 CKPT
Responsible for signaling DBWR at checkpoints and updating all the data files and control files of the database. Its duty can be performed by LGWR. The purpose of a checkpoint is to establish mileposts of transaction consistency on disk. A checkpoint indicates how much of the transaction log's redo entries Oracle must apply if a server crash occurs and database recovery is necessary.

3.2.1.4 SMON
Performs:
- Instance recovery at instance startup in a multiple-instance configuration
- Recovery of instances that have failed in a cluster environment (RAC)
- Cleanup of temporary segments that are no longer in use
- Recovery of dead transactions skipped during crash and instance recovery
- Coalescing of free extents within the database, to make free space contiguous and easy to allocate

3.2.1.5 PMON
This process performs process recovery when a user process fails. It is responsible for cleaning up the cache and freeing resources used by the process. In the MTS environment, it checks on dispatcher and shared server processes and restarts them in case of failure.

3.2.1.6 ARCn
This process copies filled online redo-log files to the specified destination. It is active only when the database is running in archive-log mode. The sequential set of archived transaction-log files that ARCn creates is called the archived transaction log.

3.2.1.7 RECO
This process resolves distributed transactions that are pending because of a network or system failure in a distributed database. At time intervals, the local RECO attempts to connect to the remote database and automatically complete the commit or rollback of the local portion of any pending distributed transactions.

3.2.1.8 Dnnn
Responsible for routing requests from connected user processes to available shared server processes and returning responses back to the appropriate user processes.
3.2.1.9 CJQ
This is the coordinator of the job queue processes for an instance. It monitors the JOB$ table.

3.2.1.10 QMNn
The queue monitor process is an optional background process for Oracle Streams Advanced Queuing that monitors the message queues. You can configure up to 10 queue monitor processes. These processes, like the job queue processes, differ from other Oracle background processes in that the failure of one does not cause the instance to fail.
3.2.1.11 MMON
MMON performs various manageability-related background tasks, for example:
- Issuing alerts whenever a given metric violates its threshold value
- Taking snapshots by spawning additional processes (MMON slaves)
- Capturing statistics for SQL objects that have recently been modified

3.2.1.12 MMNL
MMNL performs frequent, lightweight manageability-related tasks, such as session history capture and metrics computation.

3.2.1.13 MMAN
MMAN is used for internal database tasks.

3.2.1.14 RBAL
RBAL coordinates rebalance activity for disk groups in an Automatic Storage Management instance. It performs a global open on Automatic Storage Management disks.

3.2.1.15 ORBn
ORBn performs the actual rebalance data-extent movement in an Automatic Storage Management instance. There can be many of these at a time, named ORB0, ORB1, and so forth.

3.2.1.16 SCN
A system change number (SCN) defines a committed version of a database at a precise moment in time. When a transaction commits, it is assigned an SCN that uniquely identifies the transaction. SCNs are primarily used in recovery operations. For example: if transaction A updates and commits, it may be assigned an SCN of 30. The next transaction, B, committing five minutes later, will receive an SCN of 31 or greater. If B receives, say, 37, then Oracle has assigned the SCNs 30 and 37 to the two transactions A and B; SCNs need not be sequential. SCNs play an important role in distributed databases.
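The SCN example above (A gets 30, B gets 37) amounts to drawing values from a shared, ever-increasing counter: a later commit always sees a higher value, but not necessarily the next one, because other activity also consumes SCNs. The following Python sketch is purely illustrative; the counter and the "other activity" loop are invented for the example.

```python
import itertools

# Hypothetical SCN generator: each commit draws the next value from a
# shared, strictly increasing counter.  Other database activity also
# consumes values, so committed transactions see increasing but not
# necessarily consecutive SCNs.
scn_counter = itertools.count(30)

scn_a = next(scn_counter)          # transaction A commits -> SCN 30
for _ in range(6):                 # other activity consumes SCNs 31..36
    next(scn_counter)
scn_b = next(scn_counter)          # transaction B commits -> SCN 37

assert scn_b > scn_a               # a later commit always gets a higher SCN
print(scn_a, scn_b)                # 30 37
```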
When a distributed transaction is committed, the highest SCN of all the database instances involved is given to the distributed transaction, so there is never a problem with read consistency. At times of high transaction volume, multiple transactions may commit at once, and the LGWR process may then write multiple commit records to the online redo-log files in a single write. This is known as a group commit.

3.2.1.17 Low and High SCN
When a redo-log file fills up, Oracle switches to the next redo-log file. The new redo-log file is marked with a low SCN, which is one greater than the high SCN of the previous log. The low SCN represents the lowest change number stored in that log file. Similarly, when the log file is closed, the high SCN mark is set to the highest SCN recorded in the log file. You can get more information from V$LOG_HISTORY.
4 DATABASE CREATION
Database creation prepares several operating system files so that they work together as an Oracle database. A database needs to be created only once, regardless of how many data files it has or how many instances access it.

4.1 SYSAUX Tablespace
The SYSAUX tablespace is always created at database creation and serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default tablespace for many Oracle Database features and products that previously required their own tablespaces, it reduces the number of tablespaces required by the database and that you, as a DBA, must maintain. Functionality that previously used the SYSTEM tablespace can now use the SYSAUX tablespace, reducing the load on the SYSTEM tablespace. The SYSAUX tablespace is created with a mandatory set of attributes (permanent, read-write, locally managed, with automatic segment space management); you cannot alter these attributes with an ALTER TABLESPACE statement, and you cannot drop or rename the SYSAUX tablespace. The size of the SYSAUX tablespace is determined by the size of the database components that occupy it; its space requirements will increase after the database is fully deployed, depending on the nature of its use and workload.

4.2 Default Permanent Tablespace
The DEFAULT TABLESPACE clause of the CREATE DATABASE statement specifies a default permanent tablespace for the database. Oracle Database assigns to this tablespace any non-SYSTEM users for whom you do not explicitly specify a different permanent tablespace. If you do not specify this clause, the SYSTEM tablespace is the default permanent tablespace for non-SYSTEM users. Oracle strongly recommends that you create a default permanent tablespace.

4.2.1 Prerequisites for Creating a Database
- Sufficient primary memory must be available to start the Oracle instance
- Sufficient disk space must be available on the computer

4.2.2 Steps to Create a Database
$ vi .bash_profile
export ORACLE_SID=DEMO          # or another database name
export ORACLE_HOME=/oraeng/app/oracle/product/10.2.0
export PATH=$ORACLE_HOME/bin:$PATH
:wq
$ . .bash_profile               # source .bash_profile
Create the initialization (parameter) file by copying the sample "init.ora" to "init<Oracle_SID>.ora". The name of the file can be anything, but it then has to be specified explicitly at database startup.
$ cd $ORACLE_HOME/dbs $ cp init.ora initDEMO.ora (same as your ORACLE_SID=DEMO)
Make the necessary changes in your "init<Oracle_SID>.ora" file; for example, if db_name=DEFAULT, change it to db_name=DEMO.
$ vi initDEMO.ora
db_name=DEMO
instance_name=DEMO
db_cache_size=16m
control_files=(/disk1/oradata/DEMO/contl.ctl, /disk2/oradata/DEMO/cont2.ctl)
background_dump_dest=/disk1/oradata/DEMO/bdump
user_dump_dest=/disk1/oradata/DEMO/udump
core_dump_dest=/disk1/oradata/DEMO/cdump
#Parameters to be passed for automatic undo management (optional)
undo_management=auto
undo_tablespace=UNDOTBS
undo_retention=900
:wq
Create the necessary directories to place database files, redolog files, control files and the dump_dest directories.
$ cd /disk1/oradata
$ mkdir DEMO
$ cd DEMO
$ mkdir bdump cdump udump    # Create directories as specified in init<Oracle_SID>.ora
$ cd /disk2/oradata
$ mkdir DEMO
$ cd
Write the CREATE DATABASE command shown below into a script file (here 'cr8demo.sql', created with vi), then execute the script to create the database.
$ vi cr8demo.sql
CREATE DATABASE demo
DATAFILE '/disk1/oradata/DEMO/system01.dbf' SIZE 170M
SYSAUX DATAFILE '/disk1/oradata/DEMO/sysaux01.dbf' AUTOEXTEND ON
UNDO TABLESPACE undotbs DATAFILE '/disk1/oradata/DEMO/undotbs01.dbf' SIZE 20M
DEFAULT TEMPORARY TABLESPACE temp TEMPFILE '/disk1/oradata/DEMO/temp01.dbf' SIZE 10M
DEFAULT TABLESPACE user_data DATAFILE '/disk1/oradata/DEMO/user_data1.dbf' SIZE 20M
LOGFILE
GROUP 1 ('/disk1/oradata/DEMO/redolog1a.log', '/disk2/oradata/DEMO/redolog1b.log') SIZE 4M,
GROUP 2 ('/disk1/oradata/DEMO/redolog2a.log', '/disk2/oradata/DEMO/redolog2b.log') SIZE 4M
CONTROLFILE REUSE;
:wq
Note:
Defining the CONTROLFILE clause is optional, because we specify the control files in the init.ora file. Moreover, REUSE is used only if the respective file already exists with the same size.
When you execute this statement, Oracle performs the following operations:
- Creates the control file(s) for the database
- Creates the redo-log files for the database
- Creates the SYSTEM tablespace and the SYSTEM rollback segment
- Creates the SYSAUX tablespace
- Creates the default permanent tablespace USER_DATA
- Creates the UNDO tablespace and the default temporary tablespace TEMP
- Creates the data dictionary
- Creates the users SYS and SYSTEM
- Mounts and opens the database for use

After the above statement is processed, the CATALOG and CATPROC scripts, located in the "$ORACLE_HOME/rdbms/admin" directory, must be executed as user SYS. The commands are as follows:
SQL> @?/rdbms/admin/catalog.sql     -- run as user SYS (? expands to ORACLE_HOME)
SQL> @?/rdbms/admin/catproc.sql     -- run as user SYS
4.3 Data Dictionary Views You Can Query
V$SGA
V$INSTANCE
V$DATABASE
V$PROCESS
V$SYSAUX_OCCUPANTS
5 TABLESPACE MANAGEMENT
5.1 Additional Tablespaces
A database is divided into one or more logical storage units called tablespaces. A database administrator can use tablespaces to do the following:
- Control user access by making a tablespace read-only or read-write
- Control database size by adding or dropping tablespaces
- Assign specific space quotas to database users
- Control availability of data by taking individual tablespaces online or offline
- Perform online database backup or recovery operations
- Allocate data storage across devices to improve performance
Remember, for best performance it is recommended to place tables, indexes, rollback segments, undo segments, and temporary segments in different tablespaces on different hard drives.

5.2 Advantages of Having Different Tablespaces
- Separate user data from data dictionary data (which is in the SYSTEM tablespace)
- Separate one application's data from another's
- Store different tablespaces' data files on separate disk drives to reduce I/O contention
- Separate rollback segment data or undo tablespace data from user data
- Take individual tablespaces offline while others remain online
- Reserve a tablespace for a particular type of database use, such as high update activity, read-only activity, or temporary storage segments
- Back up individual tablespaces while leaving the rest available for user access

5.3 Types of Tablespaces
- Read/write
- Read-only
- Temporary
- Undo
These types can be specified during or after tablespace creation (the default is permanent). A database DEMO [created earlier] requires four (4) tablespaces. They can be created as follows:
SQL> CREATE TABLESPACE user1_demo
     DATAFILE '/disk1/oradata/DEMO/user1_demo01.dbf' SIZE 10M
     EXTENT MANAGEMENT DICTIONARY;
SQL> CREATE TABLESPACE user2_demo
     DATAFILE '/disk1/oradata/DEMO/user2_demo01.dbf' SIZE 10M
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 20K ONLINE;
SQL> CREATE TABLESPACE index_demo
     DATAFILE '/disk1/oradata/DEMO/index_demo01.dbf' REUSE;
5.4 BIGFILE Tablespace
A BIGFILE tablespace is a tablespace with a single, but very large (up to 4G blocks), datafile.
1. To create a BIGFILE tablespace:
SQL> CREATE BIGFILE TABLESPACE bigtbs
     DATAFILE '/disk1/oradata/DEMO/bigtbs01.dbf' SIZE 10M;
5.5
3. You can remove a Tablespace from a group as shown in the following statements:
SQL> ALTER TABLESPACE temp3 TABLESPACE GROUP '';
7. To change the name of a datafile in a tablespace. This can be done in two ways.
First method:
a. Take the desired tablespace offline:
SQL> ALTER TABLESPACE user1_demo OFFLINE;
Note: You cannot rename the SYSTEM or SYSAUX tablespaces, or tablespaces that are OFFLINE.
9. To drop a tablespace:
SQL> DROP TABLESPACE user1_demo INCLUDING CONTENTS;
10. A new option is added to DROP TABLESPACE command to delete the OS files associated with the tablespace:
SQL> DROP TABLESPACE user2_demo INCLUDING CONTENTS AND DATAFILES;
11. To coalesce a tablespace - smaller contiguous free extents can be coalesced into one larger free extent. By default, the SMON process coalesces the free extents of tablespaces in the background.
SQL> ALTER TABLESPACE user1_demo COALESCE;
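What coalescing achieves can be pictured with a small Python sketch that merges adjacent free extents into larger ones. The extent list, the (start, size) layout, and the function are all invented for illustration.

```python
def coalesce_free_extents(extents):
    """Merge adjacent free extents, given as (start, size) pairs, into
    larger contiguous extents -- a sketch of tablespace coalescing."""
    merged = []
    for start, size in sorted(extents):
        if merged and merged[-1][0] + merged[-1][1] == start:
            merged[-1][1] += size          # contiguous: extend previous extent
        else:
            merged.append([start, size])   # gap: start a new free extent
    return [tuple(e) for e in merged]

# Three free extents, the first two contiguous (100 + 25 == 125):
print(coalesce_free_extents([(100, 25), (125, 25), (400, 50)]))
# [(100, 50), (400, 50)]
```

After the merge, the tablespace in this sketch can satisfy a 50-unit allocation from the first extent, which neither 25-unit fragment could satisfy alone.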
Note:
- You cannot drop a datafile containing data; the datafile should be empty.
- If you intend to use multiple block sizes in your database, you must have DB_CACHE_SIZE and at least one DB_nK_CACHE_SIZE parameter set. Oracle assigns an appropriate default value to the DB_CACHE_SIZE parameter, but the DB_nK_CACHE_SIZE parameters default to 0, and no additional block-size caches are configured. To configure multiple block sizes, edit the init<SID>.ora file with the following parameters:
$ vi initDEMO.ora DB_2K_CACHE_SIZE=4M DB_4K_CACHE_SIZE=4M
Note: By default, when you create any tablespace other than SYSTEM, the extent management is LOCAL AUTOALLOCATE.
SQL> CREATE TABLESPACE user4k_demo
     DATAFILE '/disk1/oradata/DEMO/user4k_demo01.dbf' SIZE 10M
     EXTENT MANAGEMENT DICTIONARY BLOCKSIZE 2K;
SQL> CREATE TABLESPACE user8k_demo
     DATAFILE '/disk1/oradata/DEMO/user8k_demo01.dbf' SIZE 10M
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 20K BLOCKSIZE 4K ONLINE;
5.6 Data Dictionary Views You Can Query
DBA_TABLESPACES
DBA_TABLESPACE_GROUPS
DBA_DATAFILES
DBA_FREE_SPACE
V$TABLESPACE
V$DATAFILE
DATABASE_PROPERTIES
d. To get the details of Default Temporary Tablespace and Default Permanent Tablespace
SQL> SELECT property_name, property_value
     FROM DATABASE_PROPERTIES
     WHERE property_name LIKE '%DEFAULT%';
6 STORAGE PARAMETERS
Every tablespace has default storage parameters. To override the system defaults in that tablespace, a user can specify the parameters while creating objects. Though default system values are given in data blocks, users set values for these parameters in bytes; the abbreviations K and M can be used to indicate kilobytes and megabytes. The parameters are:

INITIAL - The size in bytes of the first extent allocated when a segment is created.
  Default: 5 data blocks; Minimum: 2 data blocks; Maximum: OS specific.

NEXT - The size of the next extent to be allocated for a segment. The second extent is equal to the original setting of NEXT. From the third extent onwards, NEXT is set to the previous size of NEXT multiplied by (1 + PCTINCREASE/100).
  Default: 5 data blocks; Minimum: 1 data block; Maximum: OS specific.

MAXEXTENTS - The total number of extents, including the first, that can ever be allocated for the segment.
  Default: dependent on the data block size and OS; Minimum: 1 (extent); Maximum: OS specific.

MINEXTENTS - The total number of extents to be allocated when the segment is created. This allows a large allocation of space at creation time, even if contiguous space is not available.
  Default: 1 (extent); Minimum: 1 (extent); Maximum: OS specific.
If MINEXTENTS is more than 1, then the specified number of incremental extents is allocated at creation time using INITIAL, NEXT, and PCTINCREASE.

PCTINCREASE - The percentage by which each incremental extent grows over the last incremental extent allocated for a segment. If PCTINCREASE is 0, all incremental extents are the same size. If it is greater than 0, then each time NEXT is calculated, it grows by PCTINCREASE. It cannot be negative, and it is specified as a percentage.
  Default: 50%; Minimum: 0%; Maximum: OS specific.
PCTINCREASE for rollback segments is always 0 and cannot be specified.
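The NEXT-extent growth rule (from the third extent on, NEXT is multiplied by 1 + PCTINCREASE/100) can be checked with a short calculation. This Python sketch ignores the rounding to whole blocks that the server performs, and the function name and sizes are illustrative only.

```python
def extent_sizes(initial, next_, pctincrease, n_extents):
    """Sketch of dictionary-managed extent sizing: extent 1 = INITIAL,
    extent 2 = NEXT, and from the third extent on each NEXT grows by
    (1 + PCTINCREASE/100).  Real servers round to whole blocks; this
    illustration does not."""
    sizes = [initial]
    current_next = next_
    for _ in range(n_extents - 1):
        sizes.append(current_next)
        current_next = current_next * (1 + pctincrease / 100)
    return sizes

# INITIAL 40K, NEXT 40K, PCTINCREASE 50 -> 40K, 40K, 60K, 90K
print(extent_sizes(40, 40, 50, 4))   # [40, 40, 60.0, 90.0]
# With PCTINCREASE 0, all incremental extents stay the same size
print(extent_sizes(40, 40, 0, 4))    # [40, 40, 40.0, 40.0]
```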
PCTFREE - Sets the percentage of a block to be reserved (kept free) for future updates. Once this limit is reached, the block is considered full and is no longer available for inserting new rows.

PCTUSED - Allows a block to be reconsidered for the insertion of new rows. When the percentage of a block in use falls below PCTUSED, whether through row deletions or updates reducing column storage, the block is again available for the insertion of new rows.

INITRANS - Reserves a pre-allocated amount of space for the initial number of transaction entries that access rows in the data block concurrently. Space is reserved in the header of all data blocks of the associated data or index segment. The default value is 1 for tables and 2 for clusters. As multiple transactions concurrently access the rows of the same data block, space is allocated for each transaction's entry in the block. Once the space reserved by INITRANS is depleted, space for additional transaction entries is allocated out of the free space in the block, if available. Once allocated, this space effectively becomes a permanent part of the block header.

MAXTRANS - Limits the number of transaction entries that can concurrently use data in a data block.
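The interplay of PCTFREE and PCTUSED described above is a simple hysteresis: a block stops taking inserts at one threshold and only starts taking them again after crossing a lower one. The following Python sketch is an invented illustration of that rule, not Oracle code.

```python
def accepts_inserts(used_pct, pctfree, pctused, currently_full):
    """Sketch of the PCTFREE/PCTUSED hysteresis for one data block.
    A block stops accepting inserts once usage reaches 100 - PCTFREE,
    and becomes available again only after usage drops below PCTUSED."""
    if not currently_full:
        # block keeps filling until only PCTFREE percent is left for updates
        return used_pct < 100 - pctfree
    # a full block is reconsidered only when usage falls below PCTUSED
    return used_pct < pctused

# PCTFREE 10, PCTUSED 40:
print(accepts_inserts(85, 10, 40, currently_full=False))  # True  (below 90%)
print(accepts_inserts(92, 10, 40, currently_full=True))   # False (still full)
print(accepts_inserts(35, 10, 40, currently_full=True))   # True  (fell below 40%)
```

The middle case shows why both thresholds exist: at 92% usage a full block is not reopened even though rows could physically fit, which avoids blocks oscillating between full and not-full on every small delete.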
6.1 Automatic Segment Space Management Locally managed tablespaces allow extents to be managed automatically by Oracle. Oracle 9i allows free and used space within segments stored in locally managed tablespaces to also be managed automatically. Using the SEGMENT SPACE MANAGEMENT clause of CREATE TABLESPACE you can specify AUTO or MANUAL to specify the type of segment space management Oracle will use.
SQL> CREATE TABLESPACE ts_auto
     DATAFILE '/disk1/oradata/DEMO/ts_auto01.dbf' SIZE 10M
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 20K
     SEGMENT SPACE MANAGEMENT AUTO;
Free lists have been the traditional method of managing free space within segments. Bitmaps, however, provide a simpler and more efficient way of managing segment space. They provide better space utilization and completely eliminate any need to specify and tune the PCTUSED, FREELISTS, and FREELIST GROUPS attributes for segments created in the database. If these attributes are specified, they are ignored. The keyword AUTO tells Oracle to use bitmaps to manage the free space within segments. To change the initial extent of a table:
SQL> CREATE TABLE junk (a NUMBER) STORAGE (INITIAL 40K);
SQL> SELECT * FROM USER_SEGMENTS WHERE segment_name = 'JUNK';
SQL> ALTER TABLE junk DEALLOCATE UNUSED KEEP 40K;
SQL> SELECT * FROM USER_SEGMENTS WHERE segment_name = 'JUNK';
Note: Checking USER_SEGMENTS, you will see that the allocated space is not decreased if the rows were removed with the DELETE command, because DELETE does not reset the high-water mark. If you still want to decrease it further, do:
SQL> TRUNCATE TABLE junk;
SQL> ALTER TABLE junk DEALLOCATE UNUSED KEEP 40K;
SQL> SELECT * FROM USER_SEGMENTS WHERE segment_name = 'JUNK';
6.2 Data Dictionary Views
DBA_SEGMENTS
DBA_EXTENTS
DBA_TABLES
DBA_INDEXES
DBA_TABLESPACES
DBA_DATA_FILES
DBA_FREE_SPACE
SUM shows the amount of free space in each tablespace, PIECES shows the degree of fragmentation in the datafiles of the tablespace (the number of free-space fragments), and MAXIMUM shows the largest contiguous area of free space.
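The three aggregates map directly onto sum, count, and max over the list of free extents. The following Python sketch uses made-up extent sizes purely to illustrate the relationship.

```python
# Free extents (in MB) for one tablespace, as DBA_FREE_SPACE might list
# them; the values are invented for illustration.
free_extents = [4, 12, 1, 8, 2]

total_free = sum(free_extents)   # SUM: total free space
pieces = len(free_extents)       # PIECES: number of free fragments
largest = max(free_extents)      # MAXIMUM: largest contiguous free area

print(total_free, pieces, largest)   # 27 5 12
```

Note that a segment needing a 15 MB extent would fail here despite 27 MB being free in total: MAXIMUM, not SUM, is what bounds the largest single allocation.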
This query is useful when you are going to create a new object, or you know that a segment is about to extend, and you want to make sure that there is enough space in the containing tablespace.
d. Listing the segment storage parameters:
SQL> SELECT segment_name, tablespace_name, initial_extent, next_extent,
     min_extents, max_extents
     FROM DBA_SEGMENTS
     WHERE segment_name = 'EMP';
1. To bring a rollback segment online or change its storage parameters, use the ALTER command as follows:
SQL> ALTER ROLLBACK SEGMENT <segment> [ONLINE|OFFLINE];
Note:
A rollback segment cannot be created in a locally managed tablespace. To create a rollback segment, the tablespace must be dictionary managed.
To bring rollback segments online automatically, set the ROLLBACK_SEGMENTS parameter in the init<ORACLE_SID>.ora parameter file.
7. To drop a rollback segment, take it offline first, then drop the segment:
SQL> ALTER ROLLBACK SEGMENT rbs1 OFFLINE; SQL> DROP ROLLBACK SEGMENT rbs1;
7.1
7.2 Pending Offline
When you take a rollback segment offline, it does not actually go offline until all active transactions in it have completed. Between the time you attempt to take it offline and the time it actually goes offline, its status in DBA_ROLLBACK_SEGS remains ONLINE, but it is not used for new transactions. To determine whether any rollback segments for an instance are in this state, use the following query:
SQL> SELECT name, xacts "Active Transactions", extents, wraps, rssize,
     hwmsize, optsize, status
     FROM V$ROLLNAME, V$ROLLSTAT
     WHERE V$ROLLNAME.usn = V$ROLLSTAT.usn;
Historically, Oracle has used rollback segments to store undo. Space management for these rollback segments has proven to be quite complex. Oracle now offers another method of storing undo that eliminates the complexities of managing rollback segment space and enables DBAs to control how long undo is retained before being overwritten. This method uses an undo tablespace. Note: You cannot use both methods in the same database instance.
8 UNDO MANAGEMENT
8.1 Overview of Undo Management
- An undo tablespace is organized as a uniform bitmapped tablespace. It is composed of one or more files containing undo segments.
- For each Oracle instance, you only have to allocate enough disk in an undo tablespace for the workload of that instance. You no longer need to decide on the number and sizes of rollback segments to create, or on how to assign transactions of different sizes strategically to individual rollback segments.
- Rollback segments are still used, but they are internally created and maintained, and are called undo segments. With automatic undo management, you cannot CREATE, DROP, or ALTER undo segments.
- Undo segments have the same structure as normal rollback segments, but they:
  o Support automatic creation
  o Use a modified allocation policy compared to Oracle8i
  o Support dynamic extent transfer
- SMON shrinks undo segments when needed.
- If you use the rollback segment method of managing undo space, you are operating in manual undo management mode. If you use the undo tablespace method, you are operating in automatic undo management mode. You determine the mode at instance startup with the UNDO_MANAGEMENT initialization parameter.
UNDO_MANAGEMENT = AUTO UNDO_TABLESPACE = UNDOTBS UNDO_RETENTION = 900
8.2 Undo Retention
Retention is specified in units of seconds. It is persistent and can survive system crashes. That is, undo generated before an instance crash is retained until its retention time has expired, even across an instance restart. Oracle Flashback features depend on the availability of older undo information. If the UNDO_RETENTION initialization parameter is not specified, the default of 900 seconds is used. The UNDO_RETENTION parameter can be set in the initialization parameter file or changed dynamically at any time using the ALTER SYSTEM command:
SQL> ALTER SYSTEM SET UNDO_RETENTION = 900;
When the instance starts up, Oracle automatically selects the first available undo tablespace for use. If no undo tablespace is available, the instance starts but uses the SYSTEM rollback segment, which is not recommended.
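To see which undo tablespaces exist in the database, a query like the following can be used (illustrative sketch; DBA_TABLESPACES marks undo tablespaces with CONTENTS = 'UNDO'):
SQL> SELECT tablespace_name, status FROM dba_tablespaces WHERE contents = 'UNDO';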
8.3 Steps to Create and Maintain an Undo Tablespace
1. CREATE an Undo Tablespace
There are two methods of creating an undo tablespace. The first method creates the undo tablespace when the CREATE DATABASE statement is issued. The second method is used with an existing database: it uses the CREATE UNDO TABLESPACE statement. You cannot create database objects in an undo tablespace; it is reserved for system-managed undo data.
SQL> CREATE UNDO TABLESPACE undotbs_02 DATAFILE '/disk1/oradata/DEMO/undo_01.dbf' SIZE 10M;
An undo tablespace is a permanent, locally managed tablespace, read-write, in logging mode, with the default block size. Values for MINIMUM EXTENT and DEFAULT STORAGE are system generated.
2. Altering an Undo Tablespace
Because most aspects of undo tablespaces are system managed, you need only be concerned with the following actions:
a. Adding or resizing a datafile
b. Renaming a datafile
c. Bringing a datafile online or taking it offline
d. Beginning or ending an open backup
SQL> ALTER TABLESPACE undotbs_02 ADD DATAFILE '/disk3/oradata/DEMO/undo_02.dbf' SIZE 10M AUTOEXTEND ON;
3. Switching Undo Tablespace You can switch from using one undo tablespace to another. Because the UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER SYSTEM SET statement can be used to assign a new undo tablespace.
SQL> ALTER SYSTEM SET UNDO_TABLESPACE=undotbs_02;
Note: The switch operation does not wait for transactions in the old undo tablespace to commit. If there are any pending transactions, the old undo tablespace enters a PENDING OFFLINE status. If the parameter value for UNDO_TABLESPACE is set to '' (two single quotes), the current undo tablespace is switched out without switching in any other undo tablespace, for example to un-assign an undo tablespace when you want to revert to manual (rollback segment) undo mode:
SQL> ALTER SYSTEM SET UNDO_TABLESPACE = '';
4. Dropping an Undo Tablespace
Use the DROP TABLESPACE statement to drop an undo tablespace. An undo tablespace can only be dropped if it is not currently used by any instance. If the undo tablespace contains any outstanding transactions (for example, a transaction that died but has not yet been fully recovered), the DROP TABLESPACE statement fails.
SQL> DROP TABLESPACE undotbs_02 INCLUDING CONTENTS AND DATAFILES;
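The currently active undo tablespace can be checked before attempting a drop, for example with the SQL*Plus SHOW command (illustrative):
SQL> SHOW PARAMETER undo_tablespace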
8.4 Retention Guarantee
If retention guarantee is enabled, the specified minimum undo retention is guaranteed. When you enable this option, the database never overwrites unexpired undo data (that is, undo data whose age is less than the undo retention period). By enabling the guarantee option, you instruct the database not to overwrite unexpired undo data even if it means risking failure of currently active DML operations. A typical use of the guarantee option is when you want to ensure deterministic and predictable behavior of Flashback operations by guaranteeing the availability of the required undo data.
SQL> CREATE UNDO TABLESPACE undo_tbs1 DATAFILE '/disk1/oradata/DEMO/undo_tbs1_01.dbf' SIZE 10M RETENTION GUARANTEE;
Existing undo tablespaces can also be made to comply with the guarantee by ALTERing them, as in:
SQL> ALTER TABLESPACE undo_tbs2 RETENTION GUARANTEE;
You can verify that the tablespace has guaranteed undo retention with:
SQL> SELECT retention FROM dba_tablespaces WHERE tablespace_name = 'UNDO_TBS1';
8.5
b. To check the commit time for each extent in the undo tablespaces:
SQL> SELECT segment_name, tablespace_name, extent_id, file_id, block_id, blocks, commit_jtime, commit_wtime, status FROM DBA_UNDO_EXTENTS;
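Undo activity over time can also be monitored through V$UNDOSTAT, which records statistics in ten-minute intervals (an illustrative query; useful when sizing UNDO_RETENTION):
SQL> SELECT begin_time, end_time, undoblks, txncount FROM v$undostat;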
Note: If the redo log files are lost, the above command will recreate those logfiles.
5. To drop a log group and members
If you try to drop an online group that is in use, you will get the following error: ORA-7360 Unable to obtain information about log group.
SQL> ALTER DATABASE DROP LOGFILE GROUP 3; /* to drop a group */
SQL> ALTER DATABASE DROP LOGFILE MEMBER '/disk3/oradata/demo/redolog2a.log';
To drop an online redo log group, consider the following points:
a. An instance requires at least two groups of online redo log files, regardless of the number of members in the groups.
b. You can drop an online redo log group only if it is not the active group. If you need to drop the active group, first force a log switch to occur.
SQL> ALTER SYSTEM SWITCH LOGFILE;
Make sure an online redo log group is archived before dropping it. If you try to drop a member from an online group that is in use, you get the following error: ORA-313 Open failed for member of log group.
To drop a member, consider the following points:
a. It is OK to drop online redo log members so that a mirrored online redo log becomes temporarily unsymmetrical. For example, if you are using duplexed groups of online redo log files, you can drop one member of one group, even though all other groups have two members each. However, you should rectify this situation immediately, so that all groups have at least two members, thereby eliminating the online redo log as a single point of failure.
b. You can drop an online redo log member only if it is not part of the active group. If you need to drop a member of the active group, first force a log switch to occur.
9.4 Data Dictionary Views
V$LOG V$LOGFILE V$LOG_HISTORY V$LOGHIST V$RECOVERY_LOG V$ARCHIVED_LOG
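As an illustrative use of these views, the current groups and their members can be listed as follows (column names as in V$LOG and V$LOGFILE):
SQL> SELECT group#, members, status FROM v$log;
SQL> SELECT group#, member FROM v$logfile;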
No media recovery is required. By using mirrored control files, you avoid unnecessary problems if a disk failure occurs on the database server.
10.2 Managing the Size of the Control File
Typical control files are small. The major portion of the control file size depends on the values set for the parameters MAXDATAFILES, MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, and MAXINSTANCES of the CREATE DATABASE statement that created the associated database. The maximum control file size is operating system specific. To check the number of files specified in the control file, back it up to trace:
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
$ cd /disk1/oradata/DEMO/udump
$ cp ora_2065.trc bkup.sql
$ cat bkup.sql
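The control files currently in use can also be listed directly from the V$CONTROLFILE view (illustrative):
SQL> SELECT name FROM v$controlfile;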
If the MAXDATAFILES parameter is set to 5 and you try to add a 6th datafile by issuing the command:
SQL> ALTER TABLESPACE user_demo ADD DATAFILE '/disk1/oradata/DEMO/user03.dbf' SIZE 10M;
ORA-1503 create control file failed
ORA-116 file number 3 larger than MAXDATAFILES (5)
To increase the maximum number of datafiles supported by your database, edit the trace file:
$ vi bkup.sql
MAXDATAFILES 10
:wq
SQL> CONNECT / AS SYSDBA
SQL> @bkup.sql
SQL> ALTER TABLESPACE user_demo ADD DATAFILE '/disk1/oradata/DEMO/user03.dbf' SIZE 10M;
Follow the same steps to increase the values of the MAXLOGFILES and MAXLOGMEMBERS parameters.
1. To create an additional copy of the control file, issue the following statements. Include the complete path of the new file in the CONTROL_FILES parameter in INIT.ORA.
SQL> SHUTDOWN IMMEDIATE
$ cd /disk2/oradata/DEMO
$ cp control1.ctl control2.ctl
SQL> STARTUP
2. To drop excess control files
a. Shut down the database
b. Edit the CONTROL_FILES parameter in INIT.ORA and remove one of the control file entries, leaving at least one control file to start the database
c. Restart the database
The above steps do not physically delete the file from the disk.
SQL> SHUTDOWN IMMEDIATE $ cat initDEMO.ORA # here we are only observing line 1 which reads control files CONTROL_FILES = (/disk2/oradata/DEMO/control2.ctl) SQL> STARTUP
3. To trace the control file to udump destination and generate the create control file syntax:
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
$ cd /disk2/oradata/DEMO/udump
$ cp ora_2065.trc orabkup.sql
$ vi orabkup.sql
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE DEMO RESETLOGS NOARCHIVELOG
LOGFILE
GROUP 1 ('/disk1/oradata/DEMO/redolog1.log', '/disk2/oradata/DEMO/redolog2.log') SIZE 4M,
GROUP 2 ('/disk1/oradata/DEMO/redolog1.log', '/disk2/oradata/DEMO/redolog2.log') SIZE 4M
DATAFILE '/disk1/oradata/DEMO/system01.dbf'
SQL> SHUTDOWN ABORT
SQL> @orabkup.sql
SQL> ALTER DATABASE OPEN;
4. To rename (change the name of) a database a. Trace the control file
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
b. Edit the traced file, changing CREATE CONTROLFILE REUSE DATABASE to CREATE CONTROLFILE SET DATABASE <new_name>
c. Set the parameter db_name to the new name in the init.ora
d. Remove the existing control files from their destination
e. Finally, execute the traced control file script to create the new control file
11.2 To Convert the Database into ARCHIVELOG Mode
Enter the following parameters in init<SID>.ora:
SQL> SHUTDOWN IMMEDIATE;
Note: To enable archive log mode, shut down the database with the NORMAL or IMMEDIATE option (not ABORT).
$ cd $ORACLE_HOME/dbs $ vi initDEMO.ora LOG_ARCHIVE_DEST=/disk2/oradata/DEMO/ARCH :wq $ mkdir /disk2/oradata/DEMO/ARCH $ sqlplus / as sysdba SQL> STARTUP MOUNT SQL> ARCHIVE LOG LIST
The above command shows whether automatic archiving is started and whether the database is running in ARCHIVELOG mode. After setting the above parameters in init<SID>.ora, automatic archiving will be started, but the database will still be in NOARCHIVELOG mode. The following commands convert the database to ARCHIVELOG mode:
SQL> ALTER DATABASE ARCHIVELOG; SQL> ALTER DATABASE OPEN;
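The new mode can be confirmed either with ARCHIVE LOG LIST or by querying V$DATABASE (illustrative):
SQL> SELECT log_mode FROM v$database;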
12.2.1 Scenario 1: Add Oracle Managed Files to an Existing Database Assume in this case that an existing database does not have any Oracle-managed files, but the DBA would like to create new tablespaces with Oracle-managed files and locate them in directory /disk1/oradata/demo2. 1. Setting the initialization parameters To allow automatic datafile creation, set the DB_CREATE_FILE_DEST initialization parameter to the file system directory in which to create the data files. This can be done dynamically as follows:
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/disk1/oradata/DEMO';
2. Creating tablespaces Once DB_CREATE_FILE_DEST is set, the DATAFILE clause can be omitted from a CREATE TABLESPACE statement. The datafile is created in the location specified by DB_CREATE_FILE_DEST by default. For example:
SQL> CREATE TABLESPACE tbs_2;
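The system-generated file name for the Oracle-managed datafile can be checked afterwards (illustrative query on DBA_DATA_FILES):
SQL> SELECT file_name, bytes FROM dba_data_files WHERE tablespace_name = 'TBS_2';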
When tablespace tbs_2 is dropped, the datafiles associated with this tablespace will also be removed from the operating system.
SQL> DROP TABLESPACE tbs_2;
Check the path specified in DB_CREATE_FILE_DEST to confirm that the datafile has been removed.
12.2.2 Scenario 2: CREATE DATABASE Using Oracle Managed Files
1. Include in the initialization parameter file:
DB_CREATE_FILE_DEST = /disk1/oradata/DEMO DB_CREATE_ONLINE_LOG_DEST_1 = /disk1/oradata/DEMO DB_CREATE_ONLINE_LOG_DEST_2 = /disk2/oradata/DEMO
In this example, the file sizes for Oracle-managed files for the default temporary tablespace and undo tablespace are specified. A database with the following characteristics is created:
A 200 MB SYSTEM tablespace datafile in directory /disk1/oradata/DEMO
Two online redo log groups with two members of 100 MB each, one member each in directories /disk1/oradata/DEMO and /disk2/oradata/DEMO
For the undo tablespace named undo_ts, a 10 MB datafile in directory
/disk1/oradata/DEMO
If no CONTROL_FILES initialization parameter was specified, then two control files are created, one each in directories /disk1/oradata/DEMO and /disk2/oradata/DEMO. The control file in /disk1/oradata/DEMO is the primary control file.
2. To change the profile of an existing user. Note: The profile clerk must be created before it can be assigned. If the user is already created, use the ALTER USER command to assign the profile.
SQL> ALTER USER user_01 PROFILE clerk;
3. The IDENTIFIED BY clause is used to give the user a password. To change a user's password, issue the ALTER USER command:
SQL> ALTER USER user_01 IDENTIFIED BY <new_password>;
4. To drop a user
SQL> DROP USER user_01 CASCADE;
Use the CASCADE option if the user owns any objects in the schema.
5. The following statement assigns quotas on additional tablespaces and revokes the quota on the SYSTEM tablespace:
SQL> ALTER USER user_01 QUOTA 10M ON user_data02 QUOTA 0M ON system;
13.3 Managing Resources with Profiles
System resource limits are managed with user profiles. A profile is a named set of resource limits that can be assigned to a user. These resources can generally be established at the session and statement levels. A session is created every time a database user connects to the database. If a session-level resource limit is reached, the current statement is rolled back and an error message is returned to the user. The database administrator has the option to globally enable or disable profiles; that is, the DBA can make specific resource limits apply to all users. To create a profile, issue the CREATE PROFILE command. The following resource limits can be set during profile creation:
SESSIONS_PER_USER: Limits the number of concurrent sessions for a user
CPU_PER_SESSION: Limits the CPU time for a session, expressed in hundredths of seconds
CPU_PER_CALL: Limits the CPU time for a call, expressed in hundredths of seconds
CONNECT_TIME: Limits the elapsed connect time of a session
FAILED_LOGIN_ATTEMPTS: Number of failed login attempts after which the account is locked
PASSWORD_LIFE_TIME: Number of days the password is valid
PASSWORD_REUSE_MAX: Number of password changes required before a password can be reused
PASSWORD_VERIFY_FUNCTION: Function used to verify a new password
PASSWORD_LOCK_TIME: Number of days the account remains locked
PASSWORD_GRACE_TIME: Number of days during which a password-expiry warning is issued
IDLE_TIME: Defines the maximum amount of continuous inactive time in a session
LOGICAL_READS_PER_SESSION: Limits the number of data blocks read in a session
LOGICAL_READS_PER_CALL: Limits the number of data blocks read by a call to process a SQL statement
PRIVATE_SGA: Limits the amount of private space a session can reserve in the System Global Area; this limit applies only if you are using a multi-threaded server
COMPOSITE_LIMIT: Limits the total resource cost per session; a composite of resources such as CPU_PER_SESSION, CONNECT_TIME, LOGICAL_READS_PER_SESSION and PRIVATE_SGA
13.3.1 To Enable Resource Limits
The example below creates a profile named clerk, defining only some of the resources; all others assume the limits defined in the DEFAULT profile. To enable profiles, set RESOURCE_LIMIT=TRUE in init.ora, or issue:
SQL> ALTER SYSTEM SET RESOURCE_LIMIT=TRUE;
Example:
SQL> CREATE PROFILE clerk LIMIT
       SESSIONS_PER_USER 1
       CONNECT_TIME 560
       CPU_PER_CALL 6000
       FAILED_LOGIN_ATTEMPTS 3
       PASSWORD_LIFE_TIME 60
       PASSWORD_LOCK_TIME 1
       PASSWORD_GRACE_TIME 10
       IDLE_TIME 15;
SQL> ALTER PROFILE clerk LIMIT
       LOGICAL_READS_PER_SESSION 20000
       CPU_PER_CALL DEFAULT
       LOGICAL_READS_PER_CALL 100;
The following information is available in the data dictionary for every user and profile:
List of users in the database
Each user's default tablespace for tables, clusters and indexes
Memory usage for each current session
Space quotas for each user
Each user's assigned profile and resource limits
The cost assigned to each applicable resource
13.3.3 To Drop a Profile
SQL> DROP PROFILE clerk CASCADE;
Use the CASCADE option if the profile has been assigned to any user. Note: You cannot drop the DEFAULT profile.
13.4 Session and User Licensing
If your site is licensed by concurrent usage, you can track and limit the number of sessions concurrently connected to a database. If your site is licensed by named users, you can limit the number of named users created in a database. In either case, you control the licensing facilities and must enable them and set the appropriate limits using these initialization parameters: LICENSE_MAX_SESSIONS, LICENSE_SESSION_WARNING, LICENSE_MAX_USERS.
a. To set the maximum sessions for an instance:
SQL> ALTER SYSTEM SET LICENSE_MAX_SESSIONS = 100;
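The other two licensing parameters can be set the same way (illustrative values; the warning threshold should be below the session maximum):
SQL> ALTER SYSTEM SET LICENSE_SESSION_WARNING = 90;
SQL> ALTER SYSTEM SET LICENSE_MAX_USERS = 200;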
13.5 User Authentication
Oracle provides several means for users to be authenticated before they are allowed to create a database session. You can define users such that the database performs both identification and authentication of users. This is called database authentication. You can define users such that authentication is performed by the operating system or a network service. This is called external authentication.
13.5.1 Password File Authentication
Instead of connecting as SYS, one can connect to the database as another user by following these steps:
1. Create the password file using the ORAPWD utility:
$ cd $ORACLE_HOME/dbs $ orapwd file=<filename> password=<password> entries=<max_users>
Example:
$ orapwd file=orapwDEMO password=oracle entries=3
2. Set the initialization parameter REMOTE_LOGIN_PASSWORDFILE = EXCLUSIVE
3. Grant SYSDBA or SYSOPER to users:
SQL> connect / as SYSDBA SQL> GRANT SYSDBA TO scott; (or) SQL> GRANT SYSOPER to scott;
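Users recorded in the password file can be verified through the V$PWFILE_USERS view (illustrative):
SQL> SELECT username, sysdba, sysoper FROM v$pwfile_users;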
4. The user scott can now connect with administrative privileges without connecting as INTERNAL:
SQL> CONNECT scott/oracle AS SYSDBA; SQL> SHUTDOWN
13.5.1.1 Connecting to the Database Directly Through an OS Login Account
1. Create a user matching the OS account exactly (example: devadm is an OS account):
SQL> CREATE USER devadm IDENTIFIED EXTERNALLY;
3. Include the OS_AUTHENT_PREFIX="" parameter in the init<sid>.ora parameter file, and restart the database so the initialization parameter file is read.
4. Connect to the database via SQL*Plus as follows:
$ sqlplus /
13.6 Viewing Information about Database Users and Profiles DBA_USERS ALL_USERS USER_USERS DBA_TS_QUOTAS USER_TS_QUOTAS USER_PASSWORD_LIMITS USER_RESOURCE_LIMITS DBA_PROFILES RESOURCE_COST V$SESSION V$SESSTAT V$STATNAME Examples:
SQL> CREATE PROFILE prof_clerk1 LIMIT
       SESSIONS_PER_USER 1
       IDLE_TIME 30
       CONNECT_TIME 600;
SQL> CREATE USER user_lee IDENTIFIED BY lee
       DEFAULT TABLESPACE user_data
       TEMPORARY TABLESPACE temp_data
       QUOTA 50M ON user_data
       PROFILE prof_clerk1;
SQL> CREATE USER user_jone IDENTIFIED BY jone
       DEFAULT TABLESPACE user_data
       QUOTA UNLIMITED ON user_data;
SQL> CREATE USER user_mary IDENTIFIED BY mary;
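The quotas assigned above can be viewed through DBA_TS_QUOTAS (an illustrative query):
SQL> SELECT username, tablespace_name, bytes, max_bytes FROM dba_ts_quotas;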
Note: When specific quotas are assigned, the exact number is indicated in the MAX_BYTES column. Unlimited quotas are indicated by -1.
d. Viewing memory use for each user session:
SQL> SELECT username, value || ' bytes' "Current UGA Memory" FROM V$SESSION sess, V$SESSTAT stat, V$STATNAME name WHERE sess.sid = stat.sid AND stat.statistic# = name.statistic# AND name.name = 'session uga memory';
Note: System privileges cannot be granted along with object privileges and roles in the same GRANT command.
14.2.1.2 Revoking System Privileges
System privileges can be revoked using the REVOKE command. To revoke a system privilege, the revoker must hold that privilege with the ADMIN OPTION.
SQL> REVOKE CREATE TABLE FROM Robert;
14.2.2 Object Privileges
An object privilege is permission to perform an action on a specific object, such as a table, package or view. Some database objects don't have an associated object privilege. The available object privileges and their associated objects are:
Alter: Tables, Sequences
Delete: Tables, Views
Execute: Procedures
Index: Tables
Insert: Tables, Views
References: Tables
Select: Tables, Views, Sequences
Update: Tables, Views
1. To grant object privilege: You must own the object specified or have been granted the object privileges with the grant option.
SQL> GRANT INSERT, UPDATE ON temp TO Tom;
2. Revoking the object privileges: Object privileges can be revoked using the REVOKE command.
SQL> REVOKE UPDATE ON temp FROM Tom;
14.3 Managing User Roles
Roles are named groups of related privileges that are granted to individual users and other roles. Roles are created to manage the privileges for a database or for a user group. Roles have a set of properties that promote easier management of database privileges:
Simplified privilege management
Dynamic privilege management for individual users
Application-specific security
14.3.1 System-Defined Roles
Oracle provides several predefined roles with the Oracle Server. You can grant and revoke privileges and roles to these predefined roles just as you can to any role you define. The following is a list of the Oracle predefined roles and their granted privileges:
CONNECT: Create session
RESOURCE: Create cluster, create procedure, create sequence, create table, create trigger, create type, etc.
DBA: All system privileges with admin option
EXP_FULL_DATABASE: Select any table, backup any table; insert, delete and update on the tables sys.incvid, sys.incfil and sys.incexp
IMP_FULL_DATABASE: Become user, among other privileges
DELETE_CATALOG_ROLE: Delete privileges on all dictionary packages
EXECUTE_CATALOG_ROLE: Execute privileges on all catalog packages and views
SELECT_CATALOG_ROLE: Select privilege on all catalog tables and views
A role can be granted with the admin option. This option enables the grantee to do the following:
Grant or revoke the role to or from any user or role in the database
Grant the role with the admin option to other users and roles
Alter or drop the role
The creator of a role is automatically granted the role with the admin option.
14.3.2 Creating, Altering, Dropping, Granting and Revoking Roles
1. Creating a role
The name you provide for the role must be unique among the user names and roles in the database. Roles are not contained in the schema of any user. When a role is created, it has no privileges associated with it.
You must grant privileges or other roles to a new role. The GRANT command is used to assign privileges and roles to the new role. To create a role, you must have the CREATE ROLE system privilege. The following command creates the role named clerk:
SQL> CREATE ROLE clerk IDENTIFIED BY demo;
2. Altering roles You can alter roles using the ALTER ROLE command:
SQL> ALTER ROLE clerk IDENTIFIED BY <password>;
3. Dropping roles To drop a role from a database, use the DROP ROLE command:
SQL> DROP ROLE clerk;
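To see which users and roles have been granted a given role, DBA_ROLE_PRIVS can be queried (illustrative):
SQL> SELECT grantee, admin_option FROM dba_role_privs WHERE granted_role = 'CLERK';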
4. Granting roles
Roles can be granted to users, to other roles, and to PUBLIC (PUBLIC represents all users of the system). Use the SQL GRANT command. The next statement grants the role manager to the user user_01 with the admin option:
SQL> GRANT manager TO user_01 WITH ADMIN OPTION;
You cannot selectively revoke the admin option of a role. To remove the admin option, you must revoke the role and then grant the role again without the admin option.
14.3.3 Creating a Profile
Examples: The following statement creates the profile prof:
SQL> CREATE PROFILE prof LIMIT FAILED_LOGIN_ATTEMPTS 5 PASSWORD_LIFE_TIME 60 PASSWORD_REUSE_MAX 60 PASSWORD_REUSE_TIME UNLIMITED PASSWORD_VERIFY_FUNCTION verify_function PASSWORD_LOCK_TIME 1 PASSWORD_GRACE_TIME 10;
Note: Prior to using PASSWORD_VERIFY_FUNCTION, you need to create the function.
14.3.3.1 Password Complexity Verification
1. The Oracle password complexity verification routine can be specified using a PL/SQL script (utlpwdmg.sql), which sets the default profile parameters. The format of the PL/SQL routine is:
routine_name ( userid_parameter IN VARCHAR(30), Password_parameter IN VARCHAR(30), Old_password_parameter IN VARCHAR(30)).
2. Attempting to create a user whose password is the same as the username, with the profile prof, fails:
SQL> CREATE USER Scott IDENTIFIED BY scott PROFILE Prof; ORA-28003: Password verification for the specified password failed ORA-20001: Password same as user
3. Attempting to change the user's password back to scott also returns an error:
SQL> ALTER USER scott IDENTIFIED BY scott; ORA-28007: The password can not be reused
13. To list all the column specific privileges that have been granted:
SQL> SELECT grantee, table_name, column_name, privilege FROM dba_col_privs;
14. To list all system privileges currently available in the issuer's security domain, both from explicit privilege grants and from enabled roles:
SQL> SELECT * FROM session_privs;
17. To check the granted ROLES and their PRIVILEGES to TOM user:
SQL> SELECT a.grantee, a.granted_role, b.privilege FROM dba_role_privs a, dba_sys_privs b WHERE a.granted_role = b.grantee AND a.grantee = 'TOM';
14.3.4 Listing Privilege and Role Information
ALL_COL_PRIVS ALL_TAB_PRIVS ALL_TAB_PRIVS_MADE ALL_TAB_PRIVS_RECD DBA_ROLES USER_ROLE_PRIVS USER_SYS_PRIVS COLUMN_PRIVILEGES ROLE_ROLE_PRIVS SESSION_PRIVS USER_COL_PRIVS USER_TAB_PRIVS USER_TAB_PRIVS_MADE USER_TAB_PRIVS_RECD DBA_COL_PRIVS DBA_ROLE_PRIVS DBA_SYS_PRIVS DBA_TAB_PRIVS ROLE_SYS_PRIVS SESSION_ROLES
15 AUDITING
15.1 Introduction
Auditing is done to check regular and suspicious activity on the database. When your auditing purpose is to monitor for suspicious database activity, consider the following guidelines:
Audit generally, then specifically: When starting to audit for suspicious database activity, it is common that not much information is available to target specific users or schema objects. Therefore, audit options must be set more generally at first. Once preliminary audit information is recorded and analyzed, the general audit options should be turned off and more specific audit options enabled.
Protect the audit trail: When auditing for suspicious database activity, protect the audit trail so that audit information cannot be added, changed or deleted without being audited.
When auditing normal database activity, consider the following guidelines:
Audit pertinent actions: To avoid cluttering meaningful information with useless audit records and to reduce the amount of audit trail administration, audit only the targeted database activities.
Archive audit records and purge the audit trail: Once you have collected the required information, archive the audit records of interest and purge the audit trail of this information.
15.2 Initialization Parameters for Auditing
To enable auditing, set these parameters in INIT.ORA:
AUDIT_TRAIL = [TRUE or DB | OS | FALSE or NONE] AUDIT_FILE_DEST = /disk1/oradata/DEMO/audit
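The current audit settings can be confirmed from SQL*Plus (illustrative):
SQL> SHOW PARAMETER audit_trail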
AUDIT_TRAIL can be set to three values:
TRUE or DB: Stores the audit information in the database (the SYS.AUD$ table)
OS: Stores the audit information in operating system files
FALSE or NONE: Auditing is disabled
AUDIT_TRAIL enables or disables the writing of rows to the audit trail. Audit records are not written if the value is NONE or if the parameter is not present. The OS option enables system-wide auditing and causes audit records to be written to the operating system's audit trail. The values TRUE and FALSE are supported for backward compatibility: TRUE is equivalent to DB, and FALSE is equivalent to NONE.
15.2.1 Creating and Deleting Audit Trail Views
The database audit trail (SYS.AUD$) is a single table in each Oracle database's data dictionary. To help you view meaningful auditing information in this table, several predefined views are provided. You have to run cataudit.sql (run automatically when you run catalog.sql) as SYS to create the audit trail views. Auditing can be done on all types of commands.
15.3 Types of Auditing
Privilege level auditing
Object level auditing
Statement level auditing
Focusing statement, privilege and object auditing: Oracle allows you to focus statement, privilege and object auditing in three areas:
Successful and unsuccessful executions of the audited SQL statement
BY SESSION and BY ACCESS auditing
For specific users or for all users in the database (statement and privilege auditing only)
15.3.1 Auditing Successful and Unsuccessful Statement Execution
For statement, privilege, or object auditing, Oracle allows the selective auditing of successful executions of statements, unsuccessful attempts to execute, or both. Using either form of the AUDIT command, you can include:
The WHENEVER SUCCESSFUL option, to audit only successful executions of the audited statement
The WHENEVER NOT SUCCESSFUL option, to audit only unsuccessful executions of the audited statement
Neither of the previous options, to audit both successful and unsuccessful executions of the audited statement
15.3.2 Auditing BY SESSION versus BY ACCESS
Most auditing options can be set to indicate how audit records should be generated if the audited statement is issued multiple times in a single user session.
BY SESSION: For any type of audit, BY SESSION inserts only one audit record in the audit trail, per user and schema object, during the session that includes the audited action.
BY ACCESS: Setting audit BY ACCESS inserts one audit record into the audit trail for each execution of an auditable operation. For example, if SELECT on a table is audited BY ACCESS, the audit trail contains four records for four SELECT statements.
15.4 Privilege Level Auditing
Monitors suspicious activity performed through the system privileges granted to any user.
1. To audit sessions of the users STEEVE and LORI:
SQL> AUDIT CREATE SESSION BY steeve, lori;
If any of the users being audited has performed the audited operation, the activity can be queried from DBA_AUDIT_TRAIL:
SQL> SELECT username, action_name, returncode FROM dba_audit_trail;
15.5 Object Level Auditing
Object auditing is the selective auditing of specific DML statements. Object auditing audits the operations permitted by schema object privileges, such as SELECT or DELETE statements on a given table.
1. To audit SELECT on a table for all users:
SQL> AUDIT SELECT ON emp;
2. To audit SELECT and DELETE each time any user accesses the object:
SQL> AUDIT SELECT, DELETE ON scott.emp BY ACCESS;
If any of the users being audited has performed the audited operation, the activity can be queried from DBA_AUDIT_TRAIL:
SQL> SELECT username, action_name, returncode FROM dba_audit_trail;
15.6 Statement Auditing
Statement auditing is the selective auditing of related groups of statements that fall into two categories:
DDL statements, regarding a particular type of database structure or schema object, but not a specifically named structure or schema object (for example, AUDIT TABLE audits all CREATE and DROP TABLE statements)
DML statements, regarding a particular type of database structure or schema object, but not a specifically named structure or schema object (for example, AUDIT SELECT TABLE audits all SELECT ... FROM TABLE/VIEW statements, regardless of the table or view)
Queries that work with statement level auditing:
1. To audit statements by scott that contain the TABLE keyword:
SQL> AUDIT TABLE BY scott;
If any user being audited has performed the audited operation, the activity can be queried from DBA_AUDIT_TRAIL:
SQL> SELECT username, action_name, returncode FROM dba_audit_trail;
b. To audit all unsuccessful select, insert, and delete statements on all tables and unsuccessful uses of the execute any procedure system privilege, by all database users, by access:
SQL> AUDIT SELECT ANY TABLE, INSERT ANY TABLE, DELETE ANY TABLE, EXECUTE ANY PROCEDURE BY ACCESS WHENEVER NOT SUCCESSFUL;
c. To disable audit:
SQL> NOAUDIT;
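NOAUDIT can also reverse specific audit options rather than all of them; for example, undoing the earlier object and statement audits (illustrative):
SQL> NOAUDIT SELECT, DELETE ON scott.emp;
SQL> NOAUDIT TABLE BY scott;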
15.7 Data Dictionary Views to Query STMT_AUDIT_OPTION_MAP AUDIT_ACTIONS ALL_DEF_AUDIT_OPTS DBA_STMT_AUDIT_OPTS USER_OBJ_AUDIT_OPTS, DBA_OBJ_AUDIT_OPTS USER_AUDIT_TRAIL, DBA_AUDIT_TRAIL USER_AUDIT_SESSION, DBA_AUDIT_STATEMENT USER_AUDIT_OBJECT, DBA_AUDIT_OBJECT DBA_AUDIT_EXISTS USER_AUDIT_SESSIONS, DBA_AUDIT_SESSION USER_TAB_AUDIT_OPTS
16.3 Database Links
1. Database links are used to access schema objects in a remote database from the local database:
SQL> CREATE DATABASE LINK <linkname> CONNECT TO <username> IDENTIFIED BY <password> USING '<aliasname>';
To create database link at the local database which connects to the remote database user, issue:
SQL> GRANT CREATE DATABASE LINK TO scott;
SQL> CONNECT scott/tiger
SQL> CREATE DATABASE LINK lnk1 CONNECT TO system IDENTIFIED BY manager USING 'myalias';
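Once the link exists, remote objects can be referenced by appending the link name (illustrative; assumes an emp table exists in the remote schema):
SQL> SELECT * FROM emp@lnk1;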
Database links do not support DDL operations (CREATE, ALTER, DROP, etc.). You can use DML operations such as INSERT, UPDATE, and DELETE through database links.
3. To drop a database link, issue the command:
SQL> DROP DATABASE LINK <link_name>;
16.4 Global Names
If a database participates in a global network with many databases, and two or more databases have the same name, it becomes difficult to address each database uniquely. Oracle solves this problem with the GLOBAL NAMING concept. The global name of an Oracle database follows the pattern DB_NAME.DB_DOMAIN, where the domain name of the machine is always unique. Oracle also requires you to name database links pointing to remote databases following the same DB_NAME.DB_DOMAIN pattern. Set the following parameters in the initialization file (init<ORACLE_SID>.ora) to enable global names:
DB_NAME = ORCL
DB_DOMAIN = mydot.com
GLOBAL_NAMES = TRUE
Start the database and rename the global_name for the database by issuing the following statement:
SQL> ALTER DATABASE RENAME GLOBAL_NAME TO orcl.mydot.com;
16.5 Materialized Views
A MATERIALIZED VIEW can be thought of as a table that holds the results of a query, usually on one or more tables (called master tables) in a remote database. When materialized views are used, a refresh interval is established to schedule refreshes of the replicated data. Local updates can be prevented, and transaction-based refreshes can be used; available for some types of materialized views, these send from the master database only the rows that have changed for the materialized view. You need the CREATE MATERIALIZED VIEW, CREATE TABLE, CREATE VIEW and CREATE INDEX privileges. The queries that form the basis of materialized views fall into two categories, simple and complex. A simple materialized view query has no GROUP BY or CONNECT BY clause, sub-query, join conditions or set operations; if a materialized view query has any of these clauses or operations, it is referred to as a complex materialized view. When a materialized view is created, several internal objects are created in the schema of the materialized view; these objects should not be altered. To create a materialized view: Syntax:
SQL> CREATE MATERIALIZED VIEW <materialized view name>
     REFRESH [COMPLETE | FAST] WITH [PRIMARY KEY | ROWID]
     START WITH SYSDATE NEXT SYSDATE + 1/(24*60*60)   -- (for every second)
     AS SELECT * FROM <username.table_name@linkname>;
If you create a materialized view with the REFRESH FAST option, then you need to create a materialized view log on the table at the remote site (i.e., at the master/server side):
SQL> CREATE MATERIALIZED VIEW LOG ON <table_name>;
COMPLETE: The entire data set is regenerated every time the materialized view is refreshed.
FAST: Only the rows that were modified are regenerated at each refresh, using the materialized view log. Change information is stored in the materialized view log, a table in the master database that is associated with the master table. Oracle uses the materialized view log to track the rows that have been updated on the master table.
Example: if LNK1 reaches a database with the ORDER table, on which we want to create the materialized view, then:
SQL> CREATE MATERIALIZED VIEW snap1 REFRESH COMPLETE WITH ROWID
     START WITH SYSDATE NEXT SYSDATE + 1/(24*60*60)
     AS SELECT * FROM jones.order@lnk1;
However, the materialized views will not be refreshed automatically unless the following parameter, which starts the necessary job queue background processes (SNPn), is set in init<ORACLE_SID>.ora:
JOB_QUEUE_PROCESSES=3
Oracle automatically controls and monitors the commit or rollback of a distributed transaction and maintains the integrity of the global database using a mechanism called two-phase commit. This mechanism guarantees that the nodes participating in a distributed transaction either all commit or all roll back the transaction, thereby maintaining integrity. It has two phases:
Prepare phase: The initiating node asks all participants to prepare (to either commit or roll back, even if there is a failure).
Commit phase: If all participants respond to the initiating node that they are prepared, the initiating node asks all nodes to commit the transaction; if any participant cannot prepare, it asks all nodes to roll back the transaction.
If the transaction fails for any reason, its status is recorded at the commit point site; the commit point site is decided by the commit point strength of each node at the beginning. In-doubt transactions are automatically resolved by the RECO process and removed from the pending transaction table.
Note: A materialized view can be refreshed manually using the DBMS_MVIEW package:
SQL> EXECUTE DBMS_MVIEW.REFRESH('<materialized view name>', '<refresh option>');
The refresh option parameter indicates whether the materialized view is refreshed with the COMPLETE or the FAST method.
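The all-or-nothing decision logic of the two-phase commit mechanism described earlier can be sketched in a short simulation. Python is used purely for illustration; the `Node` class and its methods are hypothetical and are not Oracle's internal implementation.

```python
# Minimal sketch of two-phase commit: the initiating node (coordinator) asks
# every participant to prepare; only if all vote "prepared" does it issue a
# commit, otherwise it issues a rollback. Illustrative only.

class Node:
    def __init__(self, name, can_prepare=True):
        self.name = name
        self.can_prepare = can_prepare
        self.state = "active"

    def prepare(self):
        # Phase 1: promise to commit or roll back even after a failure
        self.state = "prepared" if self.can_prepare else "abort"
        return self.state == "prepared"

    def finish(self, commit):
        # Phase 2: apply the coordinator's decision
        self.state = "committed" if commit else "rolled back"

def two_phase_commit(nodes):
    votes = [n.prepare() for n in nodes]   # prepare phase
    decision = all(votes)                  # commit only on unanimous "prepared"
    for n in nodes:
        n.finish(decision)                 # commit phase
    return "committed" if decision else "rolled back"

print(two_phase_commit([Node("db1"), Node("db2")]))                     # committed
print(two_phase_commit([Node("db1"), Node("db2", can_prepare=False)]))  # rolled back
```

Note how a single node that cannot prepare forces every participant to roll back, which is exactly the integrity guarantee the mechanism provides.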
Syntax:
$ exp <username/password> <parameters>
Export parameters:
BUFFER               - Size of the data buffer
FILE                 - Output file (default expdat.dmp)
COMPRESS             - Import data into one extent (default Y)
GRANTS               - Export grants (default Y)
INDEXES              - Export indexes (default Y)
ROWS                 - Export data rows (default Y)
CONSTRAINTS          - Export constraints (default Y)
LOG                  - Log file of screen output
FULL                 - Entire file (default N)
OWNER                - List of owner names
TABLES               - List of table names
INCTYPE              - Incremental export type (INCREMENTAL, CUMULATIVE and COMPLETE)
PARFILE              - Parameter file
FEEDBACK             - Display progress every x rows (default 0)
TRANSPORT_TABLESPACE - Export transportable tablespace metadata (N)
TABLESPACES          - List of tablespaces to transport
VOLSIZE              - Number of bytes to write to each tape volume
FILESIZE             - Maximum size of each dump file
QUERY                - SELECT clause used to export a subset of a table
TRIGGERS             - Export triggers (Y)
RESUMABLE            - Suspend when a space related error is encountered (N)
RESUMABLE_NAME       - Text string used to identify resumable statement
RESUMABLE_TIMEOUT    - Wait time for RESUMABLE
FLASHBACK_SCN        - SCN used to set session snapshot back to
FLASHBACK_TIME       - Used to get the SCN closest to the specified time
Examples:
4. If you want to export your database, with only those tables that are changed after the previous complete backup:
$ exp system/manager file=inc.dmp log=inc.log inctype=incremental
5. To export the tables emp and dept, which are owned by scott, with no constraints:
$ exp scott/tiger file=scott1.dmp log=scott1.log tables=emp,dept constraints=n
or
$ cat demo.par       # contents of the export parameter file
file=scott1.dmp
log=scott1.log
tables=emp,dept
constraints=n
$ exp scott/tiger parfile=demo.par
6. To export only a portion of a table's data, use the QUERY export option introduced in Oracle 8i:
$ exp scott/tiger file=emp.dmp log=emp.log tables=emp query=\"where deptno=10\"
7. To export partitions: if the emp table has two partitions, m and z, the following exports only partition m from table emp:
$ exp scott/tiger tables=emp:m rows=y
17.3 Incremental, Cumulative and Complete Exports
Incremental exports: Back up only tables that have changed since the last incremental, cumulative or complete export. An incremental export exports each such table's definition and all of its data, not just the changed rows (i.e., the entire table).
Cumulative exports: Back up tables that have changed since the last cumulative or complete export. A cumulative export rolls up all the incremental exports done since the last cumulative or complete export into a single cumulative export file.
Complete exports: Establish a base for incremental and cumulative exports. A complete export is similar to a full database export, except that it also updates the tables that track incremental and cumulative exports.
Assume that, as manager of a data center, you do the following tasks:
1. A complete export (X) every three weeks
2. A cumulative export (C) every Sunday
3. An incremental export (I) every night
Your export schedule is as follows:
Day      1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18
Export   X  I  I  I  I  I  I  C  I  I  I  I  I  I  C  I  I  I
Weekday  S  M  T  W  T  F  S  S  M  T  W  T  F  S  S  M  T  W
To restore through day 18, first import the system information from the incremental export taken on day 18. Then import the data from:
1. The complete export taken on day 1
2. The cumulative export taken on day 8
3. The cumulative export taken on day 15
4. The three incremental exports taken on days 16, 17 and 18
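The restore order follows mechanically from the export types: the latest complete export, every cumulative export after it, then every incremental export after the last cumulative (with the system information imported first from the most recent incremental). A small sketch of that rule (Python for illustration; the schedule encoding and file names like X1, C8, I16 follow the text):

```python
# Compute the data-import order needed to restore through a given day, from a
# schedule mapping day -> export type ('X' complete, 'C' cumulative, 'I' incremental).
# Illustrative sketch only; it encodes the rule described in the text.

def restore_order(schedule, through_day):
    days = [d for d in sorted(schedule) if d <= through_day]
    last_complete = max(d for d in days if schedule[d] == "X")
    cumulatives = [d for d in days if schedule[d] == "C" and d > last_complete]
    last_base = max([last_complete] + cumulatives)
    incrementals = [d for d in days if schedule[d] == "I" and d > last_base]
    return [f"{schedule[d]}{d}" for d in [last_complete] + cumulatives + incrementals]

# The schedule from the table above: complete on day 1, cumulative on days 8
# and 15, incremental every other night.
sched = {1: "X", 8: "C", 15: "C"}
sched.update({d: "I" for d in range(1, 19) if d not in sched})

print(restore_order(sched, 18))   # ['X1', 'C8', 'C15', 'I16', 'I17', 'I18']
```

The output matches the import sequence listed above (after the initial system-information import from I18).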
17.4 Import parameters
BUFFER                - Size of the data buffer
FILE                  - Input file (default expdat.dmp)
GRANTS                - Import grants (default Y)
INDEXES               - Import indexes (default Y)
ROWS                  - Import data rows (default Y)
LOG                   - Log file of screen output
FULL                  - Entire file (default N)
TABLES                - List of table names
INCTYPE               - Incremental import type (SYSTEM, RESTORE)
PARFILE               - Parameter file
SHOW                  - List file contents only (default N)
IGNORE                - Ignore create errors (default N)
DESTROY               - Overwrite tablespace datafile (default N)
INDEXFILE             - Write table/index DDL to the specified file
FROMUSER              - User to import from
TOUSER                - User to import to
COMMIT                - Commit array insert (default N)
POINT_IN_TIME_RECOVER - Whether or not import recovers one or more tablespaces
SKIP_UNUSABLE_INDEXES - Whether or not import skips building indexes that were set to the index unusable state
TRANSPORT_TABLESPACE  - Import transportable tablespace metadata (N)
TABLESPACES           - List of tablespaces to be transported into the database
DATAFILES             - Datafiles to be transported into the database
TTS_OWNERS            - Users that own data in the transportable tablespace set
RESUMABLE             - Suspend when a space related error is encountered (N)
RESUMABLE_NAME        - Text string used to identify resumable statement
RESUMABLE_TIMEOUT     - Wait time for RESUMABLE
4. To import only the data of emp, ignoring create errors:
$ imp system/manager file=steeve.dmp log=steeve_ign.log ignore=y
17.5 Importing Incremental, Cumulative and Complete Export Files
To import the data, you have the following export files:
1. One complete export, X1
2. Two cumulative exports, C8 and C15
3. Three incremental exports, I16, I17 and I18
You should import in the following order:
$ imp system/manager inctype=system  full=y file=I18
$ imp system/manager inctype=restore full=y file=X1
$ imp system/manager inctype=restore full=y file=C8
$ imp system/manager inctype=restore full=y file=C15
$ imp system/manager inctype=restore full=y file=I16
$ imp system/manager inctype=restore full=y file=I17
$ imp system/manager inctype=restore full=y file=I18
We do logical backups for:
1. Moving database objects from one database to another
2. Moving database objects from one user account to another user account
3. Upgrading from one Oracle version to another
4. Migrating from one platform to another
5. As an alternative to the regular COLD or HOT (physical) backup: if the tape doesn't work, at least we can recover from the logical backup
6. For a database REORG: if the database has been growing for a while, it has to manage too many extents (since the segments keep growing). The only way to get rid of so many extents is by:
   - Taking a FULL database export
   - Re-creating the database
   - Importing the FULL database
By doing so, all the segments will be created with an INITIAL extent as big as the table's used space (basically it adds up all the extents' space and makes one INITIAL extent). Another important thing that a REORG ensures is the INTEGRITY of Oracle data. Although we probably never see any problems with the data, there is a small chance that Oracle picks up bad data because of disk-block corruption, which would hide in the dark until you try to select the data from that table. These kinds of intricacies can be avoided by doing a REORG at least every four (4) months.
17.6 Points to Remember Before Doing Export or Import
1. Always specify the LOG=<logfile> parameter when you do an EXPORT, to know exactly what happened while you were away from the machine. It will also be helpful at IMPORT time, so that we can compare the log files and make sure all tables were imported correctly with all their rows.
2. Always specify BUFFER=2000000 so that it overrides the system default BUFFER parameter, which is only 64K; whenever EXP accumulates 64K worth of data in memory (not the SGA), it writes that data to the export file.
3. Always specify FILE=<filename>; otherwise Oracle writes to a default file called EXPDAT.DMP, which may overwrite an earlier export that might still be needed.
4. Always perform EXP by giving the complete syntax (command line) rather than using the INTERACTIVE method.
5. Whenever you do EXP or IMP, don't run it at the prompt; instead write the command in a file and execute it as a shell program, and not directly but using NOHUP or AT (UNIX commands). This ensures that even if there are problems with your terminal or modem, the command still runs as a background process.
6. After using the export file, try to compress it, so that it won't occupy much space.
7. If you are doing an EXP specifying FULL=Y and the database is relatively large, then don't create indexes as part of the IMP; rather, IMP with INDEXES=N and create the indexes manually after the IMP is done.
8. EXP and IMP are the only way to get all the SCHEMA definitions. You can get the SCHEMA definitions by giving the following commands:
$ exp scott/tiger file=expscott.dmp log=expscott.log $ imp scott/tiger file=expscott.dmp log=scott_DDL.sql
9. Issue these commands immediately one after the other, without dropping any tables or deleting any data or objects. The import fails because it tries to create all of SCOTT's objects again while they already exist, so the IMP process generates errors and dumps them into the log file. Once the IMP finishes, we can go into the log file and, by removing the error messages, obtain the entire schema definition (undocumented).
10. In point 7, we discussed that indexes should be created after the IMP. But we don't have any SQL script to generate the indexes. Consider the following:
$ exp sys/sys full=y file=expfull_Mar08.dmp log=expfull.log buffer=2000000 $ imp sys/sys full=y file=expfull_Mar08.dmp indexfile=cr8_indexes.sql
(At this point we haven't created any objects; Oracle only writes all the index DDL to the INDEXFILE.) Now re-create the database and issue the following command:
$ imp sys/sys full=y file=expfull_Mar08.dmp log=impfull.log buffer=2000000 indexes=n commit=y
As we know, this would fail because there is no password associated with the user ID, hence the entire file would contain incomplete statements. This can be handled by issuing the following command:
SQL> ALTER SESSION SET CURRENT_SCHEMA = SCOTT;
This is very helpful since we don't really know all the passwords; that's why we log in as that user indirectly, from SYS or SYSTEM.
11. If you have large tables and the rollback segments (RBS) are not big enough to hold an entire table's information, use COMMIT=Y at IMPORT time so that a commit is issued whenever the BUFFER is full, which keeps the rollback segments from filling up. There is a disadvantage to specifying COMMIT=Y: if the IMPORT fails midway (for any reason), the last imported table may contain a PARTIAL number of rows, which can cause further failures when this table acts as a master table for other tables. Solution: in that scenario it is best to just drop the last table and run the same command again.
17.7 Selecting Detailed Info about INCTYPE Exports 1. To observe the details about Logical BACKUP using INCTYPE in the database:
SQL> SELECT * FROM dba_exp_version;
18.4 Data Pump Import Modes
Data Pump Import is a utility for loading an export dump file set into a target system. The mode is specified on the command line using the appropriate parameter. The available modes are:
Full import mode
Schema mode
Table mode
Tablespace mode
Transportable tablespace mode
Examples:
1. To perform a full database import:
$ impdp system/manager DUMPFILE=dpump_dir1:expfull.dmp FULL=y LOGFILE=dpump_dir2:full_imp.log JOB_NAME=full_job1
4. The following Export and Import commands are executed to remap the scott schema into the steve schema:
$ expdp SYSTEM/password SCHEMAS=scott DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp $ impdp SYSTEM/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp REMAP_SCHEMA=scott:steve
5. To exclude metadata such as views, packages and functions from the scott schema, the following command is issued:
$ impdp scott/tiger DIRECTORY=dpump_dir1 DUMPFILE=scott_exclude.dmp EXCLUDE=VIEW,PACKAGE,FUNCTION
18.5 Transport of Tablespaces Across Different Platforms
This functionality can be used to:
- Allow a database to be migrated from one platform to another
- Provide an easier and more efficient means for content providers to publish structured data and distribute it to customers running Oracle Database on different platforms
- Simplify the distribution of data from a data warehouse environment to data marts, which are often running on smaller platforms
Steps for transporting a tablespace from Unix to Windows:
1. Determine whether the platforms use the same endian format by querying V$TRANSPORTABLE_PLATFORM:
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;
4. Use an operating system utility to copy the datafiles of the USERS tablespace to the target server.
5. Plug the tablespace into the target database. From the OS command line, issue:
C:> impdp directory=dump_dir dumpfile=users_tts.dmp transport_datafiles=c:\oracle\oradata\ORCL\users_01.dbf
18.5.1 Initiating a Job During Data Pump Export Operations and Restarting Jobs
Data Pump uses a master table, a master process, and worker processes to perform the work and keep track of progress. For every Data Pump Export job and Data Pump Import job, a master process is created. The master process controls the entire job, including communicating with clients, creating and controlling a pool of worker processes, and performing logging operations. The job name is used to identify the export job in subsequent actions, such as when the ATTACH parameter is used to attach to a job. The job name becomes the name of the master table in the current user's schema; the master table is used to control the export job.
1. To export the scott schema objects, assigning a job name:
$ expdp scott/tiger DIRECTORY=dpump_dir1 DUMPFILE=exp_job.dmp JOB_NAME=exp_job NOLOGFILE=y
2. While the export is running, press Ctrl+C. This starts the interactive command interface of Data Pump Export: logging to the terminal stops and the Export prompt is displayed. Issue the following command to stop the job:
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job (y/n): y
3. Enter the following command to reattach to the job you just stopped:
$ expdp scott/tiger ATTACH=exp_job
4. After the job status is displayed, you can issue the CONTINUE_CLIENT command to resume logging mode and restart the job:
Export> CONTINUE_CLIENT
19.3.4 Memory Tuning
19.3.4.1 Tuning the Buffer Cache
The buffer cache is the cache structure in the SGA that holds in-memory copies of the most recently used Oracle data blocks. First find the ratio between misses and hits, i.e., compute the cache HIT RATIO. If the hit ratio is less than 85%, increase the buffers allocated, using the DB_CACHE_SIZE=<size> parameter in INIT.ORA. The query to find the ratio is as follows:
SQL> SELECT 1 - (P.VALUE / (D.VALUE + C.VALUE)) "HIT RATIO"
       FROM V$SYSSTAT P, V$SYSSTAT C, V$SYSSTAT D
      WHERE P.NAME = 'physical reads'
        AND D.NAME = 'db block gets'
        AND C.NAME = 'consistent gets';
In the above query:
Physical reads  - the number of reads made to the database files
Consistent gets - the number of accesses made to the block buffers to retrieve data in consistent mode
DB block gets   - the number of blocks accessed via single block gets
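The hit-ratio arithmetic from the query above can be worked through with made-up statistic values (Python purely for illustration; the numbers are invented, not real V$SYSSTAT output):

```python
# Buffer cache hit ratio = 1 - physical_reads / (db_block_gets + consistent_gets),
# exactly as the V$SYSSTAT query computes it. Sample values are invented.

def hit_ratio(physical_reads, db_block_gets, consistent_gets):
    return 1 - physical_reads / (db_block_gets + consistent_gets)

ratio = hit_ratio(physical_reads=10_000, db_block_gets=40_000, consistent_gets=60_000)
print(f"{ratio:.0%}")   # 90%

# Apply the 85% rule of thumb from the text:
print("increase DB_CACHE_SIZE" if ratio < 0.85 else "cache adequately sized")
```

Here 10,000 physical reads against 100,000 logical reads gives a 90% hit ratio, above the 85% threshold, so no cache increase is indicated.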
Note: One physical read can fetch multiple blocks; control this with the following parameter:
DB_FILE_MULTIBLOCK_READ_COUNT = <number of blocks to read>
19.3.4.2 Tuning the Redolog Buffer
To tune the redolog buffer, one has to reduce waiting on the redo latches. Find the ratio between redo log space requests and redo writes; if the ratio is more than 1%, tuning is needed. This information can be obtained from V$LATCH and V$SYSSTAT. The query is as follows:
SQL> SELECT name, value FROM V$SYSSTAT WHERE name = 'redo log space requests';
If misses/gets > 1% or immediate_misses/(immediate_gets + immediate_misses) > 1%, you need to look into the log buffer, i.e., increase the LOG_BUFFER parameter in the init.ora. In the above views:
Gets             - the total number of requests for information on the corresponding item
Misses           - the number of requests resulting in cache misses
Immediate_gets   - the number of successful immediate requests for each latch
Immediate_misses - the number of unsuccessful immediate requests for each latch
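The two latch checks reduce to simple ratios. A hedged sketch with invented V$LATCH values, assuming the common 1% rule of thumb (Python for illustration only):

```python
# Redo latch contention checks (illustrative values, not real V$LATCH output):
#   misses / gets                                  > 1%  -> contention
#   immediate_misses / (immediate_gets + immediate_misses) > 1%  -> contention

def latch_contention(gets, misses, imm_gets, imm_misses, threshold=0.01):
    willing = misses / gets
    immediate = imm_misses / (imm_gets + imm_misses)
    return willing > threshold or immediate > threshold

print(latch_contention(gets=100_000, misses=500, imm_gets=50_000, imm_misses=100))
# 0.5% and ~0.2% are both under 1% -> False: no need to grow LOG_BUFFER
```

Either ratio crossing the threshold is enough to flag the redolog buffer for tuning.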
19.3.4.3 Tuning the Library Cache
The library cache is part of the shared pool and holds the shared SQL and PL/SQL areas. This section tells you how to tune the library cache by:
Examining library cache activity
Reducing library cache misses
Speeding access to shared SQL and PL/SQL areas in the library cache
The query is as follows:
SQL> SELECT SUM(pins), SUM(reloads),
            SUM(reloads) / SUM(pins) * 100 "MISS RATIO"
       FROM V$LIBRARYCACHE;
Pins    - the number of times an item in the library cache was executed
Reloads - the number of library cache misses on execution steps
If the miss ratio is high (as a rule of thumb, above 1%), reduce library cache misses by increasing the SHARED_POOL_SIZE=<bytes> parameter in INIT.ORA.
19.3.5 Striping
Striping is the practice of dividing a large table's data into small portions and storing these portions in separate data files on separate disks. This permits multiple processes to access different portions of the table concurrently without disk contention. Striping is particularly helpful in optimizing random access to tables with many rows. Striping can be done manually, as below:
SQL> CREATE TABLESPACE test
     DATAFILE '/disk1/oradata/DEMO/u1.dbf' SIZE 5M,
              '/disk2/oradata/DEMO/u2.dbf' SIZE 5M,
              '/disk3/oradata/DEMO/u3.dbf' SIZE 5M
     DEFAULT STORAGE (MINEXTENTS 3);
SQL> CREATE TABLE junk (a NUMBER) STORAGE (MINEXTENTS 3) TABLESPACE test;
Then insert a few thousand rows into the table. While the insertion is going on, observe the status of the files in V$FILESTAT:
SQL> SELECT file_id FROM dba_data_files;
SQL> SELECT * FROM V$FILESTAT;
If you have many data files but only one DBWR process, performance may decrease, so increase the number of DBWR slaves:
SQL> SHUTDOWN
$ vi init.ora          # set the following parameters
DISK_ASYNCH_IO=FALSE
DBWR_IO_SLAVES=3
:wq
SQL> STARTUP
$ ps ux | grep ora_    # observe that there are extra slave processes
19.3.6 Parallel Query Option
Without the parallel query option, Oracle processes a SQL statement with a single server process. With the parallel query option, multiple processes can work together simultaneously to process a single SQL statement; this capability is called the parallel query option. The Oracle server can then process the statement more quickly than a single server process could, since query processing can be split effectively among many CPUs on a single system.
$ vi init.ora
PARALLEL_MIN_SERVERS=1
PARALLEL_MAX_SERVERS=10
PARALLEL_SERVER_IDLE_TIME=4   # default, in minutes
:wq
SQL> SHUTDOWN
SQL> STARTUP
SQL> CREATE TABLE junk (a NUMBER) PARALLEL 5;
SQL> SELECT table_name, degree FROM USER_TABLES;
SQL> -- insert many rows
SQL> SELECT * FROM junk;
From any other terminal using the OS command, you can see the ora_pxxx_DEMO processes:
$ ps ux|grep ora_
19.3.6.1 Data Dictionary Views to Query
V$PX_PROCESS
V$PX_SESSION
V$PX_PROCESS_SYSSTAT
19.3.7 Table Cache
To cache a table, specify the CACHE clause in either the CREATE TABLE or ALTER TABLE command. If a table is marked as cached, that table's blocks are treated as the most recently used blocks in the data block buffer cache, even if they are read via a full table scan; thus you can avoid having your small tables' blocks frequently removed from the data block buffer cache. The example below marks the TEST table as a cache table: the first time its blocks are read into the data block buffer cache, they are marked as the most recently used blocks in the cache.
SQL> CREATE TABLE test (a NUMBER) CACHE;   /* table created up-front with the CACHE option */
-- or, for a later decision to implement caching:
SQL> CREATE TABLE test (a NUMBER);
SQL> ALTER TABLE test CACHE;
To deactivate the cache status of the table use the NOCACHE clause:
SQL> ALTER TABLE test NOCACHE;
19.4 Online Segment Shrinking
The online SHRINK command is a feature for reclaiming fragmented free space below the high water mark in an Oracle Database segment. Shrink operations can be performed only on segments in locally managed tablespaces with automatic segment space management (ASSM). To shrink a table and all its dependent segments:
SQL> ALTER TABLE emp ENABLE ROW MOVEMENT;
SQL> ALTER TABLE emp SHRINK SPACE CASCADE;
SQL> ALTER TABLE emp SHRINK SPACE COMPACT;
SQL> ALTER TABLE emp SHRINK SPACE;
20 MEMORY MANAGEMENT
20.1 Introduction
Memory is a critical system resource with a significant impact on the overall performance of the Oracle database. Database administrators therefore closely monitor system memory utilization to ensure its optimal use. Continuing its quest to make the management of Oracle databases simple, Oracle 9i includes a number of features that automate memory management, provide optimal out-of-box performance and allow dynamic reconfiguration. Oracle 9i's enhanced memory management enables faster query execution, ensures the most efficient utilization of available memory, and facilitates better integration with operating system resource managers.
20.2 Dynamic System Global Area (SGA)
Oracle 9i makes it simple to add memory to and remove memory from an instance. Database administrators can change the size of the buffer cache and the shared pool without having to restart the instance. By allowing such changes to be made online, the dynamic SGA feature helps administrators increase database availability. It also makes it possible to dynamically change the memory available to an Oracle instance, resulting either from changes in system hardware or from changes to OS resource manager allocations. The SGA configuration of a running instance can be altered by changing the values of the parameters governing the sizes of the buffer cache (DB_CACHE_SIZE) and the shared pool (SHARED_POOL_SIZE). Both these parameters are dynamic in Oracle 9i and their values can be modified using the ALTER SYSTEM command.
20.3 Buffer Cache Advisory
Oracle 9i also includes an advisory to help DBAs size the buffer cache optimally. This advisory relies on an internal simulation based on the current workload to predict the cache miss rates for various sizes of the buffer cache, ranging from 10% to 200% of the current cache size. These predictions are published through a new view, V$DB_CACHE_ADVICE.
This view can be used to determine whether the current size of the buffer cache should be increased or decreased to ensure optimal performance for the present workload. By providing a deterministic way to size the buffer cache, Oracle 9i takes the guesswork out of database memory configuration, thereby eliminating the waste caused by memory over-allocation. The buffer cache advisory is turned off by default, since there is a minor performance overhead associated with the data collection and cache simulation. It can be turned on by setting the parameter DB_CACHE_ADVICE=ON. Changing the value of this parameter to READY stops further data collection but preserves the advisory results in the V$DB_CACHE_ADVICE view. The contents of this view are cleared when the advisory is turned off by setting DB_CACHE_ADVICE=OFF.
SQL> SELECT NAME, BLOCK_SIZE, BUFFERS_FOR_ESTIMATE, EST_PHYSICAL_READ_FACTOR FROM V$DB_CACHE_ADVICE;
21 OPTIMIZATION
21.1 Introduction
Optimization is the process of choosing the most efficient way to execute a SQL statement. This is an important step in the processing of any data manipulation language statement (SELECT, INSERT, UPDATE or DELETE), and it is done by the Optimizer. The Optimizer formulates execution plans and chooses the most efficient plan before executing a statement. There are two types of optimizer:
Rule based (obsolete): Using this approach, the Optimizer chooses an execution plan based on the access paths available and the ranks of these paths.
Cost based: Using this approach, the Optimizer considers the available access paths and factors in information based on the statistics in the data dictionary for the objects (tables, clusters or indexes) accessed by the statement, to determine which execution plan is most efficient. The ANALYZE command generates these statistics, and the cost-based approach is effective only on tables that have been analyzed. The cost-based approach also considers hints. It has three options, CHOOSE, ALL_ROWS and FIRST_ROWS, which can be enabled with the following commands:
SQL> ALTER SESSION SET OPTIMIZER_MODE=ALL_ROWS;
SQL> ALTER SESSION SET OPTIMIZER_MODE=FIRST_ROWS;
There are 15 access paths. They are:
Single row by rowid
Single row by cluster join
Single row by hash cluster key with unique or primary key
Single row by unique or primary key
Cluster join
Hash cluster key
Indexed cluster key
Composite key
Single column indexes
Bounded range search on indexed columns
Unbounded range search on indexed columns
Sort merge join
MAX or MIN of indexed column
ORDER BY on indexed columns
Full table scans
21.2 Execution Plan Steps This section deals with the following tables:
COMPANY table:
  company_id         NUMBER (primary key)
  name               VARCHAR2
  address            VARCHAR2
  city               VARCHAR2 (index)
  state              VARCHAR2 (index)
  parent_company_id  NUMBER (index)
COMPETITOR table:
  company_id         NUMBER (primary key, foreign key to COMPANY.company_id)
  product_id         NUMBER (primary key)
SALES table:
  company_id
  period_id
  sales_total
SQL> @$ORACLE_HOME/rdbms/admin/utlxplan
This script creates the PLAN_TABLE.
21.3 Types of Operations
1. AND-EQUAL: It merges sorted lists of values returned by indexes.
SQL> EXPLAIN PLAN SET STATEMENT_ID = 'test1' FOR
     SELECT name, city, state FROM company
      WHERE city = 'roanoke' AND state = 'VA';
SQL> SELECT LPAD(' ', 2 * level) || operation || ' ' || options || ' ' || object_name
       FROM plan_table
      WHERE statement_id = 'test1'
    CONNECT BY PRIOR id = parent_id AND statement_id = 'test1'
      START WITH id = 1;
Note: For every plan operation, issue the above PLAN_TABLE query with a different statement id.
2. CONCATENATION: It does a UNION ALL of result sets.
SQL> EXPLAIN PLAN SET STATEMENT_ID = 'test2' FOR
     SELECT name, city, state FROM company
      WHERE state = 'TX' AND city IN ('Houston', 'Austin', 'Dallas');
3. FILTER: It applies a WHERE clause condition when no index can be used to assist in the evaluation.
SQL> EXPLAIN PLAN SET STATEMENT_ID = 'test3' FOR
     SELECT company_id, name FROM company
      WHERE state = 'VA'
    CONNECT BY parent_company_id = PRIOR company_id
      START WITH company_id = 1;
4. HASH JOIN: It joins tables by creating an in-memory hash table from one of the tables and then using a hashing function to locate the matching join rows in the second table.
SQL> EXPLAIN PLAN SET STATEMENT_ID = 'test4' FOR
     SELECT company.name FROM company, sales
      WHERE company.company_id = sales.company_id
        AND sales.period_id = 3 AND sales.sales_total > 1000;
6. NESTED LOOPS: Nested loops join tables via row-by-row access operations when at least one of the join columns is indexed.
SQL> EXPLAIN PLAN SET STATEMENT_ID = 'test6' FOR
     SELECT company.name FROM company, sales
      WHERE company.company_id = sales.company_id
        AND sales.period_id = 3 AND sales.sales_total > 1000;
7. USING HINTS: Hints are suggestions that you give the optimizer for optimizing a SQL statement. You can use hints to specify:
The optimization approach for a SQL statement
The goal of the cost-based approach for a SQL statement
The access path for a table accessed by the statement
The join order for a join statement
A join operation in a join statement
Note: The syntax for hints is similar to a SQL comment; the only difference is the addition of the + sign to indicate the start of the hint. The + cannot be preceded by a space.
8. ALL_ROWS: It is used to minimize the time it takes for all rows to be returned by the query.
SQL> SELECT /*+ ALL_ROWS */ Company.name From company, sales WHERE company.company_id = sales.company_id AND sales.period_id = 3 AND sales.sales_total > 1000;
9. FIRST_ROWS: It tells the optimizer to optimize the query with the goal of the shortest response time for the return of the first row from the query.
SQL> SELECT /*+ FIRST_ROWS */ company.name FROM company, sales WHERE company.company_id = sales.company_id AND sales.period_id = 3 AND sales.sales_total > 1000;
10. FULL: The FULL hint tells the optimizer to perform a full table scan (TABLE ACCESS FULL) operation on the specified table.
SQL> SELECT /*+ FULL (COMPANY) */ name, city, state FROM company WHERE city = 'ROANOKE' AND state = 'VA';
11. RULE: The rule hint tells the optimizer to use rule-based optimization for a query.
SQL> SELECT /*+ RULE */ company.name FROM company, sales WHERE company.company_id = sales.company_id AND sales.period_id = 3 AND sales.sales_total > 1000;
12. INDEX: The INDEX hint can be used in three different ways: If a single index is listed, that index will be used. If multiple indexes are listed, the optimizer will choose which of them to use. If a table is listed but no indexes are listed, the optimizer will choose an index or indexes to use for that table.
SQL> SELECT /*+ INDEX (company) */ name, city, state FROM company WHERE city = 'roanoke' AND state = 'VA';
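The other two forms can be sketched as follows (the index names company_city_idx and company_state_idx are hypothetical, for illustration only):

SQL> SELECT /*+ INDEX (company company_city_idx) */ name, city, state FROM company WHERE city = 'roanoke' AND state = 'VA';
SQL> SELECT /*+ INDEX (company company_city_idx company_state_idx) */ name, city, state FROM company WHERE city = 'roanoke' AND state = 'VA';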
13. CACHE: The CACHE hint, when used for a table in a query, tells Oracle to treat the table as a cached table; i.e., it tells Oracle to keep the blocks from the full table scan of the table in the SGA's data block buffer cache area, instead of quickly removing them from the SGA.
SQL> SELECT /*+ FULL (competitor) CACHE (competitor) */ * FROM competitor WHERE company_id > 5;
SQL> SELECT /*+ FULL (competitor) NOCACHE (competitor) */ * FROM competitor WHERE company_id > 5;
21.4 The trcsess Utility The trcsess utility consolidates trace output from selected trace files based on several criteria:
Session id
Client id
Service name
Action name
Module name
After trcsess merges the trace information into a single output file, the output file can be processed by TKPROF. Syntax: The syntax for the trcsess utility is:
trcsess [output=output_file_name] [session=session_id] [clientid=client_id] [service=service_name] [action=action_name] [module=module_name] [trace_files]
output      Specifies the file where the output is generated. If this option is not specified, then standard output is used.
session     Consolidates the trace information for the session specified. The session identifier is a combination of session index and session serial number, such as 21.2371. You can locate these values in the V$SESSION view.
clientid    Consolidates the trace information for the given client id.
service     Consolidates the trace information for the given service name.
action      Consolidates the trace information for the given action name.
module      Consolidates the trace information for the given module name.
trace_files A list of trace file names, separated by spaces, in which trcsess should look for trace information. The wildcard character * can be used in the trace file names. If trace files are not specified, all the files in the current directory are taken as input.
One of the session, clientid, service, action, or module options must be specified. If more than one of these options is specified, then the trace files that satisfy all the specified criteria are consolidated into the output file. 21.4.1 Sample Output of trcsess This sample output of trcsess shows the consolidation of traces for a particular session. In this example, the session index and serial number equal 21.2371. trcsess can be invoked with various options. In the following case, all files in the current directory are taken as input:
$ trcsess session=21.2371
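In practice you often combine criteria, name the output file, and then feed the result to TKPROF. A hedged sketch (the output file main.trc and service name orcl are illustrative):

$ trcsess output=main.trc service=orcl *.trc
$ tkprof main.trc main.prf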
21.5 Active Session History Reports Use Active Session History (ASH) reports to perform analysis of:
Transient performance problems that typically last for a few minutes
Scoped or targeted performance analysis by various dimensions or their combinations, such as time, session, module, action or SQL_ID
You can view ASH reports using Enterprise Manager or by running the following SQL scripts:
The ashrpt.sql SQL script generates an HTML or text report that shows ASH information for a specified duration.
The ashrpti.sql SQL script generates an HTML or text report that displays ASH information for a specified duration for a specified database instance.
The reports are divided into multiple sections. The HTML report includes links that can be used to navigate quickly between sections. The content of the report contains ASH information used to identify blocker and waiter identities, their associated transaction identifiers, and SQL for a specified duration. 21.5.1 Running the ashrpt.sql Report To generate a text report of ASH information, run the ashrpt.sql script at the SQL prompt:
@$ORACLE_HOME/rdbms/admin/ashrpt.sql
1. First, you need to specify whether you want an HTML or a text report.
Enter value for report_type: text
2. Specify the time frame over which to collect ASH information by entering the begin time in minutes prior to the system date.
Enter value for begin_time: -10
3. Next, enter the duration in minutes for which you want to capture ASH information, starting from the begin time. The default duration (system date minus begin time) is accepted in the following example:
Enter value for duration:
4. The report in this example will gather information beginning from 10 minutes before the current time and ending at the current time. Next, accept the default report name or enter a report name. The default name is accepted in the following example:
Enter value for report name: Using the report name ashrpt_1_0310_0131.txt
21.6 Automatic Workload Repository Reports You can view the AWR reports with Oracle Enterprise Manager or by running SQL scripts. To run an AWR report, a user must be granted the DBA role. The reports are divided into multiple sections. The HTML report includes links that can be used to navigate quickly between sections. The content of the report contains the workload profile of the system for the selected range of snapshots. Running Workload Repository Reports Using SQL Scripts You can view AWR reports by running the following SQL scripts:
The awrrpt.sql SQL script generates an HTML or text report that displays statistics for a range of snapshot IDs.
The awrrpti.sql SQL script generates an HTML or text report that displays the statistics for a range of snapshot IDs on a specified database instance.
21.6.1 Running the awrrpt.sql Report To generate an HTML or text report for a range of snapshot IDs, run the awrrpt.sql script at the SQL prompt:
@$ORACLE_HOME/rdbms/admin/awrrpt.sql
1. First, you need to specify whether you want an HTML or text report.
Enter value for report_type: text
2. Specify the number of days for which you want to list snapshot Ids.
Enter value for num_days: 2
3. After the list displays, you are prompted for the beginning and ending snapshot Id for the workload repository report.
Enter value for begin_snap: 150 Enter value for end_snap: 160
4. Next, accept the default report name or enter a report name. The default name is accepted in the following example:
Enter value for report_name: Using the report name awrrpt_1_150_160
5. The workload repository report is generated. 21.6.2 Running the awrrpti.sql Report To specify a database and instance before entering a range of snapshot IDs, run awrrpti.sql script at the SQL prompt to generate an HTML or text report:
@$ORACLE_HOME/rdbms/admin/awrrpti.sql
1. First, specify whether you want an HTML or a text report. After that, a list of the database identifiers and instance number displays, similar to the following:
Instances in this Workload Repository Schema
   DB Id    Inst Num  DB Name    Instance   Host
----------- -------- ---------- ---------- -----------
 3309173529     1     MAIN       main       DLSUN1690
 3309173529     1     TINT251    tint251    stint251
2. Enter values for the database identifier (dbid) and instance number (inst_num) at the prompts:
Enter value for dbid: 3309173529 Using 3309173529 for database Id Enter value for inst_num: 1
3. Next, you are prompted for the number of days and snapshot IDs, similar to the awrrpt.sql script, before the text report is generated.
Note: The Rule-Based Optimizer is deprecated in 10g.
The LGWR process records the transaction in the online redo log file only on a subsequent commit request from the user. If the request is successful, the server sends a message across the network to the user; otherwise, an appropriate error message is transmitted. Throughout this entire procedure, the other background processes are running and watching for any conditions that require intervention. In addition, Oracle is managing other transactions and preventing contention between different transactions that request the same data.
22.3 Using Shared Server Process A database server is currently running Oracle using the multi-threaded server configuration. A client workstation runs a database application such as Oracle Forms and attempts to establish a connection to the server using the Oracle Net driver. The database server is currently running the proper Oracle Net driver. The listener process on the database server detects the connection request from the client application and determines how the user process should be connected to an available dispatcher. The user issues a SQL statement; for example, the user updates a row in a table. The dispatcher process places the user process's request on the request queue, which is in the SGA and shared by all dispatcher processes. An available shared server process checks the common request queue and picks up the next SQL statement on the queue. It then processes the SQL statement as described in the previous example. Once the shared server process finishes processing the SQL statement, it places the result on the response queue of the dispatcher process that sent the request. The dispatcher process checks its response queue and sends the completed request back to the user process that made the request. To implement shared server, set these parameters in init.ora:
DISPATCHERS = (PROTOCOL=TCP)(DISP=2) # minimum dispatchers for TCP to be started MAX_DISPATCHERS = 5 # maximum number of dispatchers you want to start SHARED_SERVERS = 2 # minimum number of server process you want to start MAX_SHARED_SERVERS = 5 # maximum number of server processes LOCAL_LISTENER = (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.12)(PORT=1521)) $ lsnrctl start $ sqlplus /nolog SQL> CONNECT / as sysdba SQL> SHUTDOWN SQL> STARTUP SQL> EXIT $ sqlplus system/manager@alias #alias name in tnsnames.ora
22.4 Simplified Shared Server Configuration Oracle 10g is shared-server aware by default, and a value greater than zero for the SHARED_SERVERS parameter enables the feature. The SHARED_SERVERS parameter is dynamic and hence the feature can be enabled or disabled without restarting the instance.
SQL> ALTER SYSTEM SET SHARED_SERVERS=4 SCOPE=BOTH;
Note: In Oracle 10g you need not explicitly configure a dispatcher; by default, one dispatcher is started.
22.4.1 To Check Whether Shared Server Is Working 1. From the operating system you can give this command to see whether SHARED SERVER is working:
$ ps x | grep ora_ # dedicated server connections show up as (LOCAL=YES) or (LOCAL=NO); these should not appear for shared server connections
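You can also verify from inside the database by querying the shared server dynamic performance views (a sketch; the columns shown exist in these standard views, but the output depends on your configuration):

SQL> SELECT name, status FROM v$dispatcher;
SQL> SELECT name, status, requests FROM v$shared_server;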
2. To move table partitions: You can use the MOVE PARTITION clause to move a partition.
SQL> ALTER TABLE parts MOVE PARTITION dept2 TABLESPACE ts_2000 NOLOGGING;
# This statement always drops the partition's old segment and creates a new segment, even if you don't specify a new tablespace. 3. To add table partitions
SQL> ALTER TABLE sales ADD PARTITION jan2001 VALUES LESS THAN (2002, 02, 01) TABLESPACE tse;
4. To drop table partitions: Delete the rows from the partition before dropping the partition.
SQL> DELETE FROM sales WHERE transid < 10000; SQL> ALTER TABLE sales DROP PARTITION dec2000;
6. To truncate table partitions: You can use the ALTER TABLE TRUNCATE PARTITION statement to remove all rows from a table partition, with or without reclaiming space.
SQL> ALTER TABLE sales TRUNCATE PARTITION dec2000;
(OR)
SQL> ALTER TABLE sales DISABLE CONSTRAINT dname_sales1;
SQL> ALTER TABLE sales TRUNCATE PARTITION dec2000;
SQL> ALTER TABLE sales ENABLE CONSTRAINT dname_sales1;
(OR)
SQL> DELETE FROM sales WHERE TRANSID < 10000;
SQL> ALTER TABLE sales TRUNCATE PARTITION dec2000;
7. To split table partitions: You can split a table partition by issuing the ALTER TABLE SPLIT PARTITION statement.
SQL> ALTER TABLE sales SPLIT PARTITION sales_q4 AT (2000, 12, 01) INTO (PARTITION sales_q4, PARTITION sales_q5);
8. To merge table partitions: You can use either of the following strategies to merge table partitions. To merge partition OSU1 into partition OSU2: a. Export the data from OSU1 b. Issue the following statement
SQL> ALTER TABLE oh DROP PARTITION osu1;
c. Import the data from step (a) into partition OSU2. Another way to merge partition OSU1 into partition OSU2: a. Exchange partition OSU1 of table OH with dummy table COLS b. Issue the following statement:
SQL> ALTER TABLE OH DROP PARTITION OSU1;
c. Use INSERT ... SELECT from the dummy table to move the data from OSU1 back into OSU2.
23.3.2 Hash Partitioning Hash partitioning partitions the data according to a hash value. It controls the physical placement of data across a fixed number of partitions and gives you a highly tunable method of data placement. 1. To create a hash partition
SQL> CREATE TABLE purchases (inv_no NUMBER, day NUMBER, month NUMBER, year NUMBER) PARTITION BY HASH (day, month, year) (PARTITION h1 TABLESPACE ts1, PARTITION h2 TABLESPACE ts2, PARTITION h3 TABLESPACE ts3);
Note:
Splitting, dropping, merging the existing partitions is not possible with Hash Partitioned tables.
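Although splitting, dropping, and merging are not supported, you can add or coalesce hash partitions. A hedged sketch against the PURCHASES table above (the partition and tablespace names are illustrative):

SQL> ALTER TABLE purchases ADD PARTITION h4 TABLESPACE ts4;
SQL> ALTER TABLE purchases COALESCE PARTITION;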
23.3.3 List Partitioning List partitioning complements the functionality of range partitioning, which is often used in data warehouses. Range partitioning is useful for segmenting a table along a continuous domain, such as time. Tables in data warehouses are often range-partitioned by time, so that each range partition contains the data for a given range of time values, such as one partition per month or per week. In contrast, list partitioning is useful for segmenting a table along a discrete domain, such as products. Each partition in a list-partitioned table contains data matching a list of discrete partition-key values. For example, suppose that a data warehouse for a large corporation contains data for many different countries. The data warehouse administrator could choose to list-partition a large table by the regions Europe, North America, South America, and Asia. 1. Create a LIST-partitioned table:
SQL> CREATE TABLE sales_by_region (item# NUMBER, qty NUMBER, store_name VARCHAR2(30), state_code VARCHAR2(4), sale_date DATE) PARTITION BY LIST (state_code) (PARTITION region_east VALUES ('WB','AM','MZ','NGL') TABLESPACE ts1, PARTITION region_west VALUES ('GT','MH','MP','RT') TABLESPACE ts2, PARTITION region_north VALUES ('JM','ND','UP','HP','PJB') TABLESPACE ts3);
3. This example MERGEs two partitions into a partition that will inherit all of its attributes from the table-level attributes, except for PCTFREE and MAXEXTENTS which are specified in the statement:
SQL> ALTER TABLE sales_by_region MERGE PARTITIONS region_east, region_west INTO PARTITION region_east_west PCTFREE 50 STORAGE (MAXEXTENTS 20);
4. This is a new statement which can be used to extend the partition value list of an existing partition to contain additional literal values:
SQL> ALTER TABLE sales_by_region MODIFY PARTITION region_east ADD VALUES ('BIH');
5. This statement DROPs a set of state codes from the existing REGION_EAST partition value-list.
SQL> DELETE FROM sales_by_region WHERE state_code IN ('MZ');
SQL> ALTER TABLE sales_by_region MODIFY PARTITION region_east DROP VALUES ('MZ');
23.3.4 Composite Range-Hash Partitioning Tables Attributes specified for a range partition apply to all sub-partitions of that partition. You can specify different attributes for each range partition, and you can specify a STORE IN clause at the partition level if the list of tablespaces across which the sub-partitions of that partition should be spread is different from those of other partitions. All of this is illustrated in the following example.
SQL> CREATE TABLE emp (deptno NUMBER, empname VARCHAR2(32), grade NUMBER) PARTITION BY RANGE (deptno) SUBPARTITION BY HASH (empname) SUBPARTITIONS 8 STORE IN (ts1, ts3, ts5, ts7) (PARTITION p1 VALUES LESS THAN (1000), PARTITION p2 VALUES LESS THAN (2000) STORE IN (ts2, ts4, ts6, ts8), PARTITION p3 VALUES LESS THAN (MAXVALUE) (SUBPARTITION p3_s1 TABLESPACE ts4, SUBPARTITION p3_s2 TABLESPACE ts5));
23.3.5 Composite Range-List Partitioned Tables The concept of range-list partitioning is similar to that of the other composite partitioning method, range-hash, but this time you can specify that the sub-partitions are to be list rather than hash. Specifically, after the CREATE TABLE PARTITION BY RANGE clause, you can include a SUBPARTITION BY LIST clause that follows similar syntax and rules as the PARTITION BY LIST clause. The individual PARTITION and SUBPARTITION clauses, and optionally a SUBPARTITION TEMPLATE clause, follow.
SQL> CREATE TABLE sample_regional_sales
       (deptno NUMBER, item_no VARCHAR2(20), txn_date DATE, txn_amount NUMBER, state VARCHAR2(2))
     PARTITION BY RANGE (txn_date)
     SUBPARTITION BY LIST (state)
     (PARTITION q1_1999 VALUES LESS THAN (TO_DATE('1-APR-1999','DD-MON-YYYY')) TABLESPACE tbs_1
        (SUBPARTITION q1_1999_northwest VALUES ('OR', 'WA'),
         SUBPARTITION q1_1999_southwest VALUES ('AZ', 'UT', 'NM'),
         SUBPARTITION q1_1999_northeast VALUES ('NY', 'VM', 'NJ'),
         SUBPARTITION q1_1999_southeast VALUES ('FL', 'GA'),
         SUBPARTITION q1_others VALUES (DEFAULT) TABLESPACE tbs_4),
      PARTITION q2_1999 VALUES LESS THAN (TO_DATE('1-JUL-1999','DD-MON-YYYY')) TABLESPACE tbs_2
        (SUBPARTITION q2_1999_northwest VALUES ('OR', 'WA'),
         SUBPARTITION q2_1999_southwest VALUES ('AZ', 'UT', 'NM'),
         SUBPARTITION q2_1999_northeast VALUES ('NY', 'VM', 'NJ'),
         SUBPARTITION q2_1999_southeast VALUES ('FL', 'GA'),
         SUBPARTITION q2_1999_northcentral VALUES ('SD', 'WI'),
         SUBPARTITION q2_1999_southcentral VALUES ('OK', 'TX')),
      PARTITION q3_1999 VALUES LESS THAN (TO_DATE('1-OCT-1999','DD-MON-YYYY')) TABLESPACE tbs_3
        (SUBPARTITION q3_1999_northwest VALUES ('OR', 'WA'),
         SUBPARTITION q3_1999_southwest VALUES ('AZ', 'UT', 'NM'),
         SUBPARTITION q3_others VALUES (DEFAULT) TABLESPACE tbs_4),
      PARTITION q4_1999 VALUES LESS THAN (TO_DATE('1-JAN-2000','DD-MON-YYYY')) TABLESPACE tbs_4);
Exercises: 1. Exchanging Table Partitions You can convert a partition into a non-partitioned table, and a table into a partition of a partitioned table, by exchanging their data (and index) segments. 2. Merging Adjacent Table Partitions: The following scenario describes how to merge two adjacent table partitions. Suppose you have to merge two partitions, FEB95 and MAR95, of the SALES table by moving the data from the FEB95 partition into the MAR95 partition. 23.3.5.1 To MERGE the two Table Partitions 1. Create a temporary table to hold the FEB95 partition data.
SQL> CREATE TABLE sales_feb95 (.) TABLESPACE ts_temp STORAGE (INITIAL 2);
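The exchange between the FEB95 partition and the SALES_FEB95 table, implied before the next step, can be sketched as (assuming the table and partition names above):

SQL> ALTER TABLE sales EXCHANGE PARTITION feb95 WITH TABLE sales_feb95;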
2. Exchange the FEB95 partition with the SALES_FEB95 table; the placeholder segment of the SALES_FEB95 table is now attached to the FEB95 partition. 3. Drop the FEB95 partition; this frees the segment originally owned by the SALES_FEB95 table.
SQL> ALTER TABLE sales DROP PARTITION feb95;
4. Move the data from the SALES_FEB95 table into the MAR95 partition via an INSERT statement.
SQL> INSERT INTO sales PARTITION (mar95) SELECT * FROM sales_feb95;
Note: Using the extended table name here is more efficient: instead of attempting to compute the partition to which a row belongs, Oracle only verifies that it belongs to the specified partition. 5. Drop the SALES_FEB95 table to free the segment originally associated with the FEB95 partition.
SQL> DROP TABLE sales_feb95;
23.3.5.2 Creating a View from Partitioned Tables 1. You cannot select data from two partitions of a table at a time; to do this, create a partition view based on the two partitions:
SQL> CREATE VIEW account AS SELECT * FROM accounts_jan95 UNION ALL SELECT * FROM accounts_feb95;
23.3.5.3 To Incrementally Migrate the Partition View to a Partitioned Table 1. Initially, only the two most recent partitions, ACCOUNTS_NOV95 and ACCOUNTS_DEC95, will be migrated from the view to the table by creating the partitioned table. Each partition gets a temporary segment of 2 blocks (as a placeholder).
SQL> CREATE TABLE accounts_new (.) TABLESPACE ts_temp STORAGE (INITIAL 2) PARTITION BY RANGE (opening_date) (PARTITION jan95 VALUES LESS THAN (950201), ... PARTITION dec95 VALUES LESS THAN (960101));
2. Use the EXCHANGE command to migrate the tables to the corresponding partitions.
SQL> ALTER TABLE accounts_new EXCHANGE PARTITION nov95 WITH TABLE accounts_NOV95 WITH VALIDATION;
SQL> ALTER TABLE accounts_new EXCHANGE PARTITION dec95 WITH TABLE accounts_DEC95 WITH VALIDATION;
So now, the placeholder data segments associated with the NOV95 and DEC95 partitions have been exchanged with the data segments associated with the ACCOUNTS_NOV95 and ACCOUNTS_DEC95 tables. 3. Redefine the ACCOUNTS view.
SQL> CREATE OR REPLACE VIEW accounts AS SELECT * FROM accounts_JAN95 UNION ALL SELECT * FROM accounts_FEB95 UNION ALL ... UNION ALL SELECT * FROM accounts_new PARTITION (nov95) UNION ALL SELECT * FROM accounts_new PARTITION (dec95);
4. Drop the ACCOUNTS_NOV95 and ACCOUNTS_DEC95 tables, which own the placeholder segments that were originally attached to the NOV95 and DEC95 partitions. 5. After all the tables in the UNION ALL view are converted into partitions, drop the view and rename the partitioned table to the view's name.
SQL> DROP VIEW accounts; SQL> RENAME accounts_new TO accounts;
23.4 Managing Partitioned Indexes Indexes can be either partitioned or non-partitioned. Database administrators and application developers need to analyze their indexing needs for their application:
Type of access to data through applications
Performance in accessing data
Availability in case of disk failure
Whether parallel operations are possible
All of these issues will influence your choice of an indexing strategy.
23.4.1 Types of Indexes Oracle supports four types of indexes that you can use according to your specific needs:
Non-partitioned indexes
Global prefixed indexes
Local prefixed indexes
Local non-prefixed indexes
Examples: 1. Create table SALES partitioned by RANGE
SQL> CREATE TABLE sales (acct_no NUMBER(5) NOT NULL, person VARCHAR2(30), sales_amount NUMBER(8) NOT NULL, week_no NUMBER(2) NOT NULL) PARTITION BY RANGE (week_no) (PARTITION p1 VALUES LESS THAN (16), PARTITION p2 VALUES LESS THAN (36), PARTITION p3 VALUES LESS THAN (53));
Note:
In the example, you could leave out the partition name and tablespace name. In that case, the partition name would be the same as the partition name for the corresponding table partition, and the partition would be placed in the same tablespace as the table partition the index is associated with.
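The note above describes LOCAL index behavior; a minimal hedged sketch of creating such an index on the SALES table (the index name sales_local_idx is illustrative):

SQL> CREATE INDEX sales_local_idx ON sales (week_no) LOCAL;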
A global index can only be range partitioned, but can be defined on any type of partitioned table. In a global index, one index partition may refer to more than one underlying table partition. 4. Create PREFIXED indexes:
SQL> CREATE INDEX sales_idx ON sales (acct_no, person) GLOBAL PARTITION BY RANGE (acct_no) (PARTITION pi1 VALUES LESS THAN (32), PARTITION pi2 VALUES LESS THAN (MAXVALUE));
Note: An index is PREFIXED if the leftmost columns of the index are based on exactly the same columns as the partition key of the index.
Note: An index is NON-PREFIXED if the leftmost columns of the index are not based on the same columns as the partition key of the index.
6. To add index partitions: You cannot explicitly add a partition to a local index. Instead, new partitions are added to local indexes only when you add a partition to the underlying table. You cannot add a partition to a global index because the highest partition always has a partition bound of MAXVALUE. 7. To drop index partitions: You cannot explicitly drop a partition from a local index.
SQL> ALTER INDEX npr REBUILD PARTITION P2;
8. To split Index Partitions: You cannot explicitly split a partition in a local index. You can use the ALTER INDEX SPLIT PARTITION statement to split a partition in a global index if the partition is empty.
SQL> ALTER INDEX quon1 SPLIT PARTITION canada AT (100) INTO (PARTITION canada1 TABLESPACE ts11, PARTITION canada2 TABLESPACE ts12);
SQL> ALTER INDEX quon1 REBUILD PARTITION canada1;
SQL> ALTER INDEX quon1 REBUILD PARTITION canada2;
Note:
If the split partition contains data, SPLIT PARTITION marks both resulting index partitions UNUSABLE. You must issue ALTER INDEX REBUILD PARTITION to rebuild each partition before using it.
23.5 Data Dictionary Views
DBA_PART_INDEXES
DBA_TAB_PARTITIONS
DBA_IND_PARTITIONS
DBA_INDEXES
DBA_OBJECTS
DBA_SEGMENTS
SQL> SELECT a.table_name, a.partition_name, b.partitioning_type, a.tablespace_name, a.high_value FROM user_tab_partitions a, user_part_tables b WHERE a.table_name=b.table_name ORDER BY a.table_name, a.partition_name; SQL> SELECT a.index_name, a.partition_name, b.partitioning_type, a.tablespace_name, a.high_value, b.locality FROM user_ind_partitions a, user_part_indexes b WHERE a.index_name=b.index_name ORDER BY a.index_name, a.partition_name;
24.2 Hot Backup 24.2.1 Whole Database Backup A whole database backup can be performed while the database is up and running in ARCHIVELOG mode.
$ mkdir -p /disk5/oradata/DEMO/HOTBKUP
$ vi hot.sql
SET ECHO OFF
SET FEEDBACK OFF
SET HEADING OFF
SPOOL hot_backup.sql
SELECT 'spool hot.log' FROM dual;
SELECT 'alter database begin backup;'||chr(10)||
       '!cp '||file_name||' /disk5/oradata/DEMO/HOTBKUP'||chr(10)||
       'alter database end backup;'
FROM dba_data_files;
SELECT 'alter database backup controlfile to ''/disk5/oradata/demo/HOTBKUP/control.new'';' FROM dual;
SELECT 'alter system switch logfile;' FROM dual;
SPOOL OFF
@hot_backup.sql /* execute the SQL file just generated */
EXIT
:wq
$ cd /disk5/oradata/DEMO/HOTBKUP
$ ls -l
24.2.2 Hot Backup at Tablespace Level A hot backup is taken while the database is up and running in archive log mode. A hot backup can be taken tablespace by tablespace. You must put the tablespace in BEGIN BACKUP mode (using the ALTER TABLESPACE command) and, after finishing the backup, you must set it to END BACKUP mode. It is worth noting that a hot backup will generate a lot of redo entries.
$ mkdir -p /disk5/oradata/DEMO/HOTBKUP
$ vi hot.sql
SET ECHO OFF
SET FEEDBACK OFF
SET HEADING OFF
SPOOL hot_backup.sql
SELECT 'spool hot.log' FROM dual;
SELECT 'alter tablespace '||tablespace_name||' begin backup;'||chr(10)||
       '!cp '||file_name||' /disk5/oradata/DEMO/HOTBKUP'||chr(10)||
       'alter tablespace '||tablespace_name||' end backup;'
FROM dba_data_files;
SELECT 'alter database backup controlfile to ''/disk5/oradata/demo/HOTBKUP/control.net'';' FROM dual;
SELECT 'alter system switch logfile;' FROM dual;
SPOOL OFF
@hot_backup.sql /* execute the SQL file just generated */
EXIT
:wq
$ cd /disk5/oradata/DEMO/HOTBKUP
$ ls -l
24.3 Online Backup Mode for Entire Database Backup 1. If you are backing up all tablespaces' datafiles at once in online backup mode, the command is:
SQL> ALTER DATABASE BEGIN BACKUP;
You no longer need to place each tablespace into backup mode individually unless you want to back up only one tablespace at a time. You cannot take individual tablespaces offline, or put them into begin backup mode, while the whole database is in BEGIN BACKUP mode. 2. Back up all files of the online tablespaces. For example, a UNIX user might back up datafiles with the *.dbf suffix as follows:
$ cp $ORACLE_HOME/oradata/trgt/*.dbf /disk2/backup/
4. Archive the online redo logs so that the redo logs required to recover the tablespace backups will be available for later media recovery. For example, enter:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
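To confirm the archiving status and the most recently archived log sequence, you can use the standard SQL*Plus command:

SQL> ARCHIVE LOG LIST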
24.4 Recovery Recovery is of three types: online block recovery, thread recovery, and media recovery. In all three cases, the algorithm that applies the redo records against an individual block is the same. 24.4.1 Block Level Block recovery is performed automatically by Oracle during normal operation of the database; it is transparent to the user and does not involve any action from the DBA. When a process dies while changing a buffer, Oracle reconstructs the buffer using the online redo log files for the current thread and writes it to disk. The buffer header contains the information regarding the range of redo records that needs to be applied to it. If the PMON process is performing the block recovery, Oracle does not allow it to spend a large amount of time working on the recovery of one buffer. 24.4.2 Thread Recovery If an instance crashes while it has the database open, it is necessary to do thread recovery. This involves applying to the database all the redo changes in the thread that have occurred since the last time the thread was checkpointed, starting from the checkpoint SCN stored in the control file for that thread. Thread recovery is done either as part of instance recovery or crash recovery. Instance recovery is done while the database is open, when one instance detects that another instance is dead; this is possible only if you are running multiple instances, for example with the parallel server option. If you restart a dead instance before recovery is done, then Oracle will do crash recovery. In general, the first instance to open the database after an abnormal termination (shutdown abort or database crash) does crash recovery. 24.4.3 Media Recovery Media recovery is done in response to a recovery command issued by the DBA. It is used to make backup datafiles current, or to restore changes that were lost when a datafile went offline without a checkpoint.
For example, if you take a tablespace offline using the immediate option, the datafiles will go offline without a checkpoint being performed by Oracle. Media recovery can apply archived log files as well as online log files. 24.4.3.1 Online and Offline Recovery Recovery commands:
RECOVER DATABASE
RECOVER TABLESPACE
RECOVER DATAFILE
Syntax:
SQL> RECOVER [AUTOMATIC] [FROM location] DATABASE
       [UNTIL TIME date | UNTIL CANCEL | UNTIL CHANGE scn]
       [USING BACKUP CONTROLFILE];
24.4.3.2 Complete vs Incomplete Recovery Recovering the database from a media failure without losing any data is known as complete recovery; otherwise it is known as incomplete recovery. Complete recovery can be performed when all the redo log files, backup datafiles (for all the lost or damaged datafiles), and a current valid control file are available. Case 1. The database is running in NOARCHIVELOG mode, you lost a datafile because of media failure, and you take a cold backup every night. How will you recover the database? The scenario can be simulated as follows: 1. Take a cold backup of the database. Start up the database and perform some transactions. Now, using the HOST command, remove one datafile at the operating system level. Now abort the instance. 2. Now try to open the database; you will get an error stating that a particular datafile is missing. Now shut down the database, restore the previous night's backup, and open the database. You have lost today's transactions. This is complete recovery even though you lost today's work, because as far as the database is concerned, it did not lose anything that came from last night. It may appear incomplete to you, but it is still complete recovery for that point in time. Note: You cannot just restore the lost datafile from the previous backup and start up the database, because the database would be in an inconsistent state, so it will fail. Case 2. Everything is the same, except that the database is running in ARCHIVELOG mode. Here you restore the lost file from the previous night's backup, mount the database, and issue the command RECOVER DATAFILE with the AUTOMATIC option. Oracle will apply the relevant archived log files and online redo log files and then open the database. Here you have lost no data, hence it is complete recovery. Case 3. Everything is as above, except that you lost the online redo log files only. In this case, you have archived log files but not online redo log files,
so you can recover only up to the last available archived log file, by issuing RECOVER DATABASE UNTIL CANCEL. Cancel the media recovery immediately after applying the last archived log file, then open the database with the RESETLOGS option. This invalidates the previous log files. This is an incomplete recovery.
Steps:
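The cancel-based procedure described above can be sketched as follows (a hedged sketch; the prompts and the exact sequence of suggested logs are illustrative):

```sql
-- Restore all datafiles (not the online redo logs) from the last backup, then:
SQL> STARTUP MOUNT;
SQL> RECOVER DATABASE UNTIL CANCEL;
-- Apply each archived log that Oracle suggests; after the last available
-- archived log has been applied, type CANCEL to end media recovery.
SQL> ALTER DATABASE OPEN RESETLOGS;
```

Opening with RESETLOGS re-creates the online redo log files, which is why the previous log files are invalidated.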
Case 4.
The database is running in ARCHIVELOG mode and you take a cold backup every night. One day, a programmer accidentally dropped an important table (assume at 11:29:45 a.m.). You realized this at 2:00 p.m. Because this is a critical database, you have to recover the lost table without losing the other users' data.
Steps:
1. Switch the log file.
2. Shut down the database.
3. Take a cold backup of the current database (the new backup).
4. Restore only the datafiles from the previous night's backup, provided there are no structural changes to the database.
5. Mount the database and issue the command:
SQL> RECOVER AUTOMATIC DATABASE UNTIL TIME '2005-08-07:11:29:00' USING BACKUP CONTROLFILE;
# Note: Specify USING BACKUP CONTROLFILE only if you have restored the control file; otherwise use the current control file.
6. Open the database with the RESETLOGS option.
7. Export the table that the programmer dropped.
8. Shut down the database.
9. Restore the new backup.
10. Start up the database.
11. Import the table that was exported earlier.
Case 5. A DBA has lost both control files of a database that is in ARCHIVELOG mode. To recover the database, use the CREATE CONTROLFILE command.
Steps:
$ vi cr8ctl.sql
CREATE CONTROLFILE REUSE DATABASE "DEMO" RESETLOGS ARCHIVELOG
    LOGFILE
      GROUP 1 ('/disk1/oradata/DEMO/redolog1.log',
               '/disk2/oradata/DEMO/redolog2.log') SIZE 4M,
      GROUP 2 ('/disk1/oradata/DEMO/redolog3.log',
               '/disk2/oradata/DEMO/redolog4.log') SIZE 4M
    DATAFILE '/disk1/oradata/DEMO/system01.dbf';
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;

SQL> STARTUP NOMOUNT;
SQL> @cr8ctl.sql
SQL> ALTER DATABASE OPEN RESETLOGS;
25 RECOVERY MANAGER
25.1 INTRODUCTION
25.1.1 Costs and Benefits When Using a Recovery Catalog
When you use a recovery catalog, Recovery Manager can perform a variety of automated backup and recovery functions; however, Recovery Manager then requires that you maintain the recovery catalog schema and any associated space used by that schema. If you use a recovery catalog, you must decide which database will host the recovery catalog schema, and how you will back that database up.
The size of the recovery catalog schema:
- Depends on the number of databases monitored by the catalog
- Depends on the number and size of Recovery Manager scripts stored in the catalog
- Grows as the number of archived logs and backups for each database grows
If you use Recovery Manager to back up many databases, you may wish to create a separate recovery catalog database and create the recovery catalog in that database. You should also decide whether or not to operate this database in ARCHIVELOG mode.
If you have more than one database to back up, you can create more than one recovery catalog and have each database serve as the other's recovery catalog. For example, given two production databases, ACCT and PAY, you can install the recovery catalog for ACCT in the PAY database and the recovery catalog for PAY in the ACCT database. This avoids the extra space requirements and memory overhead of maintaining a separate recovery catalog database. However, this solution is not practical if the recovery catalogs for both databases reside in tablespaces on the same physical disk.
Note: You must install the recovery catalog schema in a database other than the target database you will be backing up. If you don't, the benefits of using a recovery catalog are lost if you lose that database and need to restore. It is difficult to restore and recover if you lose your control files and do not use a recovery catalog.
Note: When you have lost all control files, the only way to restore and recover the datafiles is to first re-create the control file manually.
25.1.2 Setting Up the Recovery Catalog Schema
When you use a recovery catalog, you need to set up the catalog schema. Oracle suggests you put the recovery catalog schema in its own tablespace; however, it can be put in the SYSTEM tablespace if necessary.
To set up the recovery catalog schema:
1. Create a tablespace (RCVCAT here; rman_ts in the example below) that will be the default tablespace of the catalog schema and will hold the catalog data.
2. Using Server Manager (line mode), connect as SYS (as SYSDBA) to the database containing the recovery catalog.
3. Issue the following commands:
SQL> SPOOL cr8rman.log
SQL> CONNECT / AS SYSDBA
SQL> CREATE USER rman IDENTIFIED BY rman
       TEMPORARY TABLESPACE temp
       DEFAULT TABLESPACE rman_ts
       QUOTA UNLIMITED ON rman_ts;
SQL> GRANT RECOVERY_CATALOG_OWNER TO rman;
SQL> CONNECT rman/rman

$ rman catalog rman/rman
RMAN> CREATE CATALOG TABLESPACE rman_ts;
Example: 1. This example connects to the target database prod1 without a recovery catalog:
$ rman TARGET SYS/sys_pwd@prod1 NOCATALOG
2. Example to make a connection to the target database and recovery catalog database:
$ rman target sys/sys_pwd@prod1 catalog rman/rman@rcat
3. Registering the target database in the recovery catalog from the RMAN prompt:
RMAN> REGISTER DATABASE;
6. To set the automatic backup format for the datafiles:
RMAN> CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT '/disk2/oradata/DEMO/RMAN/%U.rmbk';
7. This example connects to the target database prod1 without specifying catalog options. Because CONNECT CATALOG is not run at the RMAN prompt, RMAN connects in default NOCATALOG mode when the first command requiring a repository connection is run:
$ rman RMAN> CONNECT TARGET SYS/sys_pwd@prod1 RMAN> BACKUP DATABASE;
8. This example connects to the target database prod1 and the recovery database rcat, and then runs the command file b_whole_10.rcv:
$ rman TARGET SYS/sys_pwd@prod1 CATALOG rman/rman@rcat @/oracle/dbs/b_whole_10.rcv
(or)
$ rman TARGET SYS/sys_pwd@prod1 CATALOG rman/rman@rcat cmdfile=/oracle/dbs/b_whole_10.rcv
9. This example connects to the target database prod1 without a recovery catalog and then specifies that RMAN should append messages to the message log:
$ rman TARGET sys/sys_pwd@prod1 NOCATALOG LOG=$ORACLE_HOME/dbs/log/msglog.f APPEND
Note: Here 2 is the file_id from DBA_DATA_FILES.
# Commands such as BACKUP, RESTORE, and ALLOCATE can be executed within a RUN command.
11. This example deletes all archived redo logs older than two weeks
RMAN> DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-14';
25.2 RMAN Commands
25.2.1 Allocate Channel
Manually allocates a channel, which is a connection between RMAN and a database instance. Each connection initiates an Oracle server session on the target or auxiliary instance; this server session performs the work of backing up, restoring, or recovering backup sets and copies. Each channel operates on one backup set at a time (for BACKUP, RESTORE, or RECOVER) or one image copy at a time (for COPY). RMAN automatically releases the channel at the end of the job.
Example 1. This command allocates a tape channel for a whole database backup:
RMAN> RUN { ALLOCATE CHANNEL dev1 DEVICE TYPE sbt; BACKUP DATABASE; }
Example 2
When creating multiple copies of a backup set, you can use the SET BACKUP COPIES command. The following example generates a single backup of the database and then creates four identical backups of datafile 1 on four file systems:
RMAN> RUN {
        ALLOCATE CHANNEL dev1 DEVICE TYPE DISK MAXPIECESIZE 2M;
        BACKUP DATABASE;
        SET BACKUP COPIES = 4;
        BACKUP DATAFILE 1
          FORMAT '/disk1/bp/%U', '/disk2/sv/%U', '/disk3/bp/%U', '/disk4/sv/%U';
      }
25.2.2 BACKUP
Backs up a database, tablespace, datafile (current or copy), control file (current or copy), archived log, or backup set. You can back up a target or standby database. When performing a backup, specify the files that you want to back up. RMAN puts the input files into one or more backup sets, which are RMAN-specific logical structures. The backup set is the smallest unit of a backup. RMAN records in the repository only those backup sets that complete successfully. Each backup set contains at least one backup piece, which is a physical file containing the backed-up data.
FULL
Copies all blocks into the backup set, skipping only datafile blocks that have never been used. RMAN makes full backups by default if neither FULL nor INCREMENTAL is specified. The server session does not skip blocks when backing up archived redo logs or control files. A full backup has no effect on subsequent incremental backups, so it is not considered part of the incremental backup strategy.
INCREMENTAL LEVEL = integer
Copies only those data blocks that have changed since the last incremental backup at the same or lower level, where integer is any integer from 1 to 4. For example, in a level 2 backup RMAN backs up all blocks changed since the most recent level 2, level 1, or level 0 backup. This type of incremental backup is called a differential backup, to distinguish it from a cumulative backup. If you specify INCREMENTAL, then in the backupSpec clause you must set one of the following parameters: DATAFILE, DATAFILECOPY, TABLESPACE, or DATABASE. RMAN does not support incremental backups of control files, archived redo logs, or backup sets.
BACKUPSET
Backs up either ALL backup sets or backup sets specified by primary_key or completion time. Use this parameter in conjunction with the DEVICE TYPE sbt clause to back up all backups on disk to tape. You cannot back up from tape to tape or from tape to disk: only from disk to disk or from disk to tape.
DATAFILE
Specifies a list of one or more datafiles.
DATAFILECOPY 'filename'
Specifies the file names of one or more datafile image copies.
TABLESPACE tablespace_name
Specifies the names of one or more tablespaces. RMAN backs up all datafiles that are currently part of the tablespaces.
DATABASE
Specifies all datafiles in the current database. To include the current control file in the backup set, specify the INCLUDE CURRENT CONTROLFILE clause.
CURRENT CONTROLFILE
Specifies the current control file.
CHANNEL channel_id
Specifies the case-sensitive name of a channel to use when creating backup sets. Use any name that is meaningful, for example ch1 or dev1.
CUMULATIVE
Copies the data blocks used since the most recent backup at level n-1 or lower, where n is an integer from 1 to 4. For example, in a cumulative level 2 backup RMAN backs up all blocks used since the most recent level 1 or level 0 backup.
MAXSETSIZE = integer
Specifies a maximum size for a backup set in bytes (the default), kilobytes (K), megabytes (M), or gigabytes (G). Thus, to limit a backup set to 3 MB, specify MAXSETSIZE = 3M.
FILESPERSET = integer
Specifies the maximum number of input files in each backup set. If you set FILESPERSET = n, then RMAN never includes more than n files in a backup set.
NOT BACKED UP
Backs up only those files (of the files specified on the command) that RMAN has not backed up since the specified time. If SINCE TIME is not specified, only those files that have never been backed up are backed up.
SINCE TIME = date_string
Specifies the date after which RMAN should back up files that have no backups. The date_string is either a date in the current NLS_DATE_FORMAT or a SQL date expression such as 'SYSDATE-1'.
DEVICE TYPE
Allocates automatic channels for the specified device type only.
INCLUDE CURRENT CONTROLFILE
Creates a snapshot of the current control file and places it into each backup set produced by this clause.
DELETE INPUT
Deletes the input files upon successful creation of the backup set. Specify this option only when backing up archived logs, datafile copies, or backup sets.
FORMAT
Specifies a filename to use for the backup piece. If you do not specify the FORMAT parameter, RMAN stores the backup pieces in a port-specific directory ($ORACLE_HOME/dbs on Unix). The following substitution variables are available:
%c   Specifies the copy number of the backup piece within a set of duplexed backup pieces
%d   Specifies the name of the database
%D   Specifies the current day of the month from the Gregorian calendar, in format DD
%F   Combines the DBID, day, month, year, and sequence into a unique and repeatable generated name
%M   Specifies the month in the Gregorian calendar, in format MM
%n   Specifies the name of the database, padded on the right with x characters to a total length of eight characters
%p   Specifies the piece number within the backup set
%s   Specifies the backup set number
%t   Specifies the backup set time stamp, a 4-byte value derived as the number of seconds elapsed since a fixed reference time
%T   Specifies the year, month, and day in format YYYYMMDD
%u   Specifies an 8-character name constituted by compressed representations of the backup set number and the time the backup set was created
%U   Specifies a convenient shorthand for %u_%p_%c that guarantees uniqueness in generated backup filenames
%Y   Specifies the year in format YYYY
%%   Specifies the % character; for example, %%Y translates to the string %Y
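As an illustration of these substitution variables, a hedged sketch (the directory is hypothetical):

```sql
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/disk1/backup/%d_%T_%U.bkp';
-- For a database named DEMO backed up on 7 August 2005, the generated piece
-- names take the form DEMO_20050807_<unique-id>.bkp
```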
Examples: 1. Backing up a database. This command backs up the database to tape and then backs up the control file that contains the record of the database backup:
BACKUP DATABASE; BACKUP CURRENT CONTROLFILE;
2. Backing up tablespaces and datafiles. This command uses two backupSpec clauses to back up tablespaces and datafiles, and lets RMAN perform automatic parallelization of the backup:
RUN {
  ALLOCATE CHANNEL dev1 DEVICE TYPE DISK FORMAT '/fs1/%U';
  ALLOCATE CHANNEL dev2 DEVICE TYPE DISK FORMAT '/fs2/%U';
  BACKUP
    (TABLESPACE system, sales1, sales2, sales3 FILESPERSET 20)
    (DATAFILE 12,14,15);
}
4. Backing up backup sets to tape In this example, you want to keep recent backup sets on disk and older backup sets on tape. You do not want backup sets to exist on disk and tape simultaneously. Hence, you execute this command to backup older backups created more than two weeks ago to tape and then delete the input backup pieces:
BACKUP DEVICE TYPE sbt BACKUPSET CREATED BEFORE 'SYSDATE-14' DELETE INPUT;
5. Specifying DEVICE TYPE on the BACKUP command This example configures DISK as the default device type, then backs up archived logs to tape:
CONFIGURE DEFAULT DEVICE TYPE TO DISK; BACKUP DEVICE TYPE sbt ARCHIVELOG ALL;
6. Performing a Cumulative Incremental Backup. This example backs up all blocks changed in the database since the most recent level 0 or level 1 backup:
BACKUP INCREMENTAL LEVEL 2 CUMULATIVE SKIP INACCESSIBLE DATABASE;
7. Checking for Corruption. This example backs up datafile 3 and specifies that no more than two corrupt blocks should be tolerated:
RUN { SET MAXCORRUPT FOR DATAFILE 3 TO 2; BACKUP CHECK LOGICAL DATAFILE 3; }
8. Creating a long-term backup This example creates a consistent backup of the database that is exempt from the retention policy and tells RMAN to keep the backup for the next year, but not to keep the archived logs necessary to recover it:
SHUTDOWN;
STARTUP MOUNT;
BACKUP DATABASE KEEP UNTIL TIME 'SYSDATE+365' NOLOGS;
9. Backing up files with no recent backups. This example backs up all the database files and archived logs that have not been backed up in the last month:
BACKUP DATABASE NOT BACKED UP SINCE TIME 'SYSDATE-31' PLUS ARCHIVELOG;
10. This example mounts the database, takes a whole database backup, then opens the database. At the RMAN prompt enter
STARTUP MOUNT; BACKUP DATABASE; # Now that the backup is complete, open the database ALTER DATABASE OPEN;
11. To restore the control file to its default location enter the following:
STARTUP NOMOUNT; RESTORE CONTROLFILE; ALTER DATABASE MOUNT;
12. This example restores backup archived redo log files from tape that fall within a range of SCNs:
RESTORE ARCHIVELOG SCN BETWEEN 500 AND 700;
13. This example backs up only archived log sequence 1372 of thread 1 and then deletes it:
BACKUP ARCHIVELOG SEQUENCE 1372 DELETE INPUT;
14. This example backs up all archived logs from sequence 288 to sequence 301 on thread 1 and deletes the archived logs after the backup is complete. If the backup fails, the logs are not deleted.
RUN { ALLOCATE CHANNEL dev1 DEVICE TYPE sbt; BACKUP ARCHIVELOG SEQUENCE BETWEEN 288 AND 301 THREAD 1 # Delete original archived redo logs after backup completes DELETE INPUT; }
25.2.3 Create Script
Creates a script and stores it in the recovery catalog for future use. Stored scripts provide a common repository for frequently executed collections of RMAN commands. Any command that is legal within a RUN command is permitted in a stored script. The script is not executed immediately; run it with EXECUTE SCRIPT inside a RUN block.
Examples: 1. Creating a script. This creates a script called b_whole_10 that backs up the database and archived redo logs, then executes it:
CREATE SCRIPT b_whole_10 {
  ALLOCATE CHANNEL d1 DEVICE TYPE sbt;
  BACKUP INCREMENTAL LEVEL 0 TAG b_whole_10 DATABASE PLUS ARCHIVELOG;
}
RUN { EXECUTE SCRIPT b_whole_10; }
25.2.4 Delete Script
Deletes a stored script from the recovery catalog. Example: this deletes the script b_whole_10:
DELETE SCRIPT b_whole_10;
25.2.5 RESTORE
Restores files from backups or image copies. By default, RMAN restores files to their default location. Typically, you restore when a media failure has damaged a current datafile, control file, or archived log, or prior to performing a point-in-time recovery. The RESTORE command restores full backups, incremental backups (level 0 only), or copies of datafiles, control files, and archived redo logs. Because the RECOVER command automatically restores archived logs as needed, you should seldom need to restore logs manually. Note: In Oracle 9i, unlike in previous RMAN releases, RMAN by default does not restore a datafile if the file is in the correct place and its header contains the expected data (RMAN does not scan the datafile body for corrupt blocks).
When you perform a restore operation using a backup control file and a recovery catalog, RMAN automatically adjusts the control file to reflect the structure of the restored database.
CONTROLFILE
Restores the current control file to the default location and automatically replicates it to all CONTROL_FILES locations in the initialization parameter file. The default location is the first filename specified in the CONTROL_FILES parameter.
DATABASE
Restores all datafiles in the database except those that are offline or read-only. Unlike BACKUP DATABASE, RESTORE DATABASE does not automatically include the control file; you must issue an additional RESTORE CONTROLFILE command to restore it.
DATAFILE datafileSpec
Restores the datafiles specified by filename or absolute datafile number.
TABLESPACE tablespace_name
Restores all datafiles in the specified tablespaces.
Examples: 1. Restoring a tablespace This takes a tablespace offline, restores it, and then performs media recovery:
SQL 'ALTER TABLESPACE tbs_1 OFFLINE IMMEDIATE';
RESTORE TABLESPACE tbs_1;
RECOVER TABLESPACE tbs_1;
SQL 'ALTER TABLESPACE tbs_1 ONLINE';
2: Restoring the control file This restores the control file to its default location, replicates it to all multiplexed locations, and mounts the database.
RUN { STARTUP FORCE NOMOUNT; RESTORE CONTROLFILE; ALTER DATABASE MOUNT; }
3. Restoring the database using a backup control file. This example restores the control file, replicates it to all control file locations specified in the parameter file, and then mounts the database in order to restore it.
STARTUP NOMOUNT; RUN { ALLOCATE CHANNEL c1 DEVICE TYPE sbt; RESTORE CONTROLFILE; ALTER DATABASE MOUNT; RESTORE DATABASE; }
4. Restoring archived redo logs to new location This restores all archived redo logs to the /oracle/temp_restore directory:
RUN {
  SET ARCHIVELOG DESTINATION TO '/oracle/temp_restore';
  RESTORE ARCHIVELOG ALL;
}
6. This example restores archived redo log files from tape that fall within a range of SCNs:
RESTORE ARCHIVELOG SCN BETWEEN 500 AND 700;
25.2.6 RECOVER
Applies redo logs or incremental backups to one or more restored datafiles in order to update them to a specified time. RMAN uses online redo records and restores backup sets of archived redo logs as needed to perform the media recovery. RMAN first looks for the original archived logs or image copies, and if none are available, it restores backups. If RMAN has a choice between applying an incremental backup or applying redo, it always chooses the incremental backup. If overlapping levels of incremental backup are available, RMAN automatically chooses the one covering the longest period of time. Note that RMAN can apply incremental backups to restored files that were not created as part of an incremental backup.
Note: When RMAN applies incremental backups, it recovers changes to objects created with the NOLOGGING option. Applying archived redo logs to datafiles does not recover these changes.
DEVICE TYPE deviceSpecifier
Allocates automatic channels for the specified device type only. For example, if you configure automatic disk and tape channels and issue RECOVER DEVICE TYPE DISK, then RMAN allocates only disk channels.
DATABASE
Specifies that the entire database is to be recovered. Unless you specify an UNTIL clause, RMAN performs complete recovery.
UNTIL clause
Specifies a non-current time, SCN, or log sequence number for termination of the RECOVER command. You must open the database with the RESETLOGS option after incomplete recovery.
TABLESPACE tablespace_name
Specifies tablespaces by tablespace name.
DATAFILE datafileSpec
Specifies a list of one or more datafiles to recover. Specify datafiles by either filename (using a quoted string) or absolute datafile number (using an integer).
Examples: 1. Recovering a Tablespace in an Open Database The following example takes tbs_1 offline, uses automatic channels to restore and recover it (deleting the logs that it restored from tape), and brings it back online;
SQL 'ALTER TABLESPACE tbs_1 OFFLINE IMMEDIATE';
RESTORE TABLESPACE tbs_1;
RECOVER TABLESPACE tbs_1 DELETE ARCHIVELOG;
SQL 'ALTER TABLESPACE tbs_1 ONLINE';
2. Recovering datafiles restored to new locations. The following example uses the preconfigured disk channel and manually allocates one media management channel, to use datafile copies on disk and backups on tape, and restores one datafile of tablespace tbs_1 to a different location:
RUN {
  ALLOCATE CHANNEL dev2 DEVICE TYPE sbt;
  SQL 'ALTER TABLESPACE tbs_1 OFFLINE IMMEDIATE';
  SET NEWNAME FOR DATAFILE '/disk7/oracle/tbs11.f' TO '/disk9/oracle/tbs11.f';
  RESTORE TABLESPACE tbs_1;
  SWITCH DATAFILE ALL;
  RECOVER TABLESPACE tbs_1;
  SQL 'ALTER TABLESPACE tbs_1 ONLINE';
}
3. Replicating a restored control file This restores a control file to a temporary location and replicates it manually
STARTUP NOMOUNT;
RUN {
  SET UNTIL TIME 'Jun 18 2004 16:32:36';
  RESTORE CONTROLFILE TO '/tmp/cf.tmp';
  REPLICATE CONTROLFILE FROM '/tmp/cf.tmp';
  STARTUP FORCE MOUNT;
}
4. Performing Incomplete Recovery by Using a Backup Control file Assume that both the database and archived redo log 1234 were lost due to a disk crash. Because you do not have incremental backups, you need to recover the database using available archived redo logs. You do not need to restore tablespace readonly1 because it has not changed since log 1234.
RUN {
  SET UNTIL SEQUENCE 1234 THREAD 1;   # recover database until log sequence 1234
  RESTORE CONTROLFILE TO '/vobs/oracle/dbs/cf1.f';
  # Because you specified a restore destination, manually replicate the control file.
  # RMAN replicates automatically when no destination is specified.
  REPLICATE CONTROLFILE FROM '/vobs/oracle/dbs/cf1.f';
  ALTER DATABASE MOUNT;
  RESTORE DATABASE SKIP TABLESPACE temp1, readonly1;
  RECOVER DATABASE SKIP FOREVER TABLESPACE temp1;
  ALTER DATABASE OPEN RESETLOGS;
  SQL 'DROP TABLESPACE temp1';
  SQL 'CREATE TABLESPACE temp1 DATAFILE ''/vobs/oracle/dbs/temp1.f'' SIZE 10M TEMPORARY';
}
5. Performing Incomplete Recovery Until a Log Sequence Number This example assumes that log sequence 1234 was lost due to a disk crash and the database needs to be recovered by using available archived redo logs.
RUN {
  SET UNTIL SEQUENCE 1234 THREAD 1;
  RESTORE CONTROLFILE TO '$ORACLE_HOME/dbs/cf1.f';
  REPLICATE CONTROLFILE FROM '$ORACLE_HOME/dbs/cf1.f';
  ALTER DATABASE MOUNT;
  RESTORE DATABASE;
  RECOVER DATABASE;   # recovers through log 1233
  SQL 'ALTER DATABASE OPEN RESETLOGS';
}
6. Performing Incomplete Recovery to a specified SCN This example recovers the database until a specified SCN:
STARTUP MOUNT;
RUN {
  ALLOCATE CHANNEL ch1 DEVICE TYPE sbt;
  RESTORE DATABASE;
  RECOVER DATABASE UNTIL SCN 1000;   # recovers through SCN 999
  SQL 'ALTER DATABASE OPEN RESETLOGS';
}
25.2.7 Release Channel To release a channel while maintaining the connection to the target database instance. Specify the channel name with the same identifier used in the ALLOCATE CHANNEL command. This command is optional because RMAN automatically releases all channels allocated when the RUN command terminates. Example. 1. Releasing a channel This example makes three identical backup sets of datafiles 1 to 4 to tape with channel ch1, then releases it. RMAN then makes three identical backups of datafiles 5 and 6 to tape with channel ch2 and then releases it:
RUN { SET BACKUP COPIES = 3; ALLOCATE CHANNEL ch1 DEVICE TYPE sbt FILESPERSET = 2 RATE = 1000; ALLOCATE CHANNEL ch2 DEVICE TYPE sbt MAXPIECESIZE = 3000; BACKUP CHANNEL ch1 DATAFILE 1,2,3,4; RELEASE CHANNEL ch1; BACKUP DATAFILE 5,6; }
25.2.8 Show
Displays the current CONFIGURE command settings. The output of SHOW consists of the CONFIGURE commands used to set the configuration. RMAN default configurations are suffixed with # default.
Examples: Showing Channel Configurations. This example shows commands relevant for displaying automatic channel configurations:
SHOW CHANNEL;
SHOW DEVICE TYPE;
SHOW DEFAULT DEVICE TYPE;
SHOW MAXSETSIZE;
SHOW ALL;
25.2.9 Determining Datafile Backup Status
To view the backup status of a datafile, you can use the dynamic performance view V$BACKUP. This view lists all online files and gives their backup status.
Note: V$BACKUP is not useful if the control file currently in use is a restored backup or a new control file created since the media failure occurred. A restored or re-created control file does not contain the information Oracle needs to fill V$BACKUP accurately. Also, if you have restored a backup of a file, that file's STATUS in V$BACKUP reflects the backup status of the older version of the file, not the most current version. Thus, this view might contain misleading information about restored files.
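A minimal sketch of querying the view just described:

```sql
SQL> SELECT file#, status, change#, time FROM v$backup;
```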
In the STATUS column, INACTIVE indicates that the file is not currently being backed up; ACTIVE indicates that the file is marked as currently being backed up.
25.2.10 Propagating Information from the Control File
The size of the target database's control file will grow, depending on the number of:
- Backups performed
- Archived logs created
- Days (minimum number) this information is stored in the control file
You can specify the minimum number of days this information is kept in the control file using the parameter CONTROL_FILE_RECORD_KEEP_TIME. Entries older than this number of days are candidates for overwrite by newer information. The larger the CONTROL_FILE_RECORD_KEEP_TIME setting is, the larger the control file will be.
At a minimum, you should resynchronize your recovery catalog at intervals shorter than the CONTROL_FILE_RECORD_KEEP_TIME setting, because after this number of days the information in the control file is overwritten by the most recently created information; if you have not resynchronized and information has been overwritten, that information cannot be propagated to the recovery catalog.
Note: The maximum size of the control file is port-specific. See your OS-specific Oracle documentation.
25.3 Improving Incremental Backup Performance: Change Tracking
RMAN's change tracking feature improves incremental backup performance by recording the changed blocks of each datafile in a change-tracking file. If change tracking is enabled, RMAN uses the change-tracking file to identify changed blocks for incremental backups, avoiding the need to scan every block in the datafile. A new background process, the Change Tracking Writer (CTWR), is responsible for writing data to the block change-tracking file.
25.3.1 Enabling and Disabling Change Tracking
You can enable or disable change tracking while the database is open.
To alter the change tracking setting, you must use SQL*Plus to connect to the target database with administrator privileges. 1. You can enable the tracking mechanism by issuing the following command:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/rman_bkups/change.log';
This command creates a binary file called /rman_bkups/change.log for tracking purposes. 2. To disable change tracking, use this SQL statement:
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
3. To see whether change tracking is currently enabled, query V$BLOCK_CHANGE_TRACKING from SQL*Plus:
SQL> SELECT filename, status FROM V$BLOCK_CHANGE_TRACKING;
25.4 Types of Recovery Manager Backups
The backup_type applies to all backup specifications in the backup_specification_list. The following two backup types are available.
25.4.1 Full
This is the default if neither FULL nor INCREMENTAL is specified. A full backup copies all blocks into the backup set, skipping only datafile blocks that have never been used. No blocks are skipped when backing up archived logs or control files. A full backup has no effect on subsequent incremental backups and is not considered part of the incremental backup strategy.
Backing up the full database (including the current control file):
$ rman target sys/<password>@acct catalog rman/rman RMAN> BACKUP DATABASE INCLUDE CURRENT CONTROLFILE;
25.4.2 Incremental
An incremental backup at a level greater than 0 copies only those blocks that have changed since the last incremental backup. An incremental backup at level 0 is identical in content to a full backup, but the level 0 backup is considered part of the incremental strategy. Certain checks are performed when attempting to create an incremental backup at a level greater than 0.
25.4.3 Incremental Backup Sets
An incremental backup is a backup of one or more datafiles that contains only those blocks that have been modified since a previous backup at the same or lower level; unused blocks are not written out.
25.4.3.1 Physical Incremental Backup
Sunday: take a complete (level 0) physical backup.
$ rman target sys/<password>@acct catalog rman/rman RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;
Monday:
$ rman target sys/<password>@acct catalog rman/rman RMAN> BACKUP INCREMENTAL LEVEL 2 DATABASE;
Tuesday:
$ rman target sys/<password>@acct catalog rman/rman RMAN> BACKUP INCREMENTAL LEVEL 2 DATABASE;
Wednesday:
$ rman target sys/<password>@acct catalog rman/rman RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
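If a failure occurred after Wednesday's backup under this weekly scheme, recovery would look roughly as follows; RMAN itself selects the level 0 backup, the best combination of level 1 and level 2 incrementals, and then applies redo (a hedged sketch):

```sql
$ rman target sys/<password>@acct catalog rman/rman
RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;
```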
25.5 Compressed Backups
RMAN has been enhanced to provide binary compression for backup sets, reducing the amount of disk space required to make a backup. In many cases, the additional overhead required to compress the backup set is offset by the reduced I/O load when writing the backup set to disk. In addition, the binary compression algorithm used by RMAN is optimized for Oracle datafiles, making it a better alternative than operating system file-system or tape-device compression schemes. This example backs up the users2 tablespace as a compressed backup set:
RMAN> BACKUP AS COMPRESSED BACKUPSET TABLESPACE users2;
This example backs up the entire database and archived logs, producing compressed backup sets:
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
25.6 Configuring Compressed Backup Sets as Default for Tape or Disk
It's easy to make compression the default for disk or tape using the CONFIGURE command.
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
RMAN> CONFIGURE DEVICE TYPE SBT BACKUP TYPE TO COMPRESSED BACKUPSET;
25.7 Image Copies
An image copy is a bit-for-bit identical copy of a database file. In previous versions of RMAN, the COPY command made image copies of datafiles; the COPY command is deprecated in Oracle 10g. RMAN can create image copies of datafiles and datafile copies, control files and control file copies, archived redo logs, and backup pieces. RMAN creates image copies when the AS COPY option is used with the BACKUP command. Use the BACKUP AS COPY command to make an image copy of both the SYSTEM and SYSAUX tablespaces:
RMAN> BACKUP AS COPY TABLESPACE system, sysaux;
RMAN automatically determines which datafiles belong to each tablespace and performs the image copy for each.
25.8 Drop Database in RMAN
Deletes the target database and, if RMAN is connected to a recovery catalog, unregisters it. RMAN removes all datafiles, online logs, and control files belonging to the target database.
Restrictions:
Execute this command only at the RMAN prompt.
You must be connected to the target database, which must be mounted in exclusive mode and must not be open.
If you want RMAN to delete archived logs, copies, and backups belonging to the database, then you must use the DROP DATABASE INCLUDING BACKUPS form of the command.
Example: In this example, you connect to the test1 database and delete its datafiles as well as all backup copies and archived logs associated with the database.
$ rman TARGET SYS/oracle@test1 CATALOG test1/test1@catdb RMAN> DROP DATABASE INCLUDING BACKUPS;
25.9 Incrementally Updated Backups: Rolling Forward Image Copy Backups
Oracle's Incrementally Updated Backups feature lets you avoid the overhead of taking a full image copy backup of each datafile, while providing the same recovery advantages as image copy backups. At the beginning of the backup strategy, RMAN creates an image copy backup of the datafile. Then, at regular intervals, such as daily, level 1 incremental backups are taken and applied to the image copy backup, rolling it forward to the point in time when the level 1 incremental was created. During restore and recovery of the database, RMAN can restore from this incrementally updated copy and then apply changes from the redo logs, with the same results as restoring the database from a full backup taken at the SCN of the most recently applied incremental level 1 backup. A backup strategy based on incrementally updated backups can help minimize the time required for media recovery of your database.
25.10 RMAN Recovery Catalog Views
Recovery Catalog View - Corresponding V$ View
RC_ARCHIVED_LOG - V$ARCHIVED_LOG
RC_BACKUP_CONTROLFILE - V$BACKUP_DATAFILE
RC_BACKUP_DATAFILE - V$BACKUP_DATAFILE
RC_BACKUP_PIECE - V$BACKUP_PIECE
RC_BACKUP_REDOLOG - V$BACKUP_REDOLOG
RC_BACKUP_SET - V$BACKUP_SET
RC_DATABASE - V$DATABASE
RC_DATAFILE - V$DATAFILE
RC_RMAN_CONFIGURATION - V$RMAN_CONFIGURATION
RC_LOG_HISTORY - V$LOG_HISTORY
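The rolling-forward strategy described in section 25.9 is commonly implemented as a script run daily (the tag name here is an arbitrary example; it links the level 1 backups to the image copy they update):

```sql
RMAN> RUN {
  RECOVER COPY OF DATABASE WITH TAG 'incr_update';
  BACKUP INCREMENTAL LEVEL 1
    FOR RECOVER OF COPY WITH TAG 'incr_update'
    DATABASE;
}
```

On the first run, the BACKUP command creates the level 0 image copy; on each subsequent run, RECOVER applies the previous day's level 1 incremental to the copy before a new level 1 is taken.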
26 FLASHBACK TECHNOLOGY
26.1 The Flash Recovery Area
The flash recovery area feature lets you set up a location on disk where the database can create and manage a variety of backup and recovery-related files. With Automatic Disk-Based Backup and Recovery, you can create a flash recovery area which automates management of backup-related files. Use of the flash recovery area is strongly recommended. Using a flash recovery area simplifies the ongoing administration of your database by automatically naming files, retaining them as long as they are needed for restore and recovery activities, and deleting them when they are no longer needed to restore your database and the space is needed for some other backup and recovery related purpose.
26.1.1 Files that can be stored in the Flash Recovery Area
Online redo logs
Archived redo logs
Control files
RMAN's own working files for backup and restore operations
26.1.2 Initialization parameters for the Flash Recovery Area
To enable the flash recovery area, you must set two initialization parameters: DB_RECOVERY_FILE_DEST_SIZE, which specifies the disk quota, or maximum space to use for the flash recovery area for this database, and DB_RECOVERY_FILE_DEST, which specifies the location of the flash recovery area.
26.1.2.1 Using the V$RECOVERY_FILE_DEST View
You can query the V$RECOVERY_FILE_DEST view to find out the current location, disk quota, space in use and total number of files in the flash recovery area:
SQL> SELECT * FROM V$RECOVERY_FILE_DEST;
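The two initialization parameters described in 26.1.2 can be set dynamically with ALTER SYSTEM; note that the quota must be set before the destination (the size and path shown here are arbitrary examples):

```sql
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 10G SCOPE=BOTH;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '/u01/flash_recovery_area' SCOPE=BOTH;
```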
26.2 Oracle Flashback Query
The Oracle Flashback Query feature lets you specify a target time and then run queries against your database, viewing results as they would have appeared at that time. To recover from an unwanted change, such as an erroneous update to a table, a user can choose a target time before the error and run a query to retrieve the contents of the lost rows. In a data recovery context, it is useful to be able to query the state of a table at a previous time. Example: if you discover that at 12:30pm an employee JOHN had been deleted from your EMPLOYEE table, and you know that at 9:30am that employee's data was correctly stored in the database, you could query the contents of the table as of a time before the deletion to find out what data had been lost and, if appropriate, re-insert the lost data in the database. Querying the past state of the table is achieved using the AS OF clause of the SELECT statement. For example, the following query retrieves the state of the employee record for JOHN at 9:30am, April 4, 2003:
Step 1:
SQL> SELECT * FROM employee AS OF TIMESTAMP TO_TIMESTAMP('2003-04-04 09:30:00', 'YYYY-MM-DD HH:MI:SS') WHERE name = 'JOHN';
Step 2:
SQL> INSERT INTO employee (SELECT * FROM employee AS OF TIMESTAMP TO_TIMESTAMP('2003-04-04 09:30:00', 'YYYY-MM-DD HH:MI:SS') WHERE name = 'JOHN');
The missing row is re-created with its previous contents, with minimal impact on the running database. Oracle Flashback Version Query lets you view all the versions of all the rows that ever existed in one or more tables in a specified time interval. You can also retrieve metadata about the differing versions of the rows, including start time, end time, operation, and transaction ID of the transaction that created the version. This feature can be used both to recover lost data values and to audit changes to the tables queried. Oracle Flashback Transaction Query lets you view changes made by a single transaction, or by all the transactions during a period of time.
26.3 Oracle Flashback Table
Oracle Flashback Table gives the DBA the ability to recover a table or set of tables to a specified point in time in the past very quickly, easily, and without taking any part of the database offline. In many cases Flashback Table eliminates the need to perform more complicated point-in-time recovery operations. Flashback Table restores tables while automatically maintaining associated attributes such as current indexes, triggers and constraints, without requiring the DBA to find and restore application-specific properties. Using Flashback Table causes the contents of one or more individual tables to revert to their state at some past SCN or time.
Flashback Table uses information in the undo tablespace to restore the table. You do not have to restore any data from backups, and the rest of your database remains available while the Flashback Table operation is being performed. Row movement must be enabled on the table for which you are issuing the FLASHBACK TABLE statement. You can enable row movement with the following SQL statement:
SQL> ALTER TABLE employee ENABLE ROW MOVEMENT;
Performing Flashback Table: - The following SQL*Plus statement performs a FLASHBACK TABLE operation on the table employee.
SQL> FLASHBACK TABLE employee TO TIMESTAMP TO_TIMESTAMP('2003-04-04 09:30:00', 'YYYY-MM-DD HH24:MI:SS');
The employee table is restored to its state when the database was at the time specified by the timestamp. You can also specify the target point-in-time for the FLASHBACK TABLE operation using an SCN:
SQL> FLASHBACK TABLE employee TO SCN 123456;
26.4 Oracle Flashback Drop - Undo a DROP TABLE Operation
Oracle Flashback Drop reverses the effects of a DROP TABLE operation. It can be used to recover after the accidental drop of a table. Flashback Drop is substantially faster than other recovery mechanisms that can be used in this situation, such as point-in-time recovery, and does not lead to any loss of recent transactions or downtime. When you drop a table, the database does not immediately remove the space associated with the table. Instead, the table is renamed and, along with its associated objects, placed in the recycle bin of the database. The Flashback Drop operation recovers the table from the recycle bin. To understand how to use Oracle Flashback Drop, you must also understand how the recycle bin works, and how to access and manage its contents.
26.4.1 Recycle Bin
The recycle bin is a logical container for all dropped tables and their dependent objects. When a table is dropped, the database stores the table, along with its dependent objects, in the recycle bin so that they can be recovered later. Dependent objects stored in the recycle bin include indexes, constraints, triggers, nested tables, LOB segments and LOB index segments. The table and its dependent objects remain in the recycle bin until they are purged from it. For example, this statement places the EMPLOYEE_DEMO table and its dependent objects in the recycle bin:
SQL> DROP TABLE employee_demo; Table Dropped
If you are sure that you will not want to recover a table later, you can drop it immediately and permanently, instead of placing it in the recycle bin, by using the PURGE option of the DROP TABLE statement as shown in this example:
SQL> DROP TABLE employee_demo PURGE;
26.4.2 Performing Flashback Drop on Tables in the Recycle Bin
Use the FLASHBACK TABLE ... TO BEFORE DROP statement to recover objects from the recycle bin. You can specify either the name of the table in the recycle bin or the original table name. The following example restores the BIN$gk3lsj/3akk5hg3j2lkl5j3d==$0 table, changes its name back to hr.int_admin_emp, and removes its entry from the recycle bin.

SQL> FLASHBACK TABLE "BIN$gk3lsj/3akk5hg3j2lkl5j3d==$0" TO BEFORE DROP;
You can assign a new name to the restored table by specifying the RENAME TO clause. For example:
SQL> FLASHBACK TABLE "BIN$gk3lsj/3akk5hg3j2lkl5j3d==$0" TO BEFORE DROP RENAME TO hr.int2_admin_emp;
26.4.3 Purging Objects from the Recycle Bin
The PURGE command is used to permanently purge objects from the recycle bin. Once purged, objects can no longer be retrieved from the bin using Flashback Drop.
26.4.3.1 PURGE TABLE - Purging a Table and Dependent Objects
The PURGE TABLE command purges an individual table and all of its dependent objects from the recycle bin. This example shows the syntax, using the table's original name:
SQL> PURGE TABLE emp;
You can also use the recycle bin name of an object with PURGE TABLE:
SQL> PURGE TABLE "BIN$KSD8DB9L345KLA==$0";
26.4.3.2 PURGE RECYCLEBIN - Purging all objects in the User's Recycle Bin
The PURGE RECYCLEBIN command purges the contents of the recycle bin for the currently logged-in user.
SQL> PURGE RECYCLEBIN;
26.4.3.3 Viewing and Querying Objects in the Recycle Bin
1. To view the contents of the recycle bin, use the SQL*Plus command SHOW RECYCLEBIN.
SQL> SHOW RECYCLEBIN;
2. This example uses the RECYCLEBIN view to determine the original names of the dropped objects:
SQL> SELECT object_name as recycle_name, original_name, type FROM recyclebin;
3. To view objects in the recycle bin for the current user or for all users, use the USER_RECYCLEBIN and DBA_RECYCLEBIN views. You can also query the contents of a dropped table directly, using its recycle bin name:
SQL> SELECT * FROM "BIN$KSD8DB9L345KLA==$0";
26.5 Flashback Database
Oracle Flashback Database provides a more efficient direct alternative to database point-in-time recovery. It is unlike the other flashback features in that it operates at the physical level. When you use Flashback Database, your current database files revert to their contents at a past time. The end product is much like the result of a database point-in-time recovery, but can be much faster because it does not require you to restore datafiles from backup, and requires only limited application of redo compared to media recovery. Flashback Database uses flashback logs to access past versions of data blocks, as well as some information from the archived redo logs. The FLASHBACK DATABASE command returns the database to (or to just before) a target point, specified by time, SCN or log sequence number. Because FLASHBACK DATABASE does not require you to restore a backup, it is usually much faster than incomplete recovery.
26.5.1 To Enable Flashback Database
To enable Flashback Database, set the DB_FLASHBACK_RETENTION_TARGET initialization parameter and issue the ALTER DATABASE FLASHBACK ON statement. Follow the process outlined here:
1. Start SQL*Plus and ensure that the database is mounted, but not open.
SQL> STARTUP MOUNT;
2. Set the DB_FLASHBACK_RETENTION_TARGET parameter to the desired value. This value is in minutes. The value for this parameter is set to 60 minutes in this example.
SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=60;
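With the retention target set, issuing the ALTER DATABASE FLASHBACK ON statement named in the section introduction, and then opening the database, completes the procedure:

```sql
-- 3. Enable Flashback Database while the database is mounted, then open it
SQL> ALTER DATABASE FLASHBACK ON;
SQL> ALTER DATABASE OPEN;
```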
26.5.2 To perform the FLASHBACK DATABASE
1. Query the target database to determine the range of possible flashback SCNs. The following SQL*Plus queries show you the latest and earliest SCN in the flashback window:
SQL> SELECT CURRENT_SCN FROM V$DATABASE; SQL> SELECT OLDEST_FLASHBACK_SCN, OLDEST_FLASHBACK_TIME FROM V$FLASHBACK_DATABASE_LOG;
2. Run the FLASHBACK DATABASE statement to return the database to a prior TIMESTAMP or SCN. For example:
SQL> FLASHBACK DATABASE TO SCN 4693; (or) SQL> FLASHBACK DATABASE TO TIMESTAMP (SYSDATE-1/24);
3. Open the database to examine the results of the Flashback Database operation; you can perform some queries to make sure you have recovered the data you need, then open the database with ALTER DATABASE OPEN RESETLOGS to make the change permanent.
FLASHBACK DATABASE to a specific SCN: - The following command flashes back the database to a particular SCN:
RMAN> FLASHBACK DATABASE TO SCN 46963;

FLASHBACK DATABASE to a specific time: - The following command uses a SQL date conversion function to specify the target time:

RMAN> FLASHBACK DATABASE TO TIME "TO_DATE('2003-03-11 16:00:00', 'YYYY-MM-DD HH24:MI:SS')";
26.5.3 To Disable Flashback Database You can disable flashback logging for the entire database with this command
SQL> ALTER DATABASE FLASHBACK OFF;
The control file starts with the LOAD DATA statement. INFILE * specifies that the data is found in the control file itself and not in an external file. INTO TABLE names the table into which the data is to be loaded (dept). By default SQL*Loader requires the table to be empty before it inserts any records.
To invoke SQL*Loader, the command is: Syntax: $ sqlldr <options> The options are:
userid     - username and password
control    - control file name
log        - log file name
bad        - bad file name
data       - data file name
discard    - discard file name
discardmax - number of discards to allow (default: all)
skip       - number of logical records to skip (default 0)
load       - number of logical records to load (default: all)
errors     - number of errors to allow (default 50)
rows       - number of rows in the conventional-path bind array, or between direct-path data saves (default: 64 conventional, all direct)
direct     - use direct path load (default FALSE)
bindsize   - size of conventional path bind array in bytes (system-dependent)
silent     - suppress messages during run
parfile    - parameter file name
Examples:
$ sqlldr userid=scott/manager control=case1.ctl log=case1.log $ sqlldr control=case.ctl log=case.log data=etc.dat userid=steeve/abc errors=40 load=2000 discard=abc.dis discardmax=2
or
$ vi par.file
control=case.ctl log=case.log data=etc.dat userid=steeve/abc errors=40 load=2000 discard=abc.dis discardmax=2
:wq
$ sqlldr parfile=par.file
Case 1:
Loads the data from the control file into the table dept.
$ vi case1.ctl
load data
infile *
into table dept
fields terminated by ',' optionally enclosed by '"'
(deptno, dname, loc)
begindata
12,RESEARCH,SARATOGA
10,ACCOUNTING,CLEVELAND
11,ART,SALEM
13,FINANCE,BOSTON
21,SALES,PHILA
22,SALES,ROCHESTER
42,INTL,SAN FRAN
:wq
$ sqlldr userid=scott/tiger control=case1.ctl log=case1.log
Case 2:
$ vi case2.ctl
load data
infile case2.dat
insert
into table tmp
(empno  position (01:04) integer external,
 ename  position (06:15) char,
 job    position (17:25) char,
 mgr    position (27:30) integer external,
 sal    position (32:39) decimal external,
 comm   position (41:48) decimal external,
 deptno position (50:51) integer external)
:wq
$ vi case2.dat
7782 Clark Manager 7839 2572.50 10
7839 King President 5500 10
7934 Miller Clerk 7782 920.00 20
7566 Jones Manager 7839 1600.00 300.00 30
7654 Martin Salesman 7693 1312.50 1400.00 30
$ sqlldr userid=scott/tiger control=case2.ctl log=case2.log
Case 3:
Adds the data to the emp table using the SEQUENCE function, which generates unique keys for the loaded data.
SQL> alter table emp add (projno number, loadseq number);
$ vi case3.ctl
load data
infile *
append
into table emp
fields terminated by ',' optionally enclosed by '"'
(empno, ename, job, mgr,
 hiredate date(20) "DD-Month-YYYY",
 sal, comm,
 deptno char terminated by ':',
 projno,
 loadseq sequence(max,1))
begindata
7782,clark,manager,7839,09-JUN-1981,2572.50,,10:101
7839,king,president,,17-NOV-1981,5500.00,,10:102
7934,miller,clerk,7782,23-JAN-1982,920.00,,10:102
7566,jones,manager,7839,02-APR-1981,3123.75,,20:101
7499,allen,salesman,7698,20-FEB-1981,1600.00,300.00,30:103
7654,martin,salesman,7689,28-SEP-1981,1312.50,1400,30:103
$ sqlldr userid=scott/tiger control=case3.ctl log=case3.log
Case 4:
Combines multiple physical records into one logical record using CONTINUEIF. It also shows inserting negative numbers, using DISCARDMAX to specify a maximum number of discards, and rejecting records because of duplicate values in a unique index or because of invalid data.
$ vi case4.ctl
load data
infile case4.dat
discardfile case4.dsc
discardmax 999
replace
continueif this (1) = '*'
into table emp
(empno    position (1:4)   integer external,
 ename    position (6:15)  char,
 job      position (17:25) char,
 mgr      position (27:30) integer external,
 sal      position (32:39) decimal external,
 comm     position (41:48) decimal external,
 deptno   position (50:51) integer external,
 hiredate position (52:60) integer external)
:wq
$ vi case4.dat
*7782 clark manager 7839 2752.50 -10 2512-Nov-85
*7839 king president 5500.00 2505-Apr-83
*7934 miller manager 7839 3123.75 2517-Jun-85
:wq
$ sqlldr userid=scott/tiger control=case4.ctl log=case4.log
Case 5:
Explains how to use SQL*Loader to break down repeating groups in a flat file and load the data into normalized tables, so that one input record may generate multiple rows. It also demonstrates the use of the WHEN clause and loading the same field (empno) into multiple tables.
$ vi case5.ctl
load data
infile case5.dat
badfile case5.bad
discardfile case5.dsc
replace
into table emp
(empno  position (1:4)   integer external,
 ename  position (6:15)  char,
 deptno position (17:18) char,
 mgr    position (20:23) integer external)
into table proj
when projno != '   '
(empno  position (1:4)   integer external,
 projno position (25:27) integer external)
into table proj
when projno != '   '
(empno  position (1:4)   integer external,
 projno position (29:31) integer external)
:wq
$ vi case5.dat
1234 baker 10 999 101 102 103
1234 joker 10 999 102 103 104
2664 young 20 2983 101 102 104
:wq
$ sqlldr userid=scott/tiger control=case5.ctl log=case5.log
Case 6:
Loads the data into table EMP using the direct path load method and also builds the indexes.
$ vi case6.ctl
load data
infile case6.dat
insert
into table emp
sorted indexes (empid)
(empno  position (1:4)   integer external nullif empno=blanks,
 ename  position (6:15)  char,
 job    position (17:25) char,
 mgr    position (27:30) integer external nullif mgr=blanks,
 sal    position (32:39) decimal external nullif sal=blanks,
 comm   position (41:48) decimal external nullif comm=blanks,
 deptno position (50:51) integer external nullif deptno=blanks)
:wq
$ sqlldr userid=scott/tiger control=case6.ctl log=case6.log direct=true
28.4 Backup and recovery
When you are working with raw devices, an additional layer is introduced in the backup and recovery procedures.
28.4.1 BACKUP
First use the Unix command dd, which takes two arguments:
if=input file - the name and path of the raw (character) device
of=output file - the target file
Example:
$ dd if=/dev/c0t0d0s1 of=/temp/sys.dd
This command reads the data from the raw device and writes it into the file sys.dd. Now you can use regular backup utilities like tar on these files. Suppose you have the files sys.dd, rbs.dd and temp.dd. Now using tar:
$ tar Pcvf /temp/bckup.tar sys.dd rbs.dd temp.dd
28.4.2 RECOVERY
Suppose we need to recover the datafile /dev/c0t0d0s1.
1. First extract the respective file from the archive:
$ tar Pxvf /temp/bckup.tar /tmp/sys.dd
2. Then write the extracted file back to the raw device:

$ dd if=/tmp/sys.dd of=/dev/c0t0d0s1

Note the difference between this command and the backup command: here the output file is the raw device, not an ordinary file. The conv argument of dd can be used to convert character data into block mode where required. You can use this basic syntax for all backup and recovery procedures.
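The backup-and-restore round trip above can be exercised safely with an ordinary file standing in for the raw device (the /tmp paths here are illustrative assumptions, not from the original procedure):

```shell
# Simulate the dd-then-tar backup and the tar-then-dd restore
mkdir -p /tmp/rawdemo
cd /tmp/rawdemo
# stand-in for the raw device: 1 MB of random data
dd if=/dev/urandom of=device.raw bs=1024 count=1024 2>/dev/null
# BACKUP: dd the device into an ordinary file, then archive it
dd if=device.raw of=sys.dd 2>/dev/null
tar cf bckup.tar sys.dd
# RECOVERY: extract the file from the archive ...
rm -f sys.dd
tar xf bckup.tar
# ... and dd it back over the (simulated) device
dd if=sys.dd of=device.restored 2>/dev/null
cmp device.raw device.restored && echo "restore verified"
```

The final cmp confirms the restored image is bit-for-bit identical to the original, which is the whole point of using dd rather than a file-level copy.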
29 DBMS PACKAGES
29.1 DBMS_LOCK Package - Creating user locks with Oracle lock management services
You can use the Oracle lock management services for your own applications: request a lock of a specific mode, give it a unique name, change the lock mode, and release it. The Oracle lock management services are available through procedures in the DBMS_LOCK package. The following are callable from the DBMS_LOCK package:
ALLOCATE_UNIQUE - Allocate a unique lock ID to a named lock
REQUEST - Request a lock of a specific mode
CONVERT - Convert a lock from one mode to another
RELEASE - Release a lock
SLEEP - Put a session to sleep for a specified time
29.1.1 Naming Locks
DBMS_LOCK.ALLOCATE_UNIQUE (
  LOCKNAME        IN  VARCHAR2,
  LOCKHANDLE      OUT VARCHAR2,
  EXPIRATION_SECS IN  INTEGER DEFAULT 864000);
lockname - the name of the lock for which you want to generate a unique ID
lockhandle - returns to the caller the handle to the lock ID generated
expiration_secs - the number of seconds to wait after the last ALLOCATE_UNIQUE before the lock is subject to cleanup
For example (LOCKHANDLE is an OUT parameter, so a bind variable is needed):

SQL> VARIABLE lockhandle VARCHAR2(128)
SQL> EXEC dbms_lock.allocate_unique('testlock', :lockhandle);
29.1.2 Requesting a Lock To request a lock with a given mode, use the request function.
DBMS_LOCK.REQUEST (
  LOCKHANDLE        IN VARCHAR2,
  LOCKMODE          IN INTEGER DEFAULT X_MODE,
  TIMEOUT           IN INTEGER DEFAULT MAXWAIT,
  RELEASE_ON_COMMIT IN BOOLEAN DEFAULT FALSE)
RETURN INTEGER;
To view the locks currently held in the database:
SQL> SELECT * FROM dba_locks;
Return value: 0 = Success, 1 = Timeout, 2 = Deadlock, 3 = Parameter error, 4 = Don't own lock specified by ID or lock handle, 5 = Illegal lock handle.
Example: 1. To convert the lock from shared to exclusive mode:
SQL> VARIABLE status NUMBER
SQL> EXEC :status := DBMS_LOCK.CONVERT(3300, DBMS_LOCK.X_MODE);
2. To release a lock:
DBMS_LOCK.RELEASE (ID IN INTEGER) RETURN INTEGER;
SQL> EXEC :status := DBMS_LOCK.RELEASE(3300);
3. To suspend the session for a given period of time, use the SLEEP procedure.
DBMS_LOCK.SLEEP (SECONDS IN NUMBER);
SQL> EXEC DBMS_LOCK.SLEEP(10);
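Putting the pieces together, here is a sketch of a session that names, acquires, and releases a user lock (the lock name and 5-second timeout are arbitrary examples):

```sql
DECLARE
  v_handle VARCHAR2(128);
  v_status INTEGER;
BEGIN
  -- generate (or look up) the handle for a named lock
  DBMS_LOCK.ALLOCATE_UNIQUE('demo_lock', v_handle);
  -- request it in exclusive mode, waiting up to 5 seconds
  v_status := DBMS_LOCK.REQUEST(v_handle, DBMS_LOCK.X_MODE, 5, FALSE);
  IF v_status = 0 THEN
    DBMS_OUTPUT.PUT_LINE('lock acquired');
    v_status := DBMS_LOCK.RELEASE(v_handle);
  END IF;
END;
/
```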
29.2 DBMS_PIPE
The DBMS_PIPE package allows two or more sessions in the same instance to communicate. Oracle pipes are similar in concept to the pipes in UNIX, but Oracle pipes are not implemented using the operating system pipe mechanisms. Information sent through Oracle pipes is buffered in the SGA; all information in the pipes is lost when the instance is shut down. The following table shows the procedures that can be called:
CREATE_PIPE - Explicitly create a pipe
PACK_MESSAGE - Build a message in the local buffer
SEND_MESSAGE - Send a message on the named pipe
RECEIVE_MESSAGE - Copy a message from a named pipe into the local buffer
NEXT_ITEM_TYPE - Return the data type of the next item in the buffer
UNPACK_MESSAGE - Access the next item in the buffer
REMOVE_PIPE - Remove the pipe
PURGE - Purge the contents of a named pipe
RESET_BUFFER - Purge the contents of the local buffer
UNIQUE_SESSION_NAME - Return a unique session name
To create a new pipe:
DBMS_PIPE.CREATE_PIPE (
  PIPENAME    IN VARCHAR2,
  MAXPIPESIZE IN INTEGER DEFAULT 8192,
  PRIVATE     IN BOOLEAN DEFAULT TRUE)
RETURN INTEGER;
pipename - the name of the pipe you are creating; the name must be unique across the instance
maxpipesize - the maximum size allowed for the pipe, in bytes; the total size of all the messages on the pipe cannot exceed this amount
private - use the default, TRUE, to create a private pipe; public pipes can be created implicitly when you call SEND_MESSAGE
For example (CREATE_PIPE is a function, so its return value must be captured):

SQL> VARIABLE status NUMBER
SQL> EXEC :status := DBMS_PIPE.CREATE_PIPE('testpipe', 8192, true);
1. To remove a pipe explicitly (pipes created implicitly by SEND_MESSAGE are automatically removed when empty):
DBMS_PIPE.REMOVE_PIPE (PIPENAME IN VARCHAR2)
RETURN INTEGER;
SQL> EXEC :status := DBMS_PIPE.REMOVE_PIPE('testpipe');
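2. Send a message: build the message in the local message buffer with PACK_MESSAGE, then write the buffer to the pipe with SEND_MESSAGE, a function that returns 0 on success. A minimal sketch, using the testpipe created above (the message text is an arbitrary example):

```sql
DECLARE
  v_status INTEGER;
BEGIN
  DBMS_PIPE.PACK_MESSAGE('hello');                 -- stage the item in the local buffer
  v_status := DBMS_PIPE.SEND_MESSAGE('testpipe');  -- flush the buffer to the pipe
END;
/
```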
3. Receive a message: to receive a message from a pipe, first call RECEIVE_MESSAGE to copy the message into the local message buffer. When you receive a message, it is removed from the pipe; that is, a message can only be received once.
DBMS_PIPE.RECEIVE_MESSAGE (PIPENAME IN VARCHAR2, TIMEOUT IN INTEGER DEFAULT MAXWAIT)
RETURN INTEGER;
For example:
status := dbms_pipe.send_message('proc1', 10);
status := dbms_pipe.receive_message('testpipe', 10);
29.3 DBMS_ALERT Package - Creating Alerts
The DBMS_ALERT package provides support for the asynchronous notification of database events. By appropriate use of this package and database triggers, an application can cause itself to be notified whenever values of interest in the database are changed. The following table shows the procedures included in this package:
REGISTER - Register the session to receive notifications of an alert
REMOVE - Disable notification from an alert
SIGNAL - Signal an alert (send a message to registered sessions)
WAITANY - Wait up to timeout seconds to receive a message from any alert registered for the session
WAITONE - Wait up to timeout seconds to receive a message from a named alert
SET_DEFAULTS - Set the polling interval
1. To set the default polling interval for alerts:
DBMS_ALERT.SET_DEFAULTS (POLLING_INTERVAL IN NUMBER);
For example:

SQL> EXEC DBMS_ALERT.SET_DEFAULTS(120);
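As a sketch of the notification flow, one session registers and waits for an alert while another signals it (the alert name, payload, and 60-second timeout are arbitrary examples):

```sql
-- Session A: register interest, then block for up to 60 seconds
BEGIN
  DBMS_ALERT.REGISTER('demo_alert');
END;
/
DECLARE
  v_message VARCHAR2(1800);
  v_status  INTEGER;    -- 0 = alert received, 1 = timed out
BEGIN
  DBMS_ALERT.WAITONE('demo_alert', v_message, v_status, 60);
END;
/

-- Session B: signal the alert; delivery happens when the transaction commits
BEGIN
  DBMS_ALERT.SIGNAL('demo_alert', 'table refreshed');
  COMMIT;
END;
/
```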
29.4 DBMS_JOB Package
This package allows control of the Oracle job queues. Oracle job queues allow DBAs to schedule, execute and eliminate jobs from within Oracle itself, independent of the operating system queuing mechanisms.
1. To submit a job:
SQL> ed ins.sql
begin
  for i in 1..10 loop
    insert into test values (i);
  end loop;
end;
/
The general form of the call is:
DBMS_JOB.SUBMIT(:job, '<what: PL/SQL to execute>', <next_date>, '<interval expression>');
For example:
SQL> VARIABLE x NUMBER
SQL> EXEC DBMS_JOB.SUBMIT(:x, 'begin for i in 1..10 loop insert into test values (i); end loop; end;', sysdate, 'sysdate+1/(24*60*60)');
You can see the job number in the DBA_JOBS view. The parameters specified are:
job - the number assigned to the job (returned in :x)
what - the PL/SQL you want to submit as a job
next_date - the date from which you want to start the job
interval - a date expression giving the next time the job should execute
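Once submitted, the job can be inspected, forced to run immediately, or removed using its job number (21 here is only an illustration):

```sql
SQL> SELECT job, what, next_date, interval FROM dba_jobs;
SQL> EXEC DBMS_JOB.RUN(21);       -- run the job immediately
SQL> EXEC DBMS_JOB.REMOVE(21);    -- eliminate the job from the queue
```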
In the above example, /disk1, /disk2, and /disk3 are external disk subsystems. The reason for this layout is that if the internal disk becomes corrupted, we can simply re-install Linux after replacing the drive, and everything else can function normally. Also make sure your external drives are running with either RAID-0 or RAID-5, so that disk problems won't stop the show. Login as root and do the following:
1. Create the directory structure to hold the software
# mkdir -p /oraeng/app/oracle/product/10.2.0
3. Create a user called oracle10g, under whose account you will install the software.
# useradd -u 10001 -g oinstall -G dba -d /oraeng/app/oracle/product/10.2.0 -m oracle10g
# passwd oracle10g
Changing password for user oracle10g
New UNIX password:
Retype new UNIX password:
6. Now login as oracle10g user 7. Update your profile to suit your environment and do the following:
$ vi .bash_profile
export ORACLE_SID=ORCL
export ORACLE_HOME=/oraeng/app/oracle/product/10.2.0
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/ucblib:/usr/openwin/lib
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/ccs/bin:/usr/ucb/bin:$PATH:.
export CLASSPATH=$ORACLE_HOME/jlib
:wq
$ . .bash_profile
Disk Space Requirements
400MB of disk space in the /tmp directory
Between 1.5GB and 3.5GB of disk space for the Oracle software, depending on the installation type
1.2GB of disk space for a preconfigured database that uses file system storage (optional)
OS Requirements
One of the following 64-bit OS versions:
Solaris 8 update 7 or later
Solaris 9 update 6 or later
Solaris 10
On Solaris 9, the required packages are:
SUNWarc SUNWhea SUNWsprox SUNWxwfnt SUNWi15cs SUNWi1cs SUNWlibm SUNWlibms SUNWsprot SUNWtoo SUNWbtool SUNWi1of
To determine whether the required packages are installed, enter a command similar to the following:
# pkginfo -i SUNWarc SUNWsprox SUNWxwfnt SUNWi15cs SUNWi1cs SUNWbtool SUNWhea SUNWlibm SUNWlibms SUNWsprot SUNWtoo SUNWi1of
If a package is not installed, then install it. In addition, you need to verify that the following patches are installed on the system.
Patches for Solaris 9:
112233-11, SunOS 5.9: Kernel Patch
111722-04, SunOS 5.9: Math Library (libm) patch
The following additional patches are required for NUMA systems:
115675-01, SunOS 5.9: liblgrp API
113471-08, SunOS 5.9: Miscellaneous SunOS Commands Patch
113875-01, SunOS 5.9: /usr/lib/liblgrp.so Patch
To determine whether an operating system patch is installed, enter a command similar to the following:
# /usr/sbin/patchadd -p | grep patch_number   (without the version number)
# useradd -u 10001 -g oinstall -G dba -d /oraeng/app/oracle/product/10.2.0 -m oracle10g
# passwd oracle10g
Changing password for user oracle10g
New UNIX password:
Retype new UNIX password:
5. Configure Kernel Parameters Open the /etc/system file in any text editor and if necessary, add lines similar to the following:
# vi /etc/system
set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
:wq
# reboot
32 INDEX
ASM, 21
BIGFILE Tablespace, 15
Control file, 30
Control files, 7
init.ora, 12
Install Oracle on Linux, 131
Install Oracle on Solaris, 133
Instance, 8
PGA, 9
Raw devices, 124
Recovery Catalog Schema, 97
Recovery Manager (RMAN), 97
Redo-log, 7
Rollback segments, 23
Schema, 7
SCN, 11
SQL*Loader, 119
Storage parameters, 20
SYSAUX Tablespace, 12
Tablespaces, 15
UNDO tablespace, 25