Oracle Database 11g: OCM Exam
Preparation Workshop
Activity Guide Volume I
D69748GC20
Edition 2.0
June 2013
D82217
Technical Contributors and Reviewers
Sharath Bhujani
Joel Goodman
Setsuko Fujitani
Lakshmi Narapareddi

Editors
Vijayalakshmi Narasimhan
Rashmi Rajagopal

This document is protected by copyright and other intellectual property laws. You may copy and print this document solely for your own use in an Oracle training course. The document may not be modified or altered in any way. Except where your use constitutes "fair use" under copyright law, you may not use, share, download, upload, copy, print, display, perform, reproduce, publish, license, post, transmit, or distribute this document in whole or in part without the express authorization of Oracle.

The information contained in this document is subject to change without notice. If you find any problems in the document, please report them in writing to: Oracle University, 500 Oracle Parkway, Redwood Shores, California 94065 USA. This document is not warranted to be error-free.

Restricted Rights Notice
If this documentation is delivered to the United States Government or anyone using

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Contents
These eKit materials are to be used ONLY by you for the express purpose of SELF STUDY. SHARING THE FILE IS STRICTLY PROHIBITED.
Appendix A: Practices
Practice 1-1: Configuring the Initial Oracle Network Environment A1-2
Practice 1-2: Creating an Oracle Database A1-3
Practice 1-3: Managing the Oracle Instance A1-4
Practice 1-4: Managing Undo Data A1-5
Practice 1-5: Managing Database Storage Structures A1-6
Practice 4-8: Using Reference Partitioning A4-10
Practice 4-9: Using Interval Partitioning A4-11
Appendix B: Solutions
Solutions for Practice 1-1: Configuring the Initial Oracle Network Environment B1-2
Solutions for Practice 5-1: EXPLAIN PLAN and SQL*Plus AUTOTRACE B5-1
Solutions for Practice 5-2: SQL Trace and TKPROF B5-4
Appendix C: Manual Solutions
Manual Solutions for Practice 1-1: Configuring the Initial Oracle Network Environment C1-2
Manual Solutions for Practice 1-2: Creating an Oracle Database C1-4
Manual Solutions for Practice 1-3: Managing the Oracle Instance C1-8
Manual Solutions for Practice 1-4: Managing Undo Data C1-10
Manual Solutions for Practice 1-5: Managing Database Storage Structures C1-11
Manual Solutions for Practice 1-6: Configuring the Oracle Network
Environment C1-14
Manual Solutions for Practice 1-7: Oracle Shared Server C1-17
Manual Solutions for Practice 1-8: Using Password Security Feature C1-21
Manual Solutions for Practice 4-7: Using SecureFiles C4-29
Manual Solutions for Practice 4-8: Using Reference Partitioning C4-31
Manual Solutions for Practice 7-1: Adding a Physical Standby Database to your
Configuration C7-1
Manual Solutions for Practice 7-2: Using Real-Time Query C7-7
Manual Solutions for Practice 7-3: Performing Switchover C7-10
Manual Solutions for Practice 7-4: Creating and Managing a Snapshot Standby
Database C7-13
Manual Solutions for Practice 7-5: Configuring RMAN Parameters C7-18
Manual Solutions for Practice 7-6: Setting the Data Protection Mode C7-19
Manual Solutions for Practice 7-7: Enabling Fast-Start Failover C7-21
Appendix A: Practices
Your Tasks
1. First, stop the default listener and then create a LISTENER listener. Use the following
information:
Object Setting
Listener name LISTENER
Host <fully qualified host name of your odd PC>
Protocol TCP/IP
2. Configure local naming methods for the new orcl database and the existing PROD1
database. The orcl database will be created in the practice titled Practice 1-2: Creating an
Oracle Database. Use the following information:
Object Setting
Service Name orcl.oracle.com
Protocol TCP/IP
Port 1521
Host IP address or fully qualified host name of your odd PC
Net Service Name orcl
Object Setting
Service Name PROD1.us.oracle.com
Protocol TCP/IP
Port 1521
Host IP address or fully qualified host name of your odd PC
Net Service Name PROD1
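Steps 1 and 2 might look as follows in listener.ora and tnsnames.ora (the host name is a placeholder; in the practice you would generate these entries with Net Manager or Net Configuration Assistant rather than typing them by hand):

```text
# listener.ora: LISTENER on TCP port 1521 (host name is a placeholder)
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = yourhost.example.com)(PORT = 1521)))

# tnsnames.ora: local naming entries for the orcl and PROD1 services
orcl =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = yourhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl.oracle.com)))

PROD1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = yourhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PROD1.us.oracle.com)))
```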
Background: You are about to begin creating your first Oracle database. You anticipate that
several similar databases will be needed in the near future. Therefore, you decide to create your
orcl database, as well as a database template and the database creation scripts. Locate the
scripts in the /home/oracle/labs directory (which is the directory that you use most often
throughout this course).
Note: Completing the database creation is critical for all following practice sessions.
Your Tasks
Background: You have just installed the Oracle software and created a database. You want to
ensure that you can start and stop the database and see the application data.
Your Tasks
1. View the initialization parameters of the orcl database. Set the JOB_QUEUE_PROCESSES
parameter to 30.
5. In the alert log, view the phases that the database went through during startup.
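Tasks 1 and 5 can be sketched in SQL*Plus as follows (the V$DIAG_INFO query is one common way to find the alert log directory; the parameter value comes from the task):

```sql
-- Task 1: inspect, then set, the parameter
SHOW PARAMETER job_queue_processes
ALTER SYSTEM SET job_queue_processes = 30;

-- Task 5: locate the alert log directory, then open the alert log and
-- look for the NOMOUNT, MOUNT, and OPEN phases of the last startup
SELECT value FROM v$diag_info WHERE name = 'Diag Trace';
```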
Background: In the orcl database, a new version of your application will include several
reports based on very long-running queries. Configure your system to support these reports.
Your Tasks
1. Use the Undo Advisor to calculate the amount of undo space required to support a report that
takes 60 minutes to run, on the basis of an analysis period of the last seven days.
2. Resize the undo tablespace to support the retention period required by the new reports (or
400 MB, whichever is smaller). Do this by increasing the size of the existing data file.
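A sketch of the resize in task 2, assuming the undo tablespace uses the default data file name (substitute the file reported by DBA_DATA_FILES and the size suggested by the Undo Advisor):

```sql
-- Size comes from the Undo Advisor analysis (or 400 MB, whichever is
-- smaller); the data file path is an assumption
ALTER DATABASE DATAFILE
  '/u01/app/oracle/oradata/orcl/undotbs01.dbf' RESIZE 400M;
```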
Background: You need to create a new tablespace for the INVENTORY application in the orcl
database. All scripts are located in the /home/oracle/labs directory.
Your Tasks
1. Create a new, locally managed tablespace called INVENTORY. Use the following
specifications:
Object Setting
Tablespace name INVENTORY
2. Run the lab_01_05_02.sql script to create and populate a table (called X) in the
INVENTORY tablespace. What error do you eventually see?
_______________________________________________________________________
3. Define space for 50 MB in the tablespace instead of 5 MB, while keeping the same single
data file in the tablespace. What is the ALTER statement that is executed to make this change?
_______________________________________________________________________
4. Run the lab_01_05_04.sql script that drops the table and re-executes the original script
that previously returned the space error.
Note that the same number of row inserts is attempted, and there is no error because of the
increased size of the tablespace.
5. Create a new, bigfile tablespace called BIGTBS. Use the following specifications:
Object Setting
Tablespace name BIGTBS
Extent Management Locally Managed
Status Read Write
File Size 5 MB
Data File Name bigtbs.dbf
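Step 5 can be sketched as a single statement (bigfile tablespaces are locally managed by default; the data file directory is an assumption):

```sql
CREATE BIGFILE TABLESPACE bigtbs
  DATAFILE '/u01/app/oracle/oradata/orcl/bigtbs.dbf' SIZE 5M;
```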
Background: You need to connect to the instructor's orcl database instance. Work with your
instructor to enable connections by using different methods. Ensure that users can use
connect-time failover to take advantage of a backup listener.
Your Tasks
2. Modify your local names resolution file so that you can connect to your instructor's orcl
database instance.
3. Test your changes to the network configuration by using SQL*Plus. Use system as the
username, oracle as the password, and testorcl as the connect string. To see the
information related to the instructor, select the instance_name and host_name columns
from the v$instance view. You should see the instructor's host name.
4. Create a LISTENER2 listener to support connect-time failover. Use port 1561 for this
listener. Use the Static Database Registration tab to connect the listener to your database.
Use the following information:
Object Setting
Listener name LISTENER2
Host <fully qualified host name of your odd PC>
Service name orcl.oracle.com
Protocol TCP/IP
Port 1561
SID orcl
Oracle Home Directory /u01/app/oracle/product/11.2.0/dbhome_1
You notice that your system is performing poorly during peak load times. After investigating,
you find that user sessions are consuming so much memory that your system is swapping
excessively. Configure your system to reduce the amount of memory that is consumed by user
sessions.
Tasks
2. Configure your system to use shared servers with a minimum of two shared servers; the
dispatchers attribute for TCP/IP should be set to a minimum of two dispatchers.
4. Configure two additional local naming methods for the orcl database. The
shared_orcl alias always uses a shared server connection and the orcl alias always
uses a dedicated server connection.
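Task 2 can be sketched with two ALTER SYSTEM commands; for task 4, the shared_orcl and orcl aliases would differ only in the SERVER clause of their CONNECT_DATA sections:

```sql
-- Minimum of two shared servers and two TCP/IP dispatchers
ALTER SYSTEM SET shared_servers = 2;
ALTER SYSTEM SET dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=2)';
-- In tnsnames.ora, shared_orcl would add (SERVER=SHARED) and orcl
-- (SERVER=DEDICATED) inside CONNECT_DATA
```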
You decide to increase the security of your passwords by enforcing case sensitivity for the
password for privileged users.
If there is no HR schema on the orcl database, run the HR.sh script that is located at
/home/oracle/labs. The script will create the HR schema.
$ cd /home/oracle/labs
$ ./HR.sh
3. Confirm the Password Case Sensitivity settings for the instance for privileged users.
Attempt to connect to the orcl instance as the SYSDBA user using a net service name
and:
A lowercase password
An uppercase password
A mixed-case password
4. Make the password file case-sensitive to match the database password usage.
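Tasks 3 and 4 can be sketched as follows (the password file name follows the orapw<SID> convention; the orapwd line is run from the OS shell):

```sql
-- Task 3: case sensitivity for regular logins
SHOW PARAMETER sec_case_sensitive_logon
-- Task 4: recreate the password file as case-sensitive
-- $ orapwd file=$ORACLE_HOME/dbs/orapworcl password=oracle ignorecase=n
```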
1. Create a tablespace for the recovery catalog and a recovery catalog owner in your PROD1
database. The tablespace name is RCTS. The username is RCUSER with the password
oracle. Grant this user the RECOVERY_CATALOG_OWNER role.
2. Connect to the recovery catalog database (PROD1) with the appropriate recovery catalog
owner name (RCUSER) using RMAN. Create the recovery catalog in the RCTS tablespace.
3. Using RMAN, connect to your target database orcl and the recovery catalog database
PROD1.
4. Using RMAN, execute the resync catalog command to resynchronize the control file
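Steps 1-4 can be sketched as follows (the tablespace size is an assumption, and the bare DATAFILE SIZE clause assumes Oracle Managed Files; the RMAN lines are entered at the RMAN prompt):

```sql
-- Step 1, in the PROD1 database
CREATE TABLESPACE rcts DATAFILE SIZE 10M;   -- size is an assumption
CREATE USER rcuser IDENTIFIED BY oracle
  DEFAULT TABLESPACE rcts QUOTA UNLIMITED ON rcts;
GRANT recovery_catalog_owner TO rcuser;
-- Step 2:  $ rman catalog rcuser/oracle@PROD1
--          RMAN> CREATE CATALOG TABLESPACE rcts;
-- Steps 3 and 4:  $ rman target / catalog rcuser/oracle@PROD1
--                 RMAN> RESYNC CATALOG;
```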
1. For the orcl database, configure autobackup of the control file and the server parameter file.
2. Configure backup optimization and enable block change tracking. Specify
/u01/app/oracle/oradata/orcl/chg_track.f for the name of the block change
tracking file.
3. Create a whole database backup as the base backup of the incremental backup using the
Oracle-suggested backup strategy.
4. View information about your backups.
5. Create an archival backup of the orcl database for long-term storage using the tag
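Steps 1 and 2 can be sketched as follows (the RMAN configuration lines are entered at the RMAN prompt; block change tracking is enabled with the SQL statement shown, using the file name from the task):

```sql
-- RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
-- RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/oradata/orcl/chg_track.f';
```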
1. Use SQL*Plus to query the HR.REGIONS table. Make a note of the number of rows in the
HR.REGIONS table.
2. At the operating system prompt, execute the lab_02_04_02.sh script that is located in
/home/oracle/labs to simulate a failure in your database. This script deletes the
EXAMPLE tablespace data file.
3. Use SQL*Plus to query the HR.JOBS table.
1. Use SQL*Plus to view information about the control files in the orcl database. Query
V$CONTROLFILE.
2. Simulate a failure in your environment by executing the lab_02_05_02.sh script that is
located in /home/oracle/labs to delete all your control files.
3. You need some more information about your control files. Query
V$CONTROLFILE_RECORD_SECTION to learn more about the contents of your control
file.
4. You have lost all your control files and will need to recover them from the control file
If there is no SH schema on the orcl database, run the SH.sh script that is located at
/home/oracle/labs. The script will create the SH schema.
$ cd /home/oracle/labs
$ ./SH.sh
1. This practice will demonstrate the use of external tables to load data into a data warehouse.
The table will be called sales_delta_XT and it will use data that is contained in the
salesDec01.dat file, which is located in the /home/oracle/labs directory. Before
you can create an external table, you will need to create a directory object in the database that
2. When creating an external table, you define two kinds of information:
The metadata for the table representation inside the database
The access parameter definition describing how to extract the data from the external file
After this metadata is created, the external data can be accessed from within the
database without the need for an initial load.
Execute the following statements as the SH user to create the external table:
CREATE TABLE sales_delta_XT (
PROD_ID NUMBER,
CUST_ID NUMBER,
TIME_ID DATE,
CHANNEL_ID CHAR(2),
PROMO_ID NUMBER,
QUANTITY_SOLD NUMBER(3),
AMOUNT_SOLD NUMBER(10,2)
)
ORGANIZATION external (
TYPE oracle_loader
DEFAULT DIRECTORY data_dir
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
BADFILE log_dir:'sh_sales.bad'
LOGFILE log_dir:'sh_sales.log_xt'
FIELDS TERMINATED BY "|" (
prod_id, cust_id,
time_id CHAR(11) DATE_FORMAT DATE MASK "DD-MON-YYYY",
channel_id, promo_id, quantity_sold, amount_sold)
)
location('salesDec01.dat')
);
4. Load the data serially from the sales_delta_xt external table into the SALES fact table.
Roll back the operation when you have finished. Execute the following SQL statements:
5. Using the Data Pump driver, create an external table called sales_ch from the SALES
table that contains all the records that have a channel_id of 4. Use the same data_dir
directory that you created in step 1. Name the Data Pump output file that will be created as
'sales_ch.exp'. Execute the following SQL statements to create the external table:
Make sure that you can access the sales_ch table. Look at the sales_ch.exp file that is
being used by the new table (and created in the process).
If there is no HR schema on the orcl database, run the HR.sh script that is located at
/home/oracle/labs. The script will create the HR schema.
$ cd /home/oracle/labs
$ ./HR.sh
Background: In the recent past, you received a number of questions about the HR schema. To
analyze them without interfering with the daily activities, you decide to use the Data Pump
Wizard to export the HR schema to a file. When you perform the export, you are not sure into
which database you will be importing this schema.
HR_TEST tablespace:
DATAFILE : /u01/app/oracle/oradata/orcl/hr_test01.dbf
SIZE : 10MB
HR_TEST user:
DEFAULT TABLESPACE : HR_TEST
QUOTA : UNLIMITED on HR_TEST
PRIVILEGE : CREATE SESSION
3. As the SYSTEM user, import the exported HR schema back into the orcl database,
remapping it to the previously created HR_TEST schema and HR_TEST tablespace.
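The HR_TEST objects and the step 3 import can be sketched as follows (the HR_TEST password, the dump file name, and the source tablespace in REMAP_TABLESPACE are assumptions):

```sql
CREATE TABLESPACE hr_test
  DATAFILE '/u01/app/oracle/oradata/orcl/hr_test01.dbf' SIZE 10M;
CREATE USER hr_test IDENTIFIED BY oracle   -- password is an assumption
  DEFAULT TABLESPACE hr_test QUOTA UNLIMITED ON hr_test;
GRANT CREATE SESSION TO hr_test;
-- Step 3, from the OS shell:
-- $ impdp system directory=data_pump_dir dumpfile=hr.dmp \
--     remap_schema=HR:HR_TEST remap_tablespace=EXAMPLE:HR_TEST
```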
If there is no SH schema on the orcl database, run the SH.sh script that is located in
/home/oracle/labs. The script will create the SH schema.
$ cd /home/oracle/labs
$ ./SH.sh
1. On the orcl database, connect as SH with the password sh and estimate the number of rows
a materialized view corresponding to the following query would contain:
2. Execute the following SQL to create a table called CUST_ID_SALES_AGGR. Compare the
estimated size that you obtained in step 1 with the actual number of rows in the existing
CUST_ID_SALES_AGGR table. This table contains the data corresponding to the query in
step 1. What is your conclusion?
4. Assume that the CUST_ID_SALES_AGGR table indeed corresponds with the result of the
query in step 1. Use this knowledge to create a materialized view on the prebuilt
CUST_ID_SALES_AGGR table. Make sure that this materialized view can be used for future
query rewrites and is fast refreshable on demand.
5. Count the number of objects in the SH schema again, to check for new objects. From the data
dictionary, identify the objects having CUST_ID_SALES_AGGR as name. What type of
objects are these? What is your conclusion?
that MV1 should not be defined on the prebuilt table CUST_ID_SALES_AGGR, and its
SELECT list should not contain the following two expressions: COUNT(amount_sold)
and COUNT(*). Count the number of objects in the SH schema again. What new objects
have been created? What is your conclusion?
7. Compare the staleness status of the materialized views that you have created. What is your
conclusion?
8. Execute the lab_03_03_08.sql script. This script adds one row to the CUSTOMERS
table. Check the staleness status and the compile status of both the materialized views.
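The materialized view in step 4 can be sketched as follows (the SELECT list is an assumption inferred from the table name and from step 6, which mentions the COUNT(amount_sold) and COUNT(*) expressions; fast refresh additionally requires a materialized view log on SALES):

```sql
CREATE MATERIALIZED VIEW cust_id_sales_aggr
  ON PREBUILT TABLE
  REFRESH FAST ON DEMAND
  ENABLE QUERY REWRITE
AS
  SELECT cust_id,
         SUM(amount_sold)   AS dollar_sales,
         COUNT(amount_sold) AS cnt_amount,
         COUNT(*)           AS cnt
  FROM   sales
  GROUP BY cust_id;
```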
If there is no SH schema on the orcl database, run the SH.sh script that is located in
/home/oracle/labs. The script will create an SH schema.
$ cd /home/oracle/labs
$ ./SH.sh
The salesDec01.dat file that is located in /home/oracle/labs has been extracted from
an OLTP database and needs to be loaded into the data warehouse. You need a transformation,
because the data is not in the right format. The QUANTITY_SOLD and AMOUNT_SOLD columns
must be summed, grouping on all other columns, before loading the data into the target table.
1. Connect to SQL*Plus as the SH user and create a staging table SALES_DEC01 to load the
data into. The table must have the same structure as the SALES table.
Hint: Use the clause WHERE 1=0.
Modify the channel_id column to accept CHAR data because the initial data that you will
load is not in exactly the same format as the SALES fact table.
CREATE TABLE sales_dec01 AS
SELECT *
FROM sales
WHERE 1=0;
2. Load the data from the salesDec01.dat file into the SALES_DEC01 staging table, using
the sales_dec01.ctl control file that is located in /home/oracle/labs. Verify the
information in the control file before loading.
4. When you have successfully loaded the data into the SALES table, drop the staging table
sales_dec01.
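Step 2 is an OS-level SQL*Loader run and step 4 a simple drop (the log file name is an assumption):

```sql
-- Step 2, from the OS shell:
-- $ sqlldr sh/sh control=sales_dec01.ctl log=sales_dec01.log
-- Step 4, after the data is in SALES:
DROP TABLE sales_dec01;
```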
$ cd /home/oracle/labs
$ ./SH.sh
1. Assume that you want to move the January 2000 sales data from your data warehouse to
the SALES table in your data mart. The data must be placed into a separate tablespace in
order to be transported. So, create a tablespace called tt_temp_sales to hold the
January sales data. After creating the tablespace, create a table called
4. You will transport the tablespace to the PROD1 database. Create the
/home/oracle/PROD1 directory and copy the data file and the dump file to the
directory. Verify that the files have been successfully copied.
/home/oracle/PROD1 that will be used by the Data Pump import, and create a user
named SH by using the following SQL statement.
When this is done, use the impdp utility as shown as follows to make the tt_transfer
tablespace accessible to the PROD1 database:
7. When the import is finished, verify that the temp_jan_sales table is accessible.
8. Drop the tt_temp_sales tablespace and the orcl_dir directory from the orcl
database.
9. Drop the tt_temp_sales tablespace and the PROD1_dir directory from the PROD1
database.
1. In the orcl database, verify the current value of the initialization parameters that are used
for parallel execution.
3. Tune the parameter so that a statement is executed serially if its computed elapsed time is
below 20 seconds in this session.
1. In the orcl database, connect as the SH user and create the partitioned table
LITTLE_SALES by executing the following SQL statements:
CREATE TABLE little_sales
PARTITION BY HASH (time_id)
(PARTITION LS1,PARTITION LS2)
PARALLEL
AS
SELECT * FROM sales WHERE 1=2;
Note that the LITTLE_SALES table has only two partitions and the dictionary degree of
Is the INSERT statement executed in parallel? You can confirm it by examining the
V$PQ_SESSTAT view.
2. Enable parallel DML in your session. Execute the INSERT statement again and confirm the
output of selecting the v$pq_sesstat view.
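Step 2 can be sketched as follows; without ENABLE PARALLEL DML, the INSERT runs serially even though the table has a parallel setting:

```sql
ALTER SESSION ENABLE PARALLEL DML;
INSERT INTO little_sales SELECT * FROM sales;   -- re-run the step 1 INSERT
COMMIT;
-- The "DML Parallelized" statistic should now be nonzero
SELECT statistic, last_query FROM v$pq_sesstat;
```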
1. In the orcl database, connect as SYSDBA and query the current values of the initialization
parameters dealing with parallel execution that are of interest to you. Execute the following
SQL statements:
5. Execute the following SQL statements. This script joins the two tables, TEMP_SALES and
TEMP_CHANNELS, using a DOP of 5. Examine V$PQ_SESSTAT to determine the DOP for
the operation.
SELECT count(*)
FROM temp_sales s, temp_channels c
WHERE (s.channel_id) = (c.channel_id);
7. After you have finished, drop the TEMP_CHANNELS and TEMP_SALES tables.
In this practice, you optimize a query to use star transformation and access the benefits of using
this optimizer technique.
1. From a terminal session, connect as the oracle user and execute the
setup_star_schema_lab.sh script that is located in your /home/oracle/labs
directory.
2. From the same terminal window, start a SQL*Plus session connected as the SH user and do
not disconnect from it until this practice finishes. Before executing the following SQL
statement, ensure that you flush both the shared pool and the buffer cache to avoid caching
3. Without modifying the SH schema, how can you improve the execution plan for the query
mentioned in step 2? Verify your solution and explain why it is probably a better solution.
You can use second_run.sql.
4. How would you enhance the previous optimization without changing the SH schema? You
can use third_run.sql.
5. How do you eliminate one access on the CUSTOMERS table from the previous execution
plan for the same SELECT statement seen in step 3?
7. Fix the issue that you found and apply your solution from step 5 again.
8. Verify that you solved the problem from step 5. You can use fourth_run.sql.
9. Clean up your environment by removing the index that you created and by returning the
constraint to its original state by using the cleanup_star_schema_lab.sql script.
In the orcl database, there is a business requirement that a record must be logged whenever
employee salary information is accessed. INSERT, UPDATE, and DELETE are recorded in a
journal table by using triggers. Create a proof of concept solution for SELECT accesses. Create a
user PFAY and prove that SELECT accesses will be recorded.
1. Create a security officer account that has privileges to create user accounts, grant privileges,
and administer fine-grained auditing and fine-grained access control. This account is named
SEC with the password sec. In this and subsequent practices, security functionality is
implemented in a single user. Create this user, giving it the following properties:
Name is SEC
4. Create an encrypted tablespace named ENCTS that uses the default encryption algorithm with
a size of 10 MB. Use the name of the data file $ORACLE_HOME/dbs/encts.dat.
5. View the data dictionary.
If there is no HR schema on the orcl database, run the HR.sh script that is located
in/home/oracle/labs. The script will create the HR schema.
$ cd /home/oracle/labs
$ ./HR.sh
1. Execute the following SQL to query the HR.LOCATIONS table for location ID 1400.
SELECT *
FROM hr.locations
WHERE location_id = 1400;
UPDATE hr.locations
SET postal_code = postal_code + 100
WHERE location_id = 1400;
commit;
3. Execute the following SQL to query the POSTAL_CODE column in HR.LOCATIONS and
view the change.
SELECT *
FROM hr.locations
WHERE location_id = 1400;
4. Execute the following SQL to update the POSTAL_CODE column in the HR.LOCATIONS
table, simulating user error.
UPDATE hr.locations
SET postal_code = postal_code + 100
WHERE location_id = 1400;
commit;
5. Perform Flashback Versions Query to correct user errors.
6. Return to your SQL*Plus session. Query the HR.LOCATIONS table to confirm the Flashback
operation.
SELECT *
FROM hr.locations
WHERE location_id = 1400;
In this practice, you create an application context, set the context using a secure package, and test
the context.
1. Connect as SYSTEM using ORACLE@orcl as password. Using the SYS_CONTEXT
procedure, display the following session-related attributes:
CURRENT_USER
SESSION_USER
PROXY_USER
IP_ADDRESS
The procedure that sets the application context has the following properties:
Owned by: SEC user
Part of: CURRENT_EMP package
Name: SET_EMP_INFO
It is called from a logon trigger named EMP_LOGON that is also owned by SEC. This
trigger applies to all users.
Execute $HOME/labs/lab_04_04_03_a.sql to create the package and package body.
In this practice, you create, enable, and test a Fine-Grained Access Control (FGAC) policy.
1. How does FGAC determine which rows belong in the Virtual Private Database (VPD) for the
current user?
2. How does FGAC know which tables are defined in the VPD?
3. The SEC user also needs the privilege to create policies. As SYSTEM, grant SEC the ability to
execute the package that creates policies.
4. What privilege exempts the user from access policies? Why does the SEC user need this
privilege? Grant it to SEC.
1. Ensure that you are pointing to the orcl database. Using SQL*Plus, connect to the database
as the SYS user and execute the lab_04_06_01.sql script from the
/home/oracle/labs directory. The script creates a small FLA_TBS1 tablespace, creates
the ARCHIVE_ADMIN user with the ARCHIVE_ADMIN password, and unlocks the HR user
with the hr password. The password is case-sensitive by default.
2. Give the ARCHIVE_ADMIN user administrative privileges for creating, maintaining, and
4. Create a Flashback Data Archive named fla1 using fla_tbs1 tablespace and set the
retention to 1 year.
6. Now, switch to the role of a flashback archive user. Connect as the HR user with the hr
password. Enable this flashback archive for the EMPLOYEES table.
7. To view and increase the salary of Mr. Fox three times by 1000, execute the
lab_04_06_07.sql script. This produces activity in the Flashback Data Archive.
9. As the HR user, choose a time after the creation of the Flashback Data Archive and before you
executed the erroneous DML. To view Mr. Fox's employee record as of that time, execute the
following query (replace the 10 MINUTE interval with your chosen historic offset; format
examples: 50 SECOND, 10 DAY, 5 MONTH):
Note: You receive an ORA-1466 error if you specify a time before the Flashback Data
Archive was started. Reduce the time to a smaller interval and try again. If you still see the
salary of 12600, increase your time interval.
UPDATE hr.employees
SET salary = (SELECT salary FROM hr.employees
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE)
WHERE last_name = 'Fox')
WHERE last_name = 'Fox';
11. As the ARCHIVE_ADMIN user, drop the fla1 Flashback Data Archive.
Note: Dropping a Flashback Data Archive includes dropping the internal tamper-proofed
history table. You cannot drop this table directly due to auditing and security requirements.
12. Connected as the SYS user, clean up your environment by deleting the fla_tbs1
tablespace and the ARCHIVE_ADMIN user.
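The Flashback Data Archive steps can be sketched as follows, each run as the user named in the task (HR may additionally need the FLASHBACK ARCHIVE object privilege on fla1):

```sql
GRANT FLASHBACK ARCHIVE ADMINISTER TO archive_admin;   -- step 2, as SYS
CREATE FLASHBACK ARCHIVE fla1                          -- step 4
  TABLESPACE fla_tbs1 RETENTION 1 YEAR;
ALTER TABLE hr.employees FLASHBACK ARCHIVE fla1;       -- step 6, as HR
DROP FLASHBACK ARCHIVE fla1;                           -- step 11
```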
1. In the orcl database, create a tablespace, secf_tbs, by using the following SQL
statements. Enable sh to use the secf_tbs tablespace.
2. Connect to the orcl database as SH and create the T1 table in the secf_tbs tablespace
with the following columns:
EMPNO NUMBER
3. Clean up your environment by dropping table t1 and deleting the secf_tbs tablespace.
$ cd /home/oracle/labs
$ ./SH.sh
2. Still in your SQL*Plus session, connect as the SH user. Execute the lab_04_08_02.sql
script to create the range-partitioned ORDERS table.
Interval partitioning fully automates the creation of range partitions. Managing the creation of
new partitions can be a cumbersome and highly repetitive task. This is especially true for
predictable additions of partitions covering small ranges, such as adding new daily partitions.
Interval partitioning automates this operation by creating partitions on demand.
2. Find the information about the NEWSALES table in the dictionary view.
3. Execute the lab_04_09_03.sql script to insert new data into the NEWSALES table that
forces the creation of a new partition (segment).
If there is no SH schema on the orcl database, run the SH.sh script that is located at
/home/oracle/labs. The script will create the SH schema. You can ignore the errors on the
Data Pump Import utility.
$ cd /home/oracle/labs
$ ./SH.sh
1. Open a terminal window. Set the current directory to labs. Start SQL*Plus. Log in to the
orcl database as the SH user with the password sh.
SELECT cust_first_name, cust_last_name
FROM customers
WHERE cust_id = 100;
1. Open a terminal window. Set the current directory to labs. Start SQL*Plus. Log in to the
orcl database as the SH user with the password sh. Make sure that AUTOTRACE is
disabled.
2. Provide an identifier for the trace file to help you locate it. Drop all indexes (except the index
of the primary key) on the CUSTOMERS table. Enable SQL Trace. Analyze the following
SQL statement by using SQL Trace and TKPROF.
SELECT max(cust_credit_limit)
3. Now create an index on the CUST_CITY column, and then run the same query again.
4. Disable tracing.
5. Determine the location of the trace files by using the SHOW PARAMETER DIAGNOSTIC
command and making a note of the DIAGNOSTIC_DEST destination.
8. Locate your file by the file identifier that you gave in step 2. Look for a file called
orcl_ora_xxxxx_OCMWS.trc.
9. View the difference in the execution plans and statistics of the SQL statement with and
without an index. You can use gedit to do this. Change back to your home directory, and then
to the labs directory.
10. Now take a look at DBMS_MONITOR. Start two sessions, one connected as SYS as SYSDBA
and the other connected as SH.
11. From the SYSDBA session, determine the session ID (sid) and serial number (serial#)
from v$session for the SH user, and then describe the DBMS_MONITOR package. Then,
from the SYSDBA session, enable tracing using the sid and serial# values for the other
session, including the waits and bind information.
13. From the remaining SYSDBA session, determine your DIAGNOSTIC_DEST location, locate
the trace file, and view the contents. Determine the location of the trace files by using the
SHOW PARAMETER DIAGNOSTIC command and making a note of the
DIAGNOSTIC_DEST destination.
15. Change the directory to the DIAGNOSTIC_DEST destination that you retrieved by the
previous query.
$ ls -ltr
19. You can use gedit to view the file. Change back to your home directory and then to the
labs directory.
2. Enable AUTOTRACE and run the following SQL statement. The WHERE clause contains three
predicates. Execute this statement and take note of the indexes used, the cost of the execution
plan, and the amount of I/O performed.
3. Drop the indexes (except the index of the primary key) again and replace them with a single
concatenated index on the three columns CUST_GENDER, CUST_POSTAL_CODE, and
CUST_CREDIT_LIMIT. Then run the SQL statement of step 2 again.
4. Drop all indexes (except the index of the primary key) on the CUSTOMERS table. Create three
bitmapped indexes on CUST_GENDER, CUST_POSTAL_CODE, and
CUST_CREDIT_LIMIT, and then run the SQL statement of step 2 again. This statement
has a complicated WHERE clause. Bitmapped indexes are good for this type of statement. You
see several bitmap operations in the execution plan.
5. Finally, investigate the benefits of function-based indexes. Drop all indexes on the
CUSTOMERS table. First, create a normal index on the CUST_LAST_NAME column and run
the following SQL statement:
Create a function-based index that utilizes the LOWER function on the CUST_LAST_NAME
column.
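Step 5 can be sketched as follows (the index name and the query predicate are assumptions; the point is that a predicate on LOWER(cust_last_name) can use the function-based index but not the normal one):

```sql
CREATE INDEX cust_lname_lower_idx
  ON customers (LOWER(cust_last_name));
SELECT cust_id FROM customers
WHERE LOWER(cust_last_name) = 'smith';
```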
1. Connect to the orcl database as the sh user and create a MY_CUST table that is identical to
the CUSTOMERS table.
CREATE table my_cust AS SELECT * FROM customers;
2. Query USER_TABLES to verify the existence of statistics for the MY_CUST table.
3. Run the following query on the table, and then view the execution plan using AUTOTRACE
TRACEONLY EXPLAIN.
SELECT * FROM my_cust WHERE cust_id < 50;
5. Run the query again and view the execution plan by using AUTOTRACE.
7. Drop the index that you created. Then verify the index statistics again.
What do you see?
8. Identify the last analyzed date and sample size for all the tables in your schema.
9. Identify the types of histograms for all the columns in your schema.
Hint: Query USER_TAB_COL_STATISTICS.
11. Now consider histograms. First flush the shared pool. Then run lab_05_04_11.sql that
creates the NEW_CUST table and populates it with skewed data. It also creates an index and a
histogram on the skew data column CUST_ID. After this script is run, the CUST_ID column
has 1,000 rows with a value of 1, one row with a value of 2, and one row with a value of 3.
Now run the following statement and use AUTOTRACE to get the execution plan.
SELECT count(ord_total) FROM new_cust where cust_id = 1;
You see that this is a full table scan. Now try running the following SQL. Is the index used?
SELECT count(ord_total) FROM new_cust where cust_id = 2;
The optimizer uses the histogram to determine whether to use an index.
In this practice, you work with both deferred statistics publishing and statistics extensions. The
basic idea of this practice is to test various statistics-gathering strategies on a particular table
before publishing the best ones in your production environment.
1. Confirm that the ORACLE_SID environment variable is set to orcl. Execute the stats_setup.sh script.
This script creates a new user called STATS and creates and populates a new table called
STATS.TABJFV.
2. Start a SQL*Plus session connected as user STATS. Do not disconnect from that session.
Make sure that you delete all existing statistics on STATS.TABJFV and verify that none exist.
3. Determine the publishing mode for STATS.TABJFV statistics, and set it to PENDING mode.
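One way to check and change the publishing mode with DBMS_STATS (a sketch; the practice's own scripts may differ):

```sql
-- Current publishing mode for the table (TRUE = publish immediately)
SELECT DBMS_STATS.GET_PREFS('PUBLISH', 'STATS', 'TABJFV') FROM dual;

-- Switch the table to PENDING mode: newly gathered statistics are
-- kept pending instead of being published.
EXEC DBMS_STATS.SET_TABLE_PREFS('STATS', 'TABJFV', 'PUBLISH', 'FALSE')
```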
4. Collect statistics on STATS.TABJFV and investigate the result. What do you observe? Use
the collect_pending.sql script.
5. From a terminal window, connected as the STATS user in a SQL*Plus session (do not exit
from this session after this step), disable dynamic sampling for your session and determine
the number of rows that the optimizer can currently estimate for the following query:
Hint: Use an explain plan to view the execution plan and SQL statement as follows:
select plan_table_output
from table(dbms_xplan.display('plan_table',null,'BASIC ROWS'));
6. Now, switch your session to use pending statistics that were previously collected.
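Switching the session to pending statistics is done with an initialization parameter:

```sql
-- Make the optimizer use pending (unpublished) statistics
-- in this session only.
ALTER SESSION SET optimizer_use_pending_statistics = TRUE;
```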
7. Determine again the optimizer's estimate of the number of rows returned by your query.
What do you observe?
8. Create a statistics extension to group C1 and C2 to indicate that both columns are correlated
in STATS.TABJFV. When done, gather statistics again on your table with maximum
precision for your extension.
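A sketch of this step, assuming C1 and C2 are the correlated columns named in the practice:

```sql
-- Create a column group so the optimizer sees the C1/C2 correlation.
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
         ownname   => 'STATS',
         tabname   => 'TABJFV',
         extension => '(C1,C2)')
FROM   dual;

-- Regather with maximum histogram precision on the extension.
EXEC DBMS_STATS.GATHER_TABLE_STATS('STATS', 'TABJFV', -
     method_opt => 'FOR COLUMNS (C1,C2) SIZE 254')
```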
9. Determine again the optimizer's estimate of the number of rows returned by your query.
What do you observe?
2. In your terminal window, log in to SQL*Plus as the QRC user. From now on, do not
disconnect from this session. Determine the current content of the query cache by using the
following statement:
select type,status,name,object_no,row_count,row_size_avg
from v$result_cache_objects order by 1;
5. Determine the current content of the query cache. What do you observe?
6. Flush the buffer cache of your instance and rerun the query that was executed in step 3. What
do you observe?
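Flushing the buffer cache requires the ALTER SYSTEM privilege, so it is typically done from a SYS session:

```sql
-- Discard all clean buffers so subsequent reads come from disk.
ALTER SYSTEM FLUSH BUFFER_CACHE;
```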
7. Insert a new row into the CACHEJFV table by using the following statement:
insert into cachejfv values('c');
commit;
8. Execute your first query again and check the result cache. What do you observe?
11. Clear the result cache. Query V$RESULT_CACHE_OBJECTS to verify the clear operation.
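The server result cache can be cleared with the DBMS_RESULT_CACHE package; a sketch:

```sql
-- Remove all objects from the result cache...
EXEC DBMS_RESULT_CACHE.FLUSH

-- ...then verify that it is empty.
SELECT type, status, name
FROM   v$result_cache_objects
ORDER  BY 1;
```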
DEFAULT_PLAN resource plan. Then you map a couple of Oracle users and your major OS user
to resource groups. Activate the resource plan and test your assignments.
Log in as the SYS user (with oracle as the password, connect as SYSDBA) and perform the
necessary tasks either through Enterprise Manager Database Control or through SQL*Plus. All
scripts for this practice are in the /home/oracle/labs directory.
If there is no HR, SCOTT, OE, or PM user, execute lab_05_07.sql to create sample users by
using SQL*Plus.
1. Using Enterprise Manager Database Control, create a resource consumer group called APPUSER. At
this point, do not add users to the group.
2. Add the APPUSER and LOW_GROUP consumer groups to the DEFAULT_PLAN resource
plan. Change the level 3 CPU resource allocation percentages: 60% for the APPUSER
consumer group and 40% for the LOW_GROUP consumer group.
3. Configure Consumer Group Mappings so that the HR Oracle user belongs to the APPUSER
consumer group and the SCOTT Oracle user to the LOW_GROUP consumer group. For the
SCOTT user, confirm that the ORACLE_USER attribute has a higher priority than the
CLIENT_OS_USER attribute.
4. Configure Consumer Group Mappings so that the oracle OS user belongs to the
SYS_GROUP consumer group.
5. Assign the PM Oracle user to the following consumer groups: APPUSER, LOW_GROUP, and
SYS_GROUP.
7. Test the consumer group mappings. Start two SQL*Plus sessions: the first with the
system/oracle@orcl connect string and the second with the scott/scott@orcl
connect string. Test other mappings as well.
/home/oracle/labs. The script creates the SH schema. You can ignore any errors from the
Data Pump Import utility.
$ cd /home/oracle/labs
$ ./SH.sh
2. Using Enterprise Manager, create a SQL Access Advisor tuning task based on the captured
workload that is in the SH.SQLSET_MY_ACCESS_WORKLOAD SQL tuning set by using the
SQLACCESS_WAREHOUSE template.
1. Create a baseline named Monday over past snapshots of Monday and compute statistics over
the static baseline.
Background: SQL Plan Management (SPM) is a new Oracle Database 11g feature that provides
controlled execution plan evolution.
With SPM, the optimizer automatically manages execution plans and ensures that only known or
verified plans are used.
When a new plan is found for a SQL statement, it will not be used until it has been verified to
have comparable or better performance than the current plan.
1. Before you can start this practice, you need to set up a new user. Execute the
spm_setup.sh script to set up the environment for this practice. This script creates the SPM user.
2. The first component of SPM is Plan Capture. There are two main ways to capture plans:
automatically (on the fly), or bulk load. You look at automatic capture first. Connect to the
orcl database as the spm user with the password spm and enable automatic plan capture so
that the SPM repository is automatically populated for any repeatable SQL statement.
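Automatic plan capture is enabled with a session parameter:

```sql
-- Capture a plan baseline for every repeatable SQL statement
-- executed in this session.
ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;
```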
3. Execute the following query in your SQL*Plus session (no space in /*LOAD):
select /*LOAD_AUTO*/ * from sh.sales where quantity_sold > 40 order
by prod_id;
Use the spm_query1.sql script to execute the query.
4. Because this is the first time the optimizer has seen this SQL statement, it is not yet repeatable,
so there is no plan baseline for it. To confirm this, check that no plan baseline was
loaded. Check whether any plan baselines exist for your statement.
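A dictionary query along these lines shows any captured baselines (the LIKE filter is illustrative):

```sql
SELECT sql_handle, plan_name, origin, enabled, accepted
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%LOAD_AUTO%';
```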
6. The SQL statement is now known to be repeatable and a plan baseline is automatically
captured. Check that the plan baseline was loaded for the previous statement. What do you
observe?
7. Now, change or alter the optimizer mode to use FIRST_ROWS optimization and re-execute
your statement. Describe what happened.
8. Now reset the optimizer mode to default values and disable auto capture of plan baselines.
9. Purge the plan baselines and confirm that the SQL plan baseline is empty. Use
purge_auto_baseline.sql.
10. Now, you will see how to directly load plan baselines from the cursor cache. Before you
begin, you need some SQL statements. Still connected to your SQL*Plus session, check the
execution plan for the following SQL statement, and then execute it (use the
explain_spm_query3.sql and spm_query3.sql scripts):
11. Now change the optimizer mode to use FIRST_ROWS optimization and re-execute the statement
from step 10. What do you observe?
13. Now that the cursor cache is populated, you need to get the SQL ID for your SQL statement
by using the following SQL. Use the SQL ID to filter the content of the cursor cache and load
the baselines with these two plans.
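Loading the plans from the cursor cache might look like this sketch; the substitution variable stands in for the SQL ID found in the first query:

```sql
-- Find the SQL ID first (the filter text is illustrative).
SELECT sql_id, sql_text
FROM   v$sql
WHERE  sql_text LIKE '%spm_query3%';

-- Load all cached plans for that SQL ID as baselines.
SET SERVEROUTPUT ON
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '&sql_id');
  DBMS_OUTPUT.PUT_LINE(n || ' plan(s) loaded');
END;
/
```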
In this practice, you simulate exporting a SQL Tuning Set (STS) from a 10g database and importing
it back into an 11g test environment. There, you assess the performance of the SQL statements
that you imported before upgrading the 10g database.
If you do not use an spfile, first create an spfile from the pfile and restart the orcl database.
1. From a terminal window, which is referred to as the first session, execute the
setup_SPAbig10g.sh script to set up your simulated 10g environment. In this simulated
environment, the OPTIMIZER_FEATURES_ENABLE parameter is set to 10.2.0.2.
2. (Perform steps 2 and 3 at the same time) Generate a SQL Tuning Set (STS) called STS_JFV
that captures SQL statements from the cursor cache every five seconds for approximately
12 minutes. Make sure that you try to capture only statements from the SQL_JFV module in the
APPS schema. Also, this STS should belong to the SYS user. Use the capsts10g.sh
script to perform this step.
$./capsts10g.sh
3. (Perform steps 2 and 3 at the same time) From a second terminal window, connected as the
oracle user, execute your workload by using the wrkl10g_jfv.sh script. This script
runs a workload of 45 statements that will be captured in STS_JFV automatically.
$./wrkl10g_jfv.sh
4. After approximately 12 minutes, both sessions should have finished. Connect to the orcl
database as the sys user, check the content of STS_JFV, and stage it in a table called
APPS.STS_JFV_TAB.
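Staging an STS into a table is a two-step DBMS_SQLTUNE operation; a sketch using the names from this practice:

```sql
-- Create an empty staging table in the APPS schema...
EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLSET( -
     table_name => 'STS_JFV_TAB', schema_name => 'APPS')

-- ...then pack the tuning set into it.
EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLSET( -
     sqlset_name          => 'STS_JFV', -
     staging_table_name   => 'STS_JFV_TAB', -
     staging_schema_owner => 'APPS')
```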
5. By using Data Pump Export, export the APPS schema to the default Data Pump directory
(DATA_PUMP_DIR).
6. Now restart your 11g environment to restore the database to an 11g environment. Use the
setup_SPAbig11g.sh script to perform this step.
$./setup_SPAbig11g.sh
9. As the SYS user, use Enterprise Manager Database Control to test the behavior of STS_JFV
in the simulated 10g environment and compare it to the 11g environment. You will do this by
changing the OPTIMIZER_FEATURES_ENABLE parameter. What are your conclusions?
To begin monitoring your targets, you need to deploy the Management Agent. In the lesson, you
learned that there are several methods available for deploying the agent. In this practice, you
install the agent by using the Deployments tab.
1. Log in to the Grid Control console by using sysman as the user and Oracle123 as the
password. You receive a security alert at first login; click OK, and then add an exception
on the Secure Connection Failed page. After logging in, click the
Deployments tab. On the Deployments page, in the Agent Installation section, click the
Install Agent link. The Select the type of Agent Deployment that you want to perform
page appears, showing the various installation options. Click Fresh Install to perform a new
installation.
Now that the Management Agent has been deployed to your targets, you need to configure
monitoring credentials for the EMREP database.
1. Log in to the Grid Control console as the sysman user using the Oracle123 password.
Click the Targets tab and find the database target. Note that on the Databases subtab, the
status of the EMREP database shows as unavailable. This is because Grid Control does not
have monitoring credentials configured yet for this database. Configuring the database
involves setting the monitor password for the dbsnmp user on your database to the
appropriate value, in this case, Oracle123.
You create a Super Administrator who will use your Grid Control console, so that you can avoid
using sysman if you want.
1. Create an administrator called Sys#_Admin and make this administrator a Super
Administrator. Assign a password of Oracle123 to this administrator.
Grid Control.
1. What is the value of the pga_aggregate_target initialization parameter?
2. Add another data file to the USERS tablespace.
3. Schedule a full database backup to happen at 2:00 PM two weeks from today.
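The first two tasks are performed in Grid Control, but the equivalent SQL might look like this (the data file path is illustrative). The backup schedule in step 3 is defined through a Grid Control job rather than SQL:

```sql
-- Step 1: current PGA target
SHOW PARAMETER pga_aggregate_target

-- Step 2: add a data file (path and size are assumptions)
ALTER TABLESPACE users
  ADD DATAFILE '/u01/app/oracle/oradata/orcl/users02.dbf' SIZE 100M;
```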
1. Change the thresholds for the Tablespace Space Used (%) metric. For the USERS tablespace,
set the warning and critical thresholds to 80% and 90%, respectively.
You need to define the email addresses to receive notifications. You can specify multiple
addresses if you want to be notified in different ways. In this practice, you define email addresses
to receive notifications in different formats.
1. Log in as Sys#_Admin and using the Preferences link, define email addresses to receive
notifications. Define two email addresses. The first email address should have a long message
format and the second email should have a short message format.
an alert is triggered.
1. Define a schedule assuming that you work from 9:00 AM to 5:00 PM every weekday, except
Thursday. On Thursday, you work from 3:00 PM to 11:00 PM. You want to receive an email
in the long format on all your working days and receive emails in the long and short format
on Thursday.
Notification rules are sets of conditions that determine when a notification occurs. Grid Control
provides a few out-of-the-box notification rules for some of the most common problem
situations. Administrators can subscribe to them by selecting the Subscribe (Send E-mail) check
box for the respective notification rule.
Subscribe to the out-of-the-box notification rule Agents Unreachable so that you can be alerted
whenever any agent is not reachable. Use the Public Rules section on the Preferences page.
tasks. First, you monitor existing scheduler elements, and then you create scheduler components
and test them.
In this practice, you use Enterprise Manager Grid Control to define and monitor the Scheduler and
automate tasks. Click Show SQL regularly to review any statements that are new to you.
Log in to Enterprise Manager Grid Control as the sysman user. Log in to the orcl database as
the SYS user (with oracle as password, connect as SYSDBA) or as the HR user (with hr as
password, connect as Normal), as indicated. Perform the necessary tasks either through
Enterprise Manager Grid Control or through SQL*Plus. All scripts for this practice are in the
/home/oracle/labs directory.
Question 1: Are there any existing windows? What are their names?
___________________________________
___________________________________
___________________________________
___________________________________
6. Review the Scheduler Job Classes page in Enterprise Manager. Are there any existing job
classes? If so, which resource consumer group is associated with each job class?
___________________________________
___________________________________
___________________________________
___________________________________
In this practice, you use Enterprise Manager Grid Control to create Scheduler objects and
automate tasks.
Prerequisite: Ensure that you complete step 1 in Practice 6-9, which gives the HR user
administrative privileges.
1. While you are logged in to the database as the HR user, on the Grid Control orcl database
page, create a simple job that runs a SQL script:
General:
Name: CREATE_LOG_TABLE_JOB
Hint: You may have to refresh the page for the Schedule to appear.
4. Using Enterprise Manager Grid Control, create a job named LOG_SESSIONS_JOB that uses
the LOG_SESS_COUNT_PRGM program and the SESS_UPDATE_SCHED schedule. Make
sure that the job uses FULL logging.
5. In your SQL*Plus session, check the HR.SESSION_HISTORY table for rows.
Question: If there are rows in the table, are the time stamps three seconds apart?
_____________________________________
Note: Make sure that you do not delete the wrong schedule.
This practice helps you become familiar with the Job System. You create various jobs, both
single-task and multitask, and interact with them. To understand the Job functionality, you
perform the following tasks:
Create a job that runs a SQL script immediately (save this job to the library).
1. Create a simple job called Team# SQL Job that runs a SQL script against the database on
your database host. This job should perform a SELECT statement on the employees table
in the HR schema. Use select * from hr.employees; for the SQL script. Save the job
to the Job Library, so that you can make changes to it later. Go to the Job Library and run the
job. The result of this job should be similar to:
Primary Database
Database Name: orcl
Instance Name: orcl
Database Unique Name: orcl
Grid Control Target Name: orcl.oracle.com
Settings:
Host: odd PC
File Location: /u01/app/oracle/oradata/stdby
1. Enable Real-Time Query and insert a new row into the HR.REGIONS table of the primary database.
Then confirm that the newly inserted row can be queried against the physical standby database
while Redo Apply is active.
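On the standby, enabling Real-Time Query amounts to opening the database read-only and then restarting Redo Apply; a sketch:

```sql
-- On the physical standby:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT;
```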
2. Switch back to orcl (the original primary database) and confirm that orcl is the primary
database.
1. Configure the Flashback Database on your primary and physical standby databases.
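Flashback Database is typically enabled as follows (depending on the 11g release, the database may need to be mounted with Redo Apply stopped; a configured fast recovery area is assumed):

```sql
ALTER DATABASE FLASHBACK ON;

-- Verify:
SELECT flashback_on FROM v$database;
```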
1. In your RMAN session (connected to your primary database), configure the backup retention
policy to allow recovery for seven days.
2. Specify that the archived redo log files can be deleted after they are applied to the standby
database.
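In RMAN, steps 1 and 2 correspond to configuration commands along these lines:

```
-- Step 1: retain backups needed to recover to any point in 7 days.
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;

-- Step 2: allow deletion of archived logs once applied on the standby.
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
```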
Practice 7-6: Setting the Data Protection Mode
1. Set the protection mode to maximum availability.
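On the primary, this is a single statement (redo transport to the standby must already use SYNC for maximum availability to take effect):

```sql
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;

-- Verify:
SELECT protection_mode, protection_level FROM v$database;
```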
1. Use the Oracle Universal Installer (runInstaller) and install Oracle Grid Infrastructure using
the following information.
Now that Grid Infrastructure has been installed, you install Oracle Database 11g Release 2
software.
1. Use the Oracle Universal Installer (runInstaller) and install Oracle Database 11g Release 2
software using the following information:
ORACLE_BASE /u01/app/oracle
Software Location /u01/app/oracle/product/11.2.0/dbhome_1
Inventory /u01/app/oraInventory