1. which two statements are true about identifying unused indexes? (choose two.)

a. performance is improved by eliminating unnecessary overhead during dml
operations.
b. v$index_stats displays statistics that are gathered when using the monitoring
usage keyword.
c. each time the monitoring usage clause is specified, the v$object_usage view is
reset for the specified index.
d. each time the monitoring usage clause is specified, a new monitoring start time
is recorded in the alert log.

answer: a, c

explanation:
monitoring index usage
oracle provides a means of monitoring indexes to determine if they are being used
or not used. if it is determined that an index is not being used, then it can be
dropped, thus eliminating unnecessary statement overhead.
to start monitoring an index's usage, issue this statement:
alter index index monitoring usage;
later, issue the following statement to stop the monitoring:
alter index index nomonitoring usage;
the view v$object_usage can be queried for the index being monitored to see if the
index has been used. the view contains a used column whose value is yes or no,
depending upon if the index has been used within the time period being monitored.
the view also contains the start and stop times of the monitoring period, and a
monitoring column (yes/no) to indicate if usage monitoring is currently active.
each time that you specify monitoring usage, the v$object_usage view is reset for
the specified index. the previous usage information is cleared or reset, and a new
start time is recorded. when you specify nomonitoring usage, no further monitoring
is performed, and the end time is recorded for the monitoring period. until the
next alter index ... monitoring usage statement is issued, the view information is
left unchanged.
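as a quick, hedged illustration of the full cycle (the index name emp_name_idx is
only an example, not taken from the question):

alter index emp_name_idx monitoring usage;

-- run the workload, then check whether the index was touched
select index_name, table_name, monitoring, used, start_monitoring, end_monitoring
from v$object_usage
where index_name = 'EMP_NAME_IDX';

alter index emp_name_idx nomonitoring usage;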

2. you need to create an index on the sales table, which is 10 gb in size. you
want your index to be spread across many tablespaces, decreasing contention for
index lookups, and increasing scalability and manageability.
which type of index would be best for this table?

a. bitmap
b. unique
c. partitioned
d. reverse key
e. single column
f. function-based

answer: c

explanation:
i suggest that you read chapters 10 & 11 in oracle9i database concepts release 2
(9.2) march 2002 part no. a96524-01 (a96524.pdf)

oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01
(a96524.pdf) ch 10 bitmap indexes
the purpose of an index is to provide pointers to the rows in a table that contain
a given key value. in a regular index, this is achieved by storing a list of
rowids for each key corresponding to the rows with that key value. oracle stores
each key value repeatedly with each stored rowid. in a bitmap index, a bitmap for
each key value is used instead of a list of rowids.
each bit in the bitmap corresponds to a possible rowid. if the bit is set, then it
means that the row with the corresponding rowid contains the key value. a mapping
function converts the bit position to an actual rowid, so the bitmap index
provides the same functionality as a regular index even though it uses a different
representation internally. if the number of different key values is small, then
bitmap indexes are very space efficient.
bitmap indexing efficiently merges indexes that correspond to several conditions
in a where clause. rows that satisfy some, but not all, conditions are filtered
out before the table itself is accessed. this improves response time, often
dramatically.
note: bitmap indexes are available only if you have purchased the oracle9i
enterprise edition.
see oracle9i database new features for more information about the features
available in oracle9i and the oracle9i enterprise edition.

oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01
(a96524.pdf) ch 11
partitioned indexes
just like partitioned tables, partitioned indexes improve manageability,
availability, performance, and scalability. they can either be partitioned
independently (global indexes) or automatically linked to a table's partitioning
method (local indexes).
local partitioned indexes
local partitioned indexes are easier to manage than other types of partitioned
indexes. they also offer greater availability and are common in dss environments.
the reason for this is equipartitioning: each partition of a local index is
associated with exactly one partition of the table. this enables oracle to
automatically keep the index partitions in sync with the table partitions, and
makes each table-index pair independent. any actions that make one partition's
data invalid or unavailable only affect a single partition.
you cannot explicitly add a partition to a local index. instead, new partitions
are added to local indexes only when you add a partition to the underlying table.
likewise, you cannot explicitly drop a partition from a local index. instead,
local index partitions are dropped only when you drop a partition from the
underlying table.
a local index can be unique. however, in order for a local index to be unique, the
partitioning key of the table must be part of the index's key columns. unique
local indexes are useful for oltp environments.
see also: oracle9i data warehousing guide for more information about partitioned
indexes. (by default, oracle gives each local index partition the same name as the
corresponding table partition, and stores the index partition in the same
tablespace as the table partition.)
global partitioned indexes
global partitioned indexes are flexible in that the degree of partitioning and the
partitioning key are independent from the table's partitioning method. they are
commonly used for oltp environments and offer efficient access to any individual
record.
the highest partition of a global index must have a partition bound, all of whose
values are maxvalue. this ensures that all rows in the underlying table can be
represented in the index. global prefixed indexes can be unique or nonunique. you
cannot add a partition to a global index because the highest partition always has
a partition bound of maxvalue. if you wish to add a new highest partition, use the
alter index split partition statement. if a global index partition is empty, you
can explicitly drop it by issuing the alter index drop partition statement. if a
global index partition contains data, dropping the partition causes the next
highest partition to be marked unusable. you cannot drop the highest partition in
a global index.
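as a rough sketch of how a local partitioned index could spread the 10 gb sales
table's index across several tablespaces (this assumes sales is range-partitioned
into three partitions and that tablespaces idx_ts1 through idx_ts3 exist; all of
these names are illustrative, not from the question):

create index sales_prod_idx on sales (prod_id) local
( partition sales_q1 tablespace idx_ts1,
  partition sales_q2 tablespace idx_ts2,
  partition sales_q3 tablespace idx_ts3 );

each index partition can then be rebuilt or moved independently of the others.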
oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01
(a96524.pdf) ch 10
unique and nonunique indexes
indexes can be unique or nonunique. unique indexes guarantee that no two rows of a
table have duplicate values in the key column (or columns). nonunique indexes do
not impose this restriction on the column values. oracle recommends that unique
indexes be created explicitly, and not through enabling a unique constraint on a
table.
alternatively, you can define unique integrity constraints on the desired columns.
oracle enforces unique integrity constraints by automatically defining a unique
index on the unique key. however, it is advisable that any index that exists for
query performance, including unique indexes, be created explicitly.

reverse key indexes
creating a reverse key index, compared to a standard index, reverses the bytes of
each column indexed (except the rowid) while keeping the column order. such an
arrangement can help avoid performance degradation with oracle9i real application
clusters where modifications to the index are concentrated on a small set of leaf
blocks. by reversing the keys of the index, the insertions become distributed
across all leaf keys in the index.
using the reverse key arrangement eliminates the ability to run an index range
scanning query on the index. because lexically adjacent keys are not stored next
to each other in a reverse-key index, only fetch-by-key or full-index (table)
scans can be performed.
sometimes, using a reverse-key index can make an oltp oracle9i real application
clusters application faster. for example, keeping the index of mail messages in an
e-mail application: some users keep old messages, and the index must maintain
pointers to these as well as to the most recent.
the reverse keyword provides a simple mechanism for creating a reverse key index.
you can specify the keyword reverse along with the optional index specifications
in a create index statement:
create index i on t (a,b,c) reverse;
you can specify the keyword noreverse to rebuild a reverse-key index into one
that is not reverse keyed:
alter index i rebuild noreverse;
rebuilding a reverse-key index without the noreverse keyword produces a rebuilt,
reverse-key index.
function-based indexes
you can create indexes on functions and expressions that involve one or more
columns in the table being indexed. a function-based index computes the value of
the function or expression and stores it in the index. you can create a function-
based index as either a b-tree or a bitmap index.

function-based indexes provide an efficient mechanism for evaluating statements
that contain functions in their where clauses. the value of the expression is
computed and stored in the index. when it processes insert and update statements,
however, oracle must still evaluate the function to process the statement.
for example, if you create the following index:
create index idx on table_1 (a + b * (c - 1), a, b);
then oracle can use it when processing queries such as this:
select a from table_1 where a + b * (c - 1) < 100;

3. which type of index does this syntax create?

create index hr.employees_last_name_idx on hr.employees(last_name)
pctfree 30
storage (initial 200k next 200k
         pctincrease 0 maxextents 50)
tablespace indx;

a. bitmap
b. b-tree
c. partitioned
d. reverse key

answer: b

explanation:
oracle provides several indexing schemes that provide complementary performance
functionality. these are:
1. b-tree indexes - the default and the most common.
2. b-tree cluster indexes - defined specifically for a cluster.
3. hash cluster indexes - defined specifically for a hash cluster.
4. global and local indexes - relate to partitioned tables and indexes.
5. reverse key indexes - most useful for oracle real application clusters
applications.
6. bitmap indexes - compact; work best for columns with a small set of values.
7. function-based indexes - contain the precomputed value of a function/expression.
8. domain indexes - specific to an application or cartridge.

4. the credit controller for your organization has complained that the report she
runs to show customers with bad credit ratings takes too long to run. you look at
the query that the report runs and determine that the report would run faster if
there were an index on the credit_rating column of the customers table.

the customers table has about 5 million rows and around 100 new rows are added
every month. old records are not deleted from the table.
the credit_rating column is defined as a varchar2(5) field. there are only 10
possible credit ratings and a customer's credit rating changes infrequently.
customers with bad credit ratings have a value in the credit_rating column of
'bad' or 'f'.
which type of index would be best for this column?

a. b-tree
b. bitmap
c. reverse key
d. function-based

answer: b

explanation:
ad a: why b-tree is not good for this problem:
(1) b-trees provide excellent retrieval performance for a wide range of queries,
including exact match and range searches.
(2) inserts, updates, and deletes are efficient, maintaining key order for fast
retrieval.
since this column is rarely updated, no records are deleted, and the queries do
not cover a wide range of values, a b-tree index is not a good solution here.

ad c: creating a reverse key index, compared to a standard index, reverses the
bytes of each column indexed (except the rowid) while keeping the column order.
such an arrangement can help avoid performance degradation with oracle9i real
application clusters where modifications to the index are concentrated on a small
set of leaf blocks. by reversing the keys of the index, the insertions become
distributed across all leaf keys in the index.
using the reverse key arrangement eliminates the ability to run an index range
scanning query on the index. because lexically adjacent keys are not stored next
to each other in a reverse-key index, only fetch-by-key or full-index (table)
scans can be performed. a full index (table) scan over a table of roughly 5 million
rows is not practical, so a reverse key index is a poor fit here.

ad d: the function-based index is a new type of index, implemented in oracle8i,
that is designed to improve query performance by making it possible to define an
index that works when your where clause contains operations on columns.
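a minimal sketch for this scenario (customer_id is an assumed column name; the
customers table and credit_rating column come from the question):

create bitmap index customers_credit_rating_bix on customers (credit_rating);

-- the report query only has to read the two small bitmaps for the bad ratings
select customer_id, credit_rating
from customers
where credit_rating in ('BAD', 'F');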

5. your developers asked you to create an index on the prod_id column of the
sales_history table, which has 100 million rows.
the table has approximately 2 million rows of new data loaded on the first day of
every month. for the remainder of the month, the table is only queried. most
reports are generated according to the prod_id, which has 96 distinct values.

which type of index would be appropriate?

a. bitmap
b. reverse key
c. unique b-tree
d. normal b-tree
e. function based
f. non-unique concatenated

answer: a

explanation:
regular b*-tree indexes work best when each key or key range references only a few
records, such as employee names. bitmap indexes, by contrast, work best when each
key references many records, such as employee gender.
bitmap indexes can substantially improve performance of queries with the following
characteristics:
(a) the where clause contains multiple predicates on low- or medium-cardinality
columns.
(b) the individual predicates on these low- or medium-cardinality columns select a
large number of rows.
(c) bitmap indexes have been created on some or all of these low- or medium-
cardinality columns.
(d) the tables being queried contain many rows.

you can use multiple bitmap indexes to evaluate the conditions on a single table.
bitmap indexes are thus highly advantageous for complex ad hoc queries that
contain lengthy where clauses. bitmap indexes can also provide optimal performance
for aggregate queries. here prod_id has only 96 distinct values in a table of 100
million rows: low cardinality and a large number of rows both point to a bitmap
index.

see oracle8 tuning release 8.0 december, 1997 part no. a58246-01 (a58246.pdf) pg.
181. (10-13)

6. you need to create an index on the passport_records table. it contains 10
million rows of data. the key columns have low cardinality. the queries generated
against this table use a combination of multiple where conditions involving the or
operator.
which type of index would be best for this type of table?

a. bitmap
b. unique
c. partitioned
d. reverse key
e. single column
f. function-based

answer: a

explanation:
bitmap indexes can substantially improve performance of queries with the following
characteristics:
(a) the where clause contains multiple predicates on low- or medium-cardinality
columns.
(b) the individual predicates on these low- or medium-cardinality columns select a
large number of rows.
(c) bitmap indexes have been created on some or all of these low- or medium-
cardinality columns.
(d) the tables being queried contain many rows.

you can use multiple bitmap indexes to evaluate the conditions on a single table.
bitmap indexes are thus highly advantageous for complex ad hoc queries that
contain lengthy where clauses. bitmap indexes can also provide optimal performance
for aggregate queries.
ad a: true. low cardinality and a large number of rows both point to bitmap indexes.
see oracle8 tuning release 8.0 december, 1997 part no. a58246-01 (a58246.pdf) pg.
181. (10-13)
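a hedged sketch of how this plays out (the column names country_code and
visa_status are illustrative, not from the question):

create bitmap index passport_country_bix on passport_records (country_code);
create bitmap index passport_status_bix on passport_records (visa_status);

-- oracle merges the two bitmaps to resolve the or condition before touching the table
select count(*)
from passport_records
where country_code = 'US' or visa_status = 'EXPIRED';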

7. the user smith created the sales history table. smith wants to find out the
following information about the sales history table:

- the size of the initial extent allocated to the sales history data segment
- the total number of extents allocated to the sales history data segment

which data dictionary view(s) should smith query for the required information?

a. user_extents
b. user_segments
c. user_object_size
d. user_object_size and user_extents
e. user_object_size and user_segments

answer: b

explanation:
sql> desc user_segments

name null? type

segment_name varchar2(81)
partition_name varchar2(30)
segment_type varchar2(18)
tablespace_name varchar2(30)
bytes number
blocks number
extents number
initial_extent number
next_extent number
min_extents number
max_extents number
pct_increase number
freelists number
freelist_groups number
buffer_pool varchar2(7)
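assuming the segment is actually named sales_history, smith could run something
like this from his own schema:

select segment_name, segment_type, initial_extent, extents
from user_segments
where segment_name = 'SALES_HISTORY';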

8. which password management feature ensures a user cannot reuse a password for a
specified time interval?

a. account locking
b. password history
c. password verification
d. password expiration and aging

answer: b

explanation:
oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01
(a96524.pdf) 22-8
account locking
oracle can lock a user's account if the user fails to log in to the system within a
specified number of attempts. depending on how the account is configured, it can
be unlocked automatically after a specified time interval or it must be unlocked
by the database administrator.
password complexity verification
complexity verification checks that each password is complex enough to provide
reasonable protection against intruders who try to break into the system by
guessing passwords.
password history
the password history option checks each newly specified password to ensure that a
password is not reused for the specified amount of time or for the specified
number of password changes. the database administrator can configure the rules for
password reuse with create profile statements.
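a minimal sketch of such a profile (the profile name, the 90-day limit, and the
user it is applied to are assumptions, not requirements from the question):

create profile pw_history_prof limit
  password_reuse_time 90        -- days before a password can be reused
  password_reuse_max unlimited;

alter user smith profile pw_history_prof;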

9. which view provides the names of all the data dictionary views?

a. dba_names
b. dba_tables
c. dictionary
d. dba_dictionary

answer: c

explanation:
http://docs.rinet.ru:8080/o8/ch02/ch02.htm

all the data dictionary tables and views are owned by sys. you can query the
dictionary table to obtain the list of all dictionary views.
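for example, to list the dba_* views together with their descriptions:

select table_name, comments
from dictionary
where table_name like 'DBA_%';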

10. the control file defines the current state of the physical database.
which three dynamic performance views obtain information from the control file?
(choose three.)

a. v$log
b. v$sga
c. v$thread
d. v$version
e. v$datafile
f. v$parameter

answer: a, c, e

explanation:
v$log: this view contains log file information from the control files.
v$sga: this view contains summary information on the system global area (sga).
v$thread: this view contains thread information from the control file.
v$version: version numbers of core library components in the oracle server. there
is one row for each component.
v$datafile: this view contains datafile information from the control file.
v$parameter: displays information about the initialization parameters that are
currently in effect for the session. a new session inherits parameter values from
the instance-wide values displayed by the v$system_parameter view.

11. which data dictionary view shows the available free space in a certain
tablespace?

a. dba_extents
b. v$freespace
c. dba_free_space
d. dba_tablespaces
e. dba_free_extents

answer: c
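explanation:
a simple sketch of how dba_free_space is typically used (the tablespace name users
is only an example):

select tablespace_name, sum(bytes)/1024/1024 as free_mb
from dba_free_space
where tablespace_name = 'USERS'
group by tablespace_name;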

12. which data dictionary view would you use to get a list of object privileges
for all database users?

a. dba_tab_privs
b. all_tab_privs
c. user_tab_privs
d. all_tab_privs_made

answer: a

explanation:
ad a: true. dba_tab_privs this view lists all grants on objects in the database.
(a58242.pdf) pg. 261. (2-91).
ad b: false. all_tab_privs this view lists the grants on objects for which the
user or public is the grantee. (a58242.pdf) pg. 203. (2-33).
ad c: false. user_tab_privs this view contains information on grants on objects
for which the user is the owner, grantor, or grantee. (a58242.pdf) pg. 333. (2-
163).
ad d: false. all_tab_privs_made this view lists the user's grants and grants on
the user's objects. (a58242.pdf) pg. 204. (2-34).

13. user smith created indexes on some tables owned by user john. you need to
display the following:

index names
index types

which data dictionary view(s) would you need to query?

a. dba_indexes only
b. dba_ind_columns only
c. dba_indexes and dba_users
d. dba_ind_columns and dba_users
e. dba_indexes and dba_ind_expressions
f. dba_indexes, dba_tables, and dba_users

answer: a

explanation:
ad a: dba_indexes. this view contains descriptions for all indexes in the
database. to gather statistics for this view, use the sql command analyze. this
view supports parallel partitioned index scans. (a58242.pdf) pg. 230. (2-60).
ad b: dba_ind_columns. this view contains descriptions of the columns comprising
the indexes on all tables and clusters. (a58242.pdf) pg. 232. (2-62).
ad c: dba_users. this view lists information about all users of the database.
(a58242.pdf) pg. 267. (2-97).
ad e: dba_ind_expressions does not exist.
ad f: dba_tables. this view contains descriptions of all relational tables in the
database. to gather statistics for this view, use the sql command analyze.
(a58242.pdf) pg. 262. (2-92).
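a hedged query sketch for this question (the user names are as given in the
question):

select index_name, index_type
from dba_indexes
where owner = 'SMITH'
and table_owner = 'JOHN';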

14. you need to know how many data files were specified as the maximum for the
database when it was created. you did not create the database and do not have the
script used to create the database. how could you find this information?

a. query the dba_data_files data dictionary view.
b. query the v$datafile dynamic performance view.
c. issue the show parameter control_files command.
d. query the v$controlfile_record_section dynamic performance view.

answer: d

explanation:
ad a: false. dba_data_files contains information about database files. we need
information about max number of datafiles. see (a58242.pdf) pg. 225. (2-55)
ad b: v$datafile contains datafile information from the control file. (a58242.pdf)
pg. 363. (3-23)
ad c: this command just shows the locations of the current control files.
ad d: v$controlfile_record_section displays information about the controlfile
record sections. (a58242.pdf) pg. 360. (3-20)
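for example, the datafile section of the control file reflects the maxdatafiles
setting chosen when the database was created:

select type, records_total
from v$controlfile_record_section
where type = 'DATAFILE';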

15. examine the command:

create table employee
( employee_id number
    constraint employee_empid_pk primary key,
  employee_name varchar2(30),
  manager_id number
    constraint employee_mgrid_fk references employee(employee_id));

the employee table contains self-referential integrity requiring all not null
values inserted in the manager_id column to exist in the employee_id column. which
view or combination of views is required to return the name of the foreign key
constraint and the referenced primary key?

a. dba_tables only
b. dba_constraints only
c. dba_tab_columns only
d. dba_cons_columns only
e. dba_tables and dba_constraints
f. dba_tables and dba_cons_columns

answer: b

explanation:
ad a: false. dba_tables contains descriptions of all relational tables in the
database. to gather statistics for this view, use the sql command analyze. no
constraint information see (a58242.pdf) pg. 262 (2-92).
ad b: true. dba_constraints contains constraint definitions on all tables. see
(a58242.pdf) pg. 253 (2-83).
ad c: false. dba_tab_columns contains information which describes columns of all
tables, views, and clusters. no constraint name information. see (a58242.pdf) pg.
259 (2-89).
ad d: false. dba_cons_columns contains information about accessible columns in
constraint definitions. see (a58242.pdf) pg. 224 (2-54).
ad e: false. we don't need the dba_tables.
ad f: false.
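a minimal sketch against the table from the question (the owner predicate is
omitted for brevity):

select constraint_name, constraint_type, r_constraint_name
from dba_constraints
where table_name = 'EMPLOYEE'
and constraint_type = 'R';   -- 'R' = referential (foreign key) constraint

r_constraint_name returns the name of the referenced primary key constraint
(employee_empid_pk).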

16. which data dictionary view(s) do you need to query to find the following
information about a user?
- whether the user's account has expired
- the user's default tablespace name
- the user's profile name

a. dba_users only
b. dba_users and dba_profiles
c. dba_users and dba_tablespaces
d. dba_users, dba_ts_quotas, and dba_profiles
e. dba_users, dba_tablespaces, and dba_profiles

answer: a

explanation:
sql> desc dba_users

name null? type


username not null varchar2(30)
user_id not null number
password varchar2(30)
account_status not null varchar2(32)
lock_date date
expiry_date date
default_tablespace not null varchar2(30)
temporary_tablespace not null varchar2(30)
created not null date
profile not null varchar2(30)
initial_rsrc_consumer_group varchar2(30)
external_name varchar2(4000)
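so a single query such as this (smith is just an example user) answers all three
questions:

select username, account_status, expiry_date, default_tablespace, profile
from dba_users
where username = 'SMITH';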

17. you need to determine the location of all the tables and indexes owned by one
user. in which dba view would you look?

a. dba_tables
b. dba_indexes
c. dba_segments
d. dba_tablespaces

answer: c

explanation:
ad a: false. dba_tables contains descriptions of all relational tables in the
database. to gather statistics for this view, use the sql command analyze. no
index information see (a58242.pdf) pg. 262 (2-92).
ad b: false. dba_indexes contains descriptions for all indexes in the database. to
gather statistics for this view, use the sql command analyze. this view supports
parallel partitioned index scans. no table information. see (a58242.pdf) pg. 230
(2-60).
ad c: true. dba_segments contains information about storage allocated for all
database segments. username of the segment owner, type of segment: ... table,
index .... see (a58242.pdf) pg. 254 (2-84).
ad d: false. dba_tablespaces contains descriptions of all tablespaces. no table
and index information. see (a58242.pdf) pg. 264 (2-94).
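a hedged example (the owner name smith is illustrative):

select segment_name, segment_type, tablespace_name
from dba_segments
where owner = 'SMITH'
and segment_type in ('TABLE', 'INDEX');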

18. which data dictionary view would you use to get a list of all database users
and their default settings?

a. all_users
b. user_users
c. dba_users
d. v$session

answer: c

explanation:
ad a: false. all_users this view contains information about all users of the
database: name of the user, id number of the user, user creation date, but no
default settings. see (a58242.pdf) pg. 209 (2-39).
ad b: false. user_users this view contains information about the current user only,
not all users. see (a58242.pdf) pg. 339 (2-169).
ad c: true. dba_users this view lists information about all users of the database.
default tablespace for data, default tablespace for temporary table see
(a58242.pdf) pg. 267 (2-97).
ad d: false. v$session this view lists session information for each current
session. see (a58242.pdf) pg. 417 (3-77).

19. you want to limit the number of transactions that can simultaneously make
changes to data in a block, and increase the frequency with which oracle returns a
block to the free list.

which parameters should you set?


a. initrans and pctused
b. maxtrans and pctfree
c. initrans and pctfree
d. maxtrans and pctused

answer: d

explanation:
http://perun.si.umich.edu/~radev/654/resources/oracledefs.html

pctfree
specifies the percentage of space in each of the table's data blocks reserved for
future updates to the table's rows. the value of pctfree must be a positive
integer from 1 to 99. a value of 0 allows the entire block to be filled by
inserts of new rows. the default value is 10. this value reserves 10% of each
block for updates to existing rows and allows inserts of new rows to fill a
maximum of 90% of each block. pctfree has the same function in the commands that
create and alter clusters, indexes, snapshots, and snapshot logs. the combination
of pctfree and pctused determines whether inserted rows will go into existing data
blocks or into new blocks.

pctused
specifies the minimum percentage of used space that oracle maintains for each data
block of the table. a block becomes a candidate for row insertion when its used
space falls below pctused. pctused is specified as a positive integer from 1 to
99 and defaults to 40. pctused has the same function in the commands that create
and alter clusters, snapshots, and snapshot logs. the sum of pctfree and pctused
must be less than 100. you can use pctfree and pctused together to use space within
a table more efficiently.

initrans
specifies the initial number of transaction entries allocated within each data
block allocated to the table. this value can range from 1 to 255 and defaults to
1. in general, you should not change the initrans value from its default. each
transaction that updates a block requires a transaction entry in the block. the
size of a transaction entry depends on your operating system. this parameter
ensures that a minimum number of concurrent transactions can update the block and
helps avoid the overhead of dynamically allocating a transaction entry. the
initrans parameter serves the same purpose in clusters, indexes, snapshots, and
snapshot logs as in tables. the minimum and default initrans value for a cluster
or index is 2, rather than 1.

maxtrans
specifies the maximum number of concurrent transactions that can update a data
block allocated to the table. this limit does not apply to queries. this value
can range from 1 to 255 and the default is a function of the data block size. you
should not change the maxtrans value from its default. if the number of concurrent
transactions updating a block exceeds the initrans value, oracle dynamically
allocates transaction entries in the block until either the maxtrans value is
exceeded or the block has no more free space. the maxtrans parameter serves the
same purpose in clusters, snapshots, and snapshot logs as in tables.
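a small illustrative create table (all values are examples only; the question's
point is simply that maxtrans and pctused are the two relevant parameters):

create table orders_demo
( order_id number,
  status varchar2(10) )
pctfree 10
pctused 60      -- a higher pctused returns blocks to the free list sooner
initrans 1
maxtrans 5;     -- at most 5 concurrent transactions can update a given block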

20. which steps should you take to gather information about checkpoints?

a. set the log_checkpoints_to_alert initialization parameter to true. monitor the
alert log file.
b. set the log_checkpoint_timeout parameter. force a checkpoint by using the
fast_start_mttr_target parameter. monitor the alert log file.
c. set the log_checkpoint_timeout parameter.
force a log switch by using the command alter system force logswitch.
force a checkpoint by using the command alter system force checkpoint. monitor
the alert log file.
d. set the fast_start_mttr_target parameter to true.
force a checkpoint by using the command alter system force checkpoint. monitor
the alert log file.

answer: a

explanation:
testking said b.

http://download-west.oracle.com/docs/cd/b10501_01/server.920/a96536/ch1103.htm#1019186
log_checkpoints_to_alert lets you log your checkpoints to the alert file. doing so
is useful for determining whether checkpoints are occurring at the desired
frequency.
fast_start_mttr_target: lets you specify in seconds the expected mean time to
recover (mttr), which is the expected amount of time oracle takes to perform
recovery and startup the instance.
log_checkpoint_timeout: limits the number of seconds between the most recent redo
record and the checkpoint.
log_checkpoint_interval: limits the number of redo blocks generated between the
most recent redo record and the checkpoint.
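a minimal sketch of the steps behind answer a (this assumes the instance allows the
parameter to be changed dynamically):

alter system set log_checkpoints_to_alert = true;
alter system checkpoint;    -- force a checkpoint
-- then inspect the alert log in background_dump_dest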

21. you decided to use oracle managed files (omf) for the control files in your
database. which initialization parameter do you need to set to specify the default
location for control files if you want to multiplex the files in different
directories?

a. db_files
b. db_create_file_dest
c. db_file_name_convert
d. db_create_online_log_dest_n

answer: d

explanation:
http://www.orafaq.net/parms/

parameter name: db_files. description: max allowable # db files
parameter name: db_create_file_dest. description: default database location

http://www.orafaq.net/archive/oracle-l/2002/07/08/102823.htm:
db_file_name_convert converts the db file name:
db_file_name_convert=('/vobs/oracle/dbs','/fs2/oracle/stdby')

http://www.oracle-base.com/articles/9i/oraclemanagedfiles.asp:
managing redo log files using omf
when using omf for redo logs, the db_create_online_log_dest_n parameters in the
init.ora file decide on the locations and numbers of logfile members. for example:
db_create_online_log_dest_1 = c:\oracle\oradata\tsh1
db_create_online_log_dest_2 = d:\oracle\oradata\tsh1

22. which command can you use to display the date and time
in the form 17:45:01 jul-12-2000 using the default us7ascii character set?

a. alter system set nls_date_format='hh24:mi:ss mon-dd-yyyy';
b. alter session set date_format='hh24:mi:ss mon-dd-yyyy';
c. alter session set nls_date_format='hh24:mi:ss mon-dd-yyyy';
d. alter system set nls_date_format='hh:mi:ss mon-dd-yyyy';

answer: c

explanation:
http://www.idera.com/support/documentation/oracle_date_format.htm
alter session set nls_date_format = <date_format>
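for example, in sql*plus:

alter session set nls_date_format='hh24:mi:ss mon-dd-yyyy';
select sysdate from dual;
-- sysdate is now displayed in the requested form, e.g. 17:45:01 jul-12-2000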

23. which initialization parameter determines the location of the alert log file?

a. user_dump_dest
b. db_create_file_dest
c. background_dump_dest
d. db_create_online_log_dest_n

answer: c

http://www.experts-exchange.com/databases/oracle/q_20308350.html

there is one alert log per db instance, normally named alert_<sid>.log. trace
files, on the other hand, are generated by the oracle
background processes or other connected net8 processes when oracle internal errors
occur and they dump all information about the error into the trace files. you can
also set the level of tracing for net8 connections as per your requirement.
the alert log is a special trace file. the alert log of a database is a
chronological log of messages and errors, which includes the following:
(a) all internal errors (ora-600), block corruption errors (ora-1578), and
deadlock errors (ora-60) that occur.
(b) administrative operations, such as create/alter/drop
database/tablespace/rollback segment sql statements and startup, shutdown, and
archive log.
(c) several messages and errors relating to the functions of shared server and
dispatcher processes.
(d) errors occurring during the automatic refresh of a snapshot.
(e) the values of all initialization parameters at the time the database and
instance start.

location:
all trace files for background processes and the alert log are written to the
destination specified by the initialization parameter background_dump_dest. all
trace files for server processes are written to the destination specified by the
initialization parameter user_dump_dest. the names of trace files are operating
system specific, but usually include the name of the process writing the file
(such as lgwr and reco).

info about all the oracle parameters: http://www.orafaq.net/parms/
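to see where the alert log for the current instance is written:

select value
from v$parameter
where name = 'background_dump_dest';

(show parameter background_dump_dest in sql*plus gives the same answer.)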

24. which two environment variables should be set before creating a database?
(choose two.)

a. db_name
b. oracle_sid
c. oracle_home
d. service_name
e. instance_name

answer: b, c

explanation:

see ocp oracle 9i database: fundamentals i, p. 67:

you may need to configure a few environment variables before creating your
database, such as oracle_base, oracle_home, oracle_sid, ora_nls33,
ld_library_path, and others.

note that this question deals with environment variables, not initialization parameters.

instance_name
represents the name of the instance and is used to uniquely identify a specific
instance when clusters share common services names. the instance name is
identified by the instance_name parameter in the instance initialization file,
initsid.ora. the instance name is the same as the oracle system identifier (sid).

oracle system identifier (sid)
a name that identifies a specific instance of a running pre-release 8.1 oracle
database. for an oracle9i real application clusters database, each node within the
cluster has an instance referencing the database. the database name, specified by
the db_name parameter in the initdb_name.ora file, and unique thread number make
up each node's sid. the thread id starts at 1 for the first instance in the
cluster, and is incremented by 1 for the next instance, and so on.

oracle_home
corresponds to the environment in which oracle products run. this environment
includes location of installed product files, path variable pointing to products'
binary files, registry entries, net service name, and program groups.
if you install an ofa-compliant database, using oracle universal installer
defaults, oracle home (known as \oracle_home in this guide) is located beneath
x:\oracle_base. it contains subdirectories for oracle software executables and
network files.
oracle corporation recommends that you never set the oracle_home environment
variable, because it is not required for oracle products to function properly. if
you set the oracle_home environment variable, then oracle universal installer will
unset it for you.

service_name
a logical representation of a database. this is the way a database is presented to
clients. a database can be presented as multiple services and a service can be
implemented as multiple database instances. the service name is a string that
includes the global database name, which is comprised of the database name
(db_name) and the domain name (db_domain). the service name is entered during
installation or database creation.

if you are not sure what the global database name is, you can obtain it from the
combined values of the service_names parameter in the common database
initialization file, initdbname.ora.

25. during a checkpoint in an oracle9i database, a number of dirty database
buffers covered by the log being checkpointed are written to the data files by
dbwn.

which parameter determines the number of buffers being written by dbwn?

a. log_checkpoint_target
b. fast_start_mttr_target
c. log_checkpoint_io_target
d. fast_start_checkpoint_target

answer: b

explanation:
ad a: false. there is no log_checkpoint_target parameter in oracle.
ad b: true. fast_start_mttr_target parameter determines the number of buffers
being written by dbwn. parameter fast_start_mttr_target has been introduced in
oracle9i and it replaces fast_start_io_target and log_checkpoint_interval in
oracle8i, although the old parameters can still be set if required in oracle9i.
fast_start_mttr_target enables you to specify the number of seconds the database
takes to perform crash recovery of a single instance.
ad c: false. there is no log_checkpoint_io_target parameter in oracle.
ad d: false. there is no fast_start_checkpoint_target parameter in oracle.
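a hedged example (300 seconds is an arbitrary value):

alter system set fast_start_mttr_target = 300;

-- v$instance_recovery shows the target and the currently estimated mttr
select target_mttr, estimated_mttr from v$instance_recovery;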

26. the orders table has a constant transaction load 24 hours a day, so down time
is not allowed. the indexes become fragmented. which statement is true?

a. the index needs to be dropped, and then re-created.
b. the resolution of index fragmentation depends on the type of index.
c. the index can be rebuilt while users continue working on the table.
d. the index can be rebuilt, but users will not have access to the index during
this time.
e. the fragmentation can be ignored because oracle resolves index fragmentation by
means of a freelist.

answer: c

explanation:
http://www.dbatoolbox.com/wp2001/spacemgmt/reorg_defrag_in_o8i_fo.pdf

oracle8i can create an index online; users can continue to update and query the
base table while the index is being created. no table or row locks are held during
the creation operation. changes to the base table and index during the build are
recorded in a journal table and merged into the new index at the completion of the
operation, as illustrated in figure 1. these online operations also support
parallel index creation and can act on some or all of the partitions of a
partitioned index. online index creation improves database availability by
providing users full access to data in the base table during an index build.
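so the fix, sketched with an illustrative index name, is simply:

alter index orders_prod_idx rebuild online;

the rebuild online clause keeps the index available for dml throughout the rebuild.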

27. you set the value of the os_authent_prefix initialization parameter to ops$
and created a user account by issuing this sql statement:

create user ops$smith identified externally;

which two statements are true? (choose two.)

a. oracle server assigns the default profile to the user.
b. you can specify the password expire clause for an external user account.
c. the user does not require create session system privilege to connect to the
database.
d. if you query the dba_users data dictionary view the username column will
contain the value smith.
e. the user account is maintained by oracle, but password administration and user
authentication are performed by the operating system or a network service.

answer: a, e

explanation:
with external authentication, your database relies on the underlying operating
system or network authentication service to restrict access to database accounts.
a database password is not used for this type of login. if your operating system
or network service permits, you can have it authenticate users. if you do so, set
the parameter os_authent_prefix, and use this prefix in oracle usernames. this
parameter defines a prefix that oracle adds to the beginning of every user's
operating system account name. oracle compares the prefixed username with the
oracle usernames in the database when a user attempts to connect. if a user with
an operating system account named "tsmith" is to connect to an oracle database and
be authenticated by the operating system, oracle checks that there is a
corresponding database user "ops$tsmith" and, if so, allows the user to connect.
see: (a58397.pdf) pg. 377. (20-9)
ad a: true. profile reassigns the profile named to the user. the profile limits
the amount of database resources the user can use. if you omit this clause, oracle
assigns the default profile to the user. see (a58225.pdf) pg. 541. (4-357).
ad b: when you choose external authentication for a user, the user account is
maintained by oracle, but password administration and user authentication is
performed by an external service. this external service can be the operating
system or a network service, such as oracle net.
ad c: false. an externally identified user still needs the create session privilege
to connect to the database.
ad d: false. the username column in dba_users contains the prefixed name ops$smith,
not smith.
ad e: when you choose external authentication for a user, the user account is
maintained by oracle, but password administration and user authentication is
performed by an external service. this external service can be the operating
system or a network service, such as oracle net.

28. which type of segment is used to improve the performance of a query?

a. index
b. table
c. temporary
d. boot strap

answer: a

explanation:
http://vsbabu.org/oracle/sect16.html

dba_segments contains rows with segment_type = 'INDEX', so an index is a segment,
and it is the segment type used to make queries faster.

29. which three are the physical structures that constitute the oracle database?
(choose three)

a. table
b. extent
c. segment
d. data file
e. log file
f. tablespace
g. control file

answer: d, e, g

explanation:
http://www.adp-gmbh.ch/ora/notes.html

physical and logical elements
an oracle server consists of an oracle database and an oracle instance. if you
don't want technical terms, you can think of an instance as the software, and the
database as the data that said software operates on. more technically, the
instance is the combination of background processes and memory buffers (or sga
http://www.adp-gmbh.ch/ora/concepts/sga.html).
the data (of the database) resides in datafiles. because these datafiles are
visible (as files) they're called physical structures as opposed to logical
structures.
one or more datafiles make up a tablespace (http://www.adp-gmbh.ch/ora/concepts/tablespaces.html).
besides datafiles, there are two other types of physical structures: redo log
files and control files.
the logical structures are tablespaces, schema objects, data blocks, extents, and
segments.

control files
an oracle database must have at least one control file, but usually (for backup
and recovery http://www.adp-gmbh.ch/ora/concepts/backup_recovery/index.html
reasons) it has more than one (all of which are exact copies of one control file).
the control file contains important information that the instance
needs to operate the database. the following pieces of information are held in a
control file: the name (os path) of all datafiles that the database consists of,
the name of the database, the timestamp of when the database was created, the
checkpoint (all database changes prior to that checkpoint are saved in the
datafiles) and information for rman.
when a database is mounted, its control file is used to find the datafiles and
redo log files for that database. because the control file is so important, it is
imperative to back up the control file whenever a structural change was made in
the database.
redo log
whenever something is changed on a datafile, oracle records it in a redo log. the
name redo log indicates its purpose: when the database crashes, oracle can redo
all changes on datafiles which will take the database data back to the state it
was when the last redo record was written. use v$log http://www.adp-
gmbh.ch/ora/misc/dynamic_performance_views.html, v$logfile http://www.adp-
gmbh.ch/ora/misc/dynamic_performance_views.html, v$log_history http://www.adp-
gmbh.ch/ora/misc/dynamic_performance_views.html and v$thread http://www.adp-
gmbh.ch/ora/misc/dynamic_performance_views.html to find information about the redo
log of your database.
each redo log file belongs to exactly one group (of which at least two must exist).
exactly one of these groups is the current group (can be queried using the column
status of v$log http://www.adp-gmbh.ch/ora/misc/dynamic_performance_views.html).
oracle uses that current group to write the redo log entries. when the group is
full, a log switch occurs, making another group the current one. each log switch
causes a checkpoint; however, the converse is not true: a checkpoint does not cause
a redo log switch.

as described above, the physical structures are files: data files, log files, and
control files, so the answer is d, e, and g.

30. which three statements about the oracle database storage structure are true?
(choose three)

a. a data block is a logical structure
b. a single data file can belong to multiple tablespaces.
c. when a segment is created, it consists of at least one extent.
d. the data blocks of an extent may or may not belong to the same file.
e. a tablespace can consist of multiple data files, each from a separate disk.
f. within a tablespace, a segment cannot include extents from more than one file.

answer: a, c, e

explanation:
a is ok, see q29.
b is false (oracle7 documentation, server concepts, 4-10): a tablespace in an
oracle database consists of one or more physical datafiles. a datafile can be
associated with only one tablespace, and only one database.
c is ok (oracle7 documentation, server concepts, 3-10): an extent is a logical
unit of database storage space allocation made up of a number of contiguous data
blocks. each segment is composed of one or more extents.
d is false (oracle7 documentation, server concepts, 3-3): oracle allocates space
for segments in extents. therefore, when the existing extents of a segment are
full, oracle allocates another extent for that segment. because extents are
allocated as needed, the extents of a segment may or may not be contiguous on
disk. the segments also can span files, but the individual extents cannot.
e is ok (oracle7 documentation, server concepts, 4-3): each tablespace in an
oracle database is comprised of one or more operating system files called
datafiles. a tablespace's datafiles physically store the associated database data
on disk.
f is false: see the explanation for d.

31. examine the sql statement:

create tablespace user_data
datafile '/u01/oradata/user_data_01.dbf' size 100m
extent management local uniform size 1m
segment space management auto;

which part of the tablespace will be of a uniform size of 1 mb?

a. extent
b. segment
c. oracle block
d. operating system block

answer: a

explanation:
the extent_management_clause lets you specify how the extents of the tablespace
will be managed.
(a) specify local if you want the tablespace to be locally managed. locally
managed tablespaces have some part of the tablespace set aside for a bitmap. this
is the default.
(b) autoallocate specifies that the tablespace is system managed. users cannot
specify an extent size. this is the default if the compatible initialization
parameter is set to 9.0.0 or higher.
(c) uniform specifies that the tablespace is managed with uniform extents of size
bytes. use k or m to specify the extent size in kilobytes or megabytes. the
default size is 1 megabyte.

note: once you have specified extent management with this clause, you can change
extent management only by migrating the tablespace.

remark: one tablespace has many segments. a segment is the space allocated for a
database object, so each index has its own segment. if the segment of an index
fills up, a new 1 mb extent is allocated for that segment. if another segment fills
up, it likewise grows by a 1 mb extent, no matter how much empty space remains in
the previous extents. the consequence is that data does not become fragmented.

32. which is a complete list of the logical components of the oracle database?

a. tablespaces, segments, extents, and data files
b. tablespaces, segments, extents, and oracle blocks
c. tablespaces, database, segments, extents, and data files
d. tablespaces, database, segments, extents, and oracle blocks
e. tablespaces, segments, extents, data files, and oracle blocks

answer: b

see q29

33. which option lists the correct hierarchy of storage structures, from largest
to the smallest?

a. segment, extent, tablespace, data block
b. data block, extent, segment, tablespace
c. tablespace, extent, data block, segment
d. tablespace, segment, extent, data block
e. tablespace, data block, extent, segment

answer: d

explanation:
logical database structures: the logical structures of an oracle database include
schema objects, data blocks, extents, segments, and tablespaces.

oracle data blocks: at the finest level of granularity, oracle database data is
stored in data blocks. one data block corresponds to a specific number of bytes of
physical database space on disk.
extents: the next level of logical database space is an extent. an extent is a
specific number of contiguous data blocks, obtained in a single allocation, used
to store a specific type of information.
segments: above extents, the level of logical database storage is a segment. a
segment is a set of extents allocated for a certain logical structure. the
following table describes the different types of segments.
tablespaces: a database is divided into logical storage units called tablespaces,
which group related logical structures together.

34. extents are a logical collection of contiguous _________________.

a. segments
b. database blocks
c. tablespaces
d. operating system blocks

answer: b

explanation:
an extent is a specific number of contiguous data blocks, obtained in a single
allocation, and used to store a specific type of information.

35. which two statements about segments are true? (choose two.)

a. each table in a cluster has its own segment.
b. each partition in a partitioned table is a segment.
c. all data in a table segment must be stored in one tablespace.
d. if a table has three indexes only one segment is used for all indexes.
e. a segment is created when an extent is created, extended, or altered.
f. a nested table of a column within a table uses the parent table segment.

answer: b, c

explanation:
a single data segment in an oracle database holds all of the data for one of the
following:
(a) a table that is not partitioned or clustered.
(b) a partition of a partitioned table.
(c) a cluster of tables.
a table or materialized view can contain lob, varray, or nested table column
types. these entities can be stored in their own segments.

ad a: false. each table in a cluster does not have its own segment. clustered
tables contain some blocks as a common part for two or more tables. clusters
enable you to store data from several tables inside a single segment so users can
retrieve data from those two tables together very quickly.
ad d: false. for each index, oracle allocates one or more extents to form its
index segment.
ad e: false. oracle creates this data segment when you create the nonclustered
table or cluster with the create command.
ad f: false. a nested table of a column within a table does not use the parent
table segment: it has its own.
oracle databases use four types of segments:
(a) data segments
(b) index segments
(c) temporary segments
(d) rollback segments
see: (a58227.pdf) pg. 107. (2-15)

36. which type of table is usually created to enable the building of scalable
applications, and is useful for large tables that can be queried or manipulated
using several processes concurrently?

a. regular table
b. clustered table
c. partitioned table
d. index-organized table

answer: c

explanation:
what is scalability?
in the case of web applications, scalability is the capacity to serve additional
users or transactions without fundamentally altering the application's
architecture or program design. if an application is scalable, you can maintain
steady performance as the load increases simply by adding additional resources
such as servers, processors or memory.
cluster
a cluster is an oracle object that allows one to store related rows from different
tables in the same data block. table clustering is very seldom used by oracle dbas
and developers. (see http://infoboerse.doag.de/mirror/frank/glossary/faqgloso.htm)

the answer is c: concurrent access by several processes works better when the
processes work on different partitions stored in different files.

37. how do you enable the hr_clerk role?

a. set role hr_clerk;
b. create role hr_clerk;
c. enable role hr_clerk;
d. set enable role hr_clerk;

answer: a

explanation:
sql 18-47
set role
purpose: use the set role statement to enable and disable roles for your current
session.

in the identified by password clause, specify the password for a role. if the role
has a password, then you must specify the password to enable the role.

d is out of the question because of bad syntax. a is correct: if the role does not
have a password, set role hr_clerk; is all that is needed.
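a short sketch of both cases (the password clerk_pwd is illustrative):

set role hr_clerk;                            -- role has no password
set role hr_clerk identified by clerk_pwd;    -- role was created with a password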

38. your database is currently configured with the database character set to
we8iso8859p1 and national character set to af16utf16.
business requirements dictate the need to expand language requirements beyond the
current character set, for asian and additional western european languages, in the
form of customer names and addresses.

which solution saves space storing asian characters and maintains consistent
character manipulation performance?

a. use sql char data types and change the database character set to utf8.
b. use sql nchar data types and change the national character set to utf8.
c. use sql char data types and change the database character set to af32utf8.
d. use sql nchar data types and keep the national character set to af16utf16.

answer: d

testking said c, which is wrong: we need nchar, not char.

explanation:
sql nchar
supporting multilingual data often means using unicode. unicode is a universal
character encoding scheme that allows you to store information from any major
language using a single character set. unicode provides a unique code value for
every character, regardless of the platform, program, or language. for many
companies with legacy systems making the commitment to migrating their entire
database to support unicode is not practical. an alternative to storing all data
in the database as unicode is to use the sql nchar datatypes. unicode characters
can be stored in columns of these datatypes regardless of the setting of the
database character set. the nchar datatype has been redefined in oracle9i to be a
unicode datatype exclusively. in other words, it stores data in the unicode
encoding only. the national character set supports utf-16 and utf-8 in the
following encodings:
(a) al16utf16 (default)
(b) utf8
sql nchar datatypes (nchar, nvarchar2, and nclob) can be used in the same way as
the sql char datatypes. this allows the inclusion of unicode data in a non
unicode database. some of the key benefits for using the nchar datatype versus
having the entire database as unicode include:
you only need to support multilingual data in a limited number of columns - you
can add columns of the sql nchar datatypes to existing tables or new tables to
support multiple languages incrementally. or you can migrate specific columns from
sql char datatypes to sql nchar datatypes easily using the alter table modify
column command.
example: alter table emp modify (ename nvarchar2(10));
if you are building a packaged application that will be sold to customers, you may
want to build the application using sql nchar datatypes - this is because
with the sql nchar datatype the data is always stored in unicode, and the length
of the data is always specified in utf-16 code units. as a result, you need only
test the application once, and your application will run on your customer
databases regardless of the database character set.
you want the best possible performance - if your existing database character set
is single-byte then extending it with sql nchar datatypes may offer better
performance than migrating the entire database to unicode.
if your application's native environment is ucs-2 or utf-16 - a unicode database must
run as utf-8. this means there will be conversion between the client and database.
by using the nchar encoding al16utf16, you can eliminate this conversion.
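
as a minimal sketch (table and column names hypothetical), the database character
set stays we8iso8859p1 while the customer name and address columns use the
national character set:

create table customer_intl (
customer_id number primary key,
name        nvarchar2(100),
address     nvarchar2(200)
);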

39. you have just accepted the position of dba with a new company. one of the
first things you want to do is examine the performance of the database. which tool
will help you to do this?

a. recovery manager
b. oracle enterprise manager
c. oracle universal installer
d. oracle database configuration assistant

answer: b

explanation:
http://www.orafaq.com/faqoem.htm
what is oem (oracle enterprise manager)?
oem is a set of system management tools provided by oracle for managing the oracle
environment. it provides tools to automate tasks (both one-time and repetitive in
nature) to take database administration a step closer to "lights out" management.
what are the components of oem?
oracle enterprise manager (oem) has the following components:
management server (oms): middle tier server that handles communication with the
intelligent agents. the oem console connects to the management server to monitor
and configure the oracle enterprise.
console: this is a graphical interface from where one can schedule jobs, events,
and monitor the database. the console can be opened from a windows workstation,
unix xterm (oemapp command) or web browser session (oem_webstage).
intelligent agent (oia): the oia runs on the target database and takes care of the
execution of jobs and events scheduled through the console.
data gatherer (dg): the dg runs on the target database and takes care of
gathering database statistics over time.

40. you have a database with the db_name set to prod and oracle_sid set to prod.
these files are in the default location for the initialization files:
- init.ora
- initprod.ora
- spfile.ora
- spfileprod.ora

the database is started with this command:

sql> startup

which initialization files does the oracle server attempt to read, and in which
order?

a. init.ora, initprod.ora, spfileprod.ora


b. spfile.ora, spfileprod.ora, initprod.ora
c. spfileprod.ora, spfile.ora, initprod.ora
d. initprod.ora, spfileprod.ora, spfile.ora

answer: c

explanation:
http://www.trivadis.ch/publikationen/e/spfile_and_initora.en.pdf
http://www.adp-gmbh.ch/ora/notes.html
up to version 8i, oracle traditionally stored initialization parameters in a text
file init.ora (pfile). with oracle9i, server parameter files (spfile) can also be
used. an spfile can be regarded as a repository for initialization parameters
which is located on the database server. spfiles are small binary files that
cannot be edited. editing spfiles corrupts the file and either the instance fails
to start or an active instance may crash.

at database startup, if no pfile is explicitly specified, the startup command
searches the os-dependent default location ($oracle_home/dbs under unix,
$oracle_home\database under nt) for:
1. spfile${oracle_sid}.ora
2. spfile.ora
3. init${oracle_sid}.ora

41. you are in the planning stages of creating a database. how should you plan to
influence the size of the control file?

a. specify size by setting the control_files initialization parameter instead of
using the oracle default value.
b. use the create controlfile command to create the control file and define a
specific size for the control file.
c. define the maxlogfiles, maxlogmembers, maxloghistory, maxdatafiles,
maxinstances parameters in the create database command.
d. define specific values for the maxlogfiles, maxloggroups, maxloghistory,
maxdatafiles, and maxinstances parameters within the initialization parameter
file.

answer: c

explanation:
control_files
is a string parameter -> it only names the control files -> it does not influence
their size.

sql 13-15
create controlfile
use the create controlfile statement to re-create a control file in one of the
following cases:
(a) all copies of your existing control files have been lost through media
failure.
(b) you want to change the name of the database.
(c) you want to change the maximum number of redo log file groups, redo log file
members, archived redo log files, datafiles, or instances that can concurrently
have the database mounted and open.

http://coffee.kennesaw.edu/tests/oracle/ch3.doc:
create database
question 19. which clauses in the create database command specify limits for the
database?
the control file size depends on the following limits (maxlogfiles, maxlogmembers,
maxloghistory, maxdatafiles, maxinstances), because oracle pre-allocates space in
the control file.
maxlogfiles: specifies the maximum number of redo log groups that can ever be
created in the database.
maxlogmembers: specifies the maximum number of redo log members (copies of the
redo logs) for each redo log group.
maxloghistory: is used only with parallel server configuration. it specifies the
maximum number of archived redo log files for automatic media recovery.
maxdatafiles: specifies the maximum number of data files that can be created in
this database. data files are created when you create a tablespace, or add more
space to a tablespace by adding a data file.
maxinstances: specifies the maximum number of instances that can simultaneously
mount and open this database.
if you want to change any of these limits after the database is created, you must
re-create the control file.

ad d: false. there is no maxloggroups parameter in oracle.
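
a minimal sketch (database name, paths and limit values hypothetical) showing
where these limits are set; they can only be raised later by re-creating the
control file:

create database prod
maxlogfiles 16
maxlogmembers 3
maxloghistory 100
maxdatafiles 254
maxinstances 1
datafile '/u01/oradata/prod/system01.dbf' size 250m
logfile group 1 ('/u01/oradata/prod/redo01.log') size 10m,
        group 2 ('/u01/oradata/prod/redo02.log') size 10m;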

42. when is the sga created in an oracle database environment?

a. when the database is created


b. when the instance is started
c. when the database is mounted
d. when a user process is started
e. when a server process is started

answer: b

explanation:
http://www.dbaoncall.net/references/ht_startup_shutdown_db.html

to start up an oracle database, use server manager (srvmgrl) and the startup
command:

startup nomount - starts the instance: allocates memory for the sga and starts
the background processes.

43. you need to enforce these two business rules:

1. no two rows of a table can have duplicate values in the specified column.
2. a column cannot contain null values.

which type of constraint ensures that both of the above rules are true?

a. check
b. unique
c. not null
d. primary key
e. foreign key

answer: d

no comment

44. your company hired joe, a dba who will be working from home. joe needs to have
the ability to start the database remotely.
you created a password file for your database and set remote_login_passwordfile =
exclusive in the parameter file. which command adds joe to the password file,
allowing him remote dba access?

a. grant dba to joe;


b. grant sysdba to joe;
c. grant resource to joe;
d. orapwd file=orapwdprod user=joe password=dba

answer: b

explanation:

see ocp oracle 9i database: fundamentals i, p. 35, 36.:


finally, setting remote_login_passwordfile to exclusive means that a password file
exists and any user/password combination in the password file can log into oracle
remotely and administer that instance. if this setting is used, the dba may use
the create user command in oracle to create the users who are to be added to the
password file, and grant the sysoper and/or sysdba system privileges to those users.

this rules out a and c.

d is ruled out because of wrong syntax:

oracle9i database administrator's guide release 2 (9.2) march 2002 part no.
a96521-01 (a96521.pdf) 1-20.
using orapwd
when you invoke the password file creation utility without supplying any
parameters, you receive a message indicating the proper use of the command as
shown in the following sample output:
orapwd
usage: orapwd file=<fname> password=<password> entries=<users>
where
file - name of password file (mand).
password - password for sys (mand).
entries - maximum number of distinct dbas and opers (opt).

there are no spaces around the equal-to (=) character.


the following command creates a password file named acct.pwd that allows up to 30
privileged users with different passwords. in this example, the file is initially
created with the password secret for users connecting as sys:

orapwd file=acct.pwd password=secret entries=30
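
putting the pieces together for this scenario (password-file path and sys
password hypothetical): create the password file with orapwd, set
remote_login_passwordfile = exclusive in the parameter file, then grant the
privilege; v$pwfile_users shows who is in the file:

$ orapwd file=$oracle_home/dbs/orapwprod password=secret entries=10
sql> grant sysdba to joe;
sql> select username, sysdba, sysoper from v$pwfile_users;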

45. you need to drop two columns from a table. which sequence of sql statements
should be used to drop the columns and limit the number of times the rows are
updated?

a. alter table employees drop column comments drop column email;
b. alter table employees drop column comments;
alter table employees drop column email;
c. alter table employees set unused column comments;
alter table employees drop unused columns;
alter table employees set unused column email;
alter table employees drop unused columns;
d. alter table employees set unused column comments;
alter table employees set unused column email;
alter table employees drop unused columns;

answer: d

explanation:
http://certcities.com/certs/oracle/columns/story.asp?editorialsid=36:
reorganizing columns
while it has been possible to add new columns to an existing table in oracle for
quite a while now, until oracle 8i it was not possible to drop or remove a column
from a table without dropping the table first and then re-creating it without the
column you wanted to drop. with this method, you needed to perform an export
before dropping the table and then an import after creating it without the column,
or issue a create table ... as select statement with all of its associated
headaches (see above).
in oracle 8i, we now have a way of marking columns unused and then dropping them
at a later date. oracle is a little behind the times here compared to sql server,
which does not require a complete rebuild of the table after dropping the column,
but i'm just happy that i have the feature and hope that they'll improve it in
oracle 9i.
to get rid of columns with this new method, the first step is to issue the alter
table <tablename> set unused column <columnname> statement, which sets the column to
no longer be used within the table but does not change the physical structure of
the table. all rows physically have the column's data stored, and a physical place
is kept for the column on disk, but the column cannot be queried and, for all
intents and purposes, does not exist. in essence, the column is flagged to be
dropped, though you cannot reverse setting the column to unused.
it is possible to set a number of columns unused in a table before actually
dropping them. the overhead of setting columns unused is fairly minimal and allows
you to continue to operate normally, except that any actions on the unused columns
will result in an error. the next step, when you have configured all the columns
you want to get rid of as unused, is to actually physically reorganize the table
so that the data for the unused columns is no longer on disk and the columns are
really gone. this is done by issuing the command alter table ... drop column.
physically dropping a column in an oracle table is a process that will prevent
anyone from accessing the table while the removal of the column(s) is processed.
the commands that will affect an actual removal of a column are:
alter table <tablename> drop column <columnname>
alter table <tablename> drop unused columns
the commands will always do the same thing. this means that if you mark two or
three columns as unused in a table, if you decide you want to drop one of them
using the alter table ... drop column command, you will drop all columns marked as
unused whether you want to or not. the alter table ... drop column can also be
used when a column has not previously been marked as unused but you simply want to
drop it right away, but you will also drop any unused columns because that's the
way it works.
if constraints depend on the column being dropped, you can use the cascade
constraints option to deal with them; if you also want to explicitly mark views,
triggers, stored procedures or other stored program units referencing the parent
table and force them to be recompiled the next time they are used, you can also
specify the invalidate option.
a problem could arise if you issue the drop column command and the instance
crashes during the rebuild of the table. in this case, the table will be marked as
invalid and will not be available to anyone. oracle forces you to complete the
drop column operation before the table can be used again. to get out of this
situation, issue the command alter table ... drop columns continue. this will
complete the process and mark the table as valid upon completion.

http://whizlabs.com/ocp/ocp-1z0-007-tips.html
tip 34: oracle allows columns to be dropped with the 'alter table drop columns'
command. dropping of columns generally takes a lot of time, so an alternative
(faster) option would be to mark the column as unused with the 'set unused column'
clause and later drop the unused column.
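
a short sketch of the sequence from answer d, with an optional dictionary check
in between (dba_unused_col_tabs lists tables that still have unused columns;
dictionary names are stored in uppercase):

alter table employees set unused column comments;
alter table employees set unused column email;
select * from dba_unused_col_tabs where table_name = 'EMPLOYEES';
alter table employees drop unused columns;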

46. when an oracle instance is started, background processes are started.


background processes perform which two functions? (choose two)

a. perform i/o
b. lock rows that are not data dictionary rows
c. monitor other oracle processes
d. connect users to the oracle instance
e. execute sql statements issued through an application

answer: a, c

explanation:
oracle9i database administrator's guide release 2 (9.2) march 2002 part no.
a96521-01 (a96521.pdf) 5-11.
to maximize performance and accommodate many users, a multiprocess oracle system
uses some additional processes called background processes. background processes
consolidate functions that would otherwise be handled by multiple oracle programs
running for each user process. background processes asynchronously perform i/o and
monitor other oracle processes to provide increased parallelism for better
performance and reliability.

47. you omit the undo tablespace clause in your create database statement. the
undo_management parameter is set to auto.

what is the result of your create database statement?

a. the oracle server creates no undo tablespaces.


b. the oracle server creates an undo segment in the system tablespace.
c. the oracle server creates one undo tablespace with the name sys_undotbs.
d. database creation fails because you did not specify an undo tablespace on the
create database statement.

answer: c

explanation:
http://www.oracle-base.com/articles/9i/automaticundomanagement.asp#enablingautomaticundomanagement

using automatic undo management: creating an undo tablespace


oracle recommends that instead of using rollback segments in your database, you
use an undo tablespace. this requires the use of a different set of initialization
parameters, and optionally, the inclusion of the undo tablespace clause in your
create database statement.
you must include the following initialization parameter if you want to operate
your database in automatic undo management mode:
undo_management=auto
in this mode, rollback information, referred to as undo, is stored in an undo
tablespace rather than rollback segments and is managed by oracle. if you want to
create and name a specific tablespace for the undo tablespace, you can include the
undo tablespace clause at database creation time. if you omit this clause, and
automatic undo management is specified, oracle creates a default undo tablespace
named sys_undotbs.
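
a minimal sketch of the scenario: with only the initialization parameter below
and no undo tablespace clause in create database, the default sys_undotbs is
created, and a dictionary query can confirm it afterwards:

undo_management = auto

select tablespace_name, contents from dba_tablespaces where contents = 'UNDO';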

48. a table is stored in a data dictionary managed tablespace.

which two columns are required from dba_tables to determine the size of the extent
when it extends? (choose two)

a. blocks
b. pct_free
c. next_extent
d. pct_increase
e. initial_extent

answer: c, d

explanation:
the size parameter of the allocate extent clause is the extent size in bytes,
rounded up to a multiple of the block size. if you do not specify size, then
oracle calculates the extent size according to the values of the next and
pctincrease storage parameters.
oracle does not use the value of size as a basis for calculating subsequent extent
allocations, which are determined by the values set for the next and pctincrease
parameters.
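
for example, to see the two values that drive the size of the next extent for a
given table (owner and table name hypothetical; dictionary names are stored in
uppercase):

select next_extent, pct_increase
from dba_tables
where owner = 'SCOTT' and table_name = 'SALES';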

49. bob is an administrator who has full dba privileges. when he attempts to drop
the default profile as shown below, he receives the error message shown. which
option best explains this error?<br /><br />
sql> drop profile sys.default;<br />
drop profile sys.default<br />
*<br />
error at line 1:<br />
ora-00950: invalid drop option<br />

a. the default profile cannot be dropped.


b. bob requires the drop profile privilege.
c. profiles created by sys cannot be dropped.
d. the cascade option was not used in the drop profile command.

answer: a

explanation:
sql 16-94
restriction on dropping profiles: you cannot drop the default profile.

note that the cause text quoted in the error messages manual does not even list
profile among the valid drop options:


error messages 3-10:
ora-00950 invalid drop option
cause: a drop command was not followed by a valid drop option, such as cluster,
database link, index, rollback segment, sequence, synonym, table, tablespace, or
view.
action: check the command syntax, specify a valid drop option, then retry the
statement.

50. you are in the process of dropping the building_location column from the
hr.employees table. the table has been marked invalid until the operation
completes. suddenly the instance fails. upon startup, the table remains invalid.
which step(s) should you follow to complete the operation?

a. continue with the drop column command:

alter table hr.employees drop columns continue;
b. truncate the invalid column to delete remaining rows in the column and release
unused space immediately.
c. use the export and import utilities to remove the remainder of the column from
the table and release unused space.
d. mark the column as unused and drop the column:
alter table hr.employees
set unused column building_location;
alter table hr.employees
drop unused column building_location
cascade constraints;

answer: a

testking said d.
explanation:
drop unused columns clause: specify drop unused columns to remove from the table
all columns currently marked as unused. use this statement when you want to
reclaim the extra disk space from unused columns in the table. if the table
contains no unused columns, then the statement returns with no errors.
column specify one or more columns to be set as unused or dropped. use the column
keyword only if you are specifying only one column. if you specify a column list,
then it cannot contain duplicates.
cascade constraints: specify cascade constraints if you want to drop all foreign
key constraints that refer to the primary and unique keys defined on the dropped
columns, and drop all multicolumn constraints defined on the dropped columns. if
any constraint is referenced by columns from other tables or remaining columns in
the target table, then you must specify cascade constraints. otherwise, the
statement aborts and an error is returned.
invalidate: the invalidate keyword is optional. oracle automatically invalidates
all dependent objects, such as views, triggers, and stored program units. object
invalidation is a recursive process. therefore, all directly dependent and
indirectly dependent objects are invalidated. however, only local dependencies are
invalidated, because oracle manages remote dependencies differently from local
dependencies. an object invalidated by this statement is automatically revalidated
when next referenced. you must then correct any errors that exist in that object
before referencing it.
checkpoint: specify checkpoint if you want oracle to apply a checkpoint for the
drop column operation after processing integer rows; integer is optional and must
be greater than zero. if integer is greater than the number of rows in the table,
then oracle applies a checkpoint after all the rows have been processed. if you do
not specify integer, then oracle sets the default of 512. checkpointing cuts down
the amount of undo logs accumulated during the drop column operation to avoid
running out of rollback segment space. however, if this statement is interrupted
after a checkpoint has been applied, then the table remains in an unusable state.
while the table is unusable, the only operations allowed on it are drop table,
truncate table, and alter table drop columns continue (described in sections that
follow). you cannot use this clause with set unused, because that clause does not
remove column data.
drop columns continue clause: specify drop columns continue to continue the drop
column operation from the point at which it was interrupted. submitting this
statement while the table is in a valid state results in an error. see
http://download-west.oracle.com/docs/cd/b10501_01/server.920/a96540/statements_32a.htm#2103766.

if there is a trick in this question, i couldn't find it. i would answer a.

51. as sysdba you created the payclerk role and granted the role to bob. bob in
turn attempts to modify the authentication method of the payclerk role from salary
to not identified, but when doing so he receives the insufficient privilege error
shown below.

sql> connect bob/crusader
connected.

sql> alter role payclerk not identified;
alter role payclerk not identified
*
error at line 1:
ora-01031: insufficient privileges

which privilege does bob require to modify the authentication method of the
payclerk role?

a. alter any role


b. manage any role
c. update any role
d. modify any role

answer: a

ora-01031 insufficient privileges


cause: an attempt was made to change the current username or password without the
appropriate privilege. this error also occurs if attempting to install a database
without the necessary operating system privileges.
action: ask the database administrator to perform the operation or grant the
required privileges.

oracle_9i_mix/administrators guide a96521.pdf page 25-6 - managing user roles:


to alter the authorization method for a role, you must have the alter any role
system privilege or have been granted the role with the admin option.

wrong: b, c, d - manage any role, update any role, modify any role don't exist.
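
so the fix in this scenario, issued from a sysdba session, is either of the
following (the second works because the admin option also allows altering the
role):

grant alter any role to bob;
grant payclerk to bob with admin option;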

52. you are going to re-create your database and want to reuse all of your
existing database files.
you issue the following sql statement:

create database sampledb
datafile
'/u01/oradata/sampledb/system01.dbf'
size 100m reuse
logfile
group 1 ('/u01/oradata/sampledb/log1a.rdo',
'/u02/oradata/sampledb/log1b.rdo')
size 50k reuse,
group 2 ('/u01/oradata/sampledb/log2a.rdo',
'/u02/oradata/sampledb/log2b.rdo')
size 50k reuse
maxlogfiles 5
maxloghistory 100
maxdatafiles 10;

why does the create database statement fail?

a. you have set maxlogfiles too low.


b. you omitted the controlfile reuse clause.
c. you cannot reuse the online redo log files.
d. you cannot reuse the data file belonging to the system tablespace.

answer: b

explanation:

ad b: the initial control files of an oracle database are created when you issue
the create database statement. the names of the control files are specified by the
control_files parameter in the initialization parameter file used during database
creation. the filenames specified in control_files should be fully specified and
are operating system specific. if control files with the specified names
currently exist at the time of database creation, you must specify the controlfile
reuse clause in the create database statement, or else an error occurs.
ad a: the maxlogfiles minimum and maximum values are operating system dependent,
but i think the minimum value is 1, so 5 is not too low.
ad c: you can reuse an online redo log file.
ad d: datafile clause specify one or more files to be used as datafiles. all these
files become part of the system tablespace.

53. evaluate this sql command:

grant references (employee_id),
update (employee_id, salary, commission_pct) on hr.employees
to oe;

which three statements correctly describe what user oe can or cannot do? (choose
three.)

a. cannot create a table with a constraint


b. can create a table with a constraint that references hr.employees
c. can update values of the employee_id, salary, and commission_pct columns
d. can insert values of the employee_id, salary, and commission_pct columns
e. cannot insert values of the employee_id, salary, and commission_pct columns
f. cannot update values of the employee_id, salary, and commission_pct columns

answer: b, c, e

explanation:
granting multiple object privileges on individual columns: example to grant to
user oe the references privilege on the employee_id column and the update
privilege on the employee_id, salary, and commission_pct columns of the employees
table in the schema hr, issue the following statement:
grant references (employee_id),
update (employee_id, salary, commission_pct)
on hr.employees
to oe;

oe can subsequently update values of the employee_id, salary, and commission_pct


columns. oe can also define referential integrity constraints that refer to the
employee_id column. however, because the grant statement lists only these columns,
oe cannot perform operations on any of the other columns of the employees table.
for example, oe can create a table with a constraint:
create table dependent
(dependno number,
dependname varchar2(10),
employee number
constraint in_emp references hr.employees(employee_id) );

the constraint in_emp ensures that all dependents in the dependent table
correspond to an employee in the employees table in the schema hr.

54. a network error unexpectedly terminated a user's database session.

which two events occur in this scenario? (choose two.)

a. checkpoint occurs.
b. a fast commit occurs.
c. reco performs the session recovery.
d. pmon rolls back the user's current transaction.
e. smon rolls back the user's current transaction.
f. smon frees the system resources reserved for the user session.
g. pmon releases the table and row locks held by the user session.

answer: d, g

explanation:
smon: the system monitor performs crash recovery when a failed instance starts up
again. in a cluster database (oracle9i real application clusters), the smon
process of one instance can perform instance recovery for other instances that
have failed. smon also cleans up temporary segments that are no longer in use and
recovers dead transactions skipped during crash and instance recovery because of
file-read or offline errors. these transactions are eventually recovered by smon
when the tablespace or file is brought back online.

pmon: the process monitor performs process recovery when a user process fails.
pmon is responsible for cleaning up the cache and freeing resources that the
process was using. pmon also checks on the dispatcher processes (see below) and
server processes and restarts them if they have failed.
the process monitor process (pmon) cleans up failed user processes and frees up
all the resources used by the failed process. it resets the status of the active
transaction table and removes the process id from the list of active processes.
it reclaims all resources held by the user and releases all locks on tables and
rows held by the user. pmon wakes up periodically to check whether it is needed.

reco: the recoverer process is used to resolve distributed transactions that are
pending due to a network or system failure in a distributed database. at timed
intervals, the local reco attempts to connect to remote databases and
automatically complete the commit or rollback of the local portion of any pending
distributed transactions.
checkpoint (ckpt): at specific times, all modified database buffers in the sga are
written to the datafiles by dbwn. this event is called a checkpoint. the
checkpoint process is responsible for signaling dbwn at checkpoints and updating
all the datafiles and control files of the database to indicate the most recent
checkpoint.

55. evaluate the sql statement:

create tablespace hr_tbs
datafile '/usr/oracle9i/orahome1/hr_data.dbf' size 2m autoextend on
minimum extent 4k
nologging
default storage (initial 5k next 5k pctincrease 50)
extent management dictionary
segment space management auto;

why does the statement return an error?

a. the value of pctincrease is too high.


b. the size of the data file is too small.
c. you cannot specify default storage for dictionary managed tablespaces.
d. segment storage management cannot be set to auto for a dictionary managed
tablespace.
e. you cannot specify default storage for a tablespace that consists of an
autoextensible data file.
f. the value specified for initial and next storage parameters should be a
multiple of the value specified for minimum extent.

answer: d

explanation:
pctincrease
specify the percent by which the third and subsequent extents grow over the
preceding extent. the default value is 50, meaning that each subsequent extent is
50% larger than the preceding extent. the minimum value is 0, meaning all extents
after the first are the same size. the maximum value depends on your operating
system.

specify the size of the file in bytes. use k or m to specify the size in kilobytes
or megabytes. there is no fixed minimum or maximum size for a datafile; the limits
are operating system dependent. if you omit this clause when creating an
oracle-managed file, then oracle creates a 100m file.

default storage_clause
specify the default storage parameters for all objects created in the tablespace.
for a dictionary-managed temporary tablespace, oracle considers only the next
parameter of the storage_clause. restriction on default storage: you cannot
specify this clause for a locally managed tablespace.
segment_management_clause
the segment_management_clause is relevant only for permanent, locally managed
tablespaces. it lets you specify whether oracle should track the used and free
space in the segments in the tablespace using free lists or bitmaps.

initial
specify in bytes the size of the object's first extent. oracle allocates space for
this extent when you create the schema object. use k or m to specify this size in
kilobytes or megabytes.
the default value is the size of 5 data blocks. in tablespaces with manual
segment space management, the minimum value is the size of 2 data blocks plus one
data block for each free list group you specify. in tablespaces with automatic
segment space management, the minimum value is 5 data blocks. the maximum value
depends on your operating system.
in dictionary-managed tablespaces, if minimum extent was specified for the
tablespace when it was created, then oracle rounds the value of initial up to the
specified minimum extent size if necessary. if minimum extent was not specified,
then oracle rounds the initial extent size for segments created in that tablespace
up to the minimum value (see preceding paragraph), or to multiples of 5 blocks if
the requested size is greater than 5 blocks.
in locally managed tablespaces, oracle uses the value of initial in conjunction
with the size of extents specified for the tablespace to determine the object's
first extent. for example, in a uniform locally managed tablespace with 5m
extents, if you specify an initial value of 1m, then oracle creates five 1m
extents.
restriction on initial: you cannot specify initial in an alter statement.

next
specify in bytes the size of the next extent to be allocated to the object. use k
or m to specify the size in kilobytes or megabytes. the default value is the size
of 5 data blocks. the minimum value is the size of 1 data block. the maximum value
depends on your operating system. oracle rounds values up to the next multiple of
the data block size for values less than 5 data blocks. for values greater than 5
data blocks, oracle rounds up to a value that minimizes fragmentation, as
described in oracle9i database administrator's guide.
if you change the value of the next parameter (that is, if you specify it in an
alter statement), then the next allocated extent will have the specified size,
regardless of the size of the most recently allocated extent and the value of the
pctincrease parameter.

temporary
specify temporary if the tablespace will be used only to hold temporary objects,
for example, segments used by implicit sorts to handle order by clauses.
temporary tablespaces created with this clause are always dictionary managed, so
you cannot specify the extent management local clause. to create a locally managed
temporary tablespace, use the create temporary tablespace statement.
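
a hedged rewrite that keeps the same data file but makes the statement valid by
using a locally managed tablespace, which is what segment space management auto
requires (the default storage and minimum extent clauses are dropped because, as
quoted above, default storage cannot be specified for a locally managed
tablespace):

create tablespace hr_tbs
datafile '/usr/oracle9i/orahome1/hr_data.dbf' size 2m autoextend on
extent management local autoallocate
segment space management auto;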

56. sales_data is a nontemporary tablespace.

you have set the sales_data tablespace offline by issuing this command:

alter tablespace sales_data offline normal;

which three statements are true? (choose three.)

a. you cannot drop the sales_data tablespace.


b. the sales_data tablespace does not require recovery to come back online.
c. you can read the data from the sales_data tablespace, but you cannot perform
any write operation on the data.
d. when the tablespace sales_data goes offline and comes back online, the event
will be recorded in the data dictionary.
e. when the tablespace sales_data goes offline and comes back online, the event
will be recorded in the control file.
f. when you shut down the database the sales_data tablespace remains offline, and
is checked when the database is subsequently mounted and reopened.

answer: b, d, e
explanation:
ad d, e, f: see
http://download-west.oracle.com/docs/cd/b10501_01/server.920/a96524/c04space.htm#10136
when a tablespace goes offline
when a tablespace goes offline or comes back online, this is recorded in the data
dictionary in the system tablespace. if a tablespace is offline when you shut down
a database, the tablespace remains offline when the database is subsequently
mounted and reopened.

you can drop a tablespace regardless of whether it is online or offline (-> this
makes a wrong). oracle recommends that you take the tablespace offline before
dropping it to ensure that no sql statements in currently running transactions
access any of the objects in the tablespace.
restriction on the offline clause: you cannot take a temporary tablespace offline.

the for recover setting for alter tablespace ... offline has been deprecated. the
syntax is supported for backward compatibility. however, users are encouraged to
use the transportable tablespaces feature for tablespace recovery.

normal
specify normal to flush all blocks in all datafiles in the tablespace out of the
sga. you need not perform media recovery on this tablespace before bringing it
back online. this is the default. -> b is true.

temporary
if you specify temporary, then oracle performs a checkpoint for all online
datafiles in the tablespace but does not ensure that all files can be written.
any offline files may require media recovery before you bring the tablespace back
online.

specify offline to take the tablespace offline and prevent further access to its
segments. when you take a tablespace offline, all of its datafiles are also
offline.
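
once the maintenance is done, bringing it back online is a single statement (no
media recovery is needed because offline normal was used):

alter tablespace sales_data online;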

57. a table can be dropped if it is no longer needed, or if it will be reorganized.

which three statements are true about dropping a table? (choose three.)

a. all synonyms for a dropped table are deleted.


b. when a table is dropped, the extents used by the table are released.
c. dropping a table removes the table definition from the data dictionary.
d. indexes and triggers associated with the table are not dropped but marked
invalid.
e. the cascade constraints option is necessary if the table being dropped is the
parent table in a foreign key relationship.

answer: b, c, e

explanation:
ad a: false. all synonyms for a dropped table remain, but return an error when
used.
ad b: true. in general, the extents of a segment do not return to the tablespace
until you drop the schema object whose data is stored in the segment (using a
drop table or drop cluster statement).
ad c: true. dropping a table removes the table definition from the data
dictionary. all rows of the table are no longer accessible.
ad d: false. all indexes and triggers associated with a table are dropped, not
marked invalid.
ad e: true. if the table to be dropped contains any primary or unique keys
referenced by foreign keys of other tables and you intend to drop the foreign key
constraints of the child tables, include the cascade constraints clause in the
drop table statement.

58. examine this truncate table command:

truncate table departments;

which four are true about the command? (choose four.)

a. all extents are released.


b. all rows of the table are deleted.
c. any associated indexes are truncated.
d. no undo data is generated for the table's rows.
e. it reduces the number of extents allocated to the departments table to the
original setting for minextents.

answer: b, c, d, e

explanation:
to remove all rows from a table or cluster and reset the storage parameters to the
values when the table or cluster was created.
you can use the truncate command to quickly remove all rows from a table or
cluster. removing rows with the truncate command is faster than removing them with
the delete command for the following reasons:
the truncate command is a data definition language (ddl) command and generates no
rollback information.
truncating a table does not fire the table's delete triggers.
the truncate command allows you to optionally deallocate the space freed by the
deleted rows. the drop storage option deallocates all but the space specified by
the table's minextents parameter. deleting rows with the truncate command is also
more convenient than dropping and re-creating a table because dropping and
re-creating:
(a) invalidates the table's dependent objects, while truncating does not.
(b) requires you to regrant object privileges on the table, while truncating does
not.
(c) requires you to re-create the table's indexes, integrity constraints, and
triggers and respecify its storage parameters.

see: oracle8 sql reference release 8.0 december 1997 part no. a58225-01
(a58225.pdf) pg.722. (4-538)
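
the space handling can also be chosen explicitly; both forms below are standard
syntax, with drop storage being the default behaviour:

truncate table departments drop storage;
truncate table departments reuse storage;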

59. tom was allocated 10 mb of quota in the users tablespace. he created database
objects in the users tablespace. the total space allocated for the objects owned
by tom is 5 mb. you need to revoke tom's quota from the users tablespace. you
issue this command:

alter user tom quota 0 on users;

what is the result?

a. the statement raises the error: ora-00940: invalid alter command.


b. the statement raises the error: ora-00922: missing or invalid option.
c. the objects owned by tom are automatically deleted from the revoked users
tablespace.
d. the objects owned by tom remain in the revoked tablespace, but these objects
cannot be allocated any new space from the users tablespace.

answer: d
explanation:
use the alter user statement to change the authentication or database resource
characteristics of a database user.
tom quota on the users tablespace is revoked with this statement. the objects are
not deleted from the tablespace.
after you have set the quota to zero, you can still insert and delete rows in the
table that was created in the users tablespace, as long as the table does not
need to allocate a new extent; the existing objects simply cannot be allocated
any new space from the users tablespace.
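
one way to verify the result (dictionary names are stored in uppercase); a
max_bytes value of 0 means the existing 5 mb stays but no new space can be
allocated:

select tablespace_name, bytes, max_bytes
from dba_ts_quotas
where username = 'TOM';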

60. which background process performs a checkpoint in the database by writing
modified blocks from the database buffer cache in the sga to the data files?

a. lgwr
b. smon
c. dbwn
d. ckpt
e. pmon

answer: c

explanation:
refer to question 54 for the explanation of some of the words used in the answer.
database writer (dbw n)
the database writer writes modified blocks from the database buffer cache to the
datafiles. although one database writer process (dbw0) is sufficient for most
systems, you can configure additional processes (dbw1 through dbw9 and dbwa
through dbwj) to improve write performance for a system that modifies data
heavily. the initialization parameter db_writer_processes specifies the number of
dbwn processes.
checkpoint (ckpt)
at specific times, all modified database buffers in the sga are written to the
datafiles by dbwn. this event is called a checkpoint. the checkpoint process is
responsible for signaling dbwn at checkpoints and updating all the datafiles and
control files of the database to indicate the most recent checkpoint.

61. which command would revoke the role_emp role from all users?

a. revoke role_emp from all;


b. revoke role_emp from public;
c. revoke role_emp from default;
d. revoke role_emp from all_users;

answer: b

explanation:
privileges and roles can also be granted to and revoked from the user group
public. because public is accessible to every database user, all privileges and
roles granted to public are accessible to every database user.
errors given by the answers:
answer a: ora-00987: missing or invalid username(s).
answer b: (it's ok as long as the role was granted to public).
answer c: ora-00987: missing or invalid username(s).
answer d: ora-01917: user or role 'all_users' does not exist.

62. you are experiencing intermittent hardware problems with the disk drive on
which your control file is located. you decide to multiplex your control file.

while your database is open, you perform these steps:
1. make a copy of your control file using an operating system command.
2. add the new file name to the list of files for the control_files parameter in
your text initialization parameter file using an editor.
3. shut down the instance.
4. issue the startup command to restart the instance, mount, and open the
database.

the instance starts, but the database mount fails. why?

a. you copied the control file before shutting down the instance.
b. you used an operating system command to copy the control file.
c. the oracle server does not know the name of the new control file.
d. you added the new control file name to the control_files parameter before
shutting down the instance.

answer: a

explanation:
to multiplex or move additional copies of the current control file:
1. shutdown the database.
2. exit server manager.
3. copy an existing control file to a different location, using operating system
commands.
4. edit the control_files parameter in the database's parameter file to add the
new control file's name, or to change the existing control filename.
5. restart server manager.
6. restart the database.

for more information refer to steps for creating new control files in
administrator's guide on page 6-7

63. what determines the initial size of a tablespace?

a. the initial clause of the create tablespace statement


b. the minextents clause of the create tablespace statement
c. the minimum extent clause of the create tablespace statement
d. the sum of the initial and next clauses of the create tablespace statement
e. the sum of the sizes of all data files specified in the create tablespace
statement

answer: e

explanation:
minimum extent clause
specify the minimum size of an extent in the tablespace. this clause lets you
control free space fragmentation in the tablespace by ensuring that every used or
free extent size in a tablespace is at least as large as, and is a multiple of,
integer.
the storage_clause is interpreted differently for locally managed tablespaces. at
creation, oracle ignores maxextents and uses the remaining parameter values to
calculate the initial size of the segment.
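
for example (paths and sizes hypothetical), this tablespace starts out at 150m,
the sum of its two data files:

create tablespace app_data
datafile '/u01/oradata/prod/app_data01.dbf' size 100m,
         '/u02/oradata/prod/app_data02.dbf' size 50m;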

64. you are going to create a new database. you will not use operating system
authentication.
which two files do you need to create before creating the database? (choose two.)

a. control file
b. password file
c. redo log file
d. alert log file
e. initialization parameter file

answer: b, e

explanation:
answer a makes no sense: creating the control files is part of what the create
database procedure itself does. other proof:
oracle9i sql reference release 2 (9.2) march 2002 part no. a96540-01 (a96540.pdf)
13-26
controlfile reuse clause
specify controlfile reuse to reuse existing control files identified by the
initialization parameter control_files, thus ignoring and overwriting any
information they currently contain. normally you use this clause only when you are
re-creating a database, rather than creating one for the first time. you cannot
use this clause if you also specify a parameter value that requires that the
control file be larger than the existing files. these parameters are maxlogfiles,
maxlogmembers, maxloghistory, maxdatafiles, and maxinstances. if you omit this
clause and any of the files specified by control_files already exist, oracle
returns an error.
password file
but since no os authentication is used, the other choice can only be password-file
authentication. for this purpose a password file is needed.
redo log file
redo log file is used for the transactions within a database, not for database
creation.
alert log file
see question 11 ("trace files, on the other hand, are generated by the oracle
background processes or other connected net8 processes when oracle internal errors
occur and they dump all information about the error into the trace files.").
initialization parameter file
we need one, this file contains the description of the created database.

so the answer is b, e.

remark: logically, if oracle wants to read from a file, the file needs to exist
beforehand; if oracle wants to write to a file, it will create one.

65. based on the following profile limits, if a user attempts to log in and fails
after five tries, how long must the user wait before attempting to log in
again?

alter profile default limit
password_life_time 60
password_grace_time 10
password_reuse_time 1800
password_reuse_max unlimited
failed_login_attempts 5
password_lock_time 1/1440
password_verify_function verify_function;

a. 1 minute
b. 5 minutes
c. 10 minutes
d. 14 minutes
e. 18 minutes
f. 60 minutes

answer: a

explanation:
password_lock_time is the interesting parameter: password_lock_time specifies the
number of days an account will be locked after the specified number of consecutive
failed login attempts.
now we have 1/1440 days = 24/1440 hours = 24*60/1440 minutes = 1 minute.

password_parameters
failed_login_attempts: specify the number of failed attempts to log in to the user
account before the account is locked.
password_life_time: specify the number of days the same password can be used for
authentication. the password expires if it is not changed within this period, and
further connections are rejected.
password_reuse_time: specify the number of days before which a password cannot be
reused. if you set password_reuse_time to an integer value, then you must set
password_reuse_max to unlimited.
password_reuse_max: specify the number of password changes required before the
current password can be reused. if you set password_reuse_max to an integer value,
then you must set password_reuse_time to unlimited.
password_lock_time: specify the number of days an account will be locked after the
specified number of consecutive failed login attempts.
password_grace_time: specify the number of days after the grace period begins
during which a warning is issued and login is allowed. if the password is not
changed during the grace period, the password expires.
password_verify_function: the password_verify_function clause lets a pl/sql
password complexity verification script be passed as an argument to the create
profile statement. oracle provides a default script, but you can create your own
routine or use third-party software instead. for function, specify the name of the
password complexity verification routine, specify null to indicate that no
password verification is performed.
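
a sketch of how such limits are typically put in place (profile and user names
hypothetical); password_lock_time is given in days, so 1/1440 is one minute:

create profile secure_login limit
failed_login_attempts 5
password_lock_time 1/1440;

alter user scott profile secure_login;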

66. evaluate the following sql:

create user sh identified by sh;

grant
create any materialized view,
create any dimension,
drop any dimension,
query rewrite,
global query rewrite
to dw_manager
with admin option;

grant dw_manager to sh with admin option;

which three actions is the user sh able to perform? (choose three.)

a. select from a table


b. create and drop a materialized view
c. alter a materialized view that you created
d. grant and revoke the role to and from other users
e. enable the role and exercise any privileges in the role's privilege domain

answer: b, d, e

explanation:
create any materialized view: create materialized views in any schema.
create any dimension: create dimensions in any schema.
drop any dimension: drop dimensions in any schema.
query rewrite: enable rewrite using a materialized view, or create a
function-based index, when that materialized view or index references tables and
views that are in the grantee's own schema.
global query rewrite: enable rewrite using a materialized view, or create a
function-based index, when that materialized view or index references tables or
views in any schema.
with admin option: specify with admin option to enable the grantee to:
(a) grant the role to another user or role, unless the role is a global role.
(b) revoke the role from another user or role.
(c) alter the role to change the authorization needed to access it.
(d) drop the role.

67. which constraint state prevents new data that violates the constraint from
being entered, but allows invalid data to exist in the table?

a. enable validate
b. disable validate
c. enable novalidate
d. disable novalidate

answer: c

explanation:
enable validate specifies that all old and new data also complies with the
constraint. an enabled validated constraint guarantees that all data is and will
continue to be valid.
enable novalidate ensures that all new dml operations on the constrained data
comply with the constraint. this clause does not ensure that existing data in the
table complies with the constraint and therefore does not require a table lock.
disable validate disables the constraint and drops the index on the constraint,
but keeps the constraint valid. this feature is most useful in data warehousing
situations, because it lets you load large amounts of data while also saving space
by not having an index. this setting lets you load data from a nonpartitioned
table into a partitioned table using the exchange_partition_clause of the alter
table statement or using sql*loader. all other modifications to the table
(inserts, updates, and deletes) by other sql statements are disallowed.
disable novalidate signifies that oracle makes no effort to maintain the
constraint (because it is disabled) and cannot guarantee that the constraint is
true (because it is not being validated).

for more info, look at page 7-20 of the oracle9i sql reference document.
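
for example (table and constraint names hypothetical), this keeps existing
violations in place while rejecting new ones:

alter table orders enable novalidate constraint chk_order_amount;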

68. which storage structure provides a way to physically store rows from more than
one table in the same data block?

a. cluster table
b. partitioned table
c. unclustered table
d. index-organized table

answer: a

explanation:
clusters:
(a) group of one or more tables physically stored together because they share
common columns and are often used together.
(b) since related rows are stored together, disk access time improves.
(c) clusters do not affect application design.
(d) data stored in a clustered table is accessed by sql in the same way as data
stored in a non-clustered table.

partitioning addresses key issues in supporting very large tables and indexes by
letting you decompose them into smaller and more manageable pieces called
partitions. sql queries and dml statements do not need to be modified in order to
access partitioned tables. however, after partitions are defined, ddl statements
can access and manipulate individuals partitions rather than entire tables or
indexes.
this is how partitioning can simplify the manageability of large database objects.
also, partitioning is entirely transparent to applications.

for more info, look at page 10-64 of oracle9i database concepts (nice diagram of
clustered and non-clustered tables).
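
a minimal sketch (names hypothetical) of an index cluster that stores dept and
emp rows with the same deptno in the same data blocks:

create cluster emp_dept_cluster (deptno number(2));
create index emp_dept_cluster_idx on cluster emp_dept_cluster;
create table dept_c (deptno number(2), dname varchar2(14))
cluster emp_dept_cluster (deptno);
create table emp_c (empno number(4), ename varchar2(10), deptno number(2))
cluster emp_dept_cluster (deptno);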

69. which are considered types of segments?

a. only lobs
b. only nested tables
c. only index-organized tables
d. only lobs and index-organized tables
e. only nested tables and index-organized tables
f. only lobs, nested tables, and index-organized tables
g. nested tables, lobs, index-organized tables, and boot straps

answer: g

explanation:
2-12 oracle9i database concepts:
a single data segment in an oracle database holds all of the data for one of the
following:
(a) a table that is not partitioned or clustered.
(b) a partition of a partitioned table.
(c) a cluster of tables.

oracle databases use four types of segments:


(1) data segments 1: a table that is not partitioned or clustered.
(2) data segments 2: a partition of a partitioned table.
(3) index segments: every nonpartitioned index in an oracle database has a single
index segment to hold all of its data. for a partitioned index, every partition
has a single index segment to hold its data.
(4) temporary segments: when processing queries, oracle often requires temporary
workspace for intermediate stages of sql statement parsing and execution. oracle
automatically allocates this disk space called a temporary segment. typically,
oracle requires a temporary segment as a work area for sorting. oracle does not
create a segment if the sorting operation can be done in memory or if oracle finds
some other way to perform the operation using indexes.
operations that require temporary segments:
(a) create index
(b) select ... order by
(c) select distinct ...
(d) select ... group by
(e) select ... union
(f) select ... intersect
(g) select ... minus
look at segment overview on page 2-12 in the oracle 9i concepts pdf file. by the
way, we use segments to store data, so everything is stored in segments.

70. select the memory structure(s) that would be used to store the parse
information and actual value of the bind variable id for the following set of
commands:

variable id number;
begin
:id:=1;
end;
/

a. pga only
b. row cache and pga
c. pga and library cache
d. shared pool only
e. library cache and buffer cache

answer: c

explanation:
reason for c instead of b:
http://download-west.oracle.com/docs/cd/b10501_01/server.920/a96524/c16sqlpl.htm#cncpt416:
parsing is one stage in the processing of a sql statement. when an application
issues a sql statement, the application makes a parse call to oracle. during the
parse call, oracle:
(a) checks the statement for syntactic and semantic validity.
(b) determines whether the process issuing the statement has privileges to run it.
(c) allocates a private sql area for the statement.

oracle also determines whether there is an existing shared sql area containing the
parsed representation of the statement in the library cache. if so, the user
process uses this parsed representation and runs the statement immediately. if
not, oracle generates the parsed representation of the statement, and the user
process allocates a shared sql area for the statement in the library cache and
stores its parsed representation there.

the basic memory structures associated with oracle include:


(1) system global area (sga), which is shared by all server and background
processes and holds the following:
(a) database buffer cache.
(b) redo log buffer.
(c) shared pool.
(d) large pool (if configured).

(2) program global areas (pga), which is private to each server and background
process; there is one pga for each process. the pga holds the following:
(a) stack areas.
(b) data areas.

as i understand it, the parsed (shared) representation of the statement is held
in the library cache in the sga, while the private sql area holding the actual
bind value is in the pga for a dedicated server connection - hence c.

71. the new human resources application will be used to manage employee data in
the employees table. you are developing a strategy to manage user privileges. your
strategy should allow for privileges to be granted or revoked from individual
users or groups of users with minimal administrative effort.

the users of the human resources application have these requirements:
a manager should be able to view the personal information of the employees in
his/her group and make changes to their title and salary.

what should you grant to the manager user?

a. grant select on the employees table


b. grant insert on the employees table
c. grant update on the employees table
d. grant select on the employees table and then grant update on the title and
salary columns
e. grant select on the employees table and then grant insert on the title and
salary columns
f. grant update on the employees table and then grant select on the title and
salary columns
g. grant insert on the employees table and then grant select on the title,
manager, and salary columns

answer: d

i suppose this question is logical!

72. an insert statement failed and is rolled back. what does this demonstrate?

a. insert recovery
b. read consistency
c. transaction recovery
d. transaction rollback

answer: d

explanation:
if at any time during execution a sql statement causes an error, all effects of
the statement are rolled back. the effect of the rollback is as if that statement
had never been run. this operation is a statement-level rollback.
errors discovered during sql statement execution cause statement-level rollbacks.
an example of such an error is attempting to insert a duplicate value into a
primary key column. since statement-level rollback is not one of the options, and
the failed statement's changes are undone using rollback (undo) data, the scenario
is best described by answer d, transaction rollback.

73. the database currently has one control file. you decide that three control
files will provide better protection against a single point of failure. to
accomplish this, you modify the spfile to point to the locations of the three
control files. the message "system altered" was received after execution of the
statement.
you shut down the database and copy the control file to the new names and
locations. on startup you receive the error ora-00205: error in identifying
control file. you look in the alert log and determine that you specified the
incorrect path for one of the control files.

which steps are required to resolve the problem and start the database?

a.
1. connect as sysdba.
2. shut down the database.
3. start the database in nomount mode.
4. use the alter system set control_files command to correct the error.
5. shut down the database.
6. start the database.

b.
1. connect as sysdba.
2. shut down the database.
3. start the database in mount mode.
4. remove the spfile by using a unix command.
5. recreate the spfile from the pfile.
6. use the alter system set control_files command to correct the error.
7. start the database.

c.
1. connect as sysdba.
2. shut down the database.
3. remove the control files using the os command.
4. start the database in nomount mode.
5. remove the spfile by using an os command.
6. re-create the spfile from the pfile.
7. use the alter system set control_files command to define the control files.
8. shut down the database.
9. start the database.

answer: a

explanation:
some parameters can be changed dynamically by using the alter session or alter
system statement while the instance is running. unless you are using a server
parameter file, changes made using the alter system statement are only in effect
for the current instance. you must manually update the text initialization
parameter file for the changes to be known the next time you start up an instance.
when you use a server parameter file, you can update the parameters on disk, so
that changes persist across database shutdown and startup.

see question number 62: you do not need to create the spfile again. use alter
system to update the control_files parameter value.
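
a minimal sketch of the fix described in answer a, with hypothetical control file
names and paths:

connect / as sysdba
shutdown immediate
startup nomount
alter system set control_files =
  '/u01/oradata/db01/control01.ctl',
  '/u02/oradata/db01/control02.ctl',
  '/u03/oradata/db01/control03.ctl'
  scope = spfile;
shutdown immediate
startup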

74. which process is started when a user connects to the oracle server in a
dedicated server mode?

a. dbwn
b. pmon
c. smon
d. server

answer: d

explanation:
smon: the system monitor performs crash recovery when a failed instance starts up
again. in a cluster database (oracle9i real application clusters), the smon
process of one instance can perform instance recovery for other instances that
have failed. smon also cleans up temporary segments that are no longer in use and
recovers dead transactions skipped during crash and instance recovery because of
file-read or offline errors. these transactions are eventually recovered by smon
when the tablespace or file is brought back online.

pmon: the process monitor performs process recovery when a user process fails.
pmon is responsible for cleaning up the cache and freeing resources that the
process was using. pmon also checks on the dispatcher processes (see below) and
server processes and restarts them if they have failed.
checkpoint (ckpt): at specific times, all modified database buffers in the sga are
written to the datafiles by dbwn. this event is called a checkpoint. the
checkpoint process is responsible for signaling dbwn at checkpoints and updating
all the datafiles and control files of the database to indicate the most recent
checkpoint.

none of these background processes is started when a user connects. in dedicated
server mode, oracle starts a dedicated server process for each user connection, so
the correct answer is d.

75. you are creating a new database. you do not want users to use the system
tablespace for sorting operations.

what should you do when you issue the create database statement to prevent this?

a. create an undo tablespace.


b. create a default temporary tablespace.
c. create a tablespace with the undo keyword.
d. create a tablespace with the temporary keyword.

answer: b

explanation:
you can manage space for sort operations more efficiently by designating temporary
tablespaces exclusively for sorts. doing so effectively eliminates serialization
of space management operations involved in the allocation and deallocation of sort
space.
all operations that use sorts, including joins, index builds, ordering, computing
aggregates (group by), and collecting optimizer statistics, benefit from temporary
tablespaces. the performance gains are significant with real application clusters.
specify a default temporary tablespace when you create a database, using the
default temporary tablespace extension to the create database statement.

when a transaction begins, oracle assigns the transaction to an available undo
tablespace or rollback segment to record the rollback entries for the new
transaction.
a database administrator creates undo tablespaces individually, using the create
undo tablespace statement. it can also be created when the database is created,
using the create database statement.

to improve the concurrence of multiple sort operations, reduce their overhead, or
avoid oracle space management operations altogether, create temporary tablespaces.
a temporary tablespace can be shared by multiple users and can be assigned to
users with the create user statement when you create users in the database.
within a temporary tablespace, all sort operations for a given instance and
tablespace share a single sort segment. sort segments exist for every instance
that performs sort operations within a given tablespace. the sort segment is
created by the first statement that uses a temporary tablespace for sorting, after
startup, and is released only at shutdown. an extent cannot be shared by multiple
transactions.
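
a minimal sketch of a create database statement that includes a default temporary
tablespace (all names, paths, and sizes here are hypothetical):

create database sampledb
  datafile '/u01/oradata/sampledb/system01.dbf' size 300m
  logfile group 1 ('/u01/oradata/sampledb/redo01.log') size 50m,
          group 2 ('/u01/oradata/sampledb/redo02.log') size 50m
  default temporary tablespace temp
    tempfile '/u01/oradata/sampledb/temp01.dbf' size 100m
  undo tablespace undotbs1
    datafile '/u01/oradata/sampledb/undotbs01.dbf' size 150m;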

76. which four statements are true about profiles? (choose four.)

a. profiles can control the use of passwords.


b. profile assignments do not affect current sessions.
c. all limits of the default profile are initially unlimited.
d. profiles can be assigned to users and roles, but not other profiles.
e. profiles can ensure that users log off the database when they have left their
session idle for a period of time.
answer: a, b, c, e

explanation:
it's true that profiles can control the use of passwords. this feature protects
the integrity of assigned usernames as well as the overall data integrity of the
oracle database. all limits of the default profile are initially unlimited. the
default profile isn't very restrictive of host system resources; in fact, default
profile gives users unlimited use of all resources definable in the database. any
option in any profile can be changed at any time; however, the change will not
take effect for users assigned to that profile until the user logs out and logs
back in. also profiles can ensure that users log off the database when they have
left their session idle for a period of time.

introduction to the oracle server 1-47:


each user is assigned a profile that specifies limitations on several system
resources available to the user, including the following:
(1) number of concurrent sessions the user can establish.
(2) cpu processing time available for:
(a) the user's session.
(b) a single call to oracle made by a sql statement.
(3) amount of logical i/o available for:
(a) the user's session.
(b) a single call to oracle made by a sql statement.
(4) amount of idle time available for the user's session.
(5) amount of connect time available for the user's session.
(6) password restrictions:
(a) account locking after multiple unsuccessful login attempts.
(b) password expiration and grace period.
(c) password reuse and complexity restrictions.

different profiles can be created and assigned individually to each user of the
database. a default profile is present for all users not explicitly assigned a
profile.
the resource limit feature prevents excessive consumption of global database
system resources.
to allow for greater control over database security, oracle's password management
policy is controlled by dbas and security officers through user profiles.
to alter the enforcement of resource limitation while the database remains open,
you must have the alter system system privilege.
all unspecified resource limits for a new profile take the limit set by a default
profile. initially, all limits of the default profile are set to unlimited.
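
a short sketch of a profile that enforces both a resource limit and password rules
(the limit values are arbitrary examples):

create profile clerk limit
  sessions_per_user     2
  idle_time             30     -- minutes of inactivity before the session is ended
  failed_login_attempts 3
  password_life_time    60;

alter user scott profile clerk;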

77. the database writer (dbwn) background process writes the dirty buffers from
the database buffer cache into the _______.

a. data files only


b. data files and control files only
c. data files and redo log files only
d. data files, redo log files, and control files

answer: a

explanation:
database writer (dbwn)
the database writer writes modified blocks from the database buffer cache to the
datafiles. oracle allows a maximum of 20 database writer processes (dbw0-dbw9 and
dbwa-dbwj). the initialization parameter db_writer_processes specifies the number
of dbwn processes. oracle selects an appropriate default setting for this
initialization parameter (or might adjust a user specified setting) based upon the
number of cpus and the number of processor groups.

78. you used the password file utility to create a password file as follows:

$orapwd file=$oracle_home/dbs/orapwdb01 password=orapass entries=5

you created a user and granted only the sysdba privilege to that user as
follows:
create user dba_user identified by dba_pass;
grant sysdba to dba_user;

the user attempts to connect to the database as follows:

connect dba_user/orapass as sysdba;

why does the connection fail?

a. the dba privilege had not been granted to dba_user.


b. the sysoper privilege had not been granted to dba_user.
c. the user did not provide the password dba_pass to connect as sysdba.
d. the information about dba_user has not been stored in the password file.

answer: c

explanation:
to connect with the sysdba privilege, the user must supply his or her own password:
connect dba_user/dba_pass as sysdba
here the user supplied orapass (the password given to the orapwd utility) instead
of dba_pass, so the connection fails. this is why answer c is correct.

79. you intend to use only password authentication and have used the password file
utility to create a password file as follows:

$orapwd file=$oracle_home/dbs/orapwdb01 password=orapass entries=5

the remote_login_passwordfile initialization parameter is set to none.

you created a user and granted only the sysdba privilege to that user as
follows:
create user dba_user identified by dba_pass;
grant sysdba to dba_user;

the user attempts to connect to the database as follows:

connect dba_user/dba_pass as sysdba;

why does the connection fail?

a. the dba privilege was not granted to dba_user.


b. remote_login_passwordfile is not set to exclusive.
c. the password file has been created in the wrong directory.
d. the user did not specify the password orapass to connect as sysdba.
answer: b

oracle 7 documentation, the oracle7 database administrator, 1 - 11


remote_login_passwordfile
in addition to creating the password file, you must also set the initialization
parameter remote_login_passwordfile to the appropriate value. the values
recognized are described below.
note: to startup an instance or database, you must use server manager. you must
specify a database name and a parameter file to initialize the instance settings.
you may specify a fully-qualified remote database name using sql*net. however, the
initialization parameter file and any associated files, such as a configuration
file, must exist on the client machine. that is, the parameter file must be on the
machine where you are running server manager.
none
setting this parameter to none causes oracle7 to behave as if the password file
does not exist. that is, no privileged connections are allowed over non-secure
connections. none is the default value for this parameter.
exclusive
an exclusive password file can be used with only one database. only an exclusive
file can contain the names of users other than sys and internal.
using an exclusive password file allows you to grant sysdba and sysoper system
privileges to individual users and have them connect as themselves.
shared
a shared password file can be used by multiple databases. however, the only users
recognized by a shared password file are sys and internal; you cannot add users to
a shared password file. all users needing sysdba or sysoper system privileges must
connect using the same name, sys, and password. this option is useful if you have
a single dba administering multiple databases.
suggestion: to achieve the greatest level of security, you should set the
remote_login_passwordfile file initialization parameter to exclusive immediately
after creating the password file.
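
a minimal sketch of the fix, assuming you edit the initialization file and restart
the instance so that the password file is actually used:

remote_login_passwordfile = exclusive

after the restart, connect dba_user/dba_pass as sysdba succeeds because dba_user is
recorded in the password file.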

80. for which two constraints are indexes created when the constraint is added?
(choose two.)

a. check
b. unique
c. not null
d. primary key
e. foreign key

answer: b, d

explanation:
oracle enforces all primary key constraints using indexes.
oracle enforces unique integrity constraints with indexes.
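
a quick way to see this, using a hypothetical demo table:

create table demo (
  id    number       constraint demo_pk primary key,
  code  varchar2(10) constraint demo_uk unique,
  flag  char(1)      constraint demo_ck check (flag in ('y', 'n')));

select index_name from user_indexes where table_name = 'DEMO';
-- only demo_pk and demo_uk are listed; the check constraint creates no index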

81. you check the alert log for your database and discover that there are many
lines that say "checkpoint not complete". what are two ways to solve this problem?
(choose two.)

a. delete archived log files


b. add more online redo log groups
c. increase the size of archived log files
d. increase the size of online redo log files
answer: b, d

explanation:
"checkpoint not complete" means that a checkpoint started, but before it could
finish another higher priority checkpoint was issued (usually from a log switch),
so the first checkpoint was essentially rolled back.

i found these answers from newsgroups and they sound quite good to me:
increasing the number of redo logs seems to be most effective. normally,
checkpoints occur for 1 of 3 reasons:
1) the log_checkpoint_interval was reached.
2) a log switch occurred.
3) the log_checkpoint_timeout was reached.

the archiver copies the online redo log files to archival storage after a log
switch has occurred. although a single arcn process (arc0) is sufficient for most
systems, you can specify up to 10 arcn processes by using the dynamic
initialization parameter log_archive_max_processes. if the workload becomes too
great for the current number of arcn processes, then lgwr automatically starts
another arcn process up to the maximum of 10 processes. arcn is active only when a
database is in archivelog mode and automatic archiving is enabled.

note that you cannot directly "increase the size of archived log files" (option c):
an archived log is simply a copy of a filled online redo log, so its size follows
the online redo log size. that leaves b and d as the two valid fixes.
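
a minimal sketch of the two valid fixes, with hypothetical group numbers, paths,
and sizes:

-- add another online redo log group
alter database add logfile group 4
  ('/u01/oradata/db01/redo04a.log', '/u02/oradata/db01/redo04b.log') size 100m;

-- "increase the size" by adding larger groups and dropping a small one
-- once it is no longer active
alter database add logfile group 5
  ('/u01/oradata/db01/redo05a.log', '/u02/oradata/db01/redo05b.log') size 200m;
alter database drop logfile group 1;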

82. the database needs to be shut down for hardware maintenance. all users
sessions except one have either voluntarily logged off or have been forcibly
killed. the one remaining user session is running a business critical data
manipulation language (dml) statement and it must complete prior to shutting down
the database.
which shutdown statement prevents new user connections, logs off the remaining
user, and shuts down the database after the dml statement completes?

a. shutdown
b. shutdown abort
c. shutdown normal
d. shutdown immediate
e. shutdown transactional

answer: e

explanation:
from a newsgroup:
there are four ways to shut down a database:
(a) shutdown waits for everyone to finish & log out before it shuts down. the
database is cleanly shutdown.
(b) shutdown immediate rolls back all uncommitted transactions before it shuts
down. the database is cleanly shutdown.
(c) shutdown transactional waits for all current transactions to commit or
rollback before it shuts down. the database is cleanly shutdown.
(d) shutdown abort quickly shuts down - the next restart will require instance
recovery. the database is technically crashed.
the key reason for an immediate shutdown not being immediate is because of the
need to rollback all current transactions. if a user has just started a
transaction to update emp set sal = sal * 2 where emp_id = 1000; then this will be
rolled back almost instantaneously.
however, if another user has been running a huge update for the last four hours,
and has not yet committed, then four hours of updates have to be rolled back and
this takes time.
so, if you really want to shutdown right now, then the advised route is: shutdown
abort - startup restrict - shutdown

when you shutdown abort, oracle kills everything immediately. startup restrict
will allow only dba users to get in but, more importantly, will carry out instance
recovery and recover back to a consistent state using the current on-line redo
logs. the final shutdown will perform a clean shutdown. any cold backups taken now
will be of a consistent database.
there has been much discussion on this very subject on the oracle server
newsgroups. some people are happy to backup the database after a shutdown abort,
others are not. i prefer to use the above method prior to taking a cold backup -
if i have been unable to shutdown or shutdown immediate that is.

83. when preparing to create a database, you should be sure that you have
sufficient disk space for your database files. when calculating the space
requirements you need to consider that some of the files may be multiplexed.
which two types of files should you plan to multiplex? (choose two.)

a. data files
b. control file
c. password file
d. online redo log files
e. initialization parameter file

answer: b, d

explanation:
multiplex: files are stored at more than one location.

oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01
(a96524.pdf) 3-22
multiplexed control files: as with online redo log files, oracle enables multiple,
identical control files to be open concurrently and written for the same database.
by storing multiple control files for a single database on different disks, you
can safeguard against a single point of failure with respect to control files. if
a single disk that contained a control file crashes, then the current instance
fails when oracle attempts to access the damaged control file. however, when other
copies of the current control file are available on different disks, an instance
can be restarted
easily without the need for database recovery.
if all control files of a database are permanently lost during operation, then the
instance is aborted and media recovery is required. media recovery is not
straightforward if an older backup of a control file must be used because a
current copy is not available. therefore, it is strongly recommended that you
adhere to the following practices:
(a) use multiplexed control files with each database
(b) store each copy on a different physical disk
(c) use operating system mirroring
(d) monitor backups

oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01
(a96524.pdf) 1-7
redo log files: to protect against a failure involving the redo log itself, oracle
allows a multiplexed redo log so that two or more copies of the redo log can be
maintained on different disks.
the information in a redo log file is used only to recover the database from a
system or media failure that prevents database data from being written to the
datafiles. for example, if an
unexpected power outage terminates database operation, then data in memory cannot
be written to the datafiles, and the data is lost. however, lost data can be
recovered when the database is opened, after power is restored. by applying the
information in the most recent redo log files to the database datafiles, oracle
restores the database to the time at which the power failure occurred.
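
a minimal sketch of multiplexing both file types, with hypothetical disks and paths:

# init.ora: two control file copies on different disks
control_files = ('/u01/oradata/db01/control01.ctl',
                 '/u02/oradata/db01/control02.ctl')

-- add a second member on another disk to an existing redo log group
alter database add logfile member
  '/u02/oradata/db01/redo01b.log' to group 1;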

84. you decide to use oracle managed files in your database.

which two are requirements with respect to the directories you specify in the
db_create_file_dest and db_create_online_log_dest_n initialization parameters?
(choose two).

a. the directory must already exist.


b. the directory must not contain any other files.
c. the directory must be created in the $oracle_home directory.
d. the directory must have appropriate permissions that allow oracle to create
files in it.

answer: a, d

explanation:
setting the db_create_online_log_dest_n initialization parameter:
you specify the name of a file system directory that becomes the default location
for the creation of the operating system files for these entities. you can specify
up to five multiplexed locations.

setting the db_create_file_dest initialization parameter:


you specify the name of a file system directory that becomes the default location
for the creation of the operating system files for these entities

in conclusion, the directories must already exist (oracle does not create them), and
it does not matter whether they contain other files. they can be located anywhere,
not just under $oracle_home, because you specify their locations explicitly. the
operating system permissions must allow oracle to create files in them. hence a and
d are correct.
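
a minimal sketch of using oracle managed files, with hypothetical directories:

alter system set db_create_file_dest = '/u01/oradata/db01';
alter system set db_create_online_log_dest_1 = '/u02/oradata/db01';
alter system set db_create_online_log_dest_2 = '/u03/oradata/db01';

-- the datafile name and location are now generated by oracle
create tablespace app_data datafile size 100m;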

85. in which two situations does the log writer (lgwr) process write the redo
entries from the redo log buffer to the current online redo log group? (choose
two.)

a. when a transaction commits


b. when a rollback is executed
c. when the redo log buffer is about to become completely full (90%)
d. before the dbwn writes modified blocks in the database buffer cache to the data
files
e. when there is more than a third of a megabyte of changed records in the redo
log buffer

answer: a, d

explanation:
see ocp oracle 9i database: fundamentals i, p. 19.:
lgwr writes the contents of the redo log buffer to the online redo log file under
the following situations:
(a) when a transaction commits.
(b) when the redo log buffer is one-third full.
(c) when there is more than one megabyte of changes recorded in the redo log
buffer.
(d) before the dbwn writes modified blocks in the database buffer cache to the
datafiles.

86. examine the syntax below, which creates a departments table:

create table hr.departments(
department_id number(4),
department_name varchar2(30),
manager_id number(6),
location_id number(4))
storage(initial 200k next 200k
pctincrease 50 minextents 1 maxextents 5)
tablespace data;

what is the size defined for the fifth extent?

a. 200 k
b. 300 k
c. 450 k
d. 675 k
e. not defined

answer: d

explanation:
the first extent is 200k (initial) and the second is 200k (next). with pctincrease
set to 50, each subsequent extent is 50% larger than the previous one:
extent 3 = 200k * 1.5 = 300k; extent 4 = 300k * 1.5 = 450k; extent 5 = 450k * 1.5 = 675k.

87. after running the analyze index orders_cust_idx validate structure command,
you query the index_stats view and discover that there is a high ratio of
del_lf_rows to lf_rows values for this index.
you decide to reorganize the index to free up the extra space, but the space
should remain allocated to the orders_cust_idx index so that it can be reused by
new entries inserted into the index.

which command(s) allows you to perform this task with the minimum impact to any
users who run queries that need to access this index while the index is
reorganized?

a. alter index rebuild


b. alter index coalesce
c. alter index deallocate unused
d. drop index followed by create index

answer: b

explanation:
when you rebuild an index, you use an existing index as the data source. creating
an index in this manner enables you to change storage characteristics or move to a
new tablespace. rebuilding an index based on an existing data source removes
intra-block fragmentation. compared to dropping the index and using the create
index statement, re-creating an existing index offers better performance.

improper sizing or increased growth can produce index fragmentation. to eliminate
or reduce fragmentation, you can rebuild or coalesce the index.

coalescing an index online vs. rebuilding an index online. online index coalesce
is an in-place data reorganization operation, hence does not require additional
disk space like index rebuild does. index rebuild requires temporary disk space
equal to the size of the index plus sort space during the operation. index
coalesce does not reduce the height of the b-tree. it only tries to reduce the
number of leaf blocks. the coalesce operation does not free up space for users but
does improve index scan performance.

if a user needs to move an index to a new tablespace, online index rebuild is
recommended. index rebuild also improves space utilization, but the index rebuild
operation has higher overhead than the index coalesce operation.
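
a one-line sketch of the chosen approach, plus the heavier alternative for
comparison:

alter index orders_cust_idx coalesce;

-- alternative: needs temporary space for a full copy of the index
alter index orders_cust_idx rebuild online;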

88. you started your database with this command:

startup pfile=initsampledb.ora

one of the values in the initsampledb.ora parameter file is:

log_archive_start=false

while your database is open, you issue this command to start the archiver
process:
alter system archive log start;

you shut down your database to take a backup and restart it using the
initsampledb.ora parameter file again. when you check the status of the archiver,
you find that it is disabled.

why is the archiver disabled?

a. when you take a backup the archiver process is disabled.


b. the archiver can only be started by issuing the alter database archivelog
command.
c. log_archive_start is still set to false because the pfile is not updated when
you issue the alter system command.
d. the archiver can only be started by issuing the alter system archive log start
command each time you open the database.

answer: c

explanation:
if an instance is shut down and restarted after automatic archiving is enabled
using the alter system statement, the instance is reinitialized using the settings
of the initialization parameter file. those settings may or may not enable
automatic archiving. if your intent is to always archive redo log files
automatically, then you should include log_archive_start = true in your
initialization parameters.

answer d is somewhat correct, since every time the database is started with this
pfile, log_archive_start is read as false and the archiver must be started manually
with alter system archive log start. however, c is the better answer: the root
cause is that the pfile still contains log_archive_start=false, because alter
system does not update a pfile. the only alternatives are to issue the command
again after each startup or to edit the pfile and set log_archive_start=true.
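
a minimal sketch of the permanent fix, an edit to the hypothetical initsampledb.ora
file:

# initsampledb.ora
log_archive_start = true   # the archiver is then started automatically at every instance startup
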
89. what provides for recovery of data that has not been written to the data files
prior to a failure?

a. redo log
b. undo segment
c. rollback segment
d. system tablespace

answer: a

explanation:

oracle 7 documentation, oracle 7 server concepts, 22-5


the redo log
the redo log, present for every oracle database, records all changes made in an
oracle database. the redo log of a database consists of at least two redo log
files that are separate from the datafiles (which actually store a database's
data). as part of database recovery from an instance or media failure, oracle
applies the appropriate changes in the database's redo log to the datafiles, which
updates database data to the instant that the failure occurred. a database's redo
log can be comprised of two parts: the online redo log and the archived redo log,
discussed in the following sections.
the online redo log every oracle database has an associated online redo log. the
online redo log works with the oracle background process lgwr to immediately
record all changes made through the associated instance. the online redo log
consists of two or more pre-allocated files that are reused in a circular fashion
to record ongoing database changes; see "the online redo log" on page 22-6 for
more information.
the archived (offline) redo log optionally, you can configure an oracle database
to archive files of the online redo log once they fill. the online redo log files
that are archived are uniquely identified and make up the archived redo log. by
archiving filled online redo log files, older redo log information is preserved
for more extensive database recovery operations, while the pre-allocated online
redo log files continue to be reused to store the most current database changes;
see "the archived redo log" page 22-16 for more information.

oracle9i database administrator's guide release 2 (9.2) march 2002 part no.
a96521-01 (a96521.pdf) 13-2
undo and rollback segments
every oracle database must have a method of maintaining information that is used
to roll back, or undo, changes to the database. such information consists of
records of the actions of transactions, primarily before they are committed.
oracle refers to these records collectively as undo.
undo records are used to:
(a) roll back transactions when a rollback statement is issued.
(b) recover the database.
(c) provide read consistency.

when a rollback statement is issued, undo records are used to undo changes that
were made to the database by the uncommitted transaction. during database
recovery, undo records are used to undo any uncommitted changes applied from the
redo log to the datafiles. undo records provide read consistency by maintaining
the before image of the data for users who are accessing the data at the same time
that another user is changing it.
historically, oracle has used rollback segments to store undo. space management
for these rollback segments has proven to be quite complex. oracle now offers
another method of storing undo that eliminates the complexities of managing
rollback segment space, and enables dbas to exert control over how long undo is
retained before being overwritten. this method uses an undo tablespace.

90. there are three ways to specify national language support parameters:

1. initialization parameters
2. environment variables
3. alter session parameters

match each of these with their appropriate definitions.

a.
1) parameters on the client side to specify locale-dependent behavior overriding
the defaults set for the server
2) parameters on the server side to specify the default server environment
3) parameters that override the defaults set for the session or the server

b.
1) parameters on the server side to specify the default server environment
2) parameters on the client side to specify locale-dependent behavior overriding
the defaults set for the server
3) parameters that override the defaults set for the session or the server

c.
1) parameters on the server side to specify the default server environment
2) parameters that override the defaults set for the session or the server
3) parameters on the client side to specify locale-dependent behavior overriding
the defaults set for the server

d.
1) parameters on the client side to specify locale-dependent behavior overriding
the defaults set for the server
2) parameters that override the defaults set for the session or the server
3) parameters on the server side to specify the default server environment

answer: b

explanation:
initialization parameters specify the default nls environment on the server side;
environment variables (such as nls_lang) on the client side specify
locale-dependent behavior that overrides the server defaults; and alter session
parameters override whatever has been set for the session or the server. this
ordering matches option b.
oracle provides appropriate starting values in the starter initialization parameter
file supplied with your database software, or as created for you by the database
configuration assistant; you can edit these oracle-supplied initialization
parameters and add others, depending upon your configuration and options and how
you plan to tune the database.

91. which graphical dba administration tool would you use to tune an oracle
database?

a. sql*plus
b. oracle enterprise manager
c. oracle universal installer
d. oracle database configuration assistant

answer: b

explanation:
if you think sql*plus is a graphical tool, then i call microsoft windows an
artistic tool ;-)

you can more easily administer the database resource manager through the oracle
enterprise manager (oem). it provides an easy to use graphical interface for
administering the database resource manager. you can choose to use the oracle
enterprise manager for administering your database, including starting it up and
shutting it down. the oracle enterprise manager is a separate oracle product, that
combines a graphical console, agents, common services, and tools to provide an
integrated and comprehensive systems management platform for managing oracle
products. it enables you to perform the functions discussed in this book using a
gui interface, rather than command lines.

the database configuration assistant (dbca) is an oracle-supplied tool that enables
you to create an oracle database, configure database options for an existing
oracle database, delete an oracle database, or manage database templates. dbca is
launched automatically by the oracle universal installer, but it can be invoked
standalone from the windows operating system start menu (under configuration
assistants).

92. which method is correct for starting an instance to create a database?

a. startup
b. startup open
c. startup mount
d. startup nomount

answer: d

explanation:
start an instance without mounting a database. typically, you do this only during
database creation or while performing maintenance on the database. use the
startup command with the nomount option.
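
a minimal sketch of the sequence (the parameter file name is hypothetical):

connect / as sysdba
startup nomount pfile=initsampledb.ora
-- only the sga and background processes exist at this point;
-- the create database statement is issued next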

93. you just created five roles using the statements shown:

create role payclerk;
create role oeclerk identified by salary;
create role hr_manager identified externally;
create role genuser identified globally;
create role dev identified using dev_test;

which statement indicates that a user must be authorized to use the role by the
enterprise directory service before the role is enabled?

a. create role payclerk;


b. create role genuser identified globally;
c. create role oeclerk identified by salary;
d. create role dev identified using dev_test;
e. create role hr_manager identified externally;

answer: b

explanation:
creating a global user - the following statement illustrates the creation of a
global user, who is authenticated by ssl and authorized by the enterprise
directory service:
create user scott
identified globally as 'cn=scott,ou=division1,o=oracle,c=us';
the string provided in the as clause provides an identifier (distinguished name,
or dn) meaningful to the enterprise directory.
in this case, scott is truly a global user. the disadvantage is that user scott
must then be created in every database that he must access, plus the directory.
the identified globally clause works the same way for roles: a global role can be
enabled only after the user has been authorized to use it by the enterprise
directory service, which is why answer b is correct.

94. examine the list of steps to rename the data file of a non-system tablespace
hr_tbs. the steps are arranged in random order.

1. shut down the database.
2. bring the hr_tbs tablespace online.
3. execute the alter database rename datafile command.
4. use the operating system command to move or copy the file.
5. bring the tablespace offline.
6. open the database.

what is the correct order for the steps?

a. 1, 3, 4, 6; steps 2 and 5 are not required


b. 1, 4, 3, 6; steps 2 and 5 are not required
c. 2, 3, 4, 5; steps 1 and 6 are not required
d. 5, 4, 3, 2; steps 1 and 6 are not required
e. 5, 3, 4, 1, 6, 2
f. 5, 4, 3, 1, 6, 2

answer: d

explanation:
renaming datafiles in a single tablespace: to rename datafiles from a single
tablespace, complete the following steps:
(1) take the non-system tablespace that contains the datafiles offline.
for example: alter tablespace users offline normal;
(2) rename the datafiles using the operating system.
(3) use the alter tablespace statement with the rename datafile clause to change
the filenames within the database. the new files must already exist; this
statement does not create the files. also, always provide complete filenames
(including their paths) to properly identify the old and new datafiles. in
particular, specify the old datafile name exactly as it appears in the
dba_data_files view of the data dictionary.
(4) back up the database. after making any structural changes to a database,
always perform an immediate and complete backup.
(5) bring the datafile online (this was added by me, i couldn't find it in the
documents). to use this clause for datafiles and tempfiles, the database must be
mounted. the database can also be open, but the datafile or tempfile being renamed
must be offline.

so first take the tablespace offline (step 5), which rules out answers a and b.
the alter statement renames the file only in the oracle data dictionary and control
file; it does not actually rename the file on disk. you must perform that move
through your operating system, so step 4 (the os move or copy) comes before step 3
(the alter ... rename datafile command). finally bring the tablespace back online
(step 2). you do not need to shut down and restart the database, so the correct
order is 5, 4, 3, 2, answer d.
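
a minimal sketch of the sequence, with hypothetical paths:

alter tablespace hr_tbs offline normal;
-- move the file at the operating system level, for example:
--   mv /u01/oradata/db01/hr_tbs01.dbf /u02/oradata/db01/hr_tbs01.dbf
alter tablespace hr_tbs rename datafile
  '/u01/oradata/db01/hr_tbs01.dbf' to '/u02/oradata/db01/hr_tbs01.dbf';
alter tablespace hr_tbs online;
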
95. for a tablespace created with automatic segment-space management, where is
free space managed?

a. in the extent
b. in the control file
c. in the data dictionary
d. in the undo tablespace

answer: a

explanation:
when you create a table in a locally managed tablespace for which automatic
segment-space management is enabled, the need to specify the pctfree (or
freelists) parameter is eliminated. automatic segment-space management is
specified at the tablespace level. the oracle database server automatically and
efficiently manages free and used space within objects created in such
tablespaces.

with automatic segment-space management, free space information is tracked in
bitmap blocks stored within the segment's own extents, rather than in freelists in
the data dictionary. since the extents of the segment hold these bitmaps, answer a
is correct. in answer d, the undo tablespace is used only for undo data, not for
free-space management.
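
a minimal sketch of creating such a tablespace (path and size are hypothetical):

create tablespace app_data
  datafile '/u01/oradata/db01/app_data01.dbf' size 100m
  extent management local
  segment space management auto;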

96. which is true when considering the number of indexes to create on a table?

a. every column that is updated requires an index.


b. every column that is queried is a candidate for an index.
c. columns that are part of a where clause are candidates for an index.
d. on a table used in a data warehouse application there should be no indexes.

answer: c

explanation:
columns that appear frequently in where clauses are good index candidates, because
an index on such a column lets oracle locate the qualifying rows without a full
table scan. indexing every updated or queried column only adds dml overhead, and
data warehouse tables commonly do carry indexes.

97. more stringent user access requirements have been issued. you need to do these
tasks for the user pward:

1. change user authentication to external authentication.
2. revoke the user's ability to create objects in the test_ts tablespace.
3. add a new default and temporary tablespace and set a quota of unlimited.
4. assign the user to the clerk profile.

which statement meets the requirements?

a. alter user pward
   identified externally
   default tablespace data_ts
   temporary tablespace temp_ts
   quota unlimited on data_ts
   quota 0 on test_ts
   grant clerk to pward;

b. alter user pward
   identified by pward
   default tablespace dsta_ts
   temporary tablespace temp_ts
   quota unlimited on data_ts
   quota 0 on test_ts
   profile clerk;

c. alter user pward
   identified externally
   default tablespace data_ts
   temporary tablespace temp_ts
   quota unlimited on data_ts
   quota 0 on test_ts
   profile clerk;

d. alter user pward
   identified externally
   default tablespace data_ts
   temporary tablespace temp_ts
   quota unlimited on data_ts
   quota 0 on test ts;
   grant clerk to pward;

answer: c

explanation:
creating a user who is authenticated externally:
create user scott identified externally; -- or use alter user instead of create user.
the important keyword is identified externally.

the alter user statement also accepts the default tablespace, temporary tablespace,
and quota clauses, which cover requirements 2 and 3, and a profile clause to assign
the clerk profile, so no separate grant is needed. therefore answer c is correct.

98. you create a new table named departments by issuing this statement:

create table departments(
department_id number(4),
department_name varchar2(30),
manager_id number(6),
location_id number(4))
storage(initial 200k next 200k
pctincrease 0 minextents 1 maxextents 5);

you realize that you failed to specify a tablespace for the table. you issue these
queries:

<font face="courier"><p align="left">select username, default_tablespace,


temporary tablespace from user_users;</p></font>
<table border cellspacing="0" cellpadding="7" width="527">
<tr>
<td width="22%" valign="top" height="19">
<p align="left"><font face="courier"><b>username </b></font></td>
<td width="39%" valign="top" height="19">
<p align="left"><font face="courier"><b>default_tablespace </b></font></td>
<td width="39%" valign="top" height="19">
<p align="left"><font face="courier"><b>temporary_tablespace </b></font>
</td>
</tr>
<tr>
<td width="22%" valign="bottom" height="18">
<p align="left"><font face="courier">hr </font></td>
<td width="39%" valign="bottom" height="18">
<p align="left"><font face="courier">sample </font></td>
<td width="39%" valign="bottom" height="18">
<p align="left"><font face="courier">temp </font></td>
</tr>
</table><br />
<font face="courier"><p align="left">select * from user_ts_quotas;</p></font><br
/>
<font face="courier">
<table border cellspacing="0" cellpadding="7" width="561" height="74">
<tr>
<td width="28%" valign="top" height="21">
<p align="left"><b>tablespace_name </b></td>
<td width="18%" valign="top" height="21">
<p align="left"><b>bytes </b></td>
<td width="19%" valign="top" height="21">
<p align="left"><b>max_bytes </b></td>
<td width="15%" valign="top" height="21">
<p align="left"><b>blocks </b></td>
<td width="21%" valign="top" height="21">
<p align="left"><b>max_blocks </b></td>
</tr>
<tr>
<td width="28%" valign="middle" height="1">
<p align="left">sample </td>
<td width="18%" valign="middle" height="1">
<p align="left">28311552 </td>
<td width="19%" valign="middle" height="1">
<p align="left">-1 </td>
<td width="15%" valign="middle" height="1">
<p align="left">6912 </td>
<td width="21%" valign="middle" height="1">
<p align="left">-1 </td>
</tr>
<tr>
<td width="28%" valign="bottom" height="21">
<p align="left">indx </td>
<td width="18%" valign="bottom" height="21">
<p align="left">0 </td>
<td width="19%" valign="bottom" height="21">
<p align="left">-1 </td>
<td width="15%" valign="bottom" height="21">
<p align="left">0 </td>
<td width="21%" valign="bottom" height="21">
<p align="left">-1 </td>
</tr>
</table></font><br />

in which tablespace was your new departments table created?

a. temp
b. system
c. sample
d. user_data
answer: c

explanation:
the default tablespace clause of the create user statement names the location where
the user's database objects will be created by default. this clause plays an
important role in protecting the integrity of the system tablespace. if no default
tablespace is named for the user, objects that the user creates may be placed in
the system tablespace. recall that system contains many database objects, such as
the data dictionary and the system rollback segment, that are critical to database
use. users should not be allowed to create their database objects in the system
tablespace.
since hr's default tablespace is sample, the departments table is created in the
sample tablespace.

incorrect answers:

a: temp tablespace is set as temporary tablespace for the user, so it will not be
used to store the departments table. the default tablespace sample will be used
for this purpose.
b: user have sample as default tablespace, so it will be used, not system
tablespace, to store the departments table.
d: user_data is not defined as the default tablespace for the user, so it will not
be used to store the departments table.

99. you should back up the control file when which two commands are executed?
(choose two.)

a. create user
b. create table
c. create index
d. create tablespace
e. alter tablespace <tablespace name> add datafile

answer: d, e

explanation:
back up control files
it is very important that you back up your control files. this is true initially,
and at any time after you change the physical structure of your database. such
structural changes include:
(a) adding, dropping, or renaming datafiles.
(b) adding or dropping a tablespace, or altering the read-write state of the
tablespace.
(c) adding or dropping redo log files or groups.
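
after such a structural change, the control file can be backed up immediately, for
example (the backup path is hypothetical):

alter database backup controlfile to '/u01/backup/db01/control.bkp';
alter database backup controlfile to trace;   -- writes a create controlfile script to a trace file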

100. you have two undo tablespaces defined for your database. the instance is
currently using the undo tablespace named undotbs_1. you issue this command to
switch to undotbs_2 while there are still transactions using undotbs_1:

alter system set undo_tablespace = undotbs_2

which two results occur? (choose two.)

a. new transactions are assigned to undotbs_2.


b. current transactions are switched to the undotbs_2 tablespace.
c. the switch to undotbs_2 fails and an error message is returned.
d. the undotbs_1 undo tablespace enters into a pending offline mode (status).
e. the switch to undotbs_2 does not take place until all transactions in undotbs_1
are completed.

answer: a, d

explanation:
see http://download-west.oracle.com/docs/cd/b10501_01/server.920/a96521/undo.htm#9117:
switching undo tablespaces
you can switch from using one undo tablespace to another. because the
undo_tablespace initialization parameter is a dynamic parameter, the alter system
set statement can be used to assign a new undo tablespace.
the database is online while the switch operation is performed, and user
transactions can be executed while this command is being executed. when the switch
operation completes successfully, all transactions started after the switch
operation began are assigned to transaction tables in the new undo tablespace.
the switch operation does not wait for transactions in the old undo tablespace to
commit. if there are any pending transactions in the old undo tablespace, the old
undo tablespace enters into a pending offline mode (status). in this mode,
existing transactions can continue to execute, but undo records for new user
transactions cannot be stored in this undo tablespace.
an undo tablespace can exist in this pending offline mode, even after the switch
operation completes successfully. a pending offline undo tablespace cannot be used by
another instance, nor can it be dropped. eventually, after all active transactions
have committed, the undo tablespace automatically goes from the pending offline
mode to the offline mode. from then on, the undo tablespace is available for other
instances (in an oracle real application cluster environment).

101. which two statements grant an object privilege to the user smith? (choose
two.)

a. grant create table to smith;


b. grant create any table to smith;
c. grant create database link to smith;
d. grant alter rollback segment to smith;
e. grant all on scott.salary_view to smith;
f. grant create public database link to smith;
g. grant all on scott.salary_view to smith with grant option;

answer: e, g

explanation:
the object privileges are: alter, delete, execute, index, insert, references,
select, and update. only answers e and g grant privileges on a specific object
(scott.salary_view); the other options grant system privileges.

ad d: the alter object privilege permits the grantee to alter the definition of a
table or sequence only. the alter privilege on all other database objects is
considered a system privilege.

102. which memory structure contains the information used by the server process to
validate the user privileges?

a. buffer cache
b. library cache
c. data dictionary cache
d. redo log buffer cache

answer: c

explanation:
ad a: false. the database buffer cache is the portion of the sga that holds copies
of data blocks read from datafiles. all user processes concurrently connected to
the instance share access to the database buffer cache. see (a58227.pdf) pg 155.
(6-3).
ad b: false. the library cache includes the shared sql areas, private sql areas,
pl/sql procedures and packages, and control structures such as locks and library
cache handles.
ad c: true. one of the most important parts of an oracle database is its data
dictionary, which is a read-only set of tables that provides information about its
associated database. a data dictionary contains:
(a) the definitions of all schema objects in the database (tables, views, indexes,
clusters, synonyms, sequences, procedures, functions, packages, triggers, and so
on).
(b) how much space has been allocated for, and is currently used by, the schema
objects.
(c) default values for columns.
(d) integrity constraint information.
(e) the names of oracle users.
(f) privileges and roles each user has been granted.
(g) auditing information, such as who has accessed or updated various schema
objects.
(h) in trusted oracle, the labels of all schema objects and users (see your
trusted oracle documentation).
(i) other general database information.
see (a58227.pdf) pg. 134. (4-2).
ad d: false. the information in a redo log file is used only to recover the
database from a system or media failure that prevents database data from being
written to a database's datafiles. see (a58227.pdf) pg. 46. (1-12)

103. examine the tablespace requirements for a new database.

tablespace   purpose             size
----------   -----------------   -----
app_data     application data    1 gig
app_ndx      application index   500m
system       system data         300m
temp         temporary data      100m
undotbs      undo data           150m
users        user data           100m

which three tablespaces can be created in the create database statement? (choose
three.)

a. temp
b. users
c. system
d. app_ndx
e. undotbs
f. app_data

answer: a, c, e

explanation:
you can create the default system, temp, and undotbs tablespaces in the create
database statement. non-default tablespaces, such as users, app_ndx and app_data,
can be created later with the create tablespace command.

incorrect answers:

b: the users tablespace can be created later with the create tablespace command.
d: it is not possible to create the non-default app_ndx tablespace with the create
database command.
f: the app_data tablespace can be created later with the create tablespace command.

104. examine these statements:

1) mount mounts the database for certain dba activities but does not provide user
access to the database.
2) the nomount command creates only the data buffer but does not provide access to
the database.
3) the open command enables users to access the database.
4) the startup command starts an instance.

which option correctly describes whether some or all of the statements are true or
false?

a. 2 and 3 are true


b. 1 and 3 are true
c. 1 is true, 4 is false
d. 1 is false, 4 is true
e. 1 is false, 3 is true
f. 2 is false, 4 is false

answer: b
explanation:
(1) is true:
mounted database: a database associated with an oracle instance. the database can
be opened or closed. a database must be both mounted and opened to be accessed by
users. a database that has been mounted but not opened can be accessed by dbas for
some maintenance purposes. see oracle8(tm) enterprise edition getting started
release 8.0.5 for windows nt june 19, 1998 part no. a64416-01 pg. 446.
(2) is false:
after selecting the startup nomount, the instance starts. at this point, there is
no database. only an sga (system global area is a shared memory region that
contains data and control information for one oracle instance) and background
processes are started in preparation for the creation of a new database. see
oracle8 administrator's guide release 8.0 december, 1997 part no. a58397-01 pg.
60. (a58397.pdf).
(3) is true:
opening a mounted database makes it available for normal database operations. any
valid user can connect to an open database and access its information. when you
open the database, oracle opens the online datafiles and online redo log files. if
a tablespace was offline when the database was previously shut down, the
tablespace and its corresponding datafiles will still be offline when you reopen
the database. if any of the datafiles or redo log files are not present when you
attempt to open the database, oracle returns an error. see oracle8 concepts
release 8.0 december, 1997 part no. a58227-01 pg. 149. (a58227.pdf).
(4) is true:
startup: purpose start an oracle instance with several options, including mounting
and opening a database. prerequisites you must be connected to a database as
internal, sysoper, or sysdba. you cannot be connected via a multi-threaded server.
see oracle (r) enterprise manager administrator's guide release 1.6.0 june, 1998
part no. a63731-01 (oemug.pdf) pg. 503. (b-31).

105. a dba has issued the following sql statement:

select max_blocks
from dba_ts_quotas
where tablespace_name='user_tbs'
and username='jenny';

user jenny has unlimited quota on the user_tbs tablespace. which value will the
query return?

a. 0
b. 1
c. -1
d. null
e. 'unlimited'

answer: c

explanation:
ad a: false. value -1, not 0, shows that user jenny has unlimited quota on the
user_tbs tablespace.
ad b: false. value -1, not 1, shows that user jenny has unlimited quota on the
user_tbs tablespace.
ad c: true. a value of -1 in max_bytes or max_blocks means that the user has an
unlimited space quota for the tablespace.
ad d: false. value null can be used to set the quota on the tablespace.
ad e: false. quota value must be numeric. it cannot be defined as string.

oca oracle 9i associate dba certification exam guide, jason couchman, p. 815-817,
chapter 15: managing database users

106. which two statements are true about rebuilding an index? (choose two.)

a. the resulting index may contain deleted entries.


b. a new index is built using an existing index as the data source.
c. queries cannot use the existing index while the new index is being built.
d. during a rebuild, sufficient space is needed to accommodate both the old and
the new index in their respective tablespaces.

answer: b, d

explanation:
(a) false. the resulting index will not contain deleted entries. it's the main
reason to rebuild the index.
(b) true. you can create an index using an existing index as the data source.
creating an index in this manner allows you to change storage characteristics or
move to a new tablespace. re-creating an index based on an existing data source
also removes intra-block fragmentation. in fact, compared to dropping the index
and using the create index command, re-creating an existing index offers better
performance. (58246.pdf) pg. 178. (10-10).
(c) false. a further advantage of this approach is that the old index is still
available for queries (58246.pdf) pg. 178. (10-10).
(d) true.

107. consider this sql statement:

update employees set first_name = 'john'
where emp_id = 1009;
commit;

what happens when a user issues the commit in the above sql statement?

a. dirty buffers in the database buffer cache are flushed.


b. the server process places the commit record in the redo log buffer.
c. log writer (lgwr) writes the redo log buffer entries to the redo log files and
data files.
d. the user process notifies the server process that the transaction is complete.
e. the user process notifies the server process that the resource locks can be
released.

answer: e

testking said b.

the question is posed very ambiguously: it is not clear whether it asks what
happens first or what the commit consists of overall.

explanation:
see ocp oracle 9i database: fundamentals i, p. 19.:
what exactly does processing a commit statement consist of?
(1) release table/row locks acquired by transaction.
(2) release undo segement locks acquired by transaction.
(3) generate redo for commited transaction.

108. a new user, psmith, has just joined the organization. you need to create
psmith as a valid user in the database. you have the following requirements:

1. create a user who is authenticated externally.
2. make sure the user has connect and resource privileges.
3. make sure the user does not have drop table and create user privileges.
4. set a quota of 100 mb on the default tablespace and 500 k on the temporary
tablespace.
5. assign the user to the data_ts default tablespace and the temp_ts temporary
tablespace.

which statement would you use to create the user?

a.
create user psmith
identified externally
default tablespace data_ts
quota 100m on data_ts
quota 500k on temp_ts
temporary tablespace temp_ts;
revoke drop_table, create_user from psmith;
b.
create user psmith
identified externally
default tablespace data_ts
quota 500k on temp_ts
quota 100m on data_ts
temporary tablespace temp_ts;
grant connect, resource to psmith;
c.
create user psmith
identified externally
default tablespace data_ts
quota 100m on data_ts
quota 500k on temp_ts
temporary tablespace temp_ts;
grant connect to psmith;
d.
create user psmith
identified globally as ''
default tablespace data_ts
quota 500k on temp_ts
quota 100m on data_ts
temporary tablespace temp_ts;
grant connect, resource to psmith;
revoke drop_table, create_user from psmith;

answer: b

explanation:
(d) is false, because the requirement is for a user identified by the operating
system, whereas identified globally as 'external_name' means the user is
authenticated by the oracle security service.
(a) and (c) are false because they do not grant both the connect and resource roles.

create user:
purpose to create a database user, or an account through which you can log in to
the database and establish the means by which oracle permits access by the user.
you can assign the following optional properties to the user:
(a) default tablespace.
(b) temporary tablespace.
(c) quotas for allocating space in tablespaces.
(d) profile containing resource limits.

prerequisites: you must have create user system privilege.


see oracle8(tm) sql reference release 8.0 december 1997 part no. a58225-01 pg.
541. (4-357) (a58225.pdf)

109. you are logged on to a client. you do not have a secure connection from your
client to the host where your oracle database is running. which authentication
mechanism allows you to connect to the database using the sysdba privilege?

a. control file authentication


b. password file authentication
c. data dictionary authentication
d. operating system authentication

answer: b

explanation:
local database administration:
do you want to use os authentication?
yes: use os authentication.
no: use a password file.

remote database administration:


do you have a secure connection?
no: use a password file.
yes: do you want to use os authentication?
yes: use os authentication.
no: use a password file.

see: oracle8 administrator's guide release 8.0 december, 1997 part no. a58397-01
pg. 37. (a58397.pdf)
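
a rough sketch of setting up password file authentication (the file location,
password, user name, and connect string are assumptions):

-- at the operating system prompt, create the password file with orapwd, e.g.:
--   orapwd file=$ORACLE_HOME/dbs/orapwPROD password=secret entries=5
-- and set remote_login_passwordfile = exclusive in the parameter file

grant sysdba to dba_user;

connect dba_user/secret@prod as sysdba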

110. which type of file is part of the oracle database?

a. control file
b. password file
c. parameter files
d. archived log files

answer: a

explanation:
control file is an administrative file required to start and run the database. the
control file records the physical structure of the database. for example, a
control file contains the database name, and the names and locations of the
database's data files and redo log files. see: oracle8(tm) enterprise edition
getting started release 8.0.5 for windows nt june 19, 1998 part no. a64416-01
(a55928.pdf) pg. 109. (5-9).

111. you issue these queries to obtain information about the regions table:

select segment_name, tablespace_name
from user_segments
where segment_name = 'regions';

segment_name    tablespace_name
------------    ---------------
regions         sample

select constraint_name, constraint_type
from user_constraints
where table_name = 'regions';

constraint_name    c
---------------    -
region_id_nn       c
reg_id             p

select index_name
from user_indexes
where table_name = 'regions';

index_name
----------
reg_id_pk

you then issue this command to move the regions table:

alter table regions
move tablespace user_data;

what else must you do to complete the move of the regions table?

a. you must rebuild the reg_id_pk index.


b. you must re-create the region_id_nn and reg_id_pk constraints.
c. you must drop the regions table that is in the sample tablespace.
d. you must grant all privileges that were on the regions table in the sample
tablespace to the regions table in the user_data tablespace.

answer: a

explanation:
each table's data is stored in its own data segment, while each index's data is
stored in its own index segment. moving the table changes the rowids of its rows,
which leaves the index unusable, so the reg_id_pk index must be rebuilt after the move.
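
using the names from the question, the rebuild could look like this:

alter table regions
move tablespace user_data;

-- the move invalidates the index, so rebuild it (optionally into another tablespace)
alter index reg_id_pk rebuild;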

112. you query dba_constraints to obtain constraint information on the
hr.employees table:

select constraint_name, constraint_type,
deferrable, deferred, validated
from dba_constraints
where owner = 'hr' and table_name = 'employees';
constraint_name     c   deferrable       deferred    validated
emp_dept_fk         r   not deferrable   immediate   validated
emp_email_nv        c   not deferrable   immediate   validated
emp_email_uk        u   not deferrable   immediate   validated
emp_emp_id_pk       p   not deferrable   immediate   validated
emp_hire_date_nn    c   not deferrable   immediate   validated
emp_job_fk          r   not deferrable   immediate   validated
emp_job_nn          c   deferrable       deferred    not validated
emp_last_name_nn    c   not deferrable   immediate   validated
emp_manager_fk      r   not deferrable   immediate   validated
emp_salary_min      c   not deferrable   immediate   validated

which type of constraint is emp_job_nn?

a. check
b. unique
c. not null
d. primary key
e. foreign key

answer: c

explanation:
see ocp oracle 9i database: fundamentals i, p. 313.:
constraint_type displays p for a primary key, r for a foreign key (referential
integrity constraint), c for check constraints (including the checks that enforce
not null), and u for unique constraints.
because emp_job_nn is listed with type c, only options a and c remain. given the
constraint name, emp_job_nn, option c is the better choice: the nn suffix
conventionally stands for not null, which oracle implements internally as a check
constraint.
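
one way to confirm this, assuming the constraint was indeed created as a not null
constraint on a column such as job_id, is to look at its search condition:

select constraint_name, search_condition
from dba_constraints
where owner = 'HR'
and table_name = 'EMPLOYEES'
and constraint_name = 'EMP_JOB_NN';
-- a not null constraint typically shows a search condition like "JOB_ID" IS NOT NULL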

113. temporary tablespaces should be locally managed and the uniform size should
be a multiple of the ________.

a. db_block_size
b. db_cache_size
c. sort_area_size
d. operating system block size

answer: c

explanation:
http://www.interealm.com/technotes/roby/temp_ts.html

temporary tablespace considerations


by roby sherman (http://www.interealm.com/roby/)

today, depending on your rdbms version, oracle offers three varieties of temporary
tablespaces to choose from. these spaces are used for disk based sorts, large
index rebuilds, global temporary tables, etc. to ensure that your disk-based
sorting is optimal, it is critical to understand the different types, caveats, and
benefits of these temporary tablespace options:
(a) permanent tablespaces with temporary segments.
(b) tablespaces of type "temporary".
(c) temporary tablespaces.
permanent tablespaces with temporary segments
this option has been available since oracle 7.3 and is the least efficient for
disk-based sorting. in this type of configuration, temporary (sort) extents are
allocated within a permanent tablespace. compared to other temp tablespace
choices, the performance and operation of this disk-sort option suffers in the
areas of:
extent management: the st-enqueue (and subsequent recursive dictionary sql) is
used for the allocation and de-allocation of extents allotted to each sort
segment.
sort segment reuse: each process performing a disk sort creates then drops a
private sort segment. this adds additional overhead to the sorting process.
extent reuse: because of the "private sort segment" policy used in this tablespace
option, there is no ability for disk-based sorts to re-use extents that are no
longer active.

tablespaces of type "temporary"


this disk-sorting option was introduced in oracle 8.0 as a way to provide a more
dedicated facility for disk-based sorting while reducing some amount of resources
and i/o associated with extent management. it is created by invoking the create
tablespace xyz... temporary; sql clause.
sorts assigned to a tablespace of type temporary use a single sort segment
(multiple segments in an ops environment) that is only dropped at instance start
up and is created during the first disk-based sort.
sorts using this type of tablespace have the ability to reuse extents that are no
longer active. this added level of reuse reduces the amount of resources necessary
to manage individual segments and allocate/deallocate extents.
although extent allocation and de-allocation is reduced in this type of
tablespace, the st-enqueue (and the subsequent dictionary-generated recursive sql)
is still required when these activities occur. since this type of tablespace
cannot be configured with local extent management, there is no easy way to bypass
this performance degradation.

temporary tablespaces
this new class of temporary tablespace was introduced in oracle 8i and provides
the most robust and efficient means of disk-based sorting in oracle today.
temporary tablespaces are created using the sql syntax create temporary
tablespace xyz tempfile .... there are a number of performance benefits of this
tablespace option over permanent tablespaces and tablespaces of type temporary in
the areas of:
extent management: extents in this tablespace are allocated via a locally managed
bitmap, so use of the st-enqueue and recursive sql for this activity is
eliminated.
segment reuse: sorts assigned to this type of tablespace use a single sort
segment (multiple segments in an ops environment) that is only dropped at instance
start up and is created during the first disk-based sort.
extent reuse: sorts using this type of tablespace have the ability to reuse
extents that are no longer active. this added level of reuse reduces the amount of
resources necessary to manage individual segments and allocate/deallocate
extents.

note: if the extent management clause is not specified for temporary tablespaces,
the database will automatically set the tablespace with a uniform extent size of 1
mb.

which do i choose?
whether you are on an existing database application migrating to a newer oracle
version or a new application in the initial development phase, for optimal
performance you should use the most recent temporary tablespace option available
to your database version:
- for oracle versions 7.3.4 and below, use permanent tablespaces with temporary
segments
- for oracle versions 8.0.3 - 8.0.6 use tablespaces of type "temporary"
- for oracle versions 8.1.5 - 9.x use temporary tablespaces

selecting the right extent size


regardless of the type of temporary tablespace you use, you should ensure that the
extent sizes selected for the space do not impede system performance. in
dictionary-managed temporary tablespaces, the initial and next extent sizes should
be a multiple of sort_area_size and hash_area_size. pctincrease should be set to
0. in locally-managed temp tablespaces, the uniform extent size should be a
multiple of sort_area_size and hash_area_size.
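
a minimal sketch, assuming sort_area_size is 1 mb (the file name and sizes are
illustrative):

create temporary tablespace temp_ts
tempfile '/u01/oradata/db01/temp_ts01.dbf' size 500m
extent management local uniform size 2m;  -- a multiple of the 1 mb sort_area_size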

114. your database is in archivelog mode.

which two must be true before the log writer (lgwr) can reuse a filled online redo
log file? (choose two).

a. the redo log file must be archived.


b. all of the data files must be backed up.
c. all transactions with entries in the redo log file must complete.
d. the data files belonging to the system tablespace must be backed up.
e. the changes recorded in the redo log file must be written to the data files.

answer: a, e

explanation:
archivelog: the filled online redo log files are archived before they are reused
in the cycle.
noarchivelog: the filled online redo log files are not archived.
(a58227.pdf) pg. 72. (1-38).
when you run a database in archivelog mode, the archiving of the online redo log
is enabled. information in a database control file indicates that a group of
filled online redo log files cannot be used by lgwr until the group is archived (a
true). a filled group is immediately available to the process performing the
archiving after a log switch occurs (when a group becomes inactive). the process
performing the archiving does not have to wait for the checkpoint of a log switch
to complete before it can access the inactive group for archiving (c false).

see: oracle8 administrator's guide release 8.0 december, 1997 part no. a58397-01
(a58397.pdf) pg. 454. (23-2)

115. which two statements are true about the control file? (choose two.)

a. the control file can be multiplexed up to eight times.


b. the control file is opened and read at the nomount stage of startup.
c. the control file is a text file that defines the current state of the physical
database.
d. the control file maintains the integrity of the database, therefore loss of the
control file requires database recovery.

answer: a, d

explanation:
ad a: true. control_files indicates one or more names of control files separated
by commas. the instance startup procedure recognizes and opens all the listed
files. the instance maintains all listed control files during database operation.
see: oracle8 administrator's guide release 8.0 december, 1997 part no. a58397-01
(a58397.pdf) pg. 126. (6-2).
ad b: false. after mounting the database, the instance finds the database control
files and opens them. (control files are specified in the control_files
initialization parameter in the parameter file used to start the instance.) oracle
then reads the control files to get the names of the database's datafiles and redo
log files. (a58227.pdf) pg. 148. (5-6).
ad c: false. the control file of a database is a small binary file necessary for
the database to start and operate successfully. a control file is updated
continuously by oracle during database use, so it must be available for writing
whenever the database is open. if for some reason the control file is not
accessible, the database will not function properly. (a58227.pdf) pg. 693. (28-
19).
ad d: true. see previous.

116. which two methods enforce resource limits? (choose two.)

a. alter system set resource_limit= true


b. set the resource_limit parameter to true
c. create profile sessions limit<br />
sessions_per_user 2<br />
cpu_per_session 10000<br />
idle_time 60<br />
connect_time 480;
d. alter profile sessions limit<br />
sessions_per_user 2<br />
cpu_per_session 10000<br />
idle_time 60<br />
connect_time 480;

answer: a, b

explanation:
resource limitation can be enabled or disabled by the resource_limit
initialization parameter in the database's initialization parameter file. valid
values for the parameter are true (enables enforcement) and false. by default,
this parameter's value is set to false. once the initialization parameter file has
been edited, the database instance must be restarted to take effect. every time an
instance is started, the new parameter value enables or disables the enforcement
of resource limitation.

if the resource limitation feature must be altered temporarily, you can enable or
disable the enforcement of resource limitation using the sql statement alter
system. after an instance is started, an alter system statement overrides the
value set by the resource_limit initialization parameter.
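
for example, enforcement can be switched on for the running instance without a
restart:

alter system set resource_limit = true;

-- or made the default by setting resource_limit = true in the parameter file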

117. the server parameter file (spfile) provides which three advantages when
managing initialization parameters? (choose three.)

a. the oracle server maintains the server parameter file.


b. the server parameter file is created automatically when the instance is
started.
c. changes can be made in memory and/or in the spfile with the alter system
command.
d. the use of spfile provides the ability to make changes persistent across shut
down and start up.
e. the oracle server keeps the server parameter file and the text initialization
parameter file synchronized.

answer: a, c, d

testking said b, c, d.

explanation:
sources:
http://download-
west.oracle.com/docs/cd/b10501_01/rac.920/a96596/glossary.htm#436831.
see ocp oracle 9i database: fundamentals i, p. 70/71.

ad a: true. a server parameter file (spfile) can be thought of as a repository for
initialization parameters that is maintained on the machine where the oracle
database server executes. it is, by design, a server-side initialization parameter
file (http://download-
west.oracle.com/docs/cd/b10501_01/server.920/a96521/create.htm#999226).

ad b: false. the server parameter file must initially be created from a
traditional text initialization parameter file. it must be created prior to its
use in the startup command. the create spfile statement is used to create a server
parameter file (http://download-
west.oracle.com/docs/cd/b10501_01/server.920/a96521/create.htm#1012659).

ad c: true. use the set clause of the alter system statement to set or change
initialization parameter values. additionally, the scope clause specifies the
scope of a change as described in the following table:
(1) scope = spfile: the change is applied in the server parameter file only. the
effect is as follows:
(a) for dynamic parameters, the change is effective at the next startup and is
persistent.
(b) for static parameters, the behavior is the same as for dynamic parameters.
this is the only scope specification allowed for static parameters.
(2) scope = memory: the change is applied in memory only. the effect is as
follows:
(a) for dynamic parameters, the effect is immediate, but it is not persistent
because the server parameter file is not updated.
(b) for static parameters, this specification is not allowed.
(3) scope = both: the change is applied in both the server parameter file and
memory. the effect is as follows:
(a) for dynamic parameters, the effect is immediate and persistent.
(b) for static parameters, this specification is not allowed.

ad d: true. if you are using a server parameter file, initialization parameter
file changes made by the alter system statement can persist across shutdown and
startup. this is discussed in "managing initialization parameters using a server
parameter file" (http://download-
west.oracle.com/docs/cd/b10501_01/server.920/a96521/create.htm#999226).

ad e: false. if you are using a traditional text initialization parameter file,
your changes are only for the current instance. to make them permanent, you must
update them manually in the initialization parameter file, otherwise they will be
lost over the next shutdown and startup of the database (http://download-
west.oracle.com/docs/cd/b10501_01/server.920/a96521/create.htm#999226).
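
a short sketch of these points (the parameters chosen are only examples):

-- create the spfile once from an existing text parameter file
create spfile from pfile;

-- change a dynamic parameter in memory and in the spfile so it persists
alter system set undo_retention = 900 scope = both;

-- change a static parameter; only scope = spfile is allowed
alter system set processes = 300 scope = spfile;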

118. you examine the alert log file and notice that errors are being generated
from a sql*plus session. which files are best for providing you with more
information about the nature of the problem?
a. control file
b. user trace files
c. background trace files
d. initialization parameter files

answer: b

explanation:
ad a: false the control file of a database is a small binary file necessary for
the database to start and operate successfully. a control file is updated
continuously by oracle during database use, so it must be available for writing
whenever the database is open. if for some reason the control file is not
accessible, the database will not function properly. (a58227.pdf) pg. 693. (28-
19).
a trace file is created each time an oracle instance starts or an unexpected event
occurs in a user process or background process. the name of the trace file
includes the instance name, the process name, and the oracle process number. the
file extension or file type is usually trc, and, if different, is noted in your
operating system-specific oracle documentation. the contents of the trace file may
include dumps of the system global area, process global area, supervisor stack,
and registers. two initialization parameters specify where the trace files are
stored:
ad b: true. user_dump_dest specifies the location for trace files created by user
processes such as sql*dba, sql*plus, or pro*c, so the user trace files describe
what went wrong in the sql*plus session.
ad c: false. background_dump_dest specifies the location for trace files created
by the oracle background processes pmon, dbwr, lgwr, and smon.
see: oracle8(tm) error messages release 8.0.4 december 1997 part no. a58312-01
(a58312.pdf) pg. 27. (1-5).
ad d: false parameter file contains initialization parameters. these parameters
specify the name of the database, the amount of memory to allocate, the names of
control files, and various limits and other system parameters. (a58227.pdf) pg.
61. (1-27)
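
to find the user trace files for the failing session, you could check where they
are written:

select name, value
from v$parameter
where name = 'user_dump_dest';
-- the trace files generated by the sql*plus session are placed in this directory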

119. you can use the database configuration assistant to create a template using
an existing database structure.

which three will be included in this template? (choose three.)

a. data files
b. tablespaces
c. user defined schemas
d. user defined schema data
e. initialization parameters

answer: a, b, e

we have to suppose that the type of the template is "non-seed".

explanation:
http://download-
west.oracle.com/docs/cd/b10501_01/server.920/a96521/create.htm#1026131.
creating templates using dbca
from an existing template: using an existing template, you can create a new
template based on the pre-defined template settings. you can add or change any
template settings such as initialization parameters, storage parameters, or use
custom scripts.
from an existing database (structure only): you can create a new template that
contains structural information about an existing database, including database
options, tablespaces, datafiles, and initialization parameters specified in the
source database. user defined schema and their data will not be part of the
created template. the source database can be either local or remote.
from an existing database (structure as well as data--a seed database): you can
create a new template that has both the structural information and physical
datafiles of an existing database. databases created using such a template are
identical to the source database. user defined schema and their data will be part
of the created template. the source database must be local.

120. the users pward and psmith have left the company. you no longer want them to
have access to the database. you need to make sure that the objects they created
in the database remain. what do you need to do?

a. revoke the create session privilege from the user.


b. drop the user from the database with the cascade option.
c. delete the users and revoke the create session privilege.
d. delete the users by using the drop user command from the database.

answer: a

explanation:
ad a: true. revoking the create session privilege prevents the users from
connecting to the database, while the objects they created remain in place.
ad b: false. if the user's schema contains any schema objects, use the cascade option to
drop the user and all associated objects and foreign keys that depend on the
tables of the user successfully. if you do not specify cascade and the user's
schema contains objects, an error message is returned and the user is not dropped.
before dropping a user whose schema contains objects, thoroughly investigate which
objects the user's schema contains and the implications of dropping them before
the user is dropped. pay attention to any unknown cascading effects. for example,
if you intend to drop a user who owns a table, check whether any views or
procedures depend on that particular table. see: oracle8 administrator's guide
release 8.0 december, 1997 part no. a58397-01 (a58397.pdf) pg. 385. (20-17).
ad c: false. once the users have been dropped there is no account left to revoke a
privilege from, and dropping them would also remove their objects.
ad d: false. when a user is dropped, the user and associated schema is removed from the
data dictionary and all schema objects contained in the user's schema, if any, are
immediately dropped. see: oracle8 administrator's guide release 8.0 december, 1997
part no. a58397-01 (a58397.pdf) pg. 385. (20-17).
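
a minimal sketch for the two accounts in the question:

revoke create session from pward, psmith;
-- the users can no longer connect, but their schemas and objects remain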

121. you need to create an index on the customer_id column of the customers table.
the index has these requirements:

1. the index will be called cust_pk.
2. the index should be sorted in ascending order.
3. the index should be created in the index01 tablespace, which is a dictionary
managed tablespace.
4. all extents of the index should be 1 mb in size.
5. the index should be unique.
6. no redo information should be generated when the index is created.
7. 20% of each data block should be left free for future index entries.

which command creates the index and meets all the requirements?

a.
create unique index cust_pk on customers(customer_id)
tablespace index01
pctfree 20
storage (initial 1m next 1m pctincrease 0);
b.
create unique index cust_pk on customers(customer_id)
tablespace index01
pctfree 20
storage (initial 1m next 1m pctincrease 0)
nologging;
c.
create unique index cust_pk on customers(customer_id)
tablespace index01
pctused 80
storage (initial 1m next 1m pctincrease 0)
nologging;
d.
create unique index cust_pk on customers(customer_id)
tablespace index01
pctused 80
storage (initial 1m next 1m pctincrease 0);

answer: b

explanation:
pctfree is the percentage of space to leave free for updates and insertions within
each of the index's data blocks.
tablespace is the name of the tablespace to hold the index or index partition. if
you omit this option, oracle creates the index in the default tablespace of the
owner of the schema containing the index.
logging / nologging specifies that the creation of the index will be logged
(logging) or not logged (nologging) in the redo log file.
storage pctincrease specifies the percent by which the third and subsequent
extents grow over the preceding extent. the default value is 50, meaning that each
subsequent extent is 50% larger than the preceding extent.
next specifies the size in bytes of the next extent to be allocated to the object.
you can use k or m to specify the size in kilobytes or megabytes.
initial specifies the size in bytes of the object's first extent. oracle allocates
space for this extent when you create the schema object. you can use k or m to
specify this size in kilobytes or megabytes.
asc / desc are allowed for db2 syntax compatibility, although indexes are always
created in ascending order.
(a58225.pdf) pg. 421. (4-237).

ad a: false, nologging missing.


ad b: true.
ad c: false. we need pctfree, not pctused; pctused is not a valid parameter for an
index, and for tables the sum of pctfree and pctused cannot exceed 100.
ad d: false. nologging is missing, and we need pctfree, not pctused.

122. john has issued the following sql statement to create a new user account:

create user john
identified by john
temporary tablespace temp_tbs
quota 1m on system
quota unlimited on data_tbs
profile apps_profile
password expire
default role apps_dev_role;

why does the above statement return an error?


a. you cannot assign a role to a user within a create user statement.
b. you cannot explicitly grant quota on the system tablespace to a user.
c. you cannot assign a profile to a user within a create user statement.
d. you cannot specify password expire clause within a create user statement.
e. you cannot grant unlimited quota to a user within a create user statement.

answer: a

explanation:
it is not possible to assign a role to a user within a create user statement; the
default role clause belongs to alter user. grant the role with grant role_name to
user_name and then set it as the default role.
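
a sketch of the corrected approach, keeping the remaining clauses from the
question:

create user john
identified by john
temporary tablespace temp_tbs
quota 1m on system
quota unlimited on data_tbs
profile apps_profile
password expire;

grant apps_dev_role to john;
alter user john default role apps_dev_role;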

123. which two actions cause a log switch? (choose two.)

a. a transaction completes.
b. the instance is started.
c. the instance is shut down
d. the current online redo log group is filled
e. the alter system switch logfile command is issued.

answer: d, e

explanation:
a log switch, by default, takes place automatically when the current online redo
log file group fills. see: oracle8 administrator's guide release 8.0 december,
1997 part no. a58397-01 (a58397.pdf) pg. 118. (5-10).
to force a log switch, you must have the alter system privilege. to force a log
switch, use either the switch logfile menu item of enterprise manager or the sql
command alter system with the switch logfile option. the following statement
forces a log switch: alter system switch logfile; see: oracle8 administrator's
guide release 8.0 december, 1997 part no. a58397-01 (a58397.pdf) pg. 121. (5-13)

124. evaluate the sql command:

create temporary tablespace temp_tbs
tempfile '/usr/oracle9i/orahomel/temp_data.dbf'
size 2m
autoextend on;

which two statements are true about the temp_tbs tablespace? (choose two.)

a. temp_tbs has locally managed extents.


b. temp_tbs has dictionary managed extents.
c. you can rename the tempfile temp_data.dbf.
d. you can add a tempfile to the temp_tbs tablespace.
e. you can explicitly create objects in the temp_tbs tablespace.

answer: a, d

testking said b, d.

explanation:
ad a: true. use the create temporary tablespace statement to create a locally
managed temporary tablespace, which is an allocation of space in the database that
can contain schema objects for the duration of a session. if you subsequently
assign this temporary tablespace to a particular user, then oracle will also use
this tablespace for sorting operations in transactions initiated by that user.
(a96540.pdf) pg. 1258. (15-92)
ad b: false. because of the previous. starting with oracle 9i, oracle creates non-
system tablespaces as locally managed by default. see ocp oracle 9i database:
fundamentals i, p. 153.
ad c: false. a tempfile cannot be renamed.
ad d: true. additional tempfiles can be added to the tablespace.
ad e: false. you cannot explicitly create permanent objects in a temporary
tablespace.
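
for example, a second tempfile could be added like this (the file name and size
are assumptions):

alter tablespace temp_tbs
add tempfile '/usr/oracle9i/orahomel/temp_data2.dbf' size 2m;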

125. which statement is true regarding enabling constraints?

a. enable novalidate is the default when a constraint is enabled.


b. enabling a constraint novalidate places a lock on the table.
c. enabling a unique constraint to validate does not check for constraint
violation if the constraint is deferrable.
d. a constraint that is currently disabled can be enabled in one of two ways:
enable novalidate or enable validate.

answer: d

explanation:
constraint states
table constraints can be enabled and disabled using the create table or alter
table statement. in addition the validate or novalidate keywords can be used to
alter the action of the state:
(1) enable validate is the same as enable. the constraint is checked and is
guaranteed to hold for all rows.
(2) enable novalidate means the constraint is checked for new or modified rows,
but existing data may violate the constraint.
(3) disable novalidate is the same as disable. the constraint is not checked so
data may violate the constraint.
(4) disable validate means the constraint is not checked but disallows any
modification of the constrained columns.
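
for example, assuming a currently disabled constraint emp_job_nn on the employees
table:

-- enable without checking existing rows
alter table employees enable novalidate constraint emp_job_nn;

-- enable and verify that every existing row satisfies the constraint
alter table employees enable validate constraint emp_job_nn;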

126. which statement about the shared pool is true?

a. the shared pool cannot be dynamically resized.


b. the shared pool contains only fixed structures.
c. the shared pool consists of the library cache and buffer cache.
d. the shared pool stores the most recently executed sql statements and the most
recently accessed data definitions.

answer: d

explanation:
ad c: false. the shared pool portion of the sga contains three major areas:
library cache, dictionary cache, and control structures.
ad d: true. in general, any item (shared sql area or dictionary row) in the shared
pool remains until it is flushed according to a modified lru algorithm. the memory
for items that are not being used regularly is freed if space is required for new
items that must be allocated some space in the shared pool.
(a58227.pdf) pg. 158. (6-6)

127. as a dba, one of your tasks is to periodically monitor the alert log file and
the background trace files. in doing so, you notice repeated messages indicating
that log writer (lgwr) frequently has to wait for a redo log group because a
checkpoint has not completed or a redo log group has not been archived.
what should you do to eliminate the wait lgwr frequently encounters?
a. increase the number of redo log groups to guarantee that the groups are always
available to lgwr.
b. increase the size of the log buffer to guarantee that lgwr always has
information to write.
c. decrease the size of the redo buffer cache to guarantee that lgwr always has
information to write.
d. decrease the number of redo log groups to guarantee that checkpoints are
completed prior to lgwr writing.

answer: a

explanation:
you need to increase the number of redo log groups to guarantee that the groups
are always available to lgwr. log writer (lgwr) frequently has to wait for a redo
log group because a checkpoint has not completed or a redo log group has not been
archived if there are not enough redo log groups or they are too small.

ad b: increasing the size of the log buffer will not affect the checkpoint
frequency. you can increase the redo log file size to eliminate the wait lgwr
frequently encounters.
ad c: decreasing the size of the redo buffer cache will not affect the checkpoint
frequency.
ad d: decreasing the number of redo log groups you will just make lgwr wait for a
redo log group more frequently because a checkpoint has not completed or a redo
log group has not been archived.

128. which privilege is required to create a database?

a. dba
b. sysdba
c. sysoper
d. resource

answer: b

explanation:
you must have the osdba role enabled.
the roles connect, resource, dba, exp_full_database, and imp_full_database are
defined automatically for oracle databases. these roles are provided for backward
compatibility to earlier versions of oracle and can be modified in the same manner
as any other role in an oracle database. see (a58227.pdf) pg. 622. (26-16).
ad c: false. sysoper permits you to perform startup, shutdown, alter database
open/mount, alter database backup, archive log, and recover, and includes the
restricted session privilege.
ad b: true. sysdba contains all system privileges with admin option, and the
sysoper system privilege; permits create database and time-based recovery. see
(a58227.pdf) pg. 637. (25-7).

129. which structure provides for statement-level read consistency?

a. undo segments
b. redo log files
c. data dictionary tables
d. archived redo log files

answer: a
explanation:
oracle7 server concepts 10-6
statement level read consistency
oracle always enforces statement-level read consistency. this guarantees that the
data returned by a single query is consistent with respect to the time that the
query began. therefore, a query never sees dirty data nor any of the changes made
by transactions that commit during query execution. as query execution proceeds,
only data committed before the query began is visible to the query. the query does
not see changes committed after statement execution begins. a consistent result
set is provided for every query, guaranteeing data consistency, with no action on
the user's part.
the sql statements select, insert with a query, update, and delete all query data,
either explicitly or implicitly, and all return consistent data. each of these
statements uses a query to determine which data it will affect (select, insert,
update, or delete, respectively). a select statement is an explicit query and may
have nested queries or a join operation. an insert statement can use nested
queries. update and delete statement can use where clauses or subqueries to affect
only some rows in a table rather than all rows.
while queries used in insert, update, and delete statements are guaranteed a
consistent set of results, they do not see the changes made by the dml statement
itself. in other words, the data the query in these operations sees reflects the
state of the data before the operation began to make changes.

for this purpose only the undo segments are necessary from the possible answers.

130. you just issued the startup command. which file is checked to determine the
state of the database?

a. the control file


b. the first member of redo log file group 1
c. the data file belonging to the system tablespace
d. the most recently created archived redo log file

answer: a

explanation:
oracle9i database administrator's guide release 2 (9.2) march 2002 part no.
a96521-01 (a96521.pdf) 4-16
quiescing a database
there are times when there is a need to put a database into a state where only dba
transactions, queries, fetches, or pl/sql statements are allowed. this is called a
quiesced state, in the sense that there are no ongoing non-dba transactions,
queries, fetches, or pl/sql statements in the system. this quiesced state allows
you or other administrators to perform actions that cannot safely be done
otherwise.

placing a database into a quiesced state


to place a database into a quiesced state, issue the following statement:
alter system quiesce restricted;

viewing the quiesce state of an instance


the v$instance view can be queried to see the current state of an instance. it
contains a column named active_state, whose values are shown in the following
table:

active_state   description
normal         normal unquiesced state
quiescing      being quiesced, but there are still active non-dba sessions running
quiesced       quiesced, no active non-dba sessions are active or allowed
since this state can be queried after startup, i believe it is really recorded in
the control file, which is the file checked when the startup command is issued.
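
the quiesce state described above can be checked with:

select instance_name, active_state
from v$instance;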

131. which two are true about the data dictionary views with prefix user_? (choose
two.)

a. the column owner is implied to be the current user.


b. a user needs the select any table system privilege to query these views.
c. the definitions of these views are stored in the user's default tablespace.
d. these views return information about all objects to which the user has access.
e. users can issue an insert statement on these views to change the value in the
underlying base tables.
f. a user who has the create public synonym system privilege can create public
synonyms for these views.

answer: a, f

explanation:
ad a: true. views with the prefix user_ usually exclude the column owner; this
column is implied in the user views to be the user issuing the query (see
(a58227.pdf) pg. 137. (4-5)). otherwise they have columns identical to the other
views, except that the column owner is implied to be the current user. see
(a58227.pdf) pg. 138. (4-6).
ad b: false. the data dictionary views are accessible to all users of an oracle
server. most views can be accessed by any user with the create session privilege.
the data dictionary views that begin with dba_ are restricted. these views can be
accessed only by users with the select any table privilege. this privilege is
assigned to the dba role when the system is initially installed. see (a58242.pdf)
pg. 171 (2-1).
ad c: false. the data dictionary is always available when the database is open. it
resides in the system tablespace, which is always online. see (a58227.pdf) pg.
137. (4-5).
ad d: false. these views do not return information about all objects to which the
user has access. the data
dictionary views with prefix all_ provide this access.
ad e: false. any oracle user can use the data dictionary as a read-only reference
for information about the database. see (a58227.pdf) pg. 135. (4-3).
ad f: true.

132. an oracle instance is executing in a nondistributed configuration. the
instance fails because of an operating system failure.
which background process would perform the instance recovery when the database is
reopened?

a. pmon
b. smon
c. reco
d. arcn
e. ckpt

answer: b

explanation:
smon (oracle system monitor): an oracle background process created when you start
a database instance. the smon process performs instance recovery, cleans up after
dirty shutdowns, and coalesces adjacent free extents into larger free extents.
pmon (oracle process monitor): an oracle background process created when you start
a database instance. the pmon process frees up resources if a user process fails
(for example, it releases database locks).
reco (oracle recoverer process): an oracle background process created when you
start an instance with distributed_transactions set in the init.ora file. the reco
process tries to resolve in-doubt transactions across oracle distributed
databases.
arch (oracle archiver process): an oracle background process created when you
start an instance in archive log mode. the arch process archives online redo log
files to some backup media.
ckpt (oracle checkpoint process): the oracle background process that timestamps
all datafiles and control files to indicate that a checkpoint has occurred.
(definitions adapted from the orafaq.com glossary.)

133. your database contains a locally managed uniform sized tablespace with
automatic segment-space management, which contains only tables. currently, the
uniform size for the tablespace is 512 k.
because the tables have become so large, your configuration must change to improve
performance. now the tables must reside in a tablespace that is locally managed,
with uniform size of 5 mb and automatic segment-space management.

what must you do to meet the new requirements?

a. the new requirements cannot be met.


b. re-create the control file with the correct settings.
c. use the alter tablespace command to increase the uniform size.
d. create a new tablespace with correct settings then move the tables into the new
tablespace.

answer: d

explanation:
ad a: false. the new requirements can be met by creating a new tablespace with
correct settings and by moving the tables into the new tablespace.
ad b: false. re-creating the control file is not the right approach. the uniform
extent size is a property of the tablespace, not of the control file, so changing
the control file will not meet the new requirements.
ad c: false. you cannot dynamically change the uniform size of an existing
tablespace with alter tablespace.
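
a rough sketch of option d (the tablespace name, file name, and table name are
illustrative):

create tablespace data_5m
datafile '/u01/oradata/db01/data_5m_01.dbf' size 1000m
extent management local uniform size 5m
segment space management auto;

alter table big_table move tablespace data_5m;
-- any indexes on the moved table must be rebuilt afterwards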

134. you created a tablespace sh_tbs. the tablespace consists of two data files:
sh_tbs_data1.dbf and sh_tbs_data2.dbf. you created a nonpartitioned table
sales_det in the sh_tbs tablespace.
which two statements are true? (choose two.)
a. the data segment is created as soon as the table is created.
b. the data segment is created when the first row in the table is inserted.
c. you can specify the name of the data file where the data segment should be
stored.
d. the header block of the data segment contains a directory of the extents in the
segment.

answer: a, d

explanation:
ad a: true. every nonclustered table or partition and every cluster in an oracle
database has a single data segment to hold all of its data. oracle creates this
data segment when you create the nonclustered table or cluster with the create
command. if the table or index is partitioned, each partition is stored in its own
segment. see: oracle8 concepts release 8.0 december, 1997 part no. a58227-01
(a58227.pdf) pg. 107. (2-15).
ad b: false. because of the previous.
ad c: false. you specify the tablespace for the table, not an individual data
file; oracle decides in which of the tablespace's data files the extents are
allocated.
ad d: true. for maintenance purposes, the header block of each segment contains a
directory of the extents in that segment. see: oracle8 concepts release 8.0
december, 1997 part no. a58227-01 (a58227.pdf) pg. 103. (2-11).

135. the dba can structure an oracle database to maintain copies of online redo
log files to avoid losing database information.
which three are true regarding the structure of online redo log files? (choose
three.)

a. each online redo log file in a group is called a member.


b. each member in a group has a unique log sequence number.
c. a set of identical copies of online redo log files is called an online redo log
group.
d. the oracle server needs a minimum of three online redo log file groups for the
normal operation of a database.
e. the current log sequence number of a redo log file is stored in the control
file and in the header of all data files.
f. the lgwr background process concurrently writes the same information to all
online and archived redo log files in a group.

answer: a, c, e

explanation:
http://www.siue.edu/~dbock/cmis565/ch7-redo_log.htm

each redo log group has identical redo log files. the lgwr concurrently writes
identical information to each redo log file in a group. the oracle server needs a
minimum of two online redo log groups for normal database operation. thus, if disk
1 crashes as shown in the figure above, none of the redo log files are truly lost
because there are duplicates. if the group has more members, you need more disk
drives!

if possible, you should separate the online redo log files from the archive log
files as this reduces contention for the i/o buss path between the arcn and lgwr
background processes. you should also separate datafiles from the online redo log
files as this reduces lgwr and dbwn contention. it also reduces the risk of losing
both datafiles and redo log files if a disk crash occurs.

redo log files in a group are called members. each group member has identical log
sequence numbers and is the same size - they cannot be different sizes. the log
sequence number is assigned by the oracle server as it writes to a log group and
the current log sequence number is stored in the control files and in the header
information of all datafiles - this enables synchronization between datafiles and
redo log files.

also, lgwr writes only to the online redo log files, not to archived redo log
files, so f is incorrect.

136. which three statements are true about the use of online redo log files?
(choose three.)

a. redo log files are used only for recovery.


b. each redo log within a group is called a member.
c. redo log files are organized into a minimum of three groups.
d. an oracle database requires at least three online redo log members.
e. redo log files provide the database with a read consistency method.
f. redo log files provide the means to redo transactions in the event of an
instance failure.

answer: a, b, f

explanation:
ad a: true. the information in a redo log file is used only to recover the
database from a system or media failure that prevents database data from being
written to a database's datafiles. see (a58227.pdf) pg. 46. (1-12)
ad c: false. every oracle database has a set of two or more redo log files; the
minimum is two groups, not three. see (a58227.pdf) pg. 46. (1-12)
ad d: false. oracle requires at least two, not three, online redo log groups, so
an oracle database requires at least two online redo log members.
ad e: false. every database contains one or more rollback segments, which are
portions of the database that record the actions of transactions in the event that
a transaction is rolled back. you use rollback segments to provide read
consistency, rollback transactions, and recover the database. (a58227.pdf) pg.
109. (2-17)

137. which steps should you follow to increase the size of the online redo log
groups?

a. use the alter database resize logfile group command for each group to be
resized.
b. use the alter database resize logfile member command for each member within the
group being resized.
c. add new redo log groups using the alter database add logfile group command with
the new size.
drop the old redo log files using the alter database drop logfile group command.
d. use the alter database resize logfile group command for each group to be
resized.
use the alter database resize logfile member command for each member within the
group.

answer: c

explanation:
ad a: there is no alter database resize logfile group command in oracle.
ad b: there is no alter database resize logfile member command in oracle.
ad c: to increase the size of the online redo log groups, first add new redo log
groups with larger members using the alter database add logfile group command.
then force log switches with alter system switch logfile until the old, smaller
groups are no longer current, and drop them using the alter database drop logfile
group command.
ad d: there are no alter database resize logfile group and alter database resize
logfile member commands in oracle.
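
a sketch of the add/switch/drop sequence (group numbers, file names, and sizes are
assumptions):

alter database add logfile group 4
('/u01/oradata/db01/redo04a.log', '/u02/oradata/db01/redo04b.log') size 100m;

alter system switch logfile;   -- repeat until the old group is inactive

alter database drop logfile group 1;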

138. oracle guarantees read-consistency for queries against tables. what provides
read-consistency?

a. redo logs
b. control file
c. undo segments
d. data dictionary

answer: c

explanation:
ad a: false. the information in a redo log file is used only to recover the
database from a system or media failure that prevents database data from being
written to a database's datafiles. see (a58227.pdf) pg. 46. (1-12).
ad b: false. the control file of a database is a small binary file necessary for
the database to start and operate successfully. (a58227.pdf) pg. 693. (28-19).
ad c: true. every database contains one or more rollback segments, which are
portions of the database that record the actions of transactions in the event that
a transaction is rolled back. you use rollback segments to provide read
consistency, rollback transactions, and recover the database. (a58227.pdf) pg.
109. (2-17).
ad d: false. each oracle database has a data dictionary. an oracle data dictionary
is a set of tables and views that are used as a read-only reference about the
database. for example, a data dictionary stores information about both the logical
and physical structure of the database. (a58227.pdf) pg. 81, 134 (1-47, 4-1).

139. you need to shut down your database. you want all of the users who are
connected to be able to complete any current transactions. which shutdown mode
should you specify in the shutdown command?

a. abort
b. normal
c. immediate
d. transactional

answer: d

explanation:
ad a: false. this option of the shutdown command is used for emergency database
shutdown.
ad b: false. normal database shutdown proceeds with the following conditions:
(a) no new connections are allowed after the statement is issued.
(b) before the database is shut down, oracle waits for all currently connected
users to disconnect from the database.
(c) the next startup of the database will not require any instance recovery
procedures.
ad c: false. immediate database shutdown proceeds with the following conditions:
(a) current client sql statements being processed by oracle are terminated
immediately.
(b) any uncommitted transactions are rolled back. if long uncommitted transactions
exist, this method of shutdown might not complete quickly, despite its name.
(c) oracle does not wait for users currently connected to the database to
disconnect.
(d) oracle implicitly rolls back active transactions and disconnects all connected
users.
ad d: true. after submitting this statement, no client can start a new transaction
on this particular instance. if a client attempts to start a new transaction, they
are disconnected. after all transactions have either committed or aborted, any
client still connected to the instance is disconnected. at this point, the
instance shuts down just as it would when a shutdown immediate statement is
submitted. a transactional shutdown prevents clients from losing work, and at the
same time, does not require all users to log off.

see (a58397.pdf) pg. 78. (3-8)
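
for illustration, a minimal sketch of issuing the command from sql*plus (connecting
as sysdba is an assumption about the environment):

connect / as sysdba
shutdown transactional
-- existing transactions are allowed to finish; new transactions are refused,
-- then the instance shuts down as with shutdown immediate
startup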

140. you decided to use multiple buffer pools in the database buffer cache of your
database. you set the sizes of the buffer pools with the db_keep_cache_size and
db_recycle_cache_size parameters and restarted your instance.
what else must you do to enable the use of the buffer pools?

a. re-create the schema objects and assign them to the appropriate buffer pool.
b. list each object with the appropriate buffer pool initialization parameter.
c. shut down the database to change the buffer pool assignments for each schema
object.
d. issue the alter statement and specify the buffer pool in the buffer_pool clause
for the schema objects you want to assign to each buffer pool.

answer: d

explanation:
ad a: false. it is not required to recreate the schema objects to assign them to
the appropriate buffer pool. you can do that with alter table command.
ad b: false. you don't need to list each object with the appropriate buffer pool
initialization parameter; by default, objects are cached in the default buffer
pool.
ad c: false. to change the buffer pool assignment of a schema object from default
to keep or recycle, you only need the alter table command; you don't need to
restart the database for the change to take effect.
ad d: true. unlike db_block_buffers, which specifies the number of data block-
sized buffers that can be stored in sga, oracle9i introduces a new parameter,
db_cache_size, which can be used to specify the size of the buffer cache in the
oracle sga. there are two other parameters used to set keep and recycle parts of
the buffer pools: db_keep_cache_size and db_recycle_cache_size. to enable the use
of the buffer pools, issue an alter statement that names the desired pool (default,
keep or recycle) in the buffer_pool clause for each schema object you want to
assign. syntax of these statements: alter table table_name storage (buffer_pool
default), alter table table_name storage (buffer_pool keep) or alter table
table_name storage (buffer_pool recycle).

oca oracle 9i associate dba certification exam guide, jason couchman, p. 544-547,
chapter 10: basics of the oracle database architecture
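
a minimal sketch tying the pieces together (the parameter values and the table and
index names are illustrative assumptions, not from the question):

-- sized in the parameter file before the instance is restarted:
--   db_cache_size         = 200m
--   db_keep_cache_size    = 32m
--   db_recycle_cache_size = 16m

-- then assign segments to the pools:
alter table emp storage (buffer_pool keep);
alter index emp_pk storage (buffer_pool keep);
alter table audit_log storage (buffer_pool recycle);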

141. a user calls and informs you that a 'failure to extend tablespace' error was
received while inserting into a table. the tablespace is locally managed.
which three solutions can resolve this problem? (choose three.)
a. add a data file to the tablespace
b. change the default storage clause for the tablespace
c. alter a data file belonging to the tablespace to autoextend
d. resize a data file belonging to the tablespace to be larger
e. alter the next extent size to be smaller, to fit into the available space

answer: a, c, d

explanation:
ad a, c, d: true. you can add a data file to the tablespace, alter a data file
belonging to the tablespace to extend automatically, or resize a data file
belonging to the tablespace to be larger, as sketched below.
ad b: false. changing the default storage of the tablespace will not solve the
problem; in a locally managed tablespace the default storage clause is not used.
ad e: false. making the next extent size smaller is only a temporary workaround:
the error will be raised again as soon as the segment needs more space than the
data files can provide.

oca oracle 9i associate dba certification exam guide, jason couchman, p. 637-640,
chapter 12: managing tablespaces and datafiles
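
the three valid fixes as sql, with the tablespace name, file names and sizes used
only as placeholders:

-- a: add another data file to the tablespace
alter tablespace app_data add datafile '/u02/oradata/db01/app_data02.dbf' size 100m;
-- c: let an existing data file grow automatically
alter database datafile '/u01/oradata/db01/app_data01.dbf' autoextend on next 10m maxsize 2g;
-- d: make an existing data file larger
alter database datafile '/u01/oradata/db01/app_data01.dbf' resize 500m;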

142. which table type should you use to provide fast key-based access to table
data for queries involving exact matches and range searches?

a. regular table
b. clustered table
c. partitioned table
d. index-organized table

answer: d

explanation:
ad a: false. a regular table requires separate indexes to provide fast key-based
access to table data for queries involving exact matches and range searches.
ad b: false. clusters are an optional method of storing table data. clusters are
groups of one or more tables physically stored together because they share common
columns and are often used together. because related rows are physically stored
together, disk access time improves. (a58227.pdf) pg. 79. (1-45).
ad c: false. partitioning addresses the key problem of supporting very large
tables and indexes by allowing you to decompose them into smaller and more
manageable pieces called partitions. once partitions are defined, sql statements
can access and manipulate the partitions rather than entire tables or indexes.
partitions are especially useful in data warehouse applications, which commonly
store and analyze large amounts of historical data. all partitions of a table or
index have the same logical attributes, although their physical attributes can be
different.
for example, all partitions in a table share the same column and constraint
definitions; and all partitions in an index share the same index columns. however,
storage specifications and other physical attributes such as pctfree, pctused,
initrans, and maxtrans can vary for different partitions of the same table or
index. each partition is stored in a separate segment. optionally, you can store
each partition in a separate tablespace. see (a58227.pdf) pg. 244. (9-2).
ad d: true. an index-organized table differs from a regular table in that the data
for the table is held in its associated index. changes to the table data, such as
adding new rows, updating rows, or deleting rows, result only in updating the
index. the index-organized table is like a regular table with an index on one or
more of its columns, but instead of maintaining two separate storages for the
table and the b*-tree index, the database system only maintains a single b*-tree
index which contains both the encoded key value and the associated column values
for the corresponding row. benefits of index-organized tables: because rows are
stored in the index, index-organized tables provide faster key-based access to
table data for queries involving exact match and/or range search. (a58227.pdf) pg.
229. (8-29).
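
a minimal sketch of creating an index-organized table (the table and column names
are illustrative only):

create table order_items (
    order_id   number,
    line_no    number,
    product_id number,
    quantity   number,
    constraint pk_order_items primary key (order_id, line_no)
)
organization index;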

143. you issue the following queries to obtain information about the redo log
files:

select group#, type, member from v$logfile;

group#  type    member
1       online  /databases/db01/oradata/u02/log1a.rdo
1       online  /databases/db01/oradata/u03/log1b.rdo
2       online  /databases/db01/oradata/u02/log2a.rdo
2       online  /databases/db01/oradata/u03/log2b.rdo
3       online  /databases/db01/oradata/u02/log3a.rdo
3       online  /databases/db01/oradata/u03/log3b.rdo

select group#, sequence#, status from v$log;

group#  sequence#  status
1       250        inactive
2       251        current
3       249        inactive

alter database drop logfile member '/databases/db01/oradata/u03/log2b.rdo';

why does the command fail?

a. each online redo log file group must have two members.
b. you cannot delete any members of online redo log file groups.
c. you cannot delete any members of the current online redo log file group
d. you must delete the online redo log file in the operating system before issuing
the alter database command.

answer: c

explanation:
oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01
(a96524.pdf) 9-41
drop logfile clause
use the drop logfile clause to drop all members of a redo log file group. specify
a redo log file group as indicated for the add logfile member clause.
(a) to drop the current log file group, you must first issue an alter system
switch logfile statement.
(b) you cannot drop a redo log file group if it needs archiving.
(c) you cannot drop a redo log file group if doing so would cause the redo thread
to contain less than two redo log file groups.
see also: alter system on page 10-22 and "dropping log file members: example" on
page 9-54

the member being dropped belongs to group 2, which is the current group, so the
command fails; after an alter system switch logfile the current group changes and
the member could then be dropped. so answer c is correct.
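
a sketch of how the member could be dropped once group 2 is no longer current:

alter system switch logfile;
-- another group becomes current; group 2 becomes active and then inactive
alter database drop logfile member '/databases/db01/oradata/u03/log2b.rdo';
-- the physical file still has to be removed at the operating system level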

144. which statement about an oracle instance is true?

a. the redo log buffer is not part of the shared memory area of an oracle
instance.
b. multiple instances can execute on the same computer, each accessing its own
physical database.
c. an oracle instance is a combination of memory structures, background processes,
and user processes.
d. in a shared server environment, the memory structure component of an instance
consists of a single sga and a single pga.

answer: b

not completely sure about d.

explanation:
http://download-west.oracle.com/docs/cd/b10501_01/server.920/a96524/c06start.htm#8106.
multiple instances can run concurrently on the same computer, each accessing its
own physical database. in clustered and massively parallel systems (mps), real
application clusters enables multiple instances to mount a single database.

ad a: false. the redo log buffer is a circular buffer in the sga that holds
information about changes made to the database. see (a58227.pdf) pg. 158, 144.
(6-6, 5-2).
ad c: false. oracle allocates a memory area called the system global area (sga)
and starts one or more oracle processes. this combination of the sga and the
oracle processes is called an oracle instance. see (a58227.pdf) pg. 144. (5-2).
ad d: true/false. ??? a pga is a nonshared memory area to which a process can write.
one pga is allocated for each server process; the pga is exclusive to that server
process and is read and written only by oracle code acting on behalf of that
process. a pga is allocated by oracle when a user connects to an oracle database
and a session is created, though this varies by operating system and
configuration. the basic memory structures associated with oracle include:
(a) software code areas
(b) system global area (sga): the database buffer cache, the redo log buffer, the
shared pool
(c) program global areas (pga): the stack areas, the data areas, sort areas

see (a58227.pdf) pg. 154. (6-2)


145. the current password file allows for five entries. new dbas have been hired
and five more entries need to be added to the file, for a total of ten. how can
you increase the allowed number of entries in the password file?

a. manually edit the password file and add the new entries.
b. alter the current password file and resize it to be larger.
c. add the new entries; the password file will automatically grow.
d. drop the current password file, recreate it with the appropriate number of
entries and add everyone again.

answer: d

explanation:
you can create a password file using the password file creation utility, orapwd
or, for selected operating systems, you can create this file as part of your
standard installation.
entries: this parameter sets the maximum number of entries allowed in the password
file. this corresponds to the maximum number of distinct users allowed to connect
to the database as sysdba or sysoper. if you ever need to exceed this limit, you
must create a new password file. it is safest to select a number larger than you
think you will ever need. see (a58397.pdf) pg. 39, 41. (1-9, 1-11).
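
a sketch of the steps involved (the file location, instance name and password are
placeholders, and dba1 is a hypothetical user):

-- 1. note who currently holds sysdba/sysoper, so they can be re-granted later:
select * from v$pwfile_users;
-- 2. remove or rename the old password file at the operating system level, then
--    recreate it with more entries, for example:
--    orapwd file=$ORACLE_HOME/dbs/orapwPROD password=secret entries=10
-- 3. re-grant the privileges so each dba is written into the new file:
grant sysdba to dba1;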

146. abc company consolidated into one office building, so the very large
employees table no longer requires the office_location column. the dba decided to
drop the column using the syntax below:

alter table hr.employees
drop column building_location
cascade constraints;

dropping this column has turned out to be very time consuming and is requiring a
large amount of undo space.

what could the dba have done to minimize the problem regarding time and undo space
consumption?

a. use the export and import utilities to bypass undo.
b. mark the column as unused. remove the column at a later time when less activity
is on the system.
c. drop all indexes and constraints associated with the column prior to dropping
the column.
d. mark the column invalid prior to beginning the drop to bypass undo. remove the
column using the drop unused columns command.
e. add a checkpoint to the drop unused columns command to minimize undo space.

answer: e

testking said b.

explanation:
http://download-west.oracle.com/docs/cd/b10501_01/server.920/a96521/tables.htm#5508.
removing unused columns
the alter table ... drop unused columns statement is the only action allowed on
unused columns. it physically removes unused columns from the table and reclaims
disk space.
in the example that follows the optional keyword checkpoint is specified. this
option causes a checkpoint to be applied after processing the specified number of
rows, in this case 250. checkpointing cuts down on the amount of undo logs
accumulated during the drop column operation to avoid a potential exhaustion of
undo space.
alter table hr.admin_emp drop unused columns checkpoint 250;
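
a sketch combining the two ideas for the employees table from the question (the
checkpoint interval of 250 rows is taken from the documentation example above):

-- make the column unavailable immediately, which is fast and generates little undo:
alter table hr.employees set unused column building_location cascade constraints;
-- later, during a quiet period, physically remove it in checkpointed batches:
alter table hr.employees drop unused columns checkpoint 250;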

147. user a issues this command:

update emp set id=200 where id=1

then user b issues this command:

update emp set id=300 where id=1

user b informs you that the update statement seems to be hung. how can you resolve
the problem so user b can continue working?

a. no action is required
b. ask user b to abort the statement
c. ask user a to commit the transaction
d. ask user b to commit the transaction

answer: c

explanation:
user b's statement waits because user a's uncommitted transaction holds a row
lock on the row with id=1; no other session can modify that row until the lock is
released.
ad a: false. action is required: user b's session stays blocked for as long as
user a's session holds the lock on the row.
ad b: false. user a, not user b, has to end the transaction to resolve the issue;
user b does not need to abort the statement.
ad d: false. user b's update has not completed yet, so there is nothing for user b
to commit; the statement is still waiting on user a's lock.

148. anne issued this sql statement to grant bill access to the customers table in
anne's schema:

grant select on customers to bill with grant option;

bill issued this sql statement to grant claire access to the customers table in
anne's schema:

grant select on anne.customers to claire;

later, anne decides to revoke the select privilege on the customers table from
bill.

which statement correctly describes both what anne can do to revoke the privilege,
and the effect of the revoke command?

a. anne can run the revoke select on customers from bill statement. both bill and
claire lose their access to the customers table.
b. anne can run the revoke select on customers from bill statement. bill loses
access to the customers table, but claire will keep her access.
c. anne cannot run the revoke select on customers from bill statement unless bill
first revokes claire's access to the customers table.
d. anne must run the revoke select on customers from bill cascade statement. both
bill and claire lose their access to the customers table.

answer: a

explanation:
ad a: true. anne can run the revoke select on customers from bill statement. both
bill and claire lose their access to the customers table because of cascade
revoking of privilege.
ad b: false. both bill and claire lose their access to the customers table, not
only bill.
ad c: false. anne can run the revoke select on customers from bill statement.
there is no limitation in oracle that bill needs first to revoke claire's access
to the customers table if anne granted this privilege to bill with grant option.
ad d: false. anne can revoke the privilege from bill (and, through the cascade,
from claire) with a plain revoke command; there is no cascade keyword for revoking
object privileges. the optional cascade constraints clause applies only when
revoking the references privilege, which is not the case here.
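
for illustration, the statements involved (all object and user names are taken from
the question):

-- issued by anne:
grant select on customers to bill with grant option;
-- issued by bill:
grant select on anne.customers to claire;
-- issued by anne; object privileges that bill passed on are revoked in cascade,
-- so claire's select access on anne.customers disappears as well:
revoke select on customers from bill;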

149. john has created a procedure named salary_calc. which sql query allows him to
view the text of the procedure?
a. select text from user_source where name = 'salary_calc';
b. select * from user_source where source_name = 'salary_calc';
c. select * from user_objects where object_name = 'salary_calc';
d. select * from user_procedures where object_name = 'salary_calc';
e. select text from user_source where name = 'salary_calc' and owner = 'john';

answer: a

explanation:
sql> desc user_source
name                null?    type
------------------  -------  --------------
name                         varchar2(30)
type                         varchar2(12)
line                         number
text                         varchar2(4000)
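
a sketch of running the query, with an order by line added (not part of the answer)
so the source comes back in order; note that the data dictionary normally stores
object names in uppercase, so the literal is usually written that way:

select text
from   user_source
where  name = 'SALARY_CALC'
and    type = 'PROCEDURE'
order by line;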

150. which statement should you use to obtain information about the number, names,
status, and location of the control files?

a. select name, status from v$parameter;
b. select name, status from v$controlfile;
c. select name, status, location from v$control_files;
d. select status, location from v$parameter where parameter=control_files;

answer: b

explanation:
ad a: false. v$parameter this view lists information about initialization
parameters. see (a58242.pdf) pg. 402.
ad b: true. v$controlfile this view lists the names of the control files. see
(a58242.pdf) pg. 360.
ad c: false. v$control_files does not exist. see (a58242.pdf).
ad d: false. v$parameter this view lists information about initialization
parameters, it has no parameter column. see (a58242.pdf) pg. 402.

151. you need to make one of the data files of the prod_tbs tablespace auto
extensible.

you issue this sql command:

alter tablespace prod_tbs
datafile '/uo1/private/oradata/prod.dbf'
autoextend on;

which error occurs?

a. ora 02789 max number of files reached.
b. ora 03280 invalid datafile filename specified.
c. ora 03283 specified datafile string does not exist.
d. ora 02142 missing or invalid alter tablespace option.
e. ora 01516 non existent log file, data file or tempfile 'string'.
f. ora 03244 no free space found to place the control information.
g. ora 00238 operation would reuse a filename that is part of the database.

answer: d

explanation:
try it!

see ocp oracle 9i database: fundamentals i, p. 162:
autoextend is an attribute of a data file, not of a tablespace, so the correct
statement should be:
alter database datafile '/uo1/private/oradata/<filename>' autoextend on;

152. you issue this command:

startup mount

which three events occur when the instance is started and the database is
mounted? (choose three)

a. the sga is allocated.
b. the control file is opened.
c. the background process is started.
d. the existence of the datafile is verified.
e. the existence of the online redo log file is verified.

answer: a, b, c

explanation:
see ocp oracle 9i database: fundamentals i, p. 56:
a and c already occur in the nomount stage.
b occurs in the mount stage: the control file is opened and read to obtain the
names and status of the datafiles and the redo log files.
ad d, e: false. datafiles and redo log files are opened (and therefore verified)
only in the open stage, as sketched below.
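
a minimal sketch of the three stages issued separately from sql*plus, for
reference:

-- sga allocated, background processes started:
startup nomount
-- control file opened and read:
alter database mount;
-- datafiles and online redo log files opened and verified:
alter database open;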

153. you are creating a database manually and you need to limit the number of
initial online redo log groups and members. which two keywords should you use
within the create database command to define the maximum number of online redo log
files? (choose two.)
a. maxlogmembers, which determines the maximum number of members per group.
b. maxredologs, which specifies the maximum number of online redo log files.
c. maxlogfiles, which determines the absolute maximum of online redo log groups.
d. maxloggroups, which specifies the maximum number of online redo log files,
groups and members.

answer: a, c

explanation:
see ocp oracle 9i database: fundamentals i, p. 77f.:
the maxlogfiles option defines the maximum number of redo log file groups and the
maxlogmembers option defines the maximum number of members for a redo log file
group that can be created in the database.
the other options (maxredologs, maxloggroups) do not exist.
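
a fragment of a create database statement showing where the two keywords appear;
here maxlogfiles limits the database to 5 groups and maxlogmembers limits each
group to 3 members (the database name, file names, sizes and limits are
illustrative only):

create database db01
    maxlogfiles 5
    maxlogmembers 3
    datafile '/u01/oradata/db01/system01.dbf' size 250m
    logfile group 1 ('/u02/oradata/db01/log1a.rdo') size 50m,
            group 2 ('/u02/oradata/db01/log2a.rdo') size 50m;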

154. which four do you find in the alert log file? (choose four)

a. an entry for creation of a user.
b. an entry for creation of a table.
c. an entry for creation of a tablespace.
d. an entry for the startup of the instance.
e. an entry indicating a log switch has occurred.
f. a list of the values of the non-default initialization parameters at the time
the instance starts.

answer: c, d, e, f

explanation:
create user and create table do not produce an entry in the alert log file.

see ocp oracle 9i database: fundamentals i, p. 64:
the alert log stores information that is extremely useful for judging the health
of the database. it records the startup and shutdown of the database, log switches
(a new entry is written every time a log switch occurs), creation of tablespaces,
addition of new datafiles to the tablespaces, and, most importantly, the errors
that are generated by oracle.

155. you need to determine the amount of space currently used in each tablespace.

you can retrieve this information in a single sql statement using only one dba view
in the from clause, providing you use either the _______ or _______ dba view.

a. dba_extents.
b. dba_segments.
c. dba_data_files.
d. dba_tablespaces.

answer: a, c

explanation:
see ocp oracle 9i database: fundamentals i, p. 211
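
two sketches of the query, one for each view named in the answer (the division by
1024*1024 is only there to report megabytes):

-- space currently used per tablespace, summed over allocated extents:
select tablespace_name, sum(bytes)/1024/1024 as used_mb
from   dba_extents
group by tablespace_name;

-- total size of the data files belonging to each tablespace:
select tablespace_name, sum(bytes)/1024/1024 as file_mb
from   dba_data_files
group by tablespace_name;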
