
The Oracle AWR: Introduction and Report Analysis (1)


Tag: oracle10g, sql, session, oracle | Category: Database | Author: caobingkai | Date: 2012-06-27

1 AWR basic operation


C:\> sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Wed May 25 08:20:25 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining Scoring Engine options

SQL> @D:\oracle\product\10.2.0\db_2\RDBMS\ADMIN\awrrpt.sql

Current Instance
~~~~~~~~~~~~~~~~
   DB Id    DB Name      Inst Num Instance
----------- ------------ -------- ------------
 3556425887 TEST01              1 test01

Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type: html
Type Specified: html

Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   DB Id     Inst Num DB Name      Instance     Host
------------ -------- ------------ ------------ ------------
* 3556425887        1 TEST01       test01       PCE-TSG-036

Using 3556425887 for database Id
Using          1 for instance number

Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed.  Pressing <return> without
specifying a number lists all completed snapshots.
Enter value for num_days: 2

Listing the last 2 days of Completed Snapshots
                                                       Snap
Instance     DB Name      Snap Id    Snap Started      Level
------------ ------------ --------- ------------------ -----
test01       TEST01            214  24 May 2011 07:53      1
                               215  24 May 2011 09:00      1
                               216  24 May 2011 10:01      1
                               217  24 May 2011 11:00      1
                               218  24 May 2011 12:00      1
                               219  24 May 2011 13:01      1
                               220  24 May 2011 14:00      1
                               221  24 May 2011 15:00      1
                               222  24 May 2011 16:00      1
                               223  24 May 2011 17:00      1
                               224  25 May 2011 07:51      1

Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
http://www.databaseskill.com/3286705/

26/09/2015


Enter value for begin_snap: 223
Begin Snapshot Id specified: 223

Enter value for end_snap: 224
End   Snapshot Id specified: 224

declare
*
ERROR at line 1:
ORA-20200: The instance was shutdown between snapshots 223 and 224
ORA-06512: at line 42

Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining Scoring Engine options

Trying again, with a pair of snapshots the instance stayed up between:

Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 214
Begin Snapshot Id specified: 214

Enter value for end_snap: 215
End   Snapshot Id specified: 215

Then enter the name of the report you want to generate ....
......
<P>
<P>
End of Report
</BODY></HTML>

Report written to awrrpt_1_0524_08_09.html
SQL>
We will set the generated report aside for now. First, let's get acquainted with ASH and AWR.
2 Understanding ASH (Active Session History)

2.1 ASH (Active Session History) architecture
Before Oracle 10g, current session records were kept in v$session, and a session in a wait state was also copied into v$session_wait. When a connection was disconnected, its information was deleted from both v$session and v$session_wait. No view could tell you what a session had been doing, or which resources it had been waiting for, at each point in the past: v$session and v$session_wait only show what a session is running and waiting for right now.
In Oracle 10g, Oracle introduced Active Session History (ASH) to solve this problem. Once per second, ASH records information about the currently active sessions into a recycled buffer in the SGA; this process is called sampling. By default, ASH collects the active sessions from v$session every second and records the sessions' wait events; inactive sessions are not sampled. The sampling interval is determined by the hidden parameter _ash_sampling_interval.
10g also introduced a new view, v$session_wait_history, which keeps the last 10 wait events from v$session_wait for each active session. That is still not enough to monitor performance over a period of time, so 10g added one more view: v$active_session_history. This is ASH (active session history).
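As a small illustration (a sketch, not from the original article; the SID 123 is a placeholder), the last 10 waits of a single session can be pulled from v$session_wait_history like this:

```sql
-- Last 10 wait events recorded for one session (SID 123 is a placeholder)
select seq#, event, wait_time, p1, p2, p3
  from v$session_wait_history
 where sid = 123
 order by seq#;
```

Because the view keeps only 10 rows per session, this is useful for a quick look at what a session just finished waiting on, not for longer-term analysis.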
2.2 The strategy ASH adopts

Typically, to diagnose the current state of the database you need detailed information for roughly the last five to ten minutes. However, recording every session's activity costs a great deal of time and space, so ASH adopts this strategy: save only the information of active sessions in a wait state, sampling v$session_wait and v$session once per second and keeping the samples in memory (note: the data ASH samples is stored in memory).
2.3 How ASH works

The active session samples (the information collected each second from the related views) are stored in the SGA. The size of the ASH area allocated in the SGA can be queried from v$sgastat (the 'ASH buffers' entry under the shared pool). The space is recycled: when necessary, older information is overwritten by new information. Recording the activity of every session would consume far too many resources, so ASH obtains the active-session information only from V$SESSION and a few other views, and it collects that information every second not by running SQL statements but by reading memory directly, which is considerably more efficient.
Because data is sampled every second, ASH caches a very large amount of data. Flushing all of it to disk would consume a great deal of disk space, so the ASH data in the cache is flushed to the AWR tables according to the following strategy:
1. By default, MMON flushes 1/10 of the data in the ASH buffers to disk every 60 minutes (adjustable).
2. By default, when the ASH buffers become 66% full, MMNL writes 1/10 of the ASH buffers' data to disk (which 1/10 gets written follows the FIFO principle).
3. The 10% that MMNL writes is a percentage of the total amount of sampled data in the ASH buffers, not a proportion of the ASH buffers' total size.
4. To save space, the data collected by AWR is automatically purged after 7 days by default.
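As a quick check (a sketch; 'ASH buffers' is the component name as 10g reports it), you can see how much SGA memory the ASH buffer currently occupies:

```sql
-- Size of the ASH area carved out of the SGA, in bytes
select pool, name, bytes
  from v$sgastat
 where name = 'ASH buffers';
```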
The relevant hidden parameters:


_ash_sampling_interval: how often ASH samples; default once per second
_ash_size: minimum size of the ASH buffer; default 1M
_ash_enable: enables ASH sampling
_ash_disk_write_enable: enables writing the sampled data to disk
_ash_disk_filter_ratio: percentage of the total sampled data in the ASH buffer that is written to disk; default 10%
_ash_eflush_trigger: how full the ASH buffer must be before an emergency flush; default 66%
_ash_sample_all: if set to TRUE, all sessions are sampled, including sessions in idle waits; default FALSE
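The hidden _ash% parameters above can be listed with the usual x$ tables; this is a sketch that must be run as SYS, since the x$ fixed tables are not exposed to ordinary users:

```sql
-- Names, current values and descriptions of the hidden ASH parameters (run as SYS)
select a.ksppinm  parameter,
       b.ksppstvl value,
       a.ksppdesc description
  from x$ksppi a, x$ksppcv b
 where a.indx = b.indx
   and a.ksppinm like '\_ash%' escape '\';
```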
The ASH cache is a fixed-size area of the SGA, roughly 2M per CPU. The ASH cache will not exceed 5% of the shared pool or 2% of sga_target.
To query the sampled data in the ASH buffers: v$active_session_history.
The table the ASH buffers are flushed to: WRH$_ACTIVE_SESSION_HISTORY (a partitioned table; WRH = Workload Repository History).
The view over that table: dba_hist_active_sess_history.
2.4 The ASH view

Through the v$active_session_history view you can access the sampled data, and you can obtain some performance information from it as well. Its main columns:
-- Sampling information
SAMPLE_ID            sample ID
SAMPLE_TIME          sampling time
IS_AWR_SAMPLE        whether this sample belongs to the 1/10 of the data flushed to AWR

-- Information that uniquely identifies the session
SESSION_ID           corresponds to SID in V$SESSION
SESSION_SERIAL#      uniquely identifies a session object
SESSION_TYPE         FOREGROUND / BACKGROUND
USER_ID              Oracle user identifier; maps to V$SESSION.USER#
SERVICE_HASH         hash that identifies the service; maps to V$ACTIVE_SERVICES.NAME_HASH
PROGRAM              program
MODULE               module of the program, with version
ACTION
CLIENT_ID            client identifier of the session

-- Information about the SQL the session is executing
SQL_ID               SQL ID of the statement being executed at sampling time
SQL_CHILD_NUMBER     child cursor number of the SQL being executed at sampling time
SQL_PLAN_HASH_VALUE  hash value of the SQL plan
SQL_OPCODE           which phase of operation the SQL statement is in; corresponds to V$SESSION.COMMAND
QC_SESSION_ID
QC_INSTANCE_ID

-- Session wait state
SESSION_STATE        session state: WAITING / ON CPU
WAIT_TIME

-- Session wait event information
EVENT
EVENT_ID
EVENT#
SEQ#
P1
P2
P3
TIME_WAITED

-- Information about the object the session is waiting on
CURRENT_OBJ#
CURRENT_FILE#
CURRENT_BLOCK#
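Putting a few of these columns to work, a typical ASH query (a sketch) ranks the wait events of, say, the last 30 minutes straight from the in-memory buffer:

```sql
-- Top wait events over the last 30 minutes of ASH samples
select event, count(*) samples
  from v$active_session_history
 where sample_time > sysdate - 30 / 1440
   and session_state = 'WAITING'
 group by event
 order by samples desc;
```

Since each row represents one second of one session, the sample count roughly approximates the seconds spent in each wait.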
3 AWR (Automatic Workload Repository)
ASH sample data lives in memory. The memory allocated to ASH is limited: once the allocated space fills up, the oldest records are overwritten, and when the database is restarted all of the ASH information disappears. Long-term performance monitoring of Oracle is therefore impossible with ASH alone. Oracle 10g introduced a way to retain the ASH information permanently: AWR (Automatic Workload Repository). Oracle recommends using AWR in place of Statspack (10gR2 still ships Statspack).
3.1 From ASH to AWR

The flow from ASH to AWR can be described quickly with the following diagram:


v$session -> v$session_wait -> v$session_wait_history (this step is actually not required)
-> v$active_session_history (ASH) -> wrh$_active_session_history (AWR)
-> dba_hist_active_sess_history

v$session is the source where database activity begins;
v$session_wait records in real time what each active session is currently waiting for;
v$session_wait_history enhances v$session_wait by simply recording each active session's last 10 waits;
v$active_session_history is the core of ASH: it records the wait history of active sessions, sampled once per second; this part is kept in memory and is expected to hold about one hour of records;
wrh$_active_session_history is the persistent store for v$active_session_history in the AWR: the information recorded in v$active_session_history is flushed into it on a regular basis (once per hour) and retained for one week by default for analysis;
dba_hist_active_sess_history is a view that joins wrh$_active_session_history with several other views; we usually access the historical data through this view.
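For history older than the hour kept in memory, the same style of query runs against dba_hist_active_sess_history instead; for example (a sketch, with placeholder snapshot IDs):

```sql
-- Top wait events recorded between two snapshots
select event, count(*) samples
  from dba_hist_active_sess_history
 where snap_id between 214 and 215
   and session_state = 'WAITING'
 group by event
 order by samples desc;
```

Remember that only one sample in ten survives into this view, so the counts are coarser than those from v$active_session_history.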
As mentioned above, the MMON and MMNL background processes flush the sampled data from the ASH buffers (by default once per hour). So where is the collected data stored? AWR uses many tables to store the collected performance statistics. The tables belong to the SYS user, live in the SYSAUX tablespace, and are named in the formats WRM$_*, WRH$_*, WRI$_* and WRR$_*. The AWR historical session data is stored in the underlying table wrh$_active_session_history (a partitioned table).
WRM$_* tables store AWR metadata (such as the databases being examined and the snapshots taken); M stands for metadata.
WRH$_* tables store the historical statistics of the sampled snapshots; H stands for historical data.
WRI$_* tables store data related to the database advisory (advisor) features.
WRR$_* tables store information related to the Workload Capture and Workload Replay features (new in 11g).
Several views with the DBA_HIST_ prefix are built on these tables, and you can use them to write your own performance diagnostic tools. The view names map directly to the tables; for example, the view DBA_HIST_SYSMETRIC_SUMMARY is built on the WRH$_SYSMETRIC_SUMMARY table.
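To see which DBA_HIST_ views your release actually provides, a simple data-dictionary query is enough:

```sql
-- List the AWR history views available in this release
select table_name, comments
  from dictionary
 where table_name like 'DBA_HIST%'
 order by table_name;
```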
Note: ASH holds the system's most recent session wait records, so it can be used to diagnose the current state of the database. AWR's information can lag by as much as an hour (though this can be adjusted manually), so it cannot be used to diagnose the database's current state; instead it serves as a reference for tuning the database's performance over a period of time.
3.2 Setting up AWR

To use AWR, the STATISTICS_LEVEL parameter must be set. It takes one of three values: BASIC, TYPICAL, ALL.
A. TYPICAL: the default. Enables all the automatic features and collects their information in the database. The information collected includes Buffer Cache Advice, MTTR Advice, Timed Statistics, Segment Level Statistics, PGA Advice, and so on. You can run "select statistics_name, activation_level from v$statistics_level order by 2;" to see what is collected at each level. Oracle recommends keeping the default value, TYPICAL.
B. ALL: collects everything in TYPICAL plus additional information, including plan execution statistics and timed OS statistics (see the query above). At this setting, collecting the diagnostic information may consume too many resources.
C. BASIC: turns off all the automatic features.
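Checking and changing the level is straightforward; a minimal sketch from SQL*Plus:

```sql
-- Current setting
show parameter statistics_level

-- What each level collects (the query suggested above)
select statistics_name, activation_level
  from v$statistics_level
 order by 2;

-- Restore the recommended default
alter system set statistics_level = typical;
```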
3.3 AWR data collection and management

3.3.1 Data
The information AWR records is in fact not just the ASH data: it also collects statistics and wait information covering every aspect of the database's operation, for diagnostic analysis. At fixed intervals, AWR samples all of its important statistics and load information and stores the samples in the AWR. You could say the ASH information is what gets saved into the AWR table wrh$_active_session_history: ASH is a subset of AWR.
These samples are stored in the SYSAUX tablespace. When the SYSAUX tablespace fills up, AWR automatically overwrites the oldest information and records a message in the alert log:
ORA-1688: unable to extend table SYS.WRH$_ACTIVE_SESSION_HISTORY partition WRH$_ACTIVE_3533490838_1522 by 128 in tablespace SYSAUX
3.3.2 Collection and management

AWR saves the system's performance diagnostic information permanently, and the data is owned by the SYS user. After a period of time you may want to get rid of this information, and sometimes, for a performance diagnosis, you may need to change the sampling frequency at which system snapshots are taken. The dbms_workload_repository package in Oracle 10g provides many procedures for managing snapshots and setting baselines.
The AWR retention period can be changed through the retention parameter. The default is seven days and the minimum is one day; setting retention to zero turns off automatic purging. If AWR finds that SYSAUX is out of space, it reuses space by removing the oldest snapshots, and it also warns the DBA in the alert log that SYSAUX is full.
The AWR sampling frequency can be changed through the interval parameter. The minimum is 10 minutes, the default is 60 minutes, and typical values are 10, 20, 30, 60, 120 and so on (the unit is minutes). Setting interval to 0 turns off automatic snapshot capture.
Both the MMON snapshot frequency (hourly) and the data retention time (7 days) can be modified by the user. To view the current settings: select * from dba_hist_wr_control;
For example, to take a snapshot every 20 minutes and retain the data for two days:
begin
  dbms_workload_repository.modify_snapshot_settings(interval  => 20,
                                                    retention => 2 * 24 * 60);
end;
/
3.4 Manually creating and deleting AWR snapshots

AWR snapshots are generated automatically by Oracle, but they can also be created, deleted and modified manually through the DBMS_WORKLOAD_REPOSITORY package. You can use the desc command to list the procedures in the package. Only a few common operations are shown below:
SQL> select count(*) from wrh$_active_session_history;

  COUNT(*)
----------
       317

SQL> begin
  2    dbms_workload_repository.create_snapshot();
  3  end;
  4  /

PL/SQL procedure successfully completed.

SQL> select count(*) from wrh$_active_session_history;

  COUNT(*)
----------
       320
Manually delete a specified range of snapshots:

SQL> select * from wrh$_active_session_history;
SQL> begin
  2    dbms_workload_repository.drop_snapshot_range(low_snap_id  => 96,
  3                                                 high_snap_id => 96,
  4                                                 dbid         => 1160732652);
  5  end;
  6  /
SQL> select * from wrh$_active_session_history where snap_id = 96;

no rows selected
3.5 Setting and removing a baseline

A baseline is a mechanism that lets you mark snapshot sets taken at important times. A baseline is defined over a pair of snapshots, identified by their snapshot sequence numbers. A typical performance-tuning exercise starts by capturing a measurable baseline set, then making the changes, and then capturing another baseline set; you can compare the two sets to check the effect of the changes. AWR can run the same kind of comparison over any existing pair of snapshots.
Suppose a highly resource-intensive process named apply_interest runs between 1:00 and 3:00 pm, corresponding to snapshot IDs 95 to 98. We can define a baseline named apply_interest_1 over these snapshots:

SQL> select * from dba_hist_baseline;
SQL> select * from wrm$_baseline;
SQL> exec dbms_workload_repository.create_baseline(95, 98, 'apply_interest_1');

After some tuning steps, we can create another baseline (say, apply_interest_2) and then compare the metrics using only the snapshots attached to the two baselines:

SQL> exec dbms_workload_repository.create_baseline(92, 94, 'apply_interest_2');

After the analysis, drop_baseline() can be used to delete a baseline; the snapshots themselves are retained (unless cascade-deleted). Moreover, the purge routine that removes old snapshots will skip snapshots belonging to a baseline, allowing further analysis later. To delete a baseline:

SQL> exec dbms_workload_repository.drop_baseline(baseline_name => 'apply_interest_1', cascade => false);
4 AWR in a RAC environment

In a RAC environment, each snapshot covers every node of the cluster (the snapshots are stored in the shared database, not per instance). The snapshot data from the different nodes shares the same snap_id and is distinguished by instance id. In general, RAC snapshots are captured at the same time. You can also take snapshots manually with Database Control; manual snapshots complement the automatic system snapshots.
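The per-instance rows that share one snap_id can be seen in dba_hist_snapshot; for example (a sketch):

```sql
-- In RAC, one snap_id appears once per instance; instance_number tells the rows apart
select snap_id, instance_number, begin_interval_time, end_interval_time
  from dba_hist_snapshot
 order by snap_id, instance_number;
```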
5 ADDM (Automatic Database Diagnostic Monitor)

With the AWR data warehouse in place, Oracle can naturally build higher-level intelligent features on top of it and get more value out of AWR. This is another feature introduced in Oracle 10g: the Automatic Database Diagnostic Monitor (ADDM). Through ADDM, Oracle aims to make database maintenance, management and tuning more automated and simpler.
ADDM periodically examines the state of the database and, using a built-in expert system, automatically identifies potential performance bottlenecks and produces tuning measures and recommendations. It is built entirely into the Oracle database system and runs very efficiently, with almost no impact on the database's overall performance. The newer versions of Database Control present ADDM's findings and recommendations in a convenient, intuitive form and guide the administrator through implementing them step by step, so performance problems can be resolved quickly.
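Outside Database Control, an ADDM report can also be produced from SQL*Plus using the addmrpt.sql script that ships alongside awrrpt.sql; like the AWR report, it prompts for a begin and end snapshot:

```sql
SQL> @?/rdbms/admin/addmrpt.sql
```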
6 Common AWR operations

AWR is configured through the dbms_workload_repository package.


6.1 Adjust the AWR snapshot frequency and retention policy; for example, change the collection interval to once every 30 minutes and retain the data for 5 days (the units are minutes):
SQL> exec dbms_workload_repository.modify_snapshot_settings(interval => 30, retention => 5 * 24 * 60);
6.2 Turn off AWR; setting the interval to 0 turns off automatic snapshot capture:
SQL> exec dbms_workload_repository.modify_snapshot_settings(interval => 0);
6.3 Manually create a snapshot:
SQL> exec dbms_workload_repository.create_snapshot();
6.4 View the snapshot data:
SQL> select * from sys.wrh$_active_session_history;
6.5 Manually delete a specified range of snapshots:
SQL> exec dbms_workload_repository.drop_snapshot_range(low_snap_id => 973, high_snap_id => 999, dbid => 262089084);
6.6 Create a baseline to preserve the data for later analysis and comparison:
SQL> exec dbms_workload_repository.create_baseline(start_snap_id => 1003, end_snap_id => 1013, baseline_name => 'apply_interest_1');
6.7 Delete a baseline:
SQL> exec dbms_workload_repository.drop_baseline(baseline_name => 'apply_interest_1', cascade => FALSE);
6.8 Export AWR data so it can be migrated to another database for later analysis:
SQL> exec dbms_swrf_internal.awr_extract(dmpfile => 'awr_data.dmp', dmpdir => 'DIR_BDUMP', bid => 1003, eid => 1013);
6.9 Load the exported AWR data into another database:
SQL> exec dbms_swrf_internal.awr_load(schname => 'AWR_TEST', dmpfile => 'awr_data.dmp', dmpdir => 'DIR_BDUMP');
Then move the loaded AWR data into the TEST schema:
SQL> exec dbms_swrf_internal.move_to_awr(schname => 'TEST');
7 Analyzing the AWR report

See the next part.

http://www.databaseskill.com/3286705/

26/09/2015

The Oracle AWR introduce and analysis of the final report - Database - Database S... Pgina 1 de 13

The Oracle AWR introduce and analysis of the final report


Tag: buffer, sql, Oracle management, session Category: Database Author: luona322 Date: 2011-01-17

Malwarebytes New Version


Detect Threats Antivirus Will Miss. The New Malwarebytes 2.0. Buy Now!

y: Arial; mso-ascii-font-family: Calibri; mso-ascii-theme-font: minor-Latin; the mso-FAREAST-font-family: Arial; mso-Fareast-theme-font: minor-FAREAST; MSOhansi-font-family: Calibri; mso-hansi-theme-font: minor-latin '> presentation and report analyzes final

1 AWR basic operation


C: \> sqlplus "/ as sysdba"
SQL * Plus: Release 10.2.0.1.0 - Productionon Wednesday May 25 08:20:25 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise EditionRelease 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data MiningScoring Engine options
SQL> @ D: \ oracle \ product \ 10.2.0 \ db_2 \ RDBMS \ ADMIN \ awrrpt.sql
Current Instance
~~~~~~~~~~~~~~~~
DBId DB Name Inst Num Instance
------------------------------------------3556425887 TEST01 1 test01
Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plaintext report?
Enter 'html' for an HTML report, or 'text'for plain text
Defaults to 'html'
Input value of report_type:
Type Specified: html
Instances in this Workload Repositoryschema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DBId Inst Num DB Name Instance Host
-------------------------------------------------- -----* 3556425887 1 TEST01 test01 PCE-TSG-036
Using 3556425887 for database Id
Using 1 for instance number
Specify the number of days of snapshots tochoose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will resultin the most recent
(N) days of snapshots being listed. Pressing <return> without
specifying a number lists all completedsnapshots.
The value input num_days: 2
Listing the last 2 days of CompletedSnapshots
Snap
Instance DB Name Snap Id Snap Started Level
-------------------------------------------------- -----test01 TEST01 214 24 5 2011 07:53 1
09:00 1 215 245 2011

http://www.databaseskill.com/1566063/

26/09/2015

The Oracle AWR introduce and analysis of the final report - Database - Database S... Pgina 2 de 13

10:01 1 216 245 2011


11:00 1 217 245 2011
12:00 1 218 245 2011
13:01 1 219 245 2011
14:00 1 220 245 2011
15:00 1 221 245 2011
16:00 1 222 245 2011
17:00 1 223 245 2011
07:51 1 224 255 2011
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To input begin_snap: 223
Begin Snapshot Id specified: 223
To input end_snap: 224
End Snapshot Id specified: 224
declare
*
Line 1 error:
ORA-20200: The instance was shutdownbetween snapshots 223 and 224
ORA-06512: in LINE 42
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0-Production
With the Partitioning, OLAP and Data MiningScoring Engine options disconnect
One more time:
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To input begin_snap: 214
Begin Snapshot Id specified: 214
To input end_snap: 215
End Snapshot Id specified: 215
Then enter the name of the report you want to generate ....
......
<P>
<P>
End of Report
</ BODY> </ HTML>
Report written to awrrpt_1_0524_08_09.html
SQL>
Generated report to postpone doing. We first get to know ASH and AWR.

Recognizing ASH (Active Session History)


2.1 Ash (ActiveSession History) architecture
Oracle10g before the current session record stored in the v $ session; session in a wait state will be copied placed in a
The v $ session_wait. When the connection is disconnected, the original connection information in the v $ session and v $ SESSION_WAIT
Will be deleted. No view can provide information about session each time point in the history of doing, and waiting for
What resources. The original v $ session and v $ session_wait just display the current session is running SQL and wait
What resources.
Oracle10g, Oracle provides Active Session History (ASH) to solve this problem. Every 1 second

http://www.databaseskill.com/1566063/

26/09/2015

The Oracle AWR introduce and analysis of the final report - Database - Database S... Pgina 3 de 13

ASH will currently active session information is recorded in a buffer of the SGA (recycled). ASH, this too
Process is called sampling (Sampling). ASH default every second collection v $ session active sessions, recording sessions waiting
Event, an inactive session will not be sampled OK the interval _ash_sampling_interval parameters.
In 10g there is a new view: the v $ SESSION_WAIT_HISTORY. This view is saved for each active session in
v $ session_wait in wait in the last 10 events, but this data for a period of time performance status monitoring is not enough
To solve this problem, in 10g new added to a view: the V $ ACTIVE_SESSION_HISTORY. This is ASH
(Active session history).

2.2 ASH strategy adopted by --Typical case, in order to diagnose the state of the current database, you need more information in the recent five to ten minutes. However, since information on the
activities of the recording session is time and space, ASH adopted the strategy is: to save the the activity session information in a wait state, per second from the v $
session_wait and v $ session sampling, and sampling information (Note: ASH the sampled data is stored in memory) stored in memory.

2.3 Ash work principle --Active Session sampling (related view the information collected per second) data stored in the SGA allocated to the SGA in the size of the ASH from v $ sgastat is in
the query (the shared pool under Ash buffers), the space can be recycled, if required, the previous information can be new information coverage. All the activities of
all session should be recorded is very resource consuming. ASH only be obtained from the V $ SESSION and a few view the session information of those activities.
ASH every 1 second to collect session information, not through SQL statement, instead of using direct access to memory is relatively more efficient.
Need to sample data per second, so ASH cache very large amount of data, all of them flushed to disk, it will be very consume disk space, so the ASH data in the
cache is flushed to the AWR related tables when to take the following Strategy:
1 MMON default every 60 minutes (can be adjusted) ash buffers data in the 1/10 is flushed to disk.
The MMNL default when ASH buffers full 66% of the ash buffers 1/10 data is written to disk (specific 1/10 which is the data, follow the FIFO principle).
The MMNL written data percentage of 10% the percentage of total ash buffers in the amount of sampled data is written to disk data (rather than accounting for the
proportion of the the Ash Buffers total size)
4 To save space, the data collected by the AWR in default automatically cleared after 7 days.
Specific reference implicit parameter:
_ash_sampling_interval: sampled once per second
_ash_size: ASH Buffer minimum value defined, the default is 1M
_ash_enable: Enable ASH sampling
_ash_disk_write_enable: sampling data written to disk
_ash_disk_filter_ratio: the sampling data written to disk accounted for a percentage of the total sampling data ASHbuffer, default 10%
_ash_eflush_trigger: ASH buffer full would later write, default 66%
_ash_sample_all: If set to TRUE, all sessions will be sampled, including those that session is idle waiting. The default is FALSE.
ASH cache is a fixed size of the SGA area corresponding to each CPU 2M space. The ASH cache can not over sharedpool the 5% or 2% of the sga_target.
The data inquiry: v $ active_session_history ASH buffers
ASH buffers to flush data to the table: WRH $ _active_session_history
(A partition table, WRH = WorkloadRepository History)
Respect to the table View: dba_hist_active_sess_history,

2.4 ASH --This view by the v $ ACTIVE_SESSION_HISTORY view access to relevant data, can also get some performance information.
----------Sampling information
----------SAMPLE_ID sample ID
SAMPLE_TIME sampling time
IS_AWR_SAMPLE AWR sampling data is 1/10 of the basic data
---------------------Information that uniquely identifies the session
---------------------SESSION_ID corresponds to the SID V $ SESSION
SESSION_SERIAL # uniquely identifies a session objects

http://www.databaseskill.com/1566063/

26/09/2015

The Oracle AWR introduce and analysis of the final report - Database - Database S... Pgina 4 de 13

SESSION_TYPE background or foreground program foreground / background


USER_ID Oracle user identifier; maps to V $ SESSION.USER #
SERVICE_HASH Hash that identifies the Service; maps toV $ ACTIVE_SERVICES.NAME_HASH
PROGRAM procedures
MODULE procedures corresponding software and versions
ACTION
CLIENT_ID Client identifier of the session
---------------------session executing SQL statement information
---------------------The SQL_ID sampling executing SQL ID
The executing SQL SQL_CHILD_NUMBER sampling sub-cursor Number
The SQL_PLAN_HASH_VALUE SQL plan hash value
SQL_OPCODE pointed out that the SQL statement at which stage of the operation corresponds to V $ SESSION.COMMAND
QC_SESSION_ID
QC_INSTANCE_ID
---------------------session wait state
---------------------SESSION_STATE session state Waiting / ON CPU
WAIT_TIME
---------------------session wait event information
---------------------EVENT
EVENT_ID
Event #
SEQ #
P1
P2
P3
TIME_WAITED
---------------------the session waits object information
---------------------CURRENT_OBJ #
CURRENT_FILE #
CURRENT_BLOCK #

Of AWR (AutomaticWorkload Repository)


ASH sample data is stored in memory. The memory space allocated to the ASH is limited, when the allocated space
Occupied, the old record will be overwritten; database is restarted, all these the ASH information will disappear.
Thus, for long-term performance of the detection oracle is impossible. Oracle10g, permanently retained ASH
The method of information, which is AWR (automatic workload repository). Oracle recommends using AWR replace
Statspack (10gR2 still retains the statspack).

3.1 ASH to AWR


ASH and AWR process can use the following icon Quick description:

http://www.databaseskill.com/1566063/

26/09/2015

The Oracle AWR introduce and analysis of the final report - Database - Database S... Pgina 5 de 13

v$session -> v$session_wait -> v$session_wait_history (in practice this step is bypassed)
-> v$active_session_history (ASH) -> wrh$_active_session_history (AWR)
-> dba_hist_active_sess_history
v$session is the source from which all database activity information starts;
v$session_wait records the current, real-time wait information of active sessions;
v$session_wait_history is an enhancement of v$session_wait that simply records the last 10 waits of each active session;
v$active_session_history is the core of ASH: it records the historical wait information of active sessions, sampled once per second;
this part is kept in memory and is expected to hold about one hour of records;
wrh$_active_session_history is the AWR storage pool for v$active_session_history: the records in
v$active_session_history are flushed to it regularly (once per hour) and kept for one week
by default for analysis;
the view dba_hist_active_sess_history is a join of wrh$_active_session_history with several other
views; it is the view through which we usually access the historical data.
As mentioned above, the MMON and MMNL background processes sample the ASH buffers every hour by default. Where is the collected data stored?
AWR uses a number of tables to store the collected performance statistics; they are owned by SYS, stored in the SYSAUX tablespace, and named
with the prefixes WRM$_, WRH$_, WRI$_ and WRR$_. The AWR history is stored in the underlying table wrh$_active_session_history
(a partitioned table).
WRM$_* tables store AWR metadata (such as the databases examined and the snapshots collected); M stands for metadata.
WRH$_* tables store the historical statistics of the sampled snapshots; H stands for historical data.
WRI$_* tables store data related to the database advisory features (advisor); I stands for advisor.
WRR$_* tables hold information for the 11g Workload Capture and Workload Replay features.
On top of these tables Oracle builds a number of views prefixed DBA_HIST_, which you can use to write your own performance diagnostic tools. The view names map directly
to the tables; for example, the view DBA_HIST_SYSMETRIC_SUMMARY is built on the WRH$_SYSMETRIC_SUMMARY table.
Note: ASH holds the most recent waits recorded for sessions in the system and can be used to diagnose the current state of the database,
whereas AWR information can lag by up to 1 hour (adjustable), so its samples do not reflect the
current state; it serves instead as a reference for tuning database performance over a period.

3.2 Setup AWR


To use AWR, the STATISTICS_LEVEL parameter must be set; it takes one of three values: BASIC, TYPICAL, ALL.
A. TYPICAL - the default value; enables all automatic features and collects their information in the database. The information collected includes Buffer Cache
Advice, MTTR Advice, Timed Statistics, Segment Level Statistics, PGA Advice and so on. You can see what is collected with: select statistics_name, activation_level from v$statistics_level
order by 2; Oracle recommends the default value, TYPICAL.
B. ALL - if set to ALL, everything under TYPICAL is collected plus additional information, including
plan execution statistics for SQL queries and timed OS statistics (see A).
At this setting the server may spend too many resources collecting diagnostic information.
C. BASIC - disables all automatic features.

3.3 AWR data collection and management


3.3.1 Data
In fact, the information AWR records is not just ASH; it also collects statistics and wait information covering every aspect of database operation,
for diagnosis and analysis.
AWR samples all of its important statistics and load information at a fixed interval and stores the samples in the repository.
Put another way: the ASH information is saved into the AWR table
wrh$_active_session_history; ASH is a subset of AWR.
The sampled data is stored in the SYSAUX tablespace. When SYSAUX fills up, AWR automatically overwrites old
information and records a message in the alert log:
ORA-1688: unable to extend table SYS.WRH$_ACTIVE_SESSION_HISTORY partition WRH$_ACTIVE_3533490838_1522 by 128 in tablespace SYSAUX
3.3.2 Collection and management
AWR keeps system performance diagnostic information permanently and is owned by the SYS user. After a while you may want to clear this
information out; at other times, for performance diagnosis, you may need to change the sampling frequency at which system snapshots are taken.
In Oracle 10g the dbms_workload_repository package provides many procedures with which you can manage snapshots and set baselines.
Oracle 10g in the package dbms_workload_repository provide a lot of processes, these processes, you can manage snapshots and set baseline.


The AWR retention period can be changed by modifying the RETENTION parameter. The default is seven days and the minimum is one day;
setting RETENTION to zero disables automatic purging. When AWR finds that SYSAUX is short of space, it reclaims space by removing
the oldest snapshots, and it also sends a warning to the DBA that SYSAUX is out of space
(in the alert log). The AWR sampling frequency can be changed by modifying the INTERVAL parameter. The minimum
value is 10 minutes and the default is 60; typical values are 10, 20, 30, 60, 120 and so on (the unit is minutes). Setting INTERVAL to 0 disables
automatic snapshot capture.
Both the frequency at which MMON takes snapshots (hourly by default) and the retention of the collected data (7 days by default) can thus be modified by the user.
To view the current settings: select * from dba_hist_wr_control;
For example, to change the frequency to a snapshot every 20 minutes and retain the data for two days:
begin
  dbms_workload_repository.modify_snapshot_settings (interval => 20, retention => 2 * 24 * 60);
end;
/
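Both arguments to modify_snapshot_settings are given in minutes, which makes unit mistakes easy. As a rough illustration (plain Python, not Oracle code; the helper name is ours, and the limits are the ones quoted above), a small function can do the day-to-minute conversion and enforce the stated minimums:

```python
def modify_snapshot_settings_args(interval_minutes, retention_days):
    """Translate human-friendly values into the minute-based arguments
    expected by dbms_workload_repository.modify_snapshot_settings.

    Per the text above: interval is 0 (disable automatic capture) or at
    least 10 minutes; retention is 0 (never purge) or at least 1 day.
    """
    if interval_minutes != 0 and interval_minutes < 10:
        raise ValueError("interval must be 0 or >= 10 minutes")
    retention_minutes = int(retention_days * 24 * 60)
    if retention_minutes != 0 and retention_minutes < 24 * 60:
        raise ValueError("retention must be 0 or >= 1 day")
    return {"interval": interval_minutes, "retention": retention_minutes}

# 20-minute snapshots kept for two days -> retention => 2880 minutes
print(modify_snapshot_settings_args(20, 2))
```

This mirrors the example above: two days of retention becomes retention => 2880.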

3.4 Manually creating and deleting AWR snapshots


AWR snapshots are generated automatically by Oracle, but they can also be created, deleted and modified manually through the DBMS_WORKLOAD_REPOSITORY package.
The DESC command can be used to inspect the procedures in the package. Only a few commonly used ones are shown below:
SQL> select count(*) from wrh$_active_session_history;
  COUNT(*)
----------
       317
SQL> begin
  2    dbms_workload_repository.create_snapshot();
  3  end;
  4  /
PL/SQL procedure successfully completed.
SQL> select count(*) from wrh$_active_session_history;
  COUNT(*)
----------
       320
Manually delete a specified range of snapshots:
SQL> select * from wrh$_active_session_history;
SQL> begin
  2    dbms_workload_repository.drop_snapshot_range(low_snap_id => 96, high_snap_id => 96, dbid => 1160732652);
  3  end;
  4  /
SQL> select * from wrh$_active_session_history where snap_id = 96;
no rows selected

3.5 Setting and removing baselines


A baseline is a mechanism that lets you tag the set of snapshots covering an important period. A baseline is defined
between a pair of snapshots, identified by their snapshot sequence numbers; each baseline has exactly one pair of snapshots. A typical
performance-tuning exercise starts by collecting a measurable baseline set, then makes changes, and then collects another baseline set;
the two sets can be compared to check the effect of the changes. In AWR, the same kind of comparison can be performed
between existing collections of snapshots.
Suppose a highly resource-intensive process named apply_interest runs between 1:00 and 3:00 pm,
corresponding to snapshot IDs 95 through 98. We can define a baseline named apply_interest_1 over these snapshots:
SQL> select * from dba_hist_baseline;
SQL> select * from wrm$_baseline;
SQL> exec dbms_workload_repository.create_baseline(95, 98, 'apply_interest_1');


After some tuning steps we can create another baseline, say apply_interest_2, and then
compare the metrics using only the snapshots of the two baselines:
SQL> exec dbms_workload_repository.create_baseline(92, 94, 'apply_interest_2');
After the analysis, drop_baseline() can be used to delete a baseline; the snapshots themselves are retained (unless the delete cascades). Moreover,
when the purge routine deletes old snapshots, snapshots belonging to a baseline are not purged, so further analysis remains possible.
To delete a baseline:
SQL> exec dbms_workload_repository.drop_baseline(baseline_name => 'apply_interest_1', cascade => false);

4 AWR in a RAC environment


In a RAC environment, each snapshot covers all nodes of the cluster (snapshots are stored in the shared database, not per instance). Each node's snapshot data has the same
snap_id and is distinguished by instance id. In general, RAC snapshots are captured at the same time. You can also take manual snapshots with Database Control;
manual snapshots complement the automatic ones.

5 ADDM
Automatic Database Diagnostic Monitor: with the AWR data warehouse in place, Oracle can naturally build higher-level, more intelligent
applications on top of it and get more value out of AWR. This is another feature introduced in Oracle 10g: the Automatic Database Diagnostic
Monitor (ADDM). Through ADDM, Oracle aims to make database maintenance, management
and optimization more automated and simpler.
ADDM periodically examines the state of the database and, using a built-in expert system, automatically identifies potential database performance
bottlenecks and produces tuning measures and recommendations. It is built entirely into the Oracle database kernel, runs very efficiently, and has almost no impact on
the overall performance of the database. The new version of Database Control presents ADDM findings and recommendations in a convenient and intuitive form and guides the
administrator through implementing the ADDM recommendations step by step, quickly resolving performance problems.

6 Common AWR operations


AWR is configured through the dbms_workload_repository package.
6.1 Adjust the AWR snapshot frequency and retention policy; for example, change the collection interval to every 30 minutes and retain the data for five days (units are
minutes):
SQL> exec dbms_workload_repository.modify_snapshot_settings(interval => 30, retention => 5 * 24 * 60);
6.2 Turn AWR off; an interval of 0 disables automatic snapshot capture:
SQL> exec dbms_workload_repository.modify_snapshot_settings(interval => 0);
6.3 Manually create a snapshot:
SQL> exec dbms_workload_repository.create_snapshot();
6.4 View snapshots:
SQL> select * from sys.wrh$_active_session_history;
6.5 Manually delete a specified range of snapshots:
SQL> exec dbms_workload_repository.drop_snapshot_range(low_snap_id => 973, high_snap_id => 999, dbid => 262089084);
6.6 Create a baseline to keep the data for later analysis and comparison:
SQL> exec dbms_workload_repository.create_baseline(start_snap_id => 1003, end_snap_id => 1013, baseline_name => 'apply_interest_1');
6.7 Delete a baseline:
SQL> exec dbms_workload_repository.drop_baseline(baseline_name => 'apply_interest_1', cascade => FALSE);
6.8 Export AWR data so it can be migrated to another database for later analysis:
SQL> exec dbms_swrf_internal.awr_extract(dmpfile => 'awr_data.dmp', dmpdir => 'DIR_BDUMP', bid => 1003, eid => 1013);
6.9 Load the AWR data file into another database:
SQL> exec dbms_swrf_internal.awr_load(schname => 'AWR_TEST', dmpfile => 'awr_data.dmp', dmpdir => 'DIR_BDUMP');
Then transfer the AWR data into the TEST schema:
SQL> exec dbms_swrf_internal.move_to_awr(schname => 'TEST');
7 Analyzing the AWR report
File uploads are prohibited here, so the company's AWR report could not be attached; the field descriptions below should make any report readable.
DB Time = CPU time + wait time (excluding idle waits and background processes). DB time records the time the server spent on database operations (excluding background
processes) plus non-idle waits.
The system has 24 CPU cores and the snapshot window was about 1380.04 minutes, giving a total of 1380.04 * 24 = 33120.96 minutes of CPU time. DB Time was 2591.15
minutes, meaning the CPUs spent 2591.15 minutes handling non-idle waits and Oracle operations (for example, logical reads).


In other words, the CPUs spent 2591.15/33120.96 * 100% ≈ 7.82% of their time handling Oracle operations (excluding background processes); the server load
is quite low. The Elapsed time and DB Time in an AWR report thus give a feel for the database load.
That is: DB Time / (Elapsed * CPU count) * 100% gives the proportion of CPU time spent handling Oracle operations (excluding
background processes). The higher the proportion, the higher the load.
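The load calculation just described can be sketched as a one-line helper (illustrative Python, not part of any Oracle API; the sample figures are the ones quoted from the report above):

```python
def db_load_pct(db_time_minutes, elapsed_minutes, cpu_count):
    """Share of total available CPU time spent on non-idle Oracle work
    (excluding background processes):
    DB Time / (Elapsed * CPU count) * 100."""
    return round(100.0 * db_time_minutes / (elapsed_minutes * cpu_count), 2)

# Figures from the report discussed above: 24 cores, ~1380.04-minute window
print(db_load_pct(2591.15, 1380.04, 24))  # -> 7.82
```

A value this low (under 10%) is the "relatively low load" conclusion drawn in the text; values approaching 100% would mean the CPUs are saturated with database work.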
The Load Profile describes the overall state of the instance.
Redo size: the average redo generated per second and per transaction; here roughly 161K bytes per second and 5K per transaction.
Physical writes: an average of 66.52 blocks physically written per second.
Physical reads / Logical reads = 430.48 / 38788.19 = 1.1%: only about 1.1% of logical reads led to physical I/O. The average transaction performed 1351.11 logical reads
(in blocks); this number should be as small as possible. The unit of reads is blocks.
Parses: 1454.21 parses per second shows a busy system, with 35.79 hard parses per second (hard parses are about 2.5% of all parses); that is a new SQL statement to
deal with every 1/35.79 ≈ 0.028 seconds of CPU time, indicating many distinct SQL statements in the system. Bind variables and stored procedures are recommended.
Sorts: 70.30 sorts per second is on the high side.
Transactions: the number of transactions per second, reflecting how heavy the database workload is.
% Blocks changed per Read: 1 - 2.43% = 97.57% of logical reads were of read-only rather than modified blocks; on average only 2.43% of the blocks touched were
updated. That is, over the 23 hours covered by the snapshots, DML updates accounted for 2.43% of all block operations (logical reads).
Recursive Call %: 71.77% of SQL was executed through PL/SQL (recursive calls).
Rollback per transaction %: the percentage of transactions rolled back; the smaller the better. 19.95 is very high (0.1995 rollbacks per transaction): the system has a
rollback problem, and rollbacks are very expensive. On average one in every 5 (1/0.1995) transactions rolls back. Combined with the 28.71 transactions per second
above, that is 28.71/5 ≈ 5.7 rollbacks per second. Such a high rollback rate should be investigated
carefully.
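The derived Load Profile figures quoted above (the hard-parse share of parses, and rollbacks per second from the rollback percentage) follow from simple arithmetic. A small sketch in Python (illustrative only; the helper names are ours):

```python
def hard_parse_pct(hard_parses_per_sec, parses_per_sec):
    """Share of all parses that were hard parses, in percent."""
    return round(100.0 * hard_parses_per_sec / parses_per_sec, 1)

def rollbacks_per_sec(txn_per_sec, rollback_per_txn_pct):
    """Transactions rolled back each second, from the two Load
    Profile figures quoted in the text."""
    return round(txn_per_sec * rollback_per_txn_pct / 100.0, 2)

print(hard_parse_pct(35.79, 1454.21))   # -> 2.5 (% hard parses)
print(rollbacks_per_sec(28.71, 19.95))  # -> 5.73 (rollbacks per second)
```

The 5.73 result matches the text's rougher "28.71/5 ≈ 5.7 per second" estimate.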
Instance efficiency percentages; the target value for each is 100%.
Buffer Nowait %: the proportion of buffer requests satisfied without waiting (the buffer cache request hit rate). If Buffer Nowait < 99%, there may be hot blocks
(check the tch column of x$bh and v$latch_children for the cache buffers chains latch).
Redo NoWait %: the proportion of redo buffer allocations obtained without waiting.
Buffer Hit %: the hit ratio of data blocks in the buffer cache; it should normally be above 95%. Below 95%, important parameters may need adjusting; below
90%, consider increasing db_cache_size. Note, however, that heavy reads through unselective indexes can also inflate this value (db file sequential read).
In-memory Sort %: the proportion of sorts performed in memory. If it is too low, consider increasing the PGA or reviewing the application to reduce sorting.
Library Hit %: mainly the hit rate of SQL in the library cache of the shared pool; it is usually above 95%. Otherwise consider enlarging the shared pool, using bind
variables, or modifying parameters such as cursor_sharing (be careful when changing this parameter).
Soft Parse %: the percentage of soft parses, which can roughly be taken as the hit rate of SQL in the shared pool. Below 95%, consider bind variables; below
80%, your SQL is essentially never being reused.
Execute to Parse %: the ratio of SQL executions to parses. If a new SQL statement is parsed, executed, and then never executed again in
the same session, this ratio is 0; it should be as high as possible. Here, 36.04% means that of the SQL statements executed in the same session, only
36.04% were already parsed (and did not need parsing again); the database sees relatively many new SQL statements.
Execute to Parse = round(100 * (1 - Parses/Executions), 2). If parses exceed executions this value can become negative, which hurts performance.
The closer to 100% the better (i.e. Parses/Executions close to 0, meaning almost all SQL is already parsed and only needs to be run).
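The Execute to Parse formula above is easy to sanity-check in code (illustrative Python; the sample inputs are hypothetical values chosen to reproduce the report's 36.04%, since the report's raw parse and execution counts are not given):

```python
def execute_to_parse_pct(parses, executions):
    """Execute to Parse % = round(100 * (1 - Parses/Executions), 2);
    goes negative when statements are parsed more often than executed."""
    return round(100.0 * (1.0 - parses / executions), 2)

print(execute_to_parse_pct(6396, 10000))  # -> 36.04, as in the report
print(execute_to_parse_pct(120, 100))     # -> -20.0: parses exceed executions
```

The negative case corresponds to the warning in the text: a session that parses more often than it executes wastes CPU on parsing.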
Latch Hit %: the probability that each latch request succeeds. Keep it above 99%; below that there is latch contention and a serious performance problem,
addressed for example with bind variables, dispersing hot blocks, or adjusting a too-small shared pool.
Parse CPU to Parse Elapsd %:
Calculated as: Parse CPU to Parse Elapsd % = 100 * (parse time cpu / parse time elapsed). That is: the CPU time actually spent parsing divided by the total elapsed parse
time (CPU time plus time spent waiting for resources). Here it was 89.28%: for every CPU second spent parsing, about 1/0.8928 = 1.12 seconds of wall-clock time
elapsed, so 0.12 seconds were spent waiting for resources. A ratio of 100% means the CPU time equals the elapsed time, with no waiting at all. The larger the value,
the less time was lost waiting for resources.
% Non-Parse CPU: calculated as % Non-Parse CPU = round(100 * (1 - PARSE_CPU/TOT_CPU), 2). If it is too low, parsing is consuming
too much time. The closer to 100% the better: the database then spends most of its time executing SQL statements rather than parsing them.
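Both parse-time ratios above can be checked the same way (illustrative Python; the helper names are ours, and the 89.28% figure is the one quoted from the report):

```python
def parse_cpu_to_parse_elapsed_pct(parse_time_cpu, parse_time_elapsed):
    """100 * (parse time cpu / parse time elapsed): how much of the
    elapsed parse time was real CPU work rather than waiting."""
    return round(100.0 * parse_time_cpu / parse_time_elapsed, 2)

def non_parse_cpu_pct(parse_cpu, total_cpu):
    """% Non-Parse CPU = round(100 * (1 - PARSE_CPU/TOT_CPU), 2)."""
    return round(100.0 * (1.0 - parse_cpu / total_cpu), 2)

print(parse_cpu_to_parse_elapsed_pct(89.28, 100.0))  # -> 89.28
# Wall-clock seconds elapsed per CPU second of parsing, as in the text:
print(round(1 / 0.8928, 2))                          # -> 1.12
print(non_parse_cpu_pct(5.0, 100.0))                 # -> 95.0
```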
Memory Usage %: the percentage of the total shared pool in use. If it is too low, memory is being wasted; if too high, the pool is over-utilized and objects are probably
being flushed out of memory frequently, increasing hard parses of SQL statements. This number should stay between 75% and 90% over the long term.
% SQL with executions > 1: the proportion of SQL statements in the shared pool executed more than once out of the total number of SQL statements; here
94.48%.
% Memory for SQL w/exec > 1: the percentage of shared pool memory consumed by frequently used SQL statements, as opposed to rarely used ones. This figure is
generally very close to % SQL with executions > 1, unless some query consumes an unusual amount of memory. In a steady state you will see roughly 75% to 85%
of the shared pool in use over time. If the report's time window is large enough to cover a whole workload cycle, the percentage of statements executed more than
once should approach 100%. These statistics are observed over the report duration, so expect them to rise as the observation window lengthens.
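Taken together, the rules of thumb in this section can be expressed as a small checker (a sketch in Python; the thresholds are only the guidelines quoted above, not official Oracle limits, and the function name is ours):

```python
def efficiency_warnings(ratios):
    """Flag instance-efficiency ratios that fall outside the rules of
    thumb given in the text. `ratios` maps metric name -> percentage."""
    minimums = {
        "Buffer Nowait %": 99.0,
        "Buffer Hit %": 95.0,
        "Library Hit %": 95.0,
        "Soft Parse %": 95.0,
        "Latch Hit %": 99.0,
    }
    warnings = []
    for name, minimum in minimums.items():
        value = ratios.get(name)
        if value is not None and value < minimum:
            warnings.append(f"{name} = {value} (below {minimum})")
    # Memory Usage % should sit in a band, not just above a floor
    usage = ratios.get("Memory Usage %")
    if usage is not None and not 75.0 <= usage <= 90.0:
        warnings.append(f"Memory Usage % = {usage} (outside 75-90)")
    return warnings

print(efficiency_warnings({"Buffer Hit %": 91.3, "Latch Hit %": 99.9,
                           "Memory Usage %": 96.0}))
```

Here the checker would flag the buffer hit ratio and the over-full shared pool while passing the latch hit ratio, mirroring how the text tells a reader to scan this section of the report.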
Idle events in the Top 5 Timed Events need no attention; we only care about non-idle wait events. Common idle events include:
dispatcher timer
lock element cleanup
Null event
parallel query dequeue wait


parallel query idle wait - Slaves


pipe get
PL / SQL lock timer
pmon timer-pmon
rdbms ipc message
slave wait
smon timer
SQL * Net break / reset to client
SQL * Net message from client
SQL * Net message to client
SQL * Net more data to client
virtual circuit status
client message
A wait event that matters may not make it into the Top 5 Timed Events (only five are listed, and they change with every collection),
so here we briefly analyze some of the common ones. Note that in Oracle 9.2 this section was called Top 5 Wait Events; from version 9.2 on it was changed to Top
5 Timed Events and includes CPU time. Waits is the number of waits; Time (s) is the wait time in seconds, generally the main figure to look at; Avg Wait (ms)
is the average time per wait; % Total Call Time is the event's percentage of total call time; Wait Class is the class of the wait.
CPU time: CPU time is not really a wait event. It is an important indicator of whether the CPU is a bottleneck.
Elapsed Time = CPU Time + Wait Time. In general, in a healthy system CPU time ranks first in the Top 5 Timed Events; otherwise, tuning is needed to
reduce the other wait times. This is relative, of course: if significant latch waits or excessive logical reads accompany a high percentage
of CPU time, there is no room for complacency. CPU working at high efficiency is a good thing, but CPU time consumed by inefficient settings or SQL
needs attention.
db file sequential read and db file scattered read.
These two events occur frequently. Both indicate that the Oracle kernel requested data blocks to be read from disk (into the buffer cache). The difference between them:
sequential is a single-block read (serial read), scattered a multi-block read. (This has nothing to do with whether a full table scan is involved; full table scans merely
tend to perform multi-block reads.) The two event names describe how the data blocks are stored into memory, not how they are read from disk.
db file scattered read
The fetched blocks are scattered across non-contiguous buffers. It usually means too many full table scans; check whether the application uses indexes reasonably and
whether the database has reasonable indexes. db file scattered read indicates sequential access to data (for example, a full table scan).
db file sequential read
Usually implies index reads over large data volumes (for example, an index range scan that fetches too large a percentage of the table, or use of the wrong index),
an improper join order in a multi-table join, or a hash join whose hash table does not fit in hash_area_size. db file sequential read indicates random
access (for example, an index scan).
An in-depth look at db file sequential read and db file scattered read:
Definition
The event names db file sequential read and db file scattered read describe how the data blocks are stored into memory, not how they are read from the
disk. When the content read from disk fills contiguous memory, the disk read is reported as db file sequential read; when the contiguity of the memory being filled
cannot be guaranteed, the disk read is reported as db file scattered read.
db file sequential read
Oracle raises db file sequential read for every single-block read (a single block is naturally contiguous; the P3 parameter of this wait event is generally 1).
Oracle always stores a single block in a single cache buffer, so single-block reads never produce db file scattered read. Index blocks, unless read by a fast
full index scan, are generally read one block at a time; much of the time, therefore, this wait event indicates index reads.
The event usually reflects single-block read operations such as index reads. If this wait is significant, it may indicate join-order problems in multi-table joins
(perhaps the wrong driving table) or indiscriminate use of indexes. In most cases indexes do give faster access to records, and for a coding-standard-compliant,
well-tuned database this wait is normal. In many cases, however, an index is not the best choice: for reading a large amount of data from a
large table, a full table scan can be significantly faster than an index scan, so in development we should make sure such queries avoid index scans.
db file scattered read
db file scattered read generally means waiting for multiple blocks to be read into memory. For performance and more efficient use of memory, Oracle generally scatters
these blocks in memory. The P3 parameter of the event shows the number of blocks read per I/O; that count is controlled by the parameter
db_file_multiblock_read_count. Full table scans and index fast full scans generally read blocks this way, so this wait is often caused
by full table scans; in most cases, a full table scan or fast full index scan produces one or more db file scattered read waits. Sometimes, however, these
scans produce only db file sequential read waits.
Blocks read by a full table scan are placed at the cold end of the LRU (Least Recently Used) list. For smaller, frequently accessed
tables you can choose to cache them in memory and avoid repeated reads. When this wait event is significant, combine it with the v$session_longops dynamic performance view
for diagnosis: it records long-running operations (running longer than 6 seconds), which may include many full table scans (either way, that part
of the information deserves our attention).
latch free


A latch is a lightweight lock. In general, a latch consists of three memory elements: pid (process id), memory address and memory length. Latches guarantee exclusive
access to shared data structures, protecting the integrity of memory structures from corruption. When multiple sessions simultaneously modify or inspect the same
memory structure in the SGA, access must be serialized to keep the SGA data structures intact.
Latches protect memory structures in the SGA; objects in the database are protected by locks, not latches. Oracle uses many latches in the SGA to
keep its memory structures from being corrupted by concurrent access. Common latch free wait events are caused by hot blocks (buffer cache latch
contention) and by not using bind variables (shared pool latch contention).
The most common latch contention is concentrated in the buffer cache and the shared pool. The buffer cache latches involved are cache buffers chains and
cache buffers lru chain; the shared pool latches involved are the shared pool latch and the library cache latch. Buffer cache latch contention is often caused
by hot-block contention or inefficient SQL statements; shared pool latch contention is usually caused by hard parsing. An oversized shared pool
can itself lead to shared pool latch contention (before version 9i).
When system-wide latch wait time is significant, use the sleeps column of v$latch to find the latches with the most contention:
select name, gets, misses, immediate_gets, immediate_misses, sleeps
from v$latch order by sleeps desc;
buffer busy waits
When it occurs:
A block is being read into the buffer, or is already in the buffer and being modified by another session which has it pinned, while the current session tries to
pin it; contention for the pinned block produces buffer busy waits. The value should not exceed 1%. Look at v$waitstat for the approximate distribution of buffer busy waits.
The solution:
This can usually be addressed in several ways: increase the data buffer, add freelists, lower pctused, increase the number of rollback segments, increase
initrans, or consider LMT + ASSM; confirm whether hot blocks are the cause (if so, consider a reverse-key index or a smaller block size).
The wait event indicates waiting for a buffer that is non-shareable or is currently being read into the buffer cache. In general buffer busy waits should not exceed
1%. Check the buffer wait statistics section, Segments by Buffer Busy Waits (or V$WAITSTAT), to see whether the waits are on segment headers
(Segment Header). If so, consider increasing freelists (for DMT in Oracle 8i) or freelist groups (in many cases this adjustment takes effect
immediately; from 8.1.6 on, dynamic modification of freelists requires COMPATIBLE at least 8.1.6). Oracle 9i and later can use ASSM.
alter table xxx storage (freelists n);
-- Find the class of block being waited on
SELECT 'segment header' class, a.segment_type,
       a.segment_name,
       a.partition_name
  FROM dba_segments a, v$session_wait b
 WHERE a.header_file = b.p1
   AND a.header_block = b.p2
   AND b.event = 'buffer busy waits'
UNION
SELECT 'freelist groups' class,
       a.segment_type,
       a.segment_name,
       a.partition_name
  FROM dba_segments a, v$session_wait b
 WHERE b.p2 BETWEEN a.header_block + 1
                AND (a.header_block + a.freelist_groups)
   AND a.header_file = b.p1
   AND a.freelist_groups > 1
   AND b.event = 'buffer busy waits'
UNION
SELECT a.segment_type || ' block' class,
       a.segment_type,
       a.segment_name,
       a.partition_name
  FROM dba_extents a, v$session_wait b
 WHERE b.p2 BETWEEN a.block_id AND a.block_id + a.blocks - 1
   AND a.file_id = b.p1
   AND b.event = 'buffer busy waits'
   AND NOT EXISTS (SELECT 1
                     FROM dba_segments
                    WHERE header_file = b.p1
                      AND header_block = b.p2);
For different wait block type, we take a different approach:
1.data segment header:
Process recurring access Data Segment header usually for two reasons: to obtain or modify process freelists information is; expansion of the high-water mark. First
case, the process frequently access processfreelists information leading to freelist contention, we can increase the storage parameters of the corresponding segment
object freelist or freelist Groups a; often want to modify the freelist the data block and out of freelist a result of the process, you can the pctfree value and value
pctused of settings is a big gap, so as to avoid frequent data block and out of the freelist; For the second case, the segment space consumed quickly, and set the next
extent is too small, resulting in frequent expansion of the high-water mark, the The approach is to increase the segment object storage parameters next extent or create
a table space set extent size uniform.
2.data block:
One or more data blocks are multiple processes simultaneously read and write, has become a hot block, to solve this problem by the following way:
(1) reduce the concurrency of the program If the program uses a parallel query, reduce paralleldegree, in order to avoid multiple parallel slave simultaneously access
the same data object wait degrade performance
(2) adjusting the application so that it can read less data block will be able to obtain the required data, reducing the Buffer gets and physical reads
(3) to reduce the number of records in the same block, so that the distribution of records in the data block, which can be achieved in several ways: You can adjust the
segment object pctfree value segment can be rebuilt to a smaller block size table space , you can also use the alter table minimize records_per_block statement to
reduce the number of records in each block
(4) If the hot block object is similar to the index increment id field, you can index into reverse index, scattered data distribution, dispersion hot block; wait in the
index block should consider rebuilding the index, partitioned index or use reverse key index.
ITL competition and wait for multi-transactional concurrent access to the data sheet, may occur, in order to reduce this wait, you can increase initrans, using multiple
ITL slots.
3. undo segment header:
Undo segment header contention occurs because the system does not have enough undo segments, so their number must be increased. The fix depends on the undo segment management mode: in manual management mode, modify the ROLLBACK_SEGMENTS initialization parameter to add rollback segments; in automatic mode, you can lower the TRANSACTIONS_PER_ROLLBACK_SEGMENT initialization parameter so that Oracle automatically increases the number of rollback segments.
4. undo block:
Undo block contention arises when the application reads and writes the same data at the same time (large consistent reads should be reduced where appropriate): reading processes must visit the undo segments to obtain read-consistent data. The solution is to schedule heavy data modification and heavy querying at different times. The combination of ASSM and LMT completely changed Oracle's storage mechanism: bitmap freelists can reduce buffer busy waits, a problem that was serious in releases before Oracle9i.
Oracle claims that ASSM significantly improves the performance of concurrent DML, because different portions of the bitmap can be used simultaneously, eliminating the serialized search for free space. According to Oracle's test results, using bitmaps eliminates all segment-header contention and also makes concurrent inserts very fast. In Oracle9i and later, buffer busy waits are no longer common.
Free buffer waits
When no free buffer is available in the data buffer cache, the current session's process enters the free buffer waits state. The reasons for this wait are usually the following:
- the data buffer cache is too small;
- the DBWR process writes inefficiently;
- LGWR writes too slowly, so DBWR has to wait;
- a large number of dirty blocks must be written to disk;
- inefficient SQL statements: the Top SQL needs to be optimized.
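Whether the data buffer really is too small can be checked against the buffer cache advisory (a sketch; requires statistics_level at TYPICAL or ALL):

```sql
-- Estimated physical reads at alternative cache sizes; a size_factor
-- of 1 marks the current configuration.
SELECT size_for_estimate AS size_mb,
       size_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
ORDER  BY size_for_estimate;
```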
enqueue
Enqueue contention: an enqueue is a locking mechanism that protects shared resources, such as a record in a table, to prevent two sessions from updating the same data at the same time. An enqueue also provides a FIFO (first-in, first-out) queuing mechanism. The enqueue waits commonly seen are ST, HW, TX and TM.
The ST enqueue is used for space management and extent allocation, and is typical of dictionary-managed tablespaces (DMT), where it shows up as contention on the UET$ and FET$ data dictionary tables. On versions that support LMT, use locally managed tablespaces instead, or consider pre-allocating a certain number of extents manually to reduce the serious enqueue contention caused by dynamic extent allocation.
The HW enqueue wait is related to the segment high-water mark; manually allocating appropriate extents avoids this wait.
The TX lock (transaction lock) is the most common enqueue wait, and is usually the result of one of the following three issues.
The first issue is a duplicate value in a unique index: the waiting session cannot continue until the session holding the enqueue performs a commit or rollback to release it.
The second issue is multiple updates to the same bitmap index fragment. A single bitmap index fragment may cover many row addresses (rowids), so when several users try to update rows covered by the same fragment, one user locks the entries requested by the others, who must then wait until the locking user commits or rolls back and the enqueue is released.
The third issue, and the one most likely to occur, is multiple users updating the same block: if there are not enough ITL slots, block-level locking occurs. Increasing initrans and/or maxtrans allows more ITL slots to be used (for tables subject to frequent concurrent DML, reasonable values for these parameters should be chosen when the table is created, to avoid changing a system that is already running online; before 8i, freelists and related parameters could not be changed online, so this design consideration was particularly important), or the table's pctfree value can be increased; either measure easily avoids this situation.
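On 10g, current TX enqueue waiters and the sessions blocking them can be seen directly in V$SESSION (a sketch):

```sql
-- Sessions currently waiting on a TX enqueue, and who holds the lock.
SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM   v$session
WHERE  event LIKE 'enq: TX%';
```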

http://www.databaseskill.com/1566063/

26/09/2015

The Oracle AWR introduce and analysis of the final report - Database - Database... Pgina 12 de 13

The TM enqueue lock is acquired before DML operations to prevent any DDL on the table being operated on (while DML is in progress on a table, its structure cannot be changed).
log file parallel write / log file sync
If a log group has several members, the write performed when the log buffer is flushed is done in parallel, and this wait event may appear at that time.
The LGWR process is triggered when:
1. a user commits;
2. the redo log buffer is 1/3 full;
3. more than 1M of redo in the log buffer has not been written to disk;
4. a 3-second timeout occurs;
5. DBWR needs to write data whose SCN is greater than the SCN LGWR has recorded, so DBWR triggers an LGWR write first.
When a user commits or rolls back, the session's redo information must be written out to the redo log file: the user process notifies LGWR to perform the write, and LGWR notifies the user process when the task is complete. The log file sync wait event means the user process is waiting for LGWR's write-completion notification. For a rollback, the event records the time from the user issuing the rollback command until the rollback completes.
If this wait is excessive, it may indicate that LGWR writes inefficiently or that commits are too frequent. To investigate, follow the log file parallel write wait event, and use the user commits and user rollbacks statistics to observe the number of commits and rollbacks.
Solutions:
1. Improve LGWR's write performance: use fast disks, and do not store the redo log files on RAID 5.
2. Use batch commits.
3. Use the NOLOGGING / UNRECOVERABLE options where appropriate.
The average redo write size can be calculated with the following equation:
avg. redo write size = (redo blocks written / redo writes) * 512 bytes
If the system generates a lot of redo but each write is small, it generally means LGWR is activated too frequently, which may lead to excessive redo latch contention.
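The equation above can be evaluated from V$SYSSTAT (a sketch; the 512-byte redo block size is platform-dependent):

```sql
-- avg. redo write size = (redo blocks written / redo writes) * 512 bytes
SELECT ROUND(b.value / w.value * 512) AS avg_redo_write_bytes
FROM   v$sysstat b, v$sysstat w
WHERE  b.name = 'redo blocks written'
AND    w.name = 'redo writes';
```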
The following wait events are related to RAC (resource contention between nodes):
gc current block busy;
gcs log flush sync;
gc buffer busy: hot blocks; isolate workload by node or by service to reduce inter-node resource contention.
Log File Switch
When this wait appears, it means that a commit request must wait for the log file switch to complete. It usually occurs because the log groups have cycled around while archiving of the first log is not yet finished, so the wait arises; it may also indicate an I/O problem.
Solutions:
- consider enlarging the log files and adding log groups;
- move the archive destination to faster disks;
- adjust LOG_ARCHIVE_MAX_PROCESSES.
log file switch (checkpoint incomplete)
This wait event usually indicates that DBWR writes too slowly or that there is an I/O problem. Consider adding extra DBWR processes, adding log groups, or increasing the log file size.
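Before resizing anything, how frequently the log switches actually occur can be checked from V$LOG_HISTORY (a sketch):

```sql
-- Log switches per hour over the last day; many switches per hour
-- suggest the online redo logs are too small.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*) AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 1
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY hour;
```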
control file sequential read / control file parallel write
If these waits last a long time, you clearly need to consider improving the I/O of the disks holding the control files.
SQL Statistics
This section sorts the statistics by different indicators; combining all of them makes it easy to identify poorly performing SQL and SQL that runs unreasonably (for example, executed a very large number of times). Most of it is easy to understand and is not described in detail here; a few items are briefly explained below:
SQL ordered by Parse Calls: sorts by the number of parse calls (including hard parses, soft parses, and "softer" soft parses).
SQL ordered by Version Count: SQL statements with many child cursors under the same parent cursor. That is, the SQL text is exactly the same, so the parent cursor can be shared, but a new child cursor must be generated because the existing child cursors cannot be shared - for example, due to different optimizer environment settings (OPTIMIZER_MISMATCH), a significant change in the length of a bind variable's value on a later execution (BIND_MISMATCH), mismatched privileges (AUTH_CHECK_MISMATCH), or a mismatched base-object translation (TRANSLATION_MISMATCH). In these cases the execution plans may differ or may be the same (which can be seen from PLAN_HASH_VALUE); the specific mismatch reason can be queried from V$SQL_SHARED_CURSOR.
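For a statement with a high version count, the mismatch flags can be read from V$SQL_SHARED_CURSOR (a sketch; &sql_id is a placeholder for the offending statement):

```sql
-- Each column holding 'Y' explains why that child cursor could not
-- be shared with the existing children of the parent cursor.
SELECT child_number,
       optimizer_mismatch,
       bind_mismatch,
       auth_check_mismatch,
       translation_mismatch
FROM   v$sql_shared_cursor
WHERE  sql_id = '&sql_id';
```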
Advisory Statistics
This section presents the advisors' recommendations, which can also be queried from the following views:


GV_$DB_CACHE_ADVICE
GV_$MTTR_TARGET_ADVICE
GV_$PGA_TARGET_ADVICE_HISTOGRAM
GV_$PGA_TARGET_ADVICE
GV_$SHARED_POOL_ADVICE
V_$DB_CACHE_ADVICE
V_$MTTR_TARGET_ADVICE
V_$PGA_TARGET_ADVICE
V_$PGA_TARGET_ADVICE_HISTOGRAM
V_$SHARED_POOL_ADVICE
Buffer Pool Advisory / PGA Memory Advisory / SGA Target Advisory / ...
Wait Statistics
Describes which block types the buffer waits occurred on (refer to the earlier explanation of buffer waits and the ways to improve them).
Segment Statistics:
* Segments by Logical Reads
* Segments by Physical Reads
* Segments by Row Lock Waits
* Segments by ITL Waits
* Segments by Buffer Busy Waits
* Segments by Global Cache Buffer Busy
* Segments by CR Blocks Received
* Segments by Current Blocks Received

07:51 1 224 255 2011
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To input begin_snap: 223
Begin Snapshot Id specified: 223

http://www.databaseskill.com/3286705/

26/09/2015

The Oracle AWR introduce and Reports Analysis (1) final - Database - Database Sk... Pgina 2 de 6

To input end_snap: 224


End Snapshot Id specified: 224
declare
*
ERROR at line 1:
ORA-20200: The instance was shutdown between snapshots 223 and 224
ORA-06512: at line 42
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining Scoring Engine options
Try again:
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To input begin_snap: 214
Begin Snapshot Id specified: 214
To input end_snap: 215
End Snapshot Id specified: 215
Then enter the name of the report you want to generate ....
......
<P>
<P>
End of Report
</ BODY> </ HTML>
Report written to awrrpt_1_0524_08_09.html
SQL>
We will postpone dealing with the generated report; first, let us get to know ASH and AWR.
2 Recognizing ASH (Active Session History)
2.1 ASH (Active Session History) architecture
Before Oracle 10g, records of current sessions were stored in v$session, and sessions in a wait state were also copied into v$session_wait. When a connection was disconnected, its information in v$session and v$session_wait was deleted. No view could show what each session had been doing, and which resources it had been waiting for, at each point in the past; v$session and v$session_wait only show what current sessions are running and which resources they are waiting for.
In Oracle 10g, Oracle provides Active Session History (ASH) to solve this problem. Every second, ASH records the information of the currently active sessions into a (recycled) buffer in the SGA; this process is called sampling. By default ASH samples the active sessions in v$session every second and records the events they are waiting for; inactive sessions are not sampled. The sampling interval is determined by the _ash_sampling_interval parameter.
10g also introduces a new view, v$session_wait_history, which keeps the last 10 wait events of each active session from v$session_wait; but this is still not enough to monitor performance over a period of time. To solve that problem, 10g adds one more view: v$active_session_history. This is ASH (active session history).
2.2 The strategy ASH adopts
Typically, to diagnose the current state of the database, you need information covering the most recent five to ten minutes. However, since recording session activity information costs a lot of time and space, ASH adopts the following strategy: save only the information of active sessions in a wait state, sampling from v$session_wait and v$session every second and storing the samples in memory (note: ASH sample data is stored in memory).
2.3 How ASH works
The active-session samples (collected every second from the related views) are stored in the SGA. The size of the SGA area allocated to ASH can be queried from v$sgastat ('ASH buffers' under the shared pool); the space is recycled, so when required, old information is overwritten by new. Recording all the activity of all sessions would be very resource-consuming, so ASH obtains session activity information only from v$session and a few other views. ASH collects session information every second not by issuing SQL statements but by accessing memory directly, which is considerably more efficient.
Because data must be sampled every second, the ASH cache accumulates a very large amount of data; flushing all of it to disk would consume a great deal of disk space, so the following strategy is applied when the ASH cache is flushed to the AWR tables:
1. By default, MMON flushes 1/10 of the data in the ASH buffers to disk every 60 minutes (adjustable).
2. By default, when the ASH buffers become 66% full, MMNL writes 1/10 of the ASH buffer data to disk (which 1/10 it is follows the FIFO principle).
3. The 10% that MMNL writes is 10% of the total amount of sampled data in the ASH buffers (not 10% of the total size of the ASH buffers).
4. To save space, the data collected into AWR is automatically purged after 7 days by default.
The specific hidden parameters are:
_ash_sampling_interval: sample once per second
_ash_size: defines the minimum size of the ASH buffer; the default is 1M


_ash_enable: enables ASH sampling
_ash_disk_write_enable: write sampled data to disk
_ash_disk_filter_ratio: the percentage of the total sampled data in the ASH buffer that is written to disk; the default is 10%
_ash_eflush_trigger: how full the ASH buffer must be before an early flush; the default is 66%
_ash_sample_all: if set to TRUE, all sessions are sampled, including sessions in idle waits; the default is FALSE.
The ASH cache is a fixed-size SGA area of 2M per CPU; it does not exceed 5% of the shared pool or 2% of sga_target.
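The actual size allocated to the ASH buffers can be confirmed in V$SGASTAT (a sketch):

```sql
-- SGA memory currently set aside for ASH sampling.
SELECT pool, name, bytes
FROM   v$sgastat
WHERE  name = 'ASH buffers';
```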
To query the data in the ASH buffers: v$active_session_history.
The table the ASH buffers are flushed to: WRH$_ACTIVE_SESSION_HISTORY (a partitioned table; WRH = Workload Repository History).
The view on that table: dba_hist_active_sess_history.
2.4 ASH views
Relevant data can be obtained through the v$active_session_history view, and some performance information can be derived from it as well. Its main columns are:
---------- sampling information ----------
SAMPLE_ID: sample ID
SAMPLE_TIME: sampling time
IS_AWR_SAMPLE: whether this sample belongs to the 1/10 of the data flushed to AWR
---------- information that uniquely identifies the session ----------
SESSION_ID: corresponds to SID in V$SESSION
SESSION_SERIAL#: uniquely identifies a session's objects
SESSION_TYPE: FOREGROUND or BACKGROUND
USER_ID: Oracle user identifier; maps to V$SESSION.USER#
SERVICE_HASH: hash that identifies the service; maps to V$ACTIVE_SERVICES.NAME_HASH
PROGRAM: program
MODULE: module of the program
ACTION
CLIENT_ID: client identifier of the session
---------- the SQL statement the session is executing ----------
SQL_ID: ID of the SQL being executed at sampling time
SQL_CHILD_NUMBER: child cursor number of the SQL being executed at sampling time
SQL_PLAN_HASH_VALUE: hash value of the SQL plan
SQL_OPCODE: indicates which phase of the operation the SQL statement is in; corresponds to V$SESSION.COMMAND
QC_SESSION_ID
QC_INSTANCE_ID
---------- session wait state ----------
SESSION_STATE: session state, WAITING / ON CPU
WAIT_TIME
---------- session wait event information ----------
EVENT
EVENT_ID
EVENT#
SEQ#
P1
P2
P3
TIME_WAITED
---------- the object the session waits on ----------
CURRENT_OBJ#
CURRENT_FILE#
CURRENT_BLOCK#
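A typical use of the view is to summarize what the instance has been waiting on recently (a sketch):

```sql
-- Top events among the active-session samples of the last 10 minutes;
-- samples with no event were running on CPU.
SELECT NVL(event, 'ON CPU') AS event,
       COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 10 / 1440
GROUP  BY NVL(event, 'ON CPU')
ORDER  BY samples DESC;
```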
3 AWR (Automatic Workload Repository)
ASH sample data is stored in memory. The memory allocated to ASH is limited: old records are overwritten once the allocated space is used up, and when the database is restarted all the ASH information disappears. Long-term performance monitoring of Oracle is therefore impossible with ASH alone. Oracle 10g provides a way to retain the ASH information permanently: AWR (Automatic Workload Repository). Oracle recommends using AWR in place of Statspack (10gR2 still retains Statspack).
3.1 From ASH to AWR
The relationship between ASH and AWR can be described quickly with the following chain:

http://www.databaseskill.com/3286705/

26/09/2015

The Oracle AWR introduce and Reports Analysis (1) final - Database - Database Sk... Pgina 4 de 6

v$session -> v$session_wait -> v$session_wait_history (actually, this step can be skipped)
-> v$active_session_history (ASH) -> wrh$_active_session_history (AWR)
-> dba_hist_active_sess_history
v$session is the source of all database activity;
v$session_wait records in real time the current waits of active sessions;
v$session_wait_history enhances v$session_wait by simply recording the last 10 waits of each active session;
v$active_session_history is the core of ASH, recording the wait history of active sessions, sampled once per second; it is kept in memory and is expected to cover about one hour;
wrh$_active_session_history is the AWR repository for v$active_session_history: the information recorded in v$active_session_history is flushed regularly (once per hour) into this table in the database, and by default one week is retained for analysis;
dba_hist_active_sess_history is a view joining wrh$_active_session_history with several other views; we usually access the historical data through this view.
As mentioned above, by default the MMON and MMNL background processes flush the sampled data from the ASH buffers (MMON once per hour); so where is the collected data stored?
AWR uses a number of tables to store the collected performance statistics. The tables are owned by SYS, stored in the SYSAUX tablespace, and named with the formats WRM$_*, WRH$_*, WRI$_* and WRR$_*. The AWR historical session data is stored in the underlying table wrh$_active_session_history (a partitioned table).
WRM$_* tables store AWR metadata (such as the databases being checked and the snapshots collected); M stands for metadata.
WRH$_* tables save the historical statistics of the sampled snapshots; H stands for historical data.
WRI$_* tables store data related to the database's advisory features (advisors).
WRR$_* tables hold information related to the new Workload Capture and Workload Replay features.
Several views with the DBA_HIST_ prefix are built on these tables; these views can be used to write your own performance diagnostic tools. The view names relate directly to the table names; for example, the view DBA_HIST_SYSMETRIC_SUMMARY is built on the WRH$_SYSMETRIC_SUMMARY table.
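For example, the snapshots currently kept in the repository can be listed through one of these DBA_HIST_ views (a sketch):

```sql
-- Snapshot IDs and intervals retained in AWR.
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;
```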
Note: ASH keeps the most recent wait records of the system's sessions and can be used to diagnose the current state of the database; AWR data may be delayed by up to an hour (although this can be adjusted manually), so its sampled information cannot be used to diagnose the current state of the database, but it can serve as a reference when tuning the database over a period of time.
3.2 Setting up AWR
To use AWR, the STATISTICS_LEVEL parameter must be set; it takes three values: BASIC, TYPICAL, ALL.
A. TYPICAL - the default value: enables all the automatic features and collects their information in the database, including Buffer Cache Advice, MTTR Advice, Timed Statistics, Segment Level Statistics, PGA Advice and so on. You can run select statistics_name, activation_level from v$statistics_level order by 2; to see what is collected. Oracle recommends using the default value TYPICAL.
B. ALL - if set to ALL, additional information beyond TYPICAL is collected, including plan execution statistics and timed OS statistics. With this setting the database may consume too many resources collecting diagnostic information.
C. BASIC - turns off all the automatic features.
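Checking and setting the parameter from SQL*Plus (a sketch):

```sql
-- Display the current collection level, then set the recommended default.
SHOW PARAMETER statistics_level
ALTER SYSTEM SET statistics_level = TYPICAL;
```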
3.3 AWR data collection and management
3.3.1 Data
In fact, the information AWR records is not only ASH: it also collects statistics on every aspect of the running database, together with wait information, for diagnostic analysis. At a fixed interval, AWR samples all of its important statistics and load information and stores the samples in the repository. It can be said that the ASH information is saved into the AWR table wrh$_active_session_history; ASH is a subset of AWR.
These samples are stored in the SYSAUX tablespace. When SYSAUX is full, AWR automatically overwrites the oldest information and records a message in the alert log:
ORA-1688: unable to extend table SYS.WRH$_ACTIVE_SESSION_HISTORY partition WRH$_ACTIVE_3533490838_1522 by 128 in tablespace SYSAUX
3.3.2 Collection and management
AWR permanently saves the system's performance diagnostic information and is owned by the SYS user. After a period of time you may want to get rid of this information, and sometimes, for performance diagnosis, you may need to define your own sampling frequency to obtain system snapshots. In Oracle 10g the dbms_workload_repository package provides many procedures with which you can manage snapshots and set baselines.
The retention period of AWR information can be changed by modifying the retention parameter. The default is seven days and the minimum is one day; if retention is set to zero, automatic purging is turned off. If AWR finds that SYSAUX is short of space, it reuses space by removing the oldest snapshots, and it also issues a warning to the DBA in the alert log saying that SYSAUX space is insufficient.
The sampling frequency can be changed by modifying the interval parameter: the minimum is 10 minutes, the default is 60 minutes, and typical values are 10, 20, 30, 60, 120 and so on (the unit is minutes); setting interval to 0 turns off automatic snapshot capture.
The frequency with which MMON collects snapshots (hourly) and the retention time of the collected data (7 days) can both be modified by the user.
To view the current settings: select * from dba_hist_wr_control;
For example, to change the frequency to one snapshot every 20 minutes and retain the data for two days:
begin


dbms_workload_repository.modify_snapshot_settings(interval => 20,
                                                  retention => 2 * 24 * 60);
end;
/
3.4 Manually creating and deleting AWR snapshots
AWR snapshots are generated automatically by Oracle, but they can also be created, deleted and modified manually through the DBMS_WORKLOAD_REPOSITORY package. The DESC command can be used to view the procedures in the package; only a few commonly used ones are shown below:
SQL> select count(*) from wrh$_active_session_history;
COUNT(*)
----------
317
SQL> begin
2 dbms_workload_repository.create_snapshot();
3 end;
4 /
PL/SQL procedure successfully completed.
SQL> select count(*) from wrh$_active_session_history;
COUNT(*)
----------
320
Manually delete a specified range of snapshots:
SQL> select * from wrh$_active_session_history;
SQL> begin
2 dbms_workload_repository.drop_snapshot_range(low_snap_id => 96,
3 high_snap_id => 96, dbid => 1160732652);
4 end;
5 /
SQL> select * from wrh$_active_session_history where snap_id = 96;
no rows selected
3.5 Setting and removing a baseline
A baseline is a mechanism that lets you mark the snapshots of important periods of time. A baseline is defined between a pair of snapshots, and each baseline identifies its snapshots by their snapshot IDs. A typical performance-tuning exercise starts by capturing a measurable baseline set, making changes, and then capturing another baseline set; the two sets can then be compared to check the effect of the changes. In AWR, the same kind of comparison can be performed on existing snapshot sets.
Suppose a highly resource-intensive process named apply_interest runs between 1:00 and 3:00 p.m., corresponding to snapshot IDs 95 to 98. We can define a baseline named apply_interest_1 over these snapshots:
SQL> select * from dba_hist_baseline;
SQL> select * from wrm$_baseline;
SQL> exec dbms_workload_repository.create_baseline (95, 98, 'apply_interest_1');
After some tuning steps, we can create another baseline - say apply_interest_2 - and then compare the measurements using only the snapshots belonging to the two baselines:
SQL> exec dbms_workload_repository.create_baseline (92, 94, 'apply_interest_2');
After analysis, drop_baseline() can be used to delete a baseline; the snapshots are retained (or cascade-deleted). Furthermore, the purge routine that removes old snapshots skips snapshots belonging to a baseline, allowing further analysis.
To delete a baseline:
SQL> exec dbms_workload_repository.drop_baseline (baseline_name => 'apply_interest_1', cascade => false);
4 AWR in a RAC environment
In a RAC environment, each snapshot includes all the nodes of the cluster (the snapshots are stored in the shared database, not per instance). The snapshot data of each node shares the same snap_id and is distinguished by the instance id. In general, RAC snapshots are captured at the same time.
You can also use Database Control to take snapshots manually. Manual snapshots supplement the automatic system snapshots.
5 ADDM
Automatic Database Diagnostic Monitor: with the introduction of the AWR data warehouse, Oracle can naturally build higher-level intelligent applications on top of it and make greater use of AWR. This is another feature introduced in Oracle 10g: the Automatic Database Diagnostic Monitor (ADDM). Through ADDM, Oracle tries to make database maintenance, management and optimization more automated and simple.
ADDM periodically checks the state of the database and, using its built-in expert system, automatically identifies potential database performance bottlenecks and gives corrective measures and recommendations. It is built entirely inside the Oracle database system, its implementation is very efficient, and it has almost no effect on the overall performance of the database. The new version of Database Control presents ADDM's findings and recommendations in a convenient, intuitive form and guides the administrator through progressively implementing them to resolve performance problems quickly.
6 Common AWR operations
AWR is configured through the dbms_workload_repository package.


6.1 Adjust the AWR snapshot frequency and retention policy; for example, change the collection interval to once every 30 minutes and retain the data for 5 days (the units are minutes):
SQL> exec dbms_workload_repository.modify_snapshot_settings (interval => 30, retention => 5 * 24 * 60);
6.2 Turn off AWR: setting interval to 0 turns off automatic snapshot capture:
SQL> exec dbms_workload_repository.modify_snapshot_settings (interval => 0);
6.3 manually create a snapshot
SQL> exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();
6.4 View the snapshot data
SQL> select * from sys.wrh$_active_session_history;
6.5 manually delete the specified range of snapshots
SQL> exec DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE (low_snap_id => 973, high_snap_id => 999, dbid => 262089084);
6.6 Create a baseline to save the data for later analysis and comparison
SQL> exec dbms_workload_repository.create_baseline (start_snap_id => 1003, end_snap_id => 1013, baseline_name => 'apply_interest_1');
6.7 Delete baseline
SQL> exec DBMS_WORKLOAD_REPOSITORY.DROP_BASELINE (baseline_name => 'apply_interest_1', cascade => FALSE);
6.8 Export AWR data so it can be migrated to another database for later analysis
SQL> exec DBMS_SWRF_INTERNAL.AWR_EXTRACT (dmpfile => 'awr_data.dmp', dmpdir => 'DIR_BDUMP', bid => 1003, eid => 1013);
6.9 Load the AWR data file into another database
SQL> exec DBMS_SWRF_INTERNAL.AWR_LOAD (SCHNAME => 'AWR_TEST', dmpfile => 'awr_data.dmp', dmpdir => 'DIR_BDUMP');
Then move the AWR data into the TEST schema:
SQL> exec DBMS_SWRF_INTERNAL.MOVE_TO_AWR (SCHNAME => 'TEST');
7 Analyzing the AWR report
See the next part.


Oracle the AWR introduced and Report Analysis (2) final - Sql - Database Skill

Pgina 1 de 7

Oracle the AWR introduced and Report Analysis (2) final


Tag: buffer, parallel, sql, file, Transactions Category: Sql Author: phoebeyu Date: 2011-06-26


Because the company prohibits file uploads, the AWR report itself could not be attached; the meaning of each field can be followed from the description below.
DB time = CPU time + wait time (excluding idle waits, for non-background processes). That is, DB time records the time foreground processes spend on computation and on non-idle waits.
The system has 24 CPU cores. The snapshot interval covers about 1380.4 minutes of elapsed time, so the total CPU time available is 1380.4 * 24 = 33129.6 minutes. DB time here is 2591.15 minutes, meaning the CPUs spent 2591.15 minutes handling non-idle waits and operations (for example, logical reads).
In other words, 2591.15 / 33129.6 * 100% = 7.82% of CPU capacity was spent on Oracle's foreground work (background processes excluded), so the average load on this server is fairly low. From the Elapsed and DB Time values of an AWR report you can get a rough feel for the database load.
That is: DB Time / (Elapsed * number of CPU cores) * 100% gives the proportion of CPU capacity spent handling Oracle foreground operations (background processes excluded). The higher this proportion, the higher the load.
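The DB Time and DB CPU figures in the report come from the system time model. As a sketch of how the same numbers could be pulled between two snapshots (the snapshot IDs 214 and 215 are illustrative; a real query would also filter on dbid and instance_number):

```sql
-- Illustrative: DB time accumulated between two AWR snapshots.
-- dba_hist_sys_time_model stores cumulative microseconds, so subtract
-- the begin-snapshot value from the end-snapshot value.
SELECT e.stat_name,
       ROUND((e.value - b.value) / 1000000 / 60, 2) AS minutes
FROM   dba_hist_sys_time_model b,
       dba_hist_sys_time_model e
WHERE  b.snap_id = 214        -- begin snapshot (example)
AND    e.snap_id = 215        -- end snapshot (example)
AND    b.stat_name = e.stat_name
AND    e.stat_name IN ('DB time', 'DB CPU');
```

Dividing the resulting DB time minutes by (elapsed minutes * number of CPU cores) gives the load percentage described above.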
Load Profile describes the overall state of the database.
Redo size: the average amount of redo generated per second and per transaction. Here it is 161K bytes per second and 5K per transaction; the per-transaction figure is higher than is generally observed.
Physical writes: an average of 66.52 blocks physically written per second.
Physical reads / Logical reads = 430.48 / 38788.19 = 1.1% of logical reads lead to physical I/O. On average each transaction performs 1351.11 logical reads (the unit is blocks); the smaller this number, the better.
Parses: the CPU performs 1454.21 parses per second, so the system is fairly busy, and 35.79 of those per second are hard parses (about 2.5% of parses). That means every 1/35.79 = 0.028 seconds the CPU must deal with a brand-new statement, which indicates many distinct SQL statements in the system; using bind variables and stored procedures is recommended.
Sorts: 70.30 sorts per second is also a fairly high number.
Transactions: the number of transactions generated per second, reflecting how heavy the database workload is.
% Blocks changed per Read: 1 - 2.43% = 97.57% of logical reads are of read-only blocks; on average only 2.43% of block accesses update the block. Over the roughly 23-hour snapshot period, DML accounted for 2.43% of all blocks operated on (logically read).
Recursive Call %: 71.77% of the SQL is executed through PL/SQL (recursive calls).
Rollback per transaction %: the percentage of transactions that roll back; the smaller the better. A value of 19.95% is very high (roughly 0.1995 rollbacks per transaction), indicating the system has a rollback problem; rollbacks are expensive, and on average one in every 5 (1/0.1995) transactions rolls back. Combined with the Transactions figure of 28.71 per second above, that means about 28.71 / 5 = 5.7 rollbacks per second. The cause of such a high rollback rate should be investigated carefully.
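The commit/rollback ratio behind these numbers can also be read directly from the instance statistics; a sketch using v$sysstat (statistic names as exposed in 10g):

```sql
-- Cumulative commit and rollback counts since instance startup.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('user commits', 'user rollbacks');
-- Rollback per transaction % = rollbacks / (commits + rollbacks) * 100
```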
Instance efficiency percentages. The target value for each is 100%.
Buffer Nowait %: the percentage of buffer-cache requests satisfied without waiting. If Buffer Nowait is below 99%, there may be a hot block (look for it via the tch column of x$bh and the cache buffers chains children of v$latch_children).
Redo NoWait %: the percentage of redo-buffer allocations obtained without waiting.
Buffer Hit %: the hit rate for data blocks in the buffer cache. It should normally be above 95%; below 90% you may need to increase db_cache_size, but note that heavy use of non-selective indexes can also inflate this value (many db file sequential reads).
In-memory Sort %: the percentage of sorts done in memory. If it is too low, consider increasing the PGA or reviewing the application to reduce sorting.
Library Hit %: mainly the hit rate of SQL in the shared pool (library cache), usually above 95%; otherwise consider increasing the shared pool, using bind variables, or adjusting cursor_sharing (modify this parameter with care).
Soft Parse %: the soft-parse percentage, which can be roughly regarded as the hit rate of SQL in the shared pool. Below 95%, consider binding; below 80%, your SQL is likely hardly being reused at all.
Execute to Parse %: the ratio of executions to parses. If a new SQL statement is parsed once, executed once, and never executed again in the same session, the ratio is 0; the higher this ratio, the better. Here 36.04% means that, of the SQL executed within a session, only 36.04% of executions could run without parsing again, which indicates a relatively large number of new SQL statements in the database.
Execute to Parse = round(100 * (1 - Parses/Executions), 2). If the number of parses exceeds the number of executions, this value goes negative, which hurts performance. The closer to 100% the better (that is, Parses/Executions close to 0, meaning almost all SQL has already been parsed and only needs to be executed).
Latch Hit %: the probability that a latch request succeeds on the first attempt. If it is below 99%, there is latch contention. Keep it above 99%, otherwise there is a serious performance problem; look at bind-variable usage, dispersing hot blocks, shared-pool sizing (too small), and so on.
Parse CPU to Parse Elapsd %:
Calculated as Parse CPU to Parse Elapsd % = 100 * (parse time cpu / parse time elapsed), i.e. actual parse CPU time divided by total elapsed parse time (CPU time plus time waiting for resources). Here it is 89.28%: each CPU second spent parsing took about 1/0.8928 = 1.12 seconds of wall-clock time, so roughly 0.12 seconds were spent waiting for resources. If the ratio is 100%, CPU time equals elapsed time and there was no waiting at all. The higher the value, the less time was lost waiting for resources.
% Non-Parse CPU: calculated as % Non-Parse CPU = round(100 * (1 - PARSE_CPU/TOT_CPU), 2). Too low a value means too much time is spent parsing. The closer to 100% the better: it means the database spends most of its time executing SQL statements rather than parsing them.
Memory Usage %: the percentage of the shared pool that is in use. If it is too low, memory is being wasted; if it is too high, utilization is excessive and objects in the shared pool may be flushed out of memory frequently, increasing hard parses. This figure should stay around 75%-90% over the long term.
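The free space from which Memory Usage % is derived can be checked with a query along these lines (a sketch against the standard v$sgastat view):

```sql
-- Free shared pool memory; usage % = 100 - (free / total) * 100.
SELECT name, ROUND(bytes / 1024 / 1024, 1) AS mb
FROM   v$sgastat
WHERE  pool = 'shared pool'
AND    name = 'free memory';
```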
% SQL with executions > 1: the proportion of statements in the shared pool executed more than once — here 94.48%.
% Memory for SQL w/exec > 1: the percentage of shared-pool memory consumed by frequently used SQL statements, as opposed to infrequently used ones. This figure is generally very close to % SQL with executions > 1, unless some query tasks consume unusual amounts of memory. In a steady state you will see roughly 75% to 85% of the shared pool in use over time. If the observation window is large enough to cover a full workload cycle, the percentage of SQL statements executed more than once should approach 100%. This statistic is affected by the length of the observation window, and can be expected to rise as the window grows.
Idle wait events in Top 5 Timed Events need no attention; we only need to care about non-idle wait events. Common idle events are:
dispatcher timer
lock element cleanup
null event
parallel query dequeue wait
parallel query idle wait - Slaves
pipe get
PL/SQL lock timer
pmon timer
rdbms ipc message
slave wait
smon timer
SQL*Net break/reset to client
SQL*Net message from client
SQL*Net message to client
SQL*Net more data to client
virtual circuit status
client message
The wait events listed in Top 5 Timed Events are not necessarily always the same five; each collection may change. Here we briefly analyze the events that commonly appear. Note that before Oracle 9.2 this section was called Top 5 Wait Events; in 9.2 and later versions it was renamed Top 5 Timed Events and also includes "CPU time". Waits is the number of waits, Time (s) the wait time in seconds (usually the main figure to look at), Avg Wait (ms) the average time per wait, % Total Call Time the event's share of total call time, and Wait Class the class of the wait.
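Outside of an AWR report, a comparable list of top non-idle waits can be pulled from v$system_event (a sketch; in 10g the wait_class column makes excluding idle events easy, and time_waited is recorded in centiseconds):

```sql
-- Top 5 non-idle wait events since instance startup.
SELECT *
FROM  (SELECT event,
              total_waits,
              ROUND(time_waited / 100, 1) AS time_waited_s,  -- centiseconds -> seconds
              wait_class
       FROM   v$system_event
       WHERE  wait_class <> 'Idle'
       ORDER  BY time_waited DESC)
WHERE rownum <= 5;
```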
CPU time: CPU time is not actually a true wait event; it is an important indicator of whether the CPU is a bottleneck.
Elapsed Time = CPU Time + Wait Time. Generally speaking, in a healthy system CPU time should sit at the top of the Top 5 Timed Events; otherwise tuning is needed to reduce the other wait times. This is relative, of course: if there are no significant latch waits and no excessively high logical reads, a large share of CPU time is reassuring. In other words, the CPU working at high efficiency is a good thing, but whether the CPU time is being consumed by inefficient settings or SQL still deserves attention.
db file sequential read and db file scattered read.
These two events appear quite frequently. They indicate that the Oracle kernel is requesting data blocks to be read from disk (into the buffer cache). The difference between them is that
sequential is a single-block read (serial read), while scattered denotes a multi-block read. (This has nothing to do with whether a full table scan is involved; it is just that full table scans generally show up as multi-block reads.) The two events describe how the data blocks are placed into memory, not how they are read from disk.
db file scattered read
The blocks fetched in one read are scattered into non-contiguous buffers. This usually indicates too many full table scans; check whether the application uses indexes sensibly and whether the database has sensible indexes created. db file scattered read indicates in-order access (for example, a full table scan).
db file sequential read
Usually implies fetching a relatively large amount of data through an index (for example, an index range scan retrieving too large a percentage of the table, or use of the wrong index), a poor join order in a multi-table join, or a hash join whose hash table does not fit in hash_area_size, and so on. db file sequential read indicates random reads (for example, index scans).
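For both events, P1 is the file number and P2 the block number of the read, so the object a session is currently waiting on can be located with something like the following (a sketch; dba_extents lookups can be slow on large databases):

```sql
-- Map the file#/block# (P1/P2) of a current 'db file sequential read'
-- wait back to the segment being read.
SELECT e.owner, e.segment_name, e.segment_type
FROM   dba_extents e,
       v$session_wait w
WHERE  w.event = 'db file sequential read'
AND    e.file_id = w.p1
AND    w.p2 BETWEEN e.block_id AND e.block_id + e.blocks - 1;
```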
A closer look at db file sequential read and db file scattered read:
Definition
The event names db file sequential read and db file scattered read describe how the data blocks are placed into memory, not how they are read from disk. If the memory filled by the contents read from disk is contiguous, the disk read is a db file sequential read; when the contiguity of the memory filled by the data read from disk cannot be guaranteed, the disk read is a db file scattered read.
db file sequential read
Oracle records a db file sequential read event for every single-block read (being a single block, it is of course contiguous; you will find that the P3 parameter of this wait event is generally 1). Oracle always stores a single data block in a single cache buffer, so a single-block read can never produce a db file scattered read event. Index blocks, unless read by a fast full index scan, are generally read one block at a time, so much of this wait event is caused by index reads.
This event is usually associated with single-block read operations (such as index reads). If it is significant, it may indicate that in a multi-table join the join order is wrong (perhaps the correct driving table is not being used), or that a non-selective index is being used. In most cases we can fetch records faster through an index, so for a well-coded, well-tuned database a substantial amount of this wait is quite normal. In many cases, however, an index is not the best choice — for example, reading large amounts of data from a large table, where a full table scan may be markedly faster than an index scan; in development we should pay attention to this, and for such queries the index scan should be avoided.
db file scattered read
db file scattered read are generally wait to read multiple blocks into memory. Performance and more efficient memory space utilization Oracle generally will disperse
these blocks in memory. db file scattered read wait event the P3 parameter indicates the number of blocks per I / O read. Every time I / O to read the number of
blocks, controlled by parameters db_file_multiblock_read_count. Full table scan or index fast full scan generally read block this way, so the wait are often caused
because a full table scan; most cases, the full table scan and fast full index scan will generated one or more times db file scattered read. But in, and sometimes, the
these scans will only generate db file sequential read.
Because blocks from a full table scan are placed at the cold end of the LRU (Least Recently Used) list, for smaller tables that are accessed frequently you can choose to cache them in memory to avoid repeated reads. When this wait event is relatively significant, you can combine it with the v$session_longops dynamic performance view for diagnosis: operations running a long time (over six seconds) are recorded in this view, and many of them may be full table scan operations (either way, this information is worth our attention).
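Such long-running operations can be inspected with a query like the following sketch against v$session_longops:

```sql
-- Operations (e.g. large full table scans) recorded in v$session_longops,
-- with their progress so far.
SELECT sid, opname, target, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 1) AS pct_done
FROM   v$session_longops
WHERE  totalwork > 0
ORDER  BY pct_done;
```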
latch free
A latch is a lightweight lock. Generally speaking, a latch consists of three memory elements: a pid (process id), a memory address, and a length. Latches guarantee exclusive access to shared data structures, protecting the integrity of memory structures from corruption. When multiple sessions simultaneously modify or inspect the same memory structure in the SGA, access must be serialized to guarantee the integrity of the SGA data structures.
Latches only protect memory structures in the SGA. Objects in the database are protected by locks, not latches. The Oracle SGA contains many latches, used to protect its various memory structures from corruption under concurrent access. Common latch free waits are caused by hot blocks (buffer cache latch contention) and by not using bind variables (shared pool latch contention).
The most common latch contention is concentrated on the buffer cache and the shared pool. The latches contended for in the buffer cache are cache buffers chains and cache buffers lru chain; in the shared pool they are the shared pool latch and the library cache latch. Buffer cache latch contention is often caused by hot blocks or inefficient SQL statements; shared pool latch contention is usually caused by hard parsing. An oversized shared pool may also lead to shared pool latch contention (before 9i).
When the system-wide latch wait time is significant, you can use the sleeps column of v$latch to find the latches with notable contention:
select name, gets, misses, immediate_gets, immediate_misses, sleeps
from v$latch order by sleeps desc;
buffer busy waits
This occurs when a block is being read into the buffer cache, or is already in the buffer cache being modified by another session, and a session tries to pin it while it is already pinned: contention arises and a buffer busy wait results. This value should not exceed 1%. You can look at v$waitstat to see the approximate distribution of buffer busy waits.


The solution:
This situation can typically be adjusted in several ways: increase the data buffer, increase freelists, reduce pctused, increase the number of rollback segments, increase initrans, consider using LMT + ASSM, and determine whether it is caused by hot blocks (if so, use a reverse-key index, or a smaller block size).
This wait event indicates waiting for a buffer needed in non-shared mode, or for a buffer currently being read into the cache. In general, buffer busy waits should not exceed 1%. Check the buffer wait statistics section of the report (Segments by Buffer Busy Waits, or V$WAITSTAT, as follows) to see whether the waits are on segment headers. If so, consider increasing the freelists (for Oracle8i DMT) or freelist groups (in many cases this adjustment takes effect immediately; in 8.1.6 and later versions, modifying freelists dynamically requires COMPATIBLE to be at least 8.1.6); from Oracle9i on you can use ASSM instead.
alter table xxx storage (freelists n);
-- Find the type of block being waited on
SELECT 'segment header' class,
       a.segment_type, a.segment_name, a.partition_name
FROM   dba_segments a, v$session_wait b
WHERE  a.header_file = b.p1
AND    a.header_block = b.p2
AND    b.event = 'buffer busy waits'
UNION
SELECT 'freelist groups' class,
       a.segment_type, a.segment_name, a.partition_name
FROM   dba_segments a, v$session_wait b
WHERE  b.p2 BETWEEN a.header_block + 1
               AND (a.header_block + a.freelist_groups)
AND    a.header_file = b.p1
AND    a.freelist_groups > 1
AND    b.event = 'buffer busy waits'
UNION
SELECT a.segment_type || ' block' class,
       a.segment_type, a.segment_name, a.partition_name
FROM   dba_extents a, v$session_wait b
WHERE  b.p2 BETWEEN a.block_id AND a.block_id + a.blocks - 1
AND    a.file_id = b.p1
AND    b.event = 'buffer busy waits'
AND    NOT EXISTS (SELECT 1
                   FROM   dba_segments
                   WHERE  header_file = b.p1
                   AND    header_block = b.p2);
For each type of waited-on block, we take a different approach:
1. data segment header:
Processes repeatedly access the data segment header for usually one of two reasons: to get or modify process freelists information, or to extend the high-water mark. For the first case, frequent access to process freelists information leads to freelist contention; we can increase the freelists or freelist groups storage parameter of the corresponding segment object. If processes frequently have to modify the freelist because data blocks keep moving on and off it, we can set a larger gap between the pctfree and pctused values, thus avoiding blocks being constantly and automatically added to and removed from the freelist. For the second case, segment space is consumed quickly while next extent is set too small, leading to frequent extension of the high-water mark; the solution is to increase the next extent storage parameter of the segment object, or to create the tablespace with a uniform extent size.
2. data block:
One or more data blocks are read and written by multiple processes simultaneously and become hot blocks. This problem can be solved in the following ways:
(1) Reduce the degree of concurrency of the program, so that multiple parallel slaves do not access the same data object at the same time. If parallel query is used in the program, reduce the parallel degree, otherwise waits form and performance degrades.


(2) Adjust the application so that it can obtain the required data while reading fewer data blocks, reducing buffer gets and physical reads.
(3) Reduce the number of records in any one block so that the records are spread over more data blocks. This can be achieved in several ways: increase the pctfree value of the segment object; rebuild the segment into a tablespace with a smaller block size; or use the alter table ... minimize records_per_block statement to reduce the number of records per block.
(4) If the hot block object is something like an index on an auto-incrementing id column, you can convert the index into a reverse-key index to break up the data distribution and disperse the hot blocks. If the waits are on index blocks, consider rebuilding the index, partitioning the index, or using a reverse-key index.
For data tables accessed concurrently by many transactions, contention and waits on the ITL may arise; to reduce this wait, increase initrans so that multiple ITL slots are used.
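Raising INITRANS to relieve ITL contention is plain DDL. A sketch (the object names are examples; note that a new INITRANS setting applies only to newly formatted blocks, so existing blocks keep their old value until the segment is rebuilt):

```sql
-- Reserve more ITL slots per block for a hot, heavily concurrent table.
ALTER TABLE orders INITRANS 10;      -- 'orders' is an example name
ALTER INDEX orders_pk INITRANS 10;
-- Rebuild so existing blocks pick up the new setting
-- (indexes must be rebuilt afterwards, as MOVE leaves them unusable).
ALTER TABLE orders MOVE;
```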
3. undo segment header:
undo segment header contention occurs because the system does not have enough undo segments; add a sufficient number. Under manual undo segment management, modify the ROLLBACK_SEGMENTS initialization parameter to add rollback segments; under automatic management, reduce the transactions_per_rollback_segment initialization parameter so that Oracle automatically increases the number of rollback segments.
4. undo block:
Contention on undo blocks arises because the application reads and writes the same data at the same time (large-scale consistent reads should be reduced where appropriate): reading processes must go to the undo segment to obtain consistent data. The solution is to stagger in time the application's heavy data modifications and its large queries. ASSM combined with LMT completely changes Oracle's storage mechanism; bitmap freelists help mitigate buffer busy waits, which were a serious problem in versions before Oracle9i.
Oracle states that ASSM improves concurrent DML performance explicitly and significantly, because different parts of the same bitmap can be used simultaneously, eliminating the serialized search for free space. According to Oracle's test results, using bitmaps eliminates all contention on segment headers and also yields very fast concurrent inserts. In Oracle9i and later versions, buffer busy waits are no longer common.
Free buffer waits
Expressed data buffer li there is no idle is available buffer, making the the the current session process is at a Free buffer wiats wait state, the the reason for the wait of
of the Free buffer waits a
-Like has the following are several:
- DATA buffer is too small;
- The the efficiency of the to write of DBWR the process of is is relatively low;
- LGWR to write is too slow, cause the DBWR to to wait for;
- A large number of dirty blocks are written to disk;
- SQL statement efficiency is low, need to on the Top SQL to optimize the.
enqueue
The queue competition: enqueue a locking mechanism to protect shared resources. The locking mechanisms to protect shared resources, such as data in the record, in
order to avoid two people update the same data at the same time. Enqueue includes a queuing mechanism, that is, FIFO the) queuing mechanism of the (first-in, firstout. Enqueue to wait for common are the ST, HW, TX, TM and so on
The ST enqueue is used for space allocation and management of extents in dictionary-managed tablespaces (DMT); with DMT it is typically contention on the uet$ and fet$ data dictionary tables. On versions that support LMT, use locally managed tablespaces wherever possible; or consider manually pre-allocating a certain number of extents, to reduce serious queue contention during dynamic extension.
The HW enqueue refers to waits related to the segment high-water mark; manually allocating suitable extents can avoid this kind of wait.
The TX lock (transaction lock) is the most common enqueue wait. A TX enqueue wait is usually the result of one of the following three problems.
The first problem is duplicate values in a unique index: a commit or rollback must be performed to release the enqueue.
The second problem is multiple updates to the same bitmap index fragment. Because a single bitmap fragment may cover the addresses of many rows (rowids), when multiple users try to update the same fragment, one user may lock the records requested by the others, and waits appear. The enqueue is released when the locking user commits or rolls back.
The third problem, and the one most likely to occur, is multiple users updating the same block. If there are not enough ITL slots, block-level locking occurs. This situation can easily be avoided by increasing initrans and/or maxtrans to allow multiple ITL slots (for data tables with frequent concurrent DML, reasonable values for the corresponding parameters should be considered when the table is first built, avoiding online changes after the system is running; before 8i, freelists and similar parameters could not be changed online, so design-time consideration was especially important), or by increasing the pctfree value of the table.
Before a DML operation, a TM enqueue queue lock is obtained, in order to prevent any DDL operations on the data table being operated on (while a table has DML in progress, its structure cannot be changed).
log file parallel write / log file sync (log file synchronization)
If a log group has more than one member, then when the log buffer is flushed the write operations are done in parallel, and the log file parallel write wait event may appear.
The conditions that trigger the LGWR process are:
1. a user commits;
2. one third of the redo log buffer is full;
3. more than 1MB of redo in the buffer has not yet been written to disk;
4. a 3-second timeout;
5. DBWR needs to write data blocks whose SCN is greater than the SCN LGWR has recorded; DBWR triggers an LGWR write.
When a user commits or rolls back, the session's redo information must be written out to the redo logfile. The user process notifies LGWR to perform the write, and LGWR notifies the user process after the task is complete. The log file sync wait event refers to the user process waiting for LGWR's write-completion notification. For a rollback operation, the event records the time from the user issuing the rollback command until the rollback is complete.


If these waits are excessive, it may indicate that LGWR writes inefficiently or that commits are too frequent. To investigate, look at the log file parallel write wait event, and at the user commits and user rollbacks statistics to observe the number of commits and rollbacks.
Solutions:
1. Improve LGWR performance: use fast disks where possible, and do not place redo log files on RAID 5 disks.
2. Commit in batches.
3. Use the NOLOGGING / UNRECOVERABLE options where appropriate.
The average redo write size can be calculated with the following formula:
avg. redo write size = (redo blocks written / redo writes) * 512 bytes
If the system generates a lot of redo but each write is small, LGWR is generally being activated too frequently, which may lead to excessive contention on the redo-related latches.
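The formula above can be evaluated directly against V$SYSSTAT; a minimal sketch:

```sql
-- Average redo write size in bytes: (redo blocks written / redo writes) * 512
SELECT ROUND(blk.value / NULLIF(wr.value, 0) * 512) AS avg_redo_write_bytes
  FROM (SELECT value FROM v$sysstat WHERE name = 'redo blocks written') blk,
       (SELECT value FROM v$sysstat WHERE name = 'redo writes') wr;
```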
The following wait events are related (they appear with inter-node resource contention in RAC):
gc current block busy:
gcs log flush sync:
gc buffer busy: indicates hot blocks; consider node isolation / business isolation to reduce resource contention between nodes.
Log File Switch
When this wait appears, it means that all commit requests are waiting for the log file switch to complete. This wait event usually appears because the log groups have been filled through a full cycle and the first log has not yet finished archiving. Its appearance may indicate an I/O problem.
Solutions:
Consider adding log files or adding log groups.
Move archive files to faster disks.
Adjust log_archive_max_processes.
log file switch (checkpoint incomplete) - log switch (checkpoint not completed)
This wait event usually indicates that DBWR is writing slowly or that there is an I/O problem.
Consider adding additional DBWR processes or increasing the number or size of your log groups and log files.
control file sequential read / control file parallel write
If this wait time is relatively long and obvious, consider improving I/O on the disk holding the control file.
SQL Statistics
Sorting the SQL data by different statistics is very useful; combined with all the other statistics, it lets you locate poorly performing SQL statements as well as SQL that runs unreasonably (for example, far too frequently). This section is relatively easy to understand, so it is not described in detail here.
Most of the items above are easy to understand; a few of them are briefly explained below:
SQL ordered by Parse Calls: for what parse calls are, see the reference (covering hard parses, soft parses, and softer soft parses).
SQL ordered by Version Count: lists SQL statements with many versions, that is, statements whose parent cursors are the same but whose child cursors differ. In other words, the SQL text is exactly identical, so the parent cursor can be shared; however, a different optimizer environment (OPTIMIZER_MISMATCH), a significant change in the length of a bind variable's value on a subsequent execution (BIND_MISMATCH), a mismatch in authorization (AUTH_CHECK_MISMATCH), or a mismatch in object translation (TRANSLATION_MISMATCH), among other reasons, prevents the child cursor from being shared, so a new child cursor must be generated. This is related to SQL that cannot share cursors. The execution plans in this case may be different or may be the same (as can be seen from plan_hash_value); the specific mismatch reason can be queried from V$SQL_SHARED_CURSOR.
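To find out why a particular statement's child cursors are not shared, the mismatch flags can be read from V$SQL_SHARED_CURSOR. A sketch (the &sql_id substitution variable is a placeholder to fill in with the statement's SQL_ID):

```sql
-- Common reasons a child cursor could not be shared ('Y' marks the mismatch)
SELECT sql_id, child_number,
       optimizer_mismatch, bind_mismatch,
       auth_check_mismatch, translation_mismatch
  FROM v$sql_shared_cursor
 WHERE sql_id = '&sql_id';
```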
Advisory Statistics
In addition to viewing the advice in this section of the report, you can also query it through the following views:
GV$DB_CACHE_ADVICE
GV$MTTR_TARGET_ADVICE
GV$PGA_TARGET_ADVICE_HISTOGRAM
GV$PGA_TARGET_ADVICE
GV$SHARED_POOL_ADVICE
V$DB_CACHE_ADVICE
V$MTTR_TARGET_ADVICE
V$PGA_TARGET_ADVICE
V$PGA_TARGET_ADVICE_HISTOGRAM
V$SHARED_POOL_ADVICE
Buffer Pool Advisory / PGA Memory Advisory / SGA Target Advisory / ......
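For example, the data behind the Buffer Pool Advisory can be read from V$DB_CACHE_ADVICE. A sketch (the DEFAULT pool and an 8 KB block size are assumptions; adjust them for your configuration):

```sql
-- Estimated physical reads for candidate buffer cache sizes
SELECT size_for_estimate, size_factor,
       estd_physical_read_factor, estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
   AND block_size = 8192;
```

Rows with size_factor below 1 that show a sharp rise in estd_physical_reads suggest the cache should not be shrunk; a flat curve above 1 suggests growing it buys little.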
Wait Statistics
Describes which block types the buffer waits occur on (refer to the earlier buffer wait section for explanation and improvement methods).
Segment Statistics:
* Segments by Logical Reads
* Segments by Physical Reads
* Segments by Row Lock Waits
* Segments by ITL Waits
* Segments by Buffer Busy Waits
* Segments by Global Cache Buffer Busy
* Segments by CR Blocks Received
* Segments by Current Blocks Received
........
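The same segment-level rankings can be reproduced outside the report from V$SEGMENT_STATISTICS; a minimal sketch for the logical-reads ranking:

```sql
-- Top 10 segments by logical reads since instance startup
SELECT * FROM (
    SELECT owner, object_name, object_type, statistic_name, value
      FROM v$segment_statistics
     WHERE statistic_name = 'logical reads'
     ORDER BY value DESC)
 WHERE ROWNUM <= 10;
```

Swapping the statistic_name filter (for example to 'physical reads', 'row lock waits', or 'ITL waits') yields the other rankings in the list above.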
