
Library Cache Discussion

By: Sheela Rani

Topics
- Common issues related to the library cache
- Wait events
- Library cache latch/lock mechanism
- Init.ora parameters affecting the library cache

Common Issues related to Lib Cache


- Shared pool fragmentation
- Slow performance

Shared Pool Fragmentation


Query to identify fragmentation:

X$KSMLRU: There is a fixed table called X$KSMLRU that tracks allocations in the shared pool that cause other objects in the shared pool to be aged out. This fixed table can be used to identify what is causing the large allocation. Its columns are the following:

KSMLRCOM - allocation comment that describes the type of allocation. If this comment is something like 'MPCODE' or 'PLSQL%', then a large PL/SQL object is being loaded into the shared pool; that PL/SQL object will need to be 'kept' in the shared pool. If the comment is 'kgltbtab', then the allocation is for a dependency table in the library cache. This is only a problem when several hundred users are logged on using distinct user IDs; the solution in that case is to use fully qualified names for all table references. If you are running MTS and the comment is something like 'Fixed UGA', then the init.ora parameter OPEN_CURSORS is set too high.

Shared Pool Fragmentation (Continued..)


KSMLRSIZ - amount of contiguous memory being allocated. Values over about 5K start to be a problem, values over 10K are a serious problem, and values over 20K are very serious; anything less than 5K should not be a problem.

KSMLRNUM - number of objects that were flushed from the shared pool in order to allocate the memory.

In release 7.1.3 or later, the following columns also exist:

KSMLRHON - name of the object being loaded into the shared pool, if the object is a PL/SQL object or a cursor.

KSMLROHV - hash value of the object being loaded.

KSMLRSES - SADDR of the session that loaded the object.

select * from x$ksmlru where ksmlrsiz > 5000;

Notes about X$KSMLRU


The advantage of X$KSMLRU is that it allows you to identify fragmentation problems that are affecting performance but are not yet severe enough to cause ORA-04031 errors to be signalled. If many objects are being periodically flushed from the shared pool, this will cause response time problems and will likely cause library cache latch contention when the objects are reloaded into the shared pool. With version 7.2, library cache latch contention should be significantly reduced by the breaking up of the library cache pin latch into a configurable set of symmetric library cache latches.

One unusual thing about the X$KSMLRU fixed table is that its contents are erased whenever someone selects from it. This is done because the table stores only the largest allocations that have occurred; the values are reset after being selected so that subsequent large allocations can be noted even if they were not quite as large as earlier ones. Because of this resetting, the output of a select from this table should be recorded carefully, since it cannot be re-selected if it is forgotten. Also take care that multiple people on one database do not select from this table, because only one of them will see the real data.
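Because every select resets the view, one simple way to preserve its output is to copy it into an ordinary table each time you look at it. The sketch below is not from the original slides; the table name KSMLRU_SNAPSHOT is purely illustrative, and X$ tables are visible only when connected as SYS.

-- Hypothetical snapshot table; name and owner are illustrative only.
create table ksmlru_snapshot as
  select sysdate snap_time, ksmlrcom, ksmlrsiz, ksmlrnum
  from   x$ksmlru
  where  ksmlrsiz > 5000;

-- Later snapshots can be appended so the history survives the reset:
insert into ksmlru_snapshot
  select sysdate, ksmlrcom, ksmlrsiz, ksmlrnum
  from   x$ksmlru
  where  ksmlrsiz > 5000;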

How to Avoid Shared Pool Fragmentation


i) KEEPING OBJECTS
Objects are 'kept' in the shared pool using the dbms_shared_pool package that is defined in the dbmspool.sql file. For example: execute dbms_shared_pool.keep('SYS.STANDARD');

ii) USE BIND VARIABLES
One of the best things that can be done to reduce fragmentation is to reduce or eliminate the number of SQL statements in the shared pool that are duplicates of each other except for a literal constant embedded in the statement (see the sketch after this list).

iii) MAX BIND SIZE
It is possible for a SQL statement not to be shared because the maximum bind variable lengths of the bind variables in the statement do not match.

iv) ELIMINATING LARGE ANONYMOUS PL/SQL
Large anonymous PL/SQL blocks should be turned into small anonymous PL/SQL blocks that call packaged functions.

v) REDUCING USAGE
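As a minimal SQL*Plus sketch of point ii), the statements below assume the classic EMP sample table (an assumption, not something from these slides). Each literal version would get its own shared pool entry, whereas the bind version is parsed once and shared.

-- Literal versions: each statement occupies its own shared pool entry.
--   select ename from emp where empno = 7369;
--   select ename from emp where empno = 7499;

-- Bind variable version: one shared cursor serves all values.
variable empno number
exec :empno := 7369
select ename from emp where empno = :empno;
exec :empno := 7499
select ename from emp where empno = :empno;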

Correction Of Shared Pool Fragmentation


i) KEEPING OBJECTS
The primary source of problems is large PL/SQL objects. The way to correct these errors is to 'keep' large PL/SQL objects in the shared pool at startup time. This loads the objects into the shared pool and ensures that they are never aged out. If the objects are never aged out, there will not be a problem with trying to load them and not having enough memory. Objects are 'kept' in the shared pool using the dbms_shared_pool package that is defined in the dbmspool.sql file. For example: execute dbms_shared_pool.keep('SYS.STANDARD'); All large packages that are shipped should be 'kept' if the customer uses PL/SQL. This includes 'STANDARD', 'DBMS_STANDARD', and 'DIUTIL'. With 7.3, the only package left in this list is 'STANDARD'. All large customer packages should also be marked 'kept'.
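A minimal sketch of keeping the shipped packages named above after instance startup (run as SYS, assuming dbmspool.sql has already been installed):

execute dbms_shared_pool.keep('SYS.STANDARD');
execute dbms_shared_pool.keep('SYS.DBMS_STANDARD');
execute dbms_shared_pool.keep('SYS.DIUTIL');

-- Verify what is currently kept:
select owner, name, type
from   v$db_object_cache
where  kept = 'YES';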

Correction Of Shared Pool Fragmentation (Continued..)


v) REDUCING USAGE
Another way to reduce fragmentation is to reduce consumption. This is especially important when using MTS, where every user's session memory is in the shared pool and the impact is multiplied by the number of concurrent users.

Inserts, updates, deletes and anonymous blocks complete their execution in one round trip. All the memory allocated on the server for the execute comes from the PGA and is freed before the call returns to the user. But in the case of selects, the memory required to execute the statement - which could be large if a sort was involved - is not freed until the end-of-fetch is reached or the query is cancelled. In these situations, using the OCI features to do an exact fetch and cancel helps free memory back to the pool.

If the application logic has been embedded in server-side PL/SQL, a large number of cursors may be getting cached on the server for every user. Although this results in reduced latch contention and faster response, it does use more memory in the UGA. Setting the close_cached_open_cursors init.ora parameter to TRUE closes the PL/SQL cached cursors on the server, freeing the memory.
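For completeness, a sketch of the init.ora entry described above. This is an older (Oracle 7/8 era) parameter that was deprecated in later releases, so treat it as illustrative and check your version's documentation.

# close PL/SQL cached cursors on the server, returning UGA memory
close_cached_open_cursors = true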

Sizing of Shared Pool


OBJECTS STORED IN THE DATABASE

select sum(sharable_mem) from v$db_object_cache;

SELECT SUBSTR(owner,1,10) Owner,
       SUBSTR(type,1,12) Type,
       SUBSTR(name,1,20) Name,
       executions,
       sharable_mem Mem_used,
       SUBSTR(kept||' ',1,4) "Kept?"
FROM   v$db_object_cache
WHERE  type IN ('TRIGGER','PROCEDURE','PACKAGE BODY','PACKAGE')
ORDER BY executions DESC;

SQL
select sum(sharable_mem) from v$sqlarea;

PER-USER PER-CURSOR MEMORY


select 250 * value bytes_per_user
from   v$sesstat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name = 'opened cursors current'
and    s.sid = &sid;
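Putting the three components together, a rough sizing sketch (not from the original slides) is to add the object cache memory, the SQL area memory and the per-user cursor memory across all current sessions, keeping the 250-bytes-per-cursor assumption from the query above:

select (select sum(sharable_mem) from v$db_object_cache)
     + (select sum(sharable_mem) from v$sqlarea)
     + (select 250 * sum(value)
        from   v$sesstat s, v$statname n
        where  s.statistic# = n.statistic#
        and    n.name = 'opened cursors current') estimated_bytes
from dual;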

Performance Issues Related to Lib Cache


- Excessive parsing
- Wait events

Limiting Parsing
Types of Parsing
- Soft parse: syntax + semantics check only
- Hard parse: syntax + semantics + optimization, generating the execution plan for the query
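A quick way to gauge how much hard parsing is going on is to compare the standard parse statistics in V$SYSSTAT. This query is a generic sketch, not part of the original slides:

select name, value
from   v$sysstat
where  name in ('parse count (total)', 'parse count (hard)', 'execute count');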

Limiting Parsing
- Use bind variables
- Use session_cached_cursors
- Use cursor_sharing=force
- Avoid issuing SQL or PL/SQL blocks inside loops
- Consider the cursor_space_for_time parameter

Cursor space for time is an optimization which essentially results in holding pins on cursors and their associated frames/buffers for longer periods of time. The pins are held until the cursor is closed, instead of being released at end-of-fetch (the normal behavior). This reduces library cache pin traffic, which in turn reduces library cache latch gets. Cursor space for time is useful for large application environments where library cache latch contention, specifically due to pin gets, is a performance issue.
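To see whether parsing pressure is actually hurting the library cache, the standard V$LIBRARYCACHE view shows hit ratios, reloads and invalidations per namespace. This is a generic diagnostic sketch, not from the original slides:

select namespace, gets, gethitratio, pins, pinhitratio, reloads, invalidations
from   v$librarycache;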

Wait Events Related to Library Cache


Wait Events related to Library Cache:
library cache load lock
The session tries to find the load lock for the database object so that it can load the object. The load lock is always obtained in Exclusive mode, so that no other process can load the same object. If the load lock is busy, the session will wait on this event until the lock becomes available.
Wait Time: 3 seconds (1 second for PMON)
Parameters:
  object address - Address of the object being loaded.
  lock address - Address of the load lock being used.
  mask - Indicates which data pieces of the object need to be loaded.

library cache lock

This event controls the concurrency between clients of the library cache. It acquires a lock on the object handle so that either: one client can prevent other clients from accessing the same object, or the client can maintain a dependency for a long time (e.g., no other client can change the object). This lock is also obtained to locate an object in the library cache.
Wait Time: 3 seconds (1 second for PMON)
Parameters:
  handle address - Address of the object being loaded.
  lock address - Address of the lock being used. This is not the same thing as a latch or an enqueue; it is a State Object.
  mode - Indicates the data pieces of the object which need to be loaded.
  namespace - See "namespace".

library cache pin

This event manages library cache concurrency. Pinning an object causes the heaps to be loaded into memory. If a client wants to modify or examine the object, the client must acquire a pin after the lock.
Wait Time: 3 seconds (1 second for PMON)
Parameters:
  handle address - Address of the object being loaded.
  pin address - Address of the pin being used. This is not the same thing as a latch or an enqueue; it is basically a State Object.
  mode - Indicates which data pieces of the object need to be loaded.
  namespace - See "namespace".
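To see which sessions are currently waiting on any of these three events, the standard V$SESSION_WAIT view can be queried; the P1/P2 columns carry the handle and lock/pin addresses described above. This is a generic sketch, not from the original slides:

select sid, event, p1text, p1raw, p2text, p2raw, seconds_in_wait
from   v$session_wait
where  event like 'library cache%';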
More about Lib Cache Locks and Pins


Q1. In what kind of case is "library cache lock" used?
This event controls the concurrency between clients of the library cache. It acquires a lock on the object handle so that either: one client can prevent other clients from accessing the same object, or the client can maintain a dependency for a long time (for example, no other client can change the object). This lock is also obtained to locate an object in the library cache.
How many resources: database objects referenced during parsing or compiling of SQL or PL/SQL statements (table, view, procedure, function, package, package body, trigger, index, cluster, synonym); the lock is released at the end of the parse or compilation. Cursors (SQL and PL/SQL areas), pipes and any other transient objects do not use this lock. It is deadlock sensitive and the operation is synchronous.

Q2. Where is "library cache pin" used?
This event manages library cache concurrency. Pinning an object causes the heaps to be loaded into memory. If a client wants to modify or examine the object, the client must acquire a pin after the lock.

Q3. Why does Oracle need these two types of locks?
Both locks and pins are provided to access objects in the library cache. Locks manage concurrency between processes, whereas pins manage cache coherence. In order to access an object, a process must first lock the object handle, and then pin the object heap itself. Requests for both locks and pins will wait until granted; this is a possible source of contention, because there is no NOWAIT request mode. Locks and pins are externalized in X$KGLLK and X$KGLPN, respectively.
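Since locks are externalized in X$KGLLK, a blocker can be traced back to a session with a query along the following lines. This is a sketch only: it must be run as SYS, X$ column names can vary between releases, and the object name filter is a placeholder you would replace.

select s.sid, s.username, k.kglnaobj object_name,
       k.kgllkmod mode_held, k.kgllkreq mode_requested
from   x$kgllk k, v$session s
where  k.kgllkuse = s.saddr
and    k.kglnaobj like '%MY_OBJECT%';   -- hypothetical object name filter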

Some Fixes to Reduce Wait Events


How to reduce library cache lock waits?
1. Reduce reloads by increasing the shared pool size; the locks can be held for a long time if the pool is undersized.
2. Set cursor_sharing to SIMILAR.
3. Reduce invalidations, for example by separating batch jobs that gather statistics or perform other maintenance from the OLTP workload.

Apart from hard parsing, if a session wants to change the definition of the object specified in the SQL or make any other modification, it has to acquire a library cache lock along with a library cache pin. The pin is needed because the dictionary information must be loaded into memory in order to modify or change the object. Refer to Note 34579.1 for the library cache pin.

How to reduce library cache pin waits?
1. Avoid changing object definitions (alter / truncate / drop / gather statistics) during OLTP hours; perform such changes during off-peak periods when the load is lower.

Q4. What is the library cache load lock?
The session tries to find the load lock for the database object so that it can load the object. The load lock is always obtained in Exclusive mode, so that no other process can load the same object. If the load lock is busy, the session will wait on this event until the lock becomes available.

How to reduce library cache load lock waits?
If an object is not in memory, a library cache lock cannot be acquired on it, so the object has to be loaded into memory in order to acquire the lock. The session then tries to find the load lock for the database object so that it can load the object. To prevent multiple processes from requesting the load of the same object simultaneously, the other requesting sessions have to wait on the library cache load lock while the lock is busy loading the object into memory. Waits on the library cache load lock are therefore due to objects not being available in memory, which is typically caused by an undersized shared pool (forcing frequent reloads) and too many hard parses because of unshared SQL. To avoid this, the general recommendations are:
1. Increase the shared pool (to avoid frequent reloads).
2. Increase session_cached_cursors (to avoid cursors being flushed out of the shared pool).
3. Set cursor_sharing to FORCE (to reduce hard parsing).
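A minimal sketch of applying these recommendations; the sizes are placeholders, and the parameter scope varies by release (SESSION_CACHED_CURSORS is listed later in these slides as ALTER SESSION, while newer releases also allow ALTER SYSTEM):

alter system set shared_pool_size = 200M scope=spfile;   -- or edit init.ora; placeholder size
alter system set cursor_sharing = force;
alter session set session_cached_cursors = 100;          -- placeholder value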

Diff Between Lib Cache Latch and Lib Cache Pin Latch

Library cache latch: The library cache latches protect the cached SQL statements and object definitions held in the library cache within the shared pool. The library cache latch must be acquired in order to add a new statement to the library cache. During a parse, Oracle searches the library cache for a matching statement; if one is not found, Oracle will parse the SQL statement, obtain the library cache latch and insert the new SQL. The first step in reducing contention on this latch is to ensure that the application reuses SQL statement representations as much as possible: use bind variables whenever possible in the application. Misses on this latch may also be a sign that the application is parsing SQL at a high rate and may be suffering from too much parse CPU overhead. If the application is already tuned, SHARED_POOL_SIZE can be increased; be aware that if the application is not using the library cache appropriately, contention might be worse with a larger structure to manage. The _KGL_LATCH_COUNT parameter controls the number of library cache latches. The default value should be adequate, but if contention for the library cache latch cannot be resolved, it may be advisable to increase this value. The default value for _KGL_LATCH_COUNT is the next prime number after CPU_COUNT, and this value cannot exceed 66 (see Bug 1381824).

Library cache pin latch: The library cache pin latch must be acquired when a statement in the library cache is re-executed. Misses on this latch occur when there are very high rates of SQL execution. There is little that can be done to reduce the load on the library cache pin latch, although using private rather than public synonyms, or direct object references such as OWNER.TABLE, may help.
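To confirm whether these latches are actually under contention, the standard latch statistics can be examined. This generic sketch is not from the original slides and applies to releases that still use these latches (later versions replace them with mutexes):

select name, gets, misses, sleeps,
       round(misses / decode(gets, 0, 1, gets) * 100, 2) miss_pct
from   v$latch
where  name in ('library cache', 'library cache pin');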

Init.ora parameters effecting Lib Cache


OPEN_CURSORS

Parameter type: Integer
Default value: 50
Modifiable: ALTER SYSTEM
Range of values: 0 to 65535
Basic: Yes

OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors; the number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed. To see how many cursors are currently open instance-wide, type:
select * from v$sysstat where name = 'opened cursors current';
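To break this down per session and spot sessions approaching the OPEN_CURSORS limit, the standard session statistics can be joined to V$SESSION. This query is a generic sketch, not part of the original slides:

select s.sid, s.username, st.value open_cursors_current
from   v$session s, v$sesstat st, v$statname n
where  st.sid = s.sid
and    st.statistic# = n.statistic#
and    n.name = 'opened cursors current'
order  by st.value desc;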

Init.ora parameters effecting Lib Cache (Continued..)


SESSION_CACHED_CURSORS

Parameter type: Integer
Default value: 0
Parameter class: Dynamic: ALTER SESSION
Range of values: 0 to operating system-dependent
Real Application Clusters: Multiple instances can have different values.

SESSION_CACHED_CURSORS lets you specify the number of session cursors to cache. Repeated parse calls of the same SQL statement cause the session cursor for that statement to be moved into the session cursor cache. Subsequent parse calls will find the cursor in the cache and do not need to reopen the cursor. Oracle uses a least recently used algorithm to remove entries from the session cursor cache to make room for new entries when needed.
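Whether the session cursor cache is actually being hit can be judged from the standard statistics below; if 'session cursor cache hits' is low relative to total parse calls, increasing the parameter may help. This is a generic sketch, not from the original slides:

select name, value
from   v$sysstat
where  name in ('session cursor cache hits',
                'session cursor cache count',
                'parse count (total)');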

Init.ora parameters effecting Lib Cache (Continued..)


CURSOR_SHARING

Parameter type: String
Syntax: CURSOR_SHARING = {SIMILAR | EXACT | FORCE}
Default value: EXACT
Parameter class: Dynamic: ALTER SESSION, ALTER SYSTEM

CURSOR_SHARING determines what kinds of SQL statements can share the same cursors. Values:

FORCE - Forces statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect the meaning of the statement.
SIMILAR - Causes statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect either the meaning of the statement or the degree to which the plan is optimized.
EXACT - Only allows statements with identical text to share the same cursor.
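An illustrative sketch of the FORCE setting (the DEPT table is an assumption from the sample schema, not from these slides): with CURSOR_SHARING = FORCE the literals are replaced by system-generated binds such as :"SYS_B_0", so the two statements below collapse into one shared cursor.

alter session set cursor_sharing = force;
select dname from dept where deptno = 10;
select dname from dept where deptno = 20;

-- The rewritten, shared text can be observed in V$SQLAREA:
select sql_text, executions
from   v$sqlarea
where  sql_text like 'select dname from dept%';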

Init.ora parameters effecting Lib Cache (Continued..)


CURSOR_SPACE_FOR_TIME

Parameter type: Boolean
Default value: false
Parameter class: Static
Range of values: true | false

CURSOR_SPACE_FOR_TIME lets you use more space for cursors in order to save time. It affects both the shared SQL area and the client's private SQL area. Values:

TRUE - Shared SQL areas are kept pinned in the shared pool. As a result, shared SQL areas are not aged out of the pool as long as an open cursor references them. Because each active cursor's SQL area is present in memory, execution is faster. However, the shared SQL areas never leave memory while they are in use, so you should set this parameter to TRUE only when the shared pool is large enough to hold all open cursors simultaneously. In addition, a setting of TRUE retains the private SQL area allocated for each cursor between executions instead of discarding it after cursor execution, saving cursor allocation and initialization time.
FALSE - Shared SQL areas can be deallocated from the library cache to make room for new SQL statements.
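Because the parameter is static, it cannot be changed for a running instance; a minimal sketch of enabling it is to set it in the spfile (or text init.ora) and restart:

alter system set cursor_space_for_time = true scope=spfile;
-- or, in a text init.ora:
-- cursor_space_for_time = true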
