
OBIEE Cache Management

OBIEE caching is a file-based system; it is not in-memory. The extension of a cache file is .TBL.
Caching is very useful for achieving better performance: instead of going to the database, results come from a cache file on the OBIEE server itself, and this way we save network resources by not making the round trip. A Business Intelligence user usually queries a large Data Warehouse or Data Marts, and when users ask for the few important things they are looking for, performance always matters. Cache helps make a big win when it comes to performance.
We have to enable the cache. To do that, open the NQSConfig.INI file and set ENABLE to YES in the cache section. There are then two ways to see the cache entries:
1) First way: view the cache entries from the Administration Tool > Manage > Cache.
2) Second way: check the cache files in a server directory (the location is the path we gave in NQSConfig.INI).
So cache helps us in:
1) Achieving better query performance by satisfying the request from a cache file on the OBIEE server itself instead of going to the database server. (There are certain criteria that must be met for a cache hit to occur.)
2) Saving network resources by not making the round trip.
But with this performance comes some cost:
1) Human resources to manage the cache
2) Physical resources (data storage, etc.)
3) Keeping the cache up to date. (If users get data quickly but it is not fresh/updated, it does not make sense either.)
Below is the process flow, or architecture, of the OBIEE cache.

Cache System Architecture


Steps to Configure the Cache

Step 1: Change the NQSConfig.INI file


Go to \OracleBI\server\Config\NQSConfig.INI.
In NQSConfig.INI, go to the cache section; you will see something like the screenshot below.

Cache Section of the Config File


All the parameters are basically self-explanatory: the path of the cache files, their size, the number of rows, and so on.
Oracle suggests keeping the cache files on high-speed storage.
Step 2: Managing the cache.
We can manage the cache with various techniques:
A: Configuring the cache parameters shown in the file above
B: Controlling caching at the Physical layer in the Administration Tool
Table-level caching
Here you can manage caching directly on the physical table.
Double-click on the physical table > General tab > enable the caching option by clicking it. This way you can enable caching for a particular table.
There are two options for how long you want the cache entry to live:
1) Caching never expires
2) Cache persistence time
C: Using the Cache Manager
D: Automatically purging cache entries
E: Seeding the cache and the event polling table.
A: Configuring cache parameters in NQSConfig.INI
Go to the [ CACHE ] section of the config file.
1 ENABLE: must be YES for the cache to work.
2 DATA_STORAGE_PATHS: the path where cache files are stored, together with the maximum allowed size.
3 MAX_ROWS_PER_CACHE_ENTRY: the maximum number of rows in any cache entry. This parameter helps avoid caching runaway query results.
4 MAX_CACHE_ENTRY_SIZE: the maximum size of each cache file.
5 MAX_CACHE_ENTRIES: the maximum number of cache files in the specified directory.
Note: when the limit specified in MAX_CACHE_ENTRIES is reached, the server deletes the least recently used entry (i.e., the file that has not been used for the longest time) to accommodate the new entry.
6 POPULATE_AGGREGATE_ROLLUP_HITS: default is NO. When YES, the server stores the aggregate result even if that request is being served from the cache.
7 MAX_SUBEXPR_SEARCH_DEPTH: the server searches expressions down to the specified level when looking for a cache hit.
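Putting the parameters above together, a [ CACHE ] section might look like the sketch below. The path and size values here are examples only, not recommendations; adjust them to your own environment.

```ini
[ CACHE ]
ENABLE = YES;
# One or more storage paths, each followed by its capacity limit (example values)
DATA_STORAGE_PATHS = "C:\OracleBIData\cache" 256 MB;
MAX_ROWS_PER_CACHE_ENTRY = 100000;  # 0 means no row limit per entry
MAX_CACHE_ENTRY_SIZE = 1 MB;
MAX_CACHE_ENTRIES = 1000;           # LRU eviction once this count is reached
POPULATE_AGGREGATE_ROLLUP_HITS = NO;
MAX_SUBEXPR_SEARCH_DEPTH = 7;
```

After changing any of these values, restart the Oracle BI Server so the new settings take effect.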
B: Controlling caching at the Physical layer in the Administration Tool

Physical Properties of a Table: Cache


Please note the Cacheable checkbox. By default, all physical tables are cacheable.
The other two options determine until what time the cache is valid:
Cache never expires: puts the entry in the cache permanently, unless it is cleaned up by the MAX_CACHE_ENTRIES limit (parameter 5 above).
Cache persistence time: determines the validity time of the cache entry.
C: Using the Cache Manager
Open the repository in online mode.
Go to Manage > Cache; you will see the screen below. The OBIEE administrator can control the cache from here as well.

Cache Manager: Cache tab

Cache Manager: Physical tab

Cache Operations
You can see all the cache entries as shown in the screen above. From here you can purge the cache, see the SQL being generated, copy and save the SQL, and view the info, which basically shows you the configuration parameters for the cache.
So this is helpful for purging cache entries manually.
I would suggest you explore all the options.
D: Automatically purging cache entries
A cache entry is automatically purged when the MAX_CACHE_ENTRIES limit is reached, or when the persistence time set at the Physical layer expires.
Another way to automatically purge the cache is to use an event polling table.
The event polling table records the status of the ETL, and based on that the server purges the cache. The limitation is that this is not purely event-based: we cannot purge and seed the cache at the exact moment the ETL finishes. Instead, the server reads the polling table at a specified interval and decides its action.
This is a separate post in itself; I will address it in a future post.
E: Seeding the cache
In plain English, seeding the cache means running a report automatically or manually (mostly during non-business hours) before a user actually needs it.
So when the user needs that data, it is read from the cache, and the performance is excellent.
1 You can run the report manually to generate the cache (though this is not practical during non-business hours).
2 Set up an iBot to run the report at a specific time so that it generates the cache.
Please refer to the screen print below for better understanding.

Seeding the cache: you may create an iBot for the report for which you want to seed the cache. In the Destinations tab, select the Oracle BI Server Cache checkbox to seed the cache, as shown above.

OBIEE REPORT/DASHBOARD PERFORMANCE TUNING


1. First, check NQServer.log.
2. In the Admin Tool, click Session Manager. Check whether there is any bottleneck and resolve it accordingly.
Refresh the sessions.
3. In Analytics or Answers, go to Administration -> Manage Sessions.
Check the physical query and execute it from the back end (BE).
Performance tuning in real time
Performance tuning is a huge topic;
let's take a step-by-step approach, as follows.
Check whether the same query is running or anyone has updated the query. If the same query is running, follow the steps given below.

When you open the dashboard page, first figure out whether the prompt or the report is taking more time.

If it is the prompt, check for any multi-select prompts that take time to load all values (best practice is to set default values).

If the report is taking longer, set some default filters, query for just a couple of records, and check the report performance.

Take the physical SQL from the session log (if the SQL is not generated, check whether the log level is less than 2).

Run this physical SQL in TOAD or any SQL editor.

Check the explain plan for the cost of the query.

Hash joins and Cartesian joins always kill performance.

Sometimes, the "Force inner join" setting also helps to force an inner join between two tables.

Check whether any full table scans are happening instead of index scans.

OBIEE 10G/11G - Event Polling Table (Event Table) to purge the cache after the ETL process

The use of an Oracle BI Server event polling table (event table) is a way to notify the Oracle BI
Server that one or more physical tables have been updated and then that the query cache entries
are stale. Each row that is added to an event table describes a single update event, such as an
update occurring to a Product table. The Oracle BI Server cache system reads rows from, or
polls, the event table, extracts the physical table information from the rows, and purges stale
cache entries that reference those physical tables.
The event table is a physical table that resides on a database accessible to the Oracle BI Server.
Regardless of where it resides (in its own database, or in a database with other tables), it
requires a fixed schema.
It is normally exposed only in the Physical layer of the Administration Tool, where it is identified
in the Physical Table dialog box as being an Oracle BI Server event table.
It does require the event table to be populated each time a database table is updated. Also,
because there is a polling interval in which the cache is not completely up to date, there is always
the potential for stale data in the cache.
A typical method of updating the event table is to include SQL INSERT statements in the
extraction and load scripts or programs that populate the databases. The INSERT statements add
one row to the event table each time a physical table is modified.
After this process is in place and the event table is configured in the Oracle BI repository, cache
invalidation occurs automatically. As long as the scripts that update the event table are accurately
recording changes to the tables, stale cache entries are purged automatically at the specified
polling intervals.
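The polling cycle described above can be sketched in a few lines of Python, with an in-memory SQLite database standing in for the event table and a plain dict standing in for the cache bookkeeping. This is only an illustration of the logic (read event rows, purge cache entries that reference the updated tables, delete the processed rows); the real implementation lives inside the Oracle BI Server, and the function name and simplified two-column table here are invented for the sketch.

```python
import sqlite3

def poll_and_purge(conn, cache):
    """One polling cycle: read event rows, purge cache entries that
    reference the updated physical tables, then delete the processed rows.
    `cache` maps a physical table name to a set of cached query ids."""
    rows = conn.execute("SELECT TABLE_NAME, UPDATE_TS FROM S_NQ_EPT").fetchall()
    purged = []
    for table_name, update_ts in rows:
        # Purge every cache entry that references the updated table.
        purged.extend(cache.pop(table_name, set()))
        # Remove the processed event row so it is not handled twice.
        conn.execute(
            "DELETE FROM S_NQ_EPT WHERE TABLE_NAME = ? AND UPDATE_TS = ?",
            (table_name, update_ts),
        )
    conn.commit()
    return purged

# Demo: simulate an ETL script inserting one event row for PRODUCTS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE S_NQ_EPT (TABLE_NAME TEXT NOT NULL, UPDATE_TS TEXT NOT NULL)")
conn.execute("INSERT INTO S_NQ_EPT VALUES ('PRODUCTS', '2010-06-16 00:44:53')")

cache = {"PRODUCTS": {"q1", "q7"}, "CUSTOMERS": {"q2"}}
print(sorted(poll_and_purge(conn, cache)))  # ['q1', 'q7'] - entries touching PRODUCTS are purged
print(sorted(cache))                        # ['CUSTOMERS'] - unrelated entries survive
```

Note how the processed event rows are deleted at the end of the cycle; the real server does the same, which is why the NQQuery.log trace shown later in this post ends with a DELETE against S_NQ_EPT.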
Polling Table Structure
You can set up a physical event polling table on each physical database to monitor changes in the
database. The event table should be updated every time a table in the database changes.

The event table needs to have the structure shown below. The column names shown are only
suggestions; you can use any names you want. However, the order of the columns in the Physical
layer of the repository must be the same, in ascending alphabetical order.

Column name (in ascending alphabetical order), data type, nullability, description, and advice:

CatalogName (CHAR or VARCHAR, nullable)
Description: The name of the catalog where the physical table that was updated resides.
Advice: Populate CatalogName only if the event table does not reside in the same database as the physical tables that were updated. Otherwise, set it to the null value.

DatabaseName (CHAR or VARCHAR, nullable)
Description: The name of the database where the physical table that was updated resides.
Advice: Populate DatabaseName only if the event table does not reside in the same database as the physical tables that were updated. Otherwise, set it to the null value.

Other (CHAR or VARCHAR, nullable)
Description: Reserved for future enhancements.
Advice: This column must be set to the null value.

SchemaName (CHAR or VARCHAR, nullable)
Description: The name of the schema where the physical table that was updated resides.
Advice: Populate SchemaName only if the event table does not reside in the same database as the physical tables being updated. Otherwise, set it to the null value.

TableName (CHAR or VARCHAR, not null)
Description: The name of the physical table that was updated.
Advice: The name has to match the name defined for the table in the Physical layer of the Administration Tool.

UpdateTime (DATETIME, not null)
Description: The time when the update to the event table occurs. This needs to be a key (unique) value that increases for each row added to the event table.
Advice: To ensure a unique and increasing value, specify the current timestamp as a default value for the column. For example, specify DEFAULT CURRENT_TIMESTAMP for Oracle 8i.

UpdateType (INTEGER, not null)
Description: Specify a value of 1 in the update script to indicate a standard update.
Advice: Other values are reserved for future use.

Step by Step for the Oracle database


Create the user
CREATE user OBIEE_REPO IDENTIFIED BY OBIEE_REPO;
GRANT connect,resource TO OBIEE_REPO;
Create the table

In 10g: Obi_Home\bi\server\Schema\SAEPT.Oracle.sql
--------------------------------------- Create the Event Polling Table ---------------------------------------
CREATE TABLE S_NQ_EPT (
  UPDATE_TYPE    DECIMAL(10,0) DEFAULT 1       NOT NULL,
  UPDATE_TS      DATE          DEFAULT SYSDATE NOT NULL,
  DATABASE_NAME  VARCHAR2(120) NULL,
  CATALOG_NAME   VARCHAR2(120) NULL,
  SCHEMA_NAME    VARCHAR2(120) NULL,
  TABLE_NAME     VARCHAR2(120) NOT NULL,
  OTHER_RESERVED VARCHAR2(120) DEFAULT NULL    NULL
);

In 11g, the table is present in the BIPLATFORM metadata repository.


Import the table with ODBC (not with OCI)

Import the table with ODBC, then change the call interface of the connection pool to OCI.
If you don't have an Oracle ODBC connection, you can simply delete the table after
importing it with OCI and paste the following UDML statement for the schema OBIEE_REPO:
DECLARE TABLE "ORCL".."OBIEE_REPO"."S_NQ_EPT" AS "S_NQ_EPT" NO INTERSECTION PRIVILEGES ( READ);
DECLARE COLUMN "ORCL".."OBIEE_REPO"."S_NQ_EPT"."UPDATE_TYPE" AS "UPDATE_TYPE" TYPE "DOUBLE" PRECISION 10 SCALE 0 NOT NULLABLE PRIVILEGES ( READ);
DECLARE COLUMN "ORCL".."OBIEE_REPO"."S_NQ_EPT"."UPDATE_TS" AS "UPDATE_TS" TYPE "TIMESTAMP" PRECISION 19 SCALE 0 NOT NULLABLE PRIVILEGES ( READ);
DECLARE COLUMN "ORCL".."OBIEE_REPO"."S_NQ_EPT"."DATABASE_NAME" AS "DATABASE_NAME" TYPE "VARCHAR" PRECISION 120 SCALE 0 NULLABLE PRIVILEGES ( READ);
DECLARE COLUMN "ORCL".."OBIEE_REPO"."S_NQ_EPT"."CATALOG_NAME" AS "CATALOG_NAME" TYPE "VARCHAR" PRECISION 120 SCALE 0 NULLABLE PRIVILEGES ( READ);
DECLARE COLUMN "ORCL".."OBIEE_REPO"."S_NQ_EPT"."SCHEMA_NAME" AS "SCHEMA_NAME" TYPE "VARCHAR" PRECISION 120 SCALE 0 NULLABLE PRIVILEGES ( READ);
DECLARE COLUMN "ORCL".."OBIEE_REPO"."S_NQ_EPT"."TABLE_NAME" AS "TABLE_NAME" TYPE "VARCHAR" PRECISION 120 SCALE 0 NOT NULLABLE PRIVILEGES ( READ);
DECLARE COLUMN "ORCL".."OBIEE_REPO"."S_NQ_EPT"."OTHER_RESERVED" AS "OTHER_RESERVED" TYPE "VARCHAR" PRECISION 120 SCALE 0 NULLABLE PRIVILEGES ( READ);

When you import the table with OCI, the UDML metadata is not the same: the syntax for the
definition of a column differs. You can see in the two statements below that the OCI metadata
has an extra EXTERNAL clause that you don't find in the ODBC metadata.

Declaration of the column CATALOGNAME after import with OCI:


DECLARE COLUMN "ORCL".."OBIEE_REPO"."S_NQ_EPT"."CATALOGNAME" AS "CATALOGNAME" EXTERNAL "obiee_repo"
TYPE "VARCHAR" PRECISION 40 SCALE 0 NULLABLE PRIVILEGES ( READ);

Declaration of the column CATALOGNAME after import with ODBC:


DECLARE COLUMN "ORCL".."OBIEE_REPO"."S_NQ_EPT"."CATALOGNAME" AS "CATALOGNAME"
TYPE "VARCHAR" PRECISION 40 SCALE 0 NULLABLE PRIVILEGES ( READ);

Define the table as an event table

Restart the Oracle BI Server service to be sure that the table is now seen as
a valid event table. If there are any errors in NQServer.log during startup,
correct them.
Test

Create a report with a product attribute to seed the cache.

Insert a row to instruct OBIEE to delete from the cache every cached SQL
query that references the product table.

INSERT INTO S_NQ_EPT (
  update_type,
  update_ts,
  database_name,
  catalog_name,
  schema_name,
  table_name,
  other_reserved
)
VALUES (
  1,
  sysdate,
  'orcl SH',
  NULL,
  'SH',
  'PRODUCTS',
  NULL
);

Wait for the polling interval to elapse, then verify that the cache entry is deleted
and that you can find the trace below in the NQQuery.log file.

+++Administrator:fffe0000:fffe001c:----2010/06/16 00:45:33
-------------------- Sending query to database named ORCL (id: <<13628>>):
select T4660.UPDATE_TYPE as c1,
T4660.UPDATE_TS as c2,
T4660.DATABASE_NAME as c3,
T4660.CATALOG_NAME as c4,
T4660.SCHEMA_NAME as c5,
T4660.TABLE_NAME as c6
from
S_NQ_EPT T4660
where ( T4660.OTHER_RESERVED in ('') or T4660.OTHER_RESERVED is null )
minus
select T4660.UPDATE_TYPE as c1,
T4660.UPDATE_TS as c2,
T4660.DATABASE_NAME as c3,
T4660.CATALOG_NAME as c4,
T4660.SCHEMA_NAME as c5,
T4660.TABLE_NAME as c6
from
S_NQ_EPT T4660
where ( T4660.OTHER_RESERVED = 'oracle10g' )
+++Administrator:fffe0000:fffe001c:----2010/06/16 00:45:33
-------------------- Sending query to database named ORCL (id: <<13671>>):
insert into
S_NQ_EPT("UPDATE_TYPE", "UPDATE_TS", "DATABASE_NAME", "CATALOG_NAME",
"SCHEMA_NAME", "TABLE_NAME", "OTHER_RESERVED")
values (1, TIMESTAMP '2010-06-16 00:44:53', 'orcl SH', '', 'SH', 'PRODUCTS',
'oracle10g')

+++Administrator:fffe0000:fffe0003:----2010/06/16 00:45:33
-------------------- Cache Purge of query:
SET VARIABLE
QUERY_SRC_CD='Report',SAW_SRC_PATH='/users/administrator/cache/answers_with_pr
oduct';
SELECT Calendar."Calendar Year" saw_0, Products."Prod Category" saw_1, "Sales
Facts"."Amount Sold"
saw_2 FROM SH ORDER BY saw_0, saw_1
+++Administrator:fffe0000:fffe001c:----2010/06/16 00:45:33
-------------------- Sending query to database named ORCL (id: <<13672>>):
select T4660.UPDATE_TIME as c1
from
S_NQ_EPT T4660
where ( T4660.OTHER_RESERVED = 'oracle10g' )
group by T4660.UPDATE_TS
having count(T4660.UPDATE_TS) = 1
+++Administrator:fffe0000:fffe001c:----2010/06/16 00:45:33
-------------------- Sending query to database named ORCL (id: <<13716>>):
delete from
S_NQ_EPT where

S_NQ_EPT.UPDATE_TS = TIMESTAMP '2010-06-16 00:44:53'

Support
The cache polling event table has an incorrect schema

In the NQServer.log file


[56001] The cache polling event table S_NQ_EPT has an incorrect schema.

You must import the event table via ODBC 3.5; when you use OCI, the UDML of the table is
different. You can then change the call interface back to OCI after the import.
The physical table in a cache polled row does not exist

In the NQServer.log file


[55001] The physical table ORCL::SH:PRODUCTS in a cache polled row does not
exist.

Verify the location of your table.


For instance, in my case, the correct location was orcl SH::SH:PRODUCTS, as you can see in the
picture below.

The cache polling SELECT query failed for table


When you get the following trace in NQQuery.log, it's because:

your event table was deleted,

or the name in the Physical layer no longer matches the table in
the database.

2010-06-15 07:52:28 [55003] The cache polling SELECT query failed for table S_NQ_EPT.
2010-06-15 07:52:28 [nQSError: 17001] Oracle Error code: 942, message: ORA-00942: table or view does not exist at OCI call OCIStmtExecute.
[nQSError: 17010] SQL statement preparation failed.
2010-06-15 07:52:28 [55005] The cache polling delete statement failed for table S_NQ_EPT.

faulting module NQSCache.dll


Oracle BI Server can literally crash when you use the example script given in the
documentation (with an OTHER column). You can find this information in the Event Viewer of
Windows.
Description: Faulting application NQSServer.exe, version 10.1.3.4, faulting
module NQSCache.dll,
version 10.1.3.4, fault address 0x00016b43.

Solution: change the structure of the table by using the script in the Schema directory (as shown
above in this document).