Demantra Performance Solutions - Real World Problems and Solutions [ID 1081962.1]
In this Document
Abstract
Document History
Demantra Performance Solutions - Real World Problems and Solutions
Summary
Applies to:
Abstract
Purpose
-------
To guide the participant through real-world performance solutions, maintenance and setup.
The problems below are actual Demantra performance issues as reported from the field or
end-user customer communities. At the end of this document you will find two helpful guides
for reporting your performance issues to Oracle Support.
- Performance Problem? Save Time, Gather the Required Data Before Contacting Oracle Support
- Steps to take before logging a performance SR
Also, please see the Demantra Performance Best Practices Guide, Doc ID 1081936.1.
Document History
Author:
Create Date 07-Apr-2010
Update Date 07-Apr-2010
Expire Date 07-Apr-2013 (ignore after this date)
==============================================================================================================================
Performance Solutions
==============================================================================================================================
===============================================================
The source RDBMS server
===============================================================
The Shipment and Booking History program tries to get all data from the table OE_ORDER_LINES_ALL
for the given date ranges. Since there are currently no indexes on the date columns, it has no other
option but to do a FULL TABLE SCAN.
Indexes can be created at the source instance, to be used specifically for the Demantra data collection
process. The index(es) required will depend upon the collection parameters passed to the Shipment and
Booking History concurrent program. Which streams do you plan to collect into Demantra?
- If you collect booking history only, then only one index (on the request_date column) needs to be created.
- If you collect shipment history as well, then two indexes, one each for the request_date and actual_shipment_date columns, need to be created.
Depending upon the streams that you intend to collect into Demantra, you will have to define either one or two
indexes on OE_ORDER_LINES_ALL.
The custom index(es) can be made passive, meaning they are needed only while running the collection
program. When not needed, these indexes can be disabled.
Note: Maintenance of the custom indexes is the customer's responsibility.
It is up to the customer whether an index is present all the time or only during the collection.
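The index definitions can be sketched as follows. This is illustrative only: the index names, schema, and the UNUSABLE/REBUILD approach to making the indexes passive are assumptions, not Oracle-delivered objects.

```sql
-- Illustrative index for booking-history collection (request_date filter).
CREATE INDEX xxdem_oe_lines_req_date
  ON ont.oe_order_lines_all (request_date);

-- Additional illustrative index for shipment-history collection.
CREATE INDEX xxdem_oe_lines_ship_date
  ON ont.oe_order_lines_all (actual_shipment_date);

-- One way to make an index "passive": mark it unusable outside collection
-- windows so DML does not maintain it, and rebuild it before collecting.
ALTER INDEX xxdem_oe_lines_req_date UNUSABLE;
ALTER INDEX xxdem_oe_lines_req_date REBUILD;
```

Note that while an index is unusable, skip_unusable_indexes must be TRUE (the default) for DML on the table to succeed.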
===============================================================
The Demantra RDBMS server
===============================================================
As a rule, the number in the Chain_Cnt column should not be higher than 5% of the number found in the
Num_Rows column.
Note: The objects that we will concentrate on are the Sales_Data and Mdp_Matrix tables, although others are certainly important.
If it is higher than 5%, this indicates fragmentation of the table and will require a rebuild of those objects. We will also look at the Last_Analyzed column
as well as the Sample_Size column (the latter as a percentage of the number in the Num_Rows column).
Note that a Chain_Cnt of 0 for a large table like Sales_Data either represents a very efficiently laid out
tablespace or, more likely, that the Sample_Size of the computed statistics was not large enough to discover the presence of chained rows.
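The chained-row ratio and the statistics columns mentioned above can be checked with a simple dictionary query (a sketch; run as a user who can see the Demantra schema):

```sql
-- Chain_Cnt as a percentage of Num_Rows, plus the statistics metadata.
-- CHAIN_CNT is only meaningful if statistics are current and the sample
-- was large enough (see LAST_ANALYZED and SAMPLE_SIZE).
SELECT table_name,
       num_rows,
       chain_cnt,
       ROUND(100 * chain_cnt / NULLIF(num_rows, 0), 2) AS chain_pct,
       last_analyzed,
       sample_size
  FROM all_tables
 WHERE table_name IN ('SALES_DATA', 'MDP_MATRIX');
```

A CHAIN_PCT above 5 suggests the table is a rebuild candidate, per the rule above.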
A simple, but not 100% effective, solution in Oracle is to create such tables with the NOLOGGING keyword.
A more effective solution is to use a separate tablespace for temporary tables, and to create that tablespace
with logging disabled.
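Both approaches can be sketched as follows; the table, tablespace, and file names are illustrative assumptions:

```sql
-- Per-table: NOLOGGING skips redo for direct-path operations on this table.
CREATE TABLE sim_temp_work NOLOGGING
  AS SELECT * FROM sales_data WHERE 1 = 0;

-- Per-tablespace: objects created here default to NOLOGGING.
CREATE TABLESPACE ts_demantra_tmp
  DATAFILE '/u01/oradata/dem/ts_demantra_tmp01.dbf' SIZE 2G
  NOLOGGING;
```

Keep in mind that NOLOGGING objects cannot be restored by media recovery, which is acceptable for scratch tables but should be a deliberate choice.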
Tablespace Av Rd(ms)
------------------- -----------
TS_MDP_MATRIX 75213.66
TS_SALES_DATA 211230.12
TS_PROMOTION_DATA 204440.64
TEMP 10.49
TS_PROMOTION_DATA_X 209627.96
TS_SALES_DATA_X 181928.17
SYSTEM 178143.95
UNDOTBS1 158710.24
TS_DP 204306.89
SYSAUX 131322.26
TS_MDP_MATRIX_X 93844.24
TS_DP_X 139546.01
TS_MANUALS_X 162708.28
TS_MANUALS 262186.11
TS_SALES_DATA_ENGINE 0
TS_SALES_DATA_ENGINE_X 50
TS_SIM 30
TS_SIM_X 0
USERS 50
The I/O stats have to be investigated from a hardware perspective. The normal range
should be between 5-10 ms, nowhere near 100,000 ms.
To put this into perspective: if the average read time from TS_MDP_MATRIX is 75213.66 ms, then
every read takes 75213.66 / 1000 ≈ 75 seconds, i.e. 1 minute and 15 seconds!
We used the same SQL from above (Cartesian product) on a local schema and the SQL
returned in 200 ms; PROMOTION_DATA has 25 million rows, MDP_MATRIX 1.2 million.
Database I/O: To reduce system I/O you may need to rebuild the tables by
primary key. To determine whether a table rebuild is needed, run the SQL from
document 1085012.1. It reports the out-of-sequence ratio of the two big tables of the system that
are heavily used by the worksheets. These queries take time to run, so
it is better to run them in a test environment first and gauge the timing before
executing in Production.
*Note: If it takes hours, then run this in Production during off hours.
You can determine just how "out-of-order" a table is by using these instructions:
If you decide to reorder / rebuild the table you will need to do something like this:
1. Create a new table structure exactly like the old table.
2. INSERT INTO new_table SELECT * FROM old_table ORDER BY primary_key_fields;
3. Create new indexes on the new table.
4. Rename the old table and its indexes to keep them as a backup
5. Rename the new table and indexes to use the official names.
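For SALES_DATA the five steps might look like this. This is a sketch only: the primary key columns shown (item_id, location_id, sales_date) are the usual SALES_DATA key but must be verified against your PK definition, and constraints, triggers, and grants on the real table also need to be carried over.

```sql
-- 1. New table with the same structure, no rows yet.
CREATE TABLE sales_data_new AS
  SELECT * FROM sales_data WHERE 1 = 0;

-- 2. Copy the rows in primary-key order (direct-path insert).
INSERT /*+ APPEND */ INTO sales_data_new
  SELECT * FROM sales_data
   ORDER BY item_id, location_id, sales_date;
COMMIT;

-- 3. Recreate the indexes and primary key on sales_data_new here.

-- 4. Keep the old table as a backup, and
-- 5. give the new table the official name.
RENAME sales_data TO sales_data_bkp;
RENAME sales_data_new TO sales_data;
```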
You can determine how much you will save by executing the following, replacing:
-- <TABLE> with the table name in question. This should be submitted once for each heavily used table.
-- <KEY COLUMNS> with a list of the primary key column names, in the order in which they appear in the PK.
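The Oracle-supplied savings SQL is not reproduced here. As an illustrative stand-in, the clustering factor of the primary-key index gives a similar out-of-order signal: for a well-ordered table it is close to the number of table blocks, while for a badly ordered one it approaches NUM_ROWS.

```sql
-- Illustrative only: compare each unique index's clustering factor with
-- the table's block count (good case) and row count (bad case).
SELECT i.table_name,
       i.index_name,
       i.clustering_factor,
       t.blocks,
       t.num_rows
  FROM all_indexes i
  JOIN all_tables  t ON t.table_name = i.table_name
 WHERE i.table_name IN ('SALES_DATA', 'MDP_MATRIX')
   AND i.uniqueness = 'UNIQUE';
```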
- The Materialized View might not work, depending on the specific series expressions that you use.
- You need to choose between a regular View and a Materialized View based on your usage patterns.
- If you do many exports with little data change, then Materialized Views may be faster.
We are working on guidelines and procedures. For now, it is more important to determine whether your
implementation is a candidate. When we release greater detail, the notice will be available in the
Demantra Forum:
http://myforums.oracle.com/jive3/forum.jspa?forumID=1414
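As a hedged sketch of the Materialized View option (the view name and series columns are assumptions, not the Demantra export definition):

```sql
-- REFRESH COMPLETE ON DEMAND suits the "many exports, little data change"
-- pattern: exports read precomputed rows; refresh only after data loads.
CREATE MATERIALIZED VIEW mv_export_demand
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
SELECT sd.item_id,
       sd.location_id,
       sd.sales_date,
       sd.actual_quantity          -- illustrative series column
  FROM sales_data sd;

-- Refresh after each load (SQL*Plus syntax):
EXEC DBMS_MVIEW.REFRESH('MV_EXPORT_DEMAND', 'C');
```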
===============================================================
The Demantra Parameter Settings
===============================================================
3. client.worksheet.calcSummaryExpressions=0
If you typically have 4 users running worksheets at the exact same time, not staring at the
screen but actually RUNNING the worksheets, then you should set threadpool.query_run.per_user=4
(which is 16/4).
*Note that 4 simultaneous users running worksheets is probably equal to 30 or 40 users logged in.
UseParallelExportHint
=====================
I cannot make the new UseParallelExportHint functionality work correctly when testing Demantra processes.
The customer ran the integration interface but it created the view without the hints. Is there a profile that needs
to be set to add the hint?
The customer confirmed that it now works after enabling the parameter 'UseParallelExportHint'.
1) Use 'open with' instead of enabling extra filters. This method can make sense, and it also eliminates
clicks on the user side.
2) MaxAvailableFilterMembers specifies the maximum number of members that can be retrieved in the
worksheet filter screen. The ability to configure the max selected filters member is accomplished by
adjusting the value of MaxSqlInExpressionTokens in AppServer.properties file. The webserver will need to be restarted after modifying AppServer.properties file.
There have been a number of requests to obsolete this parameter, to make the selection of
MaxAvailableFilterMembers automatic.
- Some customers do not want the restriction set on the number of filter members.
- If you set the MaxAvailableFilterMembers too high, it affects the performance.
- Setting value too low limits number of filter members.
Prior to 7.2, MaxAvailableFilterMembers could not be set higher than 1000. 7.2 and above provide the
ability to set upper limits using the following procedure:
2. Based on the number of members available, in this case 3000, update the parameter accordingly:
UPDATE SYS_PARAMS
SET PVAL = '3000'
WHERE PNAME = 'MaxAvailableFilterMembers'
===============================================================
The Demantra Worksheet
===============================================================
- The crosstab includes series (number of columns), date aggregation and range (rows per combination),
and aggregation levels.
- The total amount of memory a worksheet consumes (the number of cells in it) is contributed by all
three factors, and more. You will need to balance them so the crosstab does not have too many cells in it.
- So the answer is complex. In most cases I would suggest having as many levels as you can in the page items and not more than 2-4 levels in the crosstab.
- One common mistake is having levels in the crosstab that are mainly descriptive. A more appropriate approach is to create these as level
attributes and present them as a series, thus avoiding the need to process those levels in the crosstab.
In summary, the number of members seems to have little or no effect on loading times. Applying loads of filters to decrease the amount of members will still lead to
several minutes load time or perhaps an out of memory error.
* The number of rows in combination with order/amount of levels in the crosstab does have a very large
impact on performance.
Steps to reproduce
------------------
Open a member which has more than about 14 rows in a certain worksheet and you will receive an OutOfMemoryError. Alternatively, if you have increased the memory
amount in the JRE parameters, you might not receive the OutOfMemoryError, but loading will still take a very long time.
Very poor worksheet performance is also connected to worksheet design. In this case there were 10 levels in the crosstab. In addition, Excel-like designs can suffer from performance
problems. See more below.
1. Series which call the GET_MAX_DATE application function, which executes selects on the SYS_PARAMS table for each expression on each row being
aggregated. At one customer site, replacing it with a constant value reduced run time by 25%.
2. GL series with EXTRA_FROM and EXTRA_WHERE that include the ITEMS/LOCATION tables. These should be replaced by the MDP_MATRIX table, which is already
included in the select, to avoid adding extra tables. At one customer site, replacing them with MDP_MATRIX reduced run time by 8%.
3. Server expression complexity can be too high and include repetition of the same columns using the NVL function several times. Verify your server expression(s).
Even with more memory, it takes a relatively long time to render a worksheet whose database query completes in 3 seconds with very few rows.
Plus we are still receiving many out-of-memory errors, which leave the user hanging.
Answer
------
In this case, poor worksheet performance is mostly caused by your worksheet design:
you have 10 levels in the crosstab. We had a few sessions with the implementation team.
The customer is using Excel sheets to support the current process. They wanted to use Demantra like Excel
and mimic the Excel sheets' behavior in Demantra. The problem was that they had 10 or more levels and
wanted to put them all in the crosstab.
We explained the differences between Demantra and Excel, the memory limitations, and
the alternatives:
- Use the page items rather than putting all levels in the crosstab.
- Show different parts of the data not in the main worksheet, but embedded in the worksheet.
- Present some of the level information as a level attribute and show it as a series, not as a level.
- Use 'open with' and do not load the full data set all the time.
- We have changed the client expressions in the BLE to not update the series unless the final value
of the series is greater than zero.
- This prevents the generation of empty (zero) rows during BLE execution.
SYMPTOM
-------
- When we attempt to open the worksheet with a relatively small volume of data, the performance is bad.
- The worksheet continues to display the message 'Loading' but never comes up.
- We also enabled the Java console and re-tested to determine whether there are any error messages.
- After re-testing, the following error message was displayed, then everything froze.
SOLUTION
--------
From the Java client log:
66,000 combinations is a HUGE amount of data. You will need to redesign this worksheet so that it pulls
less data. Using filters is a good place to start.
Question
--------
Is there any way to speed up the rolling update? As we have increased the volume of data, the runtime for the rolling update has increased and is now one of the main
bottlenecks.
Answer
------
We acknowledge that the rolling update is a bottleneck and does not scale up nicely. We are currently rewriting the entire rolling-update code to make it
faster and more scalable. The new procedure will start to roll out to the different versions in the near future.
Problem
-------
We tried to open a worksheet and it caused an outOfMemory error. I have 71,000
combinations. I have other worksheets that have 19,000, which is still too many,
but that worked without error.
I know that we need to reconfigure this worksheet in order to lower the amount of
data being retrieved but is there a method to increase the available memory?
Solution
--------
Increase the available memory via the Java control panel. Before setting the -Xmx parameter
in the Java control panel, ensure that all of the applications running Java on that machine,
and of course the browser, are closed. Make the change for ALL known Java environments.
Test Case
---------
I had two desktop workstations, one with 1 GB of memory and the other with 3 GB. I was experiencing
a hang in Internet Explorer. There were no error messages, but I suspected a memory issue.
On my 1 GB desktop, with the JRE updated to '-Xmx512m', Internet Explorer hangs or I receive
'Java Runtime Environment cannot be loaded.' At least now I had evidence that there was a Java issue.
Conclusion
-------------
Since I succeeded in configuring the memory parameter on one machine, it is clear that this is not
an application issue. Also, multiple Java instances on the client machine force you to manage
them separately to avoid collisions. This can be managed using the library_path.
Total Memory
------------
- Is the total memory allocated to the JVM by the OS for application objects.
- This is what you see in the Windows Task Manager under the Memory tab.
- This can grow and shrink as the application runs, and is controlled by the JVM.
- Actually more memory is used by the garbage collector, but this is usually not important for us.
Used Memory
-----------
- Is a subset of the Total Memory that is actually used for live objects that were not collected
by the garbage collector.
There are also two major JVM parameters that can be used to control the Total memory:
-Xmx : Max memory that the JVM will ask for from the OS. (For example: -Xmx256M)
-Xms : The starting Total memory when the application starts up.
If you want to know which parameters are currently configured you can press 's' in the Java console, and look up these parameters.
- It is important to understand that the JVM sometimes reaches the max memory although this could have been avoided had garbage collection run.
- So the Total Memory value is actually important to us only in the sense that when it is reached and the application needs more memory, an OutOfMemoryError occurs.
- This terminates the application.
- If you see the Total Memory growing, it does not mean you have a leak; it might be that garbage collection has not been called in a while.
In order to know the current actual memory usage at runtime (a basic tool for finding memory leaks) you use the Java console commands:
'm' - Prints Total memory and Free memory. total - free = usage.
'g' - Asks the JVM to garbage collect (equivalent to System.gc() call). This is only a suggestion to the JVM, but it is usually performed. It also prints the same data as
'm' after the collection completes.
- So in order to check the current memory of the client application: press 'g' 3 times, this will invoke a full
garbage collection.
- Then calculate Total Memory - Free Memory = Used Memory. The actual memory consumption
percentage is thus: (Used Memory / MAX Memory)*100
Note: Sometimes Plug-in Console Window is not accessible from the System Tray. In such a case it is possible to view the plug-in log file directly. Go to
$USER_HOME/Application Data/Sun/Java/Deployment/log directory and open pluginXXX_XX.trace file that matches your Java version.
Symptoms
--------
Displaying levels in the page item section of the worksheet. I am testing the query redesign where I
put the sub-category in the page area.
Possible Options
----------------
- Worksheet caching
The customer does not want to consider this option since they want to view the data in real time.
Hardware
- Increase the number of disks in the RAID array. For example, if your current configuration has only
3 stripes (4-1), add 3 more disks making it 6 stripes (7-1). This should greatly improve I/O performance.
- If you are at 32 bits, switch to 64 bits and double the buffer pool memory allocation.
==========================
Integration Multithreading
==========================
There will be additional information regarding this feature available soon. Worksheets make use of parallelism by running multiple threads in the Java
application server, not in the database. You can configure this in AppServer.properties or, as of 7.3.0, in APP_PARAMS.
threadpool.query_run.size=40
threadpool.query_run.per_user=4
- A single user running a worksheet query will have 4 parallel threads accessing the database at the same time. All users together are limited to 40 threads.
Parameter Setting
-----------------
- There are no valid general recommendations available. Each implementation is different.
- The settings for these parameters depend on the number of concurrent users, the number of concurrent batch jobs and their nature, the database hardware configuration,
and more.
- Each setting, listed below, has a description and configuration rules that need to be addressed per implementation.
# Maximum size of the Query Run Thread pool, if this value is missing or is negative
# the query run execution mechanism will not use threads.
threadpool.query_run.size=40
threadpool.query_run.per_user=4
# Support for the parallel integration procedure.
# Max number of parallel update threads.
# Default threads = 5 (number of DB server CPUs + 1)
MaxUpdateThreads=5
Demantra worksheets contain aggregations of data across different dimensions. Careful worksheet
design is needed to ensure that loading/running a worksheet does not access millions of data rows.
===============================================================
The Desktop Workstation / Client
===============================================================
JRE parameters
- add details
4) Then re-test and send us the Java console output (from the client).
4) After you experience the disconnecting issue, please examine the client Java console + JavaConsole.log files.
===============================================================
General
===============================================================
I reviewed the options outlined in performance white paper Trouble Shooting Demantra Worksheet Performance
<<470852.1>>, available on My Oracle Support. My performance problem still exists.
- The disk I/O is also slow. We are used to seeing 3-5 ms wait times, but we are
experiencing 16-19 ms on the TS_SALES_DATA tablespace.
- We could ease the I/O bottleneck by dedicating more memory to the buffer pool, but that will be hard to do in 32 bits. Currently the buffer pool is 1432 MB. By
doubling it, we may achieve a 40% I/O improvement.
- Data fragmentation: Rebuilding sales_data and mdp_matrix improved the SQL run time. Please confirm.
SOLUTION:
1. Reorganize SALES_DATA
2. Change the block size to 16k or even 32k (currently it is still 8K).
4. Increase the number of disks in the RAID-5 array. Your current configuration only has 3 stripes (4-1).
Adding 3 more disks will make it 6 stripes (7-1) and should greatly improve I/O performance.
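Steps 1 and 2 could be sketched as below; this is illustrative only (tablespace, file name, and cache size are assumptions), and moving SALES_DATA requires an outage window.

```sql
-- A buffer cache must exist for the non-default block size.
ALTER SYSTEM SET db_16k_cache_size = 512M;

-- New 16k-block tablespace for the reorganized table.
CREATE TABLESPACE ts_sales_data_16k
  DATAFILE '/u01/oradata/dem/ts_sales_data_16k01.dbf' SIZE 10G
  BLOCKSIZE 16K;

-- Reorganize SALES_DATA into it. The move invalidates every index on the
-- table, so each index must be rebuilt afterwards:
ALTER TABLE sales_data MOVE TABLESPACE ts_sales_data_16k;
-- ALTER INDEX <index_name> REBUILD;   -- repeat for each index
```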
Middle Tier
===========
We recommend at least 512 MB for the application server Java.
For Tomcat server please make sure to add in system environment variables this parameter:
- Name: JAVA_OPTS
- Value: -Xmx512m
Performance Problem? Save Time, Gather the Required Data Before Contacting Oracle Support
==========================================================================================
In addition to the data dump file, we ask you to collect performance statistics using the AWR reports.
This report should include two processes (worksheet first run, worksheet rerun).
- In order to get an accurate picture we'd like a short time period. This can be done in Enterprise Manager by clicking:
Server > AWR Baselines > Create > Single > Baseline Name: Demantra Time Range
Specify Start and Stop times in the future, say 10 minutes apart. Then run the operation in question
during the time range specified.
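If you prefer the command line to Enterprise Manager, snapshots bracketing each run can be taken with the standard DBMS_WORKLOAD_REPOSITORY package (AWR requires the Diagnostics Pack license):

```sql
-- Snapshot just before the worksheet first run:
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
-- ...run the worksheet, then snapshot again:
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
-- Repeat the pair for the worksheet rerun, then generate the report
-- between the snapshot IDs (SQL*Plus): @?/rdbms/admin/awrrpt.sql
```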
--------------------------------------------
1) Provide answers from the Demantra Performance Questionnaire, Demantra - Performance Questionnaire for 4-Nov-2009 Webcast located at the Demantra forum.
2) Review note <<738503.1>> How to set the Java Plugin Heap Size in response to an OutofMemory error and in general for best performance
3) Review note <<863025.1>> How to Analyze Demantra Forecast Engine Performance (Processing Time) Issues / Demantra Engine is Slow
4) Review note <<867238.1>> How to run a Query Servlet Report to help diagnose Demantra Worksheet Performance bottlenecks
5) Please supply the record counts of some main tables on the instance, as below:
select count(*) from t_ep_item;
select count(*) from t_ep_ebs_cpn_code;
select count(*) from mdp_matrix;
select count(*) from sales_data;
select count(*) from t_ep_site;
select count(*) from t_ep_organization;
6) A vital part of performance diagnostics is the environment and parameter settings. At the database server:
- Provide up-to-date init parameters
- OS version
- Is it a dedicated machine or is it running additional applications?
- List the processes running at the time of this performance issue.
FOR IMPORT SPECIFIC PERFORMANCE ISSUE, PROVIDE THE ADDITIONAL ITEMS BELOW:
7) Collaborator.log
8) Integration.log
9) A copy of the data that was used for the loading, taken prior to loading.
10) A copy of the _ERR table taken once the import is done to reflect any
records that had errors and did not go through the import.
11) A record of the start time and end time of the import (should be server-side time so
that we can match log timings).
Summary
Related
Products
More Applications > Value Chain Planning > Oracle Demantra > Oracle Demantra Demand Management
Keywords
TABLESPACE; PERFORMANCE PROBLEMS; PERFORMANCE; DB_BLOCK_SIZE; ORACLE DEMANTRA; DEMANTRA; PERFORMANCE STATISTICS;
MEMORY USAGE