Table of Contents
Database and Transactions
Timeouts
Threading
Introduction
Introduction
Logging
Introduction
Tips
Introduction
JMX Configuration
Introduction
Introduction
Oracle BPM has traditionally worked with the Oracle internal JDBC Drivers
(OEM from DataDirect JDBC Drivers). Recently (as of March 2010), Oracle BPM
certified the Oracle Thin JDBC Drivers for connecting to Oracle RAC instances when
deployed on WebLogic Server. When using the Oracle Thin JDBC Driver, the
recommendation is to use the one distributed with WLS. For all other
configurations, it is still recommended you use the Oracle Internal JDBC Drivers
distributed with the Oracle BPM distribution.
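As an illustration only (not an OBPM-specific requirement), a WebLogic data source using the Thin driver typically references the driver class and a Thin-style URL along these lines; the host, port, and service name below are placeholders, and a RAC configuration would use a more elaborate URL:
    <jdbc-driver-params>
        <driver-name>oracle.jdbc.OracleDriver</driver-name>
        <!-- placeholder host, port, and service name -->
        <url>jdbc:oracle:thin:@//dbhost:1521/ORCL</url>
    </jdbc-driver-params>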
Data Sources
The most important aspect is the Connection Pool. Proper sizing is critical, and there is a strong relationship between the pool size and the number of threads that can execute BPM transactions. Additionally, it is important to set the transaction timeouts correctly to enforce predictable execution.
A general rule of thumb is that the connection pool on each cluster node should not be too large. Keep in mind that all cluster nodes access a single Engine DB instance, so multiple cluster nodes with large connection pools will put significant stress on the database side. Past performance analysis shows that a value between 30 and 50 is a good number. Please check the “Threading” section for best practices and the relationship between threads and DB connections.
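As a minimal sketch, on WebLogic Server these bounds map to the initial and maximum capacity of the JDBC data source; the exact values below are illustrative assumptions within the range discussed above:
    <jdbc-connection-pool-params>
        <!-- illustrative values within the 30-50 range discussed above -->
        <initial-capacity>30</initial-capacity>
        <max-capacity>50</max-capacity>
    </jdbc-connection-pool-params>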
The Engine Database Data Source size should then be estimated as follows:
General rules of thumb for Connection Pool sizes on a per-container basis are:
Timeouts
Additional Properties
Oracle BPM supports multilingual capabilities. All schemas are prepared to store double-byte characters to ensure data integrity regardless of the location and locale where OBPM is used. When using the Oracle Internal JDBC Drivers (OEM from
When the OBPM Engine starts processing a significant load, the following two queries will show up at the top of the list in any database analysis.
Threading
Introduction
Another important aspect of optimal processing is the need to fine-tune the number of threads that can process business process operations on the engine side. There are two specific sets of threads that can be controlled and managed by an OBPM Engine: the threads dedicated to automated work and the threads dedicated to processing interactive requests.
In order to fine-tune this parameter, the following file can be edited: $ENTERPRISE/j2ee/template/engine/default/ejb/META-INF/weblogic-ejb-jar.xml. This file is on the machine that runs the Oracle Process Administrator Web Application, where the BPM JEE applications are assembled. After modifying this file, it will be necessary to re-assemble the OBPM Engine EAR file and redeploy it to the JEE container.
<weblogic-enterprise-bean>
    <ejb-name>item-execution-$escape.escapeXML($engine)</ejb-name>
    <message-driven-descriptor>
        <destination-jndi-name>$escape.escapeXML($itemExecutionQueue)</destination-jndi-name>
    </message-driven-descriptor>
    <ejb-reference-description>
        <ejb-ref-name>ejb/EngineStartup</ejb-ref-name>
        <jndi-name>local/engines/startup/$escape.escapeXML($engine)</jndi-name>
    </ejb-reference-description>
    <dispatch-policy>wm/albpmAutomaticWorkManager</dispatch-policy>
</weblogic-enterprise-bean>
Once a Work Manager is specified in the Engine EAR descriptor file, it is necessary to make sure the referenced Work Manager is actually created and that it is assigned minimum and maximum thread constraint values. Otherwise, it will use up to 16 threads.
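For WebLogic, a minimal sketch of such a Work Manager definition (for example in weblogic-application.xml, or created through the console) is shown below. The constraint names and counts are illustrative assumptions only and should be sized together with the connection pool discussed in the Data Sources section:
    <work-manager>
        <name>wm/albpmAutomaticWorkManager</name>
        <min-threads-constraint>
            <!-- constraint names and counts are illustrative; tune to match the DB connection pool -->
            <name>AlbpmEngineMinThreads</name>
            <count>10</count>
        </min-threads-constraint>
        <max-threads-constraint>
            <name>AlbpmEngineMaxThreads</name>
            <count>40</count>
        </max-threads-constraint>
    </work-manager>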
Messaging
Introduction
The objective of this section is to describe general tips and best practices for configuring the messaging subsystem needed by the Oracle BPM Engine to process automated work, which is executed entirely by the Oracle BPM Engine.
The OBPM Engine Queue is used to connect the dispatcher with the actual thread that executes the automatic activity implementation defined in the business process. The Queue connects with the Oracle BPM Engine MDB as explained in the previous section.
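For illustration only, on WebLogic such a queue would typically be defined in a JMS system module; the queue name and JNDI name below are placeholders and must match whatever value is substituted for $itemExecutionQueue in the MDB descriptor shown earlier:
    <weblogic-jms>
        <!-- queue name and JNDI name are placeholders, not OBPM defaults -->
        <queue name="AlbpmItemExecutionQueue">
            <jndi-name>queue/albpmItemExecutionQueue</jndi-name>
        </queue>
    </weblogic-jms>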
Logging
Introduction
Properly managing the information that is logged into the Oracle BPM Engine log files, and the accuracy of that data, is very important.
The following are some general-purpose tips on how to tune the Oracle BPM Engine logging subsystem.
Tips
• The Oracle BPM Engine logging can be configured through the Oracle BPM Process Administrator on a per-engine basis, on the “Log” Tab for each Oracle BPM Engine shown in the “Engines” list.
• The size of the files should not be much bigger than the current default. Appending information to large files is an expensive operation. It is recommended to increase the number of log files rather than their size.
• When more than one cluster node runs on the same machine, all the cluster nodes will log information into the same log file, with the possibility of losing sensitive information due to concurrent writes by two Java processes storing information in the same Oracle BPM Engine log file. To ensure the proper state of the information stored in the BPM Engine log files, it is recommended that the cluster nodes are started with the following system property (-Dfuego.server.log.file), which indicates that each individual Java process running the Oracle BPM Engine uses its own log file: -Dfuego.server.log.file=/OracleBPM103/engine/logs/ClusterNode1.log
• In this case, Cluster Node #1 specifies a file named “ClusterNode1.log”. Additional cluster nodes running on the same machine could store their Oracle BPM Engine log files in the same directory but with a different name (e.g. /OracleBPM103/engine/logs/ClusterNode2.log), as sketched right after this list.
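A minimal sketch of how this might look in the start script of each WebLogic managed server (JAVA_OPTIONS is the standard WLS variable; the paths simply extend the example above):
    # Cluster Node #1 start script
    JAVA_OPTIONS="${JAVA_OPTIONS} -Dfuego.server.log.file=/OracleBPM103/engine/logs/ClusterNode1.log"
    # Cluster Node #2 start script
    JAVA_OPTIONS="${JAVA_OPTIONS} -Dfuego.server.log.file=/OracleBPM103/engine/logs/ClusterNode2.log"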
When the Engine needs to pick the log file to write information to, it resolves the log file location in the following order:
3. If none of the previous system properties were specified, it uses the file location specified in the Logging Tab of the Engine preferences, accessible from the Process Administrator.
Introduction
When the OBPM Engine is up and running, it is very likely that it will need to be notified when certain management operations take place, such as project deployment, project undeployment or project deprecation. For this, a client application running these management operations (such as the Process Administrator, but it could also be Ant Tasks) needs to notify the OBPM Engine so that it can update its information without needing to restart or recycle the OBPM Engine application or the container where it runs.
JMX Configuration
The most important pieces are the host and port, as well as the required Principal and Credentials.
The host name is the host of one of the container nodes (a cluster node if you are using a cluster configuration: a managed server node in WLS, a cluster node in WAS) where the OBPM Engine EAR is deployed and running. It is also important to specify the port on which this container is listening. This host:port pair uniquely identifies a running OBPM Engine. The principal and credential parameters are used in case security is enabled.
It is important to note that you should not specify the WLS Admin Server or the WebSphere Admin Node. The host:port pair needs to point to a node or cluster node where the OBPM Engine is running. If for some reason the specified node or cluster node is not running, the notification will be lost and it will be necessary to restart the OBPM Engine to pick up the latest changes.
Introduction
When the Oracle BPM Engine runs in a cluster configuration, it internally uses the Apache Tribes framework to maintain and control cluster node behavior. Among the things that this framework controls is the notion of a “Master Node”. This master node has certain specific administrative tasks that ONLY ONE node in the cluster can execute: for example, which node in the cluster should manage project undeployments and, when a project is deployed, which node initializes the Global Automatic ToDo Items. These are just some of the engine behaviors that are managed and controlled through this framework. In previous releases of ALBPM, JGroups was used for this purpose.
The Apache Tribes configuration is specified in the “Cluster” Tab in the Process
Administrator. In this Tab, the default settings are:
UDP(mcast_addr=224.8.8.8;mcast_port=45566)
This is the basic information used to allow communication between OBPM Engine cluster nodes through UDP. In addition, Apache Tribes uses TCP settings. Since the ports used by the UDP and TCP protocols may collide with existing ones in the environment in which the OBPM Engine cluster nodes are deployed, it is possible to change these settings by explicitly indicating the required ports. The following are the recommended entry settings when specific TCP values other than the defaults are required. These are some possibilities:
UDP(mcast_addr=224.8.8.8;mcast_port=45566):TCP(ip_port=:5000)
In this case, it is assumed that the TCP/IP port to be used by Tribes is 5000 in all the
machines or hosts where there is a cluster node hosting the Oracle BPM Engine.
UDP(mcast_addr=224.8.8.8;mcast_port=45566):TCP(ip_port=host1:5000)
In this case, it is assumed that the TCP/IP port to be used by Tribes on host1 is 5000. For all other cluster node machines hosting the Oracle BPM Engine, it will assume the default port (4000).
UDP(mcast_addr=224.8.8.8;mcast_port=45566):TCP(ip_port=host1:5000;ip_port=host2:6000)
In this case, it is assumed that the TCP/IP port to be used by Tribes on host1 is 5000 and on host2 it is 6000. For all other cluster node machines hosting the Oracle BPM Engine, it will assume the default port (4000).
Additionally, and especially on Solaris and HP-UX, it may be necessary to include the “use_nio” setting. This value is needed for OBPM 10.3.0 and 10.3.1. OBPM 10.3.2 uses a newer version of the Apache Tribes library, so it is no longer necessary to specify this parameter explicitly there. To enable this setting, the entry would look something like this:
UDP(mcast_addr=224.8.8.8;mcast_port=45566):TCP(use_nio=false)
Optionally, you can specify the timeout for cluster nodes to connect with each other.
The setting would be as follows:
UDP(mcast_addr=224.8.8.8;mcast_port=45566):TCP(timeout=3000)
Keep in mind that the value assigned to the timeout parameter is specified in milliseconds. The default value is 3000.
Also bear in mind that if several of these parameters need to be enabled at the same time, “;” is the separator.
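For example, combining the settings shown above (an explicit port on host1, use_nio disabled, and a 3000 ms connection timeout) would result in an entry like this:
UDP(mcast_addr=224.8.8.8;mcast_port=45566):TCP(ip_port=host1:5000;use_nio=false;timeout=3000)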
The TCP/IP port specified in the Engine setup is the starting port number. If for some reason the initial port is taken, Apache Tribes will raise a warning message and move on to try the following integer, and so on until it finds a free one. If it is not able to get a free port within the first 100 attempts, it will report a failure and the engine will fail to start.
If the physical machine has more than one network adapter, the hostname should be the IP address rather than the name of the machine itself.
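For example, assuming the adapter to be used has the address 192.168.1.10 (an illustrative value only), the entry would be:
UDP(mcast_addr=224.8.8.8;mcast_port=45566):TCP(ip_port=192.168.1.10:5000)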
PAPI-WS Tips
Introduction
This section highlights some tips and considerations when using PAPI-WS as a mechanism for client applications to talk to business processes deployed on an OBPM Engine.
Session Pooling
The new PAPI-WS introduced in ALBPM 6.0, and also present in OBPM 10gR3, moved from a stateful implementation to a stateless one. This feature also introduced the notion of a dynamic, on-demand session pool managed by the OBPM PAPI-WS Web Application.
The session pool can be tuned and configured by updating system properties in the papiws.properties file. Some of these properties have a direct impact on how the PAPI Instance Cache is managed in the PAPI-WS Web Application context. These aspects are reviewed and highlighted in each property's description.
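As a minimal sketch only, a papiws.properties fragment tuned along the lines discussed below might look as follows; the values shown are illustrative assumptions, not recommended settings:
    # illustrative values only
    fuego.papiws.pool.minnodisposable=1    # keep at least one session alive so the PAPI Instance Cache is not recycled
    fuego.papiws.pool.timeout=30           # idle session timeout, in minutes
    fuego.papiws.pool.wait=false           # fail fast instead of blocking; rely on the retry/timeout mechanism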
… parameter, it is possible to have a list of sessions associated with this participant id. This would ideally be set to the maximum number of concurrent client threads performing activity against the PAPI-WS application.

fuego.papiws.pool.minnodisposable
Description: Minimum number of sessions per entry that are kept alive after the timeout. May be useful to improve performance if there are many concurrent users connected with the same participant.
Notes: This parameter ensures that at least one session remains active at all times. This is advisable not only because of the time it takes to create a new session, but also because of the direct relationship with the lifecycle of the PAPI Instance Cache. If this value is changed from its default of 0 to 1, then once a session has been created and established, the PAPI cache will be filled for the accessed processes. Since there will always be at least one open session, the count of sessions will never reach 0 and the PAPI Instance Cache will never be recycled or cleaned, which can save significant time when performing queries through PAPI-WS.

fuego.papiws.pool.timeout
Description: Session timeout in minutes. Related to the length of the interval between calls from the same user.
Notes: This is the time before idle sessions counted against “fuego.papiws.pool.maxentrysize” are recycled and closed.

fuego.papiws.pool.wait
Description: Indicates whether session acquisition should block or throw an exception when the cache limit has been reached.
Notes: Generally speaking, this parameter should be kept at false, as it may otherwise block a client connection trying to perform an operation for a long time. It is recommended to rely on the retry/timeout mechanism instead.

fuego.papiws.pool.max
Description: Number of retries to
Notes: This is the amount of retries