
High Availability

Hyperion - Settings
Prepared by: Jeff Tantleff / Nareshkumar Jayavelu, KPI Partners

Description
This document outlines HA practices and standards for a multi-tenant (distributed) connection approach and for performance tuning the Presentation and Application layers.
Revision Log

Date         Author          Change Made        Impact
01/13/2016   Jeff Tantleff   Initial Settings   Best practice for Hyperion components

EPM Deployment Topology HA


As EPM usage expands to a global user base, implementing Oracle EPM in a highly available fashion is becoming a major part of infrastructure planning. Let's go through a couple of key considerations when evaluating High Availability for your organization.
The first question I ask clients is whether they really need it. I find that some IT organizations start implementing load balancing and H/A as part of a corporate standard or some made-up Sarbanes-Oxley requirement. In reality, in many cases unplanned downtime, although obviously not welcome, can be tolerated by organizations in the event of a catastrophic hardware failure.
The best bet is to work with Finance to truly understand their availability requirements. Then, take a look at your backup/recovery plan and your hardware vendor agreements for on-site emergency part replacement. Some clients have a 2-hour guaranteed field replacement from their hardware vendor, and some even keep full replacement parts on-site. Think through all the scenarios of unplanned downtime and apply a probability to each to accurately assess the total risk of a non-redundant implementation.
If that does not give you pause, think cost. A highly available installation will double your hardware cost and almost triple your implementation time. There could also be additional licensing costs associated with redundant components.
If, after all of that, you decide that H/A is mandatory in your organization, go for it! But beware: do not try this at home. A highly available installation should only be performed by an Oracle-certified infrastructure partner.
The biggest change in Oracle's stance on high availability in version 11 is the dropped support for all third-party clustering solutions such as Veritas Cluster Server and Microsoft Cluster Server, which is quite unfortunate. The only supported clustering methodology is Oracle Clusterware. This is really bad news for IT shops that already have an older system deployed on a different clustering technology. It is also especially frustrating because Oracle Clusterware requires the use of Oracle Cluster File System (OCFS) for shared disk resources, which on Windows can take five minutes to fail over. Nice.

Terminology
First, let's talk terminology, or at least how I will define these terms for the purposes of this document.
High Availability

The ability to continue to provide computing resources in the event of a fatal hardware failure.

Cluster
Two or more linked machines that are used for Load Balancing and/or failover of services for high availability.
Load Balancing
Distributing requests among multiple application servers to spread the load evenly. Many times a Load Balancer is used as the single entry point, and it distributes requests based on the load of the individual servers or in a simple round-robin fashion. The nice thing about load balancing is that, most of the time, you also get High Availability.
Failover
The ability to automatically switch services to a standby server if the primary server fails.

Notes on Essbase
Sure, if you look at the High Availability support matrix, Oracle is happy to say that Essbase can be clustered and Load Balanced. But look closely: that is in read-only mode only. While there are a few situations where a read-only Essbase cluster makes sense for an organization, I almost always see this limitation making the Essbase cluster solution useless. It certainly will not work for Planning.
To make matters worse, Oracle will not support third-party clustering of any kind on Essbase, not even Oracle Clusterware. While this is supposed to be addressed in the next release (Sept/Nov), it leaves us little choice on the Essbase layer.
Our options are to:
Do it anyway.
But just know that if Oracle determines that a support issue is related to the cluster, they have every right to insist that you demonstrate the issue without it in the mix before they will take responsibility. However, for most behavioral or performance issues that you would call support about, Oracle should not know or even care that your Essbase is running on top of a cluster.

Do an active / cold standby.


In essence, have a powered-off machine configured with the same hostname/IP address and connected to the same shared disk resource as the active machine. Upon failure, there is a manual process of shutting down the active server (if needed), dismounting the disk resource, starting up the cold server, mounting the disk resource, and bringing services online.
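If you do go the cold-standby route, it helps to script the swing so the manual steps are at least repeatable. The Python sketch below only illustrates the sequence described above: the service name is a placeholder (substitute your actual Essbase Windows service name), and the dismount/mount of the shared disk is left to whatever tooling your storage or cluster vendor provides.

```python
# Hypothetical sketch of the manual cold-standby failover sequence described above.
# The service name is a placeholder, and the disk steps depend on your storage tooling.
import subprocess

# Placeholder: substitute the actual Essbase-related Windows service name(s) on your install.
ESSBASE_SERVICES = ["EssbaseService"]

def stop_services(services):
    """Stop Essbase-related services (run on the failed node if it is still reachable)."""
    for svc in services:
        subprocess.run(["net", "stop", svc], check=False)  # check=False: may already be down

def start_services(services):
    """Start Essbase-related services on the cold standby node."""
    for svc in services:
        subprocess.run(["net", "start", svc], check=True)

if __name__ == "__main__":
    # 1. Shut down the active server's services (skip if the host is already dead).
    # stop_services(ESSBASE_SERVICES)
    # 2. Dismount the shared disk from the active node and mount it on this node.
    #    This step is intentionally left manual; it depends on your SAN/cluster tooling.
    # 3. Bring services online on the standby node, which carries the same hostname/IP.
    start_services(ESSBASE_SERVICES)
```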

Oracle recommends that you set up a base deployment with one server for each component,
validate that it is working, and then scale out each component, depending on your high-availability and system-load requirements.

Deployment Directory Structure and Location Reference

The following illustration shows the enterprise deployment directory structure on each
server. Binaries and configuration files for each EPM System component are located
only on the server on which the component is installed. For example, Oracle Essbase
binaries and configuration files, along with the Essbase data directory (not shown in the
illustration), are located on the Essbase host machine (ESSHOST1 in the deployment
diagram).

Data directories of EPM System products other than Essbase are located on a shared
disk that must be accessible from all the servers in the deployment. Because EPM
System components are not installed on the shared disk, the shared disk does not have
the directory structure depicted in the illustration. The shared disk hosts data
directories for:

Oracle Hyperion Reporting and Analysis Repository directory

Artifacts exported or imported using Oracle Hyperion Enterprise Performance Management System Lifecycle Management

This document refers to the following installation and deployment locations:

MIDDLEWARE_HOME refers to the location of middleware components such as Oracle WebLogic Server, Oracle HTTP Server, Java, and, optionally, one or more EPM_ORACLE_HOME directories. The MIDDLEWARE_HOME is defined during EPM System product installation. The default MIDDLEWARE_HOME directory is Oracle/Middleware.

EPM_ORACLE_HOME refers to the installation directory containing the files required to support EPM System products. EPM_ORACLE_HOME resides within MIDDLEWARE_HOME. The default EPM_ORACLE_HOME is MIDDLEWARE_HOME/EPMSystem11R1; for example, Oracle/Middleware/EPMSystem11R1.

EPM_ORACLE_INSTANCE denotes a location, defined during the configuration process, where some products deploy components. The default EPM_ORACLE_INSTANCE is MIDDLEWARE_HOME/user_projects/epmsystem1; for example, Oracle/Middleware/user_projects/epmsystem1.

WEBLOGIC_DOMAIN_HOME refers to the directory that contains the WebLogic Server domain configuration for EPM System.
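For scripting health checks or deployment automation, it can help to derive these locations programmatically. The minimal Python sketch below simply builds the default paths listed above from a MIDDLEWARE_HOME value; the C: drive is an assumption, and the instance name epmsystem1 is the documented default, so adjust both to your environment.

```python
# Minimal sketch deriving the default EPM locations from MIDDLEWARE_HOME.
# The drive letter is an assumption; epmsystem1 is the documented default instance name.
from pathlib import PureWindowsPath

MIDDLEWARE_HOME = PureWindowsPath(r"C:\Oracle\Middleware")  # assumed install location

EPM_ORACLE_HOME = MIDDLEWARE_HOME / "EPMSystem11R1"
EPM_ORACLE_INSTANCE = MIDDLEWARE_HOME / "user_projects" / "epmsystem1"

print("EPM_ORACLE_HOME:    ", EPM_ORACLE_HOME)
print("EPM_ORACLE_INSTANCE:", EPM_ORACLE_INSTANCE)
```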
Deployment Approach
Oracle recommends a modular deployment approach, depicted in the following
illustration, to facilitate the verification of each EPM System component.

This approach simplifies troubleshooting during the setup process and facilitates configuration.
The tasks that must be performed to complete each module in this deployment process are
discussed in this document.

Deployment Scenarios

Before starting the EPM System setup process, you must complete the required
server, database, and network setup process appropriate for your deployment
strategy.
EPM System Deployment Scenarios
Scenario: Set Up Oracle Hyperion Planning (set up a base working environment)
Deployment Instructions to Use:
Installing and Configuring Foundation Services on FNDHOST1
Installing and Configuring Planning on PLANHOST1
Installing and Configuring Essbase Server on ESSHOST1

Scenario: Set Up Consolidation (set up a base working environment)
Deployment Instructions to Use:
Installing and Configuring Foundation Services on FNDHOST1
Installing and Configuring Financial Management Server on HFMHOST1
Installing and Configuring Financial Management Web Application on HFMWEBHOST1
Installing and Configuring FDM on FDMHOST1

Scenario: Add Consolidation to Planning (add products to an existing environment)
Deployment Instructions to Use:
Installing and Configuring Financial Management Server on HFMHOST1
Installing and Configuring Financial Management Web Application on HFMWEBHOST1
Installing and Configuring FDM on FDMHOST1

Scenario: Add Planning to Consolidation (add products to an existing environment)
Deployment Instructions to Use:
Installing and Configuring Planning on PLANHOST1
Installing and Configuring Essbase Server on ESSHOST1

Minimum Server Specifications


The Standard Deployment Topology uses these servers, which have the Microsoft
Windows 2008 R2 operating system installed. The server names indicated in this
table are used throughout this document to identify specific servers involved in
the deployment.
Minimum Server Specifications for Standard Deployment
EPM System Component                                       Machine        Processor      Memory
Oracle Hyperion Foundation Services                        FNDHOST1       4 core 2 CPU   16 GB
Planning                                                   PLANHOST1      4 core 2 CPU   12 GB
Essbase                                                    ESSHOST1       4 core 2 CPU   16 GB
Oracle Hyperion Financial Management Server                HFMHOST1       4 core 2 CPU   16 GB
Financial Management Web                                   HFMWEBHOST1    4 core 2 CPU   12 GB
Oracle Hyperion Financial Data Quality Management Server   FDMHOST1       4 core 2 CPU   16 GB

User Account Control


Disable User Account Control (UAC) on each deployment server. To disable UAC:
1. Select Start, then Control Panel, then User Accounts, then User Accounts, and
then Change User Account Control settings.
2. In User Account Control Settings, ensure that the slider is set to Never Notify.
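As a quick cross-check (not a substitute for the steps above), the Python sketch below reads the EnableLUA registry value, which is the switch commonly associated with UAC being fully disabled on Windows Server 2008 R2. Run it on each deployment server; the exact mapping between the slider setting and this value is worth confirming in your environment.

```python
# Quick check that UAC is disabled on this server (a cross-check of the steps above).
# EnableLUA == 0 is the value commonly associated with UAC being fully off on
# Windows Server 2008 R2; confirm against your own policy settings.
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    enable_lua, _ = winreg.QueryValueEx(key, "EnableLUA")

print("UAC appears disabled" if enable_lua == 0 else "UAC still enabled -- revisit the setting")
```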

Deployment User Requirements

All EPM System components should be installed from a generic Windows user account
(an account that does not belong to a specific user). All patches for this deployment must
be run using the user account that was used to install EPM System.

The deployment user account is referred to as the deployment account in this document.
This user account should satisfy the following requirements:

The deployment account is a member of the Administrators group on the server.

User Account Control is disabled for the deployment account.

The following local security policies are assigned to the deployment account:

Act as part of the operating system

Bypass traverse checking

Log on as a batch job

Log on as a service

To view local security policy assignments on a server, select Start, then Administrative Tools, then Local Security Policy, then Local Policies, and then User Rights Assignment.
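If you prefer to verify these rights from the command line rather than clicking through the console, the hedged Python sketch below exports the local policy with secedit (a built-in Windows tool) and prints the lines for the four rights above. The mapping of policy names to privilege constants is my assumption of the standard constants; confirm it, and confirm that the deployment account appears in each line. Run from an elevated prompt.

```python
# Export the local security policy and print the user-rights assignments that
# correspond to the four policies listed above, for manual review. The constant
# names below are assumed standard mappings -- verify them in your environment.
import os
import subprocess
import tempfile

RIGHTS = {
    "SeTcbPrivilege": "Act as part of the operating system",
    "SeChangeNotifyPrivilege": "Bypass traverse checking",
    "SeBatchLogonRight": "Log on as a batch job",
    "SeServiceLogonRight": "Log on as a service",
}

export_file = os.path.join(tempfile.gettempdir(), "secpol_export.inf")
subprocess.run(
    ["secedit", "/export", "/cfg", export_file, "/areas", "USER_RIGHTS"],
    check=True,
)

# secedit usually writes a UTF-16 .inf file; fall back to the ANSI code page if not.
try:
    with open(export_file, encoding="utf-16") as f:
        lines = f.readlines()
except UnicodeError:
    with open(export_file, encoding="mbcs", errors="replace") as f:
        lines = f.readlines()

for line in lines:
    constant = line.split("=")[0].strip()
    if constant in RIGHTS:
        # Accounts are listed as SIDs or names; check that the deployment account is present.
        print(f"{RIGHTS[constant]}: {line.strip()}")
```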

Shared File System Requirements

A shared file system that is accessible from all the servers in the deployment is required
to host these components:

Installation files

Reporting and Analysis Repository data

Artifacts for Lifecycle Management in Oracle Hyperion Shared Services

Data directory for FDM applications


The shared file system could be a share on a NAS/SAN or on one of the Windows
servers. The deployment account must have read-write access to the shared file
system. In the rest of this document, this shared location is referred to
as \\SharedHost\SharedLocation.
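A quick way to confirm the read-write requirement is to probe the share as the deployment account from each server. The sketch below uses the placeholder \\SharedHost\SharedLocation from this document; substitute your actual share path.

```python
# Probe the shared location for read-write access as the deployment account.
# \\SharedHost\SharedLocation is this document's placeholder -- replace it.
import os
import uuid

SHARE = r"\\SharedHost\SharedLocation"

probe = os.path.join(SHARE, f"rw_probe_{uuid.uuid4().hex}.tmp")
with open(probe, "w") as f:
    f.write("EPM shared-disk read/write probe")      # write test
with open(probe) as f:
    assert "probe" in f.read()                        # read test
os.remove(probe)                                      # clean up
print(f"Read-write access to {SHARE} confirmed")
```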

Host Name Resolution Requirements

The canonical host name of each server must be the same when accessed from within the
server and from other servers in the deployment. You may want to create a local hosts file
on each server to resolve host name issues.

EPM System uses Java's canonical host name resolution for resolving host names. To
validate host names as resolved by Java, EPM System provides a utility
(epmsys_hostname.bat). An archive of the utility (epmsys_hostname.zip) is available in
the directory where you unzip the part numbers to install EPM System. See Downloading
and Extracting EPM System Software for instructions.
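The epmsys_hostname utility is the authoritative check, because EPM resolves names the way Java does. As a rough companion check, the Python sketch below shows what the local resolver returns for a given name; run it on each server, and from peer servers with the other host's name, to make sure everyone agrees on the canonical name. Keep in mind that Java's resolver can differ from Python's.

```python
# Rough host name cross-check; it does not replace epmsys_hostname.bat, and
# Java's canonical resolution may differ from what the OS resolver reports here.
import socket
import sys

name = sys.argv[1] if len(sys.argv) > 1 else socket.gethostname()
fqdn = socket.getfqdn(name)            # canonical name per the local resolver
addr = socket.gethostbyname(fqdn)      # forward lookup of that name
print(f"input={name}  canonical={fqdn}  address={addr}")
```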
Clock Synchronization

The clock on each server must be synchronized to within a one-second difference. To
synchronize clocks, point each server to the same network time server.
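To spot-check the offset between servers, the built-in Windows time client can be driven from a short script. In the sketch below, timehost1 is a placeholder for a peer server or your network time source; the reported offsets should stay well under one second.

```python
# Spot-check clock offset against a peer or the time server using w32tm,
# the built-in Windows time client. "timehost1" is a placeholder host name.
import subprocess

PEER = "timehost1"   # replace with another deployment server or your NTP source

subprocess.run(
    ["w32tm", "/stripchart", f"/computer:{PEER}", "/samples:5", "/dataonly"],
    check=True,
)
```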

Environment Details

Use this section to record the details for your environment.
