
Exploiting the Experience - Essbase Introduction

Business Intelligence: An Intelligent Move or Not?

In recent years, Business Intelligence (BI) technologies have changed for the better, with new technologies emerging rapidly. Yet amid these developments, a question arises: why, with all the initiatives and projects already on the corporate IT agenda, should anyone invest in BI? Today, when the rationale of technology for technology's sake is no longer valid, the question is especially important. To address it, let's look at what BI is, what it offers, and how you can determine a return on investment (ROI) when you implement BI.

The Need

BI technologies attempt to help people understand data more quickly so that they can make better and faster decisions and, ultimately, move more effectively toward business objectives. The key drivers behind BI objectives are increased organizational efficiency and effectiveness. Some BI technology aims to make the flow of data within an organization faster and more accessible (e.g., making standard reports easier to build, maintain, and distribute). Other, newer BI technologies take a more aggressive approach, replacing existing processes with new, more streamlined ones that eliminate entire steps, or creating capabilities that are beyond the reach of legacy approaches.

Changes in today's business environment, combined with the competitive advantages of new technologies, make sticking to legacy techniques problematic. Major changes are driving the need for new tools and new approaches to decision making:

Ruthless competition is placing huge pressure on profits. Businesses that act and react at a historical pace are likely candidates for the endangered species list.

Electronic data and databases are literally exploding in size. Today's sophisticated ERP systems, e-commerce systems, data warehouses, and the Web are greatly expanding the amount of data available. Old tools are simply not up to the challenge. Historically, the problem was getting to the data; now the problem is largely the opposite: how do people filter and make sense of it all?

Pressure on profits and the increasing pace of business have led to flatter and leaner organizations. As anyone left after a right-sizing effort will tell you, expectations for results are not downsized. Everywhere, managers face the same issue: how do I achieve greater results with fewer resources?

Flatter organizations are expected to move more quickly as decision-making is pushed down into the organizational ranks, but how do you equip these newly empowered people? Today, instead of a few experts spending 90 percent of their time analyzing data, many people throughout an organization spend 5 to 10 percent of their time trying to make sense of it all. The old tools were not designed for that situation.
(Source : Article by ProClarity Corporation)

Problem Statement
The diversity and pace of today's business require complementary tools that support greater variability of use and dynamic interaction with the data, so that operational managers can explore and evaluate interrelationships in the data. An operational manager doing trend analysis on a particular product line would otherwise require a mountain of static reports to accommodate his or her analysis needs, an approach that is neither desirable nor sustainable.
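To make the summary-then-drill-down idea concrete, here is a minimal Python sketch (not Essbase code) over an invented in-memory data set: look at sales by region first, then drill into one region by product.

```python
from collections import defaultdict

# Toy fact data: (region, product, month) -> sales. In a real OLAP tool
# these facts would live in a cube, not in a Python dict.
facts = {
    ("East", "Cola", "Jan"): 100,
    ("East", "Cola", "Feb"): 120,
    ("East", "Root Beer", "Jan"): 80,
    ("West", "Cola", "Jan"): 90,
    ("West", "Root Beer", "Feb"): 60,
}

def rollup(facts, keep):
    """Aggregate sales, keeping only the dimensions named in `keep`.

    `keep` is a tuple of indexes into the (region, product, month) key.
    """
    totals = defaultdict(int)
    for key, value in facts.items():
        totals[tuple(key[i] for i in keep)] += value
    return dict(totals)

# Summary level: sales by region only.
by_region = rollup(facts, (0,))
print(by_region)  # {('East',): 300, ('West',): 150}

# Drill down: within East, break sales out by product.
east_by_product = {k: v for k, v in rollup(facts, (0, 1)).items()
                   if k[0] == "East"}
print(east_by_product)  # {('East', 'Cola'): 220, ('East', 'Root Beer'): 80}
```

The same pattern, applied along pre-built hierarchies and at scale, is what an OLAP tool gives an analyst interactively.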

Previous Options
On-Line Transaction Processing (OLTP) systems such as Microsoft SQL Server deal with current data, short database transactions, and online updates, inserts, and deletes. Normalization is promoted because the focus is on storing high volumes of transactions, for which transaction recovery is necessary. There are no data warehouses and no data marts. In simple terms, you get a picture of the current situation without proper insight into what is being measured, against which scenarios, and across which dimensions (time, market, demographics).

Solution Proposed
On-Line Analytical Processing (OLAP) tools meet the need for interactive multidimensional reporting and analysis. They allow operational managers to perform trend, comparative, and time-based analysis by enabling exploration of pre-calculated and summarized data along multiple dimensions. Operational managers can explore data first at a summary level, and then drill down through the data hierarchy to examine increasingly granular levels of detail.

Benefit 1


Multidimensional: A multidimensional data structure and view allows operational managers to analyze numerical values from different perspectives, e.g., product, time, and geography.

Benefit 2
Consistently fast: To ensure fast, predictable query times, OLAP vendors pre-aggregate data. This is done using pre-aggregated relational tables, or through a highly compressed multidimensional file known as a cube.
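A rough sketch of the pre-aggregation idea, in Python with invented data: compute every subtotal combination up front (much like a SQL CUBE), so that any mix of detail and totals becomes a single lookup at query time.

```python
from collections import defaultdict
from itertools import combinations

# Toy facts: (region, product) -> sales.
facts = {
    ("East", "Cola"): 100,
    ("East", "Root Beer"): 80,
    ("West", "Cola"): 90,
}

DIMS = ("region", "product")
ALL = "*"  # wildcard meaning "all members of this dimension"

# Pre-aggregate every combination of dimensions up front, so that any
# query, however it mixes detail and totals, is a single dict lookup.
cube = defaultdict(int)
for key, value in facts.items():
    for r in range(len(DIMS) + 1):
        for rolled in combinations(range(len(DIMS)), r):
            agg_key = tuple(ALL if i in rolled else key[i]
                            for i in range(len(DIMS)))
            cube[agg_key] += value

print(cube[("East", ALL)])   # 180: East total across all products
print(cube[(ALL, "Cola")])   # 190: Cola total across all regions
print(cube[(ALL, ALL)])      # 270: grand total
```

Real OLAP engines are far more selective about which aggregates to materialize, but the trade of load-time work for query-time speed is the same.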

Benefit 3
Hierarchical: Granularity is a key benefit, giving the power user visibility down to the leaf-most node under his or her purview. KPIs should be measurable down to the most granular level of every measure that is captured, to give a true Business Intelligence view.

Complex calculations: With multiple dimensions come more complex, cross-dimensional calculations. An analysis session might require the subtotal of sales for a particular state to be expressed as a percentage of the whole country, or of a single product, or both. Further, this result may need to be presented as part of a time-series analysis, i.e. current quarter versus last quarter versus a year ago.
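Both of the calculations just described can be sketched in a few lines of Python; the state and quarter figures below are invented purely for illustration.

```python
# Toy quarterly sales for one product, by state. These figures are
# invented purely for illustration.
sales = {
    ("CA", "2023Q4"): 500, ("CA", "2024Q1"): 550,
    ("NY", "2023Q4"): 300, ("NY", "2024Q1"): 250,
}

def country_total(quarter):
    # Cross-dimensional denominator: sum over all states for one quarter.
    return sum(v for (state, q), v in sales.items() if q == quarter)

# State as a percentage of the whole country, current quarter.
ca_share = 100.0 * sales[("CA", "2024Q1")] / country_total("2024Q1")
print(round(ca_share, 1))  # 68.8

# The same figure as part of a time series: this quarter vs. last.
ca_share_prev = 100.0 * sales[("CA", "2023Q4")] / country_total("2023Q4")
print(round(ca_share_prev, 1))  # 62.5
```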

Uniqueness in Solution
There's a reason that Hyperion Essbase, now Oracle Essbase, is the industry-leading OLAP solution: it simply offers more features and functionality than any other solution. Oracle Essbase provides adaptable data storage mechanisms for specific types of analytic and performance management applications, thereby ensuring sub-second response times regardless of the complexity of the analytics. For example, the block storage option (BSO) enables driver-based scenario modeling, forecasting, and predictive analytic applications, while the aggregate storage option (ASO) is optimized for massive sparse data sets, supporting thousands of concurrent business users performing sophisticated analyses at the speed of thought. With Essbase, a company's managers, executives, and analysts can:

Analyze even large, complex data sets quickly

Employ familiar spreadsheet programs such as Microsoft Excel and Lotus 1-2-3 to arrange and analyze data

Perform trend analysis and develop "what if" scenarios



Carry out complex analyses, such as multi-currency financial translations and interdepartmental budget allocations, using sophisticated, built-in Essbase formulas

IT managers and project leaders can:

Access data in legacy systems, online transaction processing (OLTP) systems, relational data warehouses, enterprise resource planning (ERP) systems and even spreadsheets to create multidimensional Essbase databases or "cubes"

Include thousands of dimensions and attributes, with up to a million members per dimension

Modify databases to reflect changes in business structures and conditions

Implementation
Essbase Architecture


Analytic Server: This is where the MOLAP cube is stored. The server acts as a shared resource, handling all data storage, calculation, sorting, and so on. It also contains the outlines (an outline is a file that stores the dimension and measure specifications), rules (rules can be defined for data loads), and related objects. The Analytic Server has two kinds of storage:

Block Storage (easy to use and implement, but does not scale)

Aggregate Storage (has certain limitations, but can scale)

Analytic Administration Services: Analytic Administration Services, the database and system administrator's interface to Analytic Services, provides a single-point-of-access console to multiple Analytic Servers. Using Analytic Administration Services, you can design, develop, maintain, and manage multiple Analytic Servers, applications, and databases. You can preview data from within the console, without having to open a client application such as the Spreadsheet Add-in. You can also use custom Java plug-ins to leverage and extend key functionality.

Analytic Integration Services: This is a very important component, in which one designs the dimensions and fact tables when leveraging different data sources such as Oracle, DB2, and SQL Server. It uses ODBC to connect to the different data sources, and it also provides a drill-through feature with which one can drill from the multidimensional database (MDB) down into the relational database.

Analytic Provider Services: This enables clustering of the database (the Essbase cube) across multiple machines.

All About Essbase (MOLAP)

Although OLAP applications are found in widely divergent functional areas, all require the following key features:

Multidimensional views of data

Calculation-intensive capabilities

Time intelligence

Key to OLAP systems are multidimensional databases, which not only consolidate and calculate data but also provide retrieval and calculation of a variety of data subsets. A multidimensional database supports multiple views of data sets for users who need to analyze the relationships between data categories. For example, a marketing analyst might ask the following questions: How did Product A sell last month? How does this figure compare to sales in the same month over the last five years? How did the product sell by branch, region, and territory? Did this product sell better in particular regions? Are there regional trends? Did customers return Product A last year? Were the returns due to product defects? Did the company manufacture the products in a specific plant? Did commission and pricing affect how salespeople sold the product? Did certain salespeople sell more?

In multidimensional databases, the number of data views is limited only by the database outline, the structure that defines all elements of the database. Users can pivot the data to see information from a different viewpoint, drill down to find more detailed information, or drill up to see an overview.

Analyzing Source Data


First, evaluate the source data to be included in the database. Think about where the data resides and how often you plan to update the database with it. This up-front research saves time when you create the database outline and load data into the Essbase database. Determine the scope of the database: if an organization has thousands of product families containing hundreds of thousands of products, you may want to store data values only for product families. Interview members of each user department to find out what data they process, how they process data today, and how they want to process data in the future. Carefully define reporting and analysis needs.

How do users want to view and analyze data? How much detail should the database contain? Does the data support the desired analysis and reporting goals? If not, what additional data do you need, and where can you find it?

Determine the location of the current data. Where does each department currently store data? Is the data in a form that Essbase can use? Do departments store data in a DB2 database on an IBM mainframe, in a relational database on a UNIX-based server, or in a PC-based database or spreadsheet? Who updates the database, and how frequently? Do the individuals who need to update data have access to it?

Make sure that the data is ready to load into Essbase. Does the data come from a single source or from multiple sources? Is the data in a format that Essbase can use? (The Essbase documentation lists the valid data sources that you can load.) Is all the data that you want to use readily available?


Creating Database Models

Next, create a model of the database on paper. To build the model, identify the perspectives and views that are important to your business. These views translate into the dimensions of the database model. Most businesses choose to analyze the following areas:

Time periods
Accounting measures
Scenarios
Products
Distribution channels
Geographical regions
Business units

Identifying Analysis Objectives

After you identify the major areas of information in a business, the next step in designing an Essbase database is deciding how the database enables data analysis:

If analyzing by time, which time periods are needed? Does the analysis need to include only the current year or multiple years? Does the analysis need to include quarterly and monthly data? Does the analysis need to include data by season?

If analyzing by geographical region, how do you define the regions? Do you define regions by sales territories? Do you define regions by geographical boundaries such as states and cities?

If analyzing by product line, do you need to review data for each specific product? Can you summarize data into product classes?

Regardless of the business area, you need to determine the perspective and detail needed in the analysis. Each business area that you analyze provides a different view of the data.

Determining Dimensions and Members

You can represent each business view as a separate standard dimension in the database. If you need to analyze a business area by classification or attribute, such as by the size or colour of products, you can use attribute dimensions to represent the classification views. The dimensions that you choose determine what types of analysis you can perform on the data. With Essbase, you can use as many dimensions as you need for analysis. A typical Essbase database contains at least seven standard dimensions (non-attribute dimensions) and many more attribute dimensions. When you know approximately what dimensions and members you need, review the following topics and develop a tentative database design:

Relationships Among Dimensions
Example Dimension-Member Structure
Checklist for Determining Dimensions and Members

After you determine the dimensions of the database model, choose the elements or items within the perspective of each dimension. These elements become the members of their respective dimensions. For example, a perspective of time may include the time periods that you want to analyze, such as quarters and, within quarters, months. Each quarter and month becomes a member of the dimension that you create for time. Quarters and months represent a two-level hierarchy of members and their children: months within a quarter consolidate to a total for each quarter.

Checklist for Determining Dimensions and Members

Use the following checklist when determining the dimensions and members of your model database:

What are the candidates for dimensions?
Do any of the dimensions classify or describe other dimensions? These dimensions are candidates for attribute dimensions.
Do users want to qualify their view of a dimension? The categories by which they qualify a dimension are candidates for attribute dimensions.
What are the candidates for members?


How many levels does the data require? How does the data consolidate?

Analyzing Database Design

While the initial dimension design is still on paper, you should review the design according to a set of guidelines. The guidelines help you to fine-tune the database and leverage the multidimensional technology. The guidelines are processes or questions that help you achieve an efficient design and meet consolidation and calculation goals. The number of members needed to describe a potential data point should determine the number of dimensions. If you are not sure whether you should delete a dimension, keep it and apply more analysis rules until you feel confident about deleting or keeping it. Use the information in the following topics to analyze and improve your database design:

Dense and Sparse Dimensions
Standard and Attribute Dimensions
Dimension Combinations
Repetition in Outlines
Interdimensional Irrelevance
Reasons to Split Databases
Checklist to Analyze the Database Design
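The first of these topics, dense versus sparse dimensions, often starts with a density estimate: what fraction of the possible member combinations actually hold data? A minimal sketch, with invented members:

```python
# Invented sample: which (market, product) combinations actually hold data.
populated = {("East", "Cola"), ("East", "Root Beer"), ("West", "Cola")}
markets = ["East", "West", "South"]
products = ["Cola", "Root Beer", "Cream Soda"]

possible = len(markets) * len(products)   # 9 potential cells
density = len(populated) / possible       # 3 of the 9 are populated
print(f"{density:.0%} dense")             # 33% dense

# A dimension pair this sparse is a candidate for sparse storage;
# combinations closer to fully populated suggest dense storage.
```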

Which Essbase storage option should we go with, given that Essbase has two?


Block Storage (easy to use and implement, but does not scale)
Aggregate Storage (has certain limitations, but can scale)


If the analysis needs complex calculations for budgeting or forecasting, then we should go with Block Storage, as it offers calculation scripts with many built-in functions for calculating members, along with built-in Dynamic Time Series functionality, which differentiates it from the Aggregate Storage type. Block Storage has a limitation in handling large numbers of dimension members; it is constrained to about 10 million members. So if an application needs high dimensionality, you should consider an ASO (Aggregate Storage Option) database.

Building Dimensions

There are three methods with which we can build dimensions:

Parent-Child Reference
Generation Reference
Level Reference

How do we go about loading the various dimensions using rule files? Rule files, as the name suggests, help in loading data as well as dimension members using certain rules. Loading a dimension means loading the various hierarchies that comprise the dimension. In our example, we have two hierarchies for the Customer dimension, one hierarchy for Channels, one for Time, and one for the Product dimension. Let's start with the Customer dimension. Before starting to build these dimensions, the first step is to identify which would be the primary hierarchy and which would be the alternate hierarchy. In Essbase it does not matter what the hierarchy type is, since every hierarchy is treated the same way, but it is mandatory to understand how the members would be present in each of the hierarchies. For example, in some cases we can have hierarchies in which the members within and across the hierarchies are completely unique; in other cases, the same members may be shared across multiple hierarchies.

So, let us start with building the first hierarchy, Shipments. In order to do that, we shall create a new rule file. In the rule file, let us enter the SQL below.


SELECT TOTAL_CUSTOMER_ID, TOTAL_CUSTOMER_DSC,
       REGION_ID, REGION_DSC,
       WAREHOUSE_ID, WAREHOUSE_DSC,
       SHIP_TO_ID, SHIP_TO_DSC
FROM CUSTOMER_DIM


Ensure that you are pointing to the right SQL source. Once that is done, click OK/Retrieve, which retrieves the data into the rule file.

Since we are loading a dimension, we need to set the data source property as Dimension Build.


After that, we need to set the dimension-build-specific properties. There are 6 types of dimension build methods available in Essbase; they are shown below.

1. Generation References - This facilitates column-based loading from data sources. Typically used while loading a level-based hierarchy.
2. Level References - This is similar to generation-type loading, but it is bottom-up loading instead of a top-down approach. Typically used while loading a level-based hierarchy.
3. Parent-Child References - This is used when source data is in the form of a parent-child hierarchy. Typically used while loading a value-based hierarchy.
4. Add as Sibling with Match - Typically used while loading members as siblings of other members.
5. Add as Sibling to Lowest Level - Typically used while loading members as siblings of the lowest-level members.
6. Add as Child of - Typically used while loading members as children of a specific member.
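As an illustration (not Essbase code), the sketch below shows the same small customer hierarchy expressed both as generation-reference rows and as parent-child rows, and confirms that either layout yields the same tree; all member names are invented.

```python
# Two source layouts for the same small customer hierarchy.
# Generation-reference rows: one column per generation, left to right.
gen_rows = [
    ("Total Customer", "East Region", "Boston WH"),
    ("Total Customer", "East Region", "Hartford WH"),
    ("Total Customer", "West Region", "Denver WH"),
]

# Parent-child rows: each record names a parent and one of its children.
pc_rows = [
    ("Total Customer", "East Region"),
    ("Total Customer", "West Region"),
    ("East Region", "Boston WH"),
    ("East Region", "Hartford WH"),
    ("West Region", "Denver WH"),
]

def tree_from_generations(rows):
    # Each adjacent column pair in a row is a parent-child edge.
    parents = {}
    for row in rows:
        for parent, child in zip(row, row[1:]):
            parents[child] = parent
    return parents

def tree_from_parent_child(rows):
    return {child: parent for parent, child in rows}

# Both build methods should yield the same child -> parent map.
assert tree_from_generations(gen_rows) == tree_from_parent_child(pc_rows)
print(tree_from_generations(gen_rows)["Boston WH"])  # East Region
```

A level-reference load would be the same idea with the columns read right to left, from leaves up.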


In our case, since we are loading a level-based hierarchy, we shall be using the Generation References method. There are also quite a few options within each dimension build method. They are given below.

Each option has its own significance, as given below.

1. Allow Moves - This allows a member to change its parent.
2. Allow Property Changes - This allows changes to the properties of a member, such as Alias, Aggregation Property, Time-Based Aggregation, etc.
3. Allow Formula Changes - This allows the formula for a member to be dynamically populated.
4. Allow UDA Changes - This allows UDA changes to a member.

So, in our case this is what we shall use for the primary Segments hierarchy.


Once this is done, we need to set the field-based property for each column obtained from our SQL query. This designates the member name, alias, and so on. In Essbase, the same member name cannot be used across dimensions, so we shall not be using the dimension IDs as our member names, as other dimensions would have the same IDs. Each column containing dimension IDs should therefore simply be ignored.


So, we shall mark Field2, Field4, Field6 and Field8 as Generation2, Generation3, Generation4 and Generation5. Our rule file should then look like the one shown below.

Then validate the rule file and save it as Segments. Once that is done, right-click on the Global database and click Load Data. Then choose the rule file and start building the dimension.


So, effectively we should get the dimension as shown below in the outline.

Next we shall see how to go about loading data into this Essbase cube. We shall also see how to go about aggregating portions of the Essbase database using calculation scripts.

A block storage cube can accept input data at any level; the data need not always be at the lowest level. The process of loading data into Essbase is again done through rule files. As a general rule, let's create one rule file for each measure (UNITS, UNIT_PRICE and UNIT_COST); generally, the number of rule files is determined by how your SQL source is structured. The SQL for the rule file (for UNITS) in our case is given below. As a general rule, it is recommended to have the columns in the SELECT statement arranged in the same order as the dimensions in the outline; this helps ensure that the data load is fast. The process of creating the rule files remains the same as for the dimensions; the major difference lies in the property settings. Let's start by creating a new rule file. Go to Open SQL and enter the above SQL. Then ensure that the Data Load property is set.
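The column-ordering recommendation can be illustrated with a small Python sketch; the dimension names and the source column order here are assumptions for the example, not taken from the actual Global outline.

```python
# Hypothetical outline dimension order, followed by the data column.
outline_order = ["Time", "Channels", "Product", "Customer", "Measures"]

# Source columns as they happen to come back from SQL (invented order).
src_cols = ["Product", "Customer", "Time", "Channels", "Measures"]
rows = [("Cola", "Boston WH", "Jan", "Retail", 100)]

# Reorder each row so its columns follow the outline's dimension order.
perm = [src_cols.index(dim) for dim in outline_order]
reordered = [tuple(row[i] for i in perm) for row in rows]
print(reordered[0])  # ('Jan', 'Retail', 'Cola', 'Boston WH', 100)
```

Matching the outline order in the SELECT itself achieves the same effect without any post-processing.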

The next step is to make sure that each and every column is properly tagged to the corresponding dimension, as shown below.


This needs to be set for all the columns apart from the data column; the last column needs to be set with the data property. Also, by default in a BSO cube, if the existing cells have data they are overwritten. There are properties which can change this default behaviour, as shown below.


Once this is done, validate the rule file and save it. Now, let's load the data from this rule file using the same approach as before. The only difference for the data load is that we need to choose the Load Only option, which is set to Build Only by default.

This loads all the data into the Essbase cube, which can be verified from the database properties: you would see that the Input Level 0 Blocks statistic now has data.


The various statistics above are very important while loading an Essbase cube. Now, the next step is to aggregate the cube. Aggregation in block storage cubes is done through scripts called calculation scripts. The major advantage of these scripts is that they provide the flexibility to aggregate any portion of the cube. So, we shall create a single calculation script which aggregates UNITS across all the dimensions, and UNIT_PRICE and UNIT_COST across two dimensions. The calculation script is provided below.

SET CACHE HIGH;
SET CALCPARALLEL 2;
FIX ("Units")
    CALC DIM ("Time");
    AGG ("Channels");
    AGG ("Product");
    AGG ("Customer");
ENDFIX;
FIX ("Unit Price", "Unit Cost", "Channels", "Customer")
    CALC DIM ("Time");
    AGG ("Product");
ENDFIX;

Basically, this calculates all the dimensions for the Units measure, but for the Unit Price and Unit Cost measures it calculates only the Product and Time dimensions.
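What the FIX/AGG pattern achieves can be approximated in Python. This is a simulation, not calc-script semantics; the data set is invented and trimmed to a (measure, time, channel) view for brevity, with "ALL" marking an upper-level member.

```python
from collections import defaultdict

# Level-0 cells: (measure, time, channel) -> value.
cells = {
    ("Units", "Jan", "Retail"): 10,
    ("Units", "Jan", "Web"): 5,
    ("Units", "Feb", "Retail"): 7,
    ("Unit Price", "Jan", "Retail"): 3,
    ("Unit Price", "Feb", "Retail"): 4,
}

def aggregate(cells, measure, dim_index):
    """Create upper-level cells by totalling one dimension for one measure."""
    out = defaultdict(int)
    for key, v in cells.items():
        if key[0] == measure:
            agg = list(key)
            agg[dim_index] = "ALL"
            out[tuple(agg)] += v
    return dict(out)

upper = {}
upper.update(aggregate(cells, "Units", 2))       # Units across channels
upper.update(aggregate(cells, "Units", 1))       # Units across time
upper.update(aggregate(cells, "Unit Price", 1))  # Unit Price across time only

print(upper[("Units", "Jan", "ALL")])             # 15
print(upper[("Unit Price", "ALL", "Retail")])     # 7
# Unit Price is never rolled up across channels, mirroring the FIX:
assert ("Unit Price", "Jan", "ALL") not in upper
```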

Now, let's execute this calculation and look at the database properties.


As you can see, the number of upper-level blocks has increased, and the block density has gone up, which is good. Now, as the last step, let us look at the data.


Converting BSO to ASO

In this section we shall see how to go about converting a BSO cube to an ASO cube using the outline migration wizard. Before we do that, let us try to understand the basic differences between ASO and BSO cubes. ASO cubes have certain restrictions, but they are an excellent fit if you have huge dimensions. Typically an ASO cube aggregates very fast, primarily due to the fundamental differences between the ASO and BSO architectures. At a high level, the following are the ASO properties that one should know:

1. ASO cannot be used to load data at non-level-0 members; ASO accepts data only at the lowest level.
2. ASO does not support calculations in stored members for non-Accounts dimensions.
3. Each non-Accounts dimension hierarchy in an ASO cube can be one of 3 types: Stored, Dynamic, or Multiple Hierarchies.
4. A stored-hierarchy dimension is like a normal hierarchy in BSO, with a major difference: the same member cannot be shared more than once within the hierarchy, and non-level-0 members cannot be shared within a stored hierarchy. This hierarchy type does not support stored members within calculations.
5. A dynamic hierarchy on a non-Accounts dimension has all the properties of a dimension in a BSO cube, with the major difference that upper-level member values are obtained dynamically (during data retrieval). Calculated members are also supported in this hierarchy type.
6. A multiple-hierarchies dimension can have both stored and dynamic hierarchies, but it must have at least one stored hierarchy.
7. ASO data loads are typically more flexible than BSO data loads. ASO supports the concept of load buffers, which can perform addition, subtraction, etc. of data coming in from multiple data sources in memory.
8. There is no need to identify sparse and dense dimensions.

There are other differences as well (for example, attribute dimensions are not supported on all dimensions).
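The load-buffer behaviour described in point 7 can be sketched roughly as follows; this is a toy stand-in to convey the idea, not the actual Essbase API.

```python
from collections import defaultdict

class LoadBuffer:
    """Toy stand-in for an ASO load buffer: values from several data
    sources are combined in memory, then committed to the cube at once."""

    def __init__(self):
        self._pending = defaultdict(float)

    def load(self, records, sign=+1):
        # sign=-1 mimics a "subtract" load from a correcting source.
        for key, value in records:
            self._pending[key] += sign * value

    def commit(self):
        # Hand back the combined data and reset the buffer.
        data, self._pending = dict(self._pending), defaultdict(float)
        return data

buf = LoadBuffer()
buf.load([(("Jan", "Cola"), 100.0), (("Jan", "Root Beer"), 40.0)])
buf.load([(("Jan", "Cola"), 25.0)])                 # second source adds
buf.load([(("Jan", "Root Beer"), 10.0)], sign=-1)   # correction subtracts
committed = buf.commit()
print(committed)  # {('Jan', 'Cola'): 125.0, ('Jan', 'Root Beer'): 30.0}
```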
But the ones listed above are the most important ones, at least from an outline and data-load standpoint. Our goal in this article, as stated above, is to create the same Global BSO cube in ASO as well. Let us first start with creating a new ASO application and an ASO database called GlobASO.

Now, let us migrate the BSO outline that we created earlier, using the Aggregate Outline Migration Wizard.


In the source, choose the Global database's outline.


As you see below, as soon as we click Next, we see a number of errors and warnings, which inherently show the differences between ASO and BSO. Do not do the conversion automatically; instead, use the interactive conversion.

Due to these inherent differences, the wizard will not migrate everything correctly, so we shall now correct all the errors manually.

The first step in the correction is to make one of the customer hierarchies a dynamic hierarchy, because by default it is chosen as a stored hierarchy by the wizard. The reason for doing this is that ASO does not allow a shared member to occur twice in a stored hierarchy. To make this change, enable multiple hierarchies on the Customer dimension and make Total Market a dynamic hierarchy.

Once this is done, the verification goes through without any problem, but from a data-load standpoint we still have one open issue. If you recollect, we had the UNIT_PRICE and UNIT_COST measures at a grain of only 2 dimensions. Since BSO supported loading values directly to parent members, we had loaded the values against the CHANNELS and CUSTOMER dimensions themselves (the non-grain dimensions). But ASO does not allow data to be loaded at non-level-0 members, so we need to create 2 dummy members (or use existing level-0 members) under the Channels and Customer dimensions. These members are shown below.
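The remapping that the dummy members enable can be sketched as below; the placeholder member names are invented for illustration, and the real ones would come from your outline.

```python
# Invented placeholder members; the real names come from the outline.
REMAP = {"Channels": "No Channel", "Customer": "No Customer"}

def remap_for_aso(record):
    """Redirect values loaded at a dimension's top member to a dummy
    level-0 member, so an ASO load will accept them."""
    channel, customer, measure, value = record
    if channel == "Channels":          # top of the Channels dimension
        channel = REMAP["Channels"]
    if customer == "Customer":         # top of the Customer dimension
        customer = REMAP["Customer"]
    return (channel, customer, measure, value)

rec = ("Channels", "Customer", "Unit Price", 3.5)
print(remap_for_aso(rec))  # ('No Channel', 'No Customer', 'Unit Price', 3.5)
```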


We also have to change their hierarchy types in order to ensure that the dummy members do not roll up. Instead of proceeding further in the wizard, just copy all the members (dimension by dimension) to the GlobASO outline and save it.

Summary
Customers are mostly interested in analyzing data quickly and efficiently, in order to make the right decisions for budgeting, forecasting, and data mining using OLAP in its various forms, collectively XOLAP (where X can be H for Hybrid, M for Multidimensional, or R for Relational), and Oracle Essbase is currently the best such solution on the market. The competency we have in-house and the cumulative experience within the team/practice can deliver everything documented above.

Reference:

Essbase Database Administration Guide.

http://oraclebizint.wordpress.com/2008/12/18/hyperion-essbase-931obe-series-using-outline-migration-wizard-to-convert-bso-to-aso-outlines-understanding-aso-cubes-part-5/

